Updates from: 10/26/2022 01:12:02
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
-# Conditional Access: Users and groups
+# Conditional Access: Users, groups, and workload identities
-A Conditional Access policy must include a user assignment as one of the signals in the decision process. Users can be included or excluded from Conditional Access policies. Azure Active Directory evaluates all policies and ensures that all requirements are met before granting access to the user.
+A Conditional Access policy must include a user, group, or workload identity assignment as one of the signals in the decision process. These can be included or excluded from Conditional Access policies. Azure Active Directory evaluates all policies and ensures that all requirements are met before granting access.
> [!VIDEO https://www.youtube.com/embed/5DsW1hB3Jqs]
If you do find yourself locked out, see [What to do if you're locked out of the
Conditional Access policies that target external users may interfere with service provider access, for example granular delegated admin privileges [Introduction to granular delegated admin privileges (GDAP)](/partner-center/gdap-introduction). For policies that are intended to target service provider tenants, use the **Service provider user** external user type available in the **Guest or external users** selection options.
+## Workload identities (Preview)
+
+A workload identity is an identity that allows an application or service principal access to resources, sometimes in the context of a user. Conditional Access policies can be applied to single-tenant service principals that have been registered in your tenant. Third-party SaaS and multi-tenant apps are out of scope. Managed identities aren't covered by policy.
+
+Organizations can target specific workload identities to be included or excluded from policy.
+
+For more information, see the article [Conditional Access for workload identities preview](workload-identity.md).
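+
+As a quick illustration of targeting a specific workload identity, you might first look up the object ID of the single-tenant service principal you plan to include. A minimal Azure CLI sketch follows; the display name is a hypothetical placeholder.
+
+```azurecli-interactive
+# Find the object ID of a service principal registered in your tenant so it can be
+# selected as a workload identity in a Conditional Access policy.
+az ad sp list --display-name "<yourAppName>" --query "[].{name:displayName, objectId:id, appId:appId}" -o table
+```
+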
+ ## Next steps - [Conditional Access: Cloud apps or actions](concept-conditional-access-cloud-apps.md)
active-directory Concept Filter For Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-filter-for-applications.md
+
+ Title: Filter for applications in Conditional Access policy (Preview) - Azure Active Directory
+description: Use filter for applications in Conditional Access to manage conditions.
+++ Last updated : 09/30/2022++++++++++
+# Conditional Access: Filter for applications (Preview)
+
+Currently Conditional Access policies can be applied to all apps or to individual apps. Organizations with a large number of apps may find this process difficult to manage across multiple Conditional Access policies.
+
+Application filters are a new feature for Conditional Access that allows organizations to tag service principals with custom attributes. These custom attributes are then added to their Conditional Access policies. Filters for applications are evaluated at token issuance (runtime), which answers a common question about whether apps are assigned at runtime or at configuration time.
+
+In this document, you create a custom attribute set, assign a custom security attribute to your application, and create a Conditional Access policy to secure the application.
+
+> [!NOTE]
+> Filter for applications is currently in public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Assign roles
+
+Custom security attributes are security-sensitive and can be managed only by delegated users. Even global administrators don't have default permissions for custom security attributes. Assign one or more of the following roles to the users who manage or report on these attributes.
+
+| Role name | Description |
+| | |
+| Attribute assignment administrator | Assign custom security attribute keys and values to supported Azure AD objects. |
+| Attribute assignment reader | Read custom security attribute keys and values for supported Azure AD objects. |
+| Attribute definition administrator | Define and manage the definition of custom security attributes. |
+| Attribute definition reader | Read the definition of custom security attributes. |
+
+1. Assign the appropriate role to the users who will manage or report on these attributes at the directory scope.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
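+
+If you prefer to script the role assignment, the following Azure CLI sketch calls Microsoft Graph directly. It's only an outline: the user object ID is a placeholder, and your signed-in account needs sufficient privileges (for example, `RoleManagement.ReadWrite.Directory`).
+
+```azurecli-interactive
+# Look up the role definition ID for "Attribute Definition Administrator".
+az rest --method get \
+  --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions?\$filter=displayName%20eq%20'Attribute%20Definition%20Administrator'" \
+  --query "value[0].id" -o tsv
+
+# Assign the role at the directory scope ("/") using the ID returned above.
+az rest --method post \
+  --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" \
+  --body '{"@odata.type": "#microsoft.graph.unifiedRoleAssignment", "principalId": "<userObjectId>", "roleDefinitionId": "<roleDefinitionId>", "directoryScopeId": "/"}'
+```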
+
+## Create custom security attributes
+
+Follow the instructions in the article, [Add or deactivate custom security attributes in Azure AD (Preview)](../fundamentals/custom-security-attributes-add.md) to add the following **Attribute set** and **New attributes**.
+
+- Create an **Attribute set** named *ConditionalAccessTest*.
+- Create a **New attribute** named *policyRequirement* with **Allow multiple values to be assigned** and **Only allow predefined values to be assigned** selected. Add the following predefined values (a scripted sketch follows this list):
+    - legacyAuthAllowed
+    - blockGuestUsers
+    - requireMFA
+    - requireCompliantDevice
+    - requireHybridJoinedDevice
+    - requireCompliantApp
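+
+The same setup can be scripted against the Microsoft Graph custom security attribute APIs. The sketch below assumes the `v1.0` endpoint paths; because this feature is in preview, your tenant may require the `beta` endpoint instead.
+
+```azurecli-interactive
+# Create the attribute set.
+az rest --method post --url "https://graph.microsoft.com/v1.0/directory/attributeSets" \
+  --body '{"id": "ConditionalAccessTest", "maxAttributesPerSet": 25}'
+
+# Create the multi-valued attribute definition that only allows predefined values.
+az rest --method post --url "https://graph.microsoft.com/v1.0/directory/customSecurityAttributeDefinitions" \
+  --body '{
+    "attributeSet": "ConditionalAccessTest",
+    "name": "policyRequirement",
+    "type": "String",
+    "status": "Available",
+    "isCollection": true,
+    "isSearchable": true,
+    "usePreDefinedValuesOnly": true,
+    "allowedValues": [
+      {"id": "legacyAuthAllowed", "isActive": true},
+      {"id": "blockGuestUsers", "isActive": true},
+      {"id": "requireMFA", "isActive": true},
+      {"id": "requireCompliantDevice", "isActive": true},
+      {"id": "requireHybridJoinedDevice", "isActive": true},
+      {"id": "requireCompliantApp", "isActive": true}
+    ]
+  }'
+```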
++
+> [!NOTE]
+> Conditional Access filters for devices only work with custom security attributes of type "string".
+
+## Create a Conditional Access policy
++
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+ 1. Select **Done**.
+1. Under **Cloud apps or actions**, select the following options:
+    1. Under **Select what this policy applies to**, select **Cloud apps**.
+ 1. Include **Select apps**.
+ 1. Select **Edit filter**.
+ 1. Set **Configure** to **Yes**.
+ 1. Select the **Attribute** we created earlier called *policyRequirement*.
+ 1. Set **Operator** to **Contains**.
+ 1. Set **Value** to **requireMFA**.
+ 1. Select **Done**.
+1. Under **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and select **Select**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+
+## Configure custom attributes
+
+### Step 1: Set up a sample application
+
+If you already have a test application that makes use of a service principal, you can skip this step.
+
+Set up a sample application that demonstrates how a job or a Windows service can run with an application identity instead of a user's identity. Follow the instructions in the article [Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity](../develop/quickstart-v2-netcore-daemon.md) to create this application.
+
+### Step 2: Assign a custom security attribute to an application
+
+A service principal that isn't listed in your tenant can't be targeted. The Office 365 suite is an example of one such service principal. The following steps assign the attribute in the portal; a scripted equivalent is sketched after them.
+
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Enterprise applications**.
+1. Select the service principal you want to apply a custom security attribute to.
+1. Under **Manage** > **Custom security attributes (preview)**, select **Add assignment**.
+1. Under **Attribute set**, select **ConditionalAccessTest**.
+1. Under **Attribute name**, select **policyRequirement**.
+1. Under **Assigned values**, select **Add values**, select **requireMFA** from the list, then select **Done**.
+1. Select **Save**.
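+
+The scripted equivalent of these steps is a single Microsoft Graph call. The sketch below assumes the `v1.0` endpoint and a placeholder service principal object ID; the attribute set and value match the ones created earlier.
+
+```azurecli-interactive
+# Assign the ConditionalAccessTest/policyRequirement value "requireMFA" to a service principal.
+az rest --method patch \
+  --url "https://graph.microsoft.com/v1.0/servicePrincipals/<servicePrincipalObjectId>" \
+  --body '{
+    "customSecurityAttributes": {
+      "ConditionalAccessTest": {
+        "@odata.type": "#Microsoft.DirectoryServices.CustomSecurityAttributeValue",
+        "policyRequirement@odata.type": "#Collection(String)",
+        "policyRequirement": ["requireMFA"]
+      }
+    }
+  }'
+```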
+
+### Step 3: Test the policy
+
+Sign in as a user that the policy applies to, and verify that MFA is required when accessing the application.
+
+## Other scenarios
+
+- Blocking legacy authentication
+- Blocking external access to applications
+- Requiring compliant device or Intune app protection policies
+- Enforcing sign-in frequency controls for specific applications
+- Requiring a privileged access workstation for specific applications
+- Requiring session controls for high-risk users and specific applications
+
+## Next steps
+
+[Conditional Access common policies](concept-conditional-access-policy-common.md)
+
+[Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+
+[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md
Previously updated : 09/26/2022 Last updated : 10/24/2022
az identity federated-credential delete --name $ficId --identity-name $uaId --re
::: zone-end
+## Prerequisites
+
+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](/azure/active-directory/managed-identities-azure-resources/overview). Be sure to review the [difference between a system-assigned and user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types).
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.
+- Get the information for your external IdP and software workload, which you need in the following steps.
+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](/azure/role-based-access-control/built-in-roles#managed-identity-contributor) role assignment.
+- To run the example scripts, you have two options:
+ - Use [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open by using the **Try It** button in the upper-right corner of code blocks.
+ - Run scripts locally with Azure PowerShell, as described in the next section.
+- [Create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-powershell#list-user-assigned-managed-identities-2)
+- Find the object ID of the user-assigned managed identity, which you need in the following steps. An Azure CLI sketch for these two steps follows this list.
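+
+If you'd rather use the Azure CLI for those two preparatory steps, a minimal sketch (the resource group and identity names are placeholders) looks like this:
+
+```azurecli-interactive
+# Create a user-assigned managed identity.
+az identity create --resource-group <resourceGroupName> --name <identityName>
+
+# Retrieve its object (principal) ID for use in the following steps.
+az identity show --resource-group <resourceGroupName> --name <identityName> --query principalId -o tsv
+```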
+
+### Configure Azure PowerShell locally
+
+To use Azure PowerShell locally for this article instead of using Cloud Shell:
+
+1. Install [the latest version of Azure PowerShell](/powershell/azure/install-az-ps) if you haven't already.
+
+1. Sign in to Azure.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+1. Install the [latest version of PowerShellGet](/powershell/scripting/gallery/installing-psget#for-systems-with-powershell-50-or-newer-you-can-install-the-latest-powershellget).
+
+ ```azurepowershell
+ Install-Module -Name PowerShellGet -AllowPrerelease
+ ```
+
+    You might need to `Exit` the current PowerShell session after you run this command, and open a new session before continuing with the next step.
+
+1. Install the `Az.ManagedServiceIdentity` module to perform the user-assigned managed identity operations in this article.
+
+ ```azurepowershell
+ Install-Module -Name Az.ManagedServiceIdentity
+ ```
+
+## Configure a federated identity credential on a user-assigned managed identity
+
+Run the New-AzFederatedIdentityCredentials command to create a new federated identity credential on your user-assigned managed identity (specified by the object ID of the app). Specify the *name*, *issuer*, *subject*, and other parameters.
+
+```azurepowershell
+New-AzFederatedIdentityCredentials -ResourceGroupName azure-rg-test -IdentityName uai-pwsh01 `
+ -Name fic-pwsh01 -Issuer "https://kubernetes-oauth.azure.com" -Subject "system:serviceaccount:ns:svcaccount"
+```
+
+## List federated identity credentials on a user-assigned managed identity
+
+Run the Get-AzFederatedIdentityCredentials command to read all the federated identity credentials configured on a user-assigned managed identity:
+
+```azurepowershell
+Get-AzFederatedIdentityCredentials -ResourceGroupName azure-rg-test -IdentityName uai-pwsh01
+```
+
+## Get a federated identity credential on a user-assigned managed identity
+
+Run the Get-AzFederatedIdentityCredentials command to show a federated identity credential (by ID):
+
+```azurepowershell
+Get-AzFederatedIdentityCredentials -ResourceGroupName azure-rg-test -IdentityName uai-pwsh01 -Name fic-pwsh01
+```
+
+## Delete a federated identity credential from a user-assigned managed identity
+
+Run the Remove-AzFederatedIdentityCredentials command to delete a federated identity credential under an existing user-assigned identity.
+
+```azurepowershell
+Remove-AzFederatedIdentityCredentials -ResourceGroupName azure-rg-test -IdentityName uai-pwsh01 -Name fic-pwsh01
+```
+ ::: zone pivot="identity-wif-mi-methods-arm" ## Prerequisites
All of the template parameters are mandatory.
There is a limit of 3-120 characters for a federated identity credential name. It can contain only alphanumeric characters, dashes, and underscores, and the first character must be alphanumeric.
-You must add exactly 1 audience to a federated identity credential, this gets verified during token exchange. Use "api://AzureADTokenExchange" as the default value.
+You must add exactly 1 audience to a federated identity credential. The audience is verified during token exchange. Use "api://AzureADTokenExchange" as the default value.
List, Get, and Delete operations are not available with template. Refer to Azure CLI for these operations. By default, all child federated identity credentials are created in parallel, which triggers concurrency detection logic and causes the deployment to fail with a 409-conflict HTTP status code. To create them sequentially, specify a chain of dependencies using the *dependsOn* property.
https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RES
## Delete a federated identity credential from a user-assigned managed identity
-Delete a federated identity credentials on the specified user-assigned managed identity.
+Delete a federated identity credential on the specified user-assigned managed identity.
```bash curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<USER ASSIGNED IDENTITY NAME>/<RESOURCE NAME>/federatedIdentityCredentials/<FEDERATED IDENTITY CREDENTIAL RESOURCENAME>?api-version=2022-01-31-preview' -X DELETE -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
active-directory Azure Ad Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/azure-ad-account.md
+ # Add Azure Active Directory (Azure AD) as an identity provider for External Identities
active-directory B2b Quickstart Add Guest Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md
Last updated 05/10/2022
-+ #Customer intent: As a tenant admin, I want to walk through the B2B invitation workflow so that I can understand how to add a guest user in the portal, and understand the end user experience.
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
Last updated 08/05/2022
-+
active-directory Cross Tenant Access Settings B2b Collaboration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md
Last updated 06/30/2022
-+
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
adobe-target: true+ # Leave an organization as an external user
Permanent deletion can be initiated by the admin, or it happens at the end of th
## Next steps - Learn more about [Azure AD B2B collaboration](what-is-b2b.md) and [Azure AD B2B direct connect](b2b-direct-connect-overview.md)-- [Close your Microsoft account](/microsoft-365/commerce/close-your-account)
+- [Close your Microsoft account](/microsoft-365/commerce/close-your-account)
active-directory User Flow Add Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-flow-add-custom-attributes.md
Last updated 03/02/2021 -+
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
You can restrict default permissions for member users in the following ways:
| Permission | Setting explanation | | - | | | **Register applications** | Setting this option to **No** prevents users from creating application registrations. You can then grant the ability back to specific individuals by adding them to the application developer role. |
-| **Create tenants** | Setting this option to **No** prevents users from creating new Azure AD or Azure AD B2C tenants. You can grant the ability back to specific individuals by adding them to tenant creator role. |
| **Allow users to connect work or school account with LinkedIn** | Setting this option to **No** prevents users from connecting their work or school account with their LinkedIn account. For more information, see [LinkedIn account connections data sharing and consent](../enterprise-users/linkedin-user-consent.md). | | **Create security groups** | Setting this option to **No** prevents users from creating security groups. Global administrators and user administrators can still create security groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). | | **Create Microsoft 365 groups** | Setting this option to **No** prevents users from creating Microsoft 365 groups. Setting this option to **Some** allows a set of users to create Microsoft 365 groups. Global administrators and user administrators can still create Microsoft 365 groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). |
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
na Previously updated : 09/09/2022 Last updated : 10/24/2022
If you are reviewing access to an application, then before creating the review,
1. In the **Enable review decision helpers** section choose whether you want your reviewer to receive recommendations during the review process: 1. If you select **No sign-in within 30 days**, users who have signed in during the previous 30-day period are recommended for approval. Users who haven't signed in during the past 30 days are recommended for denial. This 30-day interval is irrespective of whether the sign-ins were interactive or not. The last sign-in date for the specified user will also display along with the recommendation.
+    1. If you select **User-to-Group Affiliation**, reviewers will get a recommendation to approve or deny access for each user based on the user's average distance in the organization's reporting structure. Users who are very distant from all the other users within the group are considered to have "low affiliation" and will get a deny recommendation in group access reviews.
> [!NOTE] > If you create an access review based on applications, your recommendations are based on the 30-day interval period depending on when the user last signed in to the application rather than the tenant.
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure DevTest Labs | [Enable user-assigned managed identities on lab virtual machines in Azure DevTest Labs](../../devtest-labs/enable-managed-identities-lab-vms.md) | | Azure Digital Twins | [Enable a managed identity for routing Azure Digital Twins events](../../digital-twins/how-to-enable-managed-identities-portal.md) | | Azure Event Grid | [Event delivery with a managed identity](../../event-grid/managed-service-identity.md)
+| Azure Event Hubs | [Authenticate a managed identity with Azure Active Directory to access Event Hubs Resources](../../event-hubs/authenticate-managed-identity.md)
| Azure Image Builder | [Azure Image Builder overview](../../virtual-machines/image-builder-overview.md#permissions) | | Azure Import/Export | [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md) | Azure IoT Hub | [IoT Hub support for virtual networks with Private Link and Managed Identity](../../iot-hub/virtual-network-support.md) |
active-directory Atea Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atea-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
11. Review the user attributes that are synchronized from Azure AD to Atea in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Atea for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Atea API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- |Attribute|Type|Supported for filtering|
- ||||
- |userName|String|&check;|
- |active|Boolean|
- |emails[type eq "work"].value|String|
- |name.givenName|String|
- |name.familyName|String|
- |name.formatted|String|
- |phoneNumbers[type eq "mobile"].value|String|
- |locale|String|
- |nickName|String|
+   |Attribute|Type|Supported for filtering|Required by Atea|
+ |||||
+ |userName|String|&check;|&check;|
+ |active|Boolean||&check;|
+ |emails[type eq "work"].value|String||&check;|
+ |name.givenName|String|||
+ |name.familyName|String|||
+ |name.formatted|String||&check;|
+ |phoneNumbers[type eq "mobile"].value|String|||
+ |locale|String|||
12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
Once you've configured provisioning, use the following resources to monitor your
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion. * If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Change Log
+* 10/25/2022 - Drop core user attribute **nickName**.
+* 10/25/2022 - Changed the mapping of core user attribute **name.formatted** to **Join(" ", [givenName], [surname]) -> name.formatted**.
+* 10/25/2022 - The domain name in all OAuth configuration URLs of the Atea app changed to an Atea-owned domain.
+ ## Additional resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory Atlassian Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Atlassian Cloud**. 9. Review the user attributes that are synchronized from Azure AD to Atlassian Cloud in the **Attribute Mapping** section.
- The email attribute will be used to match Atlassian Cloud accounts with your Azure AD accounts.
+ **The email attribute will be used to match Atlassian Cloud accounts with your Azure AD accounts.**
Select the **Save** button to commit any changes. |Attribute|Type|
Once you've configured provisioning, use the following resources to monitor your
* Atlassian Cloud only supports provisioning updates for users with verified domains. Changes made to users from a non-verified domain will not be pushed to Atlassian Cloud. Learn more about Atlassian verified domains [here](https://support.atlassian.com/provisioning-users/docs/understand-user-provisioning/). * Atlassian Cloud does not support group renames today. This means that any changes to the displayName of a group in Azure AD will not be updated and reflected in Atlassian Cloud. * The value of the **mail** user attribute in Azure AD is only populated if the user has a Microsoft Exchange Mailbox. If the user does not have one, it is recommended to map a different desired attribute to the **emails** attribute in Atlassian Cloud.
-* When a group is synced and removed from the sync scope, the corresponding group gets deleted from the Atlassian provisioning directory. It will cause the groups on the Cloud sites, where the group had previously synced to be deleted as well. Deleting groups at the site level is destructive and can't be reversed/recovered. If the synced groups are being used to provide permissions like Jira project permissions, deleting the groups will remove those permissions settings from the Cloud site. Simply re-adding/re-syncing the group won't restore permissions. The Cloud site admins will need to manually rebuild the permissions. Do not remove a group from the sync scope in Azure AD unless you are sure that you want the group to get deleted on the Atlassian Cloud sites.
+ ## Change log
active-directory Gong Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gong-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
![Provisioning tab automatic](common/provisioning-automatic.png)
-1. In the **Admin Credentials** section, click on Authorize, make sure that you enter your Taskize Connect account's Admin credentials. Click **Test Connection** to ensure Azure AD can connect to Taskize Connect. If the connection fails, ensure your Taskize Connect account has Admin permissions and try again.
+1. In the **Admin Credentials** section, click on Authorize, make sure that you enter your Gong account's Admin credentials. Click **Test Connection** to ensure Azure AD can connect to Gong. If the connection fails, ensure your Gong account has Admin permissions and try again.
![Token](media/gong-provisioning-tutorial/gong-authorize.png)
+
1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ![Notification Email](common/provisioning-notification-email.png)
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
The traditional [Azure Container Networking Interface (CNI)](./configure-azure-c
With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (via the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS. > [!NOTE]
-> - Azure CNI Overlay is currently only available in US West Central region.
-
+> Azure CNI Overlay is currently available in the following regions:
+> - North Central US
+> - West Central US
## Overview of overlay networking In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
+
+ Title: Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)
+description: Learn how to create an Azure Kubernetes Service (AKS) cluster with Azure CNI Powered by Cilium.
+++ Last updated : 10/24/2022++
+# Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)
+
+Azure CNI Powered by Cilium combines the robust control plane of Azure CNI with the dataplane of [Cilium](https://cilium.io/) to provide high-performance networking and security.
+
+By making use of eBPF programs loaded into the Linux kernel and a more efficient API object structure, Azure CNI Powered by Cilium provides the following benefits:
+
+- Functionality equivalent to existing Azure CNI and Azure CNI Overlay plugins
+- Faster service routing
+- More efficient network policy enforcement
+- Better observability of cluster traffic
+- Support for larger clusters (more nodes, pods, and services)
++
+## IP Address Management (IPAM) with Azure CNI Powered by Cilium
+
+Azure CNI Powered by Cilium can be deployed using two different methods for assigning pod IPs:
+
+- assign IP addresses from a VNet (similar to existing Azure CNI with Dynamic Pod IP Assignment)
+- assign IP addresses from an overlay network (similar to Azure CNI Overlay mode)
+
+> [!NOTE]
+> Azure CNI Overlay networking currently requires the `Microsoft.ContainerService/AzureOverlayPreview` feature and may be available only in certain regions. For more information, see [Azure CNI Overlay networking](./azure-cni-overlay.md).
+
+If you aren't sure which option to select, read ["Choosing a network model to use"](./azure-cni-overlay.md#choosing-a-network-model-to-use).
+
+## Network Policy Enforcement
+
+Cilium enforces [network policies to allow or deny traffic between pods](./operator-best-practices-network.md#control-traffic-flow-with-network-policies). With Cilium, you don't need to install a separate network policy engine such as Azure Network Policy Manager or Calico.
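+
+As a quick illustration, a standard Kubernetes `NetworkPolicy` like the following is enforced directly by Cilium. The namespace and pod labels are hypothetical placeholders.
+
+```bash
+# A minimal sketch: only allow ingress to app=backend pods from app=frontend pods.
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-frontend-to-backend
+  namespace: default
+spec:
+  podSelector:
+    matchLabels:
+      app: backend
+  policyTypes:
+    - Ingress
+  ingress:
+    - from:
+        - podSelector:
+            matchLabels:
+              app: frontend
+EOF
+```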
+
+## Limitations
+
+Azure CNI powered by Cilium currently has the following limitations:
+
+* Available only for new clusters.
+* Available only for Linux and not for Windows.
+* Cilium L7 policy enforcement is disabled.
+* Hubble is disabled.
+* Kubernetes services with `internalTrafficPolicy=Local` aren't supported ([Cilium issue #17796](https://github.com/cilium/cilium/issues/17796)).
+* Multiple Kubernetes services can't use the same host port with different protocols (for example, TCP or UDP) ([Cilium issue #14287](https://github.com/cilium/cilium/issues/14287)).
+* Network policies may be enforced on reply packets when a pod connects to itself via service cluster IP ([Cilium issue #19406](https://github.com/cilium/cilium/issues/19406)).
+
+## Prerequisites
+
+* Azure CLI version 2.41.0 or later. Run `az --version` to see the currently installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+* Azure CLI with aks-preview extension 0.5.109 or later.
+* If using ARM templates or the REST API, the AKS API version must be 2022-09-02-preview or later.
+
+### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `CiliumDataplanePreview` preview feature
+
+To create an AKS cluster with Azure CNI powered by Cilium, you must enable the `CiliumDataplanePreview` feature flag on your subscription.
+
+Register the `CiliumDataplanePreview` feature flag by using the `az feature register` command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "CiliumDataplanePreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the `az feature list` command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/CiliumDataplanePreview')].{Name:name,State:properties.state}"
+```
+
+When the feature has been registered, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Create a new AKS Cluster with Azure CNI Powered by Cilium
+
+### Option 1: Assign IP addresses from a VNet
+
+Run the following commands to create a resource group and VNet with a subnet for nodes and a subnet for pods.
+
+```azurecli-interactive
+# Create the resource group
+az group create --name <resourceGroupName> --location <location>
+```
+
+```azurecli-interactive
+# Create a VNet with a subnet for nodes and a subnet for pods
+az network vnet create -g <resourceGroupName> --location <location> --name <vnetName> --address-prefixes <address prefix, example: 10.0.0.0/8> -o none
+az network vnet subnet create -g <resourceGroupName> --vnet-name <vnetName> --name nodesubnet --address-prefixes <address prefix, example: 10.240.0.0/16> -o none
+az network vnet subnet create -g <resourceGroupName> --vnet-name <vnetName> --name podsubnet --address-prefixes <address prefix, example: 10.241.0.0/16> -o none
+```
+
+Create the cluster using `--enable-cilium-dataplane`:
+
+```azurecli-interactive
+az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
+ --max-pods 250 \
+ --node-count 2 \
+ --network-plugin azure \
+ --vnet-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/nodesubnet \
+ --pod-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/podsubnet \
+ --enable-cilium-dataplane
+```
+
+### Option 2: Assign IP addresses from an overlay network
+
+Run these commands to create a resource group and VNet with a single subnet:
+
+```azurecli-interactive
+# Create the resource group
+az group create --name <resourceGroupName> --location <location>
+```
+
+```azurecli-interactive
+# Create a VNet with a subnet for nodes and a subnet for pods
+az network vnet create -g <resourceGroupName> --location <location> --name <vnetName> --address-prefixes <address prefix, example: 10.0.0.0/8> -o none
+az network vnet subnet create -g <resourceGroupName> --vnet-name <vnetName> --name nodesubnet --address-prefixes <address prefix, example: 10.240.0.0/16> -o none
+```
+
+Then create the cluster using `--enable-cilium-dataplane`:
+
+```azurecli-interactive
+az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
+ --max-pods 250 \
+ --node-count 2 \
+ --network-plugin azure \
+ --network-plugin-mode overlay \
+ --pod-cidr 192.168.0.0/16 \
+ --vnet-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/nodesubnet \
+ --enable-cilium-dataplane
+```
+
+## Frequently asked questions
+
+- *Can I customize Cilium configuration?*
+
+  No, the Cilium configuration is managed by AKS and can't be modified. We recommend that customers who require more control use [AKS BYO CNI](./use-byo-cni.md) and install Cilium manually.
+
+- *Can I use `CiliumNetworkPolicy` custom resources instead of Kubernetes `NetworkPolicy` resources?*
+
+ `CiliumNetworkPolicy` custom resources aren't officially supported. We recommend that customers use Kubernetes `NetworkPolicy` resources to configure network policies.
+
+## Next steps
+
+Learn more about networking in AKS in the following articles:
+
+* [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](static-ip.md)
+* [Use an internal load balancer with Azure Container Service (AKS)](internal-lb.md)
+
+* [Create a basic ingress controller with external network connectivity][aks-ingress-basic]
+* [Enable the HTTP application routing add-on][aks-http-app-routing]
+* [Create an ingress controller that uses an internal, private network and IP address][aks-ingress-internal]
+* [Create an ingress controller with a dynamic public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-tls]
+* [Create an ingress controller with a static public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-static-tls]
+
+<!-- LINKS - Internal -->
+[aks-ingress-basic]: ingress-basic.md
+[aks-ingress-tls]: ingress-tls.md
+[aks-ingress-static-tls]: ingress-static-ip.md
+[aks-http-app-routing]: http-application-routing.md
+[aks-ingress-internal]: ingress-internal-ip.md
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
Title: Concepts - Sustainable software engineering in Azure Kubernetes Services
description: Learn about sustainable software engineering in Azure Kubernetes Service (AKS). Previously updated : 10/21/2022 Last updated : 10/25/2022 # Sustainable software engineering practices in Azure Kubernetes Service (AKS)
We recommend careful consideration of these design patterns for building a susta
| [Enable cluster and node auto-updates](#enable-cluster-and-node-auto-updates) | | ✔️ | | [Install supported add-ons and extensions](#install-supported-add-ons-and-extensions) | ✔️ | ✔️ | | [Containerize your workload where applicable](#containerize-your-workload-where-applicable) | ✔️ | |
-| [Use spot node pools when possible](#use-spot-node-pools-when-possible) | | ✔️ |
+| [Use energy efficient hardware](#use-energy-efficient-hardware) | | ✔️ |
| [Match the scalability needs and utilize auto-scaling and bursting capabilities](#match-the-scalability-needs-and-utilize-auto-scaling-and-bursting-capabilities) | | ✔️ | | [Turn off workloads and node pools outside of business hours](#turn-off-workloads-and-node-pools-outside-of-business-hours) | ✔️ | ✔️ | | [Delete unused resources](#delete-unused-resources) | ✔️ | ✔️ |
Containers allow for reducing unnecessary resource allocation and making better
* Use [Draft](/azure/aks/draft) to simplify application containerization by generating Dockerfiles and Kubernetes manifests.
-### Use spot node pools when possible
+### Use energy efficient hardware
-Spot nodes use Spot VMs and are great for workloads that can handle interruptions, early terminations, or evictions such as batch processing jobs and development and testing environments.
+Ampere's Cloud Native Processors are uniquely designed to meet both the high performance and power efficiency needs of the cloud.
-* Use [spot node pools](/azure/aks/spot-node-pool) to take advantage of unused capacity in Azure at a significant cost saving for a more sustainable platform design for your [interruptible workloads](/azure/architecture/guide/spot/spot-eviction).
+* Evaluate if nodes with [Ampere Altra ArmΓÇôbased processors](https://azure.microsoft.com/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/) are a good option for your workloads.
### Match the scalability needs and utilize auto-scaling and bursting capabilities
aks Configure Kube Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kube-proxy.md
+
+ Title: Configure kube-proxy (iptables/IPVS) (preview)
+
+description: Learn how to configure kube-proxy to utilize different load balancing configurations with Azure Kubernetes Service (AKS).
++ Last updated : 10/25/2022+++
+#Customer intent: As a cluster operator, I want to utilize a different kube-proxy configuration.
++
+# Configure `kube-proxy` in Azure Kubernetes Service (AKS) (preview)
+
+`kube-proxy` is a component of Kubernetes that handles routing traffic for services within the cluster. There are two backends available for Layer 3/4 load balancing in upstream `kube-proxy` - iptables and IPVS.
+
+- iptables is the default backend utilized in the majority of Kubernetes clusters. It is simple and well supported, but is not as efficient or intelligent as IPVS.
+- IPVS utilizes the Linux Virtual Server, a layer 3/4 load balancer built into the Linux kernel. IPVS provides a number of advantages over the default iptables configuration, including state awareness, connection tracking, and more intelligent load balancing.
+
+The AKS-managed `kube-proxy` DaemonSet can also be disabled entirely if desired, for example to support [bring-your-own CNI][aks-byo-cni].
++
+## Prerequisites
+
+* Azure CLI with aks-preview extension 0.5.105 or later.
+* If using ARM or the REST API, the AKS API version must be 2022-08-02-preview or later.
+
+### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `KubeProxyConfigurationPreview` preview feature
+
+To create an AKS cluster with custom `kube-proxy` configuration, you must enable the `KubeProxyConfigurationPreview` feature flag on your subscription.
+
+Register the `KubeProxyConfigurationPreview` feature flag by using the `az feature register` command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "KubeProxyConfigurationPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the `az feature list` command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/KubeProxyConfigurationPreview')].{Name:name,State:properties.state}"
+```
+
+When the feature has been registered, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Configurable options
+
+The full `kube-proxy` configuration structure can be found in the [AKS Cluster Schema][aks-schema-kubeproxyconfig].
+
+- `enabled` - whether or not to deploy the `kube-proxy` DaemonSet. Defaults to true.
+- `mode` - can be set to `IPTABLES` or `IPVS`. Defaults to `IPTABLES`.
+- `ipvsConfig` - if `mode` is `IPVS`, this object contains IPVS-specific configuration properties.
+ - `scheduler` - which connection scheduler to utilize. Supported values:
+ - `LeastConnections` - sends connections to the backend pod with the fewest connections
+ - `RoundRobin` - distributes connections evenly between backend pods
+ - `tcpFinTimeoutSeconds` - the value used for timeout after a FIN has been received in a TCP session
+ - `tcpTimeoutSeconds` - the value used for timeout length for idle TCP sessions
+ - `udpTimeoutSeconds` - the value used for timeout length for idle UDP sessions
+
+> [!NOTE]
+> IPVS load balancing operates independently on each node and is only aware of connections flowing through the local node. As a result, while `LeastConnections` yields a more even load when the number of connections is high, traffic may still be relatively unbalanced when the number of connections is low (fewer than roughly twice the node count).
+
+## Utilize `kube-proxy` configuration in a new or existing AKS cluster using Azure CLI
+
+`kube-proxy` configuration is a cluster-wide setting. No action is needed to update your services.
+
+>[!WARNING]
+> Changing the kube-proxy configuration may cause a slight interruption in cluster service traffic flow.
+
+To begin, create a JSON configuration file with the desired settings:
+
+### Create a configuration file
+
+```json
+{
+ "enabled": true,
+ "mode": "IPVS",
+ "ipvsConfig": {
+ "scheduler": "LeastConnection",
+ "TCPTimeoutSeconds": 900,
+ "TCPFINTimeoutSeconds": 120,
+ "UDPTimeoutSeconds": 300
+ }
+}
+```
+
+### Deploy a new cluster
+
+Deploy your cluster using `az aks create` and pass in the configuration file:
+
+```bash
+az aks create -g <resourceGroup> -n <clusterName> --kube-proxy-config kube-proxy.json
+```
+
+### Update an existing cluster
+
+Configure your cluster using `az aks update` and pass in the configuration file:
+
+```bash
+az aks update -g <resourceGroup> -n <clusterName> --kube-proxy-config kube-proxy.json
+```
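+
+To confirm the configuration was applied, you can inspect the cluster's network profile. This is a sketch; the property path follows the cluster schema referenced above.
+
+```bash
+az aks show -g <resourceGroup> -n <clusterName> --query "networkProfile.kubeProxyConfig"
+```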
+
+## Next steps
+
+Learn more about utilizing the Standard Load Balancer for inbound traffic at the [AKS Standard Load Balancer documentation](load-balancer-standard.md).
+
+Learn more about using Internal Load Balancer for Inbound traffic at the [AKS Internal Load Balancer documentation](internal-lb.md).
+
+Learn more about Kubernetes services at the [Kubernetes services documentation][kubernetes-services].
+
+<!-- LINKS - External -->
+[kubernetes-services]: https://kubernetes.io/docs/concepts/services-networking/service/
+[aks-schema-kubeproxyconfig]: /azure/templates/microsoft.containerservice/managedclusters?pivots=deployment-language-bicep#containerservicenetworkprofilekubeproxyconfig
+
+<!-- LINKS - Internal -->
+[aks-byo-cni]: use-byo-cni.md
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
To improve performance issues, skip:
+ Learn more about [Azure Application Insights](/azure/application-insights/). + Consider [logging with Azure Event Hubs](api-management-howto-log-event-hubs.md).++ - Learn about visualizing data from Application Insights using [Azure Managed Grafana](visualize-using-managed-grafana-dashboard.md)
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
More information about this threat: [API4:2019 Lack of resources and rate limiti
* Limit the number of parallel backend connections with the [limit concurrency](api-management-advanced-policies.md#LimitConcurrency) policy.
-* While API Management can protect backend services from DDoS attacks, it may be vulnerable to those attacks itself. Deploy a bot protection service in front of API Management (for example, [Azure Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md), [Azure Front Door](../frontdoor/front-door-overview.md), or [Azure DDoS Protection Service](../ddos-protection/ddos-protection-overview.md)) to better protect against DDoS attacks. When using a WAF with Azure Application Gateway or Azure Front Door, consider using [Microsoft_BotManagerRuleSet_1.0](../web-application-firewall/afds/afds-overview.md#bot-protection-rule-set).
+* While API Management can protect backend services from DDoS attacks, it may be vulnerable to those attacks itself. Deploy a bot protection service in front of API Management (for example, [Azure Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md), [Azure Front Door](front-door-api-management.md), or [Azure DDoS Protection](protect-with-ddos-protection.md)) to better protect against DDoS attacks. When using a WAF with Azure Application Gateway or Azure Front Door, consider using [Microsoft_BotManagerRuleSet_1.0](../web-application-firewall/afds/afds-overview.md#bot-protection-rule-set).
## Broken function level authorization
More information about this threat: [API8:2019 Injection](https://github.com/OWA
### Recommendations
-* [Modern Web Application Firewall (WAF) policies](https://github.com/SpiderLabs/ModSecurity) cover many common injection vulnerabilities. While API Management doesn't have a built-in WAF component, deploying a WAF upstream (in front) of the API Management instance is strongly recommended. For example, use [Azure Application Gateway](/azure/architecture/reference-architectures/apis/protect-apis) or [Azure Front Door](../frontdoor/front-door-overview.md).
+* [Modern Web Application Firewall (WAF) policies](https://github.com/SpiderLabs/ModSecurity) cover many common injection vulnerabilities. While API Management doesn't have a built-in WAF component, deploying a WAF upstream (in front) of the API Management instance is strongly recommended. For example, use [Azure Application Gateway](/azure/architecture/reference-architectures/apis/protect-apis) or [Azure Front Door](front-door-api-management.md).
> [!IMPORTANT] > Ensure that a bad actor can't bypass the gateway hosting the WAF and connect directly to the API Management gateway or backend API itself. Possible mitigations include: [network ACLs](../virtual-network/network-security-groups-overview.md), using API Management policy to [restrict inbound traffic by client IP](api-management-access-restriction-policies.md#RestrictCallerIPs), removing public access where not required, and [client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) (also known as mutual TLS or mTLS).
api-management Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/observability.md
The table below summarizes all the observability capabilities supported by API M
- Get started with [Azure Monitor metrics and logs](api-management-howto-use-azure-monitor.md) - Learn how to log requests with [Application Insights](api-management-howto-app-insights.md) - Learn how to log events through [Event Hubs](api-management-howto-log-event-hubs.md)
+- Learn about visualizing Azure Monitor data using [Azure Managed Grafana](visualize-using-managed-grafana-dashboard.md)
api-management Protect With Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-ddos-protection.md
+
+ Title: Defend API Management against DDoS attacks
+description: Learn how to protect your API Management instance in an external virtual network against volumetric and protocol DDoS attacks by using Azure DDoS Protection Standard.
+++++ Last updated : 10/24/2022++
+# Defend your Azure API Management instance against DDoS attacks
+
+This article shows how to defend your Azure API Management instance against distributed denial of service (DDoS) attacks by enabling [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md). Azure DDoS Protection provides enhanced DDoS mitigation features to defend against volumetric and protocol DDoS attacks.
++
+## Supported configurations
+
+Enabling Azure DDoS Protection for API Management is currently available only for instances deployed (injected) in a VNet in [external mode](api-management-using-with-vnet.md).
+
+Currently, Azure DDoS Protection can't be enabled for the following API Management configurations:
+
+* Instances that aren't VNet-injected
+* Instances deployed in a VNet in [internal mode](api-management-using-with-internal-vnet.md)
+* Instances configured with a [private endpoint](private-endpoint.md)
+
+## Prerequisites
+
+* An API Management instance
+ * The instance must be deployed in an Azure VNet in [external mode](api-management-using-with-vnet.md)
+  * The instance must be configured with an Azure public IP address resource, which is supported only on the API Management `stv2` [compute platform](compute-infrastructure.md).
+ * If the instance is hosted on the `stv1` platform, you must [migrate](compute-infrastructure.md#how-do-i-migrate-to-the-stv2-platform) to the `stv2` platform.
+* An Azure DDoS Protection [plan](../ddos-protection/manage-ddos-protection.md)
+  * The plan you select can be in the same subscription as the virtual network and the API Management instance, or in a different one. If the subscriptions differ, they must be associated with the same Azure Active Directory tenant.
+ * You may use a plan created using either the Network DDoS protection SKU or IP DDoS Protection SKU (preview). See [Azure DDoS Protection SKU Comparison](../ddos-protection/ddos-protection-sku-comparison.md).
+
+ > [!NOTE]
+ > Azure DDoS Protection plans incur additional charges. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/).
+
+## Enable DDoS Protection
+
+Depending on the DDoS Protection plan you use, enable DDoS protection on the virtual network used for your API Management instance, or the IP address resource configured for your virtual network.
+
+### Enable DDoS Protection on the virtual network used for your API Management instance
+
+1. In the [Azure portal](https://portal.azure.com), navigate to the VNet where your API Management is injected.
+1. In the left menu, under **Settings**, select **DDoS protection**.
+1. Select **Enable**, and then select your **DDoS protection plan**.
+1. Select **Save**.
+
+ :::image type="content" source="media/protect-with-ddos-protection/enable-ddos-protection.png" alt-text="Screenshot of enabling a DDoS Protection plan on a VNet in the Azure portal.":::
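+
+The portal steps above can also be scripted. A minimal Azure CLI sketch follows; the plan, resource group, and VNet names are placeholders, and the plan can alternatively be referenced by its full resource ID if it lives in another resource group or subscription.
+
+```azurecli-interactive
+# Create a DDoS protection plan (skip if you already have one).
+az network ddos-protection create --resource-group <rgName> --name <ddosPlanName>
+
+# Enable the plan on the virtual network used by the API Management instance.
+az network vnet update --resource-group <rgName> --name <vnetName> \
+  --ddos-protection true --ddos-protection-plan <ddosPlanName>
+```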
+
+### Enable DDoS protection on the API Management public IP address
+
+If your plan uses the IP DDoS Protection SKU, see [Enable DDoS IP Protection for a public IP address](../ddos-protection/manage-ddos-protection-powershell-ip.md#disable-ddos-ip-protection-for-an-existing-public-ip-address).
+
+## Next steps
+
+* Learn how to verify DDoS protection of your API Management instance by [testing with simulation partners](../ddos-protection/test-through-simulations.md)
+* Learn how to [view and configure Azure DDoS Protection telemetry](../ddos-protection/telemetry.md)
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md
For more information, see [Integrate API Management in an internal virtual netwo
Learn more about:
-* [Connecting a virtual network to backend using VPN Gateway](../vpn-gateway/design.md#s2smulti)
-* [Connecting a virtual network from different deployment models](../vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md)
-* [Virtual network frequently asked questions](../virtual-network/virtual-networks-faq.md)
- Virtual network configuration with API Management: * [Connect to an external virtual network using Azure API Management](./api-management-using-with-vnet.md). * [Connect to an internal virtual network using Azure API Management](./api-management-using-with-internal-vnet.md). * [Connect privately to API Management using a private endpoint](private-endpoint.md)-
+* [Defend your Azure API Management instance against DDoS attacks](protect-with-ddos-protection.md)
Related articles: * [Connecting a Virtual Network to backend using Vpn Gateway](../vpn-gateway/design.md#s2smulti) * [Connecting a Virtual Network from different deployment models](../vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md)
-* [How to use the API Inspector to trace calls in Azure API Management](api-management-howto-api-inspector.md)
* [Virtual Network Frequently asked Questions](../virtual-network/virtual-networks-faq.md)
-* [Service tags](../virtual-network/network-security-groups-overview.md#service-tags)
api-management Visualize Using Managed Grafana Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/visualize-using-managed-grafana-dashboard.md
+
+ Title: Visualize Azure API Management monitoring data with Azure Managed Grafana
+description: Learn how to use an Azure Managed Grafana dashboard to visualize monitoring data from Azure API Management.
+++ Last updated : 10/17/2022++++
+# Visualize API Management monitoring data using a Managed Grafana dashboard
+
+You can use [Azure Managed Grafana](../managed-grafana/index.yml) to visualize API Management monitoring data that is collected into a Log Analytics workspace. Use a prebuilt [API Management dashboard](https://grafana.com/grafana/dashboards/16604-azure-api-management) for real-time visualization of logs and metrics collected from your API Management instance.
+
+* [Learn more about Azure Managed Grafana](../managed-grafan)
+* [Learn more about observability in Azure API Management](observability.md)
+
+## Prerequisites
+
+* API Management instance
+
+ * To visualize resource logs and metrics for API Management, configure [diagnostic settings](api-management-howto-use-azure-monitor.md#resource-logs) to collect resource logs and send them to a Log Analytics workspace
+
+ * To visualize detailed data about requests to the API Management gateway, [integrate](api-management-howto-app-insights.md) your API Management instance with Application Insights.
+
+ > [!NOTE]
+ > To visualize data in a single dashboard, configure the Log Analytics workspace for the diagnostic settings and the Application Insights instance in the same resource group as your API Management instance.
+
+* Managed Grafana workspace
+
+ * To create a Managed Grafana instance and workspace, see the quickstart for the [portal](../managed-grafan).
+
+ * The Managed Grafana instance must be in the same subscription as the API Management instance.
+
+ * When created, the Grafana workspace is automatically assigned an Azure Active Directory managed identity, which is assigned the Monitor Reader role on the subscription. This gives you immediate access to Azure Monitor from the new Grafana workspace without needing to set permissions manually. Learn more about [configuring data sources](../managed-grafan) for Managed Grafana.
+
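For the diagnostic settings prerequisite above, a minimal Azure CLI sketch (placeholder resource names; `GatewayLogs` is the API Management resource log category) might look like this:

```console
# Look up the resource IDs for the API Management instance and the Log Analytics workspace (placeholder names).
APIM_ID=$(az apim show --resource-group my-resource-group --name my-apim --query id --output tsv)
WORKSPACE_ID=$(az monitor log-analytics workspace show --resource-group my-resource-group \
  --workspace-name my-workspace --query id --output tsv)

# Send API Management gateway logs and metrics to the Log Analytics workspace.
az monitor diagnostic-settings create \
  --name apim-to-log-analytics \
  --resource "$APIM_ID" \
  --workspace "$WORKSPACE_ID" \
  --logs '[{"category": "GatewayLogs", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```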
+
+## Import API Management dashboard
+
+First import the [API Management dashboard](https://grafana.com/grafana/dashboards/16604-azure-api-management) to your Managed Grafana workspace.
+
+To import the dashboard:
+
+1. Go to your Azure Managed Grafana workspace. In the portal, on the **Overview** page of your Managed Grafana instance, select the **Endpoint** link.
+1. In the Managed Grafana workspace, go to **Dashboards** > **Browse** > **Import**.
+1. On the **Import** page, under **Import via grafana.com**, enter *16604* and select **Load**.
+1. Select an **Azure Monitor data source**, review or update the other options, and select **Import**.
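Alternatively, if you prefer the CLI, the import can likely be done with the Azure Managed Grafana (`amg`) CLI extension. This is a sketch with placeholder names, and it assumes your extension version supports importing a dashboard by its grafana.com gallery ID:

```console
# Import community dashboard 16604 (Azure API Management) into the Managed Grafana workspace.
az grafana dashboard import \
  --resource-group my-resource-group \
  --name my-grafana \
  --definition 16604
```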
+
+## Use API Management dashboard
+
+1. In the Managed Grafana workspace, go to **Dashboards** > **Browse** and select your API Management dashboard.
+1. In the dropdowns at the top, make selections for your API Management instance. If configured, select an Application Insights instance and a Log Analytics workspace.
+
+Review the default visualizations on the dashboard, which will appear similar to the following screenshot:
++
+## Next steps
+
+* For more information about managing your Grafana dashboard, see the [Grafana docs](https://grafana.com/docs/grafana/v9.0/dashboards/).
+* Easily pin log queries and charts from the Azure portal to your Managed Grafana dashboard. For more information, see [Monitor your Azure services in Grafana](../azure-monitor/visualize/grafana-plugin.md#pin-charts-from-the-azure-portal-to-azure-managed-grafana).
++++
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
The following table lists the supported languages for print text by the most rec
|Kazakh (Latin) | `kk-latn`|Zhuang | `za` | |Khaling | `klr`|Zulu | `zu` |
-### Preview (2022-06-30-preview)
+### Print text in preview (API version 2022-06-30-preview)
Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDK to support these languages in your applications.
Receipt supports all English receipts with the following locales:
|English (United Kingdom)|`en-gb`|
|English (India)|`en-in`|
|English (United States)| `en-us`|
+| French | `fr` |
+| Spanish | `es` |
## Business card model
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
## October 2022
+### Language expansion
+ With the latest preview release, Form Recognizer's Read (OCR), Layout, and Custom template models support 134 new languages. These language additions include Greek, Latvian, Serbian, Thai, Ukrainian, and Vietnamese, along with several Latin and Cyrillic languages. Form Recognizer now has a total of 299 supported languages across the most recent GA and new preview versions. Refer to the [supported languages](language-support.md) page to see all supported languages. Use the REST API parameter `api-version=2022-06-30-preview` when using the API or the corresponding SDK to support the new languages in your applications.
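For example, a sketch of calling the Read model with the preview API version over REST (endpoint, key, and document URL are placeholders) might look like this:

```console
# Analyze a document with the prebuilt Read model using the preview API version.
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-read:analyze?api-version=2022-06-30-preview" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  --data '{"urlSource": "https://<publicly-accessible-document-url>"}'
```

The request is asynchronous; poll the URL returned in the `Operation-Location` response header for the results.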
+### New Prebuilt Contract model
+
+A new prebuilt model extracts information from contracts, such as parties, title, contract ID, execution date, and more. The contract model is currently in preview; request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
 ### Region expansion for training custom neural models

Training custom neural models is now supported in additional regions.
automation Automation Config Aws Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-config-aws-account.md
description: This article tells how to authenticate runbooks with Amazon Web Ser
keywords: aws authentication, configure aws Previously updated : 04/23/2020 Last updated : 10/28/2022+ # Authenticate runbooks with Amazon Web Services
-Automating common tasks with resources in Amazon Web Services (AWS) can be accomplished with Automation runbooks in Azure. You can automate many tasks in AWS using Automation runbooks just like you can with resources in Azure. For authentication, you must have an Azure subscription.
+You can automate common tasks with resources in Amazon Web Services (AWS) by using Automation runbooks in Azure, just as you can with Azure resources. An Azure subscription is required for authentication.
## Obtain AWS subscription and credentials
-To authenticate with AWS, you must obtain an AWS subscription and specify a set of AWS credentials to authenticate your runbooks running from Azure Automation. Specific credentials required are the AWS Access Key and Secret Key. See [Using AWS Credentials](https://docs.aws.amazon.com/powershell/latest/userguide/specifying-your-aws-credentials.html).
+To authenticate with AWS, obtain an AWS subscription and specify a set of AWS credentials to authenticate the runbooks that run from Azure Automation. The required credentials are the AWS Access Key and Secret Key. See [Using AWS Credentials](https://docs.aws.amazon.com/powershell/latest/userguide/specifying-your-aws-credentials.html).
## Configure Automation account
You can use an existing Automation account to authenticate with AWS. Alternative
## Store AWS credentials
-You must store the AWS credentials as assets in Azure Automation. See [Managing Access Keys for your AWS Account](https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html) for instructions on creating the Access Key and the Secret Key. When the keys are available, copy the Access Key ID and the Secret Key ID in a safe place. You can download your key file to store it somewhere safe.
+You must store the AWS credentials as assets in Azure Automation. See [Managing Access Keys for your AWS Account](https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html) for instructions on how to create the Access Key and the Secret Key. When the keys are available, copy the Access Key ID and the Secret Key ID to a safe place. You can also download your key file and store it somewhere safe.
-## Create credential asset
+### Create credential asset
-After you have created and copied your AWS security keys, you must create a Credential asset with the Automation account. The asset allows you to securely store the AWS keys and reference them in your runbooks. See [Create a new credential asset with the Azure portal](shared-resources/credentials.md#create-a-new-credential-asset-with-the-azure-portal). Enter the following AWS information in the fields provided:
+After you have created and copied your AWS security keys, you must create a Credential asset with the Automation account. The asset allows you to securely store the AWS keys and reference them in your runbooks. See [Create a new credential asset with the Azure portal](shared-resources/credentials.md#create-a-new-credential-asset-with-the-azure-portal).
+
+Enter the following AWS information in the fields provided:
* **Name** - **AWScred**, or an appropriate value following your naming standards
* **User name** - Your access ID
automation Automation Dsc Cd Chocolatey https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-cd-chocolatey.md
# Set up continuous deployment with Chocolatey
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ In a DevOps world, there are many tools to assist with various points in the continuous integration pipeline. Azure Automation [State Configuration](automation-dsc-overview.md) is a welcome new addition to the options that DevOps teams can employ.
automation Automation Dsc Compile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-compile.md
# Compile DSC configurations in Azure Automation State Configuration
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ You can compile Desired State Configuration (DSC) configurations in Azure Automation State Configuration in the following ways: - Azure State Configuration compilation service
automation Automation Dsc Config Data At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-config-data-at-scale.md
**Applies to:** :heavy_check_mark: Windows PowerShell 5.1
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ > [!IMPORTANT] > This article refers to a solution that is maintained by the Open Source community. Support is only available in the form of GitHub collaboration, and not from Microsoft.
automation Automation Dsc Config From Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-config-from-server.md
description: This article tells how to create configurations from existing serve
keywords: dsc,powershell,configuration,setup Previously updated : 08/08/2019 Last updated : 10/25/2022+ # Create configurations from existing servers
-> Applies To: Windows PowerShell 5.1
-
-Creating configurations from existing servers can be a challenging task.
-You might not want *all* settings,
-just those that you care about.
-Even then you need to know in what order the settings
-must be applied in order for the configuration to apply successfully.
+> **Applies to:** :heavy_check_mark: Windows PowerShell 5.1
> [!NOTE]
-> This article refers to a solution that is maintained by the Open Source community.
-> Support is only available in the form of GitHub collaboration, not from Microsoft.
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+
+> [!IMPORTANT]
+> The article refers to a solution that is maintained by the Open Source community. Support is only available in the form of GitHub collaboration, not from Microsoft.
+
+This article explains how to create configurations from existing servers for Azure Automation State Configuration. Creating configurations from existing servers can be challenging, because you need to know the right settings and the order in which they must be applied for the configuration to succeed.
+
+## Community project: ReverseDSC
+
+ [ReverseDSC](https://github.com/microsoft/reversedsc) is a community-maintained solution created to work in this area, beginning with SharePoint. The solution builds on the [SharePointDSC resource](https://github.com/powershell/sharepointdsc) and extends it to orchestrate [gathering information](https://github.com/Microsoft/sharepointDSC.reverse#how-to-use) from existing servers running SharePoint.
-## Community project: ReverseDSC
+The latest version has multiple [extraction modes](https://github.com/Microsoft/SharePointDSC.Reverse/wiki/Extraction-Modes) to determine the level of information to include. Using the solution generates [Configuration Data](https://github.com/Microsoft/sharepointDSC.reverse#configuration-data) to use with SharePointDSC configuration scripts.
-A community maintained solution named
-[ReverseDSC](https://github.com/microsoft/reversedsc)
-has been created to work in this area starting SharePoint.
-The solution builds on the
-[SharePointDSC resource](https://github.com/powershell/sharepointdsc)
-and extends it to orchestrate
-[gathering information](https://github.com/Microsoft/sharepointDSC.reverse#how-to-use)
-from existing servers running SharePoint.
-The latest version has multiple
-[extraction modes](https://github.com/Microsoft/SharePointDSC.Reverse/wiki/Extraction-Modes)
-to determine what level of information to include.
+## Create configurations from existing servers for Azure Automation State Configuration
-The result of using the solution is generating
-[Configuration Data](https://github.com/Microsoft/sharepointDSC.reverse#configuration-data)
-to be used with SharePointDSC configuration scripts.
+Follow these steps to create a configuration from existing servers for Azure Automation State Configuration:
-Once the data files have been generated,
-you can use them with
-[DSC Configuration scripts](/powershell/dsc/overview)
-to generate MOF files
-and
-[upload the MOF files to Azure Automation](./tutorial-configure-servers-desired-state.md#create-and-upload-a-configuration-to-azure-automation).
-Then register your servers from either
-[on-premises](./automation-dsc-onboarding.md#enable-physicalvirtual-linux-machines)
-or [in Azure](./automation-dsc-onboarding.md#enable-azure-vms)
-to pull configurations.
+1. After you generate the data files, you can use them with [DSC Configuration scripts](/powershell/dsc/overview) to generate *MOF* files.
+1. Upload the [MOF files to Azure Automation](./tutorial-configure-servers-desired-state.md#create-and-upload-a-configuration-to-azure-automation).
+1. Register your servers from either [on-premises](./automation-dsc-onboarding.md#enable-physicalvirtual-linux-machines)
+or [in Azure](./automation-dsc-onboarding.md#enable-azure-vms) to pull configurations.
-To try out ReverseDSC, visit the
-[PowerShell Gallery](https://www.powershellgallery.com/packages/ReverseDSC/)
-and download the solution or click "Project Site"
-to view the
-[documentation](https://github.com/Microsoft/sharepointDSC.reverse).
+For more information on ReverseDSC, visit the [PowerShell Gallery](https://www.powershellgallery.com/packages/ReverseDSC/) and download the solution, or select **Project Site** to view the [documentation](https://github.com/Microsoft/sharepointDSC.reverse).
## Next steps
automation Automation Dsc Configuration Based On Stig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-configuration-based-on-stig.md
> Applies To: Windows PowerShell 5.1
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ Creating configuration content for the first time can be challenging. In many cases, the goal is to automate configuration of servers following a "baseline" that hopefully aligns to an industry recommendation.
automation Automation Dsc Create Composite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-create-composite.md
> **Applies to:** :heavy_check_mark: Windows PowerShell 5.1
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ > [!IMPORTANT] > This article refers to a solution that is maintained by the Open Source community and support is only available in the form of GitHub collaboration, not from Microsoft.
automation Automation Dsc Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-diagnostics.md
# Integrate Azure Automation State Configuration with Azure Monitor Logs
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ Azure Automation State Configuration retains node status data for 30 days. You can send node status data to [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md) if you prefer to retain this data for a longer period. Compliance status is visible in the Azure portal or with PowerShell, for nodes and for individual DSC resources in node configurations. Azure Monitor Logs provides greater operational visibility to your Automation State Configuration data and can help address incidents more quickly. With Azure Monitor Logs you can:
automation Automation Dsc Extension History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-extension-history.md
# Work with Azure Desired State Configuration extension version history
-The Azure Desired State Configuration (DSC) VM [extension](../virtual-machines/extensions/dsc-overview.md) is updated as-needed to support enhancements and new capabilities delivered by Azure, Windows Server, and the Windows Management Framework (WMF) that includes Windows PowerShell.
- > [!NOTE]
-> Before you enable the DSC extension, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+
+The Azure Desired State Configuration (DSC) VM [extension](../virtual-machines/extensions/dsc-overview.md) is updated as-needed to support enhancements and new capabilities delivered by Azure, Windows Server, and the Windows Management Framework (WMF) that includes Windows PowerShell.
This article provides information about each version of the Azure DSC VM extension, what environments it supports, and comments and remarks on new features or changes.
automation Automation Dsc Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-getting-started.md
# Get started with Azure Automation State Configuration
-This article provides a step-by-step guide for doing the most common tasks with Azure Automation State Configuration, such as creating, importing, and compiling configurations, enabling machines to manage, and viewing reports. For an overview State Configuration, see [State Configuration overview](automation-dsc-overview.md). For Desired State Configuration (DSC) documentation, see [Windows PowerShell Desired State Configuration Overview](/powershell/dsc/overview).
- > [!NOTE]
-> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+
+This article provides a step-by-step guide for doing the most common tasks with Azure Automation State Configuration, such as creating, importing, and compiling configurations, enabling machines for management, and viewing reports. For an overview of State Configuration, see [State Configuration overview](automation-dsc-overview.md). For Desired State Configuration (DSC) documentation, see [Windows PowerShell Desired State Configuration Overview](/powershell/dsc/overview).
If you want a sample environment that is already set up without following the steps described in this article, you can use the [Azure Automation Managed Node template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.automation/automation-configuration). This template sets up a complete State Configuration (DSC) environment, including an Azure VM that is managed by State Configuration (DSC).
automation Automation Dsc Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-onboarding.md
# Enable Azure Automation State Configuration
-This topic describes how you can set up your machines for management with Azure Automation State Configuration. For details of this service, see [Azure Automation State Configuration overview](automation-dsc-overview.md).
- > [!NOTE]
-> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+
+This topic describes how you can set up your machines for management with Azure Automation State Configuration. For details of this service, see [Azure Automation State Configuration overview](automation-dsc-overview.md).
## Enable Azure VMs
automation Automation Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-overview.md
# Azure Automation State Configuration overview
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ Azure Automation State Configuration is an Azure configuration management service that allows you to write, manage, and compile PowerShell Desired State Configuration (DSC) [configurations](/powershell/dsc/configurations/configurations) for nodes in any cloud or on-premises datacenter. The service also imports [DSC Resources](/powershell/dsc/resources/resources), and assigns configurations to target nodes, all in the cloud. You can access Azure Automation State Configuration in the Azure portal by selecting **State configuration (DSC)** under **Configuration Management**.
-> [!NOTE]
-> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
- You can use Azure Automation State Configuration to manage a variety of machines: - Azure virtual machines
automation Automation Dsc Remediate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-remediate.md
Last updated 07/17/2019
# Remediate noncompliant Azure Automation State Configuration servers
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ When servers are registered with Azure Automation State Configuration, the configuration mode is set to `ApplyOnly`, `ApplyAndMonitor`, or `ApplyAndAutoCorrect`. If the mode isn't set to `ApplyAndAutoCorrect`, servers that drift from a compliant state for any reason
automation Dsc Linux Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/dsc-linux-powershell.md
Last updated 08/31/2021
# Configure Linux desired state with Azure Automation State Configuration using PowerShell
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+
+> [!IMPORTANT]
+> The desired state configuration VM extension for Linux will be [retired on **September 30, 2023**](https://aka.ms/dscext4linuxretirement). If you're currently using the desired state configuration VM extension for Linux, you should start planning your migration to the machine configuration feature of Azure Automanage by using the information in this article.
+ In this tutorial, you'll apply an Azure Automation State Configuration with PowerShell to an Azure Linux virtual machine to check whether it complies with a desired state. The desired state is to identify if the apache2 service is present on the node. Azure Automation State Configuration allows you to specify configurations for your machines and ensure those machines are in a specified state over time. For more information about State Configuration, see [Azure Automation State Configuration overview](./automation-dsc-overview.md).
automation Collect Data Microsoft Azure Automation Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/collect-data-microsoft-azure-automation-case.md
description: This article describes the information to gather before opening a c
Previously updated : 09/23/2019+ Last updated : 10/21/2022 # Data to collect when opening a case for Microsoft Azure Automation
-This article describes some of the information that you should gather before you open a case for Azure Automation with Microsoft Azure Support. This information is not required to open the case. However, it can help Microsoft resolve your problem more quickly. Also, you may be asked for this data by the support engineer after you open the case.
-
-## Basic data
+This article describes the information that you should gather before you open a case for Azure Automation with Microsoft Azure Support. This information isn't required to open a case, but it helps the support team resolve your problem more quickly.
+
+> [!NOTE]
+> For more information, see the Knowledge Base article [4034605 - How to capture Azure Automation-scripted diagnostics](https://support.microsoft.com/help/4034605/how-to-capture-azure-automation-scripted-diagnostics).
-Collect the basic data described in the Knowledge Base article [4034605 - How to capture Azure Automation-scripted diagnostics](https://support.microsoft.com/help/4034605/how-to-capture-azure-automation-scripted-diagnostics).
## Data for Update Management issues on Linux
-1. In addition to the items that are listed in KB [4034605](https://support.microsoft.com/help/4034605/how-to-capture-azure-automation-scripted-diagnostics), run the following log collection tool:
+1. In addition to the items listed in KB [4034605](https://support.microsoft.com/help/4034605/how-to-capture-azure-automation-scripted-diagnostics), run the following log collection tool:
- [OMS Linux Agent Log Collector](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/tools/LogCollector/OMS_Linux_Agent_Log_Collector.md)
+ - [OMS Linux Agent Log Collector](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/tools/LogCollector/OMS_Linux_Agent_Log_Collector.md)
-2. Compress the contents of the **/var/opt/microsoft/omsagent/run/automationworker/** folder, then send the compressed file to Azure Support.
+1. Compress the contents of the **/var/opt/microsoft/omsagent/run/automationworker/** folder, and send the compressed file to Azure Support (see the example after these steps).
-3. Verify that the ID for the workspace that the Log Analytics agent for Linux reports to is the same as the ID for the workspace being monitored for updates.
+1. Verify that the ID for the workspace that the Log Analytics agent for Linux reports to is the same as the ID for the workspace being monitored for updates.
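For the compression step above, a minimal sketch (assuming `tar` is available; the output path is a placeholder) might be:

```console
# Compress the Automation worker folder for upload to Azure Support.
sudo tar -czf /tmp/automationworker-logs.tar.gz /var/opt/microsoft/omsagent/run/automationworker/
```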
## Data for Update Management issues on Windows

1. Collect data for the items listed in [4034605](https://support.microsoft.com/help/4034605/how-to-capture-azure-automation-scripted-diagnostics).
-2. Export the following event logs into the EVTX format:
+1. Export the following event logs into the EVTX format (a command-line example follows the list):
* System * Application
Collect the basic data described in the Knowledge Base article [4034605 - How to
* Operations Manager * Microsoft-SMA/Operational
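To export those logs, a sketch using `wevtutil` from an elevated command prompt (output paths are placeholders; if an export fails, list the exact channel names with `wevtutil el`) might look like this:

```console
REM Export each event log to EVTX format (run from an elevated command prompt).
wevtutil epl System C:\temp\System.evtx
wevtutil epl Application C:\temp\Application.evtx
wevtutil epl "Operations Manager" C:\temp\OperationsManager.evtx
wevtutil epl Microsoft-SMA/Operational C:\temp\Microsoft-SMA-Operational.evtx
```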
-3. Verify that the ID of the workspace that the agent reports to is the same as the ID for the workspace being monitored by Windows Updates.
+1. Verify that the ID of the workspace that the agent reports to is the same as the ID for the workspace being monitored by Windows Updates.
## Data for job issues
Collect the basic data described in the Knowledge Base article [4034605 - How to
1. In the Azure portal, go to **Automation Accounts**. 2. Select the Automation account that you are troubleshooting, and note the name. 3. Select **Jobs**.+
+ :::image type="content" source="./media/collect-data-microsoft-azure-automation-case/select-jobs.png" alt-text="Screenshot showing to select jobs menu from automation account.":::
+ 4. Choose the job that you are troubleshooting.
- 5. In the Job Summary pane, look for the GUID value in **Job ID**.
+ 5. In the Job Summary pane, check for the GUID value in **Job ID**.
- ![Job ID within Job Summary Pane](media/collect-data-microsoft-azure-automation-case/job-summary-job-id.png)
+ :::image type="content" source="./media/collect-data-microsoft-azure-automation-case/job-summary-job-id.png" alt-text="Screenshot Job ID within Job Summary Pane.":::
3. Collect a sample of the script that you are running.
Collect the basic data described in the Knowledge Base article [4034605 - How to
3. Select **Jobs**. 4. Choose the job that you are troubleshooting. 5. Select **All Logs**.
- 6. In the resulting pane, collect the data.
-
- ![Data listed under All Logs](media/collect-data-microsoft-azure-automation-case/all-logs-data.png)
+ 6. In the resulting pane, collect the data.
+ :::image type="content" source="./media/collect-data-microsoft-azure-automation-case/all-logs-data.png" alt-text="Screenshot of Data listed under All Logs.":::
+
## Data for module issues
-In addition to the [basic data items](#basic-data), gather the following information:
+In addition to the data described in the Knowledge Base article [4034605 - How to capture Azure Automation-scripted diagnostics](https://support.microsoft.com/help/4034605/how-to-capture-azure-automation-scripted-diagnostics), gather the following information:
-* The steps you have followed, so that the problem can be reproduced.
+* The steps you followed, so that the problem can be reproduced.
* Screenshots of any error messages.
* Screenshots of the current modules and their version numbers.

## Next steps
-If you need more help:
- * Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/). * Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts. * File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
automation Desired State Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/desired-state-configuration.md
Title: Troubleshoot Azure Automation State Configuration issues
description: This article tells how to troubleshoot and resolve Azure Automation State Configuration issues. Previously updated : 04/16/2019 Last updated : 10/17/2022
VM has reported a failure when processing extension 'Microsoft.Powershell.DSC /
### Cause
-This issue is caused by a bad or expired certificate. See [Re-register a node](../automation-dsc-onboarding.md#re-register-a-node).
+This issue has several possible causes:
-This issue might also be caused by a proxy configuration not allowing access to ***.azure-automation.net**. For more information, see [Configuration of private networks](../automation-dsc-overview.md#network-planning).
+- A bad or expired certificate. See [Re-register a node](../automation-dsc-onboarding.md#re-register-a-node).
+
+- A proxy configuration that isn't allowing access to ***.azure-automation.net**. For more information, see [Configuration of private networks](../automation-dsc-overview.md#network-planning).
+
+- Local authentication is disabled in Azure Automation. See [Disable local authentication](../disable-local-authentication.md). To fix the issue, [re-enable local authentication](../disable-local-authentication.md#re-enable-local-authentication).
### Resolution
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md
You can deploy Azure Arc-enabled data services on various types of Kubernetes cl
- Google Kubernetes Engine (GKE) - Open source, upstream Kubernetes (typically deployed by using kubeadm) - OpenShift Container Platform (OCP)
+- Additional [partner-validated Kubernetes distributions](./validation-program.md)
> [!IMPORTANT] > * The minimum supported version of Kubernetes is v1.21. For more information, see the "Known issues" section of [Release notes&nbsp;- Azure Arc-enabled data services](./release-notes.md#known-issues).
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
description: "This article provides a conceptual overview of GitOps in Azure for
keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 10/12/2022 Last updated : 10/24/2022
With GitOps, you declare the desired state of your Kubernetes clusters in files
Because these files are stored in a Git repository, they're versioned, and changes between versions are easily tracked. Kubernetes controllers run in the clusters and continually reconcile the cluster state with the desired state declared in the Git repository. These operators pull the files from the Git repositories and apply the desired state to the clusters. The operators also continuously assure that the cluster remains in the desired state.
-GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](https://fluxcd.io/docs/), a popular open-source tool set. Flux provides support for common file sources (Git and Helm repositories, Buckets) and template types (YAML, Helm, and Kustomize). Flux also supports multi-tenancy and deployment dependency management, among [other features](https://fluxcd.io/docs/).
+GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](https://fluxcd.io/docs/), a popular open-source tool set. Flux provides support for common file sources (Git and Helm repositories, Buckets, Azure Blob Storage) and template types (YAML, Helm, and Kustomize). Flux also supports multi-tenancy and deployment dependency management, among [other features](https://fluxcd.io/docs/).
## Flux cluster extension
The most recent version of the Flux v2 extension and the two previous versions (
The `microsoft.flux` extension installs by default the [Flux controllers](https://fluxcd.io/docs/components/) (Source, Kustomize, Helm, Notification) and the FluxConfig CRD, fluxconfig-agent, and fluxconfig-controller. You can control which of these controllers is installed and can optionally install the Flux image-automation and image-reflector controllers, which provide functionality around updating and retrieving Docker images.
-* [Flux Source controller](https://toolkit.fluxcd.io/components/source/controller/): Watches the source.toolkit.fluxcd.io custom resources. Handles the synchronization between the Git repositories, Helm repositories, and Buckets. Handles authorization with the source for private Git and Helm repos. Surfaces the latest changes to the source through a tar archive file.
+* [Flux Source controller](https://toolkit.fluxcd.io/components/source/controller/): Watches the `source.toolkit.fluxcd.io` custom resources. Handles synchronization between Git repositories, Helm repositories, Buckets, and Azure Blob Storage. Handles authorization with the source for private Git repositories, Helm repositories, and Azure Blob Storage accounts. Surfaces the latest changes to the source through a tar archive file.
* [Flux Kustomize controller](https://toolkit.fluxcd.io/components/kustomize/controller/): Watches the `kustomization.toolkit.fluxcd.io` custom resources. Applies Kustomize or raw YAML files from the source onto the cluster. * [Flux Helm controller](https://toolkit.fluxcd.io/components/helm/controller/): Watches the `helm.toolkit.fluxcd.io` custom resources. Retrieves the associated chart from the Helm Repository source surfaced by the Source controller. Creates the `HelmChart` custom resource and applies the `HelmRelease` with given version, name, and customer-defined values to the cluster. * [Flux Notification controller](https://toolkit.fluxcd.io/components/notification/controller/): Watches the `notification.toolkit.fluxcd.io` custom resources. Receives notifications from all Flux controllers. Pushes notifications to user-defined webhook endpoints.
The `microsoft.flux` extension installs by default the [Flux controllers](https:
:::image type="content" source="media/gitops/flux2-config-install.png" alt-text="Diagram showing the installation of a Flux configuration in an Azure Arc-enabled Kubernetes or Azure Kubernetes Service cluster." lightbox="media/gitops/flux2-config-install.png":::
-You create Flux configuration resources (`Microsoft.KubernetesConfiguration/fluxConfigurations`) to enable GitOps management of the cluster from your Git repos or Bucket sources. When you create a `fluxConfigurations` resource, the values you supply for the parameters, such as the target Git repo, are used to create and configure the Kubernetes objects that enable the GitOps process in that cluster. To ensure data security, the `fluxConfigurations` resource data is stored encrypted at rest in an Azure Cosmos DB database by the Cluster Configuration service.
+You create Flux configuration resources (`Microsoft.KubernetesConfiguration/fluxConfigurations`) to enable GitOps management of the cluster from your Git repos, Bucket sources or Azure Blob Storage. When you create a `fluxConfigurations` resource, the values you supply for the parameters, such as the target Git repo, are used to create and configure the Kubernetes objects that enable the GitOps process in that cluster. To ensure data security, the `fluxConfigurations` resource data is stored encrypted at rest in an Azure Cosmos DB database by the Cluster Configuration service.
The `fluxconfig-agent` and `fluxconfig-controller` agents, installed with the `microsoft.flux` extension, manage the GitOps configuration process.
The `fluxconfig-agent` and `fluxconfig-controller` agents, installed with the `m
* Sets up RBAC (service account provisioned, role binding created/assigned, role created/assigned). * Creates `GitRepository` or `Bucket` custom resource and `Kustomization` custom resources from the information in the `FluxConfig` custom resource.
-Each `fluxConfigurations` resource in Azure will be associated in a Kubernetes cluster with one Flux `GitRepository` or `Bucket` custom resource and one or more `Kustomization` custom resources. When you create a `fluxConfigurations` resource, you'll specify, among other information, the URL to the source (Git repository or Bucket) and the sync target in the source for each `Kustomization`. You can configure dependencies between `Kustomization` custom resources to control deployment sequencing. Also, you can create multiple namespace-scoped `fluxConfigurations` resources on the same cluster for different applications and app teams.
+Each `fluxConfigurations` resource in Azure will be associated in a Kubernetes cluster with one Flux `GitRepository` or `Bucket` custom resource and one or more `Kustomization` custom resources. When you create a `fluxConfigurations` resource, you'll specify, among other information, the URL to the source (Git repository, Bucket or Azure Blob storage) and the sync target in the source for each `Kustomization`. You can configure dependencies between `Kustomization` custom resources to control deployment sequencing. Also, you can create multiple namespace-scoped `fluxConfigurations` resources on the same cluster for different applications and app teams.
> [!NOTE] > The `fluxconfig-agent` monitors for new or updated `fluxConfiguration` resources in Azure. The agent requires connectivity to Azure for the desired state of the `fluxConfiguration` to be applied to the cluster. If the agent is unable to connect to Azure, there will be a delay in making the changes in the cluster until the agent can connect. If the cluster is disconnected from Azure for more than 48 hours, then the request to the cluster will time-out, and the changes will need to be re-applied in Azure.
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues"
# Previously updated : 09/15/2022 Last updated : 10/24/2022 description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps." keywords: "Kubernetes, Arc, Azure, containers, GitOps, Flux"
spec:
app.kubernetes.io/name: flux-extension ```
+### Flux v2 - Installing the `microsoft.flux` extension in a cluster with Kubelet Identity enabled
+
+When working with Azure Kubernetes Service (AKS) clusters, one of the available authentication options is kubelet identity. To let Flux use it, add the parameter `--config useKubeletIdentity=true` when you install the Flux extension.
+
+```console
+az k8s-extension create --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type managedClusters --name flux --extension-type microsoft.flux --config useKubeletIdentity=true
+```
+ ### Flux v2 - `microsoft.flux` extension installation CPU and memory limits The controllers installed in your Kubernetes cluster with the Microsoft.Flux extension require the following CPU and memory resource limits to properly schedule on Kubernetes cluster nodes.
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
description: "This tutorial shows how to use GitOps with Flux v2 to manage confi
keywords: "GitOps, Flux, Flux v2, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 10/12/2022 Last updated : 10/24/2022
Here's an example for including the [Flux image-reflector and image-automation c
az k8s-extension create -g <cluster_resource_group> -c <cluster_name> -t <connectedClusters or managedClusters> --name flux --extension-type microsoft.flux --config image-automation-controller.enabled=true image-reflector-controller.enabled=true ```
+### Using kubelet identity as the authentication method for Azure Kubernetes Service clusters
+
+When working with Azure Kubernetes Service (AKS) clusters, one of the available authentication options is kubelet identity. To let Flux use it, add the parameter `--config useKubeletIdentity=true` when you install the Flux extension.
+
+```console
+az k8s-extension create --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type managedClusters --name flux --extension-type microsoft.flux --config useKubeletIdentity=true
+```
+ ### Red Hat OpenShift onboarding guidance Flux controllers require a **nonroot** [Security Context Constraint](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.2/html/authentication/managing-pod-security-policies) to properly provision pods on the cluster. These constraints must be added to the cluster prior to onboarding of the `microsoft.flux` extension.
Arguments
--bucket-insecure : Communicate with a bucket without TLS. Allowed values: false, true. --bucket-name : Name of the S3 bucket to sync.
+ --container-name : Name of the Azure Blob Storage container to sync
--interval --sync-interval : Time between reconciliations of the source on the cluster.
- --kind : Source kind to reconcile. Allowed values: bucket, git.
+ --kind : Source kind to reconcile. Allowed values: bucket, git, azblob.
Default: git. --kustomization -k : Define kustomizations to sync sources with parameters ['name', 'path', 'depends_on', 'timeout', 'sync_interval',
Global Arguments
--subscription : Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`. --verbose : Increase logging verbosity. Use --debug for full debug logs.
+
+Azure Blob Storage Account Auth Arguments
+ --sp_client_id : The client ID for authenticating a service principal with Azure Blob, required for this authentication method
+ --sp_tenant_id : The tenant ID for authenticating a service principal with Azure Blob, required for this authentication method
+ --sp_client_secret : The client secret for authenticating a service principal with Azure Blob
+ --sp_client_cert : The Base64 encoded client certificate for authenticating a service principal with Azure Blob
+ --sp_client_cert_password : The password for the client certificate used to authenticate a service principal with Azure Blob
+ --sp_client_cert_send_chain : Specifies whether to include x5c header in client claims when acquiring a token to enable subject name / issuer based authentication for the client certificate
+ --account_key : The Azure Blob Shared Key for authentication
+ --sas_token : The Azure Blob SAS Token for authentication
+ --mi_client_id : The client ID of the managed identity for authentication with Azure Blob
Examples Create a Flux v2 Kubernetes configuration
Examples
--kind bucket --url https://bucket-provider.minio.io \ --bucket-name my-bucket --kustomization name=my-kustomization \ --bucket-access-key my-access-key --bucket-secret-key my-secret-key
+
+ Create a Kubernetes v2 Flux Configuration with Azure Blob Storage Source Kind
+ az k8s-configuration flux create --resource-group my-resource-group \
+ --cluster-name mycluster --cluster-type connectedClusters \
+ --name myconfig --scope cluster --namespace my-namespace \
+ --kind azblob --url https://mystorageaccount.blob.core.windows.net \
+ --container-name my-container --kustomization name=my-kustomization \
+ --account-key my-account-key
``` ### Configuration general arguments
Examples
| Parameter | Format | Notes |
| - | - | - |
-| `--kind` | String | Source kind to reconcile. Allowed values: `bucket`, `git`. Default: `git`. |
+| `--kind` | String | Source kind to reconcile. Allowed values: `bucket`, `git`, `azblob`. Default: `git`. |
| `--timeout` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Maximum time to attempt to reconcile the source before timing out. Default: `10m`. | | `--sync-interval` `--interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Time between reconciliations of the source on the cluster. Default: `10m`. |
If you use a `bucket` source instead of a `git` source, here are the bucket-spec
| `--bucket-secret-key` | String | Secret Key used to authenticate with the `bucket`. | | `--bucket-insecure` | Boolean | Communicate with a `bucket` without TLS. If not provided, assumed false; if provided, assumed true. |
+### Azure Blob Storage Account source arguments
+
+If you use an `azblob` source, here are the blob-specific command arguments.
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--url` `-u` | URL String | The URL for the `azblob`. |
+| `--container-name` | String | Name of the Azure Blob Storage container to sync |
+| `--sp_client_id` | String | The client ID for authenticating a service principal with Azure Blob, required for this authentication method |
+| `--sp_tenant_id` | String | The tenant ID for authenticating a service principal with Azure Blob, required for this authentication method |
+| `--sp_client_secret` | String | The client secret for authenticating a service principal with Azure Blob |
+| `--sp_client_cert` | String | The Base64 encoded client certificate for authenticating a service principal with Azure Blob |
+| `--sp_client_cert_password` | String | The password for the client certificate used to authenticate a service principal with Azure Blob |
+| `--sp_client_cert_send_chain` | String | Specifies whether to include x5c header in client claims when acquiring a token to enable subject name / issuer based authentication for the client certificate |
+| `--account_key` | String | The Azure Blob Shared Key for authentication |
+| `--sas_token` | String | The Azure Blob SAS Token for authentication |
+| `--mi_client_id` | String | The client ID of the managed identity for authentication with Azure Blob |
+ ### Local secret for authentication with source
-You can use a local Kubernetes secret for authentication with a `git` or `bucket` source. The local secret must contain all of the authentication parameters needed for the source and must be created in the same namespace as the Flux configuration.
+You can use a local Kubernetes secret for authentication with a `git`, `bucket`, or `azblob` source. The local secret must contain all of the authentication parameters needed for the source and must be created in the same namespace as the Flux configuration (see the sketch below).
| Parameter | Format | Notes |
| - | - | - |
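For example, a sketch of creating a local secret for HTTPS Git authentication and referencing it from a Flux configuration (names are placeholders; Flux expects `username` and `password` keys for HTTPS Git sources, and you should verify the `--local-auth-ref` parameter against your installed `k8s-configuration` extension version):

```console
# Create the secret in the same namespace that the Flux configuration will use.
kubectl create secret generic my-git-auth \
  --namespace my-namespace \
  --from-literal=username=<git-username> \
  --from-literal=password=<personal-access-token>

# Reference the secret when creating the Flux configuration.
az k8s-configuration flux create --resource-group my-resource-group \
  --cluster-name mycluster --cluster-type connectedClusters \
  --name myconfig --namespace my-namespace --scope cluster \
  --url https://github.com/<org>/<repo> --branch main \
  --kustomization name=my-kustomization \
  --local-auth-ref my-git-auth
```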
azure-arc Migrate Azure Monitor Agent Ansible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/migrate-azure-monitor-agent-ansible.md
Follow the steps below to create the template:
1. Select **Add**. 1. Select **Add job template**, then complete the fields of the form as follows:
- **Name:** Content Lab - Install Arc Agent
+ **Name:** Content Lab - Install Arc Connected Machine Agent
**Job Type:** Run
Follow the steps below to create the template:
1. Select **Add**. 1. Select **Add job template**, then complete the fields of the form as follows:
- **Name:** Content Lab - Replace Log Analytics agent with Arc agent
+ **Name:** Content Lab - Replace Log Analytics agent with Arc Connected Machine agent
**Job Type:** Run
An automation controller workflow allows you to construct complex automation by
1. Select **Save**. 1. Select **Start** to begin the workflow designer.
-1. Set **Node Type** to "Job Template" and select **Content Lab - Replace Log Analytics with Arc Agent**.
+1. Set **Node Type** to "Job Template" and select **Content Lab - Replace Log Analytics agent with Arc Connected Machine agent**.
1. Select **Next**. 1. Select **Save**.
-1. Hover over the **Content Lab - Replace Log Analytics with Arc Agent** node and select the **+** button.
+1. Hover over the **Content Lab - Replace Log Analytics agent with Arc Connected Machine agent** node and select the **+** button.
1. Select **On Success**. 1. Select **Next**. 1. Set **Node Type** to "Job Template" and select **Content Lab - Uninstall Log Analytics Agent**.
azure-arc Onboard Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy.md
Title: Connect machines at scale using group policy description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using group policy. Previously updated : 10/18/2022 Last updated : 10/20/2022
You can onboard Active Directory–joined Windows machines to Azure Arc-enabled servers at scale using Group Policy.
-You'll first need to set up a local remote share with the Connected Machine Agent and define a configuration file specifying the Arc-enabled server's landing zone within Azure. You will then define a Group Policy Object to run an onboarding script using a scheduled task. This Group Policy can be applied at the site, domain, or organizational unit level. Assignment can also use Access Control List (ACL) and other security filtering native to Group Policy. Machines in the scope of the Group Policy will be onboarded to Azure Arc-enabled servers.
+You'll first need to set up a local remote share and define a configuration file specifying the Arc-enabled server's landing zone within Azure. You will then define a Group Policy Object to run an onboarding script using a scheduled task. This Group Policy can be applied at the site, domain, or organizational unit level. Assignment can also use Access Control List (ACL) and other security filtering native to Group Policy. Machines in the scope of the Group Policy will be onboarded to Azure Arc-enabled servers.
Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prepare a remote share
-The Group Policy to onboard Azure Arc-enabled servers requires a remote share with the Connected Machine Agent. You will need to:
-
-1. Prepare a remote share to host the Azure Connected Machine agent package for Windows and the configuration file. You need to be able to add files to the distributed location.
-
-1. Download the latest version of the [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center and save it to the remote share.
+Prepare a remote share to host the configuration file and onboarding script. You need to be able to add files to the distributed location.
## Generate an onboarding script and configuration file from Azure portal
Before you can run the script to connect your machines, you'll need to do the fo
The group policy will project machines as Arc-enabled servers in the Azure subscription, resource group, and region specified in this configuration file.
-## Save the onboarding script to a remote share
+## Save the onboarding script to the remote share
Before you can run the script to connect your machines, you'll need to save the onboarding script to the remote share. This will be referenced when creating the Group Policy Object.
In the **General** tab, set the following parameters under **Security Options**:
1. In the field **Configure for**, select **Windows Vista or Windows Server 2008**. ### Assign trigger parameters for the task
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-overview.md
When configuring the Azure Connected Machine agent with a reduced set of capabil
### Example configuration for monitoring and security scenarios
-It's common to use Azure Arc to monitor your servers with Azure Monitor and Microsoft Sentinel and secure them with Microsoft Defender for Cloud. The following configuration samples can help you configure the Azure Arc agent to only allow these scenarios.
+It's common to use Azure Arc to monitor your servers with Azure Monitor and Microsoft Sentinel and secure them with Microsoft Defender for Cloud. The following configuration samples can help you configure the Azure Arc Connected Machine agent to only allow these scenarios.
#### Azure Monitor Agent only
azure-functions Durable Functions Timers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-timers.md
When you "await" the timer task, the orchestrator function will sleep until the
When you create a timer that expires at 4:30 pm UTC, the underlying Durable Task Framework enqueues a message that becomes visible only at 4:30 pm UTC. If the function app is scaled down to zero instances in the meantime, the newly visible timer message will ensure that the function app gets activated again on an appropriate VM. > [!NOTE]
-> * Starting with [version 2.3.0](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.3.0) of the Durable Extension, Durable timers are unlimited for .NET apps. For JavaScript, Python, and PowerShell apps, as well as .NET apps using earlier versions of the extension, Durable timers are limited to six days. When you are using an older extension version or a non-.NET language runtime and need a delay longer than six days, use the timer APIs in a `while` loop to simulate a longer delay.
+> * For JavaScript, Python, and PowerShell apps, Durable timers are limited to six days. To work around this limitation, you can use the timer APIs in a `while` loop to simulate a longer delay. Up-to-date .NET and Java apps support arbitrarily long timers.
+> * Depending on the version of the SDK and [storage provider](durable-functions-storage-providers.md) being used, long timers of 6 days or more may be internally implemented using a series of shorter timers (for example, 3-day durations) until the desired expiration time is reached. This can be observed in the underlying data store but won't impact the orchestration behavior.
> * Don't use built-in date/time APIs for getting the current time. When calculating a future date for a timer to expire, always use the orchestrator function's current time API. For more information, see the [orchestrator function code constraints](durable-functions-code-constraints.md#dates-and-times) article. ## Usage for delay
azure-monitor Action Groups Create Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups-create-resource-manager-template.md
Title: Create action groups with Resource Manager templates description: Learn how to create an action group by using an Azure Resource Manager template.-+
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Title: Manage action groups in the Azure portal description: Find out how to create and manage action groups. Learn about notifications and actions that action groups enable, such as email, webhooks, and Azure Functions.-+ Last updated 09/07/2022
azure-monitor Alerts Common Schema Test Action Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema-test-action-definitions.md
Title: Alert schema definitions in Azure Monitor for Test Action Group description: Understanding the common alert schema definitions for Azure Monitor for Test Action group-+ Last updated 01/14/2022
-ms.revewer: issahn
+ms.reviewer: jagummersall
# Common alert schema definitions for Test Action Group (Preview)
azure-monitor Alerts Processing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-processing-rules.md
You can also define filters to narrow down which specific subset of alerts are a
| Filter | Description| |:|:|
-Alert context (payload) | The rule applies only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema-definitions.md#alert-context) section of the alert. This section includes fields specific to each alert type. |
+Alert context (payload) | The rule applies only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema-definitions.md#alert-context) section of the alert. This section includes fields specific to each alert type. This filter does not apply to log alert search results. |
Alert rule ID | The rule applies only to alerts from a specific alert rule. The value should be the full resource ID, for example, `/subscriptions/SUB1/resourceGroups/RG1/providers/microsoft.insights/metricalerts/MY-API-LATENCY`. To locate the alert rule ID, open a specific alert rule in the portal, select **Properties**, and copy the **Resource ID** value. You can also locate it by listing your alert rules from PowerShell or the Azure CLI. | Alert rule name | The rule applies only to alerts with this alert rule name. It can also be useful with a **Contains** operator. | Description | The rule applies only to alerts that contain the specified string within the alert rule description field. |
azure-monitor Alerts Rate Limiting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-rate-limiting.md
Title: Rate limiting for SMS, emails, push notifications description: Understand how Azure limits the number of possible SMS, email, Azure App push or webhook notifications from an action group.--++ Last updated 2/23/2022-+ # Rate limiting for Voice, SMS, emails, Azure App push notifications and webhook posts
azure-monitor Alerts Sms Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-sms-behavior.md
Title: SMS Alert behavior in Action Groups description: SMS message format and responding to SMS messages to unsubscribe, resubscribe or request help.--++ Last updated 2/23/2022-+ # SMS Alert Behavior in Action Groups
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
This table can help you decide when to use what type of alert. For more detailed
|Alert Type |When to Use |Pricing Information| ||||
-|Metric alert|Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Metric data is stored in the system already pre-computed. We recommend using metric alerts if the data you want to monitor is available in metric data.|Each metrics alert rule is charged based on the number of time-series that are monitored. |
-|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts.|Each Log Alert rule is billed based the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for Log Alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
+|Metric alert|Metric data is stored in the system already pre-computed. Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. We recommend using metric alerts if the data you want to monitor is available in metric data (a CLI sketch follows this table).|Each metric alert rule is charged based on the number of time-series that are monitored. |
+|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for log alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).| |Prometheus alerts (preview)| Prometheus alerts are primarily used for alerting on performance and health of Kubernetes clusters (including AKS). The alert rules are based on PromQL, which is an open source query language. | There is no charge for Prometheus alerts during the preview period. | ## Metric alerts
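To make the metric alert option above concrete, here's a hedged Azure CLI sketch of a metric alert rule that fires when average CPU crosses a threshold. The resource names and IDs are placeholders, and the action group is assumed to already exist.

```azurecli
# Sketch: metric alert on average CPU for a VM, wired to an existing action group.
# All names and resource IDs below are placeholders.
az monitor metrics alert create \
  --name high-cpu-alert \
  --resource-group my-resource-group \
  --scopes /subscriptions/<sub-id>/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm \
  --condition "avg Percentage CPU > 80" \
  --window-size 5m --evaluation-frequency 1m \
  --action /subscriptions/<sub-id>/resourceGroups/my-resource-group/providers/microsoft.insights/actionGroups/my-action-group
```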
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-performance-diagnostics.md
Emails about smart detection performance anomalies are limited to one email per
* Not yet, but you can: * [Set up alerts](./alerts-log.md) that tell you when a metric crosses a threshold.
- * [Export telemetry](../app/export-telemetry.md) to a [database](../../stream-analytics/app-insights-export-sql-stream-analytics.md) or [to Power BI](../app/export-power-bi.md), where you can analyze it yourself.
+ * [Export telemetry](../app/export-telemetry.md) to a [database](../../stream-analytics/app-insights-export-sql-stream-analytics.md) or [to Power BI](../logs/log-powerbi.md), where you can analyze it yourself.
* *How often is the analysis done?* * We run the analysis daily on the telemetry from the previous day (full day in UTC timezone).
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Previously updated : 09/07/2022 Last updated : 10/24/2022 ms.devlang: csharp, java, javascript, vb
None. You don't need to wrap them in try-catch clauses. If the SDK encounters pr
### Is there a REST API to get data from the portal?
-Yes, the [data access API](https://dev.applicationinsights.io/). Other ways to extract data include [export from Log Analytics to Power BI](./export-power-bi.md) and [continuous export](./export-telemetry.md).
+Yes, the [data access API](https://dev.applicationinsights.io/). Other ways to extract data include [Power BI](../logs/log-powerbi.md) if you've [migrated to a workspace-based resource](convert-classic-resource.md) or [continuous export](./export-telemetry.md) if you're still on a classic resource.
### Why are my calls to custom events and metrics APIs ignored?
azure-monitor Export Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-power-bi.md
- Title: Export to Power BI from Azure Application Insights | Microsoft Docs
-description: Analytics queries can be displayed in Power BI.
- Previously updated : 08/10/2018---
-# Feed Power BI from Application Insights
-
-[Power BI](https://www.powerbi.com/) is a suite of business tools that helps you analyze data and share insights. Rich dashboards are available on every device. You can combine data from many sources, including Analytics queries from [Azure Application Insights](./app-insights-overview.md).
-
-There are three methods of exporting Application Insights data to Power BI:
-
-* [**Export Analytics queries**](#export-analytics-queries). This is the preferred method. Write any query you want and export it to Power BI. You can place this query on a dashboard, along with any other data.
-* [**Continuous export and Azure Stream Analytics**](../../stream-analytics/app-insights-export-stream-analytics.md). This method is useful if you want to store your data for long periods of time. If you don't have an extended data retention requirement, use the export analytics query method. Continuous export and Stream Analytics involves more work to set up and additional storage overhead.
-* **Power BI adapter**. The set of charts is predefined, but you can add your own queries from any other sources.
-
-> [!NOTE]
-> The Power BI adapter is now **deprecated**. The predefined charts for this solution are populated by static uneditable queries. You do not have the ability to edit these queries and depending on certain properties of your data it is possible for the connection to Power BI to be successful, but no data is populated. This is due to exclusion criteria that are set within the hardcoded query. While this solution may still work for some customers, due to the lack of flexibility of the adapter the recommended solution is to use the [**export Analytics query**](#export-analytics-queries) functionality.
-
-## Export Analytics queries
-
-This route allows you to write any Analytics query you like, or export from Usage Funnels, and then export that to a Power BI dashboard. (You can add to the dashboard created by the adapter.)
-
-### One time: install Power BI Desktop
-
-To import your Application Insights query, you use the desktop version of Power BI. Then you can publish it to the web or to your Power BI cloud workspace.
-
-Install [Power BI Desktop](https://powerbi.microsoft.com/en-us/desktop/).
-
-### Export an Analytics query
-
-1. [Open Analytics and write your query](../logs/log-analytics-tutorial.md).
-2. Test and refine the query until you're happy with the results. Make sure that the query runs correctly in Analytics before you export it.
-3. On the **Export** menu, choose **Power BI (M)**. Save the text file.
-
- ![Screenshot of Analytics, with Export menu highlighted](./media/export-power-bi/analytics-export-power-bi.png)
-4. In Power BI Desktop, select **Get Data** > **Blank Query**. Then, in the query editor, under **View**, select **Advanced Editor**.
-
- Paste the exported M Language script into the Advanced Editor.
-
- ![Screenshot of Power BI Desktop, with Advanced Editor highlighted](./media/export-power-bi/power-bi-import-analytics-query.png)
-
-5. To allow Power BI to access Azure, you might have to provide credentials. Use **Organizational account** to sign in with your Microsoft account.
-
- ![Screenshot of Power BI Query Settings dialog box](./media/export-power-bi/power-bi-import-sign-in.png)
-
- If you need to verify the credentials, use the **Data Source Settings** menu command in the query editor. Be sure to specify the credentials you use for Azure, which might be different from your credentials for Power BI.
-6. Choose a visualization for your query, and select the fields for x-axis, y-axis, and segmenting dimension.
-
- ![Screenshot of Power BI Desktop visualization options](./media/export-power-bi/power-bi-analytics-visualize.png)
-7. Publish your report to your Power BI cloud workspace. From there, you can embed a synchronized version into other web pages.
-
- ![Screenshot of Power BI Desktop, with Publish button highlighted](./media/export-power-bi/publish-power-bi.png)
-8. Refresh the report manually at intervals, or set up a scheduled refresh on the options page.
-
-### Export a Funnel
-
-1. [Make your Funnel](./usage-funnels.md).
-2. Select **Power BI**.
-
- ![Screenshot of Power BI button](./media/export-power-bi/button.png)
-
-3. In Power BI Desktop, select **Get Data** > **Blank Query**. Then, in the query editor, under **View**, select **Advanced Editor**.
-
- ![Screenshot of Power BI Desktop, with Blank Query button highlighted](./media/export-power-bi/blankquery.png)
-
- Paste the exported M Language script into the Advanced Editor.
-
- ![Screenshot shows the Power BI Desktop, with Advanced Editor highlighted](./media/export-power-bi/advancedquery.png)
-
-4. Select items from the query, and choose a Funnel visualization.
-
- ![Screenshot shows the Power BI Desktop Funnel visualization options](./media/export-power-bi/selectsequence.png)
-
-5. Change the title to make it meaningful, and publish your report to your Power BI cloud workspace.
-
- ![Screenshot of Power BI Desktop, with title change highlighted](./media/export-power-bi/changetitle.png)
-
-## Troubleshooting
-
-You might encounter errors pertaining to credentials or the size of the dataset. Here is some information about what to do about these errors.
-
-### Unauthorized (401 or 403)
-
-This can happen if your refresh token has not been updated. Try these steps to ensure you still have access:
-
-1. Sign in to the Azure portal, and make sure you can access the resource.
-2. Try to refresh the credentials for the dashboard.
-3. Try to clear the cache from your Power BI Desktop.
-
- If you do have access and refreshing the credentials does not work, please open a support ticket.
-
-### Bad Gateway (502)
-
-This is usually caused by an Analytics query that returns too much data. Try using a smaller time range for the query.
-
-If reducing the dataset coming from the Analytics query doesn't meet your requirements, consider using the [API](https://dev.applicationinsights.io/documentation/overview) to pull a larger dataset. Here's how to convert the M-Query export to use the API.
-
-1. Create an [API key](https://dev.applicationinsights.io/documentation/Authorization/API-key-and-App-ID).
-2. Update the Power BI M script that you exported from Analytics by replacing the Azure Resource Manager URL with the Application Insights API.
- * Replace **https:\//management.azure.com/subscriptions/...**
- * with, **https:\//api.applicationinsights.io/beta/apps/...**
-3. Finally, update the credentials to basic, and use your API key.
-
-**Existing script**
-
-```
- Source = Json.Document(Web.Contents("https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups//providers/microsoft.insights/components//api/query?api-version=2014-12-01-preview",[Query=[#"csl"="requests",#"x-ms-app"="AAPBI"],Timeout=#duration(0,0,4,0)]))
-```
-
-**Updated script**
-
-```
-Source = Json.Document(Web.Contents("https://api.applicationinsights.io/beta/apps/<APPLICATION_ID>/query?api-version=2014-12-01-preview",[Query=[#"csl"="requests",#"x-ms-app"="AAPBI"],Timeout=#duration(0,0,4,0)]))
-```
-
-## About sampling
-
-Depending on the amount of data sent by your application, you might want to use the adaptive sampling feature, which sends only a percentage of your telemetry. The same is true if you have manually set sampling either in the SDK or on ingestion. [Learn more about sampling](./sampling.md).
-
-## Power BI adapter (deprecated)
-
-This method creates a complete dashboard of telemetry for you. The initial dataset is predefined, but you can add more data to it.
-
-### Get the adapter
-
-1. Sign in to [Power BI](https://app.powerbi.com/).
-2. Open **Get Data** ![Screenshot of GetData Icon in lower left corner](./media/export-power-bi/001.png), **Services**.
-
- ![Screenshots shows Get button in the Services window.](./media/export-power-bi/002.png)
-
-3. Select **Get it now** under Application Insights.
-
- ![Screenshots of Get from Application Insights data source](./media/export-power-bi/003.png)
-4. Provide the details of your Application Insights resource, and then **Sign-in**.
-
- ![Screenshot shows Connect to Application Insights window.](./media/export-power-bi/005.png)
-
- This information can be found in the Application Insights Overview pane:
-
- ![Screenshot of Get from Application Insights data source](./media/export-power-bi/004.png)
-
-5. Open the newly created Application Insights Power BI App.
-
-6. Wait a minute or two for the data to be imported.
-
- ![Screenshot of Power BI adapter](./media/export-power-bi/010.png)
-
-You can edit the dashboard, combining the Application Insights charts with those of other sources, and with Analytics queries. You can get more charts in the visualization gallery, and each chart has parameters you can set.
-
-After the initial import, the dashboard and the reports continue to update daily. You can control the refresh schedule on the dataset.
-
-## Next steps
-
-* [Power BI - Learn](https://www.powerbi.com/learning/)
-* [Analytics tutorial](../logs/log-analytics-tutorial.md)
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
Title: Continuous export of telemetry from Application Insights | Microsoft Docs description: Export diagnostic and usage data to storage in Microsoft Azure, and download it from there. Previously updated : 02/19/2021 Last updated : 10/24/2022
Before you set up continuous export, there are some alternatives you might want
* The Export button at the top of a metrics or search tab lets you transfer tables and charts to an Excel spreadsheet. * [Analytics](../logs/log-query-overview.md) provides a powerful query language for telemetry. It can also export results.
-* If you're looking to [explore your data in Power BI](./export-power-bi.md), you can do that without using Continuous Export.
+* If you're looking to [explore your data in Power BI](../logs/log-powerbi.md), you can do that without using Continuous Export if you've [migrated to a workspace-based resource](convert-classic-resource.md).
* The [Data access REST API](https://dev.applicationinsights.io/) lets you access your telemetry programmatically. * You can also access setup [continuous export via PowerShell](/powershell/module/az.applicationinsights/new-azapplicationinsightscontinuousexport).
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
This article explains how geolocation lookup and IP address handling work in App
By default, IP addresses are temporarily collected but not stored in Application Insights. The basic process is as follows:
-When telemetry is sent to Azure, Application Insights uses the IP address to do a geolocation lookup by using [GeoLite2 from MaxMind](https://dev.maxmind.com/geoip/geoip2/geolite2/). Application Insights uses the results of this lookup to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field.
+When telemetry is sent to Azure, Application Insights uses the IP address to do a geolocation lookup. Application Insights uses the results of this lookup to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field.
Geolocation data can be removed in the following ways. * [Remove the client IP initializer](../app/configuration-with-applicationinsights-config.md) * [Use a custom initializer](../app/api-filtering-sampling.md)
-> [!NOTE]
-> Application Insights uses an older version of the GeoLite2 database. If you experience accuracy issues with IP to geolocation mappings, then as a workaround you can disable IP masking and utilize another geomapping service to convert the client_IP field of the underlying telemetry to a more accurate geolocation. We are currently working on an update to improve the geolocation accuracy.
- The telemetry types are: * Browser telemetry: Application Insights collects the sender's IP address. The ingestion endpoint calculates the IP address.
azure-monitor Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/platforms.md
Title: 'Application Insights: languages, platforms, and integrations | Microsoft Docs' description: Languages, platforms, and integrations available for Application Insights Previously updated : 10/29/2021 Last updated : 10/24/2022
## Export and data analysis * [Power BI](https://powerbi.microsoft.com/blog/explore-your-application-insights-data-with-power-bi/)
-* [Stream Analytics](./export-power-bi.md)
+* [Power BI for workspace-based resources](../logs/log-powerbi.md)
## Unsupported SDKs Several other community-supported Application Insights SDKs exist. However, Azure Monitor only provides support when using the supported instrumentation options listed on this page. We're constantly assessing opportunities to expand our support for other languages. Follow [Azure Updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights) for the latest SDK news.
azure-monitor Usage Funnels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-funnels.md
Title: Application Insights Funnels description: Learn how you can use Funnels to discover how customers are interacting with your application. Previously updated : 07/30/2021 Last updated : 10/24/2022
To create a funnel:
* [Retention](usage-retention.md) * [Workbooks](../visualize/workbooks-overview.md) * [Add user context](./usage-overview.md)
- * [Export to Power BI](./export-power-bi.md)
+ * [Export to Power BI](../logs/log-powerbi.md) if you've [migrated to a workspace-based resource](convert-classic-resource.md)
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
To use [managed identity authentication (preview)](container-insights-onboard.md#authentication), add the `configuration-settings` parameter as in the following: ```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.useAADAuth=true
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.useAADAuth=true
```
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
If you want to tweak the default resource requests and limits, you can use the advanced configurations settings: ```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.resources.daemonset.limits.cpu=150m amalogsagent.resources.daemonset.limits.memory=600Mi amalogsagent.resources.deployment.limits.cpu=1 amalogsagent.resources.deployment.limits.memory=750Mi
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.resources.daemonset.limits.cpu=150m omsagent.resources.daemonset.limits.memory=600Mi omsagent.resources.deployment.limits.cpu=1 omsagent.resources.deployment.limits.memory=750Mi
``` Checkout the [resource requests and limits section of Helm chart](https://github.com/helm/charts/blob/master/incubator/azuremonitor-containers/values.yaml) for the available configuration settings.
Checkout the [resource requests and limits section of Helm chart](https://github
If the Azure Arc-enabled Kubernetes cluster is on Azure Stack Edge, then a custom mount path `/home/data/docker` needs to be used. ```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.logsettings.custommountpath=/home/data/docker
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.logsettings.custommountpath=/home/data/docker
```
az k8s-extension show --name azuremonitor-containers --cluster-name \<cluster-na
Enable Container insights extension with managed identity authentication option using the workspace returned in the first step. ```cli
-az k8s-extension create --name azuremonitor-containers --cluster-name \<cluster-name\> --resource-group \<resource-group\> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.useAADAuth=true logAnalyticsWorkspaceResourceID=\<workspace-resource-id\>
+az k8s-extension create --name azuremonitor-containers --cluster-name \<cluster-name\> --resource-group \<resource-group\> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.useAADAuth=true logAnalyticsWorkspaceResourceID=\<workspace-resource-id\>
``` ## [Resource Manager](#tab/migrate-arm)
azure-monitor Container Insights Prometheus Metrics Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-metrics-addon.md
Assign the `Monitoring Data Reader` role to the Grafana System Assigned Identity
| `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys that will be used in the resource's labels metric. |
- | `metrican'tationsAllowList` | Comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. |
+ | `metricAnnotationsAllowList` | Comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. |
| `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
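Assuming these are parameters of the ARM template referenced above, a deployment might be started with a sketch like the following. The template file name and all IDs and values are placeholders.

```azurecli
# Sketch: deploy the metrics add-on template with the parameters from the table above.
# The template file name, resource IDs, and values are placeholders.
az deployment group create \
  --resource-group my-resource-group \
  --template-file ./prometheus-metrics-addon.json \
  --parameters clusterResourceId="/subscriptions/<sub-id>/resourceGroups/my-resource-group/providers/Microsoft.ContainerService/managedClusters/my-aks-cluster" \
               clusterLocation="eastus" \
               metricLabelsAllowlist="" \
               metricAnnotationsAllowList="" \
               grafanaResourceId="/subscriptions/<sub-id>/resourceGroups/my-resource-group/providers/Microsoft.Dashboard/grafana/my-grafana" \
               grafanaLocation="eastus" \
               grafanaSku="Standard"
```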
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|Availability|No|Overall Vault Availability|Percent|Average|Vault requests availability|ActivityType, ActivityName, StatusCode, StatusCodeClass|
|ServiceApiHit|Yes|Total Service Api Hits|Count|Count|Number of total service api hits|ActivityType, ActivityName|
-|ServiceApiLatency|No|Overall Service Api Latency|Milliseconds|Average|Overall latency of service api requests|ActivityType, ActivityName, StatusCode, StatusCodeClass|
-|ServiceApiResult|Yes|Total Service Api Results|Count|Count|Gets the available metrics for a Managed HSM pool|ActivityType, ActivityName, StatusCode, StatusCodeClass|
- ## Microsoft.KeyVault/vaults
This latest update adds a new column and reorders the metrics to be alphabetical
- [Read about metrics in Azure Monitor](../data-platform.md) - [Create alerts on metrics](../alerts/alerts-overview.md)-- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
+- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
azure-monitor Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-grafana.md
The following sections describe how to configure Azure Monitor managed service f
> [!IMPORTANT] > This section describes the manual process for adding an Azure Monitor managed service for Prometheus data source to Azure Managed Grafana. You can achieve the same functionality by linking the Azure Monitor workspace and Grafana workspace as described in [Link a Grafana workspace](azure-monitor-workspace-overview.md#link-a-grafana-workspace).
-### Configure system identify
+### Configure system identity
Your Grafana workspace requires the following: - System managed identity enabled
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na Previously updated : 10/20/2022 Last updated : 10/25/2022 # Create an SMB volume for Azure NetApp Files
You can set permissions for a file or folder by using the **Security** tab of th
* [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md) * [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md) * [Install a new Active Directory forest using Azure CLI](/windows-server/identity/ad-ds/deploy/virtual-dc/adds-on-azure-vm)
+* [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
na Previously updated : 10/20/2022 Last updated : 10/25/2022 # Create an NFS volume for Azure NetApp Files
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md). * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) * [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
+* [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
azure-netapp-files Azure Netapp Files Delegate Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-delegate-subnet.md
na Previously updated : 01/07/2022 Last updated : 10/25/2022 # Delegate a subnet to Azure NetApp Files
You can also create and delegate a subnet when you [create a volume for Azure Ne
* [Create a volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) * [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
+* [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
Previously updated : 10/04/2021 Last updated : 10/25/2022 #Customer intent: As an IT admin new to Azure NetApp Files, I want to quickly set up Azure NetApp Files and create a volume.
Use the Azure portal, PowerShell, or the Azure CLI to delete the resource group.
> [!div class="nextstepaction"] > [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md)+
+> [!div class="nextstepaction"]
+> [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
You can create an Azure support request to increase the adjustable limits from t
- [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md) - [Regional capacity quota for Azure NetApp Files](regional-capacity-quota.md) - [Request region access for Azure NetApp Files](request-region-access.md)
+- [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
na Previously updated : 10/20/2022 Last updated : 10/25/2022 # Create a dual-protocol volume for Azure NetApp Files
Follow instructions in [Configure an NFS client for Azure NetApp Files](configur
* [Configure AD DS LDAP over TLS for Azure NetApp Files](configure-ldap-over-tls.md) * [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) * [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md)
+* [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
na Previously updated : 08/15/2022 Last updated : 10/25/2022 # Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files
Ensure that stale DNS records associated with the retired AD DS domain controlle
A separate discovery process for AD DS LDAP servers occurs when LDAP is enabled for an Azure NetApp Files NFS volume. When the LDAP client is created on Azure NetApp Files, Azure NetApp Files queries the AD DS domain service (SRV) resource record for a list of all AD DS LDAP servers in the domain and not the AD DS LDAP servers assigned to the AD DS site specified in the AD connection. > [!IMPORTANT]
-> If Azure NetApp Files cannot reach a discovered AD DS LDAP server during the creation of the Azure NetApp Files LDAP client, the creation of the LDAP enabled volume will fail. In large or complex AD DS topologies, you might need to implement [DNS Policies](/windows-server/networking/dns/dns-top) or [DNS subnet prioritization](/previous-versions/windows/it-pro/windows-2000-server/cc961422(v=technet.10)?redirectedfrom=MSDN) to ensure that the AD DS LDAP servers assigned to the AD DS site specified in the AD connection are returned. Contact your Microsoft CSA for guidance on how to best configure your DNS to support LDAP-enabled NFS volumes.
+> If Azure NetApp Files cannot reach a discovered AD DS LDAP server during the creation of the Azure NetApp Files LDAP client, the creation of the LDAP enabled volume will fail. In large or complex AD DS topologies, you might need to implement [DNS Policies](/windows-server/networking/dns/deploy/dns-policies-overview) or [DNS subnet prioritization](/previous-versions/windows/it-pro/windows-2000-server/cc961422(v=technet.10)?redirectedfrom=MSDN) to ensure that the AD DS LDAP servers assigned to the AD DS site specified in the AD connection are returned. Contact your Microsoft CSA for guidance on how to best configure your DNS to support LDAP-enabled NFS volumes.
### Consequences of incorrect or incomplete AD Site Name configuration
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
na Previously updated : 10/21/2022 Last updated : 10/25/2022 # Use availability zones for high availability in Azure NetApp Files
Azure [availability zones](../availability-zones/az-overview.md#availability-z
Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Azure availability zones let you design and operate applications and databases that automatically transition between zones without interruption. You can design resilient solutions by using Azure services that use availability zones.
-The use of high availability (HA) architectures with availability zones are now a default and best practice recommendation inΓÇ»[AzureΓÇÖs Well-Architected Framework](/architecture/framework/resiliency/app-design#use-availability-zones-within-a-region). Enterprise applications and resources are increasingly deployed into multiple availability zones to achieve this level of high availability (HA) or failure domain (zone) isolation.
+The use of high availability (HA) architectures with availability zones is now a default and best practice recommendation in [Azure's Well-Architected Framework](/azure/architecture/framework/resiliency/design-best-practices#use-zone-aware-services). Enterprise applications and resources are increasingly deployed into multiple availability zones to achieve this level of high availability (HA) or failure domain (zone) isolation.
:::image type="content" alt-text="Diagram of three availability zones in one Azure region." source="../media/azure-netapp-files/availability-zone-diagram.png":::
All Virtual Machines within the region in (peered) VNets can access all Azure Ne
Azure NetApp Files deployments will occur in the availability of zone of choice if Azure NetApp Files is present in that availability zone and has sufficient capacity. >[!IMPORTANT]
->Azure NetApp Files availability zone volume placement provides zonal placement. It doesn't provide proximity placement towards compute. As such, it doesnΓÇÖt provide lowest latency guarantee. VM-to-storage latencies are within the availability zone latency envelopes.
+>Azure NetApp Files availability zone volume placement provides zonal placement. It ***does not*** provide proximity placement towards compute. As such, it ***does not*** guarantee the lowest latency. VM-to-storage latencies are within the availability zone latency envelopes.
You can co-locate your compute, storage, networking, and data resources across an availability zone, and replicate this arrangement in other availability zones. Many applications are built for HA across multiple availability zones using application-based replication and failover technologies, like [SQL Server Always-On Availability Groups (AOAG)](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server), [SAP HANA with HANA System Replication (HSR)](../virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-suse.md), and [Oracle with Data Guard](../virtual-machines/workloads/oracle/oracle-reference-architecture.md#high-availability-for-oracle-databases).
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 10/20/2022 Last updated : 10/25/2022 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Availability zone volume placement](manage-availability-zone-volume-placement.md) (Preview)
- Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Using Azure availability zones lets you design and operate applications and databases that automatically transition between zones without interruption. Azure NetApp Files lets you deploy new volumes in the logical availability zone of your choice to support enterprise, mission-critical HA deployments across multiple AZs. AzureΓÇÖs push towards the use of [availability zones (AZs)](../availability-zones/az-overview.md#availability-zones) has increased, and the use of high availability (HA) deployments with availability zones are now a default and best practice recommendation in AzureΓÇÖs [Well Architected Framework](/architecture/framework/resiliency/design-best-practices#use-zone-aware-services).
+ Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Using Azure availability zones lets you design and operate applications and databases that automatically transition between zones without interruption. Azure NetApp Files lets you deploy new volumes in the logical availability zone of your choice to support enterprise, mission-critical HA deployments across multiple AZs. Azure's push towards the use of [availability zones (AZs)](../availability-zones/az-overview.md#availability-zones) has increased, and the use of high availability (HA) deployments with availability zones is now a default and best practice recommendation in Azure's [Well-Architected Framework](/azure/architecture/framework/resiliency/design-best-practices#use-zone-aware-services).
* [Application volume group for SAP HANA](application-volume-group-introduction.md) now generally available (GA)
azure-resource-manager Bicep Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-deployment.md
The preceding example returns the following object when deployed to global Azure
"vmImageAliasDoc": "https://raw.githubusercontent.com/Azure/azure-rest-api-specs/master/arm-compute/quickstart-templates/aliases.json", "resourceManager": "https://management.azure.com/", "authentication": {
- "loginEndpoint": "https://login.windows.net/",
+ "loginEndpoint": "https://login.microsoftonline.com/",
"audiences": [ "https://management.core.windows.net/", "https://management.azure.com/"
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
Previously updated : 12/28/2021 Last updated : 10/26/2022
Learn how to use deployment scripts in Bicep. With [Microsoft.Resources/deployme
- perform data plane operations, for example, copy blobs or seed database - look up and validate a license key - create a self-signed certificate-- create an object in Azure AD
+- create an object in Azure Active Directory (Azure AD)
- look up IP Address blocks from custom system The benefits of deployment script:
The deployment script resource is only available in the regions where Azure Cont
### Training resources
-If you would rather learn about the ARM template test toolkit through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts).
+If you would rather learn about deployment scripts through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts).
## Configure the minimum permissions
Property value details:
- [Sample 1](https://raw.githubusercontent.com/Azure/azure-docs-bicep-samples/master/samples/deployment-script/deploymentscript-keyvault.bicep): create a key vault and use deployment script to assign a certificate to the key vault. - [Sample 2](https://raw.githubusercontent.com/Azure/azure-docs-bicep-samples/master/samples/deployment-script/deploymentscript-keyvault-subscription.bicep): create a resource group at the subscription level, create a key vault in the resource group, and then use deployment script to assign a certificate to the key vault. - [Sample 3](https://raw.githubusercontent.com/Azure/azure-docs-bicep-samples/master/samples/deployment-script/deploymentscript-keyvault-mi.bicep): create a user-assigned managed identity, assign the contributor role to the identity at the resource group level, create a key vault, and then use deployment script to assign a certificate to the key vault.
+- [Sample 4](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.resources/deployment-script-azcli-graph-azure-ad): manually create a user-assigned managed identity and assign it permission to use the Microsoft Graph API to create Azure AD applications; in the Bicep file, use a deployment script to create an Azure AD application and service principal, and output the object IDs and client ID.
## Use inline scripts
After the script is tested successfully, you can use it as a deployment script i
| DeploymentScriptContainerGroupInNonterminalState | When creating the Azure container instance (ACI), another deployment script is using the same ACI name in the same scope (same subscription, resource group name, and resource name). | | DeploymentScriptContainerGroupNameInvalid | The Azure container instance name (ACI) specified doesn't meet the ACI requirements. See [Troubleshoot common issues in Azure Container Instances](../../container-instances/container-instances-troubleshooting.md#issues-during-container-group-deployment).|
+## Use Microsoft Graph within a deployment script
+
+A deployment script can use [Microsoft Graph](/graph/overview) to create and work with objects in Azure AD.
+
+### Commands
+
+When you use Azure CLI deployment scripts, you can use commands within the `az ad` command group to work with applications, service principals, groups, and users. You can also directly invoke Microsoft Graph APIs by using the `az rest` command.
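As a hedged illustration of the CLI route, the body of a deployment script might contain commands like the following. The display names and group properties are placeholder values, not part of any sample referenced in this article.

```azurecli
# Sketch of commands a CLI deployment script could run against Azure AD / Microsoft Graph.
# Display names and the mail nickname are placeholders.

# Create an Azure AD application and a service principal for it.
appId=$(az ad app create --display-name my-sample-app --query appId --output tsv)
az ad sp create --id "$appId"

# Call Microsoft Graph directly for operations that az ad doesn't wrap.
az rest --method POST --url https://graph.microsoft.com/v1.0/groups \
  --body '{"displayName":"my-sample-group","mailEnabled":false,"mailNickname":"mysamplegroup","securityEnabled":true}'
```

The identity that runs the script needs the corresponding Microsoft Graph permissions, as described in the Permissions section below.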
+
+When you use Azure PowerShell deployment scripts, you can use the `Invoke-RestMethod` cmdlet to directly invoke the Microsoft Graph APIs.
+
+### Permissions
+
+The identity that your deployment script uses needs to be authorized to work with the Microsoft Graph API, with the appropriate permissions for the operations it performs. You must authorize the identity outside of your Bicep file, such as by pre-creating a user-assigned managed identity and assigning it an app role for Microsoft Graph. For more information, [see this quickstart example](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.resources/deployment-script-azcli-graph-azure-ad).
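For example, granting a user-assigned managed identity an app role on Microsoft Graph can itself be done with the CLI before deployment. This is a sketch only: it assumes a recent Azure CLI where `az ad` output uses Microsoft Graph property names, the identity name is a placeholder, and `<app-role-id>` must be looked up from the Graph service principal's `appRoles` collection for the permission you need (for example, `Application.ReadWrite.All`).

```azurecli
# Sketch: assign a Microsoft Graph app role to a user-assigned managed identity.
# 00000003-0000-0000-c000-000000000000 is the well-known appId of Microsoft Graph;
# the identity name and <app-role-id> are placeholders.
graphSpId=$(az ad sp show --id 00000003-0000-0000-c000-000000000000 --query id --output tsv)
miPrincipalId=$(az identity show --name my-script-identity \
  --resource-group my-resource-group --query principalId --output tsv)

az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/servicePrincipals/$graphSpId/appRoleAssignedTo" \
  --body "{\"principalId\":\"$miPrincipalId\",\"resourceId\":\"$graphSpId\",\"appRoleId\":\"<app-role-id>\"}"
```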
+ ## Next steps In this article, you learned how to use deployment scripts. To walk through a Learn module:
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
Previously updated : 09/06/2022 Last updated : 10/26/2022
Learn how to use deployment scripts in Azure Resource templates (ARM templates).
- Perform data plane operations, for example, copy blobs or seed database. - Look up and validate a license key. - Create a self-signed certificate.-- Create an object in Azure AD.
+- Create an object in Azure Active Directory (Azure AD).
- Look up IP Address blocks from custom system. The benefits of deployment script:
The deployment script resource is only available in the regions where Azure Cont
### Training resources
-To learn more about the ARM template test toolkit, and for hands-on guidance, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts).
+If you would rather learn about deployment scripts through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts).
## Configure the minimum permissions
Property value details:
- [Sample 3](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-keyvault-mi.json): create a user-assigned managed identity, assign the contributor role to the identity at the resource group level, create a key vault, and then use deployment script to assign a certificate to the key vault. - [Sample 4](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-keyvault-lock-sub.json): it is the same scenario as Sample 1 in this list. A new resource group is created to run the deployment script. This template is a subscription level template. - [Sample 5](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-keyvault-lock-group.json): it is the same scenario as Sample 4. This template is a resource group level template.
+- [Sample 6](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.resources/deployment-script-azcli-graph-azure-ad): manually create a user-assigned managed identity and assign it permission to use the Microsoft Graph API to create Azure AD applications; in the Bicep file, use a deployment script to create an Azure AD application and service principal, and output the object IDs and client ID.
## Use inline scripts
After the script is tested successfully, you can use it as a deployment script i
| DeploymentScriptContainerGroupInNonterminalState | When creating the Azure container instance (ACI), another deployment script is using the same ACI name in the same scope (same subscription, resource group name, and resource name). | | DeploymentScriptContainerGroupNameInvalid | The Azure container instance name (ACI) specified doesn't meet the ACI requirements. See [Troubleshoot common issues in Azure Container Instances](../../container-instances/container-instances-troubleshooting.md#issues-during-container-group-deployment).|
+## Use Microsoft Graph within a deployment script
+
+A deployment script can use [Microsoft Graph](/graph/overview) to create and work with objects in Azure AD.
+
+### Commands
+
+When you use Azure CLI deployment scripts, you can use commands within the `az ad` command group to work with applications, service principals, groups, and users. You can also directly invoke Microsoft Graph APIs by using the `az rest` command.
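+
+As a hedged example, the script body could also call Microsoft Graph directly with `az rest`, as in the sketch below. The group properties are illustrative, and the call assumes the script's identity has been granted the `Group.ReadWrite.All` application role.
+
+```azurecli
+# az rest obtains a Microsoft Graph token from the signed-in identity automatically.
+az rest --method post \
+  --url "https://graph.microsoft.com/v1.0/groups" \
+  --headers "Content-Type=application/json" \
+  --body '{
+    "displayName": "example-deployment-group",
+    "mailEnabled": false,
+    "mailNickname": "exampledeploymentgroup",
+    "securityEnabled": true
+  }'
+```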
+
+When you use Azure PowerShell deployment scripts, you can use the `Invoke-RestMethod` cmdlet to directly invoke the Microsoft Graph APIs.
+
+### Permissions
+
+The identity that your deployment script uses needs to be authorized to work with the Microsoft Graph API, with the appropriate permissions for the operations it performs. You must authorize the identity outside of your template deployment, such as by pre-creating a user-assigned managed identity and assigning it an app role for Microsoft Graph. For more information, [see this quickstart example](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.resources/deployment-script-azcli-graph-azure-ad).
+ ## Next steps In this article, you learned how to use deployment scripts. To walk through a deployment script tutorial:
azure-resource-manager Deployment Tutorial Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-tutorial-pipeline.md
In the [previous tutorial](./deployment-tutorial-linked-template.md), you deploy a linked template. In this tutorial, you learn how to use Azure Pipelines to continuously build and deploy Azure Resource Manager template (ARM template) projects.
-Azure DevOps provides developer services to support teams to plan work, collaborate on code development, and build and deploy applications. Developers can work in the cloud using Azure DevOps Services. Azure DevOps provides an integrated set of features that you can access through your web browser or IDE client. Azure Pipeline is one of these features. Azure Pipelines is a fully featured continuous integration (CI) and continuous delivery (CD) service. It works with your preferred Git provider and can deploy to most major cloud services. Then you can automate the build, testing, and deployment of your code to Microsoft Azure, Google Cloud Platform, or Amazon Web Services.
+Azure DevOps provides developer services that help teams plan work, collaborate on code development, and build and deploy applications. Developers can work in the cloud using Azure DevOps Services. Azure DevOps provides an integrated set of features that you can access through your web browser or IDE client. Azure Pipelines is one of these features. Azure Pipelines is a fully featured continuous integration (CI) and continuous delivery (CD) service. It works with your preferred Git provider and can deploy to most major cloud services. You can then automate the build, testing, and deployment of your code to Microsoft Azure, Google Cloud Platform, or Amazon Web Services.
> [!NOTE] > Pick a project name. When you go through the tutorial, replace any of the **AzureRmPipeline** with your project name.
azure-signalr Signalr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-overview.md
There are many different ways to program with Azure SignalR Service, as some of
- **[Scale an ASP.NET Core SignalR App](signalr-concept-scale-aspnet-core.md)** - Integrate Azure SignalR Service with an ASP.NET Core SignalR application to scale out to hundreds of thousands of connections. - **[Build serverless real-time apps](signalr-concept-azure-functions.md)** - Use Azure Functions' integration with Azure SignalR Service to build serverless real-time applications in languages such as JavaScript, C#, and Java.-- **[Send messages from server to clients via REST API](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md)** - Azure SignalR Service provides REST API to enable applications to post messages to clients connected with SignalR Service, in any REST capable programming languages.
+- **[Send messages from server to clients via REST API](signalr-reference-data-plane-rest-api.md)** - Azure SignalR Service provides a REST API that lets applications post messages to clients connected to SignalR Service from any REST-capable programming language, as in the sketch below.
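+
+The following curl sketch shows the shape of such a call, assuming the v1 data-plane broadcast endpoint; the instance URL, hub name, target, and bearer token are placeholders (the token is a JWT signed with the service access key).
+
+```bash
+SERVICE_URL="https://<your-instance>.service.signalr.net"
+HUB="chat"
+TOKEN="<jwt-generated-from-access-key>"
+
+# Broadcast a message to every client connected to the hub.
+curl -X POST "$SERVICE_URL/api/v1/hubs/$HUB" \
+  -H "Authorization: Bearer $TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{"target": "newMessage", "arguments": ["Hello from the REST API"]}'
+```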
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-security-integration.md
Title: Integrate Microsoft Defender for Cloud with Azure VMware Solution
description: Learn how to protect your Azure VMware Solution VMs with Azure's native security tools from the workload protection dashboard. Previously updated : 10/18/2022 Last updated : 10/24/2022+ # Integrate Microsoft Defender for Cloud with Azure VMware Solution
azure-vmware Azure Vmware Solution Citrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-citrix.md
Title: Deploy Citrix on Azure VMware Solution
description: Learn how to deploy VMware Citrix on Azure VMware Solution. Previously updated : 11/02/2021- Last updated : 10/24/2022+
Citrix Virtual Apps and Desktop service supports Azure VMware Solution. Azure VM
[Solution brief](https://www.citrix.com/content/dam/citrix/en_us/documents/solution-brief/citrix-virtual-apps-and-desktop-service-on-azure-vmware-solution.pdf) **FAQ (review Q&As)**
-
-- Q. Can I migrate my existing Citrix desktops and apps to Azure VMware Solution, or operate a hybrid environment that consists of on-premises and Azure VMware Solution-based Citrix workloads? +
+- Q. Can I migrate my existing Citrix desktops and apps to Azure VMware Solution, or operate a hybrid environment that consists of on-premises and Azure VMware Solution-based Citrix workloads?
A. Yes. You can use the same machine images, application packages, and processes you currently use. You're able to seamlessly link on-premises and Azure VMware Solution-based environments together for a migration. -- Q. Can Citrix be deployed as a standalone environment within Azure VMware Solution?
+- Q. Can Citrix be deployed as a standalone environment within Azure VMware Solution?
- A. Yes. You're free to migrate, operate a hybrid environment, or deploy a standalone directly into Azure VMware Solution
+ A. Yes. You're free to migrate, operate a hybrid environment, or deploy a standalone directly into Azure VMware Solution.
-- Q. Does Azure VMware Solution support both PVS and MCS?
+- Q. Does Azure VMware Solution support both PVS and MCS?
- A. Yes
+ A. Yes.
-- Q. Are GPU-based workloads supported in Citrix on Azure VMware Solution?
+- Q. Are GPU-based workloads supported in Citrix on Azure VMware Solution?
- A. Not at this time. However, Citrix workloads on Microsoft Azure support GPU if that use case is important to you.
+ A. Not at this time. However, Citrix workloads on Microsoft Azure support GPU if that use case is important to you.
-- Q. Is Azure VMware Solution supported with on-prem Citrix deployments or LTSR?
+- Q. Is Azure VMware Solution supported with on-premises Citrix deployments or LTSR?
A. No. Azure VMware Solution is only supported with the Citrix Virtual Apps and Desktops service offerings. -- Q. Who do I call for support?
+- Q. Who do I call for support?
- A. Customers should contact Citrix support www.citrix.com/support for assistance.
+ A. Customers should contact Citrix support www.citrix.com/support for assistance.
- Q. Can I use my Azure Virtual Desktop benefit from Microsoft with Citrix on Azure VMware Solution?
- A. No. Azure Virtual Desktop benefits are applicable to native Microsoft Azure workloads only. Citrix Virtual Apps and Desktops service, as a native Azure offering, can apply your Azure Virtual Desktop benefit alongside your Azure VMware Solution deployment.
+ A. No. Azure Virtual Desktop benefits are applicable to native Microsoft Azure workloads only. Citrix Virtual Apps and Desktops service, as a native Azure offering, can apply your Azure Virtual Desktop benefit alongside your Azure VMware Solution deployment.
-- Q. How do I purchase Citrix Virtual Apps and Desktops service to use Azure VMware Solution?
+- Q. How do I purchase Citrix Virtual Apps and Desktops service to use Azure VMware Solution?
- A. You can purchase Citrix offerings via your Citrix partner or directly from the Azure Marketplace.
+ A. You can purchase Citrix offerings via your Citrix partner or directly from the Azure Marketplace.
azure-vmware Concepts Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-api-management.md
Title: Concepts - API Management
description: Learn how API Management protects APIs running on Azure VMware Solution virtual machines (VMs) Previously updated : 04/28/2021 Last updated : 10/25/2022+ # Publish and protect APIs running on Azure VMware Solution VMs Microsoft Azure [API Management](https://azure.microsoft.com/services/api-management/) lets you securely publish to external or internal consumers. Only the Developer (development) and Premium (production) SKUs allow Azure Virtual Network integration to publish APIs that run on Azure VMware Solution workloads. In addition, both SKUs enable the connectivity between the API Management service and the backend.
-The API Management configuration is the same for backend services that run on Azure VMware Solution virtual machines (VMs) and on-premises. In addition, API Management configures the virtual IP on the load balancer as the backend endpoint for both deployments when the backend server is placed behind an NSX Load Balancer on the Azure VMware Solution.
+The API Management configuration is the same for backend services that run on Azure VMware Solution virtual machines (VMs) and on-premises. API Management also configures the virtual IP on the load balancer as the backend endpoint for both deployments when the backend server is placed behind an NSX Load Balancer on Azure VMware Solution.
## External deployment
The external deployment diagram shows the entire process and the actors involved
The traffic flow goes through the API Management instance, which abstracts the backend services, plugged into the Hub virtual network. The ExpressRoute Gateway routes the traffic to the ExpressRoute Global Reach channel and reaches an NSX Load Balancer distributing the incoming traffic to the different backend service instances.
-API Management has an Azure Public API, and activating Azure DDoS Protection Service is recommended.
+API Management has an Azure Public API, and activating Azure DDoS Protection Service is recommended.
:::image type="content" source="media/api-management/api-management-external-deployment.png" alt-text="Diagram showing an external API Management deployment for Azure VMware Solution" border="false"::: - ## Internal deployment An internal deployment publishes APIs consumed by internal users or systems. DevOps teams and API developers use the same management tools and developer portal as in the external deployment.
The deployment diagram below shows consumers that can be internal or external, w
In an internal deployment, APIs get exposed to the same API Management instance. In front of API Management, Application Gateway gets deployed with Azure Web Application Firewall (WAF) capability activated. Also deployed, a set of HTTP listeners and rules to filter the traffic, exposing only a subset of the backend services running on Azure VMware Solution. -
-* Internal traffic routes through ExpressRoute Gateway to Azure Firewall and then to API Management, directly or through traffic rules.
+* Internal traffic routes through ExpressRoute Gateway to Azure Firewall and then to API Management, directly or through traffic rules.
* External traffic enters Azure through Application Gateway, which uses the external protection layer for API Management. - :::image type="content" source="media/api-management/api-management-internal-deployment.png" alt-text="Diagram showing an internal API Management deployment for Azure VMware Solution" lightbox="media/api-management/api-management-internal-deployment.png" border="false":::
azure-vmware Concepts Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-hub-and-spoke.md
Title: Concept - Integrate an Azure VMware Solution deployment in a hub and spok
description: Learn about integrating an Azure VMware Solution deployment in a hub and spoke architecture on Azure. Previously updated : 10/20/2022 Last updated : 10/24/2022+ # Integrate Azure VMware Solution in a hub and spoke architecture This article provides recommendations for integrating an Azure VMware Solution deployment in an existing or a new [Hub and Spoke architecture](/azure/architecture/reference-architectures/hybrid-networking/#hub-spoke-network-topology) on Azure. - The Hub and Spoke scenario assume a hybrid cloud environment with workloads on: * Native Azure using IaaS or PaaS services
The architecture has the following main components:
- **ExpressRoute Global Reach:** Enables the connectivity between on-premises and Azure VMware Solution private cloud. The connectivity between Azure VMware Solution and the Azure fabric is through ExpressRoute Global Reach only. - - **S2S VPN considerations:** Connectivity to Azure VMware Solution private cloud using Azure S2S VPN is supported as long as it meets the [minimum network requirements](https://docs.vmware.com/en/VMware-HCX/4.4/hcx-user-guide/GUID-8128EB85-4E3F-4E0C-A32C-4F9B15DACC6D.html) for VMware HCX. - - **Hub virtual network:** Acts as the central point of connectivity to your on-premises network and Azure VMware Solution private cloud. - **Spoke virtual network**
Because an ExpressRoute gateway doesn't provide transitive routing between its c
:::image type="content" source="./media/hub-spoke/on-premises-azure-vmware-solution-traffic-flow.png" alt-text="Diagram showing the on-premises to Azure VMware Solution traffic flow." border="false" lightbox="./media/hub-spoke/on-premises-azure-vmware-solution-traffic-flow.png"::: - * **Azure VMware Solution to Hub VNET traffic flow** :::image type="content" source="./media/hub-spoke/azure-vmware-solution-hub-vnet-traffic-flow.png" alt-text="Diagram showing the Azure VMware Solution to Hub virtual network traffic flow." border="false" lightbox="./media/hub-spoke/azure-vmware-solution-hub-vnet-traffic-flow.png"::: - For more information on Azure VMware Solution networking and connectivity concepts, see the [Azure VMware Solution product documentation](./concepts-networking.md). ### Traffic segmentation
Create route tables to direct the traffic to Azure Firewall. For the Spoke virt
:::image type="content" source="media/hub-spoke/create-route-table-to-direct-traffic.png" alt-text="Screenshot showing the route tables to direct traffic to Azure Firewall." lightbox="media/hub-spoke/create-route-table-to-direct-traffic.png"::: - > [!IMPORTANT] > A route with address prefix 0.0.0.0/0 on the **GatewaySubnet** setting is not supported.
For more information, see the Azure VMware Solution-specific article on [Applica
:::image type="content" source="media/hub-spoke/azure-vmware-solution-second-level-traffic-segmentation.png" alt-text="Diagram showing the second level of traffic segmentation using the Network Security Groups." border="false"::: - ### Jump box and Azure Bastion Access Azure VMware Solution environment with a jump box, which is a Windows 10 or Windows Server VM deployed in the shared service subnet within the Hub virtual network.
As a security best practice, deploy [Microsoft Azure Bastion](../bastion/index.y
> [!IMPORTANT] > Do not give a public IP address to the jump box VM or expose 3389/TCP port to the public internet. - :::image type="content" source="media/hub-spoke/azure-bastion-hub-vnet.png" alt-text="Diagram showing the Azure Bastion Hub virtual network." border="false":::
As a security best practice, deploy [Microsoft Azure Bastion](../bastion/index.y
For Azure DNS resolution, there are two options available: -- Use the domain controllers deployed on the Hub (described in [Identity considerations](#identity-considerations)) as name servers.
+- Use the domain controllers deployed on the Hub (described in [Identity considerations](#identity-considerations)) as name servers.
-- Deploy and configure an Azure DNS private zone.
+- Deploy and configure an Azure DNS private zone.
The best approach is to combine both to provide reliable name resolution for Azure VMware Solution, on-premises, and Azure. As a general design recommendation, use the existing Active Directory-integrated DNS deployed onto at least two Azure VMs in the Hub virtual network and configured in the Spoke virtual networks to use those Azure DNS servers in the DNS settings.
-You can use Azure Private DNS, where the Azure Private DNS zone links to the virtual network. The DNS servers are used as hybrid resolvers with conditional forwarding to on-premises or Azure VMware Solution running DNS using customer Azure Private DNS infrastructure.
+You can use Azure Private DNS, where the Azure Private DNS zone links to the virtual network. The DNS servers are used as hybrid resolvers with conditional forwarding to on-premises or Azure VMware Solution running DNS using customer Azure Private DNS infrastructure.
To automatically manage the DNS records' lifecycle for the VMs deployed within the Spoke virtual networks, enable autoregistration. When enabled, the maximum number of private DNS zones is only one. If disabled, then the maximum number is 1000.
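
As a minimal sketch with illustrative names, the private zone and the autoregistration-enabled virtual network link can be created with Azure CLI:

```azurecli
# Create the private DNS zone.
az network private-dns zone create \
  --resource-group hub-networking-rg \
  --name contoso.internal

# Link the zone to the Hub virtual network and enable autoregistration,
# so DNS records for VMs in the linked network are created and removed automatically.
az network private-dns link vnet create \
  --resource-group hub-networking-rg \
  --zone-name contoso.internal \
  --name hub-vnet-link \
  --virtual-network hub-vnet \
  --registration-enabled true
```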
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-networking.md
Title: Concepts - Network interconnectivity
description: Learn about key aspects and use cases of networking and interconnectivity in Azure VMware Solution. Previously updated : 06/28/2021 Last updated : 10/25/2022+ # Azure VMware Solution networking and interconnectivity concepts
In the fully interconnected scenario, you can access the Azure VMware Solution f
The diagram below shows the on-premises to private cloud interconnectivity, which enables the following use cases: - Hot/Cold vSphere vMotion between on-premises and Azure VMware Solution.-- On-Premises to Azure VMware Solution private cloud management access.
+- On-premises to Azure VMware Solution private cloud management access.
:::image type="content" source="media/concepts/adjacency-overview-drawing-double.png" alt-text="Diagram showing the virtual network and on-premises to private cloud interconnectivity." border="false":::
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md
Title: Concepts - Private clouds and clusters
description: Learn about the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters. Previously updated : 08/25/2021 Last updated : 10/25/2022+
-# Azure VMware Solution private cloud and cluster concepts
+# Azure VMware Solution private cloud and cluster concepts
Azure VMware Solution delivers VMware-based private clouds in Azure. The private cloud hardware and software deployments are fully integrated and automated in Azure. You deploy and manage the private cloud through the Azure portal, CLI, or PowerShell. A private cloud includes clusters with: -- Dedicated bare-metal server hosts provisioned with VMware ESXi hypervisor -- VMware vCenter Server for managing ESXi and vSAN
+- Dedicated bare-metal server hosts provisioned with VMware ESXi hypervisor
+- VMware vCenter Server for managing ESXi and vSAN
- VMware NSX-T Data Center software-defined networking for vSphere workload VMs - VMware vSAN datastore for vSphere workload VMs - VMware HCX for workload mobility - Resources in the Azure underlay (required for connectivity and to operate the private cloud)
-As with other resources, private clouds are installed and managed from within an Azure subscription. The number of private clouds within a subscription is scalable. Initially, there's a limit of one private cloud per subscription. There's a logical relationship between Azure subscriptions, Azure VMware Solution private clouds, vSAN clusters, and hosts.
+As with other resources, private clouds are installed and managed from within an Azure subscription. The number of private clouds within a subscription is scalable. Initially, there's a limit of one private cloud per subscription. There's a logical relationship between Azure subscriptions, Azure VMware Solution private clouds, vSAN clusters, and hosts.
-The diagram shows a single Azure subscription with two private clouds that represent a development and production environment. In each of those private clouds are two clusters.
+The diagram shows a single Azure subscription with two private clouds that represent a development and production environment. In each of those private clouds are two clusters.
## Hosts
The diagram shows a single Azure subscription with two private clouds that repre
## Host maintenance and lifecycle management -- [!INCLUDE [vmware-software-update-frequency](includes/vmware-software-update-frequency.md)] ## Host monitoring and remediation
-Azure VMware Solution continuously monitors the health of both the underlay and the VMware components. When Azure VMware Solution detects a failure, it takes action to repair the failed components. When Azure VMware Solution detects a degradation or failure on an Azure VMware Solution node, it triggers the host remediation process.
+Azure VMware Solution continuously monitors the health of both the underlay and the VMware components. When Azure VMware Solution detects a failure, it takes action to repair the failed components. When Azure VMware Solution detects a degradation or failure on an Azure VMware Solution node, it triggers the host remediation process.
Host remediation involves replacing the faulty node with a new healthy node in the cluster. Then, when possible, the faulty host is placed in VMware vSphere maintenance mode. VMware vMotion moves the VMs off the faulty host to other available servers in the cluster, potentially allowing zero downtime for live migration of workloads. If the faulty host can't be placed in maintenance mode, the host is removed from the cluster. Azure VMware Solution monitors the following conditions on the host: -- Processor status -- Memory status -- Connection and power state -- Hardware fan status -- Network connectivity loss -- Hardware system board status -- Errors occurred on the disk(s) of a vSAN host -- Hardware voltage -- Hardware temperature status -- Hardware power status -- Storage status -- Connection failure
+- Processor status
+- Memory status
+- Connection and power state
+- Hardware fan status
+- Network connectivity loss
+- Hardware system board status
+- Errors that occurred on the disk(s) of a vSAN host
+- Hardware voltage
+- Hardware temperature status
+- Hardware power status
+- Storage status
+- Connection failure
> [!NOTE] > Azure VMware Solution tenant admins must not edit or delete the above defined VMware vCenter Server alarms, as these are managed by the Azure VMware Solution control plane on vCenter Server. These alarms are used by Azure VMware Solution monitoring to trigger the Azure VMware Solution host remediation process.
Now that you've covered Azure VMware Solution private cloud concepts, you may wa
[vCSA versions]: https://kb.vmware.com/s/article/2143838 [ESXi versions]: https://kb.vmware.com/s/article/2143832 [vSAN versions]: https://kb.vmware.com/s/article/2150753-
azure-vmware Concepts Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-run-command.md
Title: Concepts - Run command in Azure VMware Solution (Preview)
description: Learn about using run commands in Azure VMware Solution. Previously updated : 09/17/2021 Last updated : 10/25/2022+
-# Run command in Azure VMware Solution
+# Run command in Azure VMware Solution
-In Azure VMware Solution, vCenter Server has a built-in local user called *cloudadmin* assigned to the CloudAdmin role. The CloudAdmin role has vCenter Server [privileges](concepts-identity.md#vcenter-server-access-and-identity) that differ from other VMware cloud solutions and on-premises deployments. The Run command feature lets you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets.
+In Azure VMware Solution, vCenter Server has a built-in local user called *cloudadmin* assigned to the CloudAdmin role. The CloudAdmin role has vCenter Server [privileges](concepts-identity.md#vcenter-server-access-and-identity) that differ from other VMware cloud solutions and on-premises deployments. The Run command feature lets you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets.
Azure VMware Solution supports the following operations:
Azure VMware Solution supports the following operations:
- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) - >[!NOTE] >Run commands are executed one at a time in the order submitted.
You can view the status of any executed run command, including the output, error
:::image type="content" source="media/run-command/run-execution-status-example-output.png" alt-text="Screenshot showing the output of a run execution.":::
- - **Error** - Error messages generated in the execution of the cmdlet. This is in addition to the terminating error message on the details pane.
+ - **Error** - Error messages generated in the execution of the cmdlet. This is in addition to the terminating error message on the details pane.
:::image type="content" source="media/run-command/run-execution-status-example-error.png" alt-text="Screenshot showing the errors detected during the execution of an execution.":::
- - **Warning** - Warning messages generated during the execution.
+ - **Warning** - Warning messages generated during the execution.
:::image type="content" source="media/run-command/run-execution-status-example-warning.png" alt-text="Screenshot showing the warnings detected during the execution of an execution.":::
- - **Information** - Progress and diagnostic generated messages during the execution of a cmdlet.
+ - **Information** - Progress and diagnostic generated messages during the execution of a cmdlet.
:::image type="content" source="medilet as it runs."::: -- ## Cancel or delete a job -- ### Method 1 This method attempts to cancel the execution, and then deletes it upon completion.
This method attempts to cancel the execution, and then deletes it upon completio
2. Select **Yes** to cancel and remove the job for all users. -- ### Method 2 1. Select **Run command** > **Packages** > **Run execution status**.
This method attempts to cancel the execution, and then deletes it upon completio
3. Select **Yes** to cancel and remove the job for all users. -- ## Next steps Now that you've learned about the Run command concepts, you can use the Run command feature to:
Now that you've learned about the Run command concepts, you can use the Run comm
- [Configure external identity source for vCenter (Run command)](configure-identity-source-vcenter.md) - Configure Active Directory over LDAP or LDAPS for vCenter Server, which enables the use of an external identity source as an Active Directory. Then, you can add groups from the external identity source to the CloudAdmin role. -- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) - Store data directly to a recovery cluster in vSAN. The data gets captured through I/O filters that run within vSphere. The underlying data store can be VMFS, VSAN, vVol, or any HCI platform.
+- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) - Store data directly to a recovery cluster in vSAN. The data gets captured through I/O filters that run within vSphere. The underlying data store can be VMFS, VSAN, vVol, or any HCI platform.
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
Title: Enable Public IP to the NSX-T Data Center Edge for Azure VMware Solution
description: This article shows how to enable internet access for your Azure VMware Solution. Previously updated : 10/17/2022 Last updated : 10/24/2022+ # Enable Public IP to the NSX-T Data Center Edge for Azure VMware Solution
-In this article, you'll learn how to enable Public IP to the NSX-T Data Center Edge for your Azure VMware Solution.
+In this article, you'll learn how to enable Public IP to the NSX-T Data Center Edge for your Azure VMware Solution.
>[!TIP] >Before you enable Internet access to your Azure VMware Solution, review the [Internet connectivity design considerations](concepts-design-public-internet-access.md).
-Public IP to the NSX-T Data Center Edge is a feature in Azure VMware Solution that enables inbound and outbound internet access for your Azure VMware Solution environment.
+Public IP to the NSX-T Data Center Edge is a feature in Azure VMware Solution that enables inbound and outbound internet access for your Azure VMware Solution environment.
>[!IMPORTANT] >The use of Public IPv4 addresses can be consumed directly in Azure VMware Solution and charged based on the Public IPv4 prefix shown on [Pricing - Virtual Machine IP Address Options.](https://azure.microsoft.com/pricing/details/ip-addresses/).
Public IP to the NSX-T Data Center Edge is a feature in Azure VMware Solution th
The Public IP is configured in Azure VMware Solution through the Azure portal and the NSX-T Data Center interface within your Azure VMware Solution private cloud. With this capability, you have the following features:+ - A cohesive and simplified experience for reserving and using a Public IP down to the NSX Edge. - The ability to receive up to 1000 or more Public IPs, enabling Internet access at scale. - Inbound and outbound internet access for your workload VMs.-- DDoS Security protection against network traffic in and out of the Internet.
+- DDoS Security protection against network traffic in and out of the Internet.
- HCX Migration support over the Public Internet. >[!IMPORTANT] >You can configure up to 64 total Public IP addresses across these network blocks. If you want to configure more than 64 Public IP addresses, please submit a support ticket stating how many. ## Prerequisites+ - Azure VMware Solution private cloud - DNS Server configured on the NSX-T Data Center
-## Reference architecture
+## Reference architecture
+ The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX-T Data Center Edge. :::image type="content" source="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip.png" alt-text="Diagram that shows architecture of Internet access to and from your Azure VMware Solution Private Cloud using a Public IP directly to the NSX Edge." border="false" lightbox="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip-expanded.png"::: >[!IMPORTANT]
->The use of Public IP down to the NSX-T Data Center Edge is not compatible with reverse DNS Lookup.
+>The use of Public IP down to the NSX-T Data Center Edge is not compatible with reverse DNS Lookup.
## Configure a Public IP in the Azure portal
-1. Log on to the Azure portal.
+
+1. Log in to the Azure portal.
1. Search for and select Azure VMware Solution.
-2. Select the Azure VMware Solution private cloud.
-1. In the left navigation, under **Workload Networking**, select **Internet connectivity**.
-4. Select the **Connect using Public IP down to the NSX-T Edge** button.
+1. Select the Azure VMware Solution private cloud.
+1. In the left navigation, under **Workload Networking**, select **Internet connectivity**.
+1. Select the **Connect using Public IP down to the NSX-T Edge** button.
>[!IMPORTANT] >Before selecting a Public IP, ensure you understand the implications to your existing environment. For more information, see [Internet connectivity design considerations](concepts-design-public-internet-access.md). This should include a risk mitigation review with your relevant networking and security governance and compliance teams.
-
-5. Select **Public IP**.
+
+6. Select **Public IP**.
:::image type="content" source="media/public-ip-nsx-edge/public-ip-internet-connectivity.png" alt-text="Diagram that shows how to select public IP to the NSX Edge":::
-6. Enter the **Public IP name** and select a subnet size from the **Address space** dropdown and select **Configure**.
-7. This Public IP should be configured within 20 minutes and will show the subnet.
+6. Enter the **Public IP name** and select a subnet size from the **Address space** dropdown and select **Configure**.
+7. This Public IP should be configured within 20 minutes and will show the subnet.
:::image type="content" source="media/public-ip-nsx-edge/public-ip-subnet-internet-connectivity.png" alt-text="Diagram that shows Internet connectivity in Azure VMware Solution."::: 1. If you don't see the subnet, refresh the list. If the refresh fails, try the configuration again.
-9. After configuring the Public IP, select the **Connect using the Public IP down to the NSX-T Edge** checkbox to disable all other Internet options.
-10. Select **Save**.
+9. After configuring the Public IP, select the **Connect using the Public IP down to the NSX-T Edge** checkbox to disable all other Internet options.
+10. Select **Save**.
-You have successfully enabled Internet connectivity for your Azure VMware Solution private cloud and reserved a Microsoft allocated Public IP. You can now configure this Public IP down to the NSX-T Data Center Edge for your workloads. The NSX-T Data Center is used for all VM communication. There are several options for configuring your reserved Public IP down to the NSX-T Data Center Edge.
+You have successfully enabled Internet connectivity for your Azure VMware Solution private cloud and reserved a Microsoft allocated Public IP. You can now configure this Public IP down to the NSX-T Data Center Edge for your workloads. The NSX-T Data Center is used for all VM communication. There are several options for configuring your reserved Public IP down to the NSX-T Data Center Edge.
There are three options for configuring your reserved Public IP down to the NSX-T Data Center Edge: Outbound Internet Access for VMs, Inbound Internet Access for VMs, and Gateway Firewall used to Filter Traffic to VMs at T1 Gateways. ### Outbound Internet access for VMs
-
+ A Source Network Address Translation (SNAT) service with Port Address Translation (PAT) is used to allow many VMs to share one SNAT service. This means you can provide Internet connectivity for many VMs. >[!IMPORTANT] > To enable SNAT for your specified address ranges, you must [configure a gateway firewall rule](#gateway-firewall-used-to-filter-traffic-to-vms-at-t1-gateways) and SNAT for the specific address ranges you desire. If you don't want SNAT enabled for specific address ranges, you must create a [No-NAT rule](#no-network-address-translation-rule-for-specific-address-ranges) for the address ranges to exclude. For your SNAT service to work as expected, the No-NAT rule should be a lower priority than the SNAT rule. **Add rule**
-1. From your Azure VMware Solution private cloud, select **vCenter Server Credentials**
-2. Locate your NSX-T Manager URL and credentials.
-3. Log in to **VMware NSX-T Manager**.
-4. Navigate to **NAT Rules**.
-5. Select the T1 Router.
-1. Select **ADD NAT RULE**.
+
+1. From your Azure VMware Solution private cloud, select **vCenter Server Credentials**
+2. Locate your NSX-T Manager URL and credentials.
+3. Log in to **VMware NSX-T Manager**.
+4. Navigate to **NAT Rules**.
+5. Select the T1 Router.
+1. Select **ADD NAT RULE**.
**Configure rule** 1. Enter a name.
-1. Select **SNAT**.
+1. Select **SNAT**.
1. Optionally, enter a source such as a subnet to SNAT or destination. 1. Enter the translated IP. This IP is from the range of Public IPs you reserved from the Azure VMware Solution Portal. 1. Optionally, give the rule a higher priority number. This prioritization will move the rule further down the rule list to ensure more specific rules are matched first.
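
If you prefer to script the rule rather than use the UI, the same SNAT rule can be expressed against the NSX-T Policy REST API. The curl sketch below is an assumption about that API's path and payload; the NSX-T Manager address, credentials, tier-1 gateway ID, rule name, and addresses are placeholders, so verify them against the NSX-T Data Center REST API reference for your version before use.

```bash
# Assumed NSX-T Policy API call; all identifiers and addresses are placeholders.
NSX_MANAGER="https://<nsx-manager-fqdn>"

curl -k -u 'admin:<password>' \
  -X PATCH "$NSX_MANAGER/policy/api/v1/infra/tier-1s/<t1-gateway-id>/nat/USER/nat-rules/example-snat-rule" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "SNAT",
    "source_network": "192.168.1.0/24",
    "translated_network": "<reserved-public-ip>",
    "sequence_number": 100,
    "logging": false
  }'
```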
Logging can be enabled by way of the logging slider. For more information on NSX
### No Network Address Translation rule for specific address ranges A No SNAT rule in NSX-T Manager can be used to exclude certain matches from performing Network Address Translation. This policy can be used to allow private IP traffic to bypass existing network translation rules.+ 1. From your Azure VMware Solution private cloud, select **vCenter Server Credentials**. 1. Locate your NSX-T Manager URL and credentials.
-1. Log in to **VMware NSX-T Manager** and then select **NAT Rules**.
+1. Log in to **VMware NSX-T Manager** and then select **NAT Rules**.
1. Select the T1 Router and then select **ADD NAT RULE**. 1. Select **NO SNAT** rule as the type of NAT rule. 1. Select the **Source IP** as the range of addresses you do not want to be translated. The **Destination IP** should be any internal addresses you are reaching from the range of Source IP ranges. 1. Select **SAVE**. ### Inbound Internet Access for VMs+ A Destination Network Translation Service (DNAT) is used to expose a VM on a specific Public IP address and/or a specific port. This service provides inbound internet access to your workload VMs. **Log in to VMware NSX-T Manager**
-1. From your Azure VMware Solution private cloud, select **VMware credentials**.
-2. Locate your NSX-T Manager URL and credentials.
-3. Log in to **VMware NSX-T Manager**.
+
+1. From your Azure VMware Solution private cloud, select **VMware credentials**.
+2. Locate your NSX-T Manager URL and credentials.
+3. Log in to **VMware NSX-T Manager**.
**Configure the DNAT rule**+ 1. Name the rule. 1. Select **DNAT** as the action. 1. Enter the reserved Public IP in the destination match. This IP is from the range of Public IPs reserved from the Azure VMware Solution Portal. 1. Enter the VM Private IP in the translated IP.
-1. Select **SAVE**.
+1. Select **SAVE**.
1. Optionally, configure the Translated Port or source IP for more specific matches.
-
+ The VM is now exposed to the internet on the specific Public IP and/or specific ports. ### Gateway Firewall used to filter traffic to VMs at T1 Gateways
-
-You can provide security protection for your network traffic in and out of the public internet through your Gateway Firewall.
+
+You can provide security protection for your network traffic in and out of the public internet through your Gateway Firewall.
+ 1. From your Azure VMware Solution Private Cloud, select **VMware credentials**. 2. Locate your NSX-T Manager URL and credentials.
-3. Log in to **VMware NSX-T Manager**.
-4. From the NSX-T home screen, select **Gateway Policies**.
-5. Select **Gateway Specific Rules**, choose the T1 Gateway and select **ADD POLICY**.
-6. Select **New Policy** and enter a policy name.
-7. Select the Policy and select **ADD RULE**.
+3. Log in to **VMware NSX-T Manager**.
+4. From the NSX-T home screen, select **Gateway Policies**.
+5. Select **Gateway Specific Rules**, choose the T1 Gateway and select **ADD POLICY**.
+6. Select **New Policy** and enter a policy name.
+7. Select the Policy and select **ADD RULE**.
8. Configure the rule.
-
+ 1. Select **New Rule**. 1. Enter a descriptive name. 1. Configure the source, destination, services, and action.
-
+ 1. Select **Match External Address** to apply firewall rules to the external address of a NAT rule. For example, the following rule is set to Match External Address, and this setting will allow SSH traffic inbound to the Public IP. :::image type="content" source="media/public-ip-nsx-edge/gateway-specific-rules-match-external-connectivity.png" alt-text="Screenshot Internet connectivity inbound Public IP." lightbox="media/public-ip-nsx-edge/gateway-specific-rules-match-external-connectivity-expanded.png":::
-
-If **Match Internal Address** was specified, the destination would be the internal or private IP address of the VM.
-For more information on the NSX-T Data Center Gateway Firewall see the [NSX-T Data Center Gateway Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html)
+
+If **Match Internal Address** was specified, the destination would be the internal or private IP address of the VM.
+
+For more information on the NSX-T Data Center Gateway Firewall see the [NSX-T Data Center Gateway Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html).
The Distributed Firewall could be used to filter traffic to VMs. This feature is outside the scope of this document. For more information, see [NSX-T Data Center Distributed Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html).
-## Next steps
+## Next steps
+ [Internet connectivity design considerations (Preview)](concepts-design-public-internet-access.md) [Enable Managed SNAT for Azure VMware Solution Workloads (Preview)](enable-managed-snat-for-workloads.md)
azure-vmware Fix Deployment Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/fix-deployment-failures.md
Title: Support for Azure VMware Solution deployment or provisioning failure
description: Get information from your Azure VMware Solution private cloud to file a service request for an Azure VMware Solution deployment or provisioning failure. Previously updated : 10/20/2022 Last updated : 10/24/2022+ # Open a support request for an Azure VMware Solution deployment or provisioning failure
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 10/21/2022 Last updated : 10/25/2022 # Azure Bastion FAQ
Azure Bastion is deployed within VNets or peered VNets, and is associated to an
Currently, by default, new Bastion deployments don't support zone redundancies. Previously deployed bastions may or may not be zone-redundant. The exceptions are Bastion deployments in Korea Central and Southeast Asia, which do support zone redundancies.
+### <a name="azure-ad-guests"></a>Does Bastion support Azure AD guest accounts?
+
+Yes, [Azure AD guest accounts](../active-directory/external-identities/what-is-b2b.md) can be granted access to Bastion and can connect to virtual machines.
+ ## <a name="vm"></a>VM features and connection FAQs ### <a name="roles"></a>Are any roles required to access a virtual machine?
cdn Create Profile Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-bicep.md
Title: 'Quickstart: Create a profile and endpoint - Bicep'
description: In this quickstart, learn how to create an Azure Content Delivery Network profile and endpoint by using a Bicep file -+ na Last updated 03/14/2022-+ # Quickstart: Create an Azure CDN profile and endpoint - Bicep
cognitive-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-faces.md
Image Analysis can detect human faces within an image and generate rectangle coo
> [!NOTE] > This feature is also offered by the dedicated [Face](./overview-identity.md) service. Use this alternative for more detailed face analysis, including face identification and head pose detection. + Try out the face detection features quickly and easily in your browser using Vision Studio. > [!div class="nextstepaction"]
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
You can get pronunciation assessment scores for:
- Full text - Words - Syllable groups-- Phonemes in SAPI or IPA format
+- Phonemes in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format
> [!NOTE]
-> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=stt-tts) and [available regions](regions.md#speech-service).
+> The syllable group, phoneme name, and spoken phoneme of pronunciation assessment are currently only available for the en-US locale.
+>
+> Usage of pronunciation assessment is charged the same as standard Speech to Text [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services).
>
-> The syllable groups, IPA phonemes, and spoken phoneme features of pronunciation assessment are currently only available for the en-US locale.
+> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
+ ## Configuration parameters
To request syllable-level results along with phonemes, set the granularity [conf
## Phoneme alphabet format
-For some locales, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. The phoneme name in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) format is available for the `en-GB` and `en-US` locales. The phoneme name in [IPA](https://en.wikipedia.org/wiki/IPA) format is only available for the `en-US` locale. For other locales, you can only get the phoneme score.
+For `en-US` locale, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score.
The following table compares example SAPI phonemes with the corresponding IPA phonemes.
cognitive-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md
Pronunciation assessment provides various assessment results in different granul
This article describes how to use the pronunciation assessment tool through the [Speech Studio](https://speech.microsoft.com). You can get immediate feedback on the accuracy and fluency of your speech without writing any code. For information about how to integrate pronunciation assessment in your speech applications, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md).
+> [!NOTE]
+> Usage of pronunciation assessment is charged the same as standard Speech to Text [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services).
+>
+> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
+ ## Try out pronunciation assessment You can explore and try out pronunciation assessment even without signing in.
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
| Container | Features | Latest | Release status | |--|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.6.0 | Generally available |
-| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.6.0 | Generally available |
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.7.0 | Generally available |
+| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.7.0 | Generally available |
| Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
-| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.5.0 | Generally available |
+| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.6.0 | Generally available |
## Prerequisites
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The following content types are supported for the `interpret-as` and `format` at
| `characters`, `spell-out` | | The text is spoken as individual letters (spelled out). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="characters">test</say-as>`<br /><br />As "T E S T." | | `cardinal`, `number` | None| The text is spoken as a cardinal number. The speech synthesis engine pronounces:<br /><br />`There are <say-as interpret-as="cardinal">10</say-as> options`<br /><br />As "There are ten options."| | `ordinal` | None | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option."|
-| `digits`, `number_digit` | None | The text is spoken as a sequence of individual digits. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="number_digit">123456789</say-as>`<br /><br />As "1 2 3 4 5 6 7 8 9." |
+| `number_digit` | None | The text is spoken as a sequence of individual digits. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="number_digit">123456789</say-as>`<br /><br />As "1 2 3 4 5 6 7 8 9." |
| `fraction` | None | The text is spoken as a fractional number. The speech synthesis engine pronounces:<br /><br /> `<say-as interpret-as="fraction">3/8</say-as> of an inch`<br /><br />As "three eighths of an inch." | | `date` | dmy, mdy, ymd, ydm, ym, my, md, dm, d, m, y | The text is spoken as a date. The `format` attribute specifies the date's format (*d=day, m=month, and y=year*). The speech synthesis engine pronounces:<br /><br />`Today is <say-as interpret-as="date" format="mdy">10-19-2016</say-as>`<br /><br />As "Today is October nineteenth two thousand sixteen." | | `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified by using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. Here are some valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." |
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
The [Custom Speech-to-text][sp-cstt] container image can be found on the `mcr.mi
# [Latest version](#tab/current)
+Release note for `3.7.0-amd64`:
+
+**Features**
+* Security upgrade.
+
+| Image Tags | Notes | Digest |
+|-|:|:-|
+| `latest` | | `sha256:551113f7df4840bde91bbe3d9902af5a09153462ca450490347547d95ab1c08e`|
+| `3.7.0-amd64` | | `sha256:551113f7df4840bde91bbe3d9902af5a09153462ca450490347547d95ab1c08e`|
+
+# [Previous version](#tab/previous)
Release note for `3.6.0-amd64`: **Features**
Release note for `3.6.0-amd64`:
| `latest` | | `sha256:9a1ef0bcb5616ff9d1c70551d4634acae50ff4f7ed04b0ad514a75f2e6fa1241`| | `3.6.0-amd64` | | `sha256:9a1ef0bcb5616ff9d1c70551d4634acae50ff4f7ed04b0ad514a75f2e6fa1241`|
-# [Previous version](#tab/previous)
Release note for `3.5.0-amd64`: **Features**
Since Speech-to-text v2.5.0, images are supported in the *US Government Virginia
# [Latest version](#tab/current) +
+Release note for `3.7.0-amd64-<locale>`:
+
+**Features**
+* Security upgrade.
+* Support for latest model versions.
++
+| Image Tags | Notes |
+|-|:--|
+| `latest` | Container image with the `en-US` locale. |
+| `3.7.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `3.7.0-amd64-en-us`. |
+
+This container has the following locales available.
+
+| Locale for v3.7.0 | Notes | Digest |
+|--|:--|:--|
+| `ar-ae`| Container image with the `ar-AE` locale. | `sha256:06b50c123adb079c470ad37912bf6f0e37578e39f0432bf79d5f1c334f4013b2` |
+| `ar-bh`| Container image with the `ar-BH` locale. | `sha256:37a9ba7b309c5d43fc23d47dd7aaaf9f0775851295d674c0ca546aa9484f3d38` |
+| `ar-eg`| Container image with the `ar-EG` locale. | `sha256:80b7ad3d3d37d99782c8473cb5a36724bec381e321ae13fb2069a88472cea1af` |
+| `ar-iq`| Container image with the `ar-IQ` locale. | `sha256:dce00ea6b3c2ba9f12d8f8b7cee7762d8762c396f41667ed438a4f4420e8109b` |
+| `ar-jo`| Container image with the `ar-JO` locale. | `sha256:cf84e78c25edbd01e42645db3f7d08fcb0b702cbf9648cd0f504f6d5873c916f` |
+| `ar-kw`| Container image with the `ar-KW` locale. | `sha256:7c2548a1073e6bbee58193ba20353d9fb62cff17f57c1618b99d1dd9ca3a457b` |
+| `ar-lb`| Container image with the `ar-LB` locale. | `sha256:dae94f065cf026098181068ee5a21cb0e7d070f62fa36abdaa31c53de6ee5561` |
+| `ar-om`| Container image with the `ar-OM` locale. | `sha256:ae47c7c004161cd424cb8387eb82c7b75a76e5c97516e936a143b800427272e7` |
+| `ar-qa`| Container image with the `ar-QA` locale. | `sha256:07da7898b38f98a33d4dbf61bfab145de3549023988b1f0501227dee6b446799` |
+| `ar-sa`| Container image with the `ar-SA` locale. | `sha256:e58608eae7548c617677973031774bd61a12385162cc281d4d9ec14b10f50123` |
+| `ar-sy`| Container image with the `ar-SY` locale. | `sha256:a8fa1046c2ac8d87a58b6ea80811d8bec70bcde0ccf57f04fb73e18e485dfdad` |
+| `az-az`| Container image with the `az-AZ` locale. | `sha256:0a15dfda36aac86dfe12f5682d952ad38a4e02f5f322314787585cd16f985ba3` |
+| `bg-bg`| Container image with the `bg-BG` locale. | `sha256:638ca1a3c8e0a7e7e56750c013a1006a5c55d246eb6475cc977b017162cddad8` |
+| `bn-in`| Container image with the `bn-IN` locale. | `sha256:b35b74967c70d9480ec0aae95567db5f4eb25a9e78189690cc6fcb5580f3dae6` |
+| `bs-ba`| Container image with the `bs-BA` locale. | `sha256:6c9a35c675274dc358660a70bf01a71ff3d0c28eff7b4b40045acb606c52c311` |
+| `ca-es`| Container image with the `ca-ES` locale. | `sha256:8163db1b99645795a7e427eb31fead772e40423cddb4e99f3dc573c4b6c4e2ec` |
+| `cs-cz`| Container image with the `cs-CZ` locale. | `sha256:b3eeec75abe1d50f4e325dcb3d8ff0c94516eaeacc24493685ac21c5bb7b723f` |
+| `cy-gb`| Container image with the `cy-GB` locale. | `sha256:686c50efe91f4addab5fdb9b25d5b1eac45303d9ac4517150ae7d3b14ba76680` |
+| `da-dk`| Container image with the `da-DK` locale. | `sha256:0375770ab3eae63184fba644ffc820f42bebc28fdabff91e8529a2aeca0a5ab3` |
+| `de-at`| Container image with the `de-AT` locale. | `sha256:00020ceb473244a65aa3c74ac6afe5798c9488e1e6de75991a1ff45b64f639b2` |
+| `de-ch`| Container image with the `de-CH` locale. | `sha256:48e11b783237f1ac3b83f5d9da400c9b785404a0bbd880512b5ca1044c9e5da0` |
+| `de-de`| Container image with the `de-DE` locale. | `sha256:12c854473d84fdafac5095ac4eb1e1dce6bbf4557ecfe74c24270ff5afbb61d4` |
+| `el-gr`| Container image with the `el-GR` locale. | `sha256:d988b1046e8da4b41b1762ffc4a5e1e4463b6563d6466bd268ad5bc70d685ff7` |
+| `en-au`| Container image with the `en-AU` locale. | `sha256:510c056ee039636eacf49558357546553f84d1733ee79be1b7cbb8a00a255b4f` |
+| `en-ca`| Container image with the `en-CA` locale. | `sha256:8ee1190b738ed21bb98398bf485bb6150f8d88283458f4b3be07154ea78802fd` |
+| `en-gb`| Container image with the `en-GB` locale. | `sha256:7983af189b737c91a940e8be9f8859a7b9c069240d68348e1a5468ba1afea8bc` |
+| `en-gh`| Container image with the `en-GH` locale. | `sha256:e1a9bd5b21cbd8c017deb2339175d36a88a29ffaf15413bf530080cb44da8653` |
+| `en-hk`| Container image with the `en-HK` locale. | `sha256:3f6227c250f0f925dab6a6f2c92fb0e9b024cb9fc3ab38fd4556d2302a68b72b` |
+| `en-ie`| Container image with the `en-IE` locale. | `sha256:eae5b1864dd845aafddd8632b0bf86f70d322cbd9f91f4aa38681b9cff78f4b9` |
+| `en-in`| Container image with the `en-IN` locale. | `sha256:1e3c288591fc0df20ae381f520f78ac33aee1f7251037044312c947fd0b8589d` |
+| `en-ke`| Container image with the `en-KE` locale. | `sha256:975515ac47703c0b0a454fe23a5c6174bf5d69be723304449b6ff07d49fb9b3f` |
+| `en-nz`| Container image with the `en-NZ` locale. | `sha256:33f7a214071ec7719f4c697e18225262694198387c7a00ae624b3dc8d6236b7c` |
+| `en-ph`| Container image with the `en-PH` locale. | `sha256:94cb8c27aa2ce700914e4766de48f01f8b6420631c78538c5fcab2e325bc2cc9` |
+| `en-sg`| Container image with the `en-SG` locale. | `sha256:60933b58d251356dc35de461887f0bcfe0f5c47c559a570b98c9f69bbe4ef1f6` |
+| `en-tz`| Container image with the `en-TZ` locale. | `sha256:66d4f800c0a02d2f9dbf4ed4328d5032ced8b1f4d830c0f68ec669c333160962` |
+| `en-us`| Container image with the `en-US` locale. | `sha256:7f1a36eb10de11651d3077387a6e2ee89adb80d2efb1aa7648b9572cde505c64` |
+| `en-za`| Container image with the `en-ZA` locale. | `sha256:536d8edc033d00fda69791681ff15e91ff297edbe6fb0a771685139964857a8c` |
+| `es-ar`| Container image with the `es-AR` locale. | `sha256:a9b2765ed5eb3f18b265b0d088f347ddfb54dc1b21cb6a0a94ffbf1e0ec69f51` |
+| `es-bo`| Container image with the `es-BO` locale. | `sha256:2c1b8297e362ab6df9d2bab4268149656c28ef0a0704d51a608a54e4bf61d643` |
+| `es-cl`| Container image with the `es-CL` locale. | `sha256:d981571c6455c587188782488234852b50cae3b7319147f81f3a46c48c0cc628` |
+| `es-co`| Container image with the `es-CO` locale. | `sha256:07e0fab0ab15f411de6b56a6c5a4bb5dbf6882b7abd49c9dcb54de3ce2b0a20b` |
+| `es-cr`| Container image with the `es-CR` locale. | `sha256:db7c55da662e5e8a52b726819ec8bfe4f9b7a21903f9c8d949a6ddd65f7ca56e` |
+| `es-cu`| Container image with the `es-CU` locale. | `sha256:adbb56aeee08651b2dd030d2010c5c0e81c99fd59ae361302b444116c1086cfa` |
+| `es-do`| Container image with the `es-DO` locale. | `sha256:5dedfebb025725ce9391a324659cc8567f90c02c1b6e4554bada022922457463` |
+| `es-ec`| Container image with the `es-EC` locale. | `sha256:72a267f458b66cc3935d96ebffdacb12fbb8f6f91f5347464f2e9af34273260c` |
+| `es-es`| Container image with the `es-ES` locale. | `sha256:42540849cb394203a81a14ce02ac4a890916b025636666a77ba32cf347200915` |
+| `es-gt`| Container image with the `es-GT` locale. | `sha256:095cb31ec78862831db242fa69dc2f3db564ac7d5caa86274357cee137c62d82` |
+| `es-hn`| Container image with the `es-HN` locale. | `sha256:864179bf6de0d66db3bef50e6aa551b6c124ab37ce10a255c5aa33d03f7dae03` |
+| `es-mx`| Container image with the `es-MX` locale. | `sha256:af4b1d78f141a15ff27adc3c1d20a5bf82c00a4fa3f3cfa52ba17ac50260cb76` |
+| `es-ni`| Container image with the `es-NI` locale. | `sha256:5280d1e305ad3a0d403633cda6a65dc1c402e561ef0fca0ccefd73b99331838e` |
+| `es-pa`| Container image with the `es-PA` locale. | `sha256:5e9326793966a0e0d963bece1bd10b0d8bf9917990773eb65e28a26ef782f91f` |
+| `es-pe`| Container image with the `es-PE` locale. | `sha256:3d014bd060833953b5895f3431f00da332bdb6d8d4e224699045a11ec11a8975` |
+| `es-pr`| Container image with the `es-PR` locale. | `sha256:b8377678b543eea2c581f6fd7ef7d6ca59765c5094aa3303935c350cc3b30030` |
+| `es-py`| Container image with the `es-PY` locale. | `sha256:d5e7400cead88820b888e529d2c5c1ce6333147e210e1e8c4dea21cab8866e4e` |
+| `es-sv`| Container image with the `es-SV` locale. | `sha256:641ce9b02848542c786e1ca8c63134bb3fb97d260b4091b31c56831b1f0da684` |
+| `es-us`| Container image with the `es-US` locale. | `sha256:9b22ebc2b757dbf57205cace1a995b0e5f17c4402b089b41367bae459e45bcf0` |
+| `es-uy`| Container image with the `es-UY` locale. | `sha256:95baf4be71d34a9864d21f0499f7fd36b76e83fa8b8ae2486e56d38f7eb270ad` |
+| `es-ve`| Container image with the `es-VE` locale. | `sha256:552ec7b4df6cf143795ee78e6c3cefbc252baa7389486657390164686f369b87` |
+| `et-ee`| Container image with the `et-EE` locale. | `sha256:d96975024d81899a7c93a2634b8399ab142941a63db743f99471f3519c4aa760` |
+| `eu-es`| Container image with the `eu-ES` locale. | `sha256:789aec61fa61500adb7d60e5441755781b9241e2526d1100afcc1db4b9ea28f7` |
+| `fa-ir`| Container image with the `fa-IR` locale. | `sha256:a7d7d368f6493fcc6efdab07dc51a536da3ae3db92eb374725dba758f464ca99` |
+| `fi-fi`| Container image with the `fi-FI` locale. | `sha256:26370a84499831a50615337fb3de77530b1bcc3245313fd59078549f21b12a1c` |
+| `fil-ph`| Container image with the `fil-PH` locale. | `sha256:de11d47b9b1099f1b418b347c198b98e8cefbce74991773800e954e286f7f766` |
+| `fr-ca`| Container image with the `fr-CA` locale. | `sha256:dc747e1dafda312fcc79f9d0ec1b9b137de149fdde55cdbe9e9da04647e2216f` |
+| `fr-ch`| Container image with the `fr-CH` locale. | `sha256:962366fa475d196f09a932ae002d5479482996d827b2822d48e4632c0a118f53` |
+| `fr-fr`| Container image with the `fr-FR` locale. | `sha256:32dcc215732ed60c149f3a7f27e400fe17c2c885e5788eaa00db694e3b61c6fa` |
+| `ga-ie`| Container image with the `ga-IE` locale. | `sha256:5cfd9b63ed99df7eff27c94466c8f795ce56bddbd519bddce4dd960a4d85f1a0` |
+| `gl-es`| Container image with the `gl-ES` locale. | `sha256:13023950f7630296d2699a2211e7ae45a38188d82fb212a7a6780354087f815e` |
+| `gu-in`| Container image with the `gu-IN` locale. | `sha256:5511eb7c2e0a33ed7b16213297b3a530b1fdb858ea526b5bdaaea709966dac0b` |
+| `he-il`| Container image with the `he-IL` locale. | `sha256:99aa9f70c301f61a6f39793f874d70a45791ec6fd705b84639dc920b3c8b10a5` |
+| `hi-in`| Container image with the `hi-IN` locale. | `sha256:86147556e59e221a8c2c5ceb56fb5a40cead3c6e677aab8ddbbaffa39addd28a` |
+| `hr-hr`| Container image with the `hr-HR` locale. | `sha256:3123fa32f7322e3ab3bedf8c006b34a3d254b9d737f3e747312c45ca9d6c6271` |
+| `hu-hu`| Container image with the `hu-HU` locale. | `sha256:cede22619c83c84cb8356807a589c7992fdc5879f8987dc7fc1ff81abd123961` |
+| `hy-am`| Container image with the `hy-AM` locale. | `sha256:63c8b2e155d453666a5e842a270bc988a41fc7af09bb95e6508698713412a272` |
+| `id-id`| Container image with the `id-ID` locale. | `sha256:6e28166255a2ae55eb7d41aac3fb133403f01dd27fef583a12ac30d4a584ce50` |
+| `it-ch`| Container image with the `it-CH` locale. | `sha256:2065ef047c7704abda14134abd8508a7de7c3b2e30fdb051ee5525b8a8daee32` |
+| `it-it`| Container image with the `it-IT` locale. | `sha256:9ef3c51329c2c44585f8cf41847fd83dcaadeb783d51df55e15b57ff7cabfac7` |
+| `ja-jp`| Container image with the `ja-JP` locale. | `sha256:0d75f2e00c7a93375a56c315961c61cb2a93a7eb83deab210dfcd4c56fc4c519` |
+| `ka-ge`| Container image with the `ka-GE` locale. | `sha256:e05f315a34dec1efe527790c84082358cf9155def79be058f772b8cb05111d0a` |
+| `kk-kz`| Container image with the `kk-KZ` locale. | `sha256:e8d700480fe77edf46f2c8a02838b5bee1b6b76ae22cded45c3297febbd97725` |
+| `ko-kr`| Container image with the `ko-KR` locale. | `sha256:7fc44d9110f3e127d49b74146a9c8cde20f922a6aa8dc58643295d6e1c139fb6` |
+| `lt-lt`| Container image with the `lt-LT` locale. | `sha256:58d583963cc54edf4be231a1681774bc9213befa6a72aab20f556d5040e92f64` |
+| `lv-lv`| Container image with the `lv-LV` locale. | `sha256:37d8f5ce4734c8e3a7096a4e1148258c0b987261dc4911df37dae2586409d1f4` |
+| `mk-mk`| Container image with the `mk-MK` locale. | `sha256:388311b1e87277cc7c2d346eed8e2d8f456900aac8bfd735d26fefddcd1c7ce2` |
+| `mn-mn`| Container image with the `mn-MN` locale. | `sha256:41bdb6afea3c27f4ef67ca6eb9302b0207a8f631481bc16815993e131caf130a` |
+| `mr-in`| Container image with the `mr-IN` locale. | `sha256:2c9bb428c66e0238c65e9f6fedf09524398c2e9348971951b16502576005e244` |
+| `ms-my`| Container image with the `ms-MY` locale. | `sha256:8f61a55cdf6340b1327c2533ff0d6371b70d817308efd4546fce9bffef80ef5e` |
+| `mt-mt`| Container image with the `mt-MT` locale. | `sha256:7573d303239a4e99b646c8b2aa95d4903a37e31fcdc597c29952f0a555c1829e` |
+| `nb-no`| Container image with the `nb-NO` locale. | `sha256:408641de59d99085a8755c3e2f42b430ccf6af4f6b4fb12d7b2a4136d60383ff` |
+| `ne-np`| Container image with the `ne-NP` locale. | `sha256:ee5f5fae979352b572f093bf38e59c6edb636cffbbb49494da8c194b324e1956` |
+| `nl-nl`| Container image with the `nl-NL` locale. | `sha256:22e6b1734be2048c7dbe09f75bf49caae2c864b31687bd03025cbcf6b98ec7c6` |
+| `pl-pl`| Container image with the `pl-PL` locale. | `sha256:86a8c47e03eb75ea55b3b3dd43bb408f9e7cd7e58cc8b22004acb8d9e54d8e16` |
+| `ps-af`| Container image with the `ps-AF` locale. | `sha256:87b30023526044fa4917696128c84f08fa5355a0471c4768b66c6a340565ba98` |
+| `pt-br`| Container image with the `pt-BR` locale. | `sha256:79cd07b0be7249935a581758e3e3c0ce6af08d447c8063b8c580d09385bf0067` |
+| `pt-pt`| Container image with the `pt-PT` locale. | `sha256:c5c248c679122726427d6ca69fed8234331bf20be97f482707ba8ae7c6cdb67e` |
+| `ro-ro`| Container image with the `ro-RO` locale. | `sha256:6e5e203cbba8c60319a2f6dec7bd0b49d2915d6dc726882f633165dc4e239e64` |
+| `ru-ru`| Container image with the `ru-RU` locale. | `sha256:fd896f373deb0e70c394af235c00401b98333ffb70d5ca4ace0869192bd091ca` |
+| `sk-sk`| Container image with the `sk-SK` locale. | `sha256:962aca8128e74a30525d9c0a53a2a410e18f2fb679ecdf1a281d23e337e040a1` |
+| `sl-si`| Container image with the `sl-SI` locale. | `sha256:e4a8b4bbe5d70bf378bead62ccd1d54e994a970729061add43f0ba5c5d9d70b5` |
+| `so-so`| Container image with the `so-SO` locale. | `sha256:999b5f6b1708e0f012481db3e62eae14ab3b99fd12bba9c84a03b7bc79534b0c` |
+| `sq-al`| Container image with the `sq-AL` locale. | `sha256:5d230ed821290893fe90f833d8c6a7468bfd456709cb85abb9b953455fedb132` |
+| `sv-se`| Container image with the `sv-SE` locale. | `sha256:9c900b80eca404751c894ab47a8367d67f26c8a2710815d926c9f542a507990b` |
+| `ta-in`| Container image with the `ta-IN` locale. | `sha256:568c89e3d7ededa5c38682d724a884e59b16221e2945640ce0790d2eafdd9b28` |
+| `te-in`| Container image with the `te-IN` locale. | `sha256:cc66d2d64e62c64f0dd582becc8fdfc54b1cd590be68409bde034d32e5c8c165` |
+| `th-th`| Container image with the `th-TH` locale. | `sha256:ad4fe647e5d37860b1351f8f9c1536269dbef6200af78a35a534994697bf9887` |
+| `tr-tr`| Container image with the `tr-TR` locale. | `sha256:5b8c6de12d72c367c74d695fa90b16cbc96b6fcc2fa891e62475c346520fd10a` |
+| `uk-ua`| Container image with the `uk-UA` locale. | `sha256:552ce967cc23a8629acc6e297ae01d765658724fb711b105afee768b92dc4e7e` |
+| `vi-vn`| Container image with the `vi-VN` locale. | `sha256:4668f0b5f895dd85d2b1ffcf0ce9b9ff23339a82d290b973bfd91113bc0eb68a` |
+| `wuu-cn`| Container image with the `wuu-CN` locale. | `sha256:2c2321e7610bfcd812df267c044281801f5f470f023cfe37273ce6c4ee1748a9` |
+| `yue-cn`| Container image with the `yue-CN` locale. | `sha256:f2eeefd4926e4714e5996a6e13f00e58a985835923113679f40c0f8dcd86000b` |
+| `zh-cn`| Container image with the `zh-CN` locale. | `sha256:5691c41fedfb89d7738afabd5624aad43cf8c427c4de1608e7381674fdcb88a2` |
+| `zh-cn-sichuan`| Container image with the `zh-CN-sichuan` locale. | `sha256:192a125987d397018734c57f952e68df04d7fd550cfb6ae9434f200b7bd44d13` |
+| `zh-hk`| Container image with the `zh-HK` locale. | `sha256:f3f8b50f982c19f31eea553ed92ebfb6c7e333a4d2fa55c81a1c8b680afd6101` |
+| `zh-tw`| Container image with the `zh-TW` locale. | `sha256:20245c6b1b4da4a393e6d0aaa3c1a013f03de69eec351d9b7e5fe9d542c1f098` |
+
+# [Previous version](#tab/previous)
+ Release note for `3.6.0-amd64-<locale>`: **Features**
This container has the following locales available.
| `zh-hk`| Container image with the `zh-HK` locale. | `sha256:5d21febbb1e8710b01ad1a5727c33080e6853d3a4bfbf5365b059630b76a9901` |
| `zh-tw`| Container image with the `zh-TW` locale. | `sha256:15dbadcd92e335705e07a8ecefbe621e3c97b723bdf1c5b0c322a5b9965ea47d` |
-# [Previous version](#tab/previous)
- Release note for `3.5.0-amd64-<locale>`: **Features**
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)
+Release notes for `v2.6.0`:
+
+**Features**
+* Security upgrade.
+
+| Image Tags | Notes |
+|-|:--|
+| `latest` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
+| `2.6.0-amd64-<locale-and-voice>` | Replace `<locale-and-voice>` with one of the available locale and voice combinations, listed below. For example `2.6.0-amd64-en-us-arianeural`. |
++
+| v2.6.0 Locales and voices | Notes |
+|-|:--|
+| `am-et-amehaneural`| Container image with the `am-ET` locale and `am-ET-amehaneural` voice.|
+| `am-et-mekdesneural`| Container image with the `am-ET` locale and `am-ET-mekdesneural` voice.|
+| `ar-bh-lailaneural`| Container image with the `ar-BH` locale and `ar-BH-lailaneural` voice.|
+| `ar-eg-salmaneural`| Container image with the `ar-EG` locale and `ar-EG-salmaneural` voice.|
+| `ar-eg-shakirneural`| Container image with the `ar-EG` locale and `ar-EG-shakirneural` voice.|
+| `ar-sa-hamedneural`| Container image with the `ar-SA` locale and `ar-SA-hamedneural` voice.|
+| `ar-sa-zariyahneural`| Container image with the `ar-SA` locale and `ar-SA-zariyahneural` voice.|
+| `az-az-babekneural`| Container image with the `az-AZ` locale and `az-AZ-babekneural` voice.|
+| `az-az-banuneural`| Container image with the `az-AZ` locale and `az-AZ-banuneural` voice.|
+| `cs-cz-antoninneural`| Container image with the `cs-CZ` locale and `cs-CZ-antoninneural` voice.|
+| `cs-cz-vlastaneural`| Container image with the `cs-CZ` locale and `cs-CZ-vlastaneural` voice.|
+| `de-ch-janneural`| Container image with the `de-CH` locale and `de-CH-janneural` voice.|
+| `de-ch-lenineural`| Container image with the `de-CH` locale and `de-CH-lenineural` voice.|
+| `de-de-conradneural`| Container image with the `de-DE` locale and `de-DE-conradneural` voice.|
+| `de-de-katjaneural`| Container image with the `de-DE` locale and `de-DE-katjaneural` voice.|
+| `en-au-natashaneural`| Container image with the `en-AU` locale and `en-AU-natashaneural` voice.|
+| `en-au-williamneural`| Container image with the `en-AU` locale and `en-AU-williamneural` voice.|
+| `en-ca-claraneural`| Container image with the `en-CA` locale and `en-CA-claraneural` voice.|
+| `en-ca-liamneural`| Container image with the `en-CA` locale and `en-CA-liamneural` voice.|
+| `en-gb-libbyneural`| Container image with the `en-GB` locale and `en-GB-libbyneural` voice.|
+| `en-gb-ryanneural`| Container image with the `en-GB` locale and `en-GB-ryanneural` voice.|
+| `en-gb-sonianeural`| Container image with the `en-GB` locale and `en-GB-sonianeural` voice.|
+| `en-us-arianeural`| Container image with the `en-US` locale and `en-US-arianeural` voice.|
+| `en-us-guyneural`| Container image with the `en-US` locale and `en-US-guyneural` voice.|
+| `en-us-jennyneural`| Container image with the `en-US` locale and `en-US-jennyneural` voice.|
+| `es-es-alvaroneural`| Container image with the `es-ES` locale and `es-ES-alvaroneural` voice.|
+| `es-es-elviraneural`| Container image with the `es-ES` locale and `es-ES-elviraneural` voice.|
+| `es-mx-dalianeural`| Container image with the `es-MX` locale and `es-MX-dalianeural` voice.|
+| `es-mx-jorgeneural`| Container image with the `es-MX` locale and `es-MX-jorgeneural` voice.|
+| `fa-ir-dilaraneural`| Container image with the `fa-IR` locale and `fa-IR-dilaraneural` voice.|
+| `fa-ir-faridneural`| Container image with the `fa-IR` locale and `fa-IR-faridneural` voice.|
+| `fil-ph-angeloneural`| Container image with the `fil-PH` locale and `fil-PH-angeloneural` voice.|
+| `fil-ph-blessicaneural`| Container image with the `fil-PH` locale and `fil-PH-blessicaneural` voice.|
+| `fr-ca-antoineneural`| Container image with the `fr-CA` locale and `fr-CA-antoineneural` voice.|
+| `fr-ca-jeanneural`| Container image with the `fr-CA` locale and `fr-CA-jeanneural` voice.|
+| `fr-ca-sylvieneural`| Container image with the `fr-CA` locale and `fr-CA-sylvieneural` voice.|
+| `fr-fr-deniseneural`| Container image with the `fr-FR` locale and `fr-FR-deniseneural` voice.|
+| `fr-fr-henrineural`| Container image with the `fr-FR` locale and `fr-FR-henrineural` voice.|
+| `he-il-avrineural`| Container image with the `he-IL` locale and `he-IL-avrineural` voice.|
+| `he-il-hilaneural`| Container image with the `he-IL` locale and `he-IL-hilaneural` voice.|
+| `hi-in-madhurneural`| Container image with the `hi-IN` locale and `hi-IN-madhurneural` voice.|
+| `hi-in-swaraneural`| Container image with the `hi-IN` locale and `hi-IN-swaraneural` voice.|
+| `id-id-ardineural`| Container image with the `id-ID` locale and `id-ID-ardineural` voice.|
+| `id-id-gadisneural`| Container image with the `id-ID` locale and `id-ID-gadisneural` voice.|
+| `it-it-diegoneural`| Container image with the `it-IT` locale and `it-IT-diegoneural` voice.|
+| `it-it-elsaneural`| Container image with the `it-IT` locale and `it-IT-elsaneural` voice.|
+| `it-it-isabellaneural`| Container image with the `it-IT` locale and `it-IT-isabellaneural` voice.|
+| `ja-jp-keitaneural`| Container image with the `ja-JP` locale and `ja-JP-keitaneural` voice.|
+| `ja-jp-nanamineural`| Container image with the `ja-JP` locale and `ja-JP-nanamineural` voice.|
+| `ka-ge-ekaneural`| Container image with the `ka-GE` locale and `ka-GE-ekaneural` voice.|
+| `ka-ge-giorgineural`| Container image with the `ka-GE` locale and `ka-GE-giorgineural` voice.|
+| `ko-kr-injoonneural`| Container image with the `ko-KR` locale and `ko-KR-injoonneural` voice.|
+| `ko-kr-sunhineural`| Container image with the `ko-KR` locale and `ko-KR-sunhineural` voice.|
+| `pt-br-antonioneural`| Container image with the `pt-BR` locale and `pt-BR-antonioneural` voice.|
+| `pt-br-franciscaneural`| Container image with the `pt-BR` locale and `pt-BR-franciscaneural` voice.|
+| `so-so-muuseneural`| Container image with the `so-SO` locale and `so-SO-muuseneural` voice.|
+| `so-so-ubaxneural`| Container image with the `so-SO` locale and `so-SO-ubaxneural` voice.|
+| `sv-se-hillevineural`| Container image with the `sv-SE` locale and `sv-SE-hillevineural` voice.|
+| `sv-se-mattiasneural`| Container image with the `sv-SE` locale and `sv-SE-mattiasneural` voice.|
+| `sv-se-sofieneural`| Container image with the `sv-SE` locale and `sv-SE-sofieneural` voice.|
+| `th-th-acharaneural`| Container image with the `th-TH` locale and `th-TH-acharaneural` voice.|
+| `th-th-niwatneural`| Container image with the `th-TH` locale and `th-TH-niwatneural` voice.|
+| `th-th-premwadeeneural`| Container image with the `th-TH` locale and `th-TH-premwadeeneural` voice.|
+| `tr-tr-ahmetneural`| Container image with the `tr-TR` locale and `tr-TR-ahmetneural` voice.|
+| `tr-tr-emelneural`| Container image with the `tr-TR` locale and `tr-TR-emelneural` voice.|
+| `zh-cn-xiaochenneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaochenneural` voice.|
+| `zh-cn-xiaohanneural`| Container image with the `zh-CN` locale and `zh-CN-xiaohanneural` voice.|
+| `zh-cn-xiaomoneural`| Container image with the `zh-CN` locale and `zh-CN-xiaomoneural` voice.|
+| `zh-cn-xiaoqiuneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoqiuneural` voice.|
+| `zh-cn-xiaoruineural`| Container image with the `zh-CN` locale and `zh-CN-xiaoruineural` voice.|
+| `zh-cn-xiaoshuangneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoshuangneural` voice.|
+| `zh-cn-xiaoxiaoneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoxiaoneural` voice.|
+| `zh-cn-xiaoxuanneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoxuanneural` voice.|
+| `zh-cn-xiaoyanneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoyanneural` voice.|
+| `zh-cn-xiaoyouneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoyouneural` voice.|
+| `zh-cn-yunxineural`| Container image with the `zh-CN` locale and `zh-CN-yunxineural` voice.|
+| `zh-cn-yunyangneural`| Container image with the `zh-CN` locale and `zh-CN-yunyangneural` voice.|
+| `zh-cn-yunyeneural`| Container image with the `zh-CN` locale and `zh-CN-yunyeneural` voice.|
++
+# [Previous version](#tab/previous)
+ Release notes for `v2.5.0`: **Features**
Release notes for `v2.5.0`:
| `zh-cn-yunyangneural`| Container image with the `zh-CN` locale and `zh-CN-yunyangneural` voice.|
| `zh-cn-yunyeneural`| Container image with the `zh-CN` locale and `zh-CN-yunyeneural` voice.|
-# [Previous version](#tab/previous)
Release notes for `v2.4.0`:
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
Fill out and submit the [request form](https://aka.ms/csdisconnectedcontainers)
Access is limited to customers that meet the following requirements:
-* Your organization must have a Microsoft Enterprise Agreement or an equivalent agreement and should be identified as strategic customer or partner with Microsoft.
+* Your organization should be identified as strategic customer or partner with Microsoft.
* Disconnected containers are expected to run fully offline, so your use cases must meet one of the following or similar requirements:
  * Environment or device(s) with zero connectivity to the internet.
  * Remote location that occasionally has internet access.
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/how-to/call-api.md
Title: how to call the Key Phrase Extraction API
description: How to extract key phrases by using the Key Phrase Extraction API. -+ Last updated 07/27/2022-+
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/how-to/use-containers.md
Title: Use Docker containers for Key Phrase Extraction on-premises
description: Learn how to use Docker containers for Key Phrase Extraction on-premises. -+ Last updated 07/27/2022-+ keywords: on-premises, Docker, container, natural language processing
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/language-support.md
Title: Language support for Key Phrase Extraction
description: Use this article to find the natural languages supported by Key Phrase Extraction. -+ Last updated 07/28/2022-+
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/overview.md
Title: What is key phrase extraction in Azure Cognitive Service for Language?
description: An overview of key phrase extraction in Azure Cognitive Services, which helps you identify main concepts in unstructured text -+ Last updated 06/15/2022-+
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/quickstart.md
Title: "Quickstart: Use the Key Phrase Extraction client library"
description: Use this quickstart to start using the Key Phrase Extraction API. -+ Last updated 08/15/2022-+ ms.devlang: csharp, java, javascript, python keywords: text mining, key phrase
cognitive-services Integrate Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/tutorials/integrate-power-bi.md
Title: 'Tutorial: Integrate Power BI with key phrase extraction'
description: Learn how to use the key phrase extraction feature to get text stored in Power BI. -+ Last updated 09/28/2022-+
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/how-to/call-api.md
Title: How to perform language detection
description: This article will show you how to detect the language of written text using language detection. -+ Last updated 03/01/2022-+
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/how-to/use-containers.md
Title: Use language detection Docker containers on-premises
description: Use Docker containers for the Language Detection API to determine the language of written text, on-premises. -+ Last updated 11/02/2021-+ keywords: on-premises, Docker, container
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/language-support.md
Title: Language Detection language support
description: This article explains which natural languages are supported by the Language Detection API. -+ Last updated 11/02/2021-+
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/overview.md
Title: What is language detection in Azure Cognitive Service for Language?
description: An overview of language detection in Azure Cognitive Services, which helps you detect the language that text is written in by returning language codes. -+ Last updated 07/27/2022-+
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/quickstart.md
Title: "Quickstart: Use the Language Detection client library"
description: Use this quickstart to start using Language Detection. -+ Last updated 08/15/2022-+ ms.devlang: csharp, java, javascript, python keywords: text mining, language detection
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
Title: "Quickstart: Use Document Summarization (preview)"
description: Use this quickstart to start using Document Summarization. -+
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
description: Learn about the different models that are available in Azure OpenAI
Last updated 06/24/2022-+
keywords:
# Azure OpenAI models
-The service provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI.
+The service provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI. Not all models are currently available in all regions. For a full breakdown, see the model summary tables later in this article.
| Model family | Description | |--|--|
Similar to text search embedding models, there are two input types supported by
When using our Embeddings models, keep in mind their limitations and risks.
+## Model Summary table and region availability
+
+### GPT-3 Models
+| Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
+| | | | | |
+| Ada | Yes | No | N/A | East US, South Central US, West Europe |
+| Text-Ada-001 | Yes | No | East US, South Central US, West Europe | N/A |
+| Babbage | Yes | No | N/A | East US, South Central US, West Europe |
+| Text-Babbage-001 | Yes | No | East US, South Central US, West Europe | N/A |
+| Curie | Yes | No | N/A | East US, South Central US, West Europe |
+| Text-curie-001 | Yes | No | East US, South Central US, West Europe | N/A |
+| Davinci* | Yes | No | N/A | East US, South Central US, West Europe |
+| Text-davinci-001 | Yes | No | South Central US, West Europe | N/A |
+| Text-davinci-002 | Yes | No | East US, South Central US, West Europe | N/A |
+| Text-davinci-fine-tune-002* | Yes | No | N/A | East US, West Europe |
+
+\*Models available by request only. Please open a support request.
+
+### Codex Models
+| Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
+| | | | | |
+| Code-Cushman-001* | Yes | No | South Central US, West Europe | East US, South Central US, West Europe |
+| Code-Davinci-002 | Yes | No | East US, West Europe | N/A |
+| Code-Davinci-Fine-tune-002* | Yes | No | N/A | East US, West Europe |
+
+\*Models available for Fine-tuning by request only. Please open a support request.
+++
+### Embeddings Models
+| Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
+| | | | | |
+| text-similarity-ada-001 | No | Yes | East US, South Central US, West Europe | N/A |
+| text-similarity-babbage-001 | No | Yes | South Central US, West Europe | N/A |
+| text-similarity-curie-001 | No | Yes | East US, South Central US, West Europe | N/A |
+| text-similarity-davinci-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-ada-doc-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-ada-query-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-babbage-doc-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-babbage-query-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-curie-doc-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-curie-query-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-davinci-doc-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-davinci-query-001 | No | Yes | South Central US, West Europe | N/A |
+| code-search-ada-code-001 | No | Yes | South Central US, West Europe | N/A |
+| code-search-ada-text-001 | No | Yes | South Central US, West Europe | N/A |
+| code-search-babbage-code-001 | No | Yes | South Central US, West Europe | N/A |
+| code-search-babbage-text-001 | No | Yes | South Central US, West Europe | N/A |
+
+ ## Next steps [Learn more about Azure OpenAI](../overview.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
The Azure OpenAI service provides REST API access to OpenAI's powerful language
| Feature | Azure OpenAI | | | | | Models available | GPT-3 base series <br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* available by request |
-| Billing Model| Coming Soon |
+| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* available by request. Please open a support request|
+| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) |
| Virtual network support | Yes | | Managed Identity| Yes, via Azure Active Directory | | UI experience | **Azure Portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |
-| Regional availability | South Central US <br> West Europe |
+| Regional availability | East US <br> South Central US <br> West Europe |
| Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be filtered. | ## Responsible AI
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value | |--|--| | OpenAI resources per region | 2 |
-| Requests per second per deployment | 1 |
+| Requests per second per deployment | 5 |
| Max fine-tuned model deployments | 2 | | Ability to deploy same model to multiple deployments | Not allowed | | Total number of training jobs per resource | 100 | | Max simultaneous running training jobs per resource | 1 | | Max training jobs queued | 20 | | Max Files per resource | 50 |
-| Total size of all files per resource | 1 GB|
+| Total size of all files per resource | 1 GB |
| Max training job time (job will fail if exceeded) | 120 hours |
-| Max training job size (tokens in training file * # of epochs) | **Ada**: 4-M tokens <br> **Babbage**: 4-M tokens <br> **Curie**: 4-M tokens <br> **Cushman**: 4-M tokens <br> **Davinci**: 500 K |
+| Max training job size (tokens in training file * # of epochs) | **Ada**: 40-M tokens <br> **Babbage**: 40-M tokens <br> **Curie**: 40-M tokens <br> **Cushman**: 40-M tokens <br> **Davinci**: 10-M tokens |
### General best practices to mitigate throttling during autoscaling
cognitive-services How To Inference Explainability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-inference-explainability.md
+
+ Title: "How-to: Use Inference Explainability"
+
+description: Personalizer can return feature scores in each Rank call to provide insight on what features are important to the model's decision.
++
+ms.
+++ Last updated : 09/20/2022++
+# Inference Explainability
+Personalizer can help you understand which features of a chosen action are the most and least influential to the model during inference. When enabled, inference explainability includes feature scores from the underlying model in the Rank API response, so your application receives this information at the time of inference.
+
+Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to analyze whether your model is exhibiting bias toward or against certain contextual settings, users, and actions.
+
+## How do I enable inference explainability?
+
+Setting the service configuration flag `IsInferenceExplainabilityEnabled` in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration - Update API](/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the entry `"IsInferenceExplainabilityEnabled": true`. If you don't know your current service configuration, you can obtain it from the [Service Configuration - Get API](/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP).
+
+```JSON
+{
+ "rewardWaitTime": "PT10M",
+ "defaultReward": 0,
+ "rewardAggregation": "earliest",
+ "explorationPercentage": 0.2,
+ "modelExportFrequency": "PT5M",
+ "logMirrorEnabled": true,
+ "logMirrorSasUri": "https://testblob.blob.core.windows.net/container?se=2020-08-13T00%3A00Z&sp=rwl&spr=https&sv=2018-11-09&sr=c&sig=signature",
+ "logRetentionDays": 7,
+ "lastConfigurationEditDate": "0001-01-01T00:00:00Z",
+ "learningMode": "Online",
+ "isAutoOptimizationEnabled": true,
+ "autoOptimizationFrequency": "P7D",
+ "autoOptimizationStartDate": "2019-01-19T00:00:00Z",
+"isInferenceExplainabilityEnabled": true
+}
+```
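
As an illustration (not from the article), the following minimal Python sketch performs that get-then-update flow over REST. The endpoint path, HTTP verb, and `Ocp-Apim-Subscription-Key` header are assumptions based on the preview Service Configuration APIs referenced above; verify them against the API reference and substitute your own resource endpoint and key.

```python
import requests

# Placeholders - replace with your Personalizer resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

# Assumed route for the preview Service Configuration APIs.
url = f"{ENDPOINT}/personalizer/v1.1-preview.1/configurations/service"
headers = {"Ocp-Apim-Subscription-Key": API_KEY}

# Fetch the current configuration so existing settings aren't lost.
config = requests.get(url, headers=headers).json()

# Add the flag and send the full configuration back.
config["isInferenceExplainabilityEnabled"] = True
response = requests.put(url, headers=headers, json=config)
response.raise_for_status()
print(response.json())
```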
+
+> [!NOTE]
+> Enabling inference explainability will significantly increase the latency of calls to the Rank API. We recommend experimenting with this capability and measuring the latency in your scenario to see if it satisfies your application's latency requirements.
++
+## How to interpret feature scores?
+Enabling inference explainability adds a collection called *inferenceExplanation* to the JSON response from the Rank API. This collection contains the feature names and values that were submitted in the Rank request, along with the feature scores learned by Personalizer's underlying model. The feature scores give you insight into how influential each feature was in the model's choice of action.
+
+```JSON
+
+{
+ "ranking": [
+ {
+ "id": "EntertainmentArticle",
+ "probability": 0.8
+ },
+ {
+ "id": "SportsArticle",
+ "probability": 0.15
+ },
+ {
+ "id": "NewsArticle",
+ "probability": 0.05
+ }
+ ],
+ "eventId": "75269AD0-BFEE-4598-8196-C57383D38E10",
+ "rewardActionId": "EntertainmentArticle",
+ "inferenceExplanation": [
+ {
+ "idΓÇ¥: "EntertainmentArticle",
+ "features": [
+ {
+ "name": "user.profileType",
+ "score": 3.0
+ },
+ {
+ "name": "user.latLong",
+ "score": -4.3
+ },
+ {
+ "name": "user.profileType^user.latLong",
+ "score" : 12.1
+ },
+ ]
+ ]
+}
+```
+
+In the example above, three action IDs are returned in the _ranking_ collection along with their respective probability scores. The action with the largest probability is the _best action_ as determined by the model trained on data sent to the Personalizer APIs, which in this case is `"id": "EntertainmentArticle"`. The action ID appears again in the _inferenceExplanation_ collection, along with the feature names and scores determined by the model for that action and the features and values sent to the Rank API.
+
+Recall that Personalizer will either return the _best action_ or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take; therefore, **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](concepts-exploration.md).
+
+For the best actions returned by Personalizer, the feature scores can provide general insight where:
+* Larger positive scores provide more support for the model choosing this action.
+* Larger negative scores provide more support for the model not choosing this action.
+* Scores close to zero have a small effect on the decision to choose this action.
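
As a minimal sketch based on the response shape shown above, the following Python snippet pulls the feature scores for the chosen action and sorts them from most to least supportive; the function and variable names are illustrative only.

```python
def top_features(rank_response: dict) -> list[tuple[str, float]]:
    """Return (feature name, score) pairs for the chosen action, highest score first."""
    chosen = rank_response["rewardActionId"]
    for explanation in rank_response.get("inferenceExplanation", []):
        # Find the explanation entry that matches the chosen action.
        if explanation["id"] == chosen:
            scored = [(f["name"], f["score"]) for f in explanation["features"]]
            return sorted(scored, key=lambda pair: pair[1], reverse=True)
    return []

# Abbreviated version of the response shown above.
response = {
    "rewardActionId": "EntertainmentArticle",
    "inferenceExplanation": [
        {
            "id": "EntertainmentArticle",
            "features": [
                {"name": "user.profileType", "score": 3.0},
                {"name": "user.latLong", "score": -4.3},
                {"name": "user.profileType^user.latLong", "score": 12.1},
            ],
        }
    ],
}
print(top_features(response))
# [('user.profileType^user.latLong', 12.1), ('user.profileType', 3.0), ('user.latLong', -4.3)]
```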
+
+## Important considerations for Inference Explainability
+* **Increased latency.** Enabling _Inference Explainability_ will significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application's latency requirements.
+
+* **Correlated Features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature A's score is a large positive value while Feature B's score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when using _Inference Explainability_, ensure that features sent to Personalizer are not highly correlated.
+* **Default exploration only.** Currently, Inference Explainability supports only the default exploration algorithm.
++
+## Next steps
+
+[Reinforcement learning](concepts-reinforcement-learning.md)
communication-services Phone Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/phone-capabilities.md
The following list of capabilities is supported for scenarios where at least one
| | Manage Teams transcription | ❌ | | | Receive information of call being transcribed | ✔️ | | | Support for compliance recording | ✔️ |
-| Media | Support for early media | ❌ |
+| Media | Support for early media | ✔️ |
| | Place a phone call honors location-based routing | ❌ | | | Support for survivable branch appliance | ❌ | | Accessibility | Receive closed captions | ❌ |
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
This table shows the maximum number of characters that can be sent per SMS segme
### Can I send/receive long messages (>2048 chars)?
-Azure Communication Services supports sending and receiving of long messages over SMS. However, some wireless carriers or devices may act differently when receiving long messages.
+Azure Communication Services supports sending and receiving of long messages over SMS. However, some wireless carriers or devices may act differently when receiving long messages. We recommend keeping SMS messages to a length of 320 characters and reducing the use of accents to ensure maximum delivery.
+
+*Limitation of US short codes: There's a known limit of ~4 segments when sending or receiving a message with non-ASCII characters. Beyond 4 segments, the message may not be delivered with the right formatting.
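
The following Python sketch is a rough, illustrative way to estimate segment count using the commonly cited GSM-7 and UCS-2 segment sizes (160/153 and 70/67 characters). It treats ASCII as a proxy for the GSM-7 character set, which isn't exact, and it isn't the carrier's actual segmentation logic.

```python
def estimate_segments(message: str) -> int:
    """Rough estimate only: ASCII is treated as GSM-7, anything else as UCS-2."""
    if message.isascii():
        single, per_segment = 160, 153   # GSM-7 sizes
    else:
        single, per_segment = 70, 67     # UCS-2 sizes
    if len(message) <= single:
        return 1
    return -(-len(message) // per_segment)  # ceiling division

print(estimate_segments("Hello from Azure Communication Services"))  # 1 segment
print(estimate_segments("Caffè " * 40))                              # UCS-2 path, multiple segments
```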
### Are there any limits on sending messages?
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following list presents the set of features that are currently available in
| | Place a group call with PSTN participants | ✔️ | ✔️ | ✔️ | ✔️ | | | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | ✔️ | ✔️ | ✔️ | | | Dial-out from a group call as a PSTN participant | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Support for early media | ❌ | ✔️ | ✔️ | ✔️ |
+| | Support for early media | ✔️ | ✔️ | ✔️ | ✔️ |
| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | ✔️ | ✔️ | ✔️ | | Device Management | Ask for permission to use audio and/or video | ✔️ | ✔️ | ✔️ | ✔️ | | | Get camera list | ✔️ | ✔️ | ✔️ | ✔️ |
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/create-communication-resource.md
az communication delete --name "acsResourceName" --resource-group "resourceGroup
If you have any phone numbers assigned to your resource upon resource deletion, the phone numbers will be released from your resource automatically at the same time. > [!Note]
-> Resource deletion is **permanent** and no data, including event gird filters, phone numbers, or other data tied to your resource, can be recovered if you delete the resource.
+> Resource deletion is **permanent** and no data, including event grid filters, phone numbers, or other data tied to your resource, can be recovered if you delete the resource.
## Next steps
container-apps Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress.md
TCP ingress is useful for exposing container apps that use a TCP-based protocol
> [!NOTE] > TCP ingress is in public preview and is only supported in Container Apps environments that use a [custom VNET](vnet-custom.md).
->
-> To enable TCP ingress, use ARM or Bicep (API version `2022-06-01-preview` or above), or the Azure CLI.
With TCP ingress enabled, your container app features the following characteristics:
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
In the GitHub workflow, you need to supply Azure credentials to authenticate to
First, get the resource ID of your resource group. Substitute the name of your group in the following [az group show][az-group-show] command: ```azurecli
-groupId=$(az group show \
+$groupId=$(az group show \
--name <resource-group-name> \ --query id --output tsv) ```
Update the Azure service principal credentials to allow push and pull access to
Get the resource ID of your container registry. Substitute the name of your registry in the following [az acr show][az-acr-show] command: ```azurecli
-registryId=$(az acr show \
+$registryId=$(az acr show \
--name <registry-name> \ --resource-group <resource-group-name> \ --query id --output tsv)
container-instances Container Instances Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-overview.md
Containers are becoming the preferred way to package, deploy, and manage cloud applications. Azure Container Instances offers the fastest and simplest way to run a container in Azure, without having to manage any virtual machines and without having to adopt a higher-level service.
-Azure Container Instances is a great solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs. For scenarios where you need full container orchestration, including service discovery across multiple containers, automatic scaling, and coordinated application upgrades, we recommend [Azure Kubernetes Service (AKS)](../aks/index.yml).
+Azure Container Instances is a great solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs. For scenarios where you need full container orchestration, including service discovery across multiple containers, automatic scaling, and coordinated application upgrades, we recommend [Azure Kubernetes Service (AKS)](../aks/index.yml). We recommend reading through the [considerations and limitations](#considerations) and the [FAQs](./container-instances-faq.yml) to understand the best practices when deploying container instances.
## Fast startup times
Azure Container Instances supports scheduling of [multi-container groups](contai
Azure Container Instances enables [deployment of container instances into an Azure virtual network](container-instances-vnet.md). When deployed into a subnet within your virtual network, container instances can communicate securely with other resources in the virtual network, including those that are on premises (through [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](../expressroute/expressroute-introduction.md)).
+## Considerations
+
+There are default limits that can only be raised through a quota increase request, and not all quota increases are approved: [Service quotas and region availability - Azure Container Instances | Microsoft Learn](./container-instances-quotas.md)
+
+Different regions have different default limits, so you should consider the limits in your region: [Resource availability by region - Azure Container Instances | Microsoft Learn](./container-instances-region-availability.md)
+
+If your container group stops working, we suggest restarting your container, checking your application code, or checking your local network configuration before opening a [support request][azure-support].
+
+Container images can't be larger than 15 GB; any image above this size may cause unexpected behavior: [How large can my container image be?](./container-instances-faq.yml)
+
+Some Windows Server base images are no longer compatible with Azure Container Instances:
+[What Windows base OS images are supported?](./container-instances-faq.yml)
+
+If a container group restarts, the container group's IP address may change. We advise against using a hard-coded IP address in your scenario. If you need a static public IP address, use Application Gateway: [Static IP address for container group - Azure Container Instances | Microsoft Learn](./container-instances-application-gateway.md)
+
+There are ports that are reserved for service functionality. We advise you not to use these ports since this will lead to unexpected behavior: [Does the ACI service reserve ports for service functionality?](./container-instances-faq.yml)
+
+If you're having trouble deploying or running your container, first check the [Troubleshooting Guide](./container-instances-troubleshooting.md) for common mistakes and issues.
+
+Your container groups may restart due to platform maintenance events. These maintenance events are done to ensure the continuous improvement of the underlying infrastructure: [Container had an isolated restart without explicit user input](./container-instances-faq.yml)
+
+ACI does not allow [privileged container operations](./container-instances-faq.yml). We advise you not to depend on using the root directory for your scenario.
+ ## Next steps Try deploying a container to Azure with a single command using our quickstart guide:
Try deploying a container to Azure with a single command using our quickstart gu
<!-- LINKS - External --> [terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+[azure-support]: https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest
container-registry Intro Connected Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/intro-connected-registry.md
Title: What is a connected registry description: Overview and scenarios of the connected registry feature of Azure Container Registry-+ Previously updated : 10/11/2022 Last updated : 10/25/2022
In this article, you learn about the *connected registry* feature of [Azure Cont
## Available regions
-* Asia East
-* EU North
-* EU West
-* US East
+* Canada Central
+* East Asia
+* East US
+* North Europe
+* Norway East
+* Southeast Asia
+* West Central US
+* West Europe
## Scenarios
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Previously updated : 10/05/2021 Last updated : 10/24/2022 adobe-target: true
Applications written for Azure Table storage can migrate to the API for Table wi
## API for PostgreSQL
+Azure Cosmos DB for PostgreSQL is a managed service for running PostgreSQL at any scale, with the [Citus open source](https://github.com/citusdata/citus) superpower of distributed tables. It stores data either on a single node, or distributed in a multi-node configuration.
+
+Azure Cosmos DB for PostgreSQL is built on native PostgreSQL (rather than a PostgreSQL fork) and lets you choose any major database version supported by the PostgreSQL community. It's ideal for starting on a single-node database with rich indexing, geospatial capabilities, and JSONB support. Later, if your performance needs grow, you can add nodes to the cluster with zero downtime.
+
+If you're looking for a managed open source relational database with high performance and geo-replication, Azure Cosmos DB for PostgreSQL is the recommended choice. To learn more, see the [Azure Cosmos DB for PostgreSQL introduction](postgresql/introduction.md).
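
As an illustrative sketch (not part of the article), distributing a table across the cluster's nodes uses the Citus `create_distributed_table` function. The connection string, table, and column names below are placeholders, and the default `citus` database and role are assumptions; adjust them for your cluster.

```python
import psycopg2

# Placeholder connection details - replace with your cluster's coordinator values.
conn = psycopg2.connect(
    "host=<coordinator-host> port=5432 dbname=citus user=citus "
    "password=<password> sslmode=require"
)
conn.autocommit = True

with conn.cursor() as cur:
    # Create a regular table, then shard it on tenant_id across the worker nodes.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            tenant_id bigint,
            event_id bigserial,
            payload jsonb,
            PRIMARY KEY (tenant_id, event_id)
        );
    """)
    cur.execute("SELECT create_distributed_table('events', 'tenant_id');")
```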
+ ## Capacity planning when migrating data Trying to do capacity planning for a migration to Azure Cosmos DB for NoSQL or MongoDB from an existing database cluster? You can use information about your existing database cluster for capacity planning.
cosmos-db Index Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-overview.md
The goal of this article is to explain how Azure Cosmos DB indexes data and how
## From items to trees
-Every time an item is stored in a container, its content is projected as a JSON document, then converted into a tree representation. What that means is that every property of that item gets represented as a node in a tree. A pseudo root node is created as a parent to all the first-level properties of the item. The leaf nodes contain the actual scalar values carried by an item.
+Every time an item is stored in a container, its content is projected as a JSON document, then converted into a tree representation. This means that every property of that item gets represented as a node in a tree. A pseudo root node is created as a parent to all the first-level properties of the item. The leaf nodes contain the actual scalar values carried by an item.
As an example, consider this item:
Range indexes can be used on scalar values (string or number). The default index
SELECT * FROM c WHERE ST_INTERSECTS(c.property, { 'type':'Polygon', 'coordinates': [[ [31.8, -5], [32, -5], [31.8, -5] ]] }) ```
-Spatial indexes can be used on correctly formatted [GeoJSON](./sql-query-geospatial-intro.md) objects. Points, LineStrings, Polygons, and MultiPolygons are currently supported. To use this index type, set by using the `"kind": "Range"` property when configuring the indexing policy. To learn how to configure spatial indexes, see [Spatial indexing policy examples](how-to-manage-indexing-policy.md#spatial-index)
+Spatial indexes can be used on correctly formatted [GeoJSON](./sql-query-geospatial-intro.md) objects. Points, LineStrings, Polygons, and MultiPolygons are currently supported. To learn how to configure spatial indexes, see [Spatial indexing policy examples](how-to-manage-indexing-policy.md#spatial-index)
### Composite indexes
Here is a table that summarizes the different ways indexes are used in Azure Cos
| Full index scan | Read distinct set of indexed values and load only matching items from the transactional data store | Contains, EndsWith, RegexMatch, LIKE | Increases linearly based on the cardinality of indexed properties | Increases based on number of items in query results | | Full scan | Load all items from the transactional data store | Upper, Lower | N/A | Increases based on number of items in container |
-When writing queries, you should use filter predicate that use the index as efficiently as possible. For example, if either `StartsWith` or `Contains` would work for your use case, you should opt for `StartsWith` since it will do a precise index scan instead of a full index scan.
+When writing queries, you should use filter predicates that use the index as efficiently as possible. For example, if either `StartsWith` or `Contains` would work for your use case, you should opt for `StartsWith` since it will do a precise index scan instead of a full index scan.
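
As an illustrative sketch (assuming the `azure-cosmos` Python SDK and placeholder account, database, and container names), here's how a query might prefer `STARTSWITH` over `CONTAINS` so the engine can use a precise index scan:

```python
from azure.cosmos import CosmosClient

# Placeholder connection details - replace with your own account values.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("<database>").get_container_client("<container>")

# STARTSWITH allows a precise index scan over the range index, whereas
# CONTAINS would force a full index scan of the distinct indexed values.
query = "SELECT * FROM c WHERE STARTSWITH(c.headquarters.country, @prefix)"
items = container.query_items(
    query=query,
    parameters=[{"name": "@prefix", "value": "United"}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item["id"])
```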
## Index usage details
The query predicate (filtering on items where any location has "France" as its c
:::image type="content" source="./media/index-overview/matching-path.png" alt-text="Matching a specific path within a tree" border="false":::
-Since this query has an equality filter, after traversing this tree, we can quickly identify the index pages that contain the query results. In this case, the query engine would read index pages that contain Item 1. An index seek is the most efficient way to use the index. With an index seek we only read the necessary index pages and load only the items in the query results. Therefore, the index lookup time and RU charge from index lookup are incredibly low, regardless of the total data volume.
+Since this query has an equality filter, after traversing this tree, we can quickly identify the index pages that contain the query results. In this case, the query engine would read index pages that contain Item 1. An index seek is the most efficient way to use the index. With an index seek, we only read the necessary index pages and load only the items in the query results. Therefore, the index lookup time and RU charge from index lookup are incredibly low, regardless of the total data volume.
### Precise index scan
FROM company
WHERE company.headquarters.employees = 200 AND CONTAINS(company.headquarters.country, "United") ```
-To execute this query, the query engine must do an index seek on `headquarters/employees` and full index scan on `headquarters/country`. The query engine has internal heuristics that it uses to evaluate the query filter expression as efficiently as possible. In this case, the query engine would avoid needing to read unnecessary index pages by doing the index seek first. If, for example, only 50 items matched the equality filter, the query engine would only need to evaluate `Contains` on the index pages that contained those 50 items. A full index scan of the entire container wouldn't be necessary.
+To execute this query, the query engine must do an index seek on `headquarters/employees` and a full index scan on `headquarters/country`. The query engine has internal heuristics that it uses to evaluate the query filter expression as efficiently as possible. In this case, the query engine would avoid needing to read unnecessary index pages by doing the index seek first. If, for example, only 50 items matched the equality filter, the query engine would only need to evaluate `Contains` on the index pages that contained those 50 items. A full index scan of the entire container wouldn't be necessary.
## Index utilization for scalar aggregate functions
Queries with aggregate functions must rely exclusively on the index in order to
In some cases, the index can return false positives. For example, when evaluating `Contains` on the index, the number of matches in the index may exceed the number of query results. The query engine will load all index matches, evaluate the filter on the loaded items, and return only the correct results.
-For the majority of queries, loading false positive index matches will not have any noticeable impact on index utilization.
+For most queries, loading false positive index matches will not have any noticeable impact on index utilization.
For example, consider the following query:
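As an illustrative stand-in (the alias `c` and the property `town` are hypothetical), a substring filter of this shape can match more entries in the index than items in the final result; the query engine verifies each loaded item before returning it:

```sql
SELECT * FROM c WHERE CONTAINS(c.town, "Red")
```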
cosmos-db Index Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md
Any indexing policy has to include the root path `/*` as either an included or a
- If the indexing mode is set to **consistent**, the system properties `id` and `_ts` are automatically indexed.
-When including and excluding paths, you may encounter the following attributes:
-- `kind` can be either `range` or `hash`. Hash index support is limited to equality filters. Range index functionality provides all of the functionality of hash indexes, as well as efficient sorting, range filters, and system functions. We always recommend using a range index.
-- `precision` is a number defined at the index level for included paths. A value of `-1` indicates maximum precision. We recommend always setting this value to `-1`.
-- `dataType` can be either `String` or `Number`. This indicates the types of JSON properties that will be indexed.
-It's no longer necessary to set these properties. When not specified, these properties will have the following default values:
-
-| **Property Name** | **Default Value** |
-| -- | -- |
-| `kind` | `range` |
-| `precision` | `-1` |
-| `dataType` | `String` and `Number` |
- See [this section](how-to-manage-indexing-policy.md#indexing-policy-examples) for indexing policy examples for including and excluding paths. ## Include/exclude precedence
A container's indexing policy can be updated at any time [by using the Azure por
> Index transformation is an operation that consumes [Request Units](request-units.md). Request Units consumed by an index transformation aren't currently billed if you are using [serverless](serverless.md) containers. These Request Units will get billed once serverless becomes generally available. > [!NOTE]
-> You can track the progress of index transformation in the Azure portal or [by using one of the SDKs](how-to-manage-indexing-policy.md).
+> You can track the progress of index transformation in the [Azure portal](how-to-manage-indexing-policy.md#use-the-azure-portal) or by [using one of the SDKs](how-to-manage-indexing-policy.md#dotnet-sdk).
There's no impact to write availability during any index transformations. The index transformation uses your provisioned RUs but at a lower priority than your CRUD operations or queries.
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/find-request-unit-charge.md
The cost of all database operations is normalized by Azure Cosmos DB and is expr
This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB for MongoDB. If you're using a different API, see [API for NoSQL](../find-request-unit-charge.md), [API for Cassandra](../cassandr) articles to find the RU/s charge.
-The RU charge is exposed by a custom [database command](https://docs.mongodb.com/manual/reference/command/) named `getLastRequestStatistics`. The command returns a document that contains the name of the last operation executed, its request charge, and its duration. If you use the Azure Cosmos DB for MongoDB, you have multiple options for retrieving the RU charge.
+The RU charge is exposed by a custom database command named `getLastRequestStatistics`. The command returns a document that contains the name of the last operation executed, its request charge, and its duration. If you use Azure Cosmos DB for MongoDB, you have multiple options for retrieving the RU charge.
## Use the Azure portal
The RU charge is exposed by a custom [database command](https://docs.mongodb.com
`db.runCommand({getLastRequestStatistics: 1})`
-## Use a MongoDB driver
+## Programmatically
+
+### [Mongo Shell](#tab/mongo-shell)
+
+When you use the Mongo shell, you can execute commands by using runCommand().
+
+```javascript
+db.runCommand('getLastRequestStatistics')
+```
### [.NET driver](#tab/dotnet-driver)
cosmos-db How To Configure Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-configure-capabilities.md
Previously updated : 10/12/2022 Last updated : 10/24/2022
Capabilities are features that can be added or removed to your API for MongoDB a
| `EnableMongoRoleBasedAccessControl` | Enable support for creating Users/Roles for native MongoDB role-based access control | No | | `EnableMongoRetryableWrites` | Enables support for retryable writes on the account | Yes | | `EnableMongo16MBDocumentSupport` | Enables support for inserting documents upto 16 MB in size | No |
+| `EnableUniqueCompoundNestedDocs` | Enables support for compound and unique indexes on nested fields, as long as the nested field is not an array. | No |
+ ## Enable a capability
cosmos-db Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/indexing.md
ms.devlang: javascript Previously updated : 4/5/2022 Last updated : 10/24/2022
In the API for MongoDB, compound indexes are **required** if your query needs th
A compound index or single field indexes for each field in the compound index will result in the same performance for filtering in queries.
+Compound indexes on nested fields are not supported by default due to limitations with arrays. If your nested field does not contain an array, the index will work as intended. If your nested field contains an array (anywhere on the path), that value will be ignored in the index.
+
+For example, a compound index containing `people.tom.age` will work in this case, since there's no array on the path:
+```javascript
+{ "people": { "tom": { "age": "25" }, "mark": { "age": "30" } } }
+```
+but won't work in this case, since there's an array in the path:
+```javascript
+{ "people": { "tom": [ { "age": "25" } ], "mark": [ { "age": "30" } ] } }
+```
+
+This feature can be enabled for your database account by [enabling the 'EnableUniqueCompoundNestedDocs' capability](how-to-configure-capabilities.md).
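As a sketch of what this looks like once the capability is enabled (the collection name `coll` and the second field `name` are hypothetical), the compound index is created with the usual shell syntax:

```javascript
// Compound index on a nested, non-array field plus a second field
db.coll.createIndex({ "people.tom.age": 1, "name": 1 })
```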
+ > [!NOTE]
-> You can't create compound indexes on nested properties or arrays.
+> You can't create compound indexes on arrays.
The following command creates a compound index on the fields `name` and `age`:
In the preceding example, omitting the ```"university":1``` clause returns an er
Unique indexes need to be created while the collection is empty.
+Unique indexes on nested fields are not supported by default due to limitations with arrays. If your nested field does not contain an array, the index will work as intended. If your nested field contains an array (anywhere on the path), that value will be ignored in the unique index and uniqueness will not be preserved for that value.
+
+For example, a unique index on `people.tom.age` will work in this case, since there's no array on the path:
+```javascript
+{ "people": { "tom": { "age": "25" }, "mark": { "age": "30" } } }
+```
+but won't work in this case, since there's an array in the path:
+```javascript
+{ "people": { "tom": [ { "age": "25" } ], "mark": [ { "age": "30" } ] } }
+```
+
+This feature can be enabled for your database account by [enabling the 'EnableUniqueCompoundNestedDocs' capability](how-to-configure-capabilities.md).
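As a sketch under the same assumptions (the collection name `coll` is hypothetical, the capability is enabled, and the collection is empty), the unique index is created with the standard `unique` option:

```javascript
// Unique index on a nested, non-array field
db.coll.createIndex({ "people.tom.age": 1 }, { unique: true })
```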
++ ### TTL indexes To enable document expiration in a particular collection, you need to create a [time-to-live (TTL) index](../time-to-live.md). A TTL index is an index on the `_ts` field with an `expireAfterSeconds` value.
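For example, a TTL index that expires documents 3,600 seconds after their last update could be created as follows (the collection name `coll` is hypothetical):

```javascript
// TTL index on the _ts field; documents expire 3600 seconds after their last update
db.coll.createIndex({ "_ts": 1 }, { expireAfterSeconds: 3600 })
```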
cosmos-db How To Manage Indexing Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-indexing-policy.md
Here are some examples of indexing policies shown in [their JSON format](../inde
} ```
-This indexing policy is equivalent to the one below which manually sets ```kind```, ```dataType```, and ```precision``` to their default values. These properties are no longer necessary to explicitly set and you should omit them from your indexing policy entirely (as shown in above example). If you try to set these properties, they'll be automatically removed from your indexing policy.
--
-```json
- {
- "indexingMode": "consistent",
- "includedPaths": [
- {
- "path": "/*",
- "indexes": [
- {
- "kind": "Range",
- "dataType": "Number",
- "precision": -1
- },
- {
- "kind": "Range",
- "dataType": "String",
- "precision": -1
- }
- ]
- }
- ],
- "excludedPaths": [
- {
- "path": "/path/to/single/excluded/property/?"
- },
- {
- "path": "/path/to/root/of/multiple/excluded/properties/*"
- }
- ]
- }
-```
- ### Opt-in policy to selectively include some property paths ```json
This indexing policy is equivalent to the one below which manually sets ```kind`
} ```
-This indexing policy is equivalent to the one below which manually sets ```kind```, ```dataType```, and ```precision``` to their default values. These properties are no longer necessary to explicitly set and you should omit them from your indexing policy entirely (as shown in above example). If you try to set these properties, they'll be automatically removed from your indexing policy.
--
-```json
- {
- "indexingMode": "consistent",
- "includedPaths": [
- {
- "path": "/path/to/included/property/?",
- "indexes": [
- {
- "kind": "Range",
- "dataType": "Number"
- },
- {
- "kind": "Range",
- "dataType": "String"
- }
- ]
- },
- {
- "path": "/path/to/root/of/multiple/included/properties/*",
- "indexes": [
- {
- "kind": "Range",
- "dataType": "Number"
- },
- {
- "kind": "Range",
- "dataType": "String"
- }
- ]
- }
- ],
- "excludedPaths": [
- {
- "path": "/*"
- }
- ]
- }
-```
- > [!NOTE] > It is generally recommended to use an **opt-out** indexing policy to let Azure Cosmos DB proactively index any new property that may be added to your data model.
This indexing policy is equivalent to the one below which manually sets ```kind`
## <a id="composite-index"></a>Composite indexing policy examples
-In addition to including or excluding paths for individual properties, you can also specify a composite index. If you would like to perform a query that has an `ORDER BY` clause for multiple properties, a [composite index](../index-policy.md#composite-indexes) on those properties is required. Additionally, composite indexes will have a performance benefit for queries that have a multiple filters or both a filter and an ORDER BY clause.
+In addition to including or excluding paths for individual properties, you can also specify a composite index. If you would like to perform a query that has an `ORDER BY` clause for multiple properties, a [composite index](../index-policy.md#composite-indexes) on those properties is required. Additionally, composite indexes will have a performance benefit for queries that have multiple filters or both a filter and an ORDER BY clause.
> [!NOTE] > Composite paths have an implicit `/?` since only the scalar value at that path is indexed. The `/*` wildcard is not supported in composite paths. You shouldn't specify `/?` or `/*` in a composite path.
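As a minimal sketch of such a policy (the property paths `/name` and `/age` mirror the SDK examples later in this article; everything else is illustrative), a composite index is declared as an ordered array of path/order pairs:

```json
{
    "indexingMode": "consistent",
    "includedPaths": [ { "path": "/*" } ],
    "excludedPaths": [],
    "compositeIndexes": [
        [
            { "path": "/name", "order": "ascending" },
            { "path": "/age", "order": "descending" }
        ]
    ]
}
```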
To create a container with a custom indexing policy see, [Create a container wit
## <a id="dotnet-sdk"></a> Use the .NET SDK
-# [.NET SDK V2](#tab/dotnetv2)
-
-The `DocumentCollection` object from the [.NET SDK v2](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/) exposes an `IndexingPolicy` property that lets you change the `IndexingMode` and add or remove `IncludedPaths` and `ExcludedPaths`.
-
-```csharp
-// Retrieve the container's details
-ResourceResponse<DocumentCollection> containerResponse = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("database", "container"));
-// Set the indexing mode to consistent
-containerResponse.Resource.IndexingPolicy.IndexingMode = IndexingMode.Consistent;
-// Add an included path
-containerResponse.Resource.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
-// Add an excluded path
-containerResponse.Resource.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/name/*" });
-// Add a spatial index
-containerResponse.Resource.IndexingPolicy.SpatialIndexes.Add(new SpatialSpec() { Path = "/locations/*", SpatialTypes = new Collection<SpatialType>() { SpatialType.Point } } );
-// Add a composite index
-containerResponse.Resource.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath> {new CompositePath() { Path = "/name", Order = CompositePathSortOrder.Ascending }, new CompositePath() { Path = "/age", Order = CompositePathSortOrder.Descending }});
-// Update container with changes
-await client.ReplaceDocumentCollectionAsync(containerResponse.Resource);
-```
-
-To track the index transformation progress, pass a `RequestOptions` object that sets the `PopulateQuotaInfo` property to `true`.
-
-```csharp
-// retrieve the container's details
-ResourceResponse<DocumentCollection> container = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("database", "container"), new RequestOptions { PopulateQuotaInfo = true });
-// retrieve the index transformation progress from the result
-long indexTransformationProgress = container.IndexTransformationProgress;
-```
- # [.NET SDK V3](#tab/dotnetv3) The `ContainerProperties` object from the [.NET SDK v3](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/) (see [this Quickstart](quickstart-dotnet.md) regarding its usage) exposes an `IndexingPolicy` property that lets you change the `IndexingMode` and add or remove `IncludedPaths` and `ExcludedPaths`.
ContainerResponse containerResponse = await client.GetContainer("database", "con
long indexTransformationProgress = long.Parse(containerResponse.Headers["x-ms-documentdb-collection-index-transformation-progress"]); ```
-When defining a custom indexing policy while creating a new container, the SDK V3's fluent API lets you write this definition in a concise and efficient way:
+The SDK V3's fluent API lets you write this definition in a concise and efficient way when defining a custom indexing policy while creating a new container:
```csharp await client.GetDatabase("database").DefineContainer(name: "container", partitionKeyPath: "/myPartitionKey")
await client.GetDatabase("database").DefineContainer(name: "container", partitio
.Attach() .CreateIfNotExistsAsync(); ```+
+# [.NET SDK V2](#tab/dotnetv2)
+
+The `DocumentCollection` object from the [.NET SDK v2](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/) exposes an `IndexingPolicy` property that lets you change the `IndexingMode` and add or remove `IncludedPaths` and `ExcludedPaths`.
+
+```csharp
+// Retrieve the container's details
+ResourceResponse<DocumentCollection> containerResponse = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("database", "container"));
+// Set the indexing mode to consistent
+containerResponse.Resource.IndexingPolicy.IndexingMode = IndexingMode.Consistent;
+// Add an included path
+containerResponse.Resource.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
+// Add an excluded path
+containerResponse.Resource.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/name/*" });
+// Add a spatial index
+containerResponse.Resource.IndexingPolicy.SpatialIndexes.Add(new SpatialSpec() { Path = "/locations/*", SpatialTypes = new Collection<SpatialType>() { SpatialType.Point } } );
+// Add a composite index
+containerResponse.Resource.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath> {new CompositePath() { Path = "/name", Order = CompositePathSortOrder.Ascending }, new CompositePath() { Path = "/age", Order = CompositePathSortOrder.Descending }});
+// Update container with changes
+await client.ReplaceDocumentCollectionAsync(containerResponse.Resource);
+```
+
+To track the index transformation progress, pass a `RequestOptions` object that sets the `PopulateQuotaInfo` property to `true`.
+
+```csharp
+// retrieve the container's details
+ResourceResponse<DocumentCollection> container = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("database", "container"), new RequestOptions { PopulateQuotaInfo = true });
+// retrieve the index transformation progress from the result
+long indexTransformationProgress = container.IndexTransformationProgress;
+```
## Use the Java SDK
cosmos-db Geospatial Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/geospatial-intro.md
Azure Cosmos DB interprets coordinates as represented per the WGS-84 reference s
**LineStrings in GeoJSON** ```json
+{
"type":"LineString",
- "coordinates":[ [
+ "coordinates":[
[ 31.8, -5 ], [ 31.8, -4.7 ]
- ] ]
+ ]
+}
```

### Polygons
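For reference, a minimal GeoJSON Polygon looks like the following (this is the same ring used in the `ST_AREA` example later in this document); the first and last coordinates must be identical to close the ring, and points are listed in counter-clockwise order:

```json
{
    "type":"Polygon",
    "coordinates":[ [
        [ 31.8, -5 ],
        [ 32, -5 ],
        [ 32, -4.7 ],
        [ 31.8, -4.7 ],
        [ 31.8, -5 ]
    ] ]
}
```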
cosmos-db Spatial Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/spatial-functions.md
Azure Cosmos DB supports the following Open Geospatial Consortium (OGC) built-in
The following scalar functions perform an operation on a spatial object input value and return a numeric or Boolean value.
+* [ST_AREA](st-area.md)
* [ST_DISTANCE](st-distance.md)
* [ST_INTERSECTS](st-intersects.md)
* [ST_ISVALID](st-isvalid.md)
* [ST_ISVALIDDETAILED](st-isvaliddetailed.md)
* [ST_WITHIN](st-within.md)
---
-
- ## Next steps - [System functions Azure Cosmos DB](system-functions.md)
cosmos-db St Area https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-area.md
+
+ Title: ST_AREA in Azure Cosmos DB query language
+description: Learn about SQL system function ST_AREA in Azure Cosmos DB.
++++ Last updated : 10/21/2022++++
+# ST_AREA (Azure Cosmos DB)
++
+ Returns the total area of a GeoJSON Polygon or MultiPolygon expression. To learn more, see the [Geospatial and GeoJSON location data](geospatial-intro.md) article.
+
+## Syntax
+
+```sql
+ST_AREA (<spatial_expr>)
+```
+
+## Arguments
+
+*spatial_expr*
+ Is any valid GeoJSON Polygon or MultiPolygon object expression.
+
+## Return types
+
+ Returns the total area of a set of points. This is expressed in square meters for the default reference system.
+
+## Examples
+
+ The following example shows how to return the area of a polygon using the `ST_AREA` built-in function.
+
+```sql
+SELECT ST_AREA({
+ "type":"Polygon",
+ "coordinates":[ [
+ [ 31.8, -5 ],
+ [ 32, -5 ],
+ [ 32, -4.7 ],
+ [ 31.8, -4.7 ],
+ [ 31.8, -5 ]
+ ] ]
+}) as Area
+```
+
+Here is the result set.
+
+```json
+[
+ {
+ "Area": 735970283.0522614
+ }
+]
+```
+
+## Remarks
+
+Using the `ST_AREA` function to calculate the area of zero- or one-dimensional figures like GeoJSON Points and LineStrings will result in an area of 0.
+
+> [!NOTE]
+> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+
+## Next steps
+
+- [Spatial functions Azure Cosmos DB](spatial-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db St Distance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-distance.md
ST_DISTANCE (<spatial_expr>, <spatial_expr>)
## Examples
- The following example shows how to return all family documents that are within 30 km of the specified location using the `ST_DISTANCE` built-in function. .
+ The following example shows how to return all family documents that are within 30 km of the specified location using the `ST_DISTANCE` built-in function.
```sql SELECT f.id
cosmos-db Concepts Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-performance-tuning.md
Previously updated : 08/30/2022 Last updated : 10/25/2022 # Performance tuning
cosmos-db Howto Modify Distributed Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-modify-distributed-tables.md
Previously updated : 08/02/2022 Last updated : 10/24/2022 # Distribute and modify tables
CREATE TABLE positions (object_id text primary key, position coordinates);
-- data loading thus goes over a single connection: SELECT create_distributed_table('positions', 'object_id');+
+SET client_encoding TO 'UTF8';
\COPY positions FROM 'positions.csv' COMMIT;
BEGIN;
CREATE TABLE items (key text, value text); -- parallel data loading: SELECT create_distributed_table('items', 'key');
+SET client_encoding TO 'UTF8';
\COPY items FROM 'items.csv' CREATE TYPE coordinates AS (x int, y int);
cosmos-db Resources Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/resources-regions.md
Previously updated : 06/21/2022 Last updated : 10/24/2022 # Regional availability for Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)]
-clusters are available in the following Azure regions:
+Azure Cosmos DB for PostgreSQL is available in the following Azure regions:
* Americas: * Brazil South
clusters are available in the following Azure regions:
* France Central * Germany West Central * North Europe
+ * Sweden Central
* Switzerland North
+ * Switzerland West†
* UK South * West Europe
-Some of these regions may not be initially activated on all Azure
+† This Azure region is a [restricted one](../../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies). To use it, you need to request access by opening a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+Some of these regions may not be activated on all Azure
subscriptions. If you want to use a region from the list above and don't see it in your subscription, or if you want to use a region not on this list, open a [support
cosmos-db Tutorial Design Database Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-design-database-multi-tenant.md
ms.devlang: azurecli Previously updated : 06/29/2022 Last updated : 10/24/2022 #Customer intent: As an developer, I want to design a Azure Cosmos DB for PostgreSQL database so that my multi-tenant application runs efficiently for all tenants.
done
Back inside psql, bulk load the data. Be sure to run psql in the same directory where you downloaded the data files. ```sql
-SET CLIENT_ENCODING TO 'utf8';
+SET client_encoding TO 'UTF8';
\copy companies from 'companies.csv' with csv \copy campaigns from 'campaigns.csv' with csv
cost-management-billing View All Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/view-all-accounts.md
tags: billing
Previously updated : 07/28/2022 Last updated : 10/25/2022
Azure portal supports the following type of billing accounts:
- **Enterprise Agreement**: A billing account for an Enterprise Agreement (EA) is created when your organization signs an [Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/) to use Azure. An EA enrollment can contain an unlimited number of EA accounts. However, an EA account has a subscription limit of 5000. *Regardless of a subscription's state, its included in the limit. So, deleted and disabled subscriptions are included in the limit*. If you need more subscriptions than the limit, create more EA accounts. Generally speaking, a subscription is a billing container. We recommend that you avoid creating multiple subscriptions to implement access boundaries. To separate resources with an access boundary, consider using a resource group. For more information about resource groups, see [Manage Azure resource groups by using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md). -- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/) may have a billing account for a Microsoft Customer Agreement as well. You can have a maximum of 20 subscriptions in a Microsoft Customer Agreement for an individual. A Microsoft Customer Agreement for an enterprise can have up to 5000 subscriptions under it.
+- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/) may have a billing account for a Microsoft Customer Agreement as well. You can have a maximum of 5 subscriptions in a Microsoft Customer Agreement for an individual. A Microsoft Customer Agreement for an enterprise can have up to 5000 subscriptions under it.
- **Microsoft Partner Agreement**: A billing account for a Microsoft Partner Agreement is created for Cloud Solution Provider (CSP) partners to manage their customers in the new commerce experience. Partners need to have at least one customer with an [Azure plan](/partner-center/purchase-azure-plan) to manage their billing account in the Azure portal. For more information, see [Get started with your billing account for Microsoft Partner Agreement](../understand/mpa-overview.md).
cost-management-billing Manage Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/manage-tenants.md
A tenant is a digital representation of your organization and is primarily assoc
Each tenant is distinct and separate from other tenants, yet you can allow guest users from other tenants to access your tenant to track your costs and manage billing.
+## What's an associated tenant?
+An associated tenant is a tenant that is linked to your primary billing tenant's billing account. You can move Microsoft 365 subscriptions to these tenants. You can also assign billing account roles to users in associated billing tenants. For more information, see [Manage billing across multiple tenants using associated billing tenants](../manage/manage-billing-across-tenants.md).
+ ## How tenants and subscriptions relate to billing account You use your Microsoft Customer Agreement (billing account) to track costs and manage billing. Each billing account has at least one billing profile. The billing profile allows you to manage your invoice and payment method. Each billing profile includes one invoice section, by default. You can create more invoice sections to group, track, and manage costs at a more granular level if needed. -- Your billing account is associated with a single tenant. It means only users who are part of the tenant can access your billing account.-- When you create a new Azure subscription for your billing account, it's always created in your billing account tenant. However, you can move subscriptions to other tenants. You can also link existing subscriptions from other tenants to your billing account. It allows you to centrally manage billing through one tenant while keeping resources and subscriptions in other tenants based on your needs.
+- Your billing account is associated with a single, primary tenant. Users who are part of the primary tenant or who are part of associated tenants can access your billing account if they have the appropriate billing role assigned.
+- When you create a new Azure subscription for your billing account, it's created in your tenant or one of the other tenants you have access to. You can choose the tenant while creating the subscription.
+- You can move subscriptions to other tenants. You can also link existing subscriptions from other tenants to your billing account. This flexibility allows you to centrally manage billing through one tenant while keeping resources and subscriptions in other tenants based on your needs.
-The following diagram shows how billing account and subscriptions are linked to tenants. The Contoso MCA billing account is associated with Tenant 1 while Contoso PAYG account is associated with Tenant 2. Let's assume Contoso wants to pay for their PAYG subscription through their MCA billing account, they can use a billing ownership transfer to link the subscription to their MCA billing account. The subscription and its resources will still be associated with Tenant 2, but they're paid for using the MCA billing account.
+The following diagram shows how billing accounts and subscriptions are linked to tenants. Let's assume Contoso would like to streamline their billing management through an MCA. The Contoso MCA billing account is in Tenant 1 while the Contoso PAYG account is in Tenant 2. They can use a billing ownership transfer to link the subscription to their MCA billing account. The subscription and its resources will still be associated with Tenant 2, but they're paid for using the MCA billing account.
:::image type="content" source="./media/manage-tenants/billing-hierarchy-example.png" alt-text="Diagram showing an example billing hierarchy." border="false" lightbox="./media/manage-tenants/billing-hierarchy-example.png"::: ## Manage subscriptions under multiple tenants in a single Microsoft Customer Agreement
-Billing owners can create subscriptions when they have the [appropriate permissions](../manage/understand-mca-roles.md#subscription-billing-roles-and-tasks) to the billing account. By default, any new subscriptions created under the Microsoft Customer Agreement are in the Microsoft Customer Agreement tenant.
+Billing owners can create subscriptions when they have the [appropriate permissions](../manage/understand-mca-roles.md#subscription-billing-roles-and-tasks) to the billing account. By default, any new subscriptions created under the Microsoft Customer Agreement are in the current user's tenant. A different tenant can be selected from the list of tenants to which the user has access when creating a subscription.
- You can link subscriptions from other tenants to your Microsoft Customer Agreement billing account. Taking billing ownership of a subscription only changes the invoicing arrangement. It doesn't affect the service tenant or Azure RBAC roles.-- To change the subscription owner in the service tenant, you must transfer the [subscription to a different Azure Active Directory directory](../../role-based-access-control/transfer-subscription.md).-
-An MCA billing account is managed by a single tenant/directory. The billing account only controls billing for the subscriptions in its tenant. However, you can use a billing ownership transfer to link a subscription to a billing account in a different tenant.
### Billing ownership transfer
Billing ownership transfer doesnΓÇÖt affect:
- Resources - Azure RBAC permissions
+## Assign roles to users to your Microsoft Customer Agreement
+
+There are three ways users with billing owner access can assign billing roles to users on a Microsoft Customer Agreement:
+
+- Assign billing roles to users in the primary tenant
+- Assign billing roles to external users (outside of your primary tenant) if they are part of an associated tenant
+- If tenants are not associated, [create guest users in the primary tenant and assign roles](#add-guest-users-to-your-microsoft-customer-agreement-tenant).
## Add guest users to your Microsoft Customer Agreement tenant
data-factory Author Visually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/author-visually.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Visual authoring in Azure Data Factory
data-factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compute-linked-services.md
Previously updated : 09/09/2021 Last updated : 10/25/2022
data-factory Concepts Data Flow Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-expression-builder.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Build expressions in mapping data flow
data-factory Concepts Data Flow Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-monitoring.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Monitor Data Flows
data-factory Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-overview.md
Mapping data flows provide an entirely visual experience with no coding required
Data flows are created from the factory resources pane like pipelines and datasets. To create a data flow, select the plus sign next to **Factory Resources**, and then select **Data Flow**. -
+![Screenshot showing a new data flow.](media/concepts-data-flow-overview/new-data-flow.png)
This action takes you to the data flow canvas, where you can create your transformation logic. Select **Add source** to start configuring your source transformation. For more information, see [Source transformation](data-flow-source.md). ## Authoring data flows
Each transformation contains at least four configuration tabs.
The first tab in each transformation's configuration pane contains the settings specific to that transformation. For more information, see that transformation's documentation page. -
+![Screenshot showing the source settings tab.](media/concepts-data-flow-overview/source-1.png)
#### Optimize The **Optimize** tab contains settings to configure partitioning schemes. To learn more about how to optimize your data flows, see the [mapping data flow performance guide](concepts-data-flow-performance.md). -
+![Screenshot shows the Optimize tab, which includes Partition option, Partition type, and Number of partitions.](media/concepts-data-flow-overview/optimize.png)
#### Inspect The **Inspect** tab provides a view into the metadata of the data stream that you're transforming. You can see column counts, the columns changed, the columns added, data types, the column order, and column references. **Inspect** is a read-only view of your metadata. You don't need to have debug mode enabled to see metadata in the **Inspect** pane.
Mapping data flows are available in the following regions in ADF:
* Learn how to create a [source transformation](data-flow-source.md). * Learn how to build your data flows in [debug mode](concepts-data-flow-debug-mode.md).+
data-factory Concepts Data Flow Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance.md
Previously updated : 09/29/2021 Last updated : 10/25/2022 # Mapping data flows performance and tuning guide
data-factory Concepts Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-linked-services.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Linked services in Azure Data Factory and Azure Synapse Analytics
data-factory Connector Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-github.md
Previously updated : 09/09/2021 Last updated : 10/24/2022
Use the following steps to create a linked service to GitHub in the Azure portal
1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
- # [Azure Data Factory](#tab/data-factory)
+ # [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/connector-github/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Azure Synapse](#tab/synapse-analytics)
-
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+ # [Azure Synapse](#tab/synapse-analytics)
+ :::image type="content" source="media/connector-github/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
2. Search for GitHub and select the GitHub connector. :::image type="content" source="media/connector-github/github-connector.png" alt-text="Screenshot of the GitHub connector.":::
The following properties are supported for the GitHub linked service.
## Next steps
-Create a [source dataset](data-flow-source.md) in mapping data flow.
+Create a [source dataset](data-flow-source.md) in mapping data flow.
data-factory Connector Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy data from or to MongoDB using Azure Data Factory or Synapse Analytics
data-factory Connector Odbc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odbc.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy data from and to ODBC data stores using Azure Data Factory or Synapse Analytics
data-factory Connector Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-postgresql.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy data from PostgreSQL using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Business Warehouse Open Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse-open-hub.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy data from SAP Business Warehouse via Open Hub using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Business Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy data from SAP Business Warehouse using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-ecc.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy data from SAP ECC using Azure Data Factory or Synapse Analytics
data-factory Connector Troubleshoot Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-files.md
Previously updated : 10/01/2021 Last updated : 10/23/2022
data-factory Continuous Integration Delivery Automate Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-automate-azure-pipelines.md
Previously updated : 09/24/2021 Last updated : 10/25/2022
The following is a guide for setting up an Azure Pipelines release that automate
## Requirements -- An Azure subscription linked to Visual Studio Team Foundation Server or Azure Repos that uses the [Azure Resource Manager service endpoint](/azure/devops/pipelines/library/service-endpoints#sep-azure-resource-manager).
+- An Azure subscription linked to Azure DevOps Server (formerly Visual Studio Team Foundation Server) or Azure Repos that uses the [Azure Resource Manager service endpoint](/azure/devops/pipelines/library/service-endpoints#sep-azure-resource-manager).
- A data factory configured with Azure Repos Git integration.
data-factory Continuous Integration Delivery Sample Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-sample-script.md
Previously updated : 09/24/2021 Last updated : 10/25/2022
data-factory Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery.md
If you're using Git integration with your data factory and have a CI/CD pipeline
- **Resource naming**. Due to ARM template constraints, issues in deployment may arise if your resources contain spaces in the name. The Azure Data Factory team recommends using '_' or '-' characters instead of spaces for resources. For example, 'Pipeline_1' would be a preferable name over 'Pipeline 1'. -- **Exposure control and feature flags**. When working on a team, there are instances where you may merge changes, but don't want them to be run in elevated environments such as PROD and QA. To handle this scenario, the ADF team recommends [the DevOps concept of using feature flags](/devops/operate/progressive-experimentation-feature-flags). In ADF, you can combine [global parameters](author-global-parameters.md) and the [if condition activity](control-flow-if-condition-activity.md) to hide sets of logic based upon these environment flags.
+- **Exposure control and feature flags**. When working in a team, there are instances where you may merge changes, but don't want them to be run in elevated environments such as PROD and QA. To handle this scenario, the ADF team recommends [the DevOps concept of using feature flags](/devops/operate/progressive-experimentation-feature-flags). In ADF, you can combine [global parameters](author-global-parameters.md) and the [if condition activity](control-flow-if-condition-activity.md) to hide sets of logic based upon these environment flags.
To learn how to set up a feature flag, see the below video tutorial:
data-factory Control Flow Execute Pipeline Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-pipeline-activity.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Execute Pipeline activity in Azure Data Factory and Synapse Analytics
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-expression-language-functions.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Expressions and functions in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Fail Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-fail-activity.md
Previously updated : 09/22/2021 Last updated : 10/25/2022 # Execute a Fail activity in Azure Data Factory and Synapse Analytics
data-factory Control Flow Filter Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-filter-activity.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Filter activity in Azure Data Factory and Synapse Analytics pipelines
data-factory Control Flow Set Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-set-variable-activity.md
Previously updated : 09/09/2021 Last updated : 10/24/2022 + # Set Variable Activity in Azure Data Factory and Azure Synapse Analytics [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
Use the Set Variable activity to set the value of an existing variable of type S
## Create an Append Variable activity with UI To use a Set Variable activity in a pipeline, complete the following steps:- 1. Select the background of the pipeline canvas and use the Variables tab to add a variable:
- :::image type="content" source="media/control-flow-activities-common/add-pipeline-array-variable.png" alt-text="Shows an empty pipeline canvas with the Variables tab selected having an array type variable named TestVariable.":::
-
-2. Search for _Set Variable_ in the pipeline Activities pane, and drag a Set Variable activity to the pipeline canvas.
-1. Select the Set Variable activity on the canvas if it is not already selected, and its **Variables** tab, to edit its details.
+1. Search for _Set Variable_ in the pipeline Activities pane, and drag a Set Variable activity to the pipeline canvas.
+1. Select the Set Variable activity on the canvas if it is not already selected, and its **Variables** tab, to edit its details.
1. Select the variable for the Name property.
-1. Enter an expression to set the value. This can be a literal string expression, or any combination of dynamic [expressions, functions](control-flow-expression-language-functions.md), [system variables](control-flow-system-variables.md), or [outputs from other activities](how-to-expression-language-functions.md#examples-of-using-parameters-in-expressions).
-
- :::image type="content" source="media/control-flow-set-variable-activity/set-variable-activity.png" alt-text="Shows the UI for a Set Variable activity.":::
-
+1. Enter an expression to set the value for the variable. This expression can be a literal string expression, or any combination of dynamic [expressions, functions](control-flow-expression-language-functions.md), [system variables](control-flow-system-variables.md), or [outputs from other activities](how-to-expression-language-functions.md#examples-of-using-parameters-in-expressions).
## Type properties
variableName | Name of the variable that is set by this activity | yes
## Incrementing a variable
-A common scenario involving variables is using a variable as an iterator within an until or foreach activity. In a set variable activity you cannot reference the variable being set in the `value` field. To workaround this limitation, set a temporary variable and then create a second set variable activity. The second set variable activity sets the value of the iterator to the temporary variable.
-
+A common scenario involving variables is using a variable as an iterator within an until or foreach activity. In a set variable activity, you cannot reference the variable being set in the `value` field. To work around this limitation, set a temporary variable and then create a second set variable activity. The second set variable activity sets the value of the iterator to the temporary variable.
Below is an example of this pattern:- ``` json {
Variables are currently scoped at the pipeline level. This means that they are n
Learn about another related control flow activity: - [Append Variable Activity](control-flow-append-variable-activity.md)+
data-factory Control Flow Switch Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-switch-activity.md
Previously updated : 06/23/2021 Last updated : 10/25/2022
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-system-variables.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # System variables supported by Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Until Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-until-activity.md
Previously updated : 09/09/2021 Last updated : 10/25/2022
data-factory Control Flow Validation Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-validation-activity.md
Title: Validation activity
-description: The Validation activity in Azure Data Factory and Synapse Analytics delays execution of the pipeline until it a dataset is validated with user-defined criteria.
+description: The Validation activity in Azure Data Factory and Synapse Analytics delays execution of the pipeline until a dataset is validated with user-defined criteria.
Previously updated : 09/09/2021 Last updated : 10/24/2022 # Validation activity in Azure Data Factory and Synapse Analytics pipelines
To use a Validation activity in a pipeline, complete the following steps:
1. Search for _Validation_ in the pipeline Activities pane, and drag a Validation activity to the pipeline canvas. 1. Select the new Validation activity on the canvas if it is not already selected, and its **Settings** tab, to edit its details.-
- :::image type="content" source="media/control-flow-validation-activity/validation-activity.png" alt-text="Shows the UI for a Validation activity.":::
- 1. Select a dataset, or define a new one by selecting the New button. For file based datasets like the delimited text example above, you can select either a specific file, or a folder. When a folder is selected, the Validation activity allows you to ignore validation of the existence of child items in the folder, or require whether child items exist or not. 1. The output of the Validation activity can be used as an input to any other activities, and referenced within those activities for any of their properties using dynamic expressions. ## Syntax + ```json {
- "name": "Validation_Activity",
- "type": "Validation",
- "typeProperties": {
- "dataset": {
- "referenceName": "Storage_File",
- "type": "DatasetReference"
- },
- "timeout": "7.00:00:00",
- "sleep": 10,
- "minimumSize": 20
- }
+    "name": "Validation_Activity",
+    "type": "Validation",
+    "typeProperties": {
+        "dataset": {
+            "referenceName": "Storage_File",
+            "type": "DatasetReference"
+        },
+        "timeout": "0.12:00:00",
+        "sleep": 10,
+        "minimumSize": 20
+    }
}, {
- "name": "Validation_Activity_Folder",
- "type": "Validation",
- "typeProperties": {
- "dataset": {
- "referenceName": "Storage_Folder",
- "type": "DatasetReference"
- },
- "timeout": "7.00:00:00",
- "sleep": 10,
- "childItems": true
- }
+    "name": "Validation_Activity_Folder",
+    "type": "Validation",
+    "typeProperties": {
+        "dataset": {
+            "referenceName": "Storage_Folder",
+            "type": "DatasetReference"
+        },
+        "timeout": "0.12:00:00",
+        "sleep": 10,
+        "childItems": true
+    }
} ```-- ## Type properties
-Property | Description | Allowed values | Required
| -- | -- | --
-name | Name of the 'Validation' activity | String | Yes |
-type | Must be set to **Validation**. | String | Yes |
-dataset | Activity will block execution until it has validated this dataset reference exists and that it meets the specified criteria, or timeout has been reached. Dataset provided should support "MinimumSize" or "ChildItems" property. | Dataset reference | Yes |
-timeout | Specifies the timeout for the activity to run. If no value is specified, default value is 7 days ("7.00:00:00"). Format is d.hh:mm:ss | String | No |
-sleep | A delay in seconds between validation attempts. If no value is specified, default value is 10 seconds. | Integer | No |
-childItems | Checks if the folder has child items. Can be set to-true : Validate that the folder exists and that it has items. Blocks until at least one item is present in the folder or timeout value is reached.-false: Validate that the folder exists and that it is empty. Blocks until folder is empty or until timeout value is reached. If no value is specified, activity will block until the folder exists or until timeout is reached. | Boolean | No |
-minimumSize | Minimum size of a file in bytes. If no value is specified, default value is 0 bytes | Integer | No |
+|Property | Description | Allowed values | Required|
+|-- | -- | -- | --|
+|name | Name of the 'Validation' activity | String | Yes |
+|type | Must be set to **Validation**. | String | Yes |
+|dataset | Activity will block execution until it has validated this dataset reference exists and that it meets the specified criteria, or timeout has been reached. Dataset provided should support "MinimumSize" or "ChildItems" property. | Dataset reference | Yes |
+|timeout | Specifies the timeout for the activity to run. If no value is specified, default value is 12 hours ("0.12:00:00"). Format is d.hh:mm:ss | String | No |
+|sleep | A delay in seconds between validation attempts. If no value is specified, default value is 10 seconds. | Integer | No |
+|childItems | Checks if the folder has child items. Can be set to:<br/>- true: Validate that the folder exists and that it has items. Blocks until at least one item is present in the folder or the timeout value is reached.<br/>- false: Validate that the folder exists and that it is empty. Blocks until the folder is empty or until the timeout value is reached.<br/>If no value is specified, the activity will block until the folder exists or until the timeout is reached. | Boolean | No |
+|minimumSize | Minimum size of a file in bytes. If no value is specified, default value is 0 bytes | Integer | No |
## Next steps
See other supported control flow activities:
- [Lookup Activity](control-flow-lookup-activity.md) - [Web Activity](control-flow-web-activity.md) - [Until Activity](control-flow-until-activity.md)+
data-factory Control Flow Wait Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-wait-activity.md
Previously updated : 09/09/2021 Last updated : 10/24/2022 # Execute Wait activity in Azure Data Factory and Synapse Analytics
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-web-activity.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Web activity in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Webhook Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-webhook-activity.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Webhook activity in Azure Data Factory
See the following supported control flow activities:
- [Get Metadata Activity](control-flow-get-metadata-activity.md) - [Lookup Activity](control-flow-lookup-activity.md) - [Web Activity](control-flow-web-activity.md)-- [Until Activity](control-flow-until-activity.md)
+- [Until Activity](control-flow-until-activity.md)
data-factory Copy Activity Fault Tolerance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-fault-tolerance.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Fault tolerance of copy activity in Azure Data Factory and Synapse Analytics pipelines
data-factory Copy Activity Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-monitoring.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Monitor copy activity
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-overview.md
To copy data from a source to a sink, the service that runs the Copy activity pe
:::image type="content" source="media/copy-activity-overview/copy-activity-overview.png" alt-text="Copy activity overview":::
+> [!NOTE]
+> If a self-hosted integration runtime is used for either the source or sink data store in a copy activity, both the source and sink must be accessible from the server hosting the integration runtime for the copy activity to succeed.
+ ## Supported data stores and formats [!INCLUDE [data-factory-v2-supported-data-stores](includes/data-factory-v2-supported-data-stores.md)]
data-factory Copy Activity Performance Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance-troubleshooting.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Troubleshoot copy activity performance
data-factory Copy Activity Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy activity performance and scalability guide
data-factory Copy Activity Schema And Type Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-schema-and-type-mapping.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Schema and data type mapping in copy activity
data-factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-data-tool.md
Previously updated : 09/09/2021 Last updated : 10/25/2022
data-factory Create Azure Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-integration-runtime.md
description: Learn how to create Azure integration runtime in Azure Data Factory
Previously updated : 09/09/2021 Last updated : 10/24/2022 + # How to create and configure Azure Integration Runtime [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
Integration Runtime can be created using the **Set-AzDataFactoryV2IntegrationRun
```powershell Set-AzDataFactoryV2IntegrationRuntime -DataFactoryName "SampleV2DataFactory1" -Name "MySampleAzureIR" -ResourceGroupName "ADFV2SampleRG" -Type Managed -Location "West Europe"
-```
+```
For Azure IR, the type must be set to **Managed**. You do not need to specify compute details because it is fully managed elastically in cloud. Specify compute details like node size and node count when you would like to create Azure-SSIS IR. For more information, see [Create and Configure Azure-SSIS IR](create-azure-ssis-integration-runtime.md). You can configure an existing Azure IR to change its location using the Set-AzDataFactoryV2IntegrationRuntime PowerShell cmdlet. For more information about the location of an Azure IR, see [Introduction to integration runtime](concepts-integration-runtime.md).
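For example, a minimal sketch that reuses the cmdlet and sample names from the snippet above to move the Azure IR to a different region (substitute your own factory, IR, and resource group names):

```powershell
# Sketch: change the region of the existing sample Azure IR by rerunning the cmdlet
# with a different -Location value. All names are the sample values used above.
Set-AzDataFactoryV2IntegrationRuntime -DataFactoryName "SampleV2DataFactory1" `
    -Name "MySampleAzureIR" `
    -ResourceGroupName "ADFV2SampleRG" `
    -Type Managed `
    -Location "North Europe"
```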
Use the following steps to create an Azure IR using UI.
1. On the home page for the service, select the [Manage tab](./author-management-hub.md) from the leftmost pane. # [Azure Data Factory](#tab/data-factory)
-
- :::image type="content" source="media/doc-common-process/get-started-page-manage-button.png" alt-text="The home page Manage button":::
+
+ :::image type="content" source="media/create-azure-integration-runtime/get-started-page-manage-button.png" alt-text="Screenshot showing the home page Manage button.":::
# [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/get-started-page-manage-button-synapse.png" alt-text="The home page Manage button":::
+ :::image type="content" source="media/doc-common-process/get-started-page-manage-button-synapse.png" alt-text="Screenshot showing the home page Manage button.":::
-2. Select **Integration runtimes** on the left pane, and then select **+New**.
+
+
+1. Select **Integration runtimes** on the left pane, and then select **+New**.
# [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/create-azure-integration-runtime/manage-new-integration-runtime.png" alt-text="Screenshot that highlights integration runtimes in the left pane and the +New button.":::
- :::image type="content" source="media/doc-common-process/manage-new-integration-runtime.png" alt-text="Screenshot that highlights Integration runtimes in the left pane and the +New button.":::
-
# [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/manage-new-integration-runtime-synapse.png" alt-text="Screenshot that highlights Integration runtimes in the left pane and the +New button.":::
-
-3. On the **Integration runtime setup** page, select **Azure, Self-Hosted**, and then select **Continue**.
+ :::image type="content" source="media/doc-common-process/manage-new-integration-runtime-synapse.png" alt-text="Screenshot that highlights integration runtimes in the left pane and the +New button.":::
+
+
+1. On the **Integration runtime setup** page, select **Azure, Self-Hosted**, and then select **Continue**.
+ :::image type="content" source="media/create-azure-integration-runtime/integration-runtime-setup.png" alt-text="Screenshot showing the Azure self-hosted integration runtime option.":::
1. On the following page, select **Azure** to create an Azure IR, and then select **Continue**.
- :::image type="content" source="media/create-azure-integration-runtime/new-azure-integration-runtime.png" alt-text="Create an integration runtime":::
-
+ :::image type="content" source="media/create-azure-integration-runtime/new-azure-integration-runtime.png" alt-text="Screenshot that shows create an Azure integration runtime.":::
1. Enter a name for your Azure IR, and select **Create**.
- :::image type="content" source="media/create-azure-integration-runtime/create-azure-integration-runtime.png" alt-text="Create an Azure IR":::
-
+ :::image type="content" source="media/create-azure-integration-runtime/create-azure-integration-runtime.png" alt-text="Screenshot that shows the final step to create the Azure integration runtime.":::
1. You'll see a pop-up notification when the creation completes. On the **Integration runtimes** page, make sure that you see the newly created IR in the list.-
+ :::image type="content" source="media/create-azure-integration-runtime/integration-runtime-in-the-list.png" alt-text="Screenshot showing the Azure integration runtime in the list.":::
+
> [!NOTE] > If you want to enable managed virtual network on Azure IR, please see [How to enable managed virtual network](managed-virtual-network-private-endpoint.md)
See the following articles on how to create other types of integration runtimes:
- [Create self-hosted integration runtime](create-self-hosted-integration-runtime.md) - [Create Azure-SSIS integration runtime](create-azure-ssis-integration-runtime.md)+
data-factory Create Shared Self Hosted Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
Previously updated : 01/26/2022 Last updated : 09/22/2022 # Create a shared self-hosted integration runtime in Azure Data Factory
data-factory Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/credentials.md
Previously updated : 07/19/2021 Last updated : 10/25/2022
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-event-trigger.md
This section shows you how to create a storage event trigger within the Azure Da
5. Select your storage account from the Azure subscription dropdown or manually using its Storage account resource ID. Choose which container you wish the events to occur on. Container selection is required, but be mindful that selecting all containers can lead to a large number of events. > [!NOTE]
- > The Storage Event Trigger currently supports only Azure Data Lake Storage Gen2 and General-purpose version 2 storage accounts. Due to an Azure Event Grid limitation, Azure Data Factory only supports a maximum of 500 storage event triggers per storage account. If you hit the limit, please contact support for recommendations and increasing the limit upon evaluation by Event Grid team.
+ > The Storage Event Trigger currently supports only Azure Data Lake Storage Gen2 and General-purpose version 2 storage accounts. If you are working with SFTP Storage Events, you also need to specify the SFTP Data API under the filtering section. Due to an Azure Event Grid limitation, Azure Data Factory only supports a maximum of 500 storage event triggers per storage account. If you hit the limit, please contact support for recommendations on increasing the limit upon evaluation by the Event Grid team.
> [!NOTE] > To create a new or modify an existing Storage Event Trigger, the Azure account used to log into the service and publish the storage event trigger must have appropriate role based access control (Azure RBAC) permission on the storage account. No additional permission is required: Service Principal for the Azure Data Factory and Azure Synapse does _not_ need special permission to either the Storage account or Event Grid. For more information about access control, see [Role based access control](#role-based-access-control) section.
data-factory Load Azure Data Lake Storage Gen2 From Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-storage-gen2-from-gen1.md
Previously updated : 08/06/2021 Last updated : 10/25/2022 # Copy data from Azure Data Lake Storage Gen1 to Gen2 with Azure Data Factory
ADF offers a serverless architecture that allows parallelism at different levels
Customers have successfully migrated petabytes of data consisting of hundreds of millions of files from Data Lake Storage Gen1 to Gen2, with a sustained throughput of 2 GBps and higher.
-you can achieve great data movement speeds through different levels of parallelism:
+You can achieve greater data movement speeds by applying different levels of parallelism:
- A single copy activity can take advantage of scalable compute resources: when using Azure Integration Runtime, you can specify up to 256 [data integration units (DIUs)](copy-activity-performance-features.md#data-integration-units) for each copy activity in a serverless manner; when using self-hosted Integration Runtime, you can manually scale up the machine or scale out to multiple machines (up to 4 nodes), and a single copy activity will partition its file set across all nodes. - A single copy activity reads from and writes to the data store using multiple threads.
You can also enable [fault tolerance](copy-activity-fault-tolerance.md) in copy
### Permissions
-In Data Factory, the [Data Lake Storage Gen1 connector](connector-azure-data-lake-store.md) supports service principal and managed identity for Azure resource authentications. The [Data Lake Storage Gen2 connector](connector-azure-data-lake-storage.md) supports account key, service principal, and managed identity for Azure resource authentications. To make Data Factory able to navigate and copy all the files or access control lists (ACLs) you need, grant high enough permissions for the account you provide to access, read, or write all files and set ACLs if you choose to. Grant it a super-user or owner role during the migration period.
+In Data Factory, the [Data Lake Storage Gen1 connector](connector-azure-data-lake-store.md) supports service principal and managed identity for Azure resource authentications. The [Data Lake Storage Gen2 connector](connector-azure-data-lake-storage.md) supports account key, service principal, and managed identity for Azure resource authentications. To make Data Factory able to navigate and copy all the files or access control lists (ACLs) you need, grant the account permissions high enough to access, read, or write all files and set ACLs if you choose to. You should grant the account a super-user or owner role during the migration period and remove the elevated permissions once the migration is completed.
## Next steps
In Data Factory, the [Data Lake Storage Gen1 connector](connector-azure-data-lak
> [!div class="nextstepaction"] > [Copy activity overview](copy-activity-overview.md) > [Azure Data Lake Storage Gen1 connector](connector-azure-data-lake-store.md)
-> [Azure Data Lake Storage Gen2 connector](connector-azure-data-lake-storage.md)
+> [Azure Data Lake Storage Gen2 connector](connector-azure-data-lake-storage.md)
data-factory Monitor Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-metrics-alerts.md
Previously updated : 09/02/2021 Last updated : 10/25/2022 # Data Factory metrics and alerts
data-factory Monitor Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-programmatically.md
Title: Programmatically monitor an Azure data factory
+ Title: Programmatically monitor an Azure Data Factory
description: Learn how to monitor a pipeline in a data factory by using different software development kits (SDKs).
-# Programmatically monitor an Azure data factory
+# Programmatically monitor an Azure Data Factory
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
data-factory Monitor Schema Logs Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-schema-logs-events.md
Previously updated : 09/02/2021 Last updated : 10/25/2022 # Schema of logs and events
data-factory Parameters Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameters-data-flow.md
For example, if you wanted to map a string column based upon a parameter `column
:::image type="content" source="media/data-flow/parameterize-column-name.png" alt-text="Passing in a column name as a parameter":::
+> [!NOTE]
+> In data flow expressions, string interpolation (substituting variables inside of the string) is not supported. Instead, concatenate the expression into string values. For example, `'string part 1' + $variable + 'string part 2'`
+ ## Next steps * [Execute data flow activity](control-flow-execute-data-flow-activity.md) * [Control flow expressions](control-flow-expression-language-functions.md)
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-concepts.md
The prices used in these examples below are hypothetical and are not intended to
- [Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md) - [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md) + ## Next steps Now that you understand the pricing for Azure Data Factory, you can get started!
data-factory Quickstart Create Data Factory Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md
Previously updated : 07/05/2021 Last updated : 10/25/2022 # Quickstart: Create an Azure Data Factory using ARM template
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-rest-api.md
Title: Create an Azure data factory using REST API
-description: Create an Azure data factory pipeline to copy data from one location in Azure Blob storage to another location.
+ Title: Create an Azure Data Factory using REST API
+description: Create an Azure Data Factory pipeline to copy data from one location in Azure Blob storage to another location.
-# Quickstart: Create an Azure data factory and pipeline by using the REST API
+# Quickstart: Create an Azure Data Factory and pipeline by using the REST API
> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"] > * [Version 1](v1/data-factory-copy-data-from-azure-blob-storage-to-sql-database.md)
Azure Data Factory is a cloud-based data integration service that allows you to create data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Using Azure Data Factory, you can create and schedule data-driven workflows (called pipelines) that can ingest data from disparate data stores, process/transform the data by using compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning, and publish output data to data stores such as Azure Synapse Analytics for business intelligence (BI) applications to consume.
-This quickstart describes how to use REST API to create an Azure data factory. The pipeline in this data factory copies data from one location to another location in an Azure blob storage.
+This quickstart describes how to use the REST API to create an Azure Data Factory. The pipeline in this data factory copies data from one location to another location in Azure Blob storage.
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
$response.Content
Note the following points:
-* The name of the Azure data factory must be globally unique. If you receive the following error, change the name and try again.
+* The name of the Azure Data Factory must be globally unique. If you receive the following error, change the name and try again.
``` Data factory name "ADFv2QuickStartDataFactory" is not available.
data-factory Quickstart Create Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory.md
A quick creation experience provided in the Azure Data Factory Studio to enable
> If you see that the web browser is stuck at "Authorizing", clear the **Block third-party cookies and site data** check box. Or keep it selected, create an exception for **login.microsoftonline.com**, and then try to open the app again. ## Next steps
-Learn how to use Azure Data Factory to copy data from one location to another with the [Hello World - How to copy data](tutorial-copy-data-portal.md) tutorial.
+Learn how to use Azure Data Factory to copy data from one location to another with the [Hello World - How to copy data](quickstart-hello-world-copy-data-tool.md) tutorial.
Learn how to create a data flow with Azure Data Factory in [Create a data flow](data-flow-create.md).
data-factory Quickstart Hello World Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-hello-world-copy-data-tool.md
Previously updated : 07/05/2021 Last updated : 10/24/2022
data-factory Tutorial Control Flow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-control-flow-portal.md
Previously updated : 10/04/2022 Last updated : 10/25/2022 # Branching and chaining activities in an Azure Data Factory pipeline using the Azure portal
data-factory Tutorial Incremental Copy Multiple Tables Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-multiple-tables-portal.md
Last updated 09/26/2022
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-In this tutorial, you create an Azure data factory with a pipeline that loads delta data from multiple tables in a SQL Server database to a database in Azure SQL Database.
+In this tutorial, you create an Azure Data Factory with a pipeline that loads delta data from multiple tables in a SQL Server database to a database in Azure SQL Database.
You perform the following steps in this tutorial:
END
3. In the **New data factory** page, enter **ADFMultiIncCopyTutorialDF** for the **name**.
- The name of the Azure data factory must be **globally unique**. If you see a red exclamation mark with the following error, change the name of the data factory (for example, yournameADFIncCopyTutorialDF) and try creating again. See [Data Factory - Naming Rules](naming-rules.md) article for naming rules for Data Factory artifacts.
+ The name of the Azure Data Factory must be **globally unique**. If you see a red exclamation mark with the following error, change the name of the data factory (for example, yournameADFIncCopyTutorialDF) and try creating again. See [Data Factory - Naming Rules](naming-rules.md) article for naming rules for Data Factory artifacts.
`Data factory name "ADFIncCopyTutorialDF" is not available`
You performed the following steps in this tutorial:
Advance to the following tutorial to learn about transforming data by using a Spark cluster on Azure: > [!div class="nextstepaction"]
->[Incrementally load data from Azure SQL Database to Azure Blob storage by using Change Tracking technology](tutorial-incremental-copy-change-tracking-feature-portal.md)
+>[Incrementally load data from Azure SQL Database to Azure Blob storage by using Change Tracking technology](tutorial-incremental-copy-change-tracking-feature-portal.md)
data-factory Wrangling Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-functions.md
Keep and Remove Top, Keep Range (corresponding M functions,
| -- | -- | | Table.PromoteHeaders | Not supported. The same result can be achieved by setting "First row as header" in the dataset. | | Table.CombineColumns | This is a common scenario that isn't directly supported but can be achieved by adding a new column that concatenates two given columns. For example, Table.AddColumn(RemoveEmailColumn, "Name", each [FirstName] & " " & [LastName]) |
-| Table.TransformColumnTypes | This is supported in most cases. The following scenarios are unsupported: transforming string to currency type, transforming string to time type, transforming string to Percentage type. |
+| Table.TransformColumnTypes | This is supported in most cases. The following scenarios are unsupported: transforming string to currency type, transforming string to time type, transforming string to Percentage type, and transforming with locale. |
| Table.NestedJoin | Just doing a join will result in a validation error. The columns must be expanded for it to work. | | Table.RemoveLastN | Remove bottom rows isn't supported. | | Table.RowCount | Not supported, but can be achieved by adding a custom column containing the value 1, then aggregating that column with List.Sum. Table.Group is supported. |
databox-online Azure Stack Edge Gpu Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-technical-specifications-compliance.md
The Azure Stack Edge Pro device has the following specifications for compute and
| CPU: usable | 40 vCPUs | | Memory type | Dell Compatible 16 GB PC4-23400 DDR4-2933Mhz 2Rx8 1.2v ECC Registered RDIMM | | Memory: raw | 128 GB RAM (8 x 16 GB) |
-| Memory: usable | 102 GB RAM |
+| Memory: usable | 96 GB RAM |
## Compute acceleration specifications
databox Data Box Disk Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-copy-data.md
Previously updated : 11/09/2021 Last updated : 10/12/2022 # Customer intent: As an IT admin, I need to be able to order Data Box Disk to upload on-premises data from my server onto Azure.
Perform the following steps to connect and copy data from your computer to the D
A container is created in the Azure storage account for each subfolder under BlockBlob and PageBlob folders. All files under BlockBlob and PageBlob folders are copied into a default container `$root` under the Azure Storage account. Any files in the `$root` container are always uploaded as block blobs.
- Copy files to a folder within *AzureFile* folder. A sub-folder within *AzureFile* folder creates a fileshare. Files copied directly to *AzureFile* folder fail and are uploaded as block blobs.
+ Copy files to a folder within the *AzureFile* folder. All files under the *AzureFile* folder will be uploaded as files to a default container of type "databox-format-Guid" (for example, databox-azurefile-7ee19cfb3304122d940461783e97bf7b4290a1d7).
If files and folders exist in the root directory, then you must move those to a different folder before you begin data copy.
databox Data Box System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-system-requirements.md
Previously updated : 10/07/2022 Last updated : 10/21/2022 # Azure Data Box system requirements
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Review the findings from these vulnerability scanners and respond to them all fr
Learn more on the following pages: - [Defender for Cloud's integrated Qualys scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md)-- [Identify vulnerabilities in images in Azure container registries](defender-for-containers-va-acr.md#identify-vulnerabilities-in-images-in-other-container-registries)
+- [Identify vulnerabilities in images in Azure container registries](defender-for-containers-va-acr.md)
+- [Identify vulnerabilities in images in AWS Elastic Container Registry](defender-for-containers-va-ecr.md)
## Enforce your security policy from the top down
defender-for-cloud Defender For Containers Va Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-va-acr.md
Title: Identify vulnerabilities in Azure Container Registry with Microsoft Defen
description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities. Previously updated : 09/11/2022 Last updated : 10/24/2022 # Use Defender for Containers to scan your Azure Container Registry images for vulnerabilities
-This page explains how to use Defender for Containers to scan the container images stored in your Azure Resource Manager-based Azure Container Registry, as part of the protections provided within Microsoft Defender for Cloud.
+This article explains how to use Defender for Containers to scan the container images stored in your Azure Resource Manager-based Azure Container Registry, as part of the protections provided within Microsoft Defender for Cloud.
To enable scanning of vulnerabilities in containers, you have to [enable Defender for Containers](defender-for-containers-enable.md). When the scanner, powered by Qualys, reports vulnerabilities, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
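As a hedged sketch, the Defender for Containers plan can also be enabled on a subscription from the Azure CLI; the `Containers` plan name is an assumption to verify against your environment:

```azurecli
# Sketch: enable the Microsoft Defender for Containers plan on the current subscription.
az security pricing create --name Containers --tier Standard
```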
The triggers for an image scan are:
- (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
- > [!NOTE]
- > **Windows containers**: There is no Defender agent for Windows containers, the Defender agent is deployed to a Linux node running in the cluster, to retrieve the running container inventory for your Windows nodes.
- >
- > Images that aren't pulled from ACR for deployment in AKS won't be checked and will appear under the **Not applicable** tab.
- >
- > Images that have been deleted from their ACR registry, but are still running, won't be reported on only 30 days after their last scan occurred in ACR.
+When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete.
-This scan typically completes within 2 minutes, but it might take up to 40 minutes.
+Also, check out the ability to scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Defender for DevOps](defender-for-devops-introduction.md).
-Also, check out the ability scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-containers-cicd.md).
+## Prerequisites
-## Identify vulnerabilities in images in Azure container registries
+Before you can scan your ACR images:
-To enable vulnerability scans of images stored in your Azure Resource Manager-based Azure Container Registry:
-
-1. [Enable Defender for Containers](defender-for-containers-enable.md) for your subscription. Defender for Containers is now ready to scan images in your registries.
+- [Enable Defender for Containers](defender-for-containers-enable.md) for your subscription. Defender for Containers is now ready to scan images in your registries.
>[!NOTE] > This feature is charged per image.
- When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete.
-
-1. [View and remediate findings as explained below](#view-and-remediate-findings).
-
-## Identify vulnerabilities in images in other container registries
-
-If you want to find vulnerabilities in images stored in other container registries, you can import the images into ACR and scan them.
+- If you want to find vulnerabilities in images stored in other container registries, you can import the images into ACR and scan them.
-You can also [scan images in Amazon AWS Elastic Container Registry](defender-for-containers-va-ecr.md) directly from the Azure portal.
-
-1. Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry. When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
+ Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry (a CLI sketch of this step follows the prerequisites list). When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md)
- When the scan completes (typically after approximately 2 minutes, but can be up to 15 minutes), findings are available as Defender for Cloud recommendations.
+ You can also [scan images in Amazon AWS Elastic Container Registry](defender-for-containers-va-ecr.md) directly from the Azure portal.
-1. [View and remediate findings as explained below](#view-and-remediate-findings).
+For a list of the types of images and container registries supported by Microsoft Defender for Containers, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#registries-and-images).
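A minimal sketch of the import step referenced in the prerequisites above, using `az acr import`; the registry name is a placeholder and the source shown is a public sample image:

```azurecli
# Sketch: import a public sample image into your Azure Container Registry so the
# built-in vulnerability assessment can scan it. Replace "myregistry" with your registry.
az acr import \
  --name myregistry \
  --source mcr.microsoft.com/hello-world:latest \
  --image hello-world:latest
```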
## View and remediate findings
Defender for Cloud filters and classifies findings from the scanner. When an ima
Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/sub-assessments/list). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
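For example, a hedged sketch using the Azure CLI resource-graph extension; the `securityresources` table and the sub-assessment type string are assumptions to verify in your environment:

```azurecli
# Sketch: list a few vulnerability sub-assessment findings through Azure Resource Graph.
az graph query -q "securityresources | where type == 'microsoft.security/assessments/subassessments' | project id, name, properties | limit 10"
```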
-### What registry types are scanned? What types are billed?
-
-For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md#additional-information). Defender for Containers doesn't scan unsupported registries that you connect to your Azure subscription.
### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry? Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it will expose security vulnerabilities.
defender-for-cloud Defender For Containers Va Ecr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-va-ecr.md
Defender for Containers lets you scan the container images stored in your Amazon AWS Elastic Container Registry (ECR) as part of the protections provided within Microsoft Defender for Cloud.
-To enable scanning of vulnerabilities in containers, you have to [connect your AWS account to Defender for Cloud](quickstart-onboard-aws.md) and [enable Defender for Containers](defender-for-containers-enable.md). The agentless scanner, powered by the open-source scanner Trivy, scans your ECR repositories and reports vulnerabilities. Defender for Containers creates resources in your AWS account, such as an ECS cluster in a dedicated VPC, internet gateway and an S3 bucket, so that images stay within your account for privacy and intellectual property protection. Resources are created in two AWS regions: us-east-1 and eu-central-1.
+To enable scanning of vulnerabilities in containers, you have to [connect your AWS account to Defender for Cloud](quickstart-onboard-aws.md) and [enable Defender for Containers](defender-for-containers-enable.md). The agentless scanner, powered by the open-source scanner Trivy, scans your ECR repositories and reports vulnerabilities.
-Defender for Cloud filters and classifies findings from the scanner. Images without vulnerabilities are marked as healthy and Defender for Cloud doesn't send notifications about healthy images to keep you from getting unwanted informational alerts.
+Defender for Containers creates resources in your AWS account to build an inventory of the software in your images. The scan then sends only the software inventory to Defender for Cloud. This architecture protects your information privacy and intellectual property, and also keeps the outbound network traffic to a minimum. Defender for Containers creates an ECS cluster in a dedicated VPC, an internet gateway, and an S3 bucket in the us-east-1 and eu-central-1 regions to build the software inventory.
+
+Defender for Cloud filters and classifies findings from the software inventory that the scanner creates. Images without vulnerabilities are marked as healthy and Defender for Cloud doesn't send notifications about healthy images to keep you from getting unwanted informational alerts.
The triggers for an image scan are:
The triggers for an image scan are:
Before you can scan your ECR images: - [Connect your AWS account to Defender for Cloud and enable Defender for Containers](quickstart-onboard-aws.md)-- You must have at least one free VPC in us-east-1 and eu-central-1.
+- You must have at least one free VPC in the `us-east-1` and `eu-central-1` regions to host the AWS resources that build the software inventory.
-> [!NOTE]
-> - Images that have at least one layer over 2GB are not scanned.
-> - Public repositories and manifest lists are not supported.
+For a list of the types of images not supported by Microsoft Defender for Containers, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md?tabs=aws-eks#images).
## Enable vulnerability assessment
To enable vulnerability assessment:
:::image type="content" source="media/defender-for-containers-va-ecr/aws-containers-enable-va.png" alt-text="Screenshot of the toggle to turn on vulnerability assessment for ECR images.":::
-1. Select **Next: Configure access**.
+1. Select **Save** > **Next: Configure access**.
1. Download the CloudFormation template.
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
There are some limitations to Defender for Cloud's identity and access protectio
- Identity recommendations aren't available for subscriptions with more than 6,000 accounts. In these cases, these types of subscriptions will be listed under Not applicable tab. - Identity recommendations aren't available for Cloud Solution Provider (CSP) partner's admin agents. - Identity recommendations donΓÇÖt identify accounts that are managed with a privileged identity management (PIM) system. If you're using a PIM tool, you might see inaccurate results in the **Manage access and permissions** control.
+- Identity recommendations don't support Azure AD Conditional Access policies that include Directory Roles instead of users and groups.
## Next steps To learn more about recommendations that apply to other Azure resource types, see the following article: -- [Protecting your network in Microsoft Defender for Cloud](protect-network-resources.md)
+- [Protecting your network in Microsoft Defender for Cloud](protect-network-resources.md)
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 08/10/2022 Last updated : 10/24/2022
The **tabs** below show the features that are available, by environment, for Mic
| Aspect | Details | |--|--| | Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries requires access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
-| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6,6,7,8 <br> • Amazon Linux 1,2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11,12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
+| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go | ### Kubernetes distributions and configurations
The **tabs** below show the features that are available, by environment, for Mic
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you will need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
The **tabs** below show the features that are available, by environment, for Mic
#### Private link
-Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workstation except those that are configured to send traffic through Azure Monitor Private Link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
+Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
Allowing data ingestion to occur only through the Private Link Scope on your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## Additional information
+## Additional environment information
+
+### Images
+
+| Aspect | Details |
+|--|--|
+| Registries and images | **Unsupported** <br>• Images that have at least one layer over 2 GB<br> • Public repositories and manifest lists <br>• Images in the AWS management account aren't scanned so that we don't create resources in the management account. |
### Kubernetes distributions and configurations
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you will need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
#### Private link
-Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workstation except those that are configured to send traffic through Azure Monitor Private Link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
+Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
Allowing data ingestion to occur only through the Private Link Scope on your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
Outbound proxy without authentication and outbound proxy with basic authenticati
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you will need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
Outbound proxy without authentication and outbound proxy with basic authenticati
#### Private link
-Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workstation except those that are configured to send traffic through Azure Monitor Private Link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
+Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
Allowing data ingestion to occur only through the Private Link Scope on your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
Outbound proxy without authentication and outbound proxy with basic authenticati
| Aspect | Details | |--|--| | Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries requires access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
-| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6,6,7,8 <br> • Amazon Linux 1,2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11,12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35 |
+| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go | ### Kubernetes distributions and configurations
Outbound proxy without authentication and outbound proxy with basic authenticati
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you will need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
Ensure your Kubernetes node is running on one of the verified supported operatin
#### Private link
-Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workstation except those that are configured to send traffic through Azure Monitor Private Link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
+Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
Allowing data ingestion to occur only through the Private Link Scope on your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
digital-twins Concepts Data Explorer Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-explorer-plugin.md
Once the target table is created, you can use the Azure Digital Twins plugin to
#### Example schema
-Here's an example of a schema that might be used to represent shared data.
+Here's an example of a schema that might be used to represent shared data. The example follows the Azure Data Explorer [data history schema](concepts-data-history.md#data-schema).
| `TimeStamp` | `SourceTimeStamp` | `TwinId` | `ModelId` | `Name` | `Value` | `RelationshipTarget` | `RelationshipID` | | | | | | | | | |
digital-twins Concepts Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-history.md
description: Understand data history for Azure Digital Twins. Previously updated : 03/28/2022 Last updated : 10/25/2022
For more of an introduction to data history, including a quick demo, watch the f
<iframe src="https://aka.ms/docs/player?id=2f9a9af4-1556-44ea-ab5f-afcfd6eb9c15" width="1080" height="530"></iframe>
-## Required resources and data flow
+## Resources and data flow
Data history requires the following resources: * Azure Digital Twins instance, with a [managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) enabled
Data moves through these resources in this order:
When working with data history, use the [2022-05-31](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/stable/2022-05-31) version of the APIs.
-### Required permissions
+### History from multiple Azure Digital Twins instances
+
+If you'd like, you can have multiple Azure Digital Twins instances historize twin property updates to the same Azure Data Explorer cluster.
+
+Each Azure Digital Twins instance will have its own data history connection targeting the same Azure Data Explorer cluster. Within the cluster, instances can send their twin data to either...
+* **different tables** in the Azure Data Explorer cluster.
+* **the same table** in the Azure Data Explorer cluster. To do this, specify the same Azure Data Explorer table name while [creating the data history connections](how-to-use-data-history.md#set-up-data-history-connection), as sketched in the example after this list. In the [data history table schema](#data-schema), the `ServiceId` column will contain the URL of the source Azure Digital Twins instance, so you can use this field to resolve which Azure Digital Twins instance emitted each record.
+
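A hedged CLI sketch of the same-table setup referenced in the list above; the `--adx-table-name` parameter and every resource name here are assumptions or placeholders:

```azurecli
# Sketch: two Azure Digital Twins instances historizing to one Azure Data Explorer table.
# Each instance keeps its own connection and event hub; only the target table is shared.
az dt data-history connection create adx --dt-name contoso-twins-east --cn adx-shared \
  --adx-cluster-name sharedcluster --adx-database-name twinhistory \
  --adx-table-name SharedTwinUpdates \
  --eventhub twins-east-hub --eventhub-namespace twins-east-ns

az dt data-history connection create adx --dt-name contoso-twins-west --cn adx-shared \
  --adx-cluster-name sharedcluster --adx-database-name twinhistory \
  --adx-table-name SharedTwinUpdates \
  --eventhub twins-west-hub --eventhub-namespace twins-west-ns
```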
+## Required permissions
In order to set up a data history connection, your Azure Digital Twins instance must have the following permissions to access the Event Hubs and Azure Data Explorer resources. These roles enable Azure Digital Twins to configure the event hub and Azure Data Explorer database on your behalf (for example, creating a table in the database). These permissions can optionally be removed after data history is set up. * Event Hubs resource: **Azure Event Hubs Data Owner**
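For example, a minimal sketch of granting the Event Hubs role above to the instance's managed identity with the Azure CLI; the object ID and scope values are placeholders:

```azurecli
# Sketch: grant the Azure Digital Twins managed identity the Event Hubs Data Owner role
# on the event hub used for data history. Both IDs below are placeholders.
az role assignment create \
  --assignee "<digital-twins-managed-identity-object-id>" \
  --role "Azure Event Hubs Data Owner" \
  --scope "<event-hub-resource-id>"
```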
Later, your Azure Digital Twins instance must have the following permission on t
## Creating a data history connection
-Once all the [required resources](#required-resources-and-data-flow) are set up, you can use the [Azure CLI](/cli/azure/what-is-azure-cli), [Azure portal](https://portal.azure.com), or the [Azure Digital Twins SDK](concepts-apis-sdks.md) to create the data history connection between them. The CLI command is part of the [az iot](/cli/azure/iot?view=azure-cli-latest&preserve-view=true) extension.
+Once all the [resources](#resources-and-data-flow) and [permissions](#required-permissions) are set up, you can use the [Azure CLI](/cli/azure/what-is-azure-cli), [Azure portal](https://portal.azure.com), or the [Azure Digital Twins SDK](concepts-apis-sdks.md) to create the data history connection between them. The CLI command set is [az dt data-history](/cli/azure/dt/data-history).
For instructions on how to set up a data history connection, see [Use data history with Azure Data Explorer](how-to-use-data-history.md).
digital-twins How To Use Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-data-history.md
Now that you've created the required resources, use the command below to create
# [CLI](#tab/cli)
-Use the following command to create a data history connection. By default, this command assumes all resources are in the same resource group as the Azure Digital Twins instance. You can also specify resources that are in different resource groups using the parameter options for this command, which can be displayed by running `az dt data-history connection create adx -h`.
-The command uses several local variables (`$connectionname`, `$dtname`, `$clustername`, `$databasename`, `$eventhub`, and `$eventhubnamespace`) that were created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
+Use the command in this section to create a data history connection.
+
+By default, this command assumes all resources are in the same resource group as the Azure Digital Twins instance. You can specify resources that are in different resource groups using the parameter options for this command, which can be displayed by running `az dt data-history connection create adx -h`. You can also see the full list of optional parameters, including how to specify a table name and more, in its reference documentation: [az dt data-history connection create adx](/cli/azure/dt/data-history/connection/create#az-dt-data-history-connection-create-adx).
+
+The command below uses several local variables (`$connectionname`, `$dtname`, `$clustername`, `$databasename`, `$eventhub`, and `$eventhubnamespace`) that were created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
```azurecli-interactive az dt data-history connection create adx --cn $connectionname --dt-name $dtname --adx-cluster-name $clustername --adx-database-name $databasename --eventhub $eventhub --eventhub-namespace $eventhubnamespace
energy-data-services How To Add More Data Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-add-more-data-partitions.md
Each partition provides the highest level of data isolation within a single depl
## Create a data partition
-1. **Open the "Data Partitions" menu-item from left-panel of MEDS overview page.**
+1. Open the "Data Partitions" menu item from the left panel of the MEDS overview page.
[![Screenshot for dynamic data partitions feature discovery from MEDS overview page. Find it under the 'advanced' section in menu-items.](media/how-to-add-more-data-partitions/dynamic-data-partitions-discovery-meds-overview-page.png)](media/how-to-add-more-data-partitions/dynamic-data-partitions-discovery-meds-overview-page.png#lightbox)
-2. **Select "Create"**
+2. Select "Create".
The page shows a table of all data partitions in your MEDS instance with the status of each data partition next to it. Selecting the "Create" option at the top opens a right pane for the next steps. [![Screenshot to help you locate the create button on the data partitions page. The 'create' button to add a new data partition is highlighted.](media/how-to-add-more-data-partitions/start-create-data-partition.png)](media/how-to-add-more-data-partitions/start-create-data-partition.png#lightbox)
-3. **Choose a name for your data partition**
+3. Choose a name for your data partition.
Each data partition name must be 1-10 characters long and a combination of lowercase letters, numbers, and hyphens only. The data partition name is prepended with the name of the MEDS instance. Choose a name for your data partition and select Create. As soon as you select Create, deployment of the underlying data partition resources, such as Azure Cosmos DB and Azure Storage accounts, starts.
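Because the naming rule is easy to get wrong, here's a minimal local check you can run before submitting a name. The regular expression simply encodes the stated rule and isn't part of the MEDS tooling.

```bash
# Check a candidate data partition name against the documented rule:
# 1-10 characters, lowercase letters, numbers, and hyphens only.
partitionname="dp1-test"

if [[ "$partitionname" =~ ^[a-z0-9-]{1,10}$ ]]; then
  echo "'$partitionname' looks valid"
else
  echo "'$partitionname' doesn't meet the naming rule"
fi
```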
energy-data-services How To Integrate Airflow Logs With Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-airflow-logs-with-azure-monitor.md
# Integrate airflow logs with Azure Monitor
-This article describes how you can start collecting Airflow Logs for your Microsoft Energy Data Services instances into Azure Monitor. This integration feature helps you debug Airflow DAG run failures.
+In this article, you'll learn how to start collecting Airflow Logs for your Microsoft Energy Data Services instances into Azure Monitor. This integration feature helps you debug Airflow DAG ([Directed Acyclic Graph](https://airflow.apache.org/docs/apache-airflow/stable/concepts/dags.html)) run failures.
## Prerequisites
Every Microsoft Energy Data Services instance comes inbuilt with an Azure Data F
To access logs via either of the two options above, you need to create a Diagnostic Setting. Each Diagnostic Setting has three basic parts:
-| Title | Description |
+| Part | Description |
|-|-|
| Name | This is the name of the diagnostic log. Ensure a unique name is set for each log. |
| Categories | Category of logs to send to each of the destinations. The set of categories will vary for each Azure service. Visit: [Supported Resource Log Categories](../azure-monitor/essentials/resource-logs-categories.md) |
To access logs via any of the above two options, you need to create a Diagnostic
Follow these steps to set up Diagnostic Settings:
-1. Open Microsoft Energy Data Services' "**Overview**" page
-1. Select "**Diagnostic Settings**" from the left panel
+1. Open Microsoft Energy Data Services' *Overview* page
+1. Select *Diagnostic Settings* from the left panel
[![Screenshot for Azure monitor diagnostic setting overview page. The page shows a list of existing diagnostic settings and the option to add a new one.](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-diagnostic-settings-overview-page.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-diagnostic-settings-overview-page.png#lightbox)
-1. Select "**Add diagnostic setting**"
+1. Select *Add diagnostic setting*
-1. Select "**Airflow Task Logs**" under Logs
+1. Select *Airflow Task Logs* under Logs
-1. Select "**Archive to a storage account**"
+1. Select *Archive to a storage account*
[![Screenshot for creating a diagnostic setting to archive logs to a storage account. The image shows the subscription and the storage account chosen for a diagnostic setting.](media/how-to-integrate-airflow-logs-with-azure-monitor/creating-diagnostic-setting-destination-storage-account.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/creating-diagnostic-setting-destination-storage-account.png#lightbox)
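If you prefer the CLI over the portal, the same diagnostic setting can be sketched with `az monitor diagnostic-settings create`. The resource ID and the exact log category identifier are assumptions here; list the categories your instance actually exposes before relying on the name.

```azurecli-interactive
# List the log categories available on your Microsoft Energy Data Services instance.
az monitor diagnostic-settings categories list --resource <meds-resource-id>

# Create a diagnostic setting that archives Airflow task logs to a storage account.
# The category name below is an assumption -- use the value returned by the command above.
az monitor diagnostic-settings create \
    --name airflow-task-logs-to-storage \
    --resource <meds-resource-id> \
    --storage-account <storage-account-id> \
    --logs '[{"category": "AirflowTaskLogs", "enabled": true}]'
```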
Follow these steps to set up Diagnostic Settings:
After a diagnostic setting is created for archiving Airflow task logs into a storage account, you can navigate to the storage account **overview** page. You can then use the "Storage Browser" on the left panel to find the right JSON file that you want to investigate. Browsing through different directories is intuitive as you move from a year to a month to a day.
-1. Navigate through **Containers**, available on the left panel.
+1. Navigate through *Containers*, available on the left panel.
[![Screenshot for exploring archived logs in the containers of the Storage Account. The container will show logs from all the sources set up.](media/how-to-integrate-airflow-logs-with-azure-monitor/storage-account-containers-page-showing-collected-logs-explorer.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/storage-account-containers-page-showing-collected-logs-explorer.png#lightbox)
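As an alternative to the Storage Browser, you can list the archived log blobs from the CLI. Diagnostic settings typically write into a container named `insights-logs-<category>`; the exact container name for Airflow task logs is an assumption here, so check your storage account first.

```azurecli-interactive
# List archived Airflow log blobs; replace the placeholders with your own names.
az storage blob list \
    --account-name <storage-account-name> \
    --container-name <insights-logs-container> \
    --auth-mode login \
    --output table
```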
You can integrate Airflow logs with Log Analytics Workspace by using **Diagnosti
## Working with the integrated Airflow Logs in Log Analytics Workspace
-Data is retrieved from a Log Analytics Workspace using a query written in Kusto Query Language (KQL). A set of precreated queries is available for many Azure services (not available for Airflow at the moment) so that you don't require knowledge of KQL to get started.
+Use Kusto Query Language (KQL) to retrieve the data you want from the collected Airflow logs in your Log Analytics Workspace.
[![Screenshot for Azure Monitor Log Analytics page for viewing collected logs. Under log management, tables from all sources will be visible.](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-log-analytics-page-viewing-collected-logs.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-log-analytics-page-viewing-collected-logs.png#lightbox)
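As a starting point, here's a sketch of querying the collected logs from the CLI rather than the portal. The table name `OEPAirFlowTask` is an assumption; confirm the actual table under **Log Management** in your Log Analytics Workspace, and note that the query command may require the `log-analytics` CLI extension.

```azurecli-interactive
# Optional: add the extension that provides 'az monitor log-analytics query'.
az extension add --name log-analytics

# Pull the last day of Airflow task log records (table name is an assumption).
az monitor log-analytics query \
    --workspace <log-analytics-workspace-guid> \
    --analytics-query "OEPAirFlowTask | where TimeGenerated > ago(1d) | take 100"
```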
energy-data-services How To Integrate Elastic Logs With Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-elastic-logs-with-azure-monitor.md
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-This article describes how you can start collecting Elasticsearch logs for your Microsoft Energy Data Services instances in Azure Monitor. This integration feature is developed to help you debug Elasticsearch related issues inside Azure Monitor.
+In this article, you'll learn how to start collecting Elasticsearch logs for your Microsoft Energy Data Services instances in Azure Monitor. This integration feature is developed to help you debug Elasticsearch related issues inside Azure Monitor.
## Prerequisites -- You need to have a Log Analytics workspace. It will be used to query the Elasticsearch logs dataset using the Kusto Query Language (KQL) query editor in the Log Analytics workspace. Useful Resource: [Create a log Analytics workspace in Azure portal](../azure-monitor/logs/quick-create-workspace.md)
+- You need to have a Log Analytics workspace. It will be used to query the Elasticsearch logs dataset using the Kusto Query Language (KQL) query editor in the Log Analytics workspace. [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
- You need to have a storage account. It will be used to store JSON dumps of Elasticsearch & Elasticsearch Operator logs. The storage account doesn't have to be in the same subscription as your Log Analytics workspace.
Every Microsoft Energy Data Services instance comes inbuilt with a managed Elast
Each diagnostic setting has three basic parts:
-| Title | Description |
+| Part | Description |
|-|-|
| Name | This is the name of the diagnostic log. Ensure a unique name is set for each log. |
| Categories | Category of logs to send to each of the destinations. The set of categories will vary for each Azure service. Visit: [Supported Resource Log Categories](../azure-monitor/essentials/resource-logs-categories.md) |
We support two destinations for your Elasticsearch logs from Microsoft Energy Da
1. Select *Send to a Log Analytics workspace*
-1. Choose Subscription and the Log Analytics workspace Name. You would have created it already as a prerequisite.
+1. Choose the Subscription and the Log Analytics workspace name. You should have already created the workspace as a prerequisite.
[![Screenshot for choosing destination settings for Log Analytics workspace. The image shows the subscription and Log Analytics workspace chosen.](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-log-analytics-workspace.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-log-analytics-workspace.png#lightbox) 1. Select *Archive to storage account*
-1. Choose Subscription and storage account Name. You would have created it already as a prerequisite.
+1. Choose the Subscription and the storage account name. You should have already created the storage account as a prerequisite.
[![Screenshot that shows choosing destination settings for storage account. Required fields include regions, subscription and storage account.](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-archive-storage-account.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-archive-storage-account.png#lightbox) 1. Select *Save*.
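The portal steps above can also be approximated from the CLI. This sketch sends the chosen Elasticsearch log category to both destinations at once; `<log-category>` and the resource ID are placeholders, so list the categories your instance exposes before filling them in.

```azurecli-interactive
# Discover the available log categories first.
az monitor diagnostic-settings categories list --resource <meds-resource-id>

# Send the chosen category to a Log Analytics workspace and archive it to storage.
az monitor diagnostic-settings create \
    --name elastic-logs \
    --resource <meds-resource-id> \
    --workspace <log-analytics-workspace-id> \
    --storage-account <storage-account-id> \
    --logs '[{"category": "<log-category>", "enabled": true}]'
```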
energy-data-services Overview Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-ddms.md
Domain data management services (DDMS) store, access, and retrieve metadata and
### Frictionless Exploration and Production (E&P)
-The Microsoft Energy Data Services Preview DDMS service enables energy companies to access their data in a manner that is fast, portable, testable and extendible. As a result, they'll achieve unparalleled streaming performance and use the standards and output from OSDU&trade;. The Azure DDMS service will onboard the OSDU&trade; DDMS and Schlumberger proprietary DMS. Microsoft also continues to contribute to the OSDU&trade; community DDMS to ensure compatibility and architectural alignment.
+The Microsoft Energy Data Services Preview DDMS service enables energy companies to access their data in a manner that is fast, portable, testable and extendible. As a result, they'll achieve unparalleled streaming performance and use the standards and output from OSDU&trade;. The Azure DDMS service will onboard the OSDU&trade; DDMS and SLB proprietary DMS. Microsoft also continues to contribute to the OSDU&trade; community DDMS to ensure compatibility and architectural alignment.
### Seamless connection between applications and data
The seismic DMS is part of the OSDU&trade; platform and enables users to connect
## OSDU&trade; - Wellbore DMS
-Well Logs are measurements taken while drilling, which tells energy companies information about the subsurface. Ultimately, they reveal whether hydrocarbons are present (or if the well is dry). Logs contain many attributes that inform geoscientists about the type of rock, its quality, and whether it contains oil, water, gas, or a mix. Energy companies use these attributes to determine the quality of a reservoir ΓÇô how much oil or gas is present, its quality, and ultimately, economic viability. Maintaining Well Log data and ensuring easy access to historical logs is critical to energy companies. The Wellbore DMS facilitates access to this data in any OSDU&trade; compliant application. The Wellbore DMS was contributed by Schlumberger to OSDU&trade;.
+Well Logs are measurements taken while drilling that tell energy companies about the subsurface. Ultimately, they reveal whether hydrocarbons are present (or if the well is dry). Logs contain many attributes that inform geoscientists about the type of rock, its quality, and whether it contains oil, water, gas, or a mix. Energy companies use these attributes to determine the quality of a reservoir: how much oil or gas is present, its quality, and ultimately, economic viability. Maintaining Well Log data and ensuring easy access to historical logs is critical to energy companies. The Wellbore DMS facilitates access to this data in any OSDU&trade; compliant application. The Wellbore DMS was contributed by SLB to OSDU&trade;.
Well Log data can come in different formats. It's most often indexed by depth or time, and the increment of these measurements can vary. Well Logs typically contain multiple attributes for each vertical measurement. Well Logs can therefore be small, or, for more modern Well Logs that use high-frequency data, greater than 1 GB. Well Log data is smaller than seismic; however, users will want to look at upwards of hundreds of wells at a time. This scenario is common in mature areas that have been heavily drilled, such as the Permian Basin in West Texas.
energy-data-services Overview Microsoft Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-microsoft-energy-data-services.md
# What is Microsoft Energy Data Services Preview?
-Microsoft Energy Data Services Preview is a secure, reliable, hyperscale, fully managed cloud-based data platform solution for the energy industry. It is an enterprise-grade data platform that brings together the capabilities of OSDU&trade; Data Platform, Microsoft's secure and trusted Azure cloud platform, and Schlumberger's extensive domain expertise. It allows customers to free data from silos, provides strong data management, storage, and federation strategy. Microsoft Energy Data Services ensures compatibility with evolving community standards like OSDU&trade; and enables value addition through interoperability with both first-party and third-party solutions.
+Microsoft Energy Data Services Preview is a secure, reliable, hyperscale, fully managed cloud-based data platform solution for the energy industry. It is an enterprise-grade data platform that brings together the capabilities of the OSDU&trade; Data Platform, Microsoft's secure and trusted Azure cloud platform, and SLB's extensive domain expertise. It allows customers to free data from silos and provides a strong data management, storage, and federation strategy. Microsoft Energy Data Services ensures compatibility with evolving community standards like OSDU&trade; and enables value addition through interoperability with both first-party and third-party solutions.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
Furthermore, Microsoft Energy Data Services Preview provides security capabiliti
Microsoft Energy Data Services Preview also supports multiple data partitions for every platform instance. More data partitions can also be created after creating an instance, as needed.
-As an Azure-based service, it also provides elasticity with auto-scaling to handle dynamically varying workload requirements. The service provides out-of-the-box compatibility and built-in integration with industry-leading applications from Schlumberger, including Petrel to provide quick time to value.
+As an Azure-based service, it also provides elasticity with auto-scaling to handle dynamically varying workload requirements. The service provides out-of-the-box compatibility and built-in integration with industry-leading applications from SLB, including Petrel, to provide quick time to value.
Microsoft will provide support for the platform to enable our customers' use cases.
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
Here's the list of partners and a link to submit a request to enable events flow
- [Auth0](auth0-how-to.md)
- [Microsoft Graph API](subscribe-to-graph-api-events.md)
+- [SAP](subscribe-to-sap-events.md)
## Activate a partner topic
event-grid Subscribe To Sap Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-sap-events.md
+
+ Title: Azure Event Grid - Subscribe to SAP events
+description: This article explains how to subscribe to events published by SAP.
+ Last updated : 10/25/2022++
+# Subscribe to events published by SAP
+This article describes steps to subscribe to events published by an SAP S/4HANA system.
+
+## High-level steps
+
+The common steps to subscribe to events published by any partner, including SAP, are described in [subscribe to partner events](subscribe-to-partner-events.md). For your quick reference, the steps are provided again here, with the addition of a step to make sure that your SAP system has the required components. This article deals with steps 1 and 4.
+
+1. [Ensure you meet all prerequisites](#prerequisites).
+1. Register the Event Grid resource provider with your Azure subscription.
+1. Authorize partner to create a partner topic in your resource group.
+1. [Enable SAP S/4HANA events to flow to a partner topic](#enable-events-to-flow-to-your-partner-topic).
+1. Activate partner topic so that your events start flowing to your partner topic.
+1. Subscribe to events.
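For steps 2 and 5 in this list, a CLI sketch is shown below. It assumes a recent Azure CLI, and the partner topic name and resource group are placeholders.

```azurecli-interactive
# Step 2: register the Event Grid resource provider with your subscription,
# then confirm the registration state.
az provider register --namespace Microsoft.EventGrid
az provider show --namespace Microsoft.EventGrid --query registrationState

# Step 5: activate the partner topic after SAP has created it in your resource group.
az eventgrid partner topic activate \
    --resource-group <resource-group> \
    --name <partner-topic-name>
```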
+
+## Prerequisites
+
+Following are the prerequisites that your system needs to meet before attempting to configure your SAP system to send events to Azure Event Grid.
+
+1. SAP S/4HANA system (on-premises) version 2020 or later.
+1. SAP's [Business Technology Platform](https://www.sap.com/products/technology-platform.html) (BTP).
+1. On the Business Technology Platform, [SAP Event Mesh](https://help.sap.com/docs/SAP_EM/bf82e6b26456494cbdd197057c09979f/df532e8735eb4322b00bfc7e42f84e8d.html) is enabled.
+
+If you have any questions, contact us at <a href="mailto:ask-grid-and-ms-sap@microsoft.com">ask-grid-and-ms-sap@microsoft.com</a>.
+
+## Enable events to flow to your partner topic
+
+SAP's capability to send events to Azure Event Grid is available through SAP's [beta program](https://influence.sap.com/sap/ino/#campaign/3314). Using this program, you can let SAP know about your desire to have your S/4HANA events available on Azure. You can find SAP's announcement of this new feature [here](https://blogs.sap.com/2022/10/11/sap-event-mesh-event-bridge-to-microsoft-azure-to-go-beta/). Through SAP's beta program, you'll be provided with the documentation on how to configure your SAP S/4HANA system to flow events to Event Grid. At that point, you can proceed with the next step in the process described in the [High-level steps](#high-level-steps) section.
+
+SAP's beta program started in October 2022 and will last a couple of months. Thereafter, SAP will release the feature as a generally available (GA) capability. Event Grid's capability to receive events from a partner, like SAP, is already a GA feature.
+
+If you have any questions, you can contact us at <a href="mailto:ask-grid-and-ms-sap@microsoft.com">ask-grid-and-ms-sap@microsoft.com</a>.
+
+## Next steps
+See [subscribe to partner events](subscribe-to-partner-events.md).
expressroute Quickstart Create Expressroute Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/quickstart-create-expressroute-vnet-bicep.md
Title: 'Quickstart: Create an Azure ExpressRoute circuit using Bicep' description: This quickstart shows you how to create an ExpressRoute circuit using Bicep. --++ Last updated 03/24/2022
firewall-manager Quick Firewall Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy-bicep.md
Title: 'Quickstart: Create an Azure Firewall and a firewall policy - Bicep' description: In this quickstart, you deploy an Azure Firewall and a firewall policy using Bicep. --++ Last updated 07/05/2022
firewall-manager Quick Secure Virtual Hub Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-secure-virtual-hub-bicep.md
Title: 'Quickstart: Secure virtual hub using Azure Firewall Manager - Bicep' description: In this quickstart, you learn how to secure your virtual hub using Azure Firewall Manager and Bicep. --++ Last updated 06/28/2022
firewall Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-bicep.md
Title: 'Quickstart: Create an Azure Firewall with Availability Zones - Bicep' description: In this quickstart, you deploy Azure Firewall using Bicep. The virtual network has one VNet with three subnets. Two Windows Server virtual machines, a jump box, and a server are deployed. -+ Last updated 06/28/2022-+ # Quickstart: Deploy Azure Firewall with Availability Zones - Bicep
firewall Policy Rule Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-rule-sets.md
Previously updated : 09/12/2022 Last updated : 10/25/2022
A rule belongs to a rule collection, and it specifies which traffic is allowed o
For application rules, the traffic is processed by our built-in [infrastructure rule collection](infrastructure-fqdns.md) before it's denied by default.
+### Inbound vs. outbound
+
+An **inbound** firewall rule protects your network from threats that originate outside your network (traffic sourced from the Internet) and attempt to infiltrate it.
+
+An **outbound** firewall rule protects against malicious traffic that originates internally (traffic sourced from a private IP address within Azure) and travels outward. This is usually traffic from Azure resources that is redirected via the Firewall before reaching a destination.
+
+### Rule types
+ There are three types of rules:
- DNAT
- Network
- Application
-### DNAT rules
+#### DNAT rules
DNAT rules allow or deny inbound traffic through the firewall public IP address(es). You can use a DNAT rule when you want a public IP address to be translated into a private IP address. The Azure Firewall public IP addresses can be used to listen to inbound traffic from the Internet, filter the traffic and translate this traffic to internal resources in Azure.
-### Network rules
+#### Network rules
Network rules allow or deny inbound, outbound, and east-west traffic based on the network layer (L3) and transport layer (L4). You can use a network rule when you want to filter traffic based on IP addresses, any ports, and any protocols.
-### Application rules
+#### Application rules
Application rules allow or deny outbound and east-west traffic based on the application layer (L7). You can use an application rule when you want to filter traffic based on fully qualified domain names (FQDNs), URLs, and HTTP/HTTPS protocols.
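To make the rule types concrete, here's a hedged CLI sketch that adds a network rule collection to an existing firewall policy. All names are placeholders, the rule collection group is assumed to already exist, and the `az network firewall` commands may require the `azure-firewall` CLI extension; verify the parameters with `-h` before relying on them.

```azurecli-interactive
# Optional: add the extension that provides the firewall policy commands.
az extension add --name azure-firewall

# Add a network rule collection that allows outbound DNS (UDP 53) to Azure DNS.
az network firewall policy rule-collection-group collection add-filter-collection \
    --resource-group <resource-group> \
    --policy-name <firewall-policy-name> \
    --rule-collection-group-name <rule-collection-group-name> \
    --name allow-dns-collection \
    --collection-priority 200 \
    --action Allow \
    --rule-name allow-azure-dns \
    --rule-type NetworkRule \
    --source-addresses 10.0.0.0/24 \
    --destination-addresses 168.63.129.16 \
    --destination-ports 53 \
    --ip-protocols UDP
```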
frontdoor Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-bicep.md
Title: 'Quickstart: Create an Azure Front Door Standard/Premium using Bicep' description: This quickstart describes how to create an Azure Front Door Standard/Premium using Bicep. --++ Last updated 07/08/2022
frontdoor Quickstart Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-bicep.md
Title: 'Quickstart: Create an Azure Front Door Service using Bicep'
description: This quickstart describes how to create an Azure Front Door Service using Bicep. documentationcenter: --++ Last updated 03/30/2022
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
The following Resource Provider modes are fully supported:
The following Resource Provider modes are currently supported as a **[preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)**: - `Microsoft.Network.Data` for managing [Azure Virtual Network Manager](../../../virtual-network-manager/overview.md) custom membership policies using Azure Policy.-- `Microsoft.Kubernetes.Data` for Azure Policy components that target [Azure Arc-enabled Kubernetes clusters](../../../aks/intro-kubernetes.md) resources such as pods, containers, and ingresses. > [!NOTE] >Unless explicitly stated, Resource Provider modes only support built-in policy definitions, and exemptions are not supported at the component-level.
hdinsight Apache Hadoop Mahout Linux Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-mahout-linux-mac.md
description: Learn how to use the Apache Mahout machine learning library to gene
Previously updated : 06/29/2022 Last updated : 10/25/2022 # Generate recommendations using Apache Mahout in Azure HDInsight
Learn how to use the [Apache Mahout](https://mahout.apache.org) machine learning
Mahout is a [machine learning](https://en.wikipedia.org/wiki/Machine_learning) library for Apache Hadoop. Mahout contains algorithms for processing data, such as filtering, classification, and clustering. In this article, you use a recommendation engine to generate movie recommendations that are based on movies your friends have seen.
-Mahout is avaiable in HDInsight 3.6, and is not available in HDInsight 4.0. For more information about the version of Mahout in HDInsight, see [HDInsight 3.6 component versions](../hdinsight-36-component-versioning.md).
- ## Prerequisites An Apache Hadoop cluster on HDInsight. See [Get Started with HDInsight on Linux](./apache-hadoop-linux-tutorial-get-started.md).
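As a rough illustration of the recommendation step that the Mahout article above builds toward, invoking Mahout's item-based recommender from an SSH session on the cluster looks roughly like the sketch below. The HDFS paths are placeholders, not paths from the article, and the options reflect standard Mahout usage rather than HDInsight-specific guidance.

```bash
# Run Mahout's item-based recommender over a user-ratings file in HDFS.
# Paths are placeholders; point them at your own input and output locations.
mahout recommenditembased \
    -s SIMILARITY_COOCCURRENCE \
    -i /example/data/user-ratings.txt \
    -o /example/data/recommendations \
    --tempDir /example/temp
```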
hdinsight Hdinsight 36 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-36-component-versioning.md
- Title: Apache Hadoop components and versions - Azure HDInsight 3.6
-description: Learn about the Apache Hadoop components and versions in Azure HDInsight 3.6.
-- Previously updated : 08/05/2022--
-# HDInsight 3.6 component versions
-
-In this article, you learn about the Apache Hadoop environment components and versions in Azure HDInsight 3.6.
-
-## Support for HDInsight 3.6
-
-Starting July 1st, 2021 Microsoft will offer Basic support for certain HDI 3.6 cluster types.
-The table below lists the support timeframe for HDInsight 3.6 cluster types.
-
-| Cluster Type | Framework version | Standard support expiration | Basic support expiration date | Retirement date |
-||-|--|-|-|
-| HDInsight 3.6 Hadoop | 2.7.3 | June 30, 2021 | September 30, 2022 | October 1, 2022 |
-| HDInsight 3.6 Spark | 2.3 | June 30, 2021 | September 30, 2022 | October 1, 2022 |
-| HDInsight 3.6 Kafka | 1.1 | June 30, 2021 | September 30, 2022 | October 1, 2022 |
-| HDInsight 3.6 HBase | 1.1 | June 30, 2021 | September 30, 2022 | October 1, 2022 |
-| HDInsight 3.6 Interactive Query | 2.1 | June 30, 2021 | September 30, 2022 | October 1, 2022 |
-| HDInsight 3.6 ML Services | 9.3 | - | - | December 31, 2020 |
-| HDInsight 3.6 Spark | 2.2 | - | - | June 30, 2020 |
-| HDInsight 3.6 Spark | 2.1 | - | - | June 30, 2020 |
-| HDInsight 3.6 Kafka | 1.0 | - | - | June 30, 2020 |
-
-## Apache components available with HDInsight version 3.6
-
-The OSS component versions associated with HDInsight 3.6 are listed in the following table.
-
-| Component | HDInsight 3.6 (default) |
-||--|
-| Apache Hadoop and YARN | 2.7.3 |
-| Apache Tez | 0.7.0 |
-| Apache Pig | 0.16.0 |
-| Apache Hive | (2.1.0 on ESP Interactive Query) |
-| Apache Tez Hive2 | 0.8.4 |
-| Apache Ranger | 0.7.0 |
-| Apache HBase | 1.1.2 |
-| Apache Sqoop | 1.4.6 |
-| Apache Oozie | 4.2.0 |
-| Apache Zookeeper | 3.4.6 |
-| Apache Mahout | 0.9.0+ |
-| Apache Phoenix | 4.7.0 |
-| Apache Spark | 2.3.2. |
-| Apache Livy | 0.4. |
-| Apache Kafka | 1.1 |
-| Apache Ambari | 2.6.0 |
-| Apache Zeppelin | 0.7.3 |
-| Mono | 4.2.1 |
-
-## HDInsight 3.6 to 4.0 Migration Guides
-- [Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4](spark/migrate-versions.md).-- [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](interactive-query/apache-hive-migrate-workloads.md).-- [Migrate Apache Kafka workloads to Azure HDInsight 4.0](kafk).-- [Migrate an Apache HBase cluster to a new version](hbase/apache-hbase-migrate-new-version.md).-
-## Next steps
--- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md)-- [Enterprise Security Package](./enterprise-security-package.md)-- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-versioning.md
Title: Open-source components and versions - Azure HDInsight
description: Learn about the open-source components and versions in Azure HDInsight. Previously updated : 08/25/2022 Last updated : 10/25/2022 # Azure HDInsight versions
This table lists the versions of HDInsight that are available in the Azure porta
| | | | | | | | | [HDInsight 5.0](hdinsight-50-component-versioning.md) |Ubuntu 18.0.4 LTS |July 01, 2022 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | See [HDInsight 5.0](hdinsight-50-component-versioning.md) for date details. | See [HDInsight 5.0](hdinsight-50-component-versioning.md) for date details. |Yes | | [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 18.0.4 LTS |September 24, 2018 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | See [HDInsight 4.0](hdinsight-40-component-versioning.md) for date details. | See [HDInsight 4.0](hdinsight-40-component-versioning.md) for date details. |Yes |
-| [HDInsight 3.6](hdinsight-36-component-versioning.md) |Ubuntu 16.0.4 LTS |April 4, 2017 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Standard support expired on June 30, 2021 for all cluster types.<br> Basic support expires on September 30, 2022. See [HDInsight 3.6 component versions](hdinsight-36-component-versioning.md) for cluster type details. |October 1, 2022 |Yes |
**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. You may not be able to create clusters from the Azure portal.
Basic support doesn't include
Microsoft doesn't encourage creating analytics pipelines or solutions on clusters in basic support. We recommend migrating existing clusters to the most recent fully supported version. ## HDInsight 3.6 to 4.0 Migration Guides-- [Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4](spark/migrate-versions.md). - [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](interactive-query/apache-hive-migrate-workloads.md). - [Migrate Apache Kafka workloads to Azure HDInsight 4.0](kafk). - [Migrate an Apache HBase cluster to a new version](hbase/apache-hbase-migrate-new-version.md).
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
description: Archived release notes for Azure HDInsight. Get development tips an
Previously updated : 08/12/2022 Last updated : 10/25/2022 # Archived release notes
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Schema tool enhancements to support mergeCatalog|[HIVE-22498](https://issues.apache.org/jira/browse/HIVE-22498)| | Hive with TEZ UNION ALL and UDTF results in data loss|[HIVE-21915](https://issues.apache.org/jira/browse/HIVE-21915)| | Split text files even if header/footer exists|[HIVE-21924](https://issues.apache.org/jira/browse/HIVE-21924)|
-| MultiDelimitSerDe returns wrong results in last column when the loaded file has more columns than the once are present in table schema|[HIVE-22360](https://issues.apache.org/jira/browse/HIVE-22360)|
+| MultiDelimitSerDe returns wrong results in last column when the loaded file has more columns than the ones present in the table schema|[HIVE-22360](https://issues.apache.org/jira/browse/HIVE-22360)|
| LLAP external client - Need to reduce LlapBaseInputFormat#getSplits() footprint|[HIVE-22221](https://issues.apache.org/jira/browse/HIVE-22221)| | Column name with reserved keyword is unescaped when query including join on table with mask column is rewritten (Zoltan Matyus via Zoltan Haindrich)|[HIVE-22208](https://issues.apache.org/jira/browse/HIVE-22208)| |Prevent LLAP shutdown on AMReporter related RuntimeException|[HIVE-22113](https://issues.apache.org/jira/browse/HIVE-22113)|
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Parsing time can be high if there's deeply nested subqueries|[HIVE-21980](https://issues.apache.org/jira/browse/HIVE-21980)| | For ALTER TABLE t SET TBLPROPERTIES ('EXTERNAL'='TRUE'); `TBL_TYPE` attribute changes not reflecting for non-CAPS|[HIVE-20057](https://issues.apache.org/jira/browse/HIVE-20057)| | JDBC: HiveConnection shades log4j interfaces|[HIVE-18874](https://issues.apache.org/jira/browse/HIVE-18874)|
-| Update repo URLs in poms - branh 3.1 version|[HIVE-21786](https://issues.apache.org/jira/browse/HIVE-21786)|
+| Update repo URLs in poms - branch 3.1 version|[HIVE-21786](https://issues.apache.org/jira/browse/HIVE-21786)|
| DBInstall tests broken on master and branch-3.1|[HIVE-21758](https://issues.apache.org/jira/browse/HIVE-21758)| | Load data into a bucketed table is ignoring partitions specs and loads data into default partition|[HIVE-21564](https://issues.apache.org/jira/browse/HIVE-21564)| | Queries with join condition having timestamp or timestamp with local time zone literal throw SemanticException|[HIVE-21613](https://issues.apache.org/jira/browse/HIVE-21613)|
Learn more at [enable private link](./hdinsight-private-link.md).
The new Azure monitor integration experience will be Preview in East US and West Europe with this release. Learn more details about the new Azure monitor experience [here](./log-analytics-migration.md#migrate-to-the-new-azure-monitor-integration). ### Deprecation
-#### Basic support for HDInsight 3.6 starting July 1, 2021
-Starting July 1, 2021, Microsoft offers [Basic support](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) for certain HDInsight 3.6 cluster types. The Basic support plan will be available until 3 April 2022. You are automatically enrolled in Basic support starting July 1, 2021. No action is required by you to opt in. See [our documentation](hdinsight-36-component-versioning.md) for which cluster types are included under Basic support.
-
-We don't recommend building any new solutions on HDInsight 3.6, freeze changes on existing 3.6 environments. We recommend that you [migrate your clusters to HDInsight 4.0](hdinsight-version-release.md#how-to-upgrade-to-hdinsight-40). Learn more about [what's new in HDInsight 4.0](hdinsight-version-release.md#whats-new-in-hdinsight-40).
-
+HDInsight 3.6 version is deprecated effective Oct 01, 2022.
### Behavior changes #### HDInsight Interactive Query only supports schedule-based Autoscale
-As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
+As customer scenarios grow more mature and diverse, we've identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable load-based autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
Here are the back ported Apache JIRAs for this release:
### Price Correction for HDInsight Dv2 Virtual Machines
-A pricing error was corrected on April 25, 2021, for the Dv2 VM series on HDInsight. The pricing error resulted in a reduced charge on some customer's bills prior to April 25th, and with the correction, prices now match what had been advertised on the HDInsight pricing page and the HDInsight pricing calculator. The pricing error impacted customers in the following regions who used Dv2 VMs:
+A pricing error was corrected on April 25, 2021, for the Dv2 VM series on HDInsight. The pricing error resulted in a reduced charge on some customers' bills prior to April 25, and with the correction, prices now match what had been advertised on the HDInsight pricing page and the HDInsight pricing calculator. The pricing error impacted customers in the following regions who used Dv2 VMs:
- Canada Central - Canada East
The OS versions for this release are:
#### OS version upgrade As referenced in [Ubuntu's release cycle](https://ubuntu.com/about/release-cycle), the Ubuntu 16.04 kernel will reach End of Life (EOL) in April 2021. We started rolling out the new HDInsight 4.0 cluster image running on Ubuntu 18.04 with this release. Newly created HDInsight 4.0 clusters will run on Ubuntu 18.04 by default once available. Existing clusters on Ubuntu 16.04 will run as is with full support.
-HDInsight 3.6 will continue to run on Ubuntu 16.04. It will change to Basic support (from Standard support) beginning 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md#supported-hdinsight-versions). Ubuntu 18.04 will not be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0.
+HDInsight 3.6 will continue to run on Ubuntu 16.04. It will change to Basic support (from Standard support) beginning 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md#supported-hdinsight-versions). Ubuntu 18.04 won't be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0.
You need to drop and recreate your clusters if you'd like to move existing HDInsight 4.0 clusters to Ubuntu 18.04. Plan to create or recreate your clusters after Ubuntu 18.04 support becomes available.
No deprecation in this release.
### Behavior changes #### Disable Standard_A5 VM size as Head Node for HDInsight 4.0
-HDInsight cluster Head Node is responsible for initializing and managing the cluster. Standard_A5 VM size has reliability issues as Head Node for HDInsight 4.0. Starting from this release, customers will not be able to create new clusters with Standard_A5 VM size as Head Node. You can use other two-core VMs like E2_v3 or E2s_v3. Existing clusters will run as is. A four-core VM is highly recommended for Head Node to ensure the high availability and reliability of your production HDInsight clusters.
+HDInsight cluster Head Node is responsible for initializing and managing the cluster. Standard_A5 VM size has reliability issues as Head Node for HDInsight 4.0. Starting from this release, customers won't be able to create new clusters with Standard_A5 VM size as Head Node. You can use other two-core VMs like E2_v3 or E2s_v3. Existing clusters will run as is. A four-core VM is highly recommended for Head Node to ensure the high availability and reliability of your production HDInsight clusters.
#### Network interface resource not visible for clusters running on Azure virtual machine scale sets HDInsight is gradually migrating to Azure virtual machine scale sets. Network interfaces for virtual machines are no longer visible to customers for clusters that use Azure virtual machine scale sets.
The following changes will happen in upcoming releases.
#### HDInsight Interactive Query only supports schedule-based Autoscale
-As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
+As customer scenarios grow more mature and diverse, we've identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above. Microsoft recommends that you move to a schedule-based Autoscale for LLAP. You can analyze your cluster's current usage pattern through the Grafana Hive dashboard. For more information, see [Automatically scale Azure HDInsight clusters](hdinsight-autoscale-clusters.md).
-#### Basic support for HDInsight 3.6 starting July 1, 2021
-Starting July 1, 2021, Microsoft will offer [Basic support](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) for certain HDInsight 3.6 cluster types. The Basic support plan will be available until 3 April 2022. You'll automatically be enrolled in Basic support starting July 1, 2021. No action is required by you to opt in. See [our documentation](hdinsight-36-component-versioning.md) for which cluster types are included under Basic support.
-
-We don't recommend building any new solutions on HDInsight 3.6, freeze changes on existing 3.6 environments. We recommend that you [migrate your clusters to HDInsight 4.0](hdinsight-version-release.md#how-to-upgrade-to-hdinsight-40). Learn more about [what's new in HDInsight 4.0](hdinsight-version-release.md#whats-new-in-hdinsight-40).
- #### VM host naming will be changed on July 1, 2021
-HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). This migration will change the cluster host name FQDN name format, and the numbers in the host name will not be guarantee in sequence. If you want to get the FQDN names for each node, refer to [Find the Host names of Cluster Nodes](./find-host-name.md).
+HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). This migration will change the cluster host name FQDN format, and the numbers in the host name aren't guaranteed to be in sequence. If you want to get the FQDN names for each node, refer to [Find the Host names of Cluster Nodes](./find-host-name.md).
#### Move to Azure virtual machine scale sets HDInsight now uses Azure virtual machines to provision the cluster. The service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
The following changes will happen in upcoming releases.
#### HDInsight Interactive Query only supports schedule-based Autoscale
-As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The impact on performance can outweigh the cost benefits of Autoscale.
+As customer scenarios grow more mature and diverse, we've identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The impact on performance can outweigh the cost benefits of Autoscale.
Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
Microsoft recommends that you move to a schedule-based Autoscale for LLAP. You
#### OS version upgrade HDInsight clusters are currently running on Ubuntu 16.04 LTS. As referenced in [Ubuntu's release cycle](https://ubuntu.com/about/release-cycle), the Ubuntu 16.04 kernel will reach End of Life (EOL) in April 2021. We'll start rolling out the new HDInsight 4.0 cluster image running on Ubuntu 18.04 in May 2021. Newly created HDInsight 4.0 clusters will run on Ubuntu 18.04 by default once available. Existing clusters on Ubuntu 16.04 will run as is with full support.
-HDInsight 3.6 will continue to run on Ubuntu 16.04. It will reach the end of standard support by 30 June 2021, and will change to Basic support starting on 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md#supported-hdinsight-versions). Ubuntu 18.04 will not be supported for HDInsight 3.6. If youΓÇÖd like to use Ubuntu 18.04, youΓÇÖll need to migrate your clusters to HDInsight 4.0.
+HDInsight 3.6 will continue to run on Ubuntu 16.04. It will reach the end of standard support by 30 June 2021, and will change to Basic support starting on 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md#supported-hdinsight-versions). Ubuntu 18.04 won't be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0.
You need to drop and recreate your clusters if you'd like to move existing clusters to Ubuntu 18.04. Plan to create or recreate your cluster after Ubuntu 18.04 support becomes available. We'll send another notification after the new image becomes available in all regions.
-ItΓÇÖs highly recommended that you test your script actions and custom applications deployed on edge nodes on an Ubuntu 18.04 virtual machine (VM) in advance. You can [create a simple Ubuntu Linux VM on 18.04-LTS](https://azure.microsoft.com/resources/templates/vm-simple-linux/), then create and use a [secure shell (SSH) key pair](../virtual-machines/linux/mac-create-ssh-keys.md#ssh-into-your-vm) on your VM to run and test your script actions and custom applications deployed on edge nodes.
+It's highly recommended that you test your script actions and custom applications deployed on edge nodes on an Ubuntu 18.04 virtual machine (VM) in advance. You can [create an Ubuntu Linux VM on 18.04-LTS](https://azure.microsoft.com/resources/templates/vm-simple-linux/), then create and use a [secure shell (SSH) key pair](../virtual-machines/linux/mac-create-ssh-keys.md#ssh-into-your-vm) on your VM to run and test your script actions and custom applications deployed on edge nodes.
#### Disable Standard_A5 VM size as Head Node for HDInsight 4.0
-HDInsight cluster Head Node is responsible for initializing and managing the cluster. Standard_A5 VM size has reliability issues as Head Node for HDInsight 4.0. Starting from the next release in May 2021, customers will not be able to create new clusters with Standard_A5 VM size as Head Node. You can use other 2-core VMs like E2_v3 or E2s_v3. Existing clusters will run as is. A 4-core VM is highly recommended for Head Node to ensure the high availability and reliability of your production HDInsight clusters.
-
-#### Basic support for HDInsight 3.6 starting July 1, 2021
-Starting July 1, 2021, Microsoft will offer [Basic support](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) for certain HDInsight 3.6 cluster types. The Basic support plan will be available until 3 April 2022. You'll automatically be enrolled in Basic support starting July 1, 2021. No action is required by you to opt in. See [our documentation](hdinsight-36-component-versioning.md) for which cluster types are included under Basic support.
-
-We don't recommend building any new solutions on HDInsight 3.6, freeze changes on existing 3.6 environments. We recommend that you [migrate your clusters to HDInsight 4.0](hdinsight-version-release.md#how-to-upgrade-to-hdinsight-40). Learn more about [what's new in HDInsight 4.0](hdinsight-version-release.md#whats-new-in-hdinsight-40).
+HDInsight cluster Head Node is responsible for initializing and managing the cluster. Standard_A5 VM size has reliability issues as Head Node for HDInsight 4.0. Starting from the next release in May 2021, customers won't be able to create new clusters with Standard_A5 VM size as Head Node. You can use other 2-core VMs like E2_v3 or E2s_v3. Existing clusters will run as is. A 4-core VM is highly recommended for Head Node to ensure the high availability and reliability of your production HDInsight clusters.
### Bug fixes HDInsight continues to make cluster reliability and performance improvements.
Default cluster VM sizes will be changed from D-series to Ev3-series. This chang
#### Network interface resource not visible for clusters running on Azure virtual machine scale sets HDInsight is gradually migrating to Azure virtual machine scale sets. Network interfaces for virtual machines are no longer visible to customers for clusters that use Azure virtual machine scale sets.
-#### Breaking change for .NET for Apache Spark 1.0.0
-With the latest release, HDInsight introduces the first official version v1.0.0 of the [ΓÇ£.NET for Apache SparkΓÇ¥](https://github.com/dotnet/spark) library. It provides DataFrame API completeness for Spark 2.4.x and Spark 3.0.x along with a host of [other features](https://github.com/dotnet/spark/blob/master/docs/release-notes/1.0.0/release-1.0.0.md). There will be breaking changes for this major version, refer to [the .NET for Apache Spark migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) to understand steps needed to update your code and pipelines. To learn more, refer to this [.NET for Apache Spark v1.0 on Azure HDInsight guide](./spark/spark-dotnet-version-update.md#using-net-for-apache-spark-v10-in-hdinsight).
- ### Upcoming changes The following changes will happen in upcoming releases.
Apache Tez View is used to track and debug the execution of Hive Tez job. Tez Vi
### Deprecation #### Deprecation of Spark 2.1 and 2.2 in HDInsight 3.6 Spark cluster
-Starting from July 1 2020, customers cannot create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without the support from Microsoft. Consider to move to Spark 2.3 on HDInsight 3.6 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1, 2020, customers can't create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.3 on HDInsight 3.6 by June 30, 2020 to avoid potential system/support interruption.
#### Deprecation of Spark 2.3 in HDInsight 4.0 Spark cluster
-Starting from July 1 2020, customers cannot create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1 2020, customers can't create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
#### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
-Starting from July 1 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1 2020, customers won't be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
### Behavior changes #### Ambari stack version change
Customers can now use Service Endpoint Policies (SEP) on the HDInsight cluster s
### Deprecation #### Deprecation of Spark 2.1 and 2.2 in HDInsight 3.6 Spark cluster
-Starting from July 1 2020, customers cannot create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without the support from Microsoft. Consider to move to Spark 2.3 on HDInsight 3.6 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1, 2020, customers can't create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.3 on HDInsight 3.6 by June 30, 2020 to avoid potential system/support interruption.
#### Deprecation of Spark 2.3 in HDInsight 4.0 Spark cluster
-Starting from July 1 2020, customers cannot create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1 2020, customers can't create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
#### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
-Starting from July 1 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1 2020, customers won't be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
### Behavior changes No behavior changes you need to pay attention to.
In this release, we support rebooting VMs in HDInsight cluster to reboot unrespo
### Deprecation #### Deprecation of Spark 2.1 and 2.2 in HDInsight 3.6 Spark cluster
-Starting from July 1 2020, customers cannot create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without the support from Microsoft. Consider to move to Spark 2.3 on HDInsight 3.6 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1, 2020, customers can't create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.3 on HDInsight 3.6 by June 30, 2020 to avoid potential system/support interruption.
#### Deprecation of Spark 2.3 in HDInsight 4.0 Spark cluster
-Starting from July 1 2020, customers cannot create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1 2020, customers can't create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
#### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
-Starting from July 1 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1 2020, customers won't be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
### Behavior changes #### ESP Spark cluster head node size change
When 80% of the worker nodes are ready, the cluster enters **operational** stage
After the **operational** stage, the cluster waits another 60 minutes for the remaining 20% of worker nodes. At the end of these 60 minutes, the cluster moves to the **running** stage, even if not all worker nodes are available. Once a cluster enters the **running** stage, you can use it as normal. Both control plane operations, like scaling up/down, and data plane operations, like running scripts and jobs, are accepted. If some of the requested worker nodes aren't available, the cluster is marked as a partial success. You are charged for the nodes that were deployed successfully. #### Create new service principal through HDInsight
-Previously, with cluster creation, customers can create a new service principal to access the connected ADLS Gen 1 account in Azure portal. Starting June 15 2020, customers cannot create new service principal in HDInsight creation workflow, only existing service principal is supported. See [Create Service Principal and Certificates using Azure Active Directory](../active-directory/develop/howto-create-service-principal-portal.md).
+Previously, with cluster creation, customers could create a new service principal to access the connected ADLS Gen 1 account in the Azure portal. Starting June 15 2020, customers can't create a new service principal in the HDInsight creation workflow; only existing service principals are supported. See [Create Service Principal and Certificates using Azure Active Directory](../active-directory/develop/howto-create-service-principal-portal.md).
#### Time out for script actions with cluster creation HDInsight supports running script actions with cluster creation. From this release, all script actions with cluster creation must finish within **60 minutes**, or they time out. Script actions submitted to running clusters are not impacted. Learn more [here](./hdinsight-hadoop-customize-cluster-linux.md#script-action-in-the-cluster-creation-process).
You can find the current component versions for HDInsight 4.0 and HDInsight 3.6 i
### Known issues #### Hive Warehouse Connector issue
-There is an issue for Hive Warehouse Connector in this release. The fix will be included in the next release. Existing clusters created before this release are not impacted. Avoid dropping and recreating the cluster if possible. Open support ticket if you need further help on this.
+There's an issue with Hive Warehouse Connector in this release. The fix will be included in the next release. Existing clusters created before this release are not impacted. Avoid dropping and recreating the cluster if possible. Open a support ticket if you need further help.
## Release date: 01/09/2020
No behavior changes for this release. To get ready for upcoming changes, see [Up
The following changes will happen in upcoming releases. #### Deprecation of Spark 2.1 and 2.2 in HDInsight 3.6 Spark cluster
-Starting July 1, 2020, customers will not be able to create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.3 on HDInsight 3.6 by June 30, 2020 to avoid potential system/support interruption. For more information, see [Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4](./spark/migrate-versions.md).
+Starting July 1, 2020, customers won't be able to create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.3 on HDInsight 3.6 by June 30, 2020 to avoid potential system/support interruption.
#### Deprecation of Spark 2.3 in HDInsight 4.0 Spark cluster
-Starting July 1, 2020, customers will not be able to create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30, 2020 to avoid potential system/support interruption. For more information, see [Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4](./spark/migrate-versions.md).
+Starting July 1, 2020, customers won't be able to create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30, 2020 to avoid potential system/support interruption.
#### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
-Starting July 1 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption. For more information, see [Migrate Apache Kafka workloads to Azure HDInsight 4.0](./kafk).
+Starting July 1 2020, customers won't be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption. For more information, see [Migrate Apache Kafka workloads to Azure HDInsight 4.0](./kafk).
#### HBase 2.0 to 2.1.6 In the upcoming HDInsight 4.0 release, HBase version will be upgraded from version 2.0 to 2.1.6
F-series virtual machines (VMs) are a good choice to get started with HDInsight wi
From this release, G-series VMs are no longer offered in HDInsight. #### Dv1 virtual machine deprecation
-From this release, the use of Dv1 VMs with HDInsight is deprecated. Any customer request for Dv1 will be served with Dv2 automatically. There is no price difference between Dv1 and Dv2 VMs.
+From this release, the use of Dv1 VMs with HDInsight is deprecated. Any customer request for Dv1 will be served with Dv2 automatically. There's no price difference between Dv1 and Dv2 VMs.
### Behavior changes
A-series VMs could cause ESP cluster issues due to relatively low CPU and memory
HDInsight continues to make cluster reliability and performance improvements. ### Component version change
-There is no component version change for this release. You could find the current component versions for HDInsight 4.0 and HDInsight 3.6 [here](./hdinsight-component-versioning.md).
+There's no component version change for this release. You can find the current component versions for HDInsight 4.0 and HDInsight 3.6 [here](./hdinsight-component-versioning.md).
## Release Date: 08/07/2019
This release provides Hadoop Common 2.7.3 and the following Apache patches:
- [HDFS-11689](https://issues.apache.org/jira/browse/HDFS-11689): New exception thrown by DFSClient%isHDFSEncryptionEnabled broke hacky hive code. -- [HDFS-11711](https://issues.apache.org/jira/browse/HDFS-11711): DN should not delete the block On "Too many open files" Exception.
+- [HDFS-11711](https://issues.apache.org/jira/browse/HDFS-11711): DN shouldn't delete the block On "Too many open files" Exception.
- [HDFS-12347](https://issues.apache.org/jira/browse/HDFS-12347): TestBalancerRPCDelay\#testBalancerRPCDelay fails frequently.
This release provides Kafka 1.0.0 and the following Apache patches.
- [KAFKA-6179](https://issues.apache.org/jira/browse/KAFKA-6179): RecordQueue.clear() does not clear MinTimestampTracker's maintained list. -- [KAFKA-6185](https://issues.apache.org/jira/browse/KAFKA-6185): Selector memory leak with high likelihood of OOM if there is a down conversion.
+- [KAFKA-6185](https://issues.apache.org/jira/browse/KAFKA-6185): Selector memory leak with high likelihood of OOM if there's a down conversion.
- [KAFKA-6190](https://issues.apache.org/jira/browse/KAFKA-6190): GlobalKTable never finishes restoring when consuming transactional messages.
In HDP-2.5.x and 2.6.x, we removed the "commons-httpclient" library from Mahout
- Previously compiled Mahout jobs will need to be recompiled in the HDP-2.5 or 2.6 environment. -- There is a small possibility that some Mahout jobs may encounter "ClassNotFoundException" or "could not load class" errors related to "org.apache.commons.httpclient", "net.java.dev.jets3t", or related class name prefixes. If these errors happen, you may consider whether to manually install the needed jars in your classpath for the job, if the risk of security issues in the obsolete library is acceptable in your environment.
+- There's a small possibility that some Mahout jobs may encounter "ClassNotFoundException" or "could not load class" errors related to "org.apache.commons.httpclient", "net.java.dev.jets3t", or related class name prefixes. If these errors happen, consider manually installing the needed jars in your classpath for the job, if the risk of security issues in the obsolete library is acceptable in your environment.
-- There is an even smaller possibility that some Mahout jobs may encounter crashes in Mahout's hbase-client code calls to the hadoop-common libraries, due to binary compatibility problems. Regrettably, there is no way to resolve this issue except revert to the HDP-2.4.2 version of Mahout, which may have security issues. Again, this should be unusual, and is unlikely to occur in any given Mahout job suite.
+- There's an even smaller possibility that some Mahout jobs may encounter crashes in Mahout's hbase-client code calls to the hadoop-common libraries, due to binary compatibility problems. Regrettably, there's no way to resolve this issue except to revert to the HDP-2.4.2 version of Mahout, which may have security issues. Again, this should be unusual, and is unlikely to occur in any given Mahout job suite.
#### Oozie
This release provides Phoenix 4.7.0 and the following Apache patches:
- [PHOENIX-3240](https://issues.apache.org/jira/browse/PHOENIX-3240): ClassCastException from Pig loader. -- [PHOENIX-3452](https://issues.apache.org/jira/browse/PHOENIX-3452): NULLS FIRST/NULL LAST should not impact whether GROUP BY is order preserving.
+- [PHOENIX-3452](https://issues.apache.org/jira/browse/PHOENIX-3452): NULLS FIRST/NULL LAST shouldn't impact whether GROUP BY is order preserving.
- [PHOENIX-3469](https://issues.apache.org/jira/browse/PHOENIX-3469): Incorrect sort order for DESC primary key for NULLS LAST/NULLS FIRST.
This release provides Phoenix 4.7.0 and the following Apache patches:
- [PHOENIX-4525](https://issues.apache.org/jira/browse/PHOENIX-4525): Integer overflow in GroupBy execution. -- [PHOENIX-4560](https://issues.apache.org/jira/browse/PHOENIX-4560): ORDER BY with GROUP BY doesn't work if there is WHERE on pk column.
+- [PHOENIX-4560](https://issues.apache.org/jira/browse/PHOENIX-4560): ORDER BY with GROUP BY doesn't work if there's WHERE on pk column.
- [PHOENIX-4586](https://issues.apache.org/jira/browse/PHOENIX-4586): UPSERT SELECT doesn't take in account comparison operators for subqueries.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23406](https://issues.apache.org/jira/browse/SPARK-23406): Enable stream-stream self-joins for branch-2.3. -- [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434): Spark should not warn \`metadata directory\` for a HDFS file path.
+- [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434): Spark shouldn't warn \`metadata directory\` for a HDFS file path.
- [SPARK-23436](https://issues.apache.org/jira/browse/SPARK-23436): Infer partition as Date only if it can be cast to Date.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23490](https://issues.apache.org/jira/browse/SPARK-23490): Check storage.locationUri with existing table in CreateTable. -- [SPARK-23524](https://issues.apache.org/jira/browse/SPARK-23524): Big local shuffle blocks should not be checked for corruption.
+- [SPARK-23524](https://issues.apache.org/jira/browse/SPARK-23524): Big local shuffle blocks shouldn't be checked for corruption.
- [SPARK-23525](https://issues.apache.org/jira/browse/SPARK-23525): Support ALTER TABLE CHANGE COLUMN COMMENT for external hive table. -- [SPARK-23553](https://issues.apache.org/jira/browse/SPARK-23553): Tests should not assume the default value of \`spark.sql.sources.default\`.
+- [SPARK-23553](https://issues.apache.org/jira/browse/SPARK-23553): Tests shouldn't assume the default value of \`spark.sql.sources.default\`.
- [SPARK-23569](https://issues.apache.org/jira/browse/SPARK-23569): Allow pandas\_udf to work with python3 style type-annotated functions.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23624](https://issues.apache.org/jira/browse/SPARK-23624): Revise doc of method pushFilters in Datasource V2. -- [SPARK-23628](https://issues.apache.org/jira/browse/SPARK-23628): calculateParamLength should not return 1 + num of expressions.
+- [SPARK-23628](https://issues.apache.org/jira/browse/SPARK-23628): calculateParamLength shouldn't return 1 + num of expressions.
- [SPARK-23630](https://issues.apache.org/jira/browse/SPARK-23630): Allow user's hadoop conf customizations to take effect.
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93159 | [OOZIE-3139](https://issues.apache.org/jira/browse/OOZIE-3139) | Oozie validates workflow incorrectly | | BUG-93936 | [ATLAS-2289](https://issues.apache.org/jira/browse/ATLAS-2289) | Embedded kafka/zookeeper server start/stop code to be moved out of KafkaNotification implementation | | BUG-93942 | [ATLAS-2312](https://issues.apache.org/jira/browse/ATLAS-2312) | Use ThreadLocal DateFormat objects to avoid simultaneous use from multiple threads |
-| BUG-93946 | [ATLAS-2319](https://issues.apache.org/jira/browse/ATLAS-2319) | UI: Deleting a tag which at 25+ position in the tag list in both Flat and Tree structure needs a refresh to remove the tag from the list. |
+| BUG-93946 | [ATLAS-2319](https://issues.apache.org/jira/browse/ATLAS-2319) | UI: Deleting a tag that is at the 25+ position in the tag list, in both Flat and Tree structure, needs a refresh to remove the tag from the list. |
| BUG-94618 | [YARN-5037](https://issues.apache.org/jira/browse/YARN-5037), [YARN-7274](https://issues.apache.org/jira/browse/YARN-7274) | Ability to disable elasticity at leaf queue level | | BUG-94901 | [HBASE-19285](https://issues.apache.org/jira/browse/HBASE-19285) | Add per-table latency histograms | | BUG-95259 | [HADOOP-15185](https://issues.apache.org/jira/browse/HADOOP-15185), [HADOOP-15186](https://issues.apache.org/jira/browse/HADOOP-15186) | Update adls connector to use the current version of ADLS SDK | | BUG-95619 | [HIVE-18551](https://issues.apache.org/jira/browse/HIVE-18551) | Vectorization: VectorMapOperator tries to write too many vector columns for Hybrid Grace |
-| BUG-97223 | [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434) | Spark should not warn \`metadata directory\` for a HDFS file path |
+| BUG-97223 | [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434) | Spark shouldn't warn \`metadata directory\` for a HDFS file path |
**Performance**
Fixed issues represent selected issues that were previously logged via Hortonwor
||-|--| | BUG-100180 | [CALCITE-2232](https://issues.apache.org/jira/browse/CALCITE-2232) | Assertion error on AggregatePullUpConstantsRule while adjusting Aggregate indices | | BUG-100422 | [HIVE-19085](https://issues.apache.org/jira/browse/HIVE-19085) | FastHiveDecimal abs(0) sets sign to +ve |
-| BUG-100834 | [PHOENIX-4658](https://issues.apache.org/jira/browse/PHOENIX-4658) | IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap |
+| BUG-100834 | [PHOENIX-4658](https://issues.apache.org/jira/browse/PHOENIX-4658) | IllegalStateException: requestSeek can't be called on ReversedKeyValueHeap |
| BUG-102078 | [HIVE-17978](https://issues.apache.org/jira/browse/HIVE-17978) | TPCDS queries 58 and 83 generate exceptions in vectorization. | | BUG-92483 | [HIVE-17900](https://issues.apache.org/jira/browse/HIVE-17900) | analyze stats on columns triggered by Compactor generates malformed SQL with &gt; 1 partition column | | BUG-93135 | [HIVE-15874](https://issues.apache.org/jira/browse/HIVE-15874), [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Hive query returning wrong results when set hive.groupby.orderby.position.alias to true |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-97178 | [ATLAS-2467](https://issues.apache.org/jira/browse/ATLAS-2467) | Dependency upgrade for Spring and nimbus-jose-jwt | | BUG-97180 | N/A | Upgrade Nimbus-jose-jwt | | BUG-98038 | [HIVE-18788](https://issues.apache.org/jira/browse/HIVE-18788) | Clean up inputs in JDBC PreparedStatement |
-| BUG-98353 | [HADOOP-13707](https://issues.apache.org/jira/browse/HADOOP-13707) | Revert of "If kerberos is enabled while HTTP SPNEGO isn't configured, some links cannot be accessed" |
+| BUG-98353 | [HADOOP-13707](https://issues.apache.org/jira/browse/HADOOP-13707) | Revert of "If kerberos is enabled while HTTP SPNEGO isn't configured, some links can't be accessed" |
| BUG-98372 | [HBASE-13848](https://issues.apache.org/jira/browse/HBASE-13848) | Access InfoServer SSL passwords through Credential Provider API | | BUG-98385 | [ATLAS-2500](https://issues.apache.org/jira/browse/ATLAS-2500) | Add more headers to Atlas response. | | BUG-98564 | [HADOOP-14651](https://issues.apache.org/jira/browse/HADOOP-14651) | Update okhttp version to 2.7.5 |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93361 | [HIVE-12360](https://issues.apache.org/jira/browse/HIVE-12360) | Bad seek in uncompressed ORC with predicate pushdown | | BUG-93426 | [CALCITE-2086](https://issues.apache.org/jira/browse/CALCITE-2086) | HTTP/413 in certain circumstances due to large Authorization headers | | BUG-93429 | [PHOENIX-3240](https://issues.apache.org/jira/browse/PHOENIX-3240) | ClassCastException from Pig loader |
-| BUG-93485 | N/A | Cannot get table mytestorg.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found when running analyze table on columns in LLAP |
+| BUG-93485 | N/A | Can't get table mytestorg.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found when running analyze table on columns in LLAP |
| BUG-93512 | [PHOENIX-4466](https://issues.apache.org/jira/browse/PHOENIX-4466) | java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data | | BUG-93550 | N/A | Zeppelin %spark.r does not work with spark1 due to scala version mismatch | | BUG-93910 | [HIVE-18293](https://issues.apache.org/jira/browse/HIVE-18293) | Hive is failing to compact tables contained within a folder that isn't owned by identity running HiveMetaStore |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-94928 | [HDFS-11078](https://issues.apache.org/jira/browse/HDFS-11078) | Fix NPE in LazyPersistFileScrubber | | BUG-95013 | [HIVE-18488](https://issues.apache.org/jira/browse/HIVE-18488) | LLAP ORC readers are missing some null checks | | BUG-95077 | [HIVE-14205](https://issues.apache.org/jira/browse/HIVE-14205) | Hive doesn't support union type with AVRO file format |
-| BUG-95200 | [HDFS-13061](https://issues.apache.org/jira/browse/HDFS-13061) | SaslDataTransferClient\#checkTrustAndSend should not trust a partially trusted channel |
+| BUG-95200 | [HDFS-13061](https://issues.apache.org/jira/browse/HDFS-13061) | SaslDataTransferClient\#checkTrustAndSend shouldn't trust a partially trusted channel |
| BUG-95201 | [HDFS-13060](https://issues.apache.org/jira/browse/HDFS-13060) | Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver | | BUG-95284 | [HBASE-19395](https://issues.apache.org/jira/browse/HBASE-19395) | \[branch-1\] TestEndToEndSplitTransaction.testMasterOpsWhileSplitting fails with NPE | | BUG-95301 | [HIVE-18517](https://issues.apache.org/jira/browse/HIVE-18517) | Vectorization: Fix VectorMapOperator to accept VRBs and check vectorized flag correctly to support LLAP Caching |
Fixed issues represent selected issues that were previously logged via Hortonwor
|**Apache Component**|**Apache JIRA**|**Summary**|**Details**| |--|--|--|--| |**Spark 2.3** |**N/A** |**Changes as documented in the Apache Spark release notes** |- There's a "Deprecation" document and a "Change of behavior" guide, https://spark.apache.org/releases/spark-release-2-3-0.html#deprecations<br /><br />- For SQL part, there's another detailed "Migration" guide (from 2.2 to 2.3), https://spark.apache.org/docs/latest/sql-programming-guide.html#upgrading-from-spark-sql-22-to-23|
-|Spark |[**HIVE-12505**](https://issues.apache.org/jira/browse/HIVE-12505) |Spark job completes successfully but there is an HDFS disk quota full error |**Scenario:** Running **insert overwrite** when a quota is set on the Trash folder of the user who runs the command.<br /><br />**Previous Behavior:** The job succeeds even though it fails to move the data to the Trash. The result can wrongly contain some of the data previously present in the table.<br /><br />**New Behavior:** When the move to the Trash folder fails, the files are permanently deleted.|
+|Spark |[**HIVE-12505**](https://issues.apache.org/jira/browse/HIVE-12505) |Spark job completes successfully but there's an HDFS disk quota full error |**Scenario:** Running **insert overwrite** when a quota is set on the Trash folder of the user who runs the command.<br /><br />**Previous Behavior:** The job succeeds even though it fails to move the data to the Trash. The result can wrongly contain some of the data previously present in the table.<br /><br />**New Behavior:** When the move to the Trash folder fails, the files are permanently deleted.|
|**Kafka 1.0**|**N/A**|**Changes as documented in the Apache Spark release notes** |https://kafka.apache.org/10/documentation.html#upgrade_100_notable| |**Hive/ Ranger** | |Another ranger hive policies required for INSERT OVERWRITE |**Scenario:** Another ranger hive policies required for **INSERT OVERWRITE**<br /><br />**Previous behavior:** Hive **INSERT OVERWRITE** queries succeed as usual.<br /><br />**New behavior:** Hive **INSERT OVERWRITE** queries are unexpectedly failing after upgrading to HDP-2.6.x with the error:<br /><br />Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user jdoe does not have WRITE privilege on /tmp/\*(state=42000,code=40000)<br /><br />As of HDP-2.6.0, Hive **INSERT OVERWRITE** queries require a Ranger URI policy to allow write operations, even if the user has write privilege granted through HDFS policy.<br /><br />**Workaround/Expected Customer Action:**<br /><br />1. Create a new policy under the Hive repository.<br />2. In the dropdown where you see Database, select URI.<br />3. Update the path (Example: /tmp/*)<br />4. Add the users and group and save.<br />5. Retry the insert query.| |**HDFS**|**N/A** |HDFS should support for multiple KMS Uris |**Previous Behavior:** dfs.encryption.key.provider.uri property was used to configure the KMS provider path.<br /><br />**New Behavior:** dfs.encryption.key.provider.uri is now deprecated in favor of hadoop.security.key.provider.path to configure the KMS provider path.|
Fixed issues represent selected issues that were previously logged via Hortonwor
 1. Find the PermissionList.js file under /usr/hdp/current/ranger-admin
- 2. Find out definition of renderPolicyCondtion function (line no:404).
+   2. Find the definition of the renderPolicyCondtion function (line no: 404).
- 3. Remove following line from that function i.e under display function(line no:434)
+   3. Remove the following line from that function, that is, under the display function (line no: 434)
val = \_.escape(val);//Line No:460
Fixed issues represent selected issues that were previously logged via Hortonwor
**HDInsight Integration with ADLS Gen 2: User directories and permissions issue with ESP clusters** 1. Home directories for users are not getting created on Head Node 1. Workaround is to create these manually and change ownership to the respective user's UPN.
- 2. Permissions on /hdp is currently not set to 751. This needs to be set to
+ 2. Permissions on /hdp are currently not set to 751. This needs to be set to
 a. chmod 751 /hdp b. chmod -R 755 /hdp/apps ### Deprecation -- **OMS Portal:** We have removed the link from HDInsight resource page that was pointing to OMS portal. Azure Monitor logs initially used its own portal called the OMS portal to manage its configuration and analyze collected data. All functionality from this portal has been moved to the Azure portal where it will continue to be developed. HDInsight has deprecated the support for OMS portal. Customers will use HDInsight Azure Monitor logs integration in Azure portal.
+- **OMS Portal:** We've removed the link from the HDInsight resource page that pointed to the OMS portal. Azure Monitor logs initially used its own portal called the OMS portal to manage its configuration and analyze collected data. All functionality from this portal has been moved to the Azure portal, where it will continue to be developed. HDInsight has deprecated support for the OMS portal. Customers will use the HDInsight Azure Monitor logs integration in the Azure portal.
- **Spark 2.3:** [Spark Release 2.3.0 deprecations](https://spark.apache.org/releases/spark-release-2-3-0.html#deprecations)
hdinsight Hdinsight Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-upgrade-cluster.md
description: Learn guidelines to migrate your Azure HDInsight cluster to a newer
Previously updated : 09/19/2022 Last updated : 10/25/2022 # Migrate HDInsight cluster to a newer version
For more information about database backup and restore, see [Recover a database
As mentioned above, Microsoft recommends that HDInsight clusters be regularly migrated to the latest version in order to take advantage of new features and fixes. See the following list of reasons we would request that a cluster be deleted and redeployed:
-* The cluster version is [Retired](hdinsight-retired-versions.md) or in [Basic support](hdinsight-36-component-versioning.md) and you're having a cluster issue that would be resolved with a newer version.
+* The cluster version is [Retired](hdinsight-retired-versions.md), or you're having a cluster issue that would be resolved with a newer version.
* The root cause of a cluster issue is determined to relate to an undersized VM. [View Microsoft's recommended node configuration](hdinsight-supported-node-configuration.md). * A customer opens a support case and the Microsoft engineering team determines the issue has already been fixed in a newer cluster version. * A default metastore database (Ambari, Hive, Oozie, Ranger) has reached its utilization limit. Microsoft will ask you to recreate the cluster using a [custom metastore](hdinsight-use-external-metadata-stores.md#custom-metastore) database.
hdinsight Hdinsight Version Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-version-release.md
Title: HDInsight 4.0 overview - Azure
description: Compare HDInsight 3.6 to HDInsight 4.0 features, limitations, and upgrade recommendations. Previously updated : 10/13/2022 Last updated : 10/25/2022 # Azure HDInsight 4.0 overview
There's no supported upgrade path from previous versions of HDInsight to HDInsig
* [HBase migration guide](./hbase/apache-hbase-migrate-new-version.md) * [Hive migration guide](./interactive-query/apache-hive-migrate-workloads.md) * [Kafka migration guide](./kafk)
-* [Spark migration guide](./spark/migrate-versions.md)
* [Azure HDInsight Documentation](index.yml) * [Release Notes](hdinsight-release-notes.md)
hdinsight Apache Kafka Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-powershell.md
New-AzStorageContainer -Name $containerName -Context $storageContext
Create an Apache Kafka on HDInsight cluster with [New-AzHDInsightCluster](/powershell/module/az.HDInsight/New-azHDInsightCluster). ```azurepowershell-interactive
-# Create a Kafka 1.1 cluster
+# Create a Kafka 2.4.0 cluster
$clusterName = Read-Host -Prompt "Enter the name of the Kafka cluster" $httpCredential = Get-Credential -Message "Enter the cluster login credentials" -UserName "admin" $sshCredentials = Get-Credential -Message "Enter the SSH user credentials" -UserName "sshuser"
$clusterType="Kafka"
$disksPerNode=2 $kafkaConfig = New-Object "System.Collections.Generic.Dictionary``2[System.String,System.String]"
-$kafkaConfig.Add("kafka", "1.1")
+$kafkaConfig.Add("kafka", "2.4.0")
New-AzHDInsightCluster ` -ResourceGroupName $resourceGroup `
hdinsight Migrate Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/migrate-versions.md
- Title: Migrate Apache Spark 2.1 or 2.2 workloads to 2.3 or 2.4 - Azure HDInsight
-description: Learn how to migrate Apache Spark 2.1 and 2.2 to 2.3 or 2.4.
-- Previously updated : 08/28/2022--
-# Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4
-
-This document explains how to migrate Apache Spark workloads on Spark 2.1 and 2.2 to 2.3 or 2.4.
-
-As discussed in the [Release Notes](../hdinsight-release-notes-archive.md), starting July 1, 2020, the following cluster configurations will not be supported and customers will not be able to create new clusters with these configurations:
-
-Existing clusters in these configurations will run as-is without support from Microsoft. If you are on Spark 2.1 or 2.2 on HDInsight 3.6, move to Spark 2.3 on HDInsight 3.6 by June 30 2020 to avoid potential system/support interruption. If you are on Spark 2.3 on an HDInsight 4.0 cluster, move to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
-
-For general information about migrating an HDInsight cluster from 3.6 to 4.0, see [Migrate HDInsight cluster to a newer version](../hdinsight-upgrade-cluster.md). For general information about migrating to a newer version of Apache Spark, see [Apache Spark: Versioning Policy](https://spark.apache.org/versioning-policy.html).
-
-## Guidance on Spark version upgrades on HDInsight
-
-| Upgrade scenario | Mechanism | Things to consider | Spark/Hive integration |
-||--|--||
-|HDInsight 3.6 Spark 2.1 to HDInsight 3.6 Spark 2.3| Recreate clusters with HDInsight Spark 2.3 | Review the following articles: <br> [Apache Spark: Upgrading From Spark SQL 2.2 to 2.3](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-22-to-23) <br><br> [Apache Spark: Upgrading From Spark SQL 2.1 to 2.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-21-to-22) | No Change |
-|HDInsight 3.6 Spark 2.2 to HDInsight 3.6 Spark 2.3 | Recreate clusters with HDInsight Spark 2.3 | Review the following articles: <br> [Apache Spark: Upgrading From Spark SQL 2.2 to 2.3](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-22-to-23) | No Change |
-| HDInsight 3.6 Spark 2.1 to HDInsight 4.0 Spark 2.4 | Recreate clusters with HDInsight 4.0 Spark 2.4 | Review the following articles: <br> [Apache Spark: Upgrading From Spark SQL 2.3 to 2.4](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-23-to-24) <br><br> [Apache Spark: Upgrading From Spark SQL 2.2 to 2.3](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-22-to-23) <br><br> [Apache Spark: Upgrading From Spark SQL 2.1 to 2.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-21-to-22) | Spark and Hive integration has changed in HDInsight 4.0. <br><br> In HDInsight 4.0, Spark and Hive use independent catalogs for accessing SparkSQL or Hive tables. A table created by Spark lives in the Spark catalog. A table created by Hive lives in the Hive catalog. This behavior is different than HDInsight 3.6 where Hive and Spark shared common catalog. Hive and Spark Integration in HDInsight 4.0 relies on Hive Warehouse Connector (HWC). HWC works as a bridge between Spark and Hive. Learn about Hive Warehouse Connector. <br> In HDInsight 4.0 if you would like to Share the metastore between Hive and Spark, you can do so by changing the property metastore.catalog.default to hive in your Spark cluster. You can find this property in Ambari Advanced spark2-hive-site-override. It's important to understand that sharing of metastore only works for external hive tables, this will not work if you have internal/managed hive tables or ACID tables. <br><br>Read [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](../interactive-query/apache-hive-migrate-workloads.md) for more information.<br><br> |
-| HDInsight 3.6 Spark 2.2 to HDInsight 4.0 Spark 2.4 | Recreate clusters with HDInsight 4.0 Spark 2.4 | Review the following articles: <br> [Apache Spark: Upgrading From Spark SQL 2.3 to 2.4](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-23-to-24) <br><br> [Apache Spark: Upgrading From Spark SQL 2.2 to 2.3](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-22-to-23) | Spark and Hive integration has changed in HDInsight 4.0. <br><br> In HDInsight 4.0, Spark and Hive use independent catalogs for accessing SparkSQL or Hive tables. A table created by Spark lives in the Spark catalog. A table created by Hive lives in the Hive catalog. This behavior is different than HDInsight 3.6 where Hive and Spark shared common catalog. Hive and Spark Integration in HDInsight 4.0 relies on Hive Warehouse Connector (HWC). HWC works as a bridge between Spark and Hive. Learn about Hive Warehouse Connector. <br> In HDInsight 4.0 if you would like to Share the metastore between Hive and Spark, you can do so by changing the property metastore.catalog.default to hive in your Spark cluster. You can find this property in Ambari Advanced spark2-hive-site-override. It's important to understand that sharing of metastore only works for external hive tables, this will not work if you have internal/managed hive tables or ACID tables. <br><br>Read [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](../interactive-query/apache-hive-migrate-workloads.md) for more information.|
-
-## Next steps
-
-* [Migrate HDInsight cluster to a newer version](../hdinsight-upgrade-cluster.md)
-* [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](../interactive-query/apache-hive-migrate-workloads.md)
hdinsight Spark Dotnet Version Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/spark-dotnet-version-update.md
- Title: Updating .NET for Apache Spark to version v1.0 in HDI
-description: Learn about updating .NET for Apache Spark version to 1.0 in HDI and how that affects your existing code and clusters.
---- Previously updated : 07/22/2022--
-# Updating .NET for Apache Spark to version v1.0 in HDInsight
-
-This document talks about the first major version of [.NET for Apache Spark](https://github.com/dotnet/spark), and how it might impact your current production pipelines in HDInsight clusters.
-
-## About .NET for Apache Spark version 1.0.0
-
-This is the first [major official release](https://github.com/dotnet/spark/releases/tag/v1.0.0) of .NET for Apache Spark and provides DataFrame API completeness for Spark 2.4.x and Spark 3.0.x along with other features. For a complete list of all features, improvements and bug fixes, see the official [v1.0.0 release notes](https://github.com/dotnet/spark/blob/master/docs/release-notes/1.0.0/release-1.0.0.md).
-Another important thing to note is that this version is **not** compatible with prior versions of `Microsoft.Spark` and `Microsoft.Spark.Worker`. Check out the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) if you're planning to upgrade your .NET for Apache Spark application to be compatible with v1.0.0.
-
-## Using .NET for Apache Spark v1.0 in HDInsight
-
-While current HDI clusters won't be affected (that is, they'll still have the same version as before), newly created HDI clusters will carry this latest v1.0.0 version of .NET for Apache Spark. What this means if:
--- **You have an older HDI cluster**: If you want to upgrade your Spark .NET application to v1.0.0 (recommended), you'll have to update the `Microsoft.Spark.Worker` version on your HDI cluster. For more information, see the [changing versions of .NET for Apache Spark on HDI cluster section](#changing-net-for-apache-spark-version-on-hdinsight).
-If you don't want to update the current version of .NET for Apache Spark in your application, no further steps are necessary.
--- **You have a new HDI cluster**: If you want to upgrade your Spark .NET application to v1.0.0 (recommended), no steps are needed to change the worker on HDI, however you'll have to refer to the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) to understand the steps needed to update your code and pipelines.
-If you don't want to change the current version of .NET for Apache Spark in your application, you need to change the version on your HDI cluster from v1.0 (default on new clusters) to whichever version you're using. For more information, see the [changing versions of .NET for Apache Spark on HDI cluster section](spark-dotnet-version-update.md#changing-net-for-apache-spark-version-on-hdinsight).
-
-## Changing .NET for Apache Spark version on HDInsight
-
-### Deploy Microsoft.Spark.Worker
-
-`Microsoft.Spark.Worker` is a backend component that lives on the individual worker nodes of your Spark cluster. When you want to execute a C# UDF (user-defined function), Spark needs to understand how to launch the .NET CLR to execute this UDF. `Microsoft.Spark.Worker` provides a collection of classes to Spark that enable this functionality. Select the worker version depending on the version of .NET for Apache Spark you want to deploy on the HDI cluster.
-
-1. Download the Microsoft.Spark.Worker Linux release of your particular version. For example, if you want `.NET for Apache Spark v1.0.0`, you'd download [Microsoft.Spark.Worker.netcoreapp3.1.linux-x64-1.0.0.tar.gz](https://github.com/dotnet/spark/releases/tag/v1.0.0).
-
-2. Download [install-worker.sh](https://github.com/dotnet/spark/blob/master/deployment/install-worker.sh) script to install the worker binaries downloaded in Step 1 to all the worker nodes of your HDI cluster.
-
-3. Upload the above mentioned files to the Azure Storage account your cluster has access to. For more information, see [.NET for Apache Spark HDI deployment article](/dotnet/spark/tutorials/hdinsight-deployment#upload-files-to-azure) for more details.
-
-4. Run the `install-worker.sh` script on all worker nodes of your cluster, using Script actions. For more information, see [.NET for Apache Spark HDI deployment article](/dotnet/spark/tutorials/hdinsight-deployment#run-the-hdinsight-script-action).
-
-### Update your application to use specific version
-
-You can update your .NET for Apache Spark application to use a specific version by choosing the required version of the [Microsoft.Spark NuGet package](https://www.nuget.org/packages/Microsoft.Spark/) in your project. Be sure to check out the release notes of the particular version and the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) as mentioned above, if choosing to update your application to v1.0.0.
-
-## FAQs
-
-### Will my existing HDI cluster with version < 1.0.0 start failing with the new release?
-
-Existing HDI clusters will continue to have the same previous version for .NET for Apache Spark and your existing application (having previous version of Spark .NET) won't be affected.
-
-## Next steps
-
-[Deploy your .NET for Apache Spark application on HDInsight](/dotnet/spark/tutorials/hdinsight-deployment)
healthcare-apis Export Dicom Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/export-dicom-files.md
The only setting is the list of identifiers to export.
| :- | :- | : | :- | | `Values` | Yes | | A list of one or more DICOM study, series, and/or SOP instance identifiers in the format of `"<StudyInstanceUID>[/<SeriesInstanceUID>[/<SOPInstanceUID>]]"`. |
-### Destination Settings
+### Destination settings
The connection to the Azure Blob storage account is specified with a `BlobContainerUri`.
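To see how the source and destination settings fit together, here's a minimal sketch of an export request sent from Python with the `requests` library. The service URL, bearer token, endpoint path (`/v1/export`), and the `identifiers`/`azureblob` type names are illustrative assumptions; check the export API reference for the exact endpoint and payload shape.

```python
import requests

# Hypothetical values, for illustration only.
service_url = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com"
access_token = "<Azure AD bearer token>"

export_request = {
    # Source: the study/series/instance identifiers to export (the Values setting above).
    "source": {
        "type": "identifiers",
        "settings": {"values": ["1.2.3.4.5", "1.2.3.4.5/6.7.8"]},
    },
    # Destination: the Azure Blob container, specified with BlobContainerUri.
    "destination": {
        "type": "azureblob",
        "settings": {"blobContainerUri": "https://dicomexport.blob.core.windows.net/export"},
    },
}

response = requests.post(
    f"{service_url}/v1/export",
    json=export_request,
    headers={"Authorization": f"Bearer {access_token}"},
)
response.raise_for_status()
print(response.status_code, response.text)
```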
Content-Type: application/json
"lastUpdatedTime": "2022-09-08T16:41:01.2776644Z", "status": "completed", "results": {
- "errorHref": "<container uri>/4853cda8c05c44e497d2bc071f8e92c4/errors.log",
+ "errorHref": "https://dicomexport.blob.core.windows.net/export/4853cda8c05c44e497d2bc071f8e92c4/errors.log",
"exported": 1000, "skipped": 3 }
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Azure Health Data Services is a set of managed API services based on open standa
| Enhancements/Improvements | Related information | | : | :- |
-| Export is GA |The export feature for the DICOM service is now generally available. Export enables a user-supplied list of studies, series, and/or instances to be exported in bulk to an Azure Storage account. Learn more about the [export feature](https://github.com/microsoft/dicom-server/blob/main/docs/how-to-guides/export-data.md). |
+| Export is GA |The export feature for the DICOM service is now generally available. Export enables a user-supplied list of studies, series, and/or instances to be exported in bulk to an Azure Storage account. Learn more about the [export feature](dicom/export-dicom-files.md). |
|Improved deployment performance |Performance improvements have cut the time to deploy new instances of the DICOM service by more than 55% at the 50th percentile. | | Reduced strictness when validating STOW requests |Some customers have run into issues storing DICOM files that do not perfectly conform to the specification. To enable those files to be stored in the DICOM service, we have reduced the strictness of the validation performed on STOW. <p>The service will now accept the following: <p><ul><li>DICOM UIDs that contain trailing whitespace <li>IS, DS, SV, and UV VRs that are not valid numbers<li>Invalid private creator tags |
iot-dps About Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/about-iot-dps.md
Title: Overview of the Microsoft Azure IoT Hub Device Provisioning Service
-description: Describes device provisioning in Azure with the Device Provisioning Service (DPS) and IoT Hub
+ Title: Overview of Azure IoT Hub Device Provisioning Service
+description: Describes production scale device provisioning in Azure with the Device Provisioning Service (DPS) and IoT Hub
Previously updated : 11/22/2021 Last updated : 10/14/2022
Microsoft Azure provides a rich set of integrated public cloud services for all your IoT solution needs. The IoT Hub Device Provisioning Service (DPS) is a helper service for IoT Hub that enables zero-touch, just-in-time provisioning to the right IoT hub without requiring human intervention. DPS enables the provisioning of millions of devices in a secure and scalable manner.
+Many of the manual steps traditionally involved in provisioning are automated with DPS to reduce the time to deploy IoT devices and lower the risk of manual error. The following diagram describes what goes on behind the scenes to get a device provisioned. The first step is manual; all of the following steps are automated.
++
+Before the device provisioning flow begins, there are two manual steps to prepare. On the device side, the device manufacturer prepares the device for provisioning by preconfiguring it with its authentication credentials and assigned Device Provisioning Service ID and endpoint. On the cloud side, you or the device manufacturer prepares the Device Provisioning Service instance with individual enrollments and enrollment groups that identify valid devices and define how they should be provisioned.
+
+Once the device and cloud are set up for provisioning, the following steps kick off automatically as soon as the device powers on for the first time (a device-side code sketch follows these steps):
+
+1. When the device first powers on, it connects to the DPS endpoint and presents its authentication credentials.
+1. The DPS instance checks the identity of the device against its enrollment list. Once the device identity is verified, DPS assigns the device to an IoT hub and registers it in the hub.
+1. The DPS instance receives the device ID and registration information from the assigned hub and passes that information back to the device.
+1. The device uses its registration information to connect directly to its assigned IoT hub and authenticate.
+1. Once authenticated, the device and IoT hub begin communicating directly. The DPS instance has no further role as an intermediary unless the device needs to reprovision.
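As a device-side illustration of these steps, here's a minimal sketch using the `azure-iot-device` Python SDK with symmetric key attestation. The ID scope, registration ID, and key are placeholder values, and retry, error handling, and reprovisioning logic are omitted.

```python
from azure.iot.device import IoTHubDeviceClient, ProvisioningDeviceClient

# Values preconfigured on the device before it ships (the manual preparation step).
PROVISIONING_HOST = "global.azure-devices-provisioning.net"
ID_SCOPE = "0ne00XXXXXX"          # placeholder
REGISTRATION_ID = "device-001"    # placeholder
SYMMETRIC_KEY = "<device key>"    # placeholder

# Steps 1-2: connect to the DPS endpoint and present credentials for verification.
provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host=PROVISIONING_HOST,
    registration_id=REGISTRATION_ID,
    id_scope=ID_SCOPE,
    symmetric_key=SYMMETRIC_KEY,
)
registration_result = provisioning_client.register()

# Steps 3-4: DPS hands back the assigned IoT hub and device registration info.
if registration_result.status == "assigned":
    assigned_hub = registration_result.registration_state.assigned_hub
    device_id = registration_result.registration_state.device_id

    # Step 5: from now on the device talks directly to its assigned IoT hub.
    device_client = IoTHubDeviceClient.create_from_symmetric_key(
        symmetric_key=SYMMETRIC_KEY,
        hostname=assigned_hub,
        device_id=device_id,
    )
    device_client.connect()
    device_client.send_message("provisioned and connected")
    device_client.disconnect()
```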
+ ## When to use Device Provisioning Service There are many provisioning scenarios in which DPS is an excellent choice for getting devices connected and configured to IoT Hub, such as:
There are many provisioning scenarios in which DPS is an excellent choice for ge
* Reprovisioning based on a change in the device * Rolling the keys used by the device to connect to IoT Hub (when not using X.509 certificates to connect)
-Provisioning of nested edge devices (parent/child hierarchies) is not currently supported by DPS.
-
-## Behind the scenes
-
-All the scenarios listed in the previous section can be done using DPS for zero-touch provisioning with the same flow. Many of the manual steps traditionally involved in provisioning are automated with DPS to reduce the time to deploy IoT devices and lower the risk of manual error. The following section describes what goes on behind the scenes to get a device provisioned. The first step is manual, all of the following steps are automated.
-
-![Basic provisioning flow](./media/about-iot-dps/dps-provisioning-flow.png)
-
-1. Device manufacturer adds the device registration information to the enrollment list in the Azure portal.
-2. Device contacts the DPS endpoint set at the factory. The device passes the identifying information to DPS to prove its identity.
-3. DPS validates the identity of the device by validating the registration ID and key against the enrollment list entry using either a nonce challenge ([Trusted Platform Module](https://trustedcomputinggroup.org/work-groups/trusted-platform-module/)) or standard X.509 verification (X.509).
-4. DPS registers the device with an IoT hub and populates the device's [desired twin state](../iot-hub/iot-hub-devguide-device-twins.md).
-5. The IoT hub returns device ID information to DPS.
-6. DPS returns the IoT hub connection information to the device. The device can now start sending data directly to the IoT hub.
-7. The device connects to IoT hub.
-8. The device gets the desired state from its device twin in IoT hub.
+Provisioning of nested IoT Edge devices (parent/child hierarchies) is not currently supported by DPS.
## Provisioning process
This step is about configuring the cloud for proper automatic provisioning. Gene
There is a one-time initial setup of the provisioning that must occur, which is usually handled by the solution operator. Once the provisioning service is configured, it does not have to be modified unless the use case changes.
-After the service has been configured for automatic provisioning, it must be prepared to enroll devices. This step is done by the device operator, who knows the desired configuration of the device(s) and is in charge of making sure the provisioning service can properly attest to the device's identity when it comes looking for its IoT hub. The device operator takes the identifying key information from the manufacturer and adds it to the enrollment list. There can be subsequent updates to the enrollment list as new entries are added or existing entries are updated with the latest information about the devices.
+After the service has been configured for automatic provisioning, it must be prepared to enroll devices. This step is done by the device operator, who knows the desired configuration of the device(s) and is in charge of making sure the provisioning service can properly attest to the device's identity when it looks for its IoT hub. The device operator takes the identifying key information from the manufacturer and adds it to the enrollment list. There can be subsequent updates to the enrollment list as new entries are added or existing entries are updated with the latest information about the devices.
## Registration and provisioning
DPS has many features, making it ideal for provisioning devices.
* **Secure attestation** support for both X.509 and TPM-based identities. * **Enrollment list** containing the complete record of devices/groups of devices that may at some point register. The enrollment list contains information about the desired configuration of the device once it registers, and it can be updated at any time.
-* **Multiple allocation policies** to control how DPS assigns devices to IoT hubs in support of your scenarios: Lowest latency, evenly weighted distribution (default), and static configuration via the enrollment list. Latency is determined using the same method as [Traffic Manager](../traffic-manager/traffic-manager-routing-methods.md#performance).
+* **Multiple allocation policies** to control how DPS assigns devices to IoT hubs in support of your scenarios: Lowest latency, evenly weighted distribution (default), and static configuration. Latency is determined using the same method as [Traffic Manager](../traffic-manager/traffic-manager-routing-methods.md#performance). Custom allocation, which lets you implement your own allocation policies via webhooks hosted in Azure Functions, is also supported.
* **Monitoring and diagnostics logging** to make sure everything is working properly. * **Multi-hub support** allows DPS to assign devices to more than one IoT hub. DPS can talk to hubs across multiple Azure subscriptions. * **Cross-region support** allows DPS to assign devices to IoT hubs in other regions. * **Encryption for data at rest** allows data in DPS to be encrypted and decrypted transparently using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant.
-You can learn more about the concepts and features involved in device provisioning by reviewing the [DPS terminology](concepts-service.md) topic along with the other conceptual topics in the same section.
+You can learn more about the concepts and features involved in device provisioning by reviewing the [DPS terminology](concepts-service.md) article along with the other conceptual articles in the same section.
## Cross-platform support
-Just like all Azure IoT services, DPS works cross-platform with a variety of operating systems. Azure offers open-source SDKs in a variety of [languages](https://github.com/Azure/azure-iot-sdks) to facilitate connecting devices and managing the service. DPS supports the following protocols for connecting devices:
+Just like all Azure IoT services, DPS works cross-platform with various operating systems. Azure offers open-source SDKs in various [languages](https://github.com/Azure/azure-iot-sdks) to facilitate connecting devices and managing the service. DPS supports the following protocols for connecting devices:
* HTTPS * AMQP
iot-dps Concepts Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-deploy-at-scale.md
An important part of the overall deployment is monitoring the solution end-to-en
## Next steps -- [Provision devices across load-balanced IoT Hubs](tutorial-provision-multiple-hubs.md)
+- [Provision devices across IoT Hubs](how-to-use-allocation-policies.md)
- [Retry timing](https://github.com/Azure/azure-sdk-for-c/blob/main/sdk/docs/iot/mqtt_state_machine.md#retry-timing) when retrying operations
iot-dps Concepts Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-service.md
Title: Terminology used with Azure IoT Hub Device Provisioning Service | Microso
description: Describes common terminology used with the Device Provisioning Service (DPS) and IoT Hub Previously updated : 09/18/2019 Last updated : 10/24/2022
This article gives an overview of the provisioning concepts most applicable to m
## Service operations endpoint
-The service operations endpoint is the endpoint for managing the service settings and maintaining the enrollment list. This endpoint is only used by the service administrator; it is not used by devices.
+The service operations endpoint is the endpoint for managing the service settings and maintaining the enrollment list. This endpoint is only used by the service administrator; it isn't used by devices.
## Device provisioning endpoint
The device provisioning endpoint is the single endpoint all devices use for auto
## Linked IoT hubs
-The Device Provisioning Service can only provision devices to IoT hubs that have been linked to it. Linking an IoT hub to an instance of the Device Provisioning Service gives the service read/write permissions to the IoT hub's device registry; with the link, a Device Provisioning Service can register a device ID and set the initial configuration in the device twin. Linked IoT hubs may be in any Azure region. You may link hubs in other subscriptions to your provisioning service.
+DPS can only provision devices to IoT hubs that have been linked to it. Linking an IoT hub to a DPS instance gives the service read/write permissions to the IoT hub's device registry. With these permissions, DPS can register a device ID and set the initial configuration in the device twin. Linked IoT hubs may be in any Azure region. You may link hubs in other subscriptions to your DPS instance. Settings on a linked IoT hub, for example, the allocation weight setting, determine how it participates in allocation policies.
## Allocation policy
-The service-level setting that determines how Device Provisioning Service assigns devices to an IoT hub. There are four supported allocation policies:
+Allocation policies determine how DPS assigns devices to an IoT hub. Each DPS instance has a default allocation policy, but this policy can be overridden by an allocation policy set on an enrollment. Only IoT hubs [linked](#linked-iot-hubs) to the DPS instance can participate in allocation.
-* **Evenly weighted distribution**: linked IoT hubs are equally likely to have devices provisioned to them. The default setting. If you are provisioning devices to only one IoT hub, you can keep this setting.
+There are four supported allocation policies:
-* **Lowest latency**: devices are provisioned to an IoT hub with the lowest latency to the device. If multiple linked IoT hubs would provide the same lowest latency, the provisioning service hashes devices across those hubs
+* **Evenly weighted distribution**: devices are provisioned to an IoT hub using a weighted hash. By default, linked IoT hubs have the same allocation weight setting, so they're equally likely to have devices provisioned to them. The allocation weight of an IoT hub may be adjusted to increase or decrease its likelihood of being assigned. This is the default allocation policy for a DPS instance. If you're provisioning devices to only one IoT Hub, we recommend using this policy.
-* **Static configuration via the enrollment list**: specification of the desired IoT hub in the enrollment list takes priority over the service-level allocation policy.
+* **Lowest latency**: devices are provisioned to an IoT hub with the lowest latency to the device. If multiple linked IoT hubs would provide the same lowest latency, DPS hashes devices across those IoT hubs based on their configured allocation weight.
-* **Custom (Use Azure Function)**: A [custom allocation policy](concepts-custom-allocation.md) gives you more control over how devices are assigned to an IoT hub. This is accomplished by using custom code in an Azure Function to assign devices to an IoT hub. The device provisioning service calls your Azure Function code providing all relevant information about the device and the enrollment to your code. Your function code is executed and returns the IoT hub information used to provisioning the device.
+* **Static configuration**: devices are provisioned to a single IoT hub, which must be specified on the enrollment.
+
+* **Custom (Use Azure Function)**: A [custom allocation policy](concepts-custom-allocation.md) gives you more control over how devices are assigned to an IoT hub. This is accomplished by using a custom webhook hosted in Azure Functions to assign devices to an IoT hub. DPS calls your webhook, providing all relevant information about the device and the enrollment. Your webhook returns the IoT hub and, optionally, an initial device twin used to provision the device; a minimal webhook sketch follows this list. This policy can't be set as the DPS instance default policy.
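To make the custom policy concrete, below is a rough sketch of such a webhook as a Python Azure Function (HTTP trigger; the function.json binding is omitted). The request and response field names (`deviceRuntimeContext`, `linkedHubs`, `iotHubHostName`, `initialTwin`) follow the custom allocation article linked above but should be treated as assumptions and validated there; the hub-selection logic is purely illustrative.

```python
import json

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # DPS posts the enrollment record, the device's runtime context, and the
    # linked IoT hubs that are candidates for this enrollment.
    body = req.get_json()
    registration_id = body.get("deviceRuntimeContext", {}).get("registrationId", "")
    linked_hubs = body.get("linkedHubs", [])

    if not linked_hubs:
        return func.HttpResponse("No linked IoT hubs to choose from.", status_code=400)

    # Illustrative policy only: devices whose ID ends in an even digit go to the
    # first linked hub, everything else goes to the last one.
    even = bool(registration_id) and registration_id[-1] in "02468"
    chosen_hub = linked_hubs[0] if even else linked_hubs[-1]

    response = {
        "iotHubHostName": chosen_hub,
        # Optional: seed the device twin with initial desired properties.
        "initialTwin": {"properties": {"desired": {"allocatedBy": "custom-webhook-sketch"}}},
    }
    return func.HttpResponse(json.dumps(response), mimetype="application/json")
```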
## Enrollment An enrollment is the record of devices or groups of devices that may register through auto-provisioning. The enrollment record contains information about the device or group of devices, including:-- the [attestation mechanism](#attestation-mechanism) used by the device-- the optional initial desired configuration-- desired IoT hub-- the desired device ID+
+* the [attestation mechanism](#attestation-mechanism) used by the device
+* the optional initial desired configuration
+* the [allocation policy](#allocation-policy) to use to assign devices to an IoT hub; if not specified on the enrollment, the DPS instance default allocation policy is used.
+* the [linked IoT hub(s)](#linked-iot-hubs) to apply the allocation policy to. For the *Static configuration* allocation policy, a single IoT hub must be specified. For all other allocation policies, one or more IoT hubs may be specified; if no IoT hubs are specified on the enrollment, all the IoT hubs linked to the DPS instance are used.
+* the desired device ID (individual enrollments only)
There are two types of enrollments supported by Device Provisioning Service:
An enrollment group is a group of devices that share a specific attestation mechanism. Enrollment groups support X.509 certificate or symmetric key attestation. Devices in an X.509 enrollment group present X.509 certificates that have been signed by the same root or intermediate Certificate Authority (CA). The subject common name (CN) of each device's end-entity (leaf) certificate becomes the registration ID for that device. Devices in a symmetric key enrollment group present SAS tokens derived from the group symmetric key.
-The name of the enrollment group as well as the registration IDs presented by devices must be case-insensitive strings of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The enrollment group name can be up to 128 characters long. In symmetric key enrollment groups, the registration IDs presented by devices can be up to 128 characters long. However, in X.509 enrollment groups, because the maximum length of the subject common name in an X.509 certificate is 64 characters, the registration IDs are limited to 64 characters.
+The name of the enrollment group and the registration IDs presented by devices must be case-insensitive strings of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The enrollment group name can be up to 128 characters long. In symmetric key enrollment groups, the registration IDs presented by devices can be up to 128 characters long. However, in X.509 enrollment groups, because the maximum length of the subject common name in an X.509 certificate is 64 characters, the registration IDs are limited to 64 characters.
For devices in an enrollment group, the registration ID is also used as the device ID that is registered to IoT Hub.
An attestation mechanism is the method used for confirming a device's identity.
The Device Provisioning Service supports the following forms of attestation:

* **X.509 certificates** based on the standard X.509 certificate authentication flow. For more information, see [X.509 attestation](concepts-x509-attestation.md).
-* **Trusted Platform Module (TPM)** based on a nonce challenge, using the TPM standard for keys to present a signed Shared Access Signature (SAS) token. This does not require a physical TPM on the device, but the service expects to attest using the endorsement key per the [TPM spec](https://trustedcomputinggroup.org/work-groups/trusted-platform-module/). For more information, see [TPM attestation](concepts-tpm-attestation.md).
+* **Trusted Platform Module (TPM)** based on a nonce challenge, using the TPM standard for keys to present a signed Shared Access Signature (SAS) token. This doesn't require a physical TPM on the device, but the service expects to attest using the endorsement key per the [TPM spec](https://trustedcomputinggroup.org/work-groups/trusted-platform-module/). For more information, see [TPM attestation](concepts-tpm-attestation.md).
* **Symmetric Key** based on shared access signature (SAS) [SAS tokens](../iot-hub/iot-hub-dev-guide-sas.md#sas-tokens), which include a hashed signature and an embedded expiration. For more information, see [Symmetric key attestation](concepts-symmetric-key-attestation.md).

## Hardware security module
The hardware security module, or HSM, is used for secure, hardware-based storage
> [!TIP]
> We strongly recommend using an HSM with devices to securely store secrets on your devices.
-Device secrets may also be stored in software (memory), but it is a less secure form of storage than an HSM.
+Device secrets may also be stored in software (memory), but it's a less secure form of storage than an HSM.
## ID scope
-The ID scope is assigned to a Device Provisioning Service when it is created by the user and is used to uniquely identify the specific provisioning service the device will register through. The ID scope is generated by the service and is immutable, which guarantees uniqueness.
+The ID scope is assigned to a Device Provisioning Service when it's created by the user and is used to uniquely identify the specific provisioning service the device will register through. The ID scope is generated by the service and is immutable, which guarantees uniqueness.
> [!NOTE]
> Uniqueness is important for long-running deployment operations and merger and acquisition scenarios.

## Registration
-A registration is the record of a device successfully registering/provisioning to an IoT Hub via the Device Provisioning Service. Registration records are created automatically; they can be deleted, but they cannot be updated.
+A registration is the record of a device successfully registering/provisioning to an IoT Hub via the Device Provisioning Service. Registration records are created automatically; they can be deleted, but they can't be updated.
## Registration ID

The registration ID is used to uniquely identify a device registration with the Device Provisioning Service. The registration ID must be unique in the provisioning service [ID scope](#id-scope). Each device must have a registration ID. The registration ID is a case-insensitive string of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). DPS supports registration IDs up to 128 characters long.
-* In the case of TPM, the registration ID is provided by the TPM itself.
-* In the case of X.509-based attestation, the registration ID is set to the subject common name (CN) of the device certificate. For this reason, the common name must adhere to the registration ID string format. However, the registration ID is limited to 64 characters because that's the maximum length of the subject common name in an X.509 certificate.
+* With TPM attestation, the registration ID is provided by the TPM itself.
+* With X.509-based attestation, the registration ID is set to the subject common name (CN) of the device certificate. For this reason, the common name must adhere to the registration ID string format. However, the registration ID is limited to 64 characters because that's the maximum length of the subject common name in an X.509 certificate.
## Device ID
-The device ID is the ID as it appears in IoT Hub. The desired device ID may be set in the enrollment entry, but it is not required to be set. Setting the desired device ID is only supported in individual enrollments. If no desired device ID is specified in the enrollment list, the registration ID is used as the device ID when registering the device. Learn more about [device IDs in IoT Hub](../iot-hub/iot-hub-devguide-identity-registry.md).
+The device ID is the ID as it appears in IoT Hub. The desired device ID may be set in the enrollment entry, but it isn't required to be set. Setting the desired device ID is only supported in individual enrollments. If no desired device ID is specified in the enrollment list, the registration ID is used as the device ID when registering the device. Learn more about [device IDs in IoT Hub](../iot-hub/iot-hub-devguide-identity-registry.md).
## Operations
iot-dps How To Manage Enrollments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-manage-enrollments.md
# How to manage device enrollments with Azure portal
-A *device enrollment* creates a record of a single device or a group of devices that may at some point register with the Azure IoT Hub Device Provisioning Service (DPS). The enrollment record contains the initial configuration for the device(s) as part of that enrollment. Included in the configuration is either the IoT hub to which a device will be assigned, or an allocation policy that configures the hub from a set of hubs. This article shows you how to manage device enrollments for your provisioning service.
+A *device enrollment* creates a record of a single device or a group of devices that may at some point register with the Azure IoT Hub Device Provisioning Service (DPS). The enrollment record contains the initial configuration for the device(s) as part of that enrollment. Included in the configuration is either the IoT hub to which a device will be assigned, or an allocation policy that configures the IoT hub from a set of IoT hubs. This article shows you how to manage device enrollments for your provisioning service.
The Azure IoT Device Provisioning Service supports two types of enrollments:
To create a symmetric key enrollment group:
| **Group name** | The name of the group of devices. The enrollment group name is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`).|
| **Attestation Type** |Select **Symmetric Key**.|
| **Auto Generate Keys** |Check this box.|
- | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific hub|
- | **Select the IoT hubs this group can be assigned to** |Select one of your hubs.|
+ | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific IoT hub. To learn more about allocation policies, see [How to use allocation policies](how-to-use-allocation-policies.md).|
+ | **Select the IoT hubs this group can be assigned to** |Select one of your linked IoT hubs. To learn more about linking IoT hubs to your DPS instance, see [How to link and manage IoT hubs](how-to-manage-linked-iot-hubs.md).|
Leave the rest of the fields at their default values.
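
If you prefer to script this step, a similar enrollment group can be created with the Azure CLI. The following sketch uses the example names *MyExampleDps*, *MyResourceGroup*, and *MyExampleHub* from these articles plus a placeholder enrollment group ID, and it assumes that `az iot dps enrollment-group create` accepts the same `--allocation-policy` and `--iot-hubs` parameters shown for `az iot dps enrollment-group update` elsewhere in these articles:

```azurecli
# Example only: 'mysymmetrickeygroup' is a placeholder enrollment group ID
az iot dps enrollment-group create --dps-name MyExampleDps --resource-group MyResourceGroup --enrollment-id mysymmetrickeygroup --allocation-policy static --iot-hubs MyExampleHub.azure-devices.net
```

When no key or certificate is supplied, the service generates the group's symmetric keys, which mirrors the **Auto Generate Keys** option in the portal.
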
To create a symmetric key individual enrollment:
| **Auto Generate Keys** |Check this box. |
| **Registration ID** | Type in a unique registration ID.|
| **IoT Hub Device ID** | This ID will represent your device. It must follow the rules for a device ID. For more information, see [Device identity properties](../iot-hub/iot-hub-devguide-identity-registry.md). If the device ID is left unspecified, then the registration ID will be used.|
- | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific hub|
- | **Select the IoT hubs this group can be assigned to** |Select one of your hubs.|
+ | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific IoT hub. To learn more about allocation policies, see [How to use allocation policies](how-to-use-allocation-policies.md).|
+ | **Select the IoT hubs this group can be assigned to** |Select one of your linked IoT hubs. To learn more about linking IoT hubs to your DPS instance, see [How to link and manage IoT hubs](how-to-manage-linked-iot-hubs.md).|
:::image type="content" source="./media/how-to-manage-enrollments/add-individual-enrollment-symm-key.png" alt-text="Add individual enrollment for symmetric key attestation.":::
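
A comparable individual enrollment can also be created from the command line. This is only a sketch: the registration ID and device ID below are placeholders, and it assumes the `az iot dps enrollment create` command with symmetric key attestation and the same `--allocation-policy`/`--iot-hubs` parameters shown for enrollment groups:

```azurecli
# Placeholder IDs: replace 'my-sample-device' with your own registration ID and device ID
az iot dps enrollment create --dps-name MyExampleDps --resource-group MyResourceGroup --enrollment-id my-sample-device --attestation-type symmetrickey --allocation-policy static --iot-hubs MyExampleHub.azure-devices.net --device-id my-sample-device
```
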
To create an X.509 certificate individual enrollment:
| **Mechanism** | Select *X.509* |
| **Primary Certificate .pem or .cer file** | Upload a certificate from which you may generate leaf certificates. If choosing .cer file, only base-64 encoded certificate is accepted. |
| **IoT Hub Device ID** | This ID will represent your device. It must follow the rules for a device ID. For more information, see [Device identity properties](../iot-hub/iot-hub-devguide-identity-registry.md). The device ID must be the subject name on the device certificate that you upload for the enrollment. That subject name must conform to the rules for a device ID.|
- | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific hub|
- | **Select the IoT hubs this group can be assigned to** |Select one of your hubs.|
+ | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific IoT hub. To learn more about allocation policies, see [How to use allocation policies](how-to-use-allocation-policies.md).|
+ | **Select the IoT hubs this group can be assigned to** |Select one of your linked IoT hubs. To learn more about linking IoT hubs to your DPS instance, see [How to link and manage IoT hubs](how-to-manage-linked-iot-hubs.md).|
:::image type="content" source="./media/how-to-manage-enrollments/add-individual-enrollment-cert.png" alt-text="Add individual enrollment for X.509 certificate attestation.":::
To create a TPM individual enrollment:
| **Endorsement Key** | The unique endorsement key of the TPM device. |
| **Registration ID** | Type in a unique registration ID.|
| **IoT Hub Device ID** | This ID will represent your device. It must follow the rules for a device ID. For more information, see [Device identity properties](../iot-hub/iot-hub-devguide-identity-registry.md). If the device ID is left unspecified, then the registration ID will be used.|
- | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific hub|
- | **Select the IoT hubs this group can be assigned to** |Select one of your hubs.|
+ | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific IoT hub. To learn more about allocation policies, see [How to use allocation policies](how-to-use-allocation-policies.md).|
+ | **Select the IoT hubs this group can be assigned to** |Select one of your linked IoT hubs. To learn more about linking IoT hubs to your DPS instance, see [How to link and manage IoT hubs](how-to-manage-linked-iot-hubs.md).|
:::image type="content" source="./media/how-to-manage-enrollments/add-individual-enrollment-tpm.png" alt-text="Add individual enrollment for TPM attestation.":::
iot-dps How To Manage Linked Iot Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-manage-linked-iot-hubs.md
+
+ Title: How to manage linked IoT hubs with Device Provisioning Service (DPS)
+description: This article shows how to link and manage IoT hubs with the Device Provisioning Service (DPS).
++ Last updated : 10/24/2022++++++
+# How to link and manage IoT hubs
+
+Azure IoT Hub Device Provisioning Service (DPS) can provision devices across one or more IoT hubs. Before DPS can provision devices to an IoT hub, that hub must be linked to your DPS instance. Once linked, an IoT hub can be used in an allocation policy. Allocation policies determine how DPS assigns devices to IoT hubs. This article provides instructions on how to link IoT hubs and manage them in your DPS instance.
+
+## Linked IoT hubs and allocation policies
+
+DPS can only provision devices to IoT hubs that have been linked to it. Linking an IoT hub to a DPS instance gives the service read/write permissions to the IoT hub's device registry. With these permissions, DPS can register a device ID and set the initial configuration in the device twin. Linked IoT hubs may be in any Azure region. You may link hubs in other subscriptions to your DPS instance.
+
+After an IoT hub is linked to DPS, it's eligible to participate in allocation. Whether and how it will participate in allocation depends on settings in the enrollment that a device provisions through and settings on the linked IoT hub itself.
+
+The following settings control how DPS uses linked IoT hubs:
+
+* **Connection string**: Sets the IoT Hub connection string that DPS uses to connect to the linked IoT hub. The connection string is based on one of the IoT hub's shared access policies. DPS needs the following permissions on the IoT hub: *RegistryWrite* and *ServiceConnect*. The connection string must be for a shared access policy that has these permissions. To learn more about IoT Hub shared access policies, see [IoT Hub access control and permissions](../iot-hub/iot-hub-dev-guide-sas.md#access-control-and-permissions).
+
+* **Allocation weight**: Determines the likelihood of an IoT hub being selected when DPS hashes device assignment across a set of IoT hubs. The value can be between one and 1000. The default is one (or **null**). Higher values increase the IoT hub's probability of being selected.
+
+* **Apply allocation policy**: Sets whether the IoT hub participates in allocation policy. The default is **Yes** (true). If set to **No** (false), devices won't be assigned to the IoT hub. The IoT hub can still be selected on an enrollment, but it won't participate in allocation. You can use this setting to temporarily or permanently remove an IoT hub from participating in allocation; for example, if it's approaching the allowed number of devices.
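+
+Before changing these settings, you may want to check how a linked IoT hub is currently configured. A minimal Azure CLI sketch (assuming the `az iot dps linked-hub show` command is available in your CLI version, and using the example names from this article) looks like this:
+
+```azurecli
+az iot dps linked-hub show --dps-name MyExampleDps --resource-group MyResourceGroup --linked-hub MyExampleHub
+```
+
+The output includes the linked IoT hub's `allocationWeight` and `applyAllocationPolicy` values.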
+
+To learn about DPS allocation policies and how linked IoT hubs participate in them, see [Manage allocation policies](how-to-use-allocation-policies.md).
+
+## Add a linked IoT hub
+
+When you link an IoT hub to your DPS instance, it becomes available to participate in allocation. You can add IoT hubs that are inside or outside of your subscription. When you link an IoT hub, it may or may not be available for allocations in existing enrollments:
+
+* For enrollments that don't explicitly set the IoT hubs to apply allocation policy to, a newly linked IoT hub immediately begins participating in allocation.
+
+* For enrollments that do explicitly set the IoT hubs to apply allocation policy to, you'll need to manually or programmatically add the new IoT hub to the enrollment settings for it to participate in allocation.
+
+### Use the Azure portal to link an IoT hub
+
+In the Azure portal, you can link an IoT hub either from the left menu of your DPS instance or from the enrollment when creating or updating an enrollment. In both cases, the IoT hub is scoped to the DPS instance (not just the enrollment).
+
+To link an IoT hub to your DPS instance in the Azure portal:
+
+1. On the left menu of your DPS instance, select **Linked IoT hubs**.
+
+1. At the top of the page, select **+ Add**.
+
+1. On the **Add link to IoT hub** page, select the subscription that contains the IoT hub and then choose the name of the IoT hub from the **IoT hub** list.
+
+1. After you select the IoT hub, choose an access policy that DPS will use to connect to the IoT hub. The **Access Policy** list shows all shared access policies defined on the selected IoT Hub that have both *RegistryWrite* and *ServiceConnect* permissions defined. The default is the *iothubowner* policy. Select the policy you want to use.
+
+1. Select **Save**.
+
+When you're creating or updating an enrollment, you can use the **Link a new IoT hub** button on the enrollment. You'll be presented with the same page and choices as above. After you save the linked hub, it will be available on your DPS instance and can be selected from your enrollment.
+
+> [!NOTE]
+>
+> In the Azure portal, you can't set the *Allocation weight* and *Apply allocation policy* settings when you add a linked IoT hub. Instead, you can update these settings after the IoT hub is linked. To learn more, see [Update a linked IoT hub](#update-a-linked-iot-hub).
+
+### Use the Azure CLI to link an IoT hub
+
+Use the [az iot dps linked-hub create](/cli/azure/iot/dps/linked-hub#az-iot-dps-linked-hub-create) Azure CLI command to link an IoT hub to your DPS instance.
+
+For example, the following command links an IoT hub named *MyExampleHub* using a connection string for its *iothubowner* shared access policy. This command leaves the *Allocation weight* and *Apply allocation policy* settings at their defaults, but you can specify values for these settings if you want to.
+
+```azurecli
+az iot dps linked-hub create --dps-name MyExampleDps --resource-group MyResourceGroup --connection-string "HostName=MyExampleHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=XNBhoasdfhqRlgGnasdfhivtshcwh4bJwe7c0RIGuWsirW0=" --location westus
+```
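+
+To confirm that the IoT hub was linked, you can list the IoT hubs linked to the instance. For example:
+
+```azurecli
+az iot dps linked-hub list --dps-name MyExampleDps --resource-group MyResourceGroup
+```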
+
+DPS also supports linking IoT Hubs using the [Create or Update DPS resource](/rest/api/iot-dps/iot-dps-resource/create-or-update?tabs=HTTP) REST API, [Resource Manager templates](/azure/templates/microsoft.devices/provisioningservices?pivots=deployment-language-arm-template), and the [DPS Management SDKs](libraries-sdks.md#management-sdks).
+
+## Update a linked IoT hub
+
+You can update the settings on a linked IoT hub to change its allocation weight, whether it can have allocation policies applied to it, and the connection string that DPS uses to connect to it. When you update the settings for an IoT hub, the changes take effect immediately, whether the IoT hub is specified on an enrollment or used by default.
+
+### Use the Azure portal to update a linked IoT hub
+
+In the Azure portal, you can update the *Allocation weight* and *Apply allocation policy* settings.
+
+To update the settings for a linked IoT hub using the Azure portal:
+
+1. On the left menu of your DPS instance, select **Linked IoT hubs**, then select the IoT hub from the list.
+
+1. On the **Linked IoT hub details** page:
+
+    :::image type="content" source="media/how-to-manage-linked-iot-hubs/set-linked-iot-hub-properties.png" alt-text="Screenshot that shows the linked IoT hub details page.":::
+
+ * Use the **Allocation weight** slider or text box to choose a weight between one and 1000. The default is one.
+
+ * Set the **Apply allocation policy** switch to specify whether the linked IoT hub should be included in allocation.
+
+1. Save your settings.
+
+> [!NOTE]
+>
+> You can't update the connection string that DPS uses to connect to the IoT hub from the Azure portal. Instead, you can use the Azure CLI to update the connection string, or you can delete the linked IoT hub from your DPS instance and relink it. To learn more, see [Update keys for linked IoT hubs](#update-keys-for-linked-iot-hubs).
+
+### Use the Azure CLI to update a linked IoT hub
+
+With the Azure CLI, you can update the *Allocation weight*, *Apply allocation policy*, and *Connection string* settings.
+
+Use the [az iot dps linked-hub update](/cli/azure/iot/dps/linked-hub#az-iot-dps-linked-hub-update) command to update the allocation weight or apply allocation policies settings. For example, the following command sets the allocation weight and apply allocation policy for a linked IoT hub:
+
+```azurecli
+az iot dps linked-hub update --dps-name MyExampleDps --resource-group MyResourceGroup --linked-hub MyExampleHub --allocation-weight 2 --apply-allocation-policy true
+```
+
+Use the [az iot dps update](/cli/azure/iot/dps#az-iot-dps-update) command to update the connection string for a linked IoT hub. You can use the `--set` parameter along with the connection string for the IoT hub shared access policy you want to use. For details, see [Update keys for linked IoT hubs](#update-keys-for-linked-iot-hubs).
+
+DPS also supports updating linked IoT Hubs using the [Create or Update DPS resource](/rest/api/iot-dps/iot-dps-resource/create-or-update?tabs=HTTP) REST API, [Resource Manager templates](/azure/templates/microsoft.devices/provisioningservices?pivots=deployment-language-arm-template), and the [DPS Management SDKs](libraries-sdks.md#management-sdks).
+
+## Delete a linked IoT hub
+
+When you delete a linked IoT hub from your DPS instance, it will no longer be available to set in future enrollments. However, it may or may not be removed from allocations in existing enrollments:
+
+* For enrollments that don't explicitly set the IoT hubs to apply allocation policy to, a deleted linked IoT hub is no longer available for allocation.
+
+* For enrollments that do explicitly set the IoT hubs to apply allocation policy to, you'll need to manually or programmatically remove the IoT hub from the enrollment settings for it to be removed from participation in allocation. Failure to do so may result in an error when a device tries to provision through the enrollment.
+
+### Use the Azure portal to delete a linked IoT hub
+
+To delete a linked IoT hub from your DPS instance in the Azure portal:
+
+1. On the left menu of your DPS instance, select **Linked IoT hubs**.
+
+1. From the list of IoT hubs, select the check box next to the IoT hub or IoT hubs you want to delete. Then select **Delete** at the top of the page and confirm your choice when prompted.
+
+### Use the Azure CLI to delete a linked IoT hub
+
+Use the [az iot dps linked-hub delete](/cli/azure/iot/dps/linked-hub#az-iot-dps-linked-hub-delete) command to remove a linked IoT hub from the DPS instance. For example, the following command removes the IoT hub named MyExampleHub:
+
+```azurecli
+az iot dps linked-hub delete --dps-name MyExampleDps --resource-group MyResourceGroup --linked-hub MyExampleHub
+```
+
+DPS also supports deleting linked IoT Hubs from the DPS instance using the [Create or Update DPS resource](/rest/api/iot-dps/iot-dps-resource/create-or-update?tabs=HTTP) REST API, [Resource Manager templates](/azure/templates/microsoft.devices/provisioningservices?pivots=deployment-language-arm-template), and the [DPS Management SDKs](libraries-sdks.md#management-sdks).
+
+## Update keys for linked IoT hubs
+
+It may become necessary to either rotate or update the symmetric keys for an IoT hub that's been linked to DPS. In this case, you'll also need to update the connection string setting in DPS for the linked IoT hub. Note that provisioning to an IoT hub will fail during the interim between updating a key on the IoT hub and updating your DPS instance with the new connection string based on that key.
+
+### Use the Azure portal to update keys
+
+You can't update the connection string setting for a linked IoT Hub when using Azure portal. Instead, you need to delete the linked IoT hub from your DPS instance and then re-add it.
+
+To update symmetric keys for a linked IoT hub in the Azure portal:
+
+1. On the left menu of your DPS instance in the Azure portal, select the IoT hub that you want to update the key(s) for.
+
+1. On the **Linked IoT hub details** page, note down the values for *Allocation weight* and *Apply allocation policy*; you'll need these values when you relink the IoT hub to your DPS instance later. Then, select **Manage Resource** to go to the IoT hub.
+
+1. On the left menu of the IoT hub, under **Security settings**, select **Shared access policies**.
+
+1. On **Shared access policies**, under **Manage shared access policies**, select the policy that your DPS instance uses to connect to the linked IoT hub.
+
+1. At the top of the page, select **Regenerate primary key**, **Regenerate secondary key**, or **Swap keys**, and confirm your choice when prompted.
+
+1. Navigate back to your DPS instance.
+
+1. Follow the steps in [Delete an IoT hub](#use-the-azure-portal-to-delete-a-linked-iot-hub) to delete the IoT hub from your DPS instance.
+
+1. Follow the steps in [Link an IoT hub](#use-the-azure-portal-to-link-an-iot-hub) to relink the IoT hub to your DPS instance with the new connection string for the policy.
+
+1. If you need to restore the allocation weight and apply allocation policy settings, follow the steps in [Update a linked IoT hub](#use-the-azure-portal-to-update-a-linked-iot-hub) using the values you saved in step 2.
+
+### Use the Azure CLI to update keys
+
+To update symmetric keys for a linked IoT hub with the Azure CLI:
+
+1. Use the [az iot hub policy renew-key](/cli/azure/iot/hub/policy#az-iot-hub-policy-renew-key) command to swap or regenerate the symmetric keys for the shared access policy on the IoT hub. For example, the following command renews the primary key for the *iothubowner* shared access policy on an IoT hub:
+
+ ```azurecli
+    az iot hub policy renew-key --hub-name MyExampleHub --name iothubowner --rk primary
+ ```
+
+1. Use the [az iot hub connection-string show](/cli/azure/iot/hub/connection-string#az-iot-hub-connection-string-show) command to get the new connection string for the shared access policy. For example, the following command gets the primary connection string for the *iothubowner* shared access policy that the primary key was regenerated for in the previous command:
+
+ ```azurecli
+    az iot hub connection-string show --hub-name MyExampleHub --policy-name iothubowner --key-type primary
+ ```
+
+1. Use the [az iot dps linked-hub list](/cli/azure/iot/dps/linked-hub#az-iot-dps-linked-hub-list) command to find the position of the IoT hub in the collection of linked IoT hubs for your DPS instance. For example, the following command lists the IoT hubs linked to the DPS instance *MyExampleDps*:
+
+ ```azurecli
+    az iot dps linked-hub list --dps-name MyExampleDps
+ ```
+
+ The output will show the position of the linked IoT hub you want to update the connection string for in the table of linked IoT hubs maintained by your DPS instance. In this case, it's the first IoT hub in the list, *MyExampleHub*.
+
+ ```json
+ [
+ {
+ "allocationWeight": null,
+ "applyAllocationPolicy": null,
+ "connectionString": "HostName=MyExampleHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=****",
+ "location": "centralus",
+ "name": "MyExampleHub.azure-devices.net"
+ },
+ {
+ "allocationWeight": null,
+ "applyAllocationPolicy": null,
+ "connectionString": "HostName=MyExampleHub-2.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=****",
+ "location": "centralus",
+        "name": "MyExampleHub-2.azure-devices.net"
+ }
+ ]
+ ```
+
+1. Use the [az iot dps update](/cli/azure/iot/dps#az-iot-dps-update) command to update the connection string for the linked IoT hub. You use the `--set` parameter and the position of the linked IoT hub in the `properties.iotHubs[]` table to target the IoT hub. For example, the following command updates the connection string for *MyExampleHub* returned first in the previous command:
+
+ ```azurecli
+    az iot dps update --name MyExampleDps --set properties.iotHubs[0].connectionString="HostName=MyExampleHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=NewTokenValue"
+ ```
+
+## Limitations
+
+There are some limitations when working with linked IoT hubs and private endpoints. For more information, see [Private endpoint limitations](virtual-network-support.md#private-endpoint-limitations).
+
+## Next steps
+
+* To learn more about allocation policies, see [Manage allocation policies](how-to-use-allocation-policies.md).
iot-dps How To Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-reprovision.md
# How to reprovision devices
-During the lifecycle of an IoT solution, it is common to move devices between IoT hubs. This topic is written to assist solution operators configuring reprovisioning policies.
+During the lifecycle of an IoT solution, it's common to move devices between IoT hubs. This topic is written to assist solution operators configuring reprovisioning policies.
For a more detailed overview of reprovisioning scenarios, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md).

## Configure the enrollment allocation policy
-The allocation policy determines how the devices associated with the enrollment will be allocated, or assigned, to an IoT hub once reprovisioned.
+The allocation policy determines how the devices associated with the enrollment will be allocated, or assigned, to an IoT hub once reprovisioned. To learn more about allocation policies, see [How to use allocation policies](how-to-use-allocation-policies.md).
The following steps configure the allocation policy for a device's enrollment:

1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Device Provisioning Service instance.
-2. Click **Manage enrollments**, and click the enrollment group or individual enrollment that you want to configure for reprovisioning.
+2. Select **Manage enrollments**, and then select the enrollment group or individual enrollment that you want to configure for reprovisioning.
3. Under **Select how you want to assign devices to hubs**, select one of the following allocation policies:
- * **Lowest latency**: This policy assigns devices to the linked IoT Hub that will result in the lowest latency communications between device and IoT Hub. This option enables the device to communicate with the closest IoT hub based on location.
-
- * **Evenly weighted distribution**: This policy distributes devices across the linked IoT Hubs based on the allocation weight assigned to each linked IoT hub. This policy allows you to load balance devices across a group of linked hubs based on the allocation weights set on those hubs. If you are provisioning devices to only one IoT Hub, we recommend this setting. This setting is the default.
-
- * **Static configuration**: This policy requires a desired IoT Hub be listed in the enrollment entry for a device to be provisioned. This policy allows you to designate a single specific IoT hub that you want to assign devices to.
+ * **Lowest latency**: This policy assigns devices to the IoT hub that will result in the lowest latency communications between the device and IoT Hub. This option enables the device to communicate with the closest IoT hub based on location.
+
+ * **Evenly weighted distribution**: This policy distributes devices across IoT hubs based on the allocation weight configured on each IoT hub. IoT hubs with a higher allocation weight are more likely to be assigned. If you're provisioning devices to only one IoT Hub, we recommend this setting. This setting is the default.
+
+ * **Static configuration**: This policy requires a desired IoT hub be listed in the enrollment entry for a device to be provisioned. This policy allows you to designate a single IoT hub that you want to assign devices to.
-4. Under **Select the IoT hubs this group can be assigned to**, select the linked IoT hubs that you want included with your allocation policy. Optionally, add a new linked Iot hub using the **Link a new IoT Hub** button.
+ * **Custom (Use Azure Function)**: This policy uses a custom webhook hosted in Azure Functions to assign devices to one or more IoT hubs. Custom allocation policies give you more control over how devices are assigned to your IoT hubs. To learn more, see [Understand custom allocation policies](concepts-custom-allocation.md).
- With the **Lowest latency** allocation policy, the hubs you select will be included in the latency evaluation to determine the closest hub for device assignment.
+4. Under **Select the IoT hubs this group can be assigned to**, select the linked IoT hubs that you want included in your allocation policy. Optionally, add a new linked IoT hub using the **Link a new IoT Hub** button.
- With the **Evenly weighted distribution** allocation policy, devices will be load balanced across the hubs you select based on their configured allocation weights and their current device load.
+ * With the **Lowest latency** allocation policy, the IoT hubs you select will be included in the latency evaluation to determine the closest IoT hub for device assignment.
- With the **Static configuration** allocation policy, select the IoT hub you want devices assigned to.
+ * With the **Evenly weighted distribution** allocation policy, devices will be hashed across the IoT hubs you select based on their configured allocation weights.
-4. Click **Save**, or proceed to the next section to set the reprovisioning policy.
+ * With the **Static configuration** allocation policy, select the IoT hub you want devices assigned to.
- ![Select enrollment allocation policy](./media/how-to-reprovision/enrollment-allocation-policy.png)
+ * With the **Custom** allocation policy, select the IoT hubs you want evaluated for assignment by your custom allocation webhook.
+5. Select **Save**, or proceed to the next section to set the reprovisioning policy.
+ ![Screenshot that shows setting the enrollment allocation policy and IoT hubs in the Azure portal.](./media/how-to-reprovision/enrollment-allocation-policy.png)
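+
+The same enrollment settings can also be changed from the Azure CLI. The following sketch assumes an enrollment group named *MyEnrollmentGroup* on a DPS instance named *MyExampleDps* (individual enrollments have an equivalent `az iot dps enrollment update` command) and switches the group to the *Lowest latency* policy across two linked IoT hubs:
+
+```azurecli
+az iot dps enrollment-group update --dps-name MyExampleDps --enrollment-id MyEnrollmentGroup --allocation-policy geolatency --iot-hubs "MyExampleHub.azure-devices.net MyExampleHub-2.azure-devices.net"
+```
+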
## Set the reprovisioning policy

1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Device Provisioning Service instance.
-2. Click **Manage enrollments**, and click the enrollment group or individual enrollment that you want to configure for reprovisioning.
+2. Select **Manage enrollments**, and then select the enrollment group or individual enrollment that you want to configure for reprovisioning.
3. Under **Select how you want device data to be handled on re-provision to a different IoT hub**, choose one of the following reprovisioning policies:
The following steps configure the allocation policy for a device's enrollment:
* **Re-provision and reset to initial config**: This policy takes action when devices associated with the enrollment entry submit a new provisioning request. Depending on the enrollment entry configuration, the device may be reassigned to another IoT hub. If the device is changing IoT hubs, the device registration with the initial IoT hub will be removed. The initial configuration data that the provisioning service instance received when the device was provisioned is provided to the new IoT hub. During migration, the device's status will be reported as **Assigning**.
-4. Click **Save** to enable the reprovisioning of the device based on your changes.
-
- ![Screenshot that highlights the changes you've made and the Save button.](./media/how-to-reprovision/reprovisioning-policy.png)
-
+4. Select **Save** to enable the reprovisioning of the device based on your changes.
+ ![Screenshot that shows setting the enrollment reprovisioning policy in the Azure portal.](./media/how-to-reprovision/reprovisioning-policy.png)
## Send a provisioning request from the device
How often a device submits a provisioning request depends on the scenario. When
>[!TIP]
> We recommend not provisioning on every reboot of the device, as this could hit the service throttling limits, especially when reprovisioning several thousands or millions of devices at once. Instead, you should attempt to use the [Device Registration Status Lookup](/rest/api/iot-dps/device/runtime-registration/device-registration-status-lookup) API and try to connect with that information to IoT Hub. If that fails, then try to reprovision, as the IoT Hub information might have changed. Keep in mind that querying for the registration state will count as a new device registration, so you should consider the [Device registration limit](about-iot-dps.md#quotas-and-limits). Also consider implementing appropriate retry logic, such as exponential back-off with randomization, as described in the [Retry general guidance](/azure/architecture/best-practices/transient-faults).
>
> In some cases, depending on the device capabilities, it's possible to save the IoT Hub information directly on the device so that it can connect directly to IoT Hub after the first-time provisioning through DPS. If you choose to do this, make sure you implement a fallback mechanism in case specific [errors from IoT Hub](../iot-hub/troubleshoot-message-routing.md#common-error-codes) occur. For example, consider the following scenarios:
+>
> * Retry the Hub operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors.
> * For 429 errors, only retry after the time indicated in the Retry-After header.
> * For 5xx errors, use exponential back-off, with the first retry at least 5 seconds after the response.
> * On errors other than 429 and 5xx, re-register through DPS.
> * Ideally you should also support a [method](../iot-hub/iot-hub-devguide-direct-methods.md) to manually trigger provisioning on demand.
->
+>
> We also recommend taking into account the service limits when planning activities like pushing updates to your fleet. For example, updating the fleet all at once could cause all devices to re-register through DPS (which could easily be above the registration quota limit). For such scenarios, consider planning for device updates in phases instead of updating your entire fleet at the same time.

## Next steps

-- To learn more Reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
-- To learn more Deprovisioning, see [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md)
+* To learn more about reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md).
+* To learn more about deprovisioning, see [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md).
iot-dps How To Use Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-use-allocation-policies.md
+
+ Title: How to use allocation policies with Device Provisioning Service (DPS)
+description: This article shows how to use the Device Provisioning Service (DPS) allocation policies to automatically provision devices across one or more IoT hubs.
++ Last updated : 10/24/2022++++++
+# How to use allocation policies to provision devices across IoT hubs
+
+Azure IoT Hub Device Provisioning Service (DPS) supports several built-in allocation policies that determine how it assigns devices across one or more IoT hubs. DPS also includes support for custom allocation policies, which let you create and use your own allocation policies when your IoT scenario requires functionality not provided by the built-in policies.
+
+This article helps you understand how to use and manage DPS allocation policies.
+
+## Understand allocation policies
+
+Allocation policies determine how DPS assigns devices to an IoT hub. Each DPS instance has a default allocation policy, but this policy can be overridden by an allocation policy set on an enrollment. Only IoT hubs that have been linked to the DPS instance can participate in allocation. Whether a linked IoT hub will participate in allocation depends on settings on the enrollment that a device provisions through.
+
+DPS supports four allocation policies:
+
+* **Evenly weighted distribution**: devices are provisioned to an IoT hub using a weighted hash. By default, linked IoT hubs have the same allocation weight setting, so they're equally likely to have devices provisioned to them. The allocation weight of an IoT hub may be adjusted to increase or decrease its likelihood of being assigned. *Evenly weighted distribution* is the default allocation policy for a DPS instance. If you're provisioning devices to only one IoT hub, we recommend using this policy.
+
+* **Lowest latency**: devices are provisioned to the IoT hub with the lowest latency to the device. If multiple IoT hubs would provide the lowest latency, DPS hashes devices across those hubs based on their configured allocation weight.
+
+* **Static configuration**: devices are provisioned to a single IoT hub, which must be specified on the enrollment.
+
+* **Custom (Use Azure Function)**: A custom allocation policy gives you more control over how devices are assigned to an IoT hub. This is accomplished by using a custom webhook hosted in Azure Functions to assign devices to an IoT hub. DPS calls your webhook providing all relevant information about the device and the enrollment. Your webhook returns the IoT hub and, optionally, the initial device twin used to provision the device. Custom payloads can also be passed to and from the device. To learn more, see [Understand custom allocation policies](concepts-custom-allocation.md). This policy can't be set as the default policy for a DPS instance.
+
+> [!NOTE]
+> The preceding list shows the names of the allocation policies as they appear in the Azure portal. When setting the allocation policy using the DPS REST API, Azure CLI, and DPS service SDKs, they are referred to as follows: **hashed**, **geolatency**, **static**, and **custom**.
+
+There are two settings on a linked IoT hub that control how it participates in allocation:
+
+* **Allocation weight**: sets the weight that the IoT hub will have when participating in allocation policies that involve multiple IoT hubs. It can be a value between one and 1000. The default is one (or **null**).
+
+ * With the *Evenly weighted distribution* allocation policy, IoT hubs with higher allocation weight values have a greater likelihood of being selected compared to those with lower weight values.
+
+ * With the *Lowest latency* allocation policy, the allocation weight value will affect the probability of an IoT hub being selected when more than one IoT hub satisfies the lowest latency requirement.
+
+ * With a *Custom* allocation policy, whether and how the allocation weight value is used will depend on the webhook logic.
+
+* **Apply allocation policy**: specifies whether the IoT hub participates in allocation policy. The default is **Yes** (true). If set to **No** (false), devices won't be assigned to the IoT hub. The IoT hub can still be selected on an enrollment, but it won't participate in allocation. You can use this setting to temporarily or permanently remove an IoT hub from participating in allocation; for example, if it's approaching the allowed number of devices.
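+
+For example, to take a linked IoT hub out of allocation without unlinking it, you can turn off *Apply allocation policy* for that IoT hub. A minimal Azure CLI sketch using the example names from these articles:
+
+```azurecli
+az iot dps linked-hub update --dps-name MyExampleDps --resource-group MyResourceGroup --linked-hub MyExampleHub --apply-allocation-policy false
+```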
+
+To learn more about linking and managing IoT hubs in your DPS instance, see [Link and manage IoT hubs](how-to-manage-linked-iot-hubs.md).
+
+When a device provisions through DPS, the service assigns it to an IoT hub according to the following guidelines:
+
+* If the enrollment specifies an allocation policy, use that policy; otherwise, use the default allocation policy for the DPS instance.
+
+* If the enrollment specifies one or more IoT hubs, apply the allocation policy across those IoT hubs; otherwise, apply the allocation policy across all of the IoT hubs linked to the DPS instance. Note that if the allocation policy is *Static configuration*, the enrollment *must* specify an IoT hub.
+
+> [!IMPORTANT]
+> When you change an allocation policy or the IoT hubs it applies to, the changes only affect subsequent device registrations. Devices already provisioned to an IoT hub won't be affected. If you want your changes to apply retroactively to these devices, you'll need to reprovision them. To learn more, see [How to reprovision devices](how-to-reprovision.md).
+
+## Set the default allocation policy for the DPS instance
+
+The default allocation policy for the DPS instance is used when an allocation policy isn't specified on an enrollment. Only *Evenly weighted distribution*, *Lowest latency*, and *Static configuration* are supported for the default allocation policy. *Custom* allocation isn't supported. When a DPS instance is created, its default policy is automatically set to *Evenly weighted distribution*. However, you can update your DPS instance to set a different allocation policy.
+
+> [!NOTE]
+> If you set *Static configuration* as the default allocation policy for a DPS instance, a linked IoT hub *must* be specified in enrollments that rely on the default policy.
+
+### Use the Azure portal to set the default allocation policy
+
+To set the default allocation policy for the DPS instance in the Azure portal:
+
+1. On the left menu of your DPS instance, select **Manage allocation policy**.
+
+2. Select the button for the allocation policy you want to set: **Lowest latency**, **Evenly weighted distribution**, or **Static configuration**. (Custom allocation isn't supported for the default allocation policy.)
+
+3. Select **Save**.
+
+### Use the Azure CLI to set the default allocation policy
+
+Use the [az iot dps update](/cli/azure/iot/dps#az-iot-dps-update) Azure CLI command to set the default allocation policy for the DPS instance. You use `--set properties.allocationPolicy` to specify the policy. For example, the following command sets the allocation policy to *Evenly weighted distribution* (the default):
+
+```azurecli
+az iot dps update --name MyExampleDps --set properties.allocationPolicy=hashed
+```
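+
+To confirm the change, you can query the DPS instance; for example, the following command returns the instance's current default allocation policy (`--query` is the standard Azure CLI JMESPath output filter):
+
+```azurecli
+az iot dps show --name MyExampleDps --query properties.allocationPolicy
+```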
+
+DPS also supports setting the default allocation policy using the [Create or Update DPS resource](/rest/api/iot-dps/iot-dps-resource/create-or-update?tabs=HTTP) REST API, [Resource Manager templates](/azure/templates/microsoft.devices/provisioningservices?pivots=deployment-language-arm-template), and the [DPS Management SDKs](libraries-sdks.md#management-sdks).
+
+## Set allocation policy and IoT hubs for enrollments
+
+Individual enrollments and enrollment groups can specify an allocation policy and the linked IoT hubs that it should apply to. If no allocation policy is specified by the enrollment, then the default allocation policy for the DPS instance is used.
+
+In either case, the following conditions apply:
+
+* For *Evenly weighted distribution*, *Lowest latency*, and *Custom* allocation policies, the enrollment *may* specify which linked IoT hubs should be used. If no IoT hubs are selected in the enrollment, then all of the linked IoT hubs in the DPS instance will be used.
+
+* For *Static configuration*, the enrollment *must* specify a single IoT hub from the list of linked IoT hubs.
+
+For both individual enrollments and enrollment groups, you can specify an allocation policy and the linked IoT hubs to apply it to when you create or update an enrollment.
+
+### Use the Azure portal to manage enrollment allocation policy and IoT hubs
+
+To set allocation policy and select IoT hubs on an enrollment in the Azure portal:
+
+1. On the left menu of your DPS instance, select **Manage enrollments**.
+
+1. On the **Manage enrollments** page:
+
+ * To create a new enrollment, select either **+ Add enrollment group** or **+ Add individual enrollment** at the top of the page.
+
+ * To update an existing enrollment, select it from the list under either the **Enrollment Groups** or **Individual Enrollments** tab.
+
+1. On the **Add Enrollment** page (on create) or the **Enrollment details** page (on update), you can select the allocation policy you want applied to the enrollment and select the IoT hubs that should be used:
+
+    :::image type="content" source="media/how-to-use-allocation-policies/select-enrollment-policy-and-hubs.png" alt-text="Screenshot that shows the allocation policy and selected hubs settings on Add Enrollment page.":::
+
+ * Select the allocation policy you want to apply from the drop-down. The default allocation policy for the DPS instance is selected by default. For custom allocation, you'll also need to specify a custom allocation policy webhook in Azure Functions. For details, see the [Use custom allocation policies](tutorial-custom-allocation-policies.md) tutorial.
+
+ * Select the IoT hubs that devices can be assigned to. If you've selected the *Static configuration* allocation policy, you'll be limited to selecting a single linked IoT hub. For all other allocation policies, all the linked IoT hubs will be selected by default, but you can modify this selection using the drop-down. To have the enrollment automatically use linked IoT hubs as they're added to (or deleted from) the DPS instance, unselect all IoT hubs.
+
+ * Optionally, you can select the **Link a new IoT hub** button to link a new IoT hub to the DPS instance and make it available in the list of IoT hubs that can be selected. For details about linking an IoT hub, see [Link an IoT Hub](how-to-manage-linked-iot-hubs.md#use-the-azure-portal-to-link-an-iot-hub).
+
+1. Set any other properties needed for the enrollment and then save your settings.
+
+### Use the Azure CLI to manage enrollment allocation policy and IoT hubs
+
+Use the [az iot dps enrollment create](/cli/azure/iot/dps/enrollment#az-iot-dps-enrollment-create), [az iot dps enrollment update](/cli/azure/iot/dps/enrollment#az-iot-dps-enrollment-update), [az iot dps enrollment-group create](/cli/azure/iot/dps/enrollment#az-iot-dps-enrollment-group-create), [az iot dps enrollment-group update](/cli/azure/iot/dps/enrollment#az-iot-dps-enrollment-group-update) Azure CLI commands to create or update individual enrollments or enrollment groups.
+
+For example, the following command creates a symmetric key enrollment group that, because neither is specified, uses the default allocation policy set on the DPS instance and all of the IoT hubs linked to the DPS instance:
+
+```azurecli
+az iot dps enrollment-group create --dps-name MyExampleDps --enrollment-id MyEnrollmentGroup
+```
+
+The following command updates the same enrollment group to use the *Lowest latency* allocation policy with IoT hubs named *MyExampleHub* and *MyExampleHub-2*:
+
+```azurecli
+az iot dps enrollment-group update --dps-name MyExampleDps --enrollment-id MyEnrollmentGroup --allocation-policy geolatency --iot-hubs "MyExampleHub.azure-devices.net MyExampleHub-2.azure-devices.net"
+```
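+
+To check which allocation policy and IoT hubs an enrollment group is currently configured with, you can read the enrollment group back; a short sketch, assuming the `az iot dps enrollment-group show` command:
+
+```azurecli
+az iot dps enrollment-group show --dps-name MyExampleDps --enrollment-id MyEnrollmentGroup
+```
+
+The returned record includes the enrollment group's allocation policy and the IoT hubs it applies to.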
+
+DPS also supports setting allocation policy and selected IoT hubs on the enrollment using the [Create or Update individual enrollment](/rest/api/iot-dps/service/individual-enrollment/create-or-update) and [Create or Update enrollment group](/rest/api/iot-dps/service/enrollment-group/create-or-update) REST APIs, and the [DPS service SDKs](libraries-sdks.md#service-sdks).
+
+## Allocation behavior
+
+Note the following behavior when using allocation policies with linked IoT hubs:
+
+* With the Azure CLI, the REST API, and the DPS service SDKs, you can create enrollments with no allocation policy. In this case, DPS uses the default policy for the DPS instance when a device provisions through the enrollment. Changing the default policy setting on the DPS instance will change how devices are provisioned through the enrollment.
+
+* With the Azure portal, the allocation policy setting for the enrollment is pre-populated with the default allocation policy. You can keep this setting or change it to another policy, but, when you save the enrollment, the allocation policy is set on the enrollment. Subsequent changes to the service default allocation policy won't change how devices are provisioned through the enrollment.
+
+* For the *Evenly weighted distribution*, *Lowest latency*, and *Custom* allocation policies, you can configure the enrollment to use all the IoT hubs linked to the DPS instance:
+
+ * With the Azure CLI and the DPS service SDKs, create the enrollment without specifying any IoT hubs.
+
+ * With the Azure portal, the enrollment is pre-populated with all the IoT hubs linked to the DPS instance selected; unselect all the IoT hubs before you save the enrollment.
+
+ If no IoT hubs are selected on the enrollment, then whenever a new IoT hub is linked to the DPS instance, it will participate in allocation; and vice-versa for an IoT hub that is removed from the DPS instance.
+
+* If IoT hubs are specified on an enrollment, the IoT hubs setting on the enrollment must be manually or programmatically updated for a newly linked IoT hub to be added or a deleted IoT hub to be removed from allocation.
+
+* Changing the allocation policy or IoT hubs used for an enrollment only affects subsequent registrations through that enrollment. If you want the changes to affect prior registrations, you'll need to reprovision all previously registered devices.
+
+## Limitations
+
+There are some limitations when working with allocation policies and private endpoints. For more information, see [Private endpoint limitations](virtual-network-support.md#private-endpoint-limitations).
+
+## Next steps
+
+* To learn more about linking and managing linked IoT hubs, see [Manage linked IoT hubs](how-to-manage-linked-iot-hubs.md).
+
+* To learn more about custom allocation policies, see [Understand custom allocation policies](concepts-custom-allocation.md).
+
+* For an end-to-end example using the lowest latency allocation policy, see the [Provision for geolatency](how-to-provision-multitenant.md) tutorial.
+
+* For an end-to-end example using a custom allocation policy, see the [Use custom allocation policies](tutorial-custom-allocation-policies.md) tutorial.
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
When you're finished testing and exploring this device client sample, use the fo
## Next steps
-In this tutorial, you provisioned an X.509 device using a custom HSM to your IoT hub. To learn how to provision IoT devices to multiple hubs continue to the next tutorial.
+In this tutorial, you provisioned an X.509 device using a custom HSM to your IoT hub. To learn how to provision IoT devices across multiple IoT hubs, see:
> [!div class="nextstepaction"]
-> [Tutorial: Provision devices across load-balanced IoT hubs](tutorial-provision-multiple-hubs.md)
+> [How to use allocation policies](how-to-use-allocation-policies.md)
iot-dps Tutorial Provision Multiple Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-provision-multiple-hubs.md
- Title: Tutorial - Provision devices across load balanced hubs using Azure IoT Hub Device Provisioning Service
-description: This tutorial demonstrates how Device Provisioning Service (DPS) enables automatic device provisioning across load balanced IoT hubs in the Azure portal.
-- Previously updated : 10/18/2021------
-# Tutorial: Provision devices across load-balanced IoT hubs
-
-This tutorial shows how to provision devices for multiple, load-balanced IoT hubs using the Device Provisioning Service. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Use the Azure portal to provision a second device to a second IoT hub
-> * Add an enrollment list entry to the second device
-> * Set the Device Provisioning Service allocation policy to **even distribution**
-> * Link the new IoT hub to the Device Provisioning Service
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-
-## Use the Azure portal to provision a second device to a second IoT hub
-
-Follow the steps in the quickstarts to link a second IoT hub to your DPS instance and provision a device to that hub:
-
-* [Set up the Device Provisioning Service](quick-setup-auto-provision.md)
-* [Provision a simulated symmetric key device](quick-create-simulated-device-symm-key.md)
-
-## Add an enrollment list entry to the second device
-
-The enrollment list tells the Device Provisioning Service which method of attestation (the method for confirming a device identity) it is using with the device. The next step is to add an enrollment list entry for the second device.
-
-1. In the page for your Device Provisioning Service, click **Manage enrollments**. The **Add enrollment list entry** page appears.
-2. At the top of the page, click **Add**.
-3. Complete the fields and then click **Save**.
-
-## Set the Device Provisioning Service allocation policy
-
-The allocation policy is a Device Provisioning Service setting that determines how devices are assigned to an IoT hub. There are three supported allocation policies: 
-
-1. **Lowest latency**: Devices are provisioned to an IoT hub based on the hub with the lowest latency to the device.
-2. **Evenly weighted distribution** (default): Linked IoT hubs are equally likely to have devices provisioned to them. This is the default setting. If you are provisioning devices to only one IoT hub, you can keep this setting. If you plan to use one IoT hub, but expect to increase the number of hubs as the number of devices increases, it's important to note that, when assigning to an IoT Hub, the policy doesn't take into account previously registered devices. All linked hubs hold an equal chance of getting a device registration based on the weight of the linked IoT Hub. However, if an IoT hub has reached its device capacity limit, it will no longer receive device registrations. You can, however, adjust the weight of allocation for each linked IoT Hub.
-
-3. **Static configuration via the enrollment list**: Specification of the desired IoT hub in the enrollment list takes priority over the Device Provisioning Service-level allocation policy.
-
-### How the allocation policy assigns devices to IoT Hubs
-
-It may be desirable to use only one IoT Hub, until a specific number of devices is reached. In that scenario, it's important to note that, once a new IoT Hub is added, a new device has the potential to be provisioned to any one of the IoT Hubs. If you wish to balance all devices, registered and unregistered, then you'll need to re-provision all devices.
-
-Follow these steps to set the allocation policy:
-
-1. To set the allocation policy, in the Device Provisioning Service page click **Manage allocation policy**.
-2. Set the allocation policy to **Evenly weighted distribution**.
-3. Click **Save**.
-
-## Link the new IoT hub to the Device Provisioning Service
-
-Link the Device Provisioning Service and IoT hub so that the Device Provisioning Service can register devices to that hub.
-
-1. In the **All resources** page, click the Device Provisioning Service you created previously.
-2. In the Device Provisioning Service page, click **Linked IoT hubs**.
-3. Click **Add**.
-4. In the **Add link to IoT hub** page, use the radio buttons to specify whether the linked IoT hub is located in the current subscription, or in a different subscription. Then, choose the name of the IoT hub from the **IoT hub** box.
-5. Click **Save**.
-
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
-> * Use the Azure portal to provision a second device to a second IoT hub
-> * Add an enrollment list entry to the second device
-> * Set the Device Provisioning Service allocation policy to **even distribution**
-> * Link the new IoT hub to the Device Provisioning Service
-
-## Next steps
-
-<!-- Advance to the next tutorial to learn how to
- Replace this .md
-> [!div class="nextstepaction"]
-> [Bind an existing custom SSL certificate to Azure Web Apps]()
>
iot-edge How To Create Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-iot-edge-device.md
Using single device provisioning, you'll need to manually enter provisioning inf
Provisioning devices at-scale refers to provisioning one or more IoT Edge devices with the assistance of the [IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md). You'll see provisioning at-scale also referred to as **autoprovisioning**.
-If your IoT Edge solution requires more than one device, autoprovisioning using DPS saves you the effort of manually entering provisioning information into the configuration files of each device. This automated model can be scaled to millions of IoT Edge devices. You can see the automated provisioning flow in the [Behind the scenes section of IoT Hub DPS overview page](../iot-dps/about-iot-dps.md#behind-the-scenes).
+If your IoT Edge solution requires more than one device, autoprovisioning using DPS saves you the effort of manually entering provisioning information into the configuration files of each device. This automated model can be scaled to millions of IoT Edge devices.
You can secure your IoT Edge solution with the authentication method of your choice. **Symmetric key**, **X.509 certificates**, and **trusted platform module (TPM) attestation** authentication methods are available for provisioning devices at-scale. You can read more about those options in the [Choose an authentication method section](#choose-an-authentication-method).
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
This table provides recent version history for IoT Edge package releases, and hi
| Release notes and assets | Type | Release Date | End of Support Date | Highlights |
| ------------------------ | ---- | ------------ | ------------------- | ---------- |
-| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Long-term support (LTS) | August 2022 | November 12, 2024 | IoT Edge 1.4 LTS is supported through November 12, 2022 to match the [.NET 6 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle). <br> Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](http://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288) |
+| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Long-term support (LTS) | August 2022 | November 12, 2024 | IoT Edge 1.4 LTS is supported through November 12, 2024 to match the [.NET 6 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle). <br> Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](http://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288) |
| [1.3](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0) | Stable | June 2022 | August 2022 | Support for Red Hat Enterprise Linux 8 on AMD and Intel 64-bit architectures.<br>Edge Hub now enforces that inbound/outbound communication uses minimum TLS version 1.2 by default<br>Updated runtime modules (edgeAgent, edgeHub) based on .NET 6 |
| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | June 2022 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to latest release](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-latest-release).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).<br> Integration with Device Update. For more information, see [Update IoT Edge](how-to-update-iot-edge.md). |
| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | December 13, 2022 | IoT Edge 1.1 LTS is supported through December 13, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core). <br> [Long-term support plan and supported systems updates](support.md) |
load-testing Concept Load Testing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-testing-concepts.md
The configuration of a load test consists of:
- [Environment variables](./how-to-parameterize-load-tests.md).
- [Secret parameters](./how-to-parameterize-load-tests.md).
- The number of [test engines](#test-engine) to run the test script on.
-- The [pass/fail criteria](./how-to-define-test-criteria.md) for the test.
+- The [fail criteria](./how-to-define-test-criteria.md) for the test.
- The list of [app components and resource metrics to monitor](./how-to-monitor-server-side-metrics.md) during the test execution. When you run a test, a [test run](#test-run) instance is created.
When you create or update a load test, you can configure the list of app compone
During a load test, Azure Load Testing collects metrics about the test execution. There are two types of metrics: -- *Client-side metrics* give you details reported by the test engine. These metrics include the number of virtual users, the request response time, the number of failed requests, or the number of requests per second. You can [define pass/fail criteria](./how-to-define-test-criteria.md) based on client-side metrics to specify when a test passes or fails.
+- *Client-side metrics* give you telemetry reported by the test engine. These metrics include the number of virtual users, the request response time, the number of failed requests, or the number of requests per second. You can [define test fail criteria](./how-to-define-test-criteria.md) based on these client-side metrics.
- *Server-side metrics* are available for Azure-hosted applications and provide information about your Azure [application components](#app-component). Azure Load Testing integrates with Azure Monitor, including Application Insights and Container insights, to capture details from the Azure services. Depending on the type of service, different metrics are available. For example, metrics can be for the number of database reads, the type of HTTP responses, or container resource consumption.
load-testing How To Appservice Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-appservice-insights.md
Title: Get more insights from App Service diagnostics
+ Title: Get load test insights from App Service diagnostics
-description: 'Learn how to get detailed insights from App Service diagnostics and Azure Load Testing for App Service workloads.'
+description: 'Learn how to get detailed application performance insights from App Service diagnostics and Azure Load Testing.'
Previously updated : 11/30/2021 Last updated : 10/24/2022
-# Get detailed insights from App Service diagnostics and Azure Load Testing Preview for Azure App Service workloads
+# Get performance insights from App Service diagnostics and Azure Load Testing Preview
-In this article, you'll learn how to gain more insights from Azure App Service workloads by using Azure Load Testing Preview and Azure App Service diagnostics.
+Azure Load Testing Preview collects detailed resource metrics across your Azure app components to help identify performance bottlenecks. In this article, you learn how to use App Service diagnostics to get additional insights when load testing Azure App Service workloads.
-[App Service diagnostics](../app-service/overview-diagnostics.md) is an intelligent and interactive way to help troubleshoot your app, with no configuration required. When you run into issues with your app, App Service diagnostics can help you resolve the issue easily and quickly.
-
-You can take advantage of App Service diagnostics when you run load tests on applications that run on App Service.
+[App Service diagnostics](/azure/app-service/overview-diagnostics.md) is an intelligent and interactive way to help troubleshoot your app, with no configuration required. When you run into issues with your app, App Service diagnostics can help you resolve the issue easily and quickly.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
You can take advantage of App Service diagnostics when you run load tests on app
- An Azure Load Testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md). - An Azure App Service workload that you're running a load test against and that you've added to the app components to monitor during the load test.
-## Get more insights when you test an App Service workload
-
-In this section, you use [App Service diagnostics](../app-service/overview-diagnostics.md) to get more insights from load testing an Azure App Service workload.
+## Use App Service diagnostics for your load test
-1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+Azure Load Testing lets you monitor server-side metrics for your Azure app components for a load test. You can then visualize and analyze these metrics in the Azure Load Testing dashboard.
-1. On the left pane, select **Tests** to view the list of tests, and then select your test.
+When the application you're load testing is hosted on Azure App Service, you can get extra insights by using [App Service diagnostics](/azure/app-service/overview-diagnostics.md).
-1. On the test runs page, select **Configure**, and then select **App Components** to add or remove Azure resources to monitor during the load test.
+To view the App Service diagnostics information for your application under load test:
- :::image type="content" source="media/how-to-appservice-insights/configure-app-components.png" alt-text="Screenshot that shows the 'Configure' and 'App Components' buttons for configuring the load test.":::
+1. Go to the [Azure portal](https://portal.azure.com).
-1. Select the **Monitoring** tab, and then add your app service to the list of app components to monitor.
+1. Add your App Service resource to the load test app components. Follow the steps in [monitor server-side metrics](./how-to-monitor-server-side-metrics.md) to add your app service.
- :::image type="content" source="media/how-to-appservice-insights/test-monitoring-app-service.png" alt-text="Screenshot of the 'Edit test' pane for selecting and app service resource to monitor.":::
+ :::image type="content" source="media/how-to-appservice-insights/test-monitoring-app-service.png" alt-text="Screenshot of the Monitoring tab when editing a load test in the Azure portal, highlighting the App Service resource.":::
-1. Select **Run** to execute the load test.
+1. Select **Run** to run the load test.
After the test finishes, you'll notice a section about App Service on the test result dashboard.
-1. Select the **here** link in the App Service message.
+ :::image type="content" source="media/how-to-appservice-insights/test-result-app-service-diagnostics.png" alt-text="Screenshot that shows the 'App Service' section on the load testing dashboard in the Azure portal.":::
+
+1. Select the link in **Additional insights** to view the App Service diagnostics information.
- :::image type="content" source="media/how-to-appservice-insights/test-result-app-service-diagnostics.png" alt-text="Screenshot that shows the 'App Service' section on the test result dashboard.":::
+ App Service diagnostics enables you to view in-depth information and dashboards about the performance, resource usage, and stability of your app service.
- Your Azure App Service **Availability and Performance** page opens, which displays your App Service diagnostics.
+ In the following screenshot, you can see that there are concerns about CPU usage, app performance, and failed requests.
:::image type="content" source="media/how-to-appservice-insights/app-diagnostics-overview.png" alt-text="Screenshot that shows the App Service diagnostics overview page, with a list of interactive reports on the left pane.":::
-1. On the left pane, select any of the various interactive reports that are available in App Service diagnostics.
+ On the left pane, you can drill deeper into specific issues by selecting one of the diagnostics reports. For example, the following screenshot shows the **High CPU Analysis** report.
:::image type="content" source="media/how-to-appservice-insights/app-diagnostics-high-cpu.png" alt-text="Screenshot that shows the App Service diagnostics CPU usage report.":::
- > [!IMPORTANT]
+ The following screenshot shows the **Web App Slow** report, which gives details and recommendations about application performance.
+
+ :::image type="content" source="media/how-to-appservice-insights/app-diagnostics-web-app-slow.png" alt-text="Screenshot that shows the App Service diagnostics slow application report.":::
+
+ > [!NOTE]
> It can take up to 45 minutes for the insights data to be displayed on this page. ## Next steps -- Learn how to [parameterize a load test](./how-to-parameterize-load-tests.md) with secrets.--- Learn how to [configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
+- Learn how to [parameterize a load test with secrets and environment variables](./how-to-parameterize-load-tests.md).
+- Learn how to [identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md) for Azure applications.
+- Learn how to [configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Define Test Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-define-test-criteria.md
Title: Define load test pass/fail criteria
+ Title: Define load test fail criteria
-description: 'Learn how to configure pass/fail criteria for load tests with Azure Load Testing.'
+description: 'Learn how to configure fail criteria for load tests with Azure Load Testing. Fail criteria let you define conditions that your load test results should meet.'
Previously updated : 11/30/2021 Last updated : 10/19/2022
-# Define pass/fail criteria for load tests by using Azure Load Testing Preview
+# Define fail criteria for load tests by using Azure Load Testing Preview
-In this article, you'll learn how to define pass/fail criteria for your load tests with Azure Load Testing Preview.
-
-By defining test criteria, you can specify the performance expectations of your application under test. By using the Azure Load Testing service, you can set failure criteria for various test metrics.
+In this article, you'll learn how to define test fail criteria for your load tests with Azure Load Testing Preview. Fail criteria let you define performance and quality expectations for your application under load. Azure Load Testing supports various client metrics for defining fail criteria. Criteria can apply to the entire load test, or to an individual request in the JMeter script.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
By defining test criteria, you can specify the performance expectations of your
## Prerequisites - An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- An Azure Load Testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
+- An Azure load testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
+
+## Load test fail criteria
+
+Load test fail criteria are conditions for client-side metrics that your test should meet. You define test criteria at the load test level in Azure Load Testing. A load test can have one or more test criteria. When at least one of the test criteria evaluates to true, the load test gets the *failed* status.
+
+You can define test criteria at two levels, and a load test can combine criteria at both levels.
+
+- At the load test level. For example, to ensure that the total error percentage doesn't exceed a threshold.
+- At the JMeter request level (JMeter sampler). For example, you could specify a threshold for the response time of the *getProducts* request, but disregard the response time of the *sign in* request.
-## Load test pass/fail criteria
+You can define a maximum of 10 test criteria for a load test. If there are multiple criteria for the same client metric, the criterion with the lowest threshold value is used. For example, if you define both `avg(response_time_ms) > 300` and `avg(response_time_ms) > 500`, the criterion with the 300-ms threshold is used.
-This section discusses the syntax of Azure Load Testing pass/fail criteria. When a criterion evaluates to `true`, the load test gets the *failed* status.
+### Fail criteria structure
-The structure of a pass/fail criterion is: `Request: Aggregate_function (client_metric) condition threshold`.
+The format of fail criteria in Azure Load Testing follows that of a conditional statement for a [supported metric](#supported-client-metrics-for-fail-criteria). For example, ensure that the average number of requests per second is greater than 500.
+
+Fail criteria have the following structure:
+
+- Test criteria at the load test level: `Aggregate_function (client_metric) condition threshold`.
+- Test criteria applied to specific JMeter requests: `Request: Aggregate_function (client_metric) condition threshold`.
The following table describes the different components:
-|Parameter |Description |
-|||
-|`Request` | *Optional.* Name of the sampler in the JMeter script to which the criterion applies. If you don't specify a request name, the criterion applies to the aggregate of all the requests in the script. |
-|`Client metric` | *Required.* The client metric on which the criteria should be applied. |
-|`Aggregate function` | *Required.* The aggregate function to be applied on the client metric. |
-|`Condition` | *Required.* The comparison operator. |
-|`Threshold` | *Required.* The numeric value to compare with the client metric. |
+|Parameter |Description |
+|---------|---------|
+|`Client metric` | *Required.* The client metric on which the condition should be applied. |
+|`Aggregate function` | *Required.* The aggregate function to be applied on the client metric. |
+|`Condition` | *Required.* The comparison operator, such as `greater than`, or `less than`. |
+|`Threshold` | *Required.* The numeric value to compare with the client metric. |
+|`Request` | *Optional.* Name of the sampler in the JMeter script to which the criterion applies. If you don't specify a request name, the criterion applies to the aggregate of all the requests in the script. |
+
+### Supported client metrics for fail criteria
+
+Azure Load Testing supports the following client metrics:
-Azure Load Testing supports the following metrics:
+|Metric |Aggregate function |Threshold |Condition | Description |
+|---------|---------|---------|---------|---------|
+|`response_time_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) | Response time or elapsed time, in milliseconds. Learn more about [elapsed time in the Apache JMeter documentation](https://jmeter.apache.org/usermanual/glossary.html). |
+|`latency_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) | Latency, in milliseconds. Learn more about [latency in the Apache JMeter documentation](https://jmeter.apache.org/usermanual/glossary.html). |
+|`error` | `percentage` | Numerical value in the range 0-100, representing a percentage. | `>` (greater than) <BR> `<` (less than) | Percentage of failed requests. |
+|`requests_per_sec` | `avg` (average) | Numerical value with up to two decimal places. | `>` (greater than) <BR> `<` (less than) | Number of requests per second. |
+|`requests` | `count` | Integer value. | `>` (greater than) <BR> `<` (less than) | Total number of requests. |
-|Metric |Aggregate function |Threshold |Condition |
-|||||
-|`response_time_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) |
-|`latency_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) |
-|`error` | `percentage` | Numerical value in the range 0-100, representing a percentage. | `>` (greater than) <BR> `<` (less than) |
-|`requests_per_sec` | `avg` (average) | Numerical value with up to two decimal places. | `>` (greater than) <BR> `<` (less than) |
-|`requests` | `count` | Integer value. | `>` (greater than) <BR> `<` (less than) |
+## Define load test fail criteria
-## Define test pass/fail criteria in the Azure portal
+# [Azure portal](#tab/portal)
In this section, you configure test criteria for a load test in the Azure portal. 1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
-1. On the left pane, select **Tests** to view the list of load tests, and then select the test you're working with.
+1. On the left pane, select **Tests** to view the list of load tests.
- :::image type="content" source="media/how-to-define-test-criteria/configure-test.png" alt-text="Screenshot of the 'Configure' and 'Test' buttons and a list of load tests.":::
+1. Select your load test from the list, and then select **Edit**.
-1. Select the **Test criteria** tab.
+ :::image type="content" source="media/how-to-define-test-criteria/edit-test.png" alt-text="Screenshot of the list of tests for an Azure load testing resource in the Azure portal, highlighting the 'Edit' button.":::
- :::image type="content" source="media/how-to-define-test-criteria/configure-test-test-criteria.png" alt-text="Screenshot that shows the 'Test criteria' tab and the pane for configuring the criteria.":::
+1. On the **Test criteria** pane, fill in the **Metric**, **Aggregate function**, **Condition**, and **Threshold** values for your test.
-1. On the **Test criteria** pane, use the dropdown lists to select the **Metric**, **Aggregate function**, **Condition**, and **Threshold** values for your test.
+ :::image type="content" source="media/how-to-define-test-criteria/test-creation-criteria.png" alt-text="Screenshot of the 'Test criteria' pane for a load test in the Azure portal and highlights the fields for adding a test criterion.":::
- :::image type="content" source="media/how-to-define-test-criteria/test-creation-criteria.png" alt-text="Screenshot of the 'Test criteria' pane and the dropdown controls for adding test criteria to a load test.":::
+ Optionally, enter the **Request name** information to add a test criterion for a specific JMeter request. The value should match the name of the JMeter sampler in the JMX file.
- You can define a maximum of 10 test criteria for a load test. If there are multiple criteria for the same client metric, the criterion with the lowest threshold value is used.
+ :::image type="content" source="media/how-to-define-test-criteria/jmeter-request-name.png" alt-text="Screenshot of the JMeter user interface, highlighting the request name.":::
1. Select **Apply** to save the changes.
-When you run the load test, Azure Load Testing uses the updated test configuration. The test run dashboard shows the test criteria and indicates whether the test results pass or fail the criteria.
+ The next time you run the load test, Azure Load Testing uses the test criteria to determine the status of the test run.
+1. Run the test and view the status in the load test dashboard.
+
+ The dashboard shows each of the test criteria and their status. The overall test status is failed when at least one of the criteria is met.
+
+ :::image type="content" source="media/how-to-define-test-criteria/test-criteria-dashboard.png" alt-text="Screenshot that shows the test criteria on the load test dashboard.":::
-## Define test pass/fail criteria in CI/CD workflows
+# [Azure Pipelines](#tab/pipelines)
+
+In this section, you configure test criteria for a load test, as part of an Azure Pipelines CI/CD workflow. Learn how to [set up automated performance testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
+
+For CI/CD workflows, you configure the load test settings in a [YAML test configuration file](./reference-test-config-yaml.md). You store the load test configuration file alongside the JMeter test script file in the source control repository.
+
+To specify fail criteria in the YAML configuration file:
+
+1. Open the YAML test configuration file for your load test in your editor of choice.
+
+1. Add your test criteria in the `failureCriteria` setting.
+
+ Use the [fail criteria format](#fail-criteria-structure), as described earlier. You can add multiple fail criteria for a load test.
-In this section, you learn how to define load test pass/fail criteria for continuous integration and continuous delivery (CI/CD) workflows. To run a load test in your CI/CD workflow, you use a [YAML test configuration file](./reference-test-config-yaml.md).
+ The following example defines three fail criteria. The first two criteria apply to the overall load test, and the last one specifies a condition for the `GetCustomerDetails` request.
-1. Open the YAML test configuration file.
+ ```yaml
+ version: v0.1
+ testName: SampleTest
+ testPlan: SampleTest.jmx
+ description: Load test website home page
+ engineInstances: 1
+ failureCriteria:
+ - avg(response_time_ms) > 300
+ - percentage(error) > 50
+ - GetCustomerDetails: avg(latency_ms) >200
+ ```
+
+ When you define a test criterion for a specific JMeter request, the request name should match the name of the JMeter sampler in the JMX file.
+
+ :::image type="content" source="media/how-to-define-test-criteria/jmeter-request-name.png" alt-text="Screenshot of the JMeter user interface, highlighting the request name.":::
+
+1. Save the YAML configuration file, and commit the changes to source control.
+
+1. After the CI/CD workflow runs, verify the test status in the CI/CD log.
+
+ The log shows the overall test status, and the status of each of the test criteria. The status of the CI/CD workflow run also reflects the test run status.
+
+ :::image type="content" source="media/how-to-define-test-criteria/azure-pipelines-log.png" alt-text="Screenshot that shows the test criteria in the CI/CD workflow log.":::
+
+# [GitHub Actions](#tab/github)
+
+In this section, you configure test criteria for a load test, as part of a GitHub Actions CI/CD workflow. Learn how to [set up automated performance testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
+
+For CI/CD workflows, you configure the load test settings in a [YAML test configuration file](./reference-test-config-yaml.md). You store the load test configuration file alongside the JMeter test script file in the source control repository.
-1. Add the test criteria to the configuration file. For more information about YAML syntax, see [test configuration YAML reference](./reference-test-config-yaml.md).
+To specify fail criteria in the YAML configuration file:
- ```yml
- failureCriteria:
-     - avg(response_time_ms) > 300
-     - percentage(error) > 20
- - GetCustomerDetails: avg(latency_ms) >200
+1. Open the YAML test configuration file for your load test in your editor of choice.
+
+1. Add your test criteria in the `failureCriteria` setting.
+
+ Use the [fail criteria format](#fail-criteria-structure), as described earlier. You can add multiple fail criteria for a load test.
+
+ The following example defines three fail criteria. The first two criteria apply to the overall load test, and the last one specifies a condition for the `GetCustomerDetails` request.
+
+ ```yaml
+ version: v0.1
+ testName: SampleTest
+ testPlan: SampleTest.jmx
+ description: Load test website home page
+ engineInstances: 1
+ failureCriteria:
+ - avg(response_time_ms) > 300
+ - percentage(error) > 50
+ - GetCustomerDetails: avg(latency_ms) >200
```
+
+ When you define a test criterion for a specific JMeter request, the request name should match the name of the JMeter sampler in the JMX file.
-1. Save the YAML configuration file.
+ :::image type="content" source="media/how-to-define-test-criteria/jmeter-request-name.png" alt-text="Screenshot of the JMeter user interface, highlighting the request name.":::
-When the CI/CD workflow runs the load test, the workflow status reflects the status of the pass/fail criteria. The CI/CD logging information shows the status of each of the test criteria.
+1. Save the YAML configuration file, and commit the changes to source control.
+1. After the CI/CD workflow runs, verify the test status in the CI/CD log.
+
+ The log shows the overall test status, and the status of each of the test criteria. The status of the CI/CD workflow run also reflects the test run status.
+
+ :::image type="content" source="media/how-to-define-test-criteria/github-actions-log.png" alt-text="Screenshot that shows the test criteria in the CI/CD workflow log.":::
++ ## Next steps
load-testing How To Use A Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-use-a-managed-identity.md
Previously updated : 11/30/2021 Last updated : 10/20/2022 # Use managed identities for Azure Load Testing Preview
-This article shows how you can create a managed identity for an Azure Load Testing Preview resource and how to use it to read secrets from your Azure key vault.
+This article shows how to create a managed identity for Azure Load Testing Preview. You can use a managed identity to authenticate with and read secrets from Azure Key Vault.
-A managed identity in Azure Active Directory (Azure AD) allows your resource to easily access other Azure AD-protected resources, such as Azure Key Vault. The identity is managed by the Azure platform. For more information about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+A managed identity from Azure Active Directory (Azure AD) allows your load testing resource to easily access other Azure AD-protected resources, such as Azure Key Vault. The identity is managed by the Azure platform and doesn't require you to manage or rotate any secrets. For more information about managed identities in Azure AD, see [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview).
Azure Load Testing supports two types of identities: -- A **system-assigned identity** is associated with your Azure Load Testing resource and is removed when your resource is deleted. A resource can have only one system-assigned identity.--- A **user-assigned identity** is a standalone Azure resource that you can assign to your Azure Load Testing resource. When you delete the Load Testing resource, the identity is not removed. You can assign multiple user-assigned identities to the Load Testing resource.
+- A **system-assigned identity** is associated with your load testing resource and is deleted when your resource is deleted. A resource can only have one system-assigned identity.
+- A **user-assigned identity** is a standalone Azure resource that you can assign to your load testing resource. When you delete the load testing resource, the managed identity remains available. You can assign multiple user-assigned identities to the load testing resource.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Azure Load Testing supports two types of identities:
## Prerequisites - An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure load testing resource. If you need to create an Azure load testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
+- To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](/azure/role-based-access-control/built-in-roles#managed-identity-contributor) role assignment.
-- An Azure Load Testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).-
-## Set a system-assigned identity
+## Assign a system-assigned identity to a load testing resource
-To add a system-assigned identity for your Azure Load Testing resource, you need to enable a property on the resource. You can set this property by using the Azure portal or by using an Azure Resource Manager (ARM) template.
+To assign a system-assigned identity for your Azure load testing resource, enable a property on the resource. You can set this property by using the Azure portal or by using an Azure Resource Manager (ARM) template.
# [Portal](#tab/azure-portal)
-To set up a managed identity in the portal, you first create an Azure Load Testing resource and then enable the feature.
+To set up a managed identity in the portal, you first create an Azure load testing resource and then enable the feature.
-1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+1. In the [Azure portal](https://portal.azure.com), go to your Azure load testing resource.
1. On the left pane, select **Identity**.
-1. Switch the system-assigned identity status to **On**, and then select **Save**.
+1. Select the **System assigned** tab.
+
+1. Switch the **Status** to **On**, and then select **Save**.
+
+ :::image type="content" source="media/how-to-use-a-managed-identity/system-assigned-managed-identity.png" alt-text="Screenshot that shows how to assign a system-assigned managed identity for Azure Load Testing in the Azure portal.":::
+
+1. On the confirmation window, select **Yes** to confirm the assignment of the managed identity.
- :::image type="content" source="media/how-to-use-a-managed-identity/system-assigned-managed-identity.png" alt-text="Screenshot that shows how to turn on system-assigned managed identity for Azure Load Testing.":::
+1. After the managed identity is assigned, the page shows the **Object ID** of the managed identity and lets you assign permissions to it.
+
+ :::image type="content" source="media/how-to-use-a-managed-identity/system-assigned-managed-identity-completed.png" alt-text="Screenshot that shows the system-assigned managed identity information for a load testing resource in the Azure portal.":::
+
+You can now [grant your load testing resource access to your Azure key vault](#grant-access-to-your-azure-key-vault).
# [ARM template](#tab/arm)
-You can use an ARM template to automate the deployment of your Azure resources. You can create any resource of type `Microsoft.LoadTestService/loadtests` with an identity by including the following property in the resource definition:
+You can use an ARM template to automate the deployment of your Azure resources. For more information about using ARM templates with Azure Load Testing, see the [Azure Load Testing ARM reference documentation](/azure/templates/microsoft.loadtestservice/allversions).
+
+You can assign a system-assigned managed identity when you create a resource of type `Microsoft.LoadTestService/loadtests`. Configure the `identity` property with the `SystemAssigned` value in the resource definition:
```json "identity": {
You can use an ARM template to automate the deployment of your Azure resources.
} ```
-By adding the system-assigned type, you're telling Azure to create and manage the identity for your resource. For example, an Azure Load Testing resource might look like the following:
+By adding the system-assigned identity type, you're telling Azure to create and manage the identity for your resource. For example, an Azure load testing resource might look like the following:
```json {
By adding the system-assigned type, you're telling Azure to create and manage th
} ```
-When the resource is created, it gets the following additional properties:
+After the resource creation finishes, the following properties are configured for the resource:
-```json
+```output
"identity": { "type": "SystemAssigned",
- "tenantId": "<TENANTID>",
- "principalId": "<PRINCIPALID>"
+ "tenantId": "00000000-0000-0000-0000-000000000000",
+ "principalId": "00000000-0000-0000-0000-000000000000"
} ```
-The `tenantId` property identifies which Azure AD tenant the identity belongs to. The `principalId` is a unique identifier for the resource's new identity. Within Azure AD, the service principal has the same name as the Azure Load Testing resource.
+The `tenantId` property identifies which Azure AD tenant the managed identity belongs to. The `principalId` is a unique identifier for the resource's new identity. Within Azure AD, the service principal has the same name as the Azure load testing resource.
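+
+To find these values after deployment, you could, for example, query the resource with the Azure CLI. This is only an illustrative sketch with placeholder names; any tool that can read the resource's `identity` property works:
+
+```azurecli
+# Placeholder names; prints the identity block, including principalId and tenantId.
+az resource show \
+    --resource-group my-resource-group \
+    --name my-load-test-resource \
+    --resource-type "Microsoft.LoadTestService/loadtests" \
+    --query identity
+```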
+
+You can now [grant your load testing resource access to your Azure key vault](#grant-access-to-your-azure-key-vault).
-## Set a user-assigned identity
+## Assign a user-assigned identity to a load testing resource
-Before you can add a user-assigned identity to an Azure Load Testing resource, you must first create this identity. You can then add the identity by using its resource identifier.
+Before you can add a user-assigned managed identity to an Azure load testing resource, you must first create this identity in Azure AD. Then, you can assign the identity by using its resource identifier.
+
+You can add multiple user-assigned managed identities to your resource. For example, if you need to access multiple Azure resources, you can grant different permissions to each of these identities.
# [Portal](#tab/azure-portal)
-1. Create a user-assigned managed identity by following the instructions mentioned [here](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity).
+1. Create a user-assigned managed identity by following the instructions mentioned in [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity).
+
+ :::image type="content" source="media/how-to-use-a-managed-identity/create-user-assigned-managed-identity.png" alt-text="Screenshot that shows how to create a user-assigned managed identity in the Azure portal.":::
-1. In the [Azure portal](https://portal.azure.com/), go to your Azure Load Testing resource.
+1. In the [Azure portal](https://portal.azure.com/), go to your Azure load testing resource.
1. On the left pane, select **Identity**.
-1. Select **User assigned** tab and click **Add**.
+1. Select the **User assigned** tab, and select **Add**.
-1. Search and select the identity you created previously. Then select **Add** to add it to the Azure Load Testing resource.
+1. Search and select the managed identity you created previously. Then, select **Add** to add it to the Azure load testing resource.
:::image type="content" source="media/how-to-use-a-managed-identity/user-assigned-managed-identity.png" alt-text="Screenshot that shows how to turn on user-assigned managed identity for Azure Load Testing.":::
+You can now [grant your load testing resource access to your Azure key vault](#grant-access-to-your-azure-key-vault).
+ # [ARM template](#tab/arm)
-You can create an Azure Load Testing resource by using an ARM template and the resource type `Microsoft.LoadTestService/loadtests`. You can specify a user-assigned identity in the `identity` section of the resource definition. Replace the `<RESOURCEID>` text placeholder with the resource ID of your user-assigned identity:
+You can create an Azure load testing resource by using an ARM template and the resource type `Microsoft.LoadTestService/loadtests`. For more information about using ARM templates with Azure Load Testing, see the [Azure Load Testing ARM reference documentation](/azure/templates/microsoft.loadtestservice/allversions).
-```json
-"identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "<RESOURCEID>": {}
- }
-}
-```
+1. Create a user-assigned managed identity by following the instructions mentioned in [Create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-arm#create-a-user-assigned-managed-identity-3).
-The following code snippet shows an example of an Azure Load Testing ARM resource definition with a user-assigned identity:
+
+1. Specify the user-assigned managed identity in the `identity` section of the resource definition.
-```json
-{
- "type": "Microsoft.LoadTestService/loadtests",
- "apiVersion": "2021-09-01-preview",
- "name": "[parameters('name')]",
- "location": "[parameters('location')]",
- "tags": "[parameters('tags')]",
+ Replace the `<RESOURCEID>` text placeholder with the resource ID of your user-assigned identity:
+
+ ```json
"identity": { "type": "UserAssigned", "userAssignedIdentities": { "<RESOURCEID>": {} }
-}
-```
+ }
+ ```
+
+ The following code snippet shows an example of an Azure Load Testing ARM resource definition with a user-assigned identity:
+
+ ```json
+ {
+ "type": "Microsoft.LoadTestService/loadtests",
+ "apiVersion": "2021-09-01-preview",
+ "name": "[parameters('name')]",
+ "location": "[parameters('location')]",
+ "tags": "[parameters('tags')]",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<RESOURCEID>": {}
+ }
+ }
+ ```
-After the Load Testing resource is created, Azure provides the `principalId` and `clientId` properties:
+ After the Load Testing resource is created, Azure provides the `principalId` and `clientId` properties in the output:
-```json
-"identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "<RESOURCEID>": {
- "principalId": "<PRINCIPALID>",
- "clientId": "<CLIENTID>"
+ ```output
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<RESOURCEID>": {
+ "principalId": "00000000-0000-0000-0000-000000000000",
+ "clientId": "00000000-0000-0000-0000-000000000000"
+ }
} }
-}
-```
+ ```
+
+ The `principalId` is a unique identifier for the identity that's used for Azure AD administration. The `clientId` is a unique identifier for the resource's new identity that's used for specifying which identity to use during runtime calls.
-The `principalId` is a unique identifier for the identity that's used for Azure AD administration. The `clientId` is a unique identifier for the resource's new identity that's used for specifying which identity to use during runtime calls.
+You can now [grant your load testing resource access to your Azure key vault](#grant-access-to-your-azure-key-vault).
## Grant access to your Azure key vault
-A managed identity allows the Azure Load testing resource to access other Azure resources. In this section, you grant the Azure Load Testing service access to read secret values from your key vault.
+Using managed identities for Azure resources, your Azure load testing resource can acquire access tokens to authenticate to your Azure key vault. Grant access by assigning the [appropriate role](/azure/role-based-access-control/built-in-roles) to the managed identity.
+
+To grant your Azure load testing resource permissions to read secrets from your Azure key vault:
+
-If you don't already have a key vault, follow the instructions in [Azure Key Vault quickstart](../key-vault/secrets/quick-create-cli.md) to create it.
+1. In the [Azure portal](https://portal.azure.com/), go to your Azure key vault resource.
-1. In the Azure portal, go to your Azure Key Vault resource.
+ If you don't have a key vault, follow the instructions in [Azure Key Vault quickstart](/azure/key-vault/secrets/quick-create-cli) to create one.
1. On the left pane, under **Settings**, select **Access Policies**, and then **Add Access Policy**.
If you don't already have a key vault, follow the instructions in [Azure Key Vau
:::image type="content" source="media/how-to-use-a-managed-identity/key-vault-add-policy.png" alt-text="Screenshot that shows how to add an access policy to your Azure key vault.":::
-1. Select **Select principal**, and then select the system-assigned or user-assigned principal for your Azure Load Testing resource.
+1. Select **Select principal**, and then select the system-assigned or user-assigned principal for your Azure load testing resource.
- The name of the system-assigned principal is the same name as the Azure Load Testing resource.
+ If you're using a system-assigned managed identity, the name matches that of your Azure load testing resource.
1. Select **Add**.
-You've now granted access to your Azure Load Testing resource to read the secret values from your Azure key vault.
+You've now granted your Azure load testing resource access to read secret values from your Azure key vault.
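+
+If you prefer scripting this step, a minimal Azure CLI sketch (placeholder values, and assuming your key vault uses access policies rather than Azure RBAC) grants the same secret read permissions to the managed identity:
+
+```azurecli
+# Use the object (principal) ID of the load testing resource's managed identity.
+az keyvault set-policy \
+    --name my-key-vault \
+    --object-id <managed-identity-principal-id> \
+    --secret-permissions get list
+```
+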
## Next steps
-* To learn how to parameterize a load test by using secrets, see [Parameterize a load test](./how-to-parameterize-load-tests.md).
-* Learn how to [Manage users and roles in Azure Load Testing](./how-to-assign-roles.md).
+* Learn how to [Parameterize a load test with secrets](./how-to-parameterize-load-tests.md).
+* Learn how to [Manage users and roles in Azure Load Testing](./how-to-assign-roles.md).
+* [What are managed identities for Azure resources?](/azure/active-directory/managed-identities-azure-resources/overview)
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/overview-what-is-azure-load-testing.md
You can integrate Azure Load Testing in your CI/CD pipeline at meaningful points
Get started with [adding load testing to your CI/CD workflow](./tutorial-identify-performance-regression-with-cicd.md) to quickly identify performance degradation of your application under load.
-In the test configuration, you [specify pass/fail rules](./how-to-define-test-criteria.md) to catch performance regressions early in the development cycle. For example, when the average response time exceeds a threshold, the test should fail.
+In the test configuration, [specify test fail criteria](./how-to-define-test-criteria.md) to catch application performance or stability regressions early in the development cycle. For example, get alerted when the average response time or the number of errors exceed a specific threshold.
Azure Load Testing will automatically stop an automated load test in response to specific error conditions. You can also use the AutoStop listener in your Apache JMeter script. Automatically stopping safeguards you against failing tests further incurring costs, for example, because of an incorrectly configured endpoint URL.
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
A test configuration uses the following keys:
| `configurationFiles` | array | | List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script. <BR><BR>Azure Load Testing currently doesn't support the use of file paths in the JMX file. When you reference an external file in the test script, make sure to only specify the file name. |
| `description` | string | | Short description of the test run. |
| `subnetId` | string | | Resource ID of the subnet for testing privately hosted endpoints (VNET injection). This subnet will host the injected test engine VMs. For more information, see [how to load test privately hosted endpoints](./how-to-test-private-endpoint.md). |
-| `failureCriteria` | object | | Criteria that indicate when a test should fail. The structure of a pass/fail criterion is: `Request: Aggregate_function (client_metric) condition threshold`. For more information on the supported values, see [Load test pass/fail criteria](./how-to-define-test-criteria.md#load-test-passfail-criteria). |
+| `failureCriteria` | object | | Criteria that indicate when a test should fail. The structure of a fail criterion is: `Request: Aggregate_function (client_metric) condition threshold`. For more information on the supported values, see [Define load test fail criteria](./how-to-define-test-criteria.md#load-test-fail-criteria). |
| `properties` | object | | List of properties to configure the load test. |
| `properties.userPropertyFile` | string | | File to use as an Apache JMeter [user properties file](https://jmeter.apache.org/usermanual/test_plan.html#properties). The file will be uploaded to the Azure Load Testing resource alongside the JMeter test script and other configuration files. If the file is in a subfolder on your local machine, use a path relative to the location of the test script. |
| `splitAllCSVs` | boolean | False | Split the input CSV files evenly across all test engine instances. For more information, see [Read a CSV file in load tests](./how-to-read-csv-data.md#split-csv-input-data-across-test-engines). |
load-testing Tutorial Identify Performance Regression With Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-performance-regression-with-cicd.md
You can specify load test fail criteria for Azure Load Testing in the test confi
- percentage(error) > 20 ```
- You've now specified pass/fail criteria for your load test. The test will fail if at least one of these conditions is met:
+ You've now specified fail criteria for your load test based on the average response time and the error rate. The test will fail if at least one of these conditions is met:
- The aggregate average response time is greater than 100 ms. - The aggregate percentage of errors is greater than 20%.
You can specify load test fail criteria for Azure Load Testing in the test confi
1. After the test finishes, notice that the CI/CD pipeline run has failed.
- In the CI/CD output log, you find that the test failed because one of the fail criteria was met. The load test average response time was higher than the value that you specified in the pass/fail criteria.
+ In the CI/CD output log, you find that the test failed because one of the fail criteria was met. The load test average response time was higher than the value that you specified in the fail criteria.
:::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/test-criteria-failed.png" alt-text="Screenshot that shows pipeline logs after failed test criteria."::: The Azure Load Testing service evaluates the criteria during the test run. If any of these conditions fails, Azure Load Testing service returns a nonzero exit code. This code informs the CI/CD workflow that the test has failed.
-1. Edit the *SampleApp.yml* file and change the test's pass/fail criteria to increase the criterion for average response time:
+1. Edit the *SampleApp.yml* file and change the test's fail criteria to increase the criterion for average response time:
```yaml failureCriteria:
You've now created a CI/CD workflow that uses Azure Load Testing to automate run
* Learn more about [Configuring server-side monitoring](./how-to-monitor-server-side-metrics.md). * Learn more about [Comparing results across multiple test runs](./how-to-compare-multiple-test-runs.md). * Learn more about [Parameterizing a load test](./how-to-parameterize-load-tests.md).
-* Learn more about [Defining test pass/fail criteria](./how-to-define-test-criteria.md).
+* Learn more about [Defining test fail criteria](./how-to-define-test-criteria.md).
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-authenticate-batch-endpoint.md
from azure.identity import ManagedIdentityCredential
subscription_id = "<subscription>" resource_group = "<resource-group>" workspace = "<workspace>"
+resource_id = "<resource-id>"
-ml_client = MLClient(ManagedIdentityCredential("<resource-id>"), subscription_id, resource_group, workspace)
+ml_client = MLClient(ManagedIdentityCredential(resource_id), subscription_id, resource_group, workspace)
``` Once authenticated, use the following command to run a batch deployment job:
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Azure Machine Learning lets you bring data from a local machine or an existing cloud-based storage. In this article you will learn the main data concepts in Azure Machine Learning, including: > [!div class="checklist"]
-> - [**URIs**](#uris) - A **U**niform **R**esource **I**dentifier that is a reference to a storage location on your local computer or in the cloud that makes it very easy to access data in your jobs.
-> - [**Data asset**](#data-asset) - Create data assets in your workspace to share with team members, version, and track data lineage.
-> - [**Datastore**](#datastore) - Azure Machine Learning Datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts.
-> - [**MLTable**](#mltable) - a method to abstract the schema definition for tabular data so that it is easier for consumers of the data to materialize the table into a Pandas/Dask/Spark dataframe.
+> - [**URIs**](#uris) - A **U**niform **R**esource **I**dentifier that is a reference to a storage location on your local computer or in the cloud that makes it easy to access data in your jobs. Azure Machine Learning distinguishes two types of URIs: `uri_file` and `uri_folder`. To consume a file as a job input, define the input with `type` set to `uri_file` and `path` set to the location of the file.
+> - [**MLTable**](#mltable) - `MLTable` helps you abstract the schema definition for tabular data, which makes it better suited to complex or frequently changing schemas and to AutoML scenarios. If you only want to create a data asset for a job, or you want to write your own parsing logic in Python, use `uri_file` or `uri_folder` instead.
+> - [**Data asset**](#data-asset) - If you plan to share your data (URIs or MLTables) with team members in your workspace, or you want to track data versions or lineage, create data assets from the URIs or MLTables you have. If you don't create a data asset, you can still consume the data in jobs, but without lineage tracking, version management, and so on.
+> - [**Datastore**](#datastore) - Azure Machine Learning datastores securely keep the connection information (storage container name, credentials) for your data storage on Azure, so you don't have to code it in your scripts. You can point to your data by using an Azure Machine Learning datastore URI and a relative path to the data. You can also register files or folders in a datastore as data assets.
+ ## URIs A URI (uniform resource identifier) represents a storage location on your local computer, an attached Datastore, blob/ADLS storage, or a publicly available http(s) location. In addition to local paths (for example: `./path_to_my_data/`), several different protocols are supported for cloud storage locations:
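As a hedged sketch of that pattern (the input name, datastore, and file path are placeholders, not values from this article), a job input that consumes a single file as a `uri_file` could be declared like this:

```yaml
# Hypothetical fragment of a CLI (v2) job specification.
# The input name, datastore, and path are illustrative placeholders.
inputs:
  my_input:
    type: uri_file
    path: azureml://datastores/workspaceblobstore/paths/samples/titanic.csv
```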
az ml data create --file data-example.yml --version 1
# [Consume data asset](#tab/cli-data-consume-example)
-To consume a data asset in a job, define your job specification in a YAML file the path to be `azureml:<NAME_OF_DATA_ASSET>:<VERSION>`, for example:
+To consume a registered data asset in a job, define your job specification in a YAML file. Specify the type of your data asset (the type defaults to `uri_folder` if you don't provide a value), and set the path to `azureml:<NAME_OF_DATA_ASSET>:<VERSION>` so that you don't have to look up the datastore URI or storage URI (those two path formats are also supported).
+For example:
```yml # hello-data-uri-file.yml
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Learn how to deploy a custom container as an online endpoint in Azure Machine Le
Custom container deployments can use web servers other than the default Python Flask server used by Azure Machine Learning. Users of these deployments can still take advantage of Azure Machine Learning's built-in monitoring, scaling, alerting, and authentication.
+This article focuses on serving a TensorFlow model with TensorFlow Serving. You can find [various examples](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/custom-container) for TorchServe, Triton Inference Server, Plumber R package, and AzureML Inference Minimal image.
+ > [!WARNING] > Microsoft may not be able to help troubleshoot problems caused by a custom image. If you encounter problems, you may be asked to use the default image or one of the images Microsoft provides to see if the problem is specific to your image.
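As a rough, hedged sketch of what such a deployment can look like (the names, model path, ports, and VM size below are illustrative assumptions, not values taken from this article), a custom-container online deployment pairs an inline environment image with an `inference_config` that routes liveness, readiness, and scoring traffic to the container:

```yaml
# Hedged sketch of a custom-container online deployment (TensorFlow Serving).
# Names, model path, ports, and instance size are placeholders.
name: tfserving-deployment
endpoint_name: tfserving-endpoint
model:
  name: half-plus-two
  path: ./half_plus_two
environment:
  image: docker.io/tensorflow/serving:latest
  inference_config:
    liveness_route:
      port: 8501
      path: /v1/models/half_plus_two
    readiness_route:
      port: 8501
      path: /v1/models/half_plus_two
    scoring_route:
      port: 8501
      path: /v1/models/half_plus_two:predict
instance_type: Standard_DS3_v2
instance_count: 1
```

The ports and scoring path above follow TensorFlow Serving's default REST API; a different web server would use its own routes.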
machine-learning Migrate To V2 Assets Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-data.md
This article gives a comparison of data scenario(s) in SDK v1 and SDK v2.
|Functionality in SDK v1|Rough mapping in SDK v2| |-|-|
-|[Method/API in SDK v1](/python/api/azurzeml-core/azureml.datadisplayname: migration, v1, v2)|[Method/API in SDK v2](/python/api/azure-ai-ml/azure.ai.ml.entities)|
+|[Method/API in SDK v1](/python/api/azureml-core/azureml.data)|[Method/API in SDK v2](/python/api/azure-ai-ml/azure.ai.ml.entities)|
## Next steps
machine-learning Reference Yaml Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-mltable.md
+
+ Title: 'CLI (v2) mltable YAML schema'
+
+description: Reference documentation for the CLI (v2) MLTable YAML schema.
++++++++ Last updated : 09/15/2022+++
+# CLI (v2) mltable YAML schema
++
+`MLTable` is a way to abstract the schema definition for tabular data so that it is easier for consumers of the data to materialize the table into a Pandas/Dask/Spark dataframe.
+
+The ideal scenarios to use `MLTable` are:
+
+- The schema of your data is complex and/or changes frequently.
+- You only need a subset of the data (for example, a sample of rows or files, or specific columns).
+- AutoML jobs requiring tabular data.
+
+If your scenario doesn't fit the above, then [URIs](reference-yaml-data.md) are likely a more suitable type.
+
+The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/MLTable.schema.json.
++
+> [!NOTE]
+> If you only want to create a data asset for a job, or you want to write your own parsing logic in Python, you can just use `uri_file` or `uri_folder` as described in the [CLI (v2) data YAML schema](reference-yaml-data.md).
+++
+## YAML syntax
+
+| Key | Type | Description | Allowed values | Default value |
+| --- | ---- | ----------- | -------------- | ------------- |
+| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
+| `type` | const | `mltable` to abstract the schema definition for tabular data so that it is easier for consumers of the data to materialize the table into a Pandas/Dask/Spark dataframe | `mltable` | `mltable`|
+| `paths` | array | Paths can be a `file` path, a `folder` path, or a `pattern` for paths. `pattern` specifies a search pattern to allow globbing (`*` and `**`) of files and folders containing data. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, and `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. |`file`, `folder`, `pattern` | |
+| `transformations` | array | Defined sequence of transformations that are applied to data loaded from the defined paths. | `read_delimited`, `read_parquet`, `read_json_lines`, `read_delta_lake`, `take` (takes the first N rows from the dataset), `take_random_sample` (takes a random sample of records from the dataset with approximately the specified probability), `drop_columns`, `keep_columns`, ... ||
+
+## Examples
+
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/assets/data). Several examples are also shown below.
+
+## MLTable paths: file
+```yaml
+type: mltable
+paths:
+ - file: https://dprepdata.blob.core.windows.net/demo/Titanic2.csv
+transformations:
+ - take: 1
+```
+
+## MLTable paths: pattern
+```yaml
+type: mltable
+paths:
+ - pattern: ./*.txt
+transformations:
+ - read_delimited:
+ delimiter: ,
+ encoding: ascii
+ header: all_files_same_headers
+  - convert_column_types:
+      - columns: [Trip_Pickup_DateTime, Trip_Dropoff_DateTime]
+        column_type:
+          datetime:
+            formats: ['%Y-%m-%d %H:%M:%S']
+```
+
+## MLTable transformations
+These transformations apply to all mltable-artifact files; a short sketch that combines several of them follows this list:
+
+- `take`: Takes the first *n* records of the table
+- `take_random_sample`: Takes a random sample of the table where each record has a *probability* of being selected. The user can also include a *seed*.
+- `drop_columns`: Drops the specified columns from the table. This transform supports regex so that users can drop columns matching a particular pattern.
+- `keep_columns`: Keeps only the specified columns in the table. This transform supports regex so that users can keep columns matching a particular pattern.
+- `convert_column_types`: Converts the specified columns to the given types.
+  - `columns`: The name of the column (or columns) whose type you want to convert.
+  - `column_type`: The type you want to convert the column to. For example: string, float, int, or datetime with specified formats.
+
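Here's a short, hedged sketch that combines a few of the transformations above (the file path and column names are placeholders, not values from this article):

```yaml
type: mltable
paths:
  - file: ./titanic.csv
transformations:
  - read_delimited:
      delimiter: ','
      header: all_files_same_headers
  - keep_columns: [PassengerId, Survived, Age]
  - take_random_sample:
      probability: 0.1
      seed: 7
```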
+## MLTable transformations: read_delimited
+
+```yaml
+paths:
+ - file: https://dprepdata.blob.core.windows.net/demo/Titanic2.csv
+transformations:
+ - read_delimited:
+ infer_column_types: false
+ delimiter: ','
+ encoding: 'ascii'
+ empty_as_string: false
+ - take: 10
+```
+
+### Delimited files transformations
+The following transformations are specific to delimited files; a sketch that uses several of these options follows this list.
+
+- `infer_column_types`: Boolean to infer column data types. Defaults to `True`. Type inference requires that the data source is accessible from the current compute. Currently, type inference only pulls the first 200 rows.
+- `encoding`: Specify the file encoding. Supported encodings are `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom`, and `windows1252`. Defaults to `utf8`.
+- `header`: The user can choose one of the following options: `no_header`, `from_first_file`, `all_files_different_headers`, `all_files_same_headers`. Defaults to `all_files_same_headers`.
+- `delimiter`: The separator used to split columns.
+- `empty_as_string`: Specify whether empty field values should be loaded as empty strings. The default (`False`) reads empty field values as nulls. Passing `True` reads empty field values as empty strings. If the values are converted to numeric or datetime, this setting has no effect, because empty values are converted to nulls.
+- `include_path_column`: Boolean to keep path information as a column in the table. Defaults to `False`. This setting is useful when you're reading multiple files and want to know which file a particular record originated from, or when the file path itself carries useful information.
+- `support_multi_line`: By default (`support_multi_line=False`), all line breaks, including those in quoted field values, are interpreted as a record break. Reading data this way is faster and more optimized for parallel execution on multiple CPU cores. However, it may silently produce more records with misaligned field values. Set this to `True` when the delimited files are known to contain quoted line breaks.
+
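A hedged sketch that exercises several of these options (the file path is a placeholder):

```yaml
type: mltable
paths:
  - file: ./logs/records.csv
transformations:
  - read_delimited:
      delimiter: ','
      header: from_first_file
      empty_as_string: true
      include_path_column: true
      support_multi_line: true
```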
+## MLTable transformations: read_json_lines
+```yaml
+paths:
+ - file: ./order_invalid.jsonl
+transformations:
+ - read_json_lines:
+ encoding: utf8
+ invalid_lines: drop
+ include_path_column: false
+```
+
+## MLTable transformations: read_json_lines, convert_column_types
+```yaml
+paths:
+ - file: ./train_annotations.jsonl
+transformations:
+ - read_json_lines:
+ encoding: utf8
+ invalid_lines: error
+ include_path_column: false
+ - convert_column_types:
+ - columns: image_url
+ column_type: stream_info
+```
+
+### JSON Lines transformations
+Only flat JSON files are supported.
+The following supported transformations are specific to JSON Lines:
+
+- `include_path_column`: Boolean to keep path information as a column in the MLTable. Defaults to `False`. This setting is useful when you're reading multiple files and want to know which file a particular record originated from, or when the file path itself carries useful information.
+- `invalid_lines`: How to handle lines that are invalid JSON. Supported values are `error` and `drop`. Defaults to `error`.
+- `encoding`: Specify the file encoding. Supported encodings are `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom`, and `windows1252`. Defaults to `utf8`.
++
+## MLTable transformations: read_parquet
+```yaml
+type: mltable
+traits:
+ index_columns: ID
+paths:
+ - file: ./crime.parquet
+transformations:
+ - read_parquet
+```
+### Parquet files transformations
+If you don't define options for the `read_parquet` transformation, the default options are selected (see below).
+
+- `include_path_column`: Boolean to keep path information as a column in the table. Defaults to `False`. This setting is useful when you're reading multiple files and want to know which file a particular record originated from, or when the file path itself carries useful information.
+
+## MLTable transformations: read_delta_lake
+```yaml
+type: mltable
+
+paths:
+  - folder: abfss://my_delta_files
+
+transformations:
+  - read_delta_lake:
+      timestamp_as_of: '2022-08-26T00:00:00Z'
+```
+
+### Delta lake transformations
+
+- `timestamp_as_of`: Timestamp to be specified for time-travel on the specific Delta Lake data.
+- `version_as_of`: Version to be specified for time-travel on the specific Delta Lake data. A sketch that uses this option follows.
+
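A hedged sketch that uses `version_as_of` instead of a timestamp (the path and version number are placeholders):

```yaml
type: mltable
paths:
  - folder: abfss://my_delta_files
transformations:
  - read_delta_lake:
      version_as_of: 1
```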
+## Next steps
+
+- [Install and use the CLI (v2)](how-to-configure-cli.md)
migrate How To Create Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-assessment.md
Run an assessment as follows:
1. On the **Get started** page > **Servers, databases and web apps**, select **Discover, assess and migrate**.
- ![Screenshot of Get started screen.](./media/tutorial-assess-vmware-azure-vm/assess.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assess.png" alt-text="Screenshot of Get started screen.":::
2. In **Azure Migrate: Discovery and assessment**, select **Assess** and select **Azure VM**.
- ![Screenshot of Assess VM selection.](./media/tutorial-assess-vmware-azure-vm/assess-servers.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assess-servers.png" alt-text="Screenshot of Assess VM selection.":::
3. The **Create assessment** wizard appears with **Azure VM** as the **Assessment type**. 4. In **Discovery source**:
Run an assessment as follows:
1. Select **Edit** to review the assessment properties.
- ![Screenshot of View all button to review assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-name.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assessment-name.png" alt-text="Screenshot of View all button to review assessment properties.":::
+
1. In **Assessment properties** > **Target Properties**: - In **Target location**, specify the Azure region to which you want to migrate.
- - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
+ - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you'll be prompted to specify **Reserved Instances** and **VM series**.
- In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#azure-government). - In **Storage type**, - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput. - Alternatively, select the storage type you want to use for VM when you migrate it. - In **Reserved Instances**, specify whether you want to use reserve instances for the VM when you migrate it.
- - If you select to use a reserved instance, you can't specify '**Discount (%)**, or **VM uptime**.
- - [Learn more](https://aka.ms/azurereservedinstances).
+ - If you select to use a reserved instance, you can't specify **Discount (%)** or **VM uptime**. [Learn more](https://aka.ms/azurereservedinstances).
1. In **VM Size**: - In **Sizing criterion**, select if you want to base the assessment on server configuration data/metadata, or on performance-based data. If you use performance data: - In **Performance history**, indicate the data duration on which you want to base the assessment.
Run an assessment as follows:
1. Select **Save** if you make changes.
- ![Screenshot of Assessment properties.](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assessment-properties.png" alt-text="Screenshot of Assessment properties.":::
1. In **Assess Servers**, select **Next**.
Run an assessment as follows:
1. In **Select or create a group** > select **Create New** and specify a group name.
- ![Screenshot of adding VMs to a group.](./media/tutorial-assess-vmware-azure-vm/assess-group.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assess-group.png" alt-text="Screenshot of adding VMs to a group.":::
1. Select the appliance, and select the VMs you want to add to the group. Then select **Next**.
An Azure VM assessment describes:
### View an Azure VM assessment 1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to **Azure VM**.
-2. In **Assessments**, select an assessment to open it. As an example (estimations and costs for example only):
+2. In **Assessments**, select an assessment to open it. As an example (estimations and costs shown are for illustration only):
- ![Screenshot of an Assessment summary.](./media/how-to-create-assessment/assessment-summary.png)
+ :::image type="content" source="./media/how-to-create-assessment/assessment-summary.png" alt-text="Screenshot of an Assessment summary.":::
### Review Azure readiness
This view shows the estimated compute and storage cost of running VMs in Azure.
When you run performance-based assessments, a confidence rating is assigned to the assessment.
-![Screenshot of Confidence rating.](./media/how-to-create-assessment/confidence-rating.png)
- A rating from 1-star (lowest) to 5-star (highest) is awarded. - The confidence rating helps you estimate the reliability of the size recommendations provided by the assessment.
migrate How To Modify Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-modify-assessment.md
ms. Previously updated : 07/15/2019 Last updated : 10/25/2022+
This article describes how to customize assessments created by Azure Migrate Dis
[Azure Migrate](migrate-services-overview.md) provides a central hub to track discovery, assessment, and migration of your on-premises apps and workloads, and private/public cloud VMs, to Azure. The hub provides Azure Migrate tools for assessment and migration, as well as third-party independent software vendor (ISV) offerings.
-You can use the Azure Migrate Discovery and assessment tool to create assessments for on-premises VMware VMs and Hyper-V VMs, in preparation for migration to Azure. Discovery and assessment tool assesses on-premises servers for migration to Azure IaaS virtual machines and Azure VMware Solution (AVS).
+You can use the Azure Migrate Discovery and assessment tool to create assessments for on-premises VMware VMs and Hyper-V VMs, in preparation for migration to Azure. The Discovery and assessment tool assesses on-premises servers for migration to Azure IaaS virtual machines and Azure VMware Solution (AVS).
## About assessments
-Assessments you create with Discovery and assessment tool are a point-in-time snapshot of data. There are two types of assessments you can create using Azure Migrate: Discovery and assessment.
+Assessments that you create with the Discovery and assessment tool are a point-in-time snapshot of data. There are two types of assessments that you can create using Azure Migrate: Discovery and assessment.
**Assessment Type** | **Details** |
-**Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md), [Hyper-V VMs](how-to-set-up-appliance-hyper-v.md), and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure using this assessment type.(concepts-assessment-calculation.md)
-**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type.[Learn more](concepts-azure-vmware-solution-assessment-calculation.md)
+**Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md), [Hyper-V VMs](how-to-set-up-appliance-hyper-v.md), and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure using this assessment type. [Learn more](concepts-assessment-calculation.md).
+**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md).
Sizing criteria options in Azure Migrate assessments: **Sizing criteria** | **Details** | **Data** | |
-**Performance-based** | Assessments that make recommendations based on collected performance data | **Azure VM assessment**: VM size recommendation is based on CPU and memory utilization data.<br/><br/> Disk type recommendation (standard HDD/SSD or premium-managed disks) is based on the IOPS and throughput of the on-premises disks.<br/><br/>**Azure SQL assessment**: The Azure SQL configuration is based on performance data of SQL instances and databases, which includes: CPU utilization, Memory utilization, IOPS (Data and Log files), throughput and latency of IO operations<br/><br/>**Azure VMware Solution (AVS) assessment**: AVS nodes recommendation is based on CPU and memory utilization data.
+**Performance-based** | Assessments that make recommendations based on collected performance data. | **Azure VM assessment**: VM size recommendation is based on CPU and memory utilization data.<br/><br/> Disk type recommendation (standard HDD/SSD or premium-managed disks) is based on the IOPS and throughput of the on-premises disks.<br/><br/>**Azure SQL assessment**: The Azure SQL configuration is based on performance data of SQL instances and databases, which includes CPU utilization, Memory utilization, IOPS (Data and Log files), throughput, and latency of IO operations<br/><br/>**Azure VMware Solution (AVS) assessment**: AVS nodes recommendation is based on CPU and memory utilization data.
**As-is on-premises** | Assessments that don't use performance data to make recommendations. | **Azure VM assessment**: VM size recommendation is based on the on-premises VM size<br/><br> The recommended disk type is based on what you select in the storage type setting for the assessment.<br/><br/> **Azure VMware Solution (AVS) assessment**: AVS nodes recommendation is based on the on-premises VM size. ## How is an assessment done?
-An assessment done in Azure Migrate Discovery and assessment has three stages. Assessment starts with a suitability analysis, followed by sizing, and lastly, a monthly cost estimation. A machine only moves along to a later stage if it passes the previous one. For example, if a machine fails the Azure suitability check, itΓÇÖs marked as unsuitable for Azure, and sizing and costing won't be done. [Learn more.](./concepts-assessment-calculation.md)
+An assessment done in Azure Migrate Discovery and assessment has three stages. Assessment starts with a suitability analysis, followed by sizing, and lastly, a monthly cost estimation. A machine only moves along to a later stage if it passes the previous one. For example, if a machine fails the Azure suitability check, it's marked as unsuitable for Azure, and sizing and costing won't be done. [Learn more](./concepts-assessment-calculation.md).
## What's in an Azure VM assessment?
An assessment done in Azure Migrate Discovery and assessment has three stages. A
**Currency** | Billing currency. **Discount (%)** | Any subscription-specific discount you receive on top of the Azure offer.<br/> The default setting is 0%. **VM uptime** | If your VMs are not going to be running 24x7 in Azure, you can specify the duration (number of days per month and number of hours per day) for which they would be running and the cost estimations would be done accordingly.<br/> The default value is 31 days per month and 24 hours per day.
-**Azure Hybrid Benefit** | Specify whether you have software assurance and are eligible for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/). If set to Yes, non-Windows Azure prices are considered for Windows VMs. Default is Yes.
+**Azure Hybrid Benefit** | Specify whether you have software assurance and are eligible for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/). If set to Yes, non-Windows Azure prices are considered for Windows VMs. By default, Azure Hybrid Benefit is set to Yes.
## What's in an Azure VMware Solution (AVS) assessment?
Here's what's included in an AVS assessment:
| **Property** | **Details** | | - | - |
-| **Target location** | Specifies the AVS private cloud location to which you want to migrate.<br/><br/> AVS Assessment currently supports these target regions: East US, West Europe, West US. |
-| **Storage type** | Specifies the storage engine to be used in AVS.<br/><br/> Note that AVS assessments only supports vSAN as a default storage type. |
+| **Target location** | Specifies the AVS private cloud location to which you want to migrate.<br/><br/> AVS Assessment currently supports these target regions: East US, West Europe, and West US. |
+| **Storage type** | Specifies the storage engine to be used in AVS.<br/><br/> Note that AVS assessments only support vSAN as a default storage type. |
**Reserved Instances (RIs)** | This property helps you specify Reserved Instances in AVS. RIs are currently not supported for AVS nodes. | **Node type** | Specifies the [AVS Node type](../azure-vmware/concepts-private-clouds-clusters.md) used to map the on-premises VMs. Note that default node type is AV36. <br/><br/> Azure Migrate will recommend a required number of nodes for the VMs to be migrated to AVS. |
-**FTT Setting, RAID Level** | Specifies the applicable Failure to Tolerate and Raid combinations. The selected FTT option combined with the on-premises VM disk requirement will determine the total vSAN storage required in AVS. |
+**FTT Setting, RAID Level** | Specifies the applicable Failure to Tolerate (FTT) and RAID combinations. The selected FTT option combined with the on-premises VM disk requirement will determine the total vSAN storage required in AVS. |
**Sizing criterion** | Sets the criteria to be used to _right-size_ VMs for AVS. You can opt for _performance-based_ sizing or _as on-premises_ without considering the performance history. | **Performance history** | Sets the duration to consider in evaluating the performance data of machines. This property is applicable only when the sizing criteria is _performance-based_. | **Percentile utilization** | Specifies the percentile value of the performance sample set to be considered for right-sizing. This property is applicable only when the sizing is performance-based.|
Here's what's included in an AVS assessment:
**Offer** | Displays the [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) you're enrolled in. Azure Migrate estimates the cost accordingly.| **Currency** | Shows the billing currency for your account. | **Discount (%)** | Lists any subscription-specific discount you receive on top of the Azure offer. The default setting is 0%. |
-**Azure Hybrid Benefit** | Specifies whether you have software assurance and are eligible for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/). Although this has no impact on Azure VMware solutions pricing due to the node based price, customers can still apply their on-premises OS licenses (Microsoft based) in AVS using Azure Hybrid Benefits. Other software OS vendors will have to provide their own licensing terms such as RHEL for example. |
+**Azure Hybrid Benefit** | Specifies whether you have software assurance and are eligible for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/). Although this has no impact on the Azure VMware solution's pricing due to the node-based price, customers can still apply their on-premises OS licenses (Microsoft-based) in AVS using Azure Hybrid Benefits. Other software OS vendors such as RHEL, for example, will have to provide their own licensing terms. |
**vCPU Oversubscription** | Specifies the ratio of number of virtual cores tied to 1 physical core in the AVS node. The default value in the calculations is 4 vCPU : 1 physical core in AVS. <br/><br/> API users can set this value as an integer. Note that vCPU Oversubscription > 4:1 may begin to cause performance degradation but can be used for web server type workloads. | ## What properties are used to create and customize an Azure SQL assessment?
Here's what's included in Azure SQL assessment properties:
**Property** | **Details** | **Target location** | The Azure region to which you want to migrate. Azure SQL configuration and cost recommendations are based on the location that you specify.
-**Target deployment type** | The target deployment type you want to run the assessment on: <br/><br/> Select **Recommended**, if you want Azure Migrate to assess the readiness of your SQL servers for migrating to Azure SQL MI and Azure SQL DB, and recommend the best suited target deployment option, target tier, Azure SQL configuration and monthly estimates.<br/><br/>Select **Azure SQL DB**, if you want to assess your SQL servers for migrating to Azure SQL Databases only and review the target tier, Azure SQL DB configuration and monthly estimates.<br/><br/>Select **Azure SQL MI**, if you want to assess your SQL servers for migrating to Azure SQL Databases only and review the target tier, Azure SQL MI configuration and monthly estimates.
+**Target deployment type** | The target deployment type you want to run the assessment on: <br/><br/> Select **Recommended** if you want Azure Migrate to assess the readiness of your SQL servers for migrating to Azure SQL MI and Azure SQL DB, and recommend the best suited target deployment option, target tier, Azure SQL configuration, and monthly estimates.<br/><br/>Select **Azure SQL DB** if you want to assess your SQL servers for migrating to Azure SQL Databases only and review the target tier, Azure SQL DB configuration, and monthly estimates.<br/><br/>Select **Azure SQL MI** if you want to assess your SQL servers for migrating to Azure SQL Managed Instances only and review the target tier, Azure SQL MI configuration, and monthly estimates.
**Reserved capacity** | Specifies reserved capacity so that cost estimations in the assessment take it into account.<br/><br/> If you select a reserved capacity option, you can't specify "Discount (%)".
-**Sizing criteria** | This property is used to right-size the Azure SQL configuration. <br/><br/> It is defaulted to **Performance-based** which means the assessment will collect the SQL Server instances and databases performance metrics to recommend an optimal-sized Azure SQL Managed Instance and/or Azure SQL Database tier/configuration recommendation.
-**Performance history** | Performance history specifies the duration used when performance data is evaluated.
+**Sizing criteria** | This property is used to right-size the Azure SQL configuration. <br/><br/> It defaults to **Performance-based**, which means the assessment will collect performance metrics for the SQL Server instances and databases to recommend an optimal-sized Azure SQL Managed Instance and/or Azure SQL Database tier/configuration.
+**Performance history** | Performance history specifies the duration used when the performance data is evaluated.
**Percentile utilization** | Percentile utilization specifies the percentile value of the performance sample used for rightsizing. **Comfort factor** | The buffer used during assessment. It accounts for issues like seasonal usage, short performance history, and likely increases in future usage.<br/><br/> For example, a 10-core instance with 20% utilization normally results in a two-core instance. With a comfort factor of 2.0, the result is a four-core instance instead.
-**Offer/Licensing program** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. Currently you can only choose from Pay-as-you-go and Pay-as-you-go Dev/Test. Note that you can avail additional discount by applying reserved capacity and Azure Hybrid Benefit on top of Pay-as-you-go offer.
+**Offer/Licensing program** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. Currently, you can only choose from **Pay-as-you-go** and **Pay-as-you-go Dev/Test**. Note that you can get an additional discount by applying reserved capacity and Azure Hybrid Benefit on top of the Pay-as-you-go offer.
**Service tier** | The most appropriate service tier option to accommodate your business needs for migration to Azure SQL Database and/or Azure SQL Managed Instance:<br/><br/>**Recommended** if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical. <br/><br/> **General Purpose** If you want an Azure SQL configuration designed for budget-oriented workloads. [Learn More](/azure/azure-sql/database/service-tier-general-purpose) <br/><br/> **Business Critical** If you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers. [Learn More](/azure/azure-sql/database/service-tier-business-critical) **Currency** | The billing currency for your account. **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
Here's what's included in Azure SQL assessment properties:
To edit assessment properties after creating an assessment, do the following:
-1. In the Azure Migrate project, click **Servers**.
-2. In **Azure Migrate: Discovery and assessment**, click the assessments count.
-3. In **Assessment**, click the relevant assessment > **Edit properties**.
+1. In the Azure Migrate project, select **Servers**.
+2. In **Azure Migrate: Discovery and assessment**, select the assessments count.
+3. In **Assessment**, select the relevant assessment > **Edit properties**.
5. Customize the assessment properties in accordance with the tables above.
-6. Click **Save** to update the assessment.
+6. Select **Save** to update the assessment.
-You can also edit assessment properties when you're creating an assessment.
+You can also edit the assessment properties when you're creating an assessment.
## Next steps
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
The estimated time to recover the server (recovery time objective, or RTO) depen
During the geo-restore, the server configurations that can be changed include virtual network settings and the ability to remove geo-redundant backup from the restored server. Changing other server configurations--such as compute, storage, or pricing tier (Burstable, General Purpose, or Memory Optimized)--during geo-restore is not supported.
-For more information about performing a geo-restore, see the [how-to guide](how-to-restore-server-portal.md#performing-geo-restore).
+For more information about performing a geo-restore, see the [how-to guide](how-to-restore-server-portal.md#perform-geo-restore).
> [!IMPORTANT] > When the primary region is down, you can't create geo-redundant servers in the respective geo-paired region, because storage can't be provisioned in the primary region. Before you can provision geo-redundant servers in the geo-paired region, you must wait for the primary region to be up.
postgresql Concepts Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compliance.md
industry specific, and region/country specific. Compliance offerings are based o
Azure Database for PostgreSQL - Flexible Server has achieved a comprehensive set of national, regional, and industry-specific compliance certifications in our Azure public cloud to help you comply with requirements governing the collection and use of your data.
-[!div class="mx-tableFixed"]
+> [!div class="mx-tableFixed"]
> | **Certification**| **Applicable To** | > ||| > |HIPAA and HITECH Act (U.S.) | Healthcare|
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-firewall-rules.md
Last updated 11/30/2021
When you're running Azure Database for PostgreSQL - Flexible Server, you have two main networking options. The options are private access (virtual network integration) and public access (allowed IP addresses).
-With public access, the Azure Database for PostgreSQL server is accessed through a public endpoint. By default, the firewall blocks all access to the server. To specify which IP hosts can access the server, you create server-level *firewall rules*. Firewall rules specify allowed public IP address ranges. The firewall grants access to the server based on the originating IP address of each request.
+With public access, the Azure Database for PostgreSQL server is accessed through a public endpoint. By default, the firewall blocks all access to the server. To specify which IP hosts can access the server, you create server-level *firewall rules*. Firewall rules specify allowed public IP address ranges. The firewall grants access to the server based on the originating IP address of each request. With [private access](concepts-networking.md#private-access-vnet-integration), no public endpoint is available, and only hosts located on the same network can access Azure Database for PostgreSQL - Flexible Server.
You can create firewall rules by using the Azure portal or by using Azure CLI commands. You must be the subscription owner or a subscription contributor.
postgresql How To Manage High Availability Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-high-availability-portal.md
Follow these steps to perform a planned failover from your primary to the standb
There are Azure regions that do not support availability zones. If you have already deployed non-HA servers, you cannot directly enable zone-redundant HA on the server, but you can perform a restore and enable HA on the restored server. The following steps show how to enable zone-redundant HA for that server.
-1. From the overview page of the server, click **Restore** to [perform a PITR](how-to-restore-server-portal.md#restoring-to-the-latest-restore-point). Choose **Latest restore point**.
+1. From the overview page of the server, click **Restore** to [perform a PITR](how-to-restore-server-portal.md#restore-to-the-latest-restore-point). Choose **Latest restore point**.
2. Choose a server name and availability zone. 3. Click **Review+Create**. 4. A new flexible server will be created from the backup.
postgresql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-portal.md
Title: Restore - Azure portal - Azure Database for PostgreSQL - Flexible Server
+ Title: Point-in-time restore of a flexible server - Azure portal
description: This article describes how to perform restore operations in Azure Database for PostgreSQL Flexible Server through the Azure portal.
Last updated 11/30/2021
-# Point-in-time restore of a Flexible Server
+# Point-in-time restore of a flexible server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article provides step-by-step procedure to perform point-in-time recoveries in flexible server using backups. You can perform either to a latest restore point or a custom restore point within your retention period.
+This article provides a step-by-step procedure for using the Azure portal to perform point-in-time recoveries in a flexible server through backups. You can perform this procedure to the latest restore point or to a custom restore point within your retention period.
-## Pre-requisites
+## Prerequisites
-To complete this how-to guide, you need:
+To complete this how-to guide, you need an Azure Database for PostgreSQL - Flexible Server instance. The procedure is also applicable for a flexible server that's configured with zone redundancy.
-- You must have an Azure Database for PostgreSQL - Flexible Server. The same procedure is also applicable for flexible server configured with zone redundancy.
+## Restore to the latest restore point
-## Restoring to the latest restore point
+Follow these steps to restore your flexible server to the latest restore point by using an existing backup:
-Follow these steps to restore your flexible server using an existing backup.
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore the backup from.
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
-
-2. Click **Overview** from the left panel and click **Restore**
+2. Select **Overview** from the left pane, and then select **Restore**.
- :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Restore overview":::
+ :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Screenshot that shows a server overview and the Restore button.":::
-3. Restore page will be shown with an option to choose between the latest restore point and Custom restore point.
+3. Under **Source details**, select **Latest restore point (Now)**.
-4. Select **Latest restore point** and provide a new server name in the **Restore to new server** field. You can optionally choose the Availability zone to restore to.
+4. Under **Server details**, for **Name**, provide a server name. For **Availability zone**, you can optionally choose an availability zone to restore to.
- :::image type="content" source="./media/how-to-restore-server-portal/restore-latest.png" alt-text="Latest restore time":::
-
-5. Click **OK**.
+ :::image type="content" source="./media/how-to-restore-server-portal/restore-latest.png" alt-text="Screenshot that shows selections for restoring to the latest restore point.":::
-6. A notification will be shown that the restore operation has been initiated.
+5. Select **OK**. A notification shows that the restore operation has started.
-## Restoring to a custom restore point
+## Restore to a custom restore point
-Follow these steps to restore your flexible server using an existing backup.
+Follow these steps to restore your flexible server to a custom restore point by using an existing backup:
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore the backup from.
-2. From the overview page, click **Restore**.
- :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Restore overview":::
+2. Select **Overview** from the left pane, and then select **Restore**.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Screenshot that shows a server overview and the Restore button.":::
-3. Restore page will be shown with an option to choose between the latest restore point, custom restore point and fast restore point.
+4. Under **Source details**, choose **Select a custom restore point**.
-4. Choose **Custom restore point**.
-
-5. Select date and time and provide a new server name in the **Restore to new server** field. Provide a new server name and you can optionally choose the **Availability zone** to restore to.
+5. Under **Server details**, for **Name**, provide a server name. For **Availability zone**, you can optionally choose an availability zone to restore to.
+ :::image type="content" source="./media/how-to-restore-server-portal/restore-custom-2.png" alt-text="Screenshot that shows selections for restoring to a custom restore point.":::
-6. Click **OK**.
-
-7. A notification will be shown that the restore operation has been initiated.
+6. Select **OK**. A notification shows that the restore operation has started.
- ## Restoring using fast restore
+## Restore by using fast restore
-Follow these steps to restore your flexible server using a fast restore option.
+Follow these steps to restore your flexible server by using a fast restore option:
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore the backup from.
-2. Click **Overview** from the left panel and click **Restore**
+2. Select **Overview** from the left pane, and then select **Restore**.
- :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Restore overview":::
+ :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Screenshot that shows a server overview and the Restore button.":::
-3. Restore page will be shown with an option to choose between the latest restore point, custom restore point and fast restore point.
+4. Under **Source details**, choose **Select Fast restore point (Restore using full backup only)**. For **Fast Restore point (UTC)**, select the full backup of your choice.
-4. Choose **Fast restore point (Restore using full backup only)**.
-
-5. Select full backup of your choice from the Fast Restore Point drop-down. Provide a **new server name** and you can optionally choose the **Availability zone** to restore to.
+5. Under **Server details**, for **Name**, provide a server name. For **Availability zone**, you can optionally choose an availability zone to restore to.
+ :::image type="content" source="./media/how-to-restore-server-portal/fast-restore.png" alt-text="Screenshot that shows selections for a fast restore point.":::
-6. Click **OK**.
+6. Select **OK**. A notification shows that the restore operation has started.
-7. A notification will be shown that the restore operation has been initiated.
+## Perform geo-restore
-## Performing Geo-Restore
+If your source server is configured with geo-redundant backup, you can restore the servers in a paired region.
-If your source server is configured with geo-redundant backup, you can restore the servers in a paired region. Note that, for the first time restore, please wait at least 1 hour after the source server is created.
+> [!NOTE]
+> For the first time that you perform a geo-restore, wait at least one hour after you create the source server.
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to geo-restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to geo-restore the backup from.
-2. From the overview page, click **Restore**.
- :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-click.png" alt-text="Restore click":::
+2. Select **Overview** from the left pane, and then select **Restore**.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-click.png" alt-text="Screenshot that shows the Restore button.":::
-3. From the restore page, choose Geo-Redundant restore to restore to a paired region.
- :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-choose-checkbox.png" alt-text="Geo-restore select":::
+3. Under **Source details**, for **Geo-redundant restore (preview)**, select the **Restore to paired region** checkbox.
-4. The region and the database versions are pre-selected. It will be restored to the last available data at the paired region. You can choose the **Availability zone** in the region to restore to.
+ :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-choose-checkbox.png" alt-text="Screenshot that shows the option for restoring to a paired region for geo-redundant restore.":::
+
+4. Under **Server details**, the region and the database version are pre-selected. The server will be restored to the last available data at the paired region. For **Availability zone**, you can optionally choose an availability zone to restore to.
+
+5. Select **OK**. A notification shows that the restore operation has started.
-5. By default, the backups for the restored server are configured with Geo-redundant backup. If you do not want geo-redundant backup, you can click **Configure Server** and uncheck the Geo-redundant backup.
+By default, the backups for the restored server are configured with geo-redundant backup. If you don't want geo-redundant backup, you can select **Configure Server** and then clear the **Restore to paired region** checkbox.
-6. If the source server is configured with **private access**, you can only restore to another VNET in the remote region. You can either choose an existing VNET or create a new VNET and restore your server into that VNET.
+If the source server is configured with *private access*, you can restore only to another virtual network in the remote region. You can either choose an existing virtual network or create a new virtual network and restore your server into that network.
## Next steps -- Learn about [business continuity](./concepts-business-continuity.md)-- Learn about [zone redundant high availability](./concepts-high-availability.md)-- Learn about [backup and recovery](./concepts-backup-restore.md)
+- Learn about [business continuity](./concepts-business-continuity.md).
+- Learn about [zone-redundant high availability](./concepts-high-availability.md).
+- Learn about [backup and recovery](./concepts-backup-restore.md).
private-link Create Private Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-bicep.md
Title: 'Quickstart: Create a private endpoint using Bicep' description: In this quickstart, you'll learn how to create a private endpoint using Bicep. -+ Last updated 05/02/2022-+ #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint using Bicep.
private-link Create Private Link Service Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-bicep.md
Title: 'Quickstart: Create a private link service in Azure Private Link using Bicep' description: In this quickstart, you use Bicep to create a private link service. -+ Last updated 04/29/2022-+ # Quickstart: Create a private link service using Bicep
purview Register Scan Power Bi Tenant Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-cross-tenant.md
Previously updated : 09/22/2022 Last updated : 10/24/2022
Use either of the following deployment checklists during the setup, or for troub
1. In the Power BI Azure AD tenant, validate the following app registration settings: 1. The app registration exists in your Azure AD tenant where the Power BI tenant is located.
- 2. Under **API permissions**, the following APIs are set up with **read** for **delegated permissions** and **grant admin consent for the tenant**:
- 1. Power BI Service Tenant.Read.All
- 2. Microsoft Graph openid
- 3. Microsoft Graph User.Read
+
+ 2. If a service principal is used, under **API permissions**, the following read **delegated permissions** are assigned for these APIs:
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
+ 3. If delegated authentication is used, under **API permissions**, the following read **delegated permissions** are assigned for these APIs, and **grant admin consent for the tenant** is set up:
+ - Power BI Service Tenant.Read.All
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
3. Under **Authentication**: 1. **Supported account types** > **Accounts in any organizational directory (Any Azure AD directory - Multitenant)** is selected. 2. **Implicit grant and hybrid flows** > **ID tokens (used for implicit and hybrid flows)** is selected.
Use either of the following deployment checklists during the setup, or for troub
1. In the Power BI Azure AD tenant, validate the following app registration settings:
   1. The app registration exists in your Azure AD tenant where the Power BI tenant is located.
- 2. Under **API permissions**, the following APIs are set up with **read** for **delegated permissions** and **grant admin consent for the tenant**:
- 1. Power BI Service Tenant.Read.All
- 2. Microsoft Graph openid
- 3. Microsoft Graph User.Read
+
+ 2. If a service principal is used, under **API permissions**, the following **delegated permissions** are assigned with read access for these APIs (a token sketch for this option follows this checklist):
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
+ 3. If delegated authentication is used, under **API permissions**, the following **delegated permissions** are set up with read access, and **admin consent is granted for the tenant**, for these APIs:
+ - Power BI Service Tenant.Read.All
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
   3. Under **Authentication**:
      1. **Supported account types** > **Accounts in any organizational directory (Any Azure AD directory - Multitenant)** is selected.
      2. **Implicit grant and hybrid flows** > **ID tokens (used for implicit and hybrid flows)** is selected.
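For the service principal option, the app authenticates with its own credentials rather than on behalf of a user. A rough sketch with MSAL for Python is shown below; the client ID, secret, and tenant ID are placeholders, and the `.default` scope simply requests whatever permissions have already been granted to the app.

```python
import msal

# Placeholders for this sketch: substitute your own app registration and Power BI tenant values.
app = msal.ConfidentialClientApplication(
    client_id="<client-id-of-the-app-registration>",
    client_credential="<client-secret-from-the-key-vault>",
    authority="https://login.microsoftonline.com/<power-bi-tenant-id>",
)

# App-only (service principal) authentication: no user context is involved.
result = app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)

if "access_token" not in result:
    raise RuntimeError(result.get("error_description", "Token request failed"))
```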
To create and run a new scan by using the Azure runtime, perform the following s
1. If your key vault isn't connected to Microsoft Purview yet, you need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account).
-1. Create an app registration in your Azure AD tenant where Power BI is located. Provide a web URL in the **Redirect URI**. Take note of the client ID (app ID).
+1. Create an app registration in your Azure AD tenant where Power BI is located. Provide a web URL in the **Redirect URI**.
+
+   :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-cross-tenant-app-registration.png" alt-text="Screenshot that shows how to create an app in Azure AD for a cross-tenant scenario.":::
+
+3. Take note of the client ID (app ID).
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot that shows how to create a service principal.":::
To create and run a new scan by using the Azure runtime, perform the following s
   - Microsoft Graph openid
   - Microsoft Graph User.Read
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI and Microsoft Graph.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions on Power BI and Microsoft Graph.":::
1. From the Azure AD dashboard, select the newly created application, and then select **Authentication**. Under **Supported account types**, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**.
To create and run a new scan by using the Azure runtime, perform the following s
To create and run a new scan by using the self-hosted integration runtime, perform the following steps:
-1. Create an app registration in your Azure AD tenant where Power BI is located. Provide a web URL in the **Redirect URI**. Take note of the client ID (app ID).
+1. Create an app registration in your Azure AD tenant where Power BI is located. Provide a web URL in the **Redirect URI**.
+
+   :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-cross-tenant-app-registration.png" alt-text="Screenshot that shows how to create an app in Azure AD for a cross-tenant scenario.":::
+
+2. Take note of the client ID (app ID).
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot that shows how to create a service principal.":::
-1. From the Azure AD dashboard, select the newly created application, and then select **App permissions**. Assign the application the following delegated permissions, and grant admin consent for the tenant:
+1. From the Azure AD dashboard, select the newly created application, and then select **App permissions**. Assign the application the following delegated permissions:
- - Power BI Service Tenant.Read.All
   - Microsoft Graph openid
   - Microsoft Graph User.Read
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI and Microsoft Graph.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-spn-api-permissions.png" alt-text="Screenshot of delegated permissions on Microsoft Graph.":::
1. From the Azure AD dashboard, select the newly created application, and then select **Authentication**. Under **Supported account types**, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**.
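One detail that is easy to get wrong in the cross-tenant scenario: because the multitenant app registration lives in the Azure AD tenant where Power BI is located, token requests should target that tenant's authority rather than the tenant that hosts Microsoft Purview. A one-line sketch follows; the tenant ID is a placeholder, and this reflects a reading of the steps above rather than an official requirement.

```python
# Assumption: tokens for the scan are requested against the Power BI tenant's authority.
power_bi_tenant_id = "<tenant-id-of-the-tenant-that-hosts-power-bi>"  # placeholder
authority = f"https://login.microsoftonline.com/{power_bi_tenant_id}"
```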
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Previously updated : 10/19/2022 Last updated : 10/24/2022
Use any of the following deployment checklists during the setup or for troublesh
1. Validate App registration settings to make sure:
   1. App registration exists in your Azure Active Directory tenant.
- 2. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs:
- 1. Power BI Service Tenant.Read.All
- 2. Microsoft Graph openid
- 3. Microsoft Graph User.Read
+
+ 2. If a service principal is used, under **API permissions**, the following **delegated permissions** are assigned with read access for these APIs:
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
+ 3. If delegated authentication is used, under **API permissions**, the following **delegated permissions** are set up with read access, and **admin consent is granted for the tenant**, for these APIs:
+ - Power BI Service Tenant.Read.All
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
   3. Under **Authentication**, **Allow public client flows** is enabled.

2. If delegated authentication is used, validate Power BI admin user settings to make sure:
Use any of the following deployment checklists during the setup or for troublesh
1. Validate App registration settings to make sure:
   1. App registration exists in your Azure Active Directory tenant.
- 2. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs:
- 1. Power BI Service Tenant.Read.All
- 2. Microsoft Graph openid
- 3. Microsoft Graph User.Read
+
+ 2. If a service principal is used, under **API permissions**, the following **delegated permissions** are assigned with read access for these APIs:
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
+ 3. If delegated authentication is used, under **API permissions**, the following **delegated permissions** are set up with read access, and **admin consent is granted for the tenant**, for these APIs:
+ - Power BI Service Tenant.Read.All
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
   3. Under **Authentication**, **Allow public client flows** is enabled.

2. Review network configuration and validate if:
To create and run a new scan, do the following:
1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** and create an App Registration in the tenant. Provide a web URL in the **Redirect URI**. [For information about the Redirect URI, see this documentation from Azure Active Directory](/azure/active-directory/develop/reply-url).
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-app-registration.png" alt-text="Screenshot how to create App in AAD.":::
+   :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-app-registration.png" alt-text="Screenshot that shows how to create an app in Azure AD.":::
2. Take note of the client ID (app ID).

   :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot that shows how to create a service principal.":::
-1. From Azure Active Directory dashboard, select newly created application and then select **App registration**. From **API Permissions**, assign the application the following delegated permissions and grant admin consent for the tenant:
+1. From the Azure Active Directory dashboard, select the newly created application, and then select **App registration**. From **API Permissions**, assign the application the following delegated permissions:
- - Power BI Service Tenant.Read.All
   - Microsoft Graph openid
   - Microsoft Graph User.Read
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI Service and Microsoft Graph.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-spn-api-permissions.png" alt-text="Screenshot of delegated permissions on Microsoft Graph.":::
1. Under **Advanced settings**, enable **Allow Public client flows**.
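The password or client secret that Microsoft Purview uses for this scan is referenced through the key vault connected to your Purview account. The following is a minimal sketch of storing that secret with the Azure SDK for Python (`azure-identity` and `azure-keyvault-secrets`); the vault name, secret name, and secret value are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholders: use the key vault that is connected to your Microsoft Purview account.
client = SecretClient(
    vault_url="https://<your-key-vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Store the Power BI admin password (delegated authentication) or the app's client
# secret (service principal) so that the scan credential can reference it.
client.set_secret("purview-powerbi-scan-secret", "<secret-value>")
```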
To create and run a new scan, do the following:
1. Create an App Registration in your Azure Active Directory tenant. Provide a web URL in the **Redirect URI**.
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-app-registration.png" alt-text="Screenshot how to create App in AAD.":::
+   :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-app-registration.png" alt-text="Screenshot that shows how to create an app in Azure AD.":::
2. Take note of the client ID (app ID).

   :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot that shows how to create a service principal.":::
-1. From Azure Active Directory dashboard, select newly created application and then select **App registration**. From **API Permissions**, assign the application the following delegated permissions and grant admin consent for the tenant:
+1. From the Azure Active Directory dashboard, select the newly created application, and then select **App registration**. Assign the application the following delegated permissions, and grant admin consent for the tenant:
   - Power BI Service Tenant.Read.All
   - Microsoft Graph openid
   - Microsoft Graph User.Read
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI Service and Microsoft Graph.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions on Power BI Service and Microsoft Graph.":::
1. Under **Advanced settings**, enable **Allow Public client flows**.
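To confirm that the admin-consented, delegated **Power BI Service Tenant.Read.All** permission works before running the scan, you can call the Power BI admin API directly with a token acquired for this app registration. Below is a rough sketch using the `requests` library; the token variable is a placeholder for a delegated token (for example, one acquired with MSAL), and the `$top` value is arbitrary.

```python
import requests

access_token = "<delegated-access-token-acquired-with-msal>"  # placeholder

# List workspaces through the Power BI admin API; this call requires the
# delegated Tenant.Read.All permission with admin consent.
response = requests.get(
    "https://api.powerbi.com/v1.0/myorg/admin/groups?$top=100",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
response.raise_for_status()

workspaces = response.json().get("value", [])
print(f"The scan identity can enumerate {len(workspaces)} workspaces.")
```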
sentinel Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-transformation.md
Ingestion-time transformations can also be used to mask or remove personal infor
## Data ingestion flow in Microsoft Sentinel
-The following image shows where ingestion-time data transformation enters the data ingestion flow into Microsoft Sentinel.
+The following image shows where ingestion-time data transformation enters the data ingestion flow in Microsoft Sentinel.
-Microsoft Sentinel collects data into the Log Analytics workspace from multiple sources. Data from built-in data connectors is processed in Log Analytics using some combination of hardcoded workflows and ingestion-time transformations, and data ingested directly into the logs ingestion API endpoint is , and then stored in either standard or custom tables.
+Microsoft Sentinel collects data into the Log Analytics workspace from multiple sources.
+- Data from built-in data connectors is processed in Log Analytics using some combination of hardcoded workflows and ingestion-time transformations in the workspace DCR. This data can be stored in standard tables or in a specific set of custom tables.
+- Data ingested directly into the Logs ingestion API endpoint is processed by a DCR that may include an ingestion-time transformation. This data can then be stored in either standard or custom tables of any kind.
:::image type="content" source="media/data-transformation/data-transformation-architecture.png" alt-text="Diagram of the Microsoft Sentinel data transformation architecture.":::
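For the second path, the Logs ingestion API is typically called through a data collection endpoint and a DCR that you have created beforehand. Here's a minimal sketch using the `azure-monitor-ingestion` Python client; the endpoint, DCR immutable ID, and stream name are placeholders for your own resources, and the DCR is where an ingestion-time transformation would be applied.

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

# Placeholders: use your own data collection endpoint and DCR.
client = LogsIngestionClient(
    endpoint="https://<your-data-collection-endpoint>.ingest.monitor.azure.com",
    credential=DefaultAzureCredential(),
)

# The DCR referenced by rule_id can contain an ingestion-time transformation that
# filters, enriches, or masks these records before they land in the destination table.
client.upload(
    rule_id="dcr-00000000000000000000000000000000",   # DCR immutable ID (placeholder)
    stream_name="Custom-MyAppLogs_CL",                 # stream declared in the DCR (placeholder)
    logs=[{"TimeGenerated": "2022-10-24T00:00:00Z", "RawData": "example event"}],
)
```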
sentinel Detect Threats Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-built-in.md
Last updated 11/09/2021 - # Detect threats out-of-the-box
This article helps you understand how to detect threats with Microsoft Sentinel:
## View built-in detections
-To view all analytics rules and detections in Microsoft Sentinel, go to **Analytics** > **Rule templates**. This tab contains all the Microsoft Sentinel built-in rules.
+To view all analytics rules and detections in Microsoft Sentinel, go to **Analytics** > **Rule templates**. This tab contains all the Microsoft Sentinel built-in rules, as well as the **Threat Intelligence** rule type.
Built-in detections include:
Built-in detections include:
| **Microsoft security** | Microsoft security templates automatically create Microsoft Sentinel incidents from the alerts generated in other Microsoft security solutions, in real time. You can use Microsoft security rules as a template to create new rules with similar logic. <br><br>For more information about security rules, see [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md). |
| <a name="fusion"></a>**Fusion**<br>(some detections in Preview) | Microsoft Sentinel uses the Fusion correlation engine, with its scalable machine learning algorithms, to detect advanced multistage attacks by correlating many low-fidelity alerts and events across multiple products into high-fidelity and actionable incidents. Fusion is enabled by default. Because the logic is hidden and therefore not customizable, you can only create one rule with this template. <br><br>The Fusion engine can also correlate alerts produced by [scheduled analytics rules](#scheduled) with those from other systems, producing high-fidelity incidents as a result. |
| **Machine learning (ML) behavioral analytics** | ML behavioral analytics templates are based on proprietary Microsoft machine learning algorithms, so you cannot see the internal logic of how they work and when they run. <br><br>Because the logic is hidden and therefore not customizable, you can only create one rule with each template of this type. |
+| **Threat Intelligence** | Take advantage of threat intelligence produced by Microsoft to generate high-fidelity alerts and incidents with the **Microsoft Threat Intelligence Analytics** rule. This unique rule isn't customizable, but when enabled, it automatically matches Common Event Format (CEF) logs, Syslog data, or Windows DNS events with domain, IP, and URL threat indicators from Microsoft Threat Intelligence. Certain indicators contain additional context information through **Microsoft Defender Threat Intelligence** (MDTI).<br><br>For more information on how to enable this rule, see [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md).<br>For more details on MDTI, see [What is Microsoft Defender Threat Intelligence](/../defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti). |
| <a name="anomaly"></a>**Anomaly**<br>(Preview) | Anomaly rule templates use machine learning to detect specific types of anomalous behavior. Each rule has its own unique parameters and thresholds, appropriate to the behavior being analyzed. <br><br>While the configurations of out-of-the-box rules can't be changed or fine-tuned, you can duplicate a rule and then change and fine-tune the duplicate. In such cases, run the duplicate in **Flighting** mode and the original concurrently in **Production** mode. Then compare results, and switch the duplicate to **Production** if and when its fine-tuning is to your liking. <br><br>For more information, see [Use customizable anomalies to detect threats in Microsoft Sentinel](soc-ml-anomalies.md) and [Work with anomaly detection analytics rules in Microsoft Sentinel](work-with-anomaly-rules.md). |
| <a name="scheduled"></a>**Scheduled** | Scheduled analytics rules are based on built-in queries written by Microsoft security experts. You can see the query logic and make changes to it. You can use the scheduled rules template and customize the query logic and scheduling settings to create new rules. <br><br>Several new scheduled analytics rule templates produce alerts that are correlated by the Fusion engine with alerts from other systems to produce high-fidelity incidents. For more information, see [Advanced multistage attack detection](configure-fusion-rules.md#configure-scheduled-analytics-rules-for-fusion-detections).<br><br>**Tip**: Rule scheduling options include configuring the rule to run every specified number of minutes, hours, or days, with the clock starting when you enable the rule. <br><br>We recommend being mindful of when you enable a new or edited analytics rule to ensure that the rules will get the new stack of incidents in time. For example, you might want to run a rule in sync with when your SOC analysts begin their workday, and enable the rules then. |
| <a name="nrt"></a>**Near-real-time (NRT)**<br>(Preview) | NRT rules are a limited set of scheduled rules, designed to run once every minute, in order to supply you with information as up-to-the-minute as possible. <br><br>They function mostly like scheduled rules and are configured similarly, with some limitations. For more information, see [Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md). |
sentinel Investigate Large Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-large-datasets.md
Use a search job when you start an investigation to find specific events in logs
Search in Microsoft Sentinel is built on top of search jobs. Search jobs are asynchronous queries that fetch records. The results are returned to a search table that's created in your Log Analytics workspace after you start the search job. The search job uses parallel processing to run the search across long time spans, in extremely large datasets. So search jobs don't impact the workspace's performance or availability.
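Search jobs can also be started programmatically. The sketch below uses the Log Analytics Tables management API as I understand it: a search job is created by issuing a PUT for a table whose name ends in `_SRCH`, with the query and time range in the body. The subscription, resource group, workspace, table name, and query are placeholders, and the `api-version` and body shape should be verified against the current REST reference.

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire an Azure Resource Manager token for the management API call.
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

# Placeholders for your own workspace and search job.
workspace_resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)
url = (
    f"https://management.azure.com{workspace_resource_id}"
    "/tables/FailedSignIns_SRCH?api-version=2021-12-01-preview"
)

# The search job query and time range; results land in the FailedSignIns_SRCH table.
body = {
    "properties": {
        "searchResults": {
            "query": "SigninLogs | where ResultType != 0",
            "startSearchTime": "2022-01-01T00:00:00Z",
            "endSearchTime": "2022-06-30T00:00:00Z",
        }
    }
}

response = requests.put(
    url,
    json=body,
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
response.raise_for_status()
```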
-Search results remain in a search results table that has a *_SRCH suffix.
+Search results are stored in a table that has a *_SRCH suffix.