Updates from: 06/04/2022 01:09:32
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c String Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/string-transformations.md
Determines whether a claim value is equal to the input parameter value. Check ou
| InputClaim | inputClaim1 | string | The claim's type, which is to be compared. | | InputParameter | operator | string | Possible values: `EQUAL` or `NOT EQUAL`. | | InputParameter | compareTo | string | String comparison, one of the values: Ordinal, OrdinalIgnoreCase. |
-| InputParameter | ignoreCase | boolean | Specifies whether this comparison should ignore the case of the strings being compared. |
+| InputParameter | ignoreCase | string | Specifies whether this comparison should ignore the case of the strings being compared. |
| OutputClaim | outputClaim | boolean | The claim that is produced after this claims transformation has been invoked. | ### Example of CompareClaimToValue
Use this claims transformation to check if a claim is equal to a value you speci
<InputParameters> <InputParameter Id="compareTo" DataType="string" Value="V1" /> <InputParameter Id="operator" DataType="string" Value="not equal" />
- <InputParameter Id="ignoreCase" DataType="boolean" Value="true" />
+ <InputParameter Id="ignoreCase" DataType="string" Value="true" />
</InputParameters> <OutputClaims> <OutputClaim ClaimTypeReferenceId="termsOfUseConsentRequired" TransformationClaimType="outputClaim" />
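As a hedged illustration, a complete transformation element consistent with the snippet above might look like the following sketch; the `consentVersion` input claim and the `CheckWhetherConsentIsRequired` ID are assumptions, not taken from this entry.

```xml
<ClaimsTransformation Id="CheckWhetherConsentIsRequired" TransformationMethod="CompareClaimToValue">
  <InputClaims>
    <!-- The claim to compare; "consentVersion" is an illustrative name. -->
    <InputClaim ClaimTypeReferenceId="consentVersion" TransformationClaimType="inputClaim1" />
  </InputClaims>
  <InputParameters>
    <InputParameter Id="compareTo" DataType="string" Value="V1" />
    <InputParameter Id="operator" DataType="string" Value="not equal" />
    <!-- Per the change above, ignoreCase is declared as a string rather than a boolean. -->
    <InputParameter Id="ignoreCase" DataType="string" Value="true" />
  </InputParameters>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="termsOfUseConsentRequired" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```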
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md
description: Learn about deployment considerations and strategy for successful i
Previously updated : 02/02/2022 Last updated : 06/01/2022
You can monitor authentication method registration and usage across your organiz
The Azure AD sign-in reports include authentication details for events when a user is prompted for MFA, and whether any Conditional Access policies were in use. You can also use PowerShell for reporting on users registered for Azure AD Multi-Factor Authentication.
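As a hedged sketch of such a PowerShell report (the legacy MSOnline cmdlets below are an assumption, not taken from this entry):

```powershell
# List users that have registered at least one Azure AD Multi-Factor Authentication method.
Connect-MsolService
Get-MsolUser -All |
    Where-Object { $_.StrongAuthenticationMethods.Count -gt 0 } |
    Select-Object DisplayName, UserPrincipalName
```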
-NPS extension and AD FS logs can be viewed from **Security** > **MFA** > **Activity report**.
+NPS extension and AD FS logs can be viewed from **Security** > **MFA** > **Activity report**. Inclusion of this activity in the [Sign-in logs](../reports-monitoring/concept-sign-ins.md) is currently in Preview.
For more information, and additional Azure AD Multi-Factor Authentication reports, see [Review Azure AD Multi-Factor Authentication events](howto-mfa-reporting.md#view-the-azure-ad-sign-ins-report).
active-directory Microsoft Graph Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/microsoft-graph-intro.md
Title: Microsoft Graph API description: The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources.-+
Last updated 10/08/2021-+
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
Previously updated : 06/02/2022 Last updated : 06/03/2022
The following are the user properties that you can use to create a single expres
### Properties of type boolean
-| Properties | Allowed values | Usage |
-| --- | --- | --- |
-| accountEnabled |true false |user.accountEnabled -eq true |
-| dirSyncEnabled |true false |user.dirSyncEnabled -eq true |
+Properties | Allowed values | Usage
+ --- | --- | ---
+accountEnabled |true false |user.accountEnabled -eq true
+dirSyncEnabled |true false |user.dirSyncEnabled -eq true
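For example, a hedged sketch of a membership rule that combines the two boolean properties above:

```
(user.accountEnabled -eq true) and (user.dirSyncEnabled -eq true)
```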
### Properties of type string
The following are the user properties that you can use to create a single expres
| jobTitle |Any string value or *null* |(user.jobTitle -eq "value") | | mail |Any string value or *null* (SMTP address of the user) |(user.mail -eq "value") | | mailNickName |Any string value (mail alias of the user) |(user.mailNickName -eq "value") |
+| memberOf | Any string value (valid group object ID) | (user.memberof -any (group.objectId -in ['value'])) |
| mobile |Any string value or *null* |(user.mobile -eq "value") | | objectId |GUID of the user object |(user.objectId -eq "11111111-1111-1111-1111-111111111111") | | onPremisesDistinguishedName (preview)| Any string value or *null* |(user.onPremisesDistinguishedName -eq "value") |
The following device attributes can be used.
enrollmentProfileName | Apple Device Enrollment Profile name, Android Enterprise Corporate-owned dedicated device Enrollment Profile name, or Windows Autopilot profile name | (device.enrollmentProfileName -eq "DEP iPhones") isRooted | true false | (device.isRooted -eq true) managementType | MDM (for mobile devices) | (device.managementType -eq "MDM")
+ memberOf | Any string value (valid group object ID) | (device.memberof -any (group.objectId -in ['value']))
deviceId | a valid Azure AD device ID | (device.deviceId -eq "d4fe7726-5966-431c-b3b8-cddc8fdb717d") objectId | a valid Azure AD object ID | (device.objectId -eq "76ad43c9-32c5-45e8-a272-7b58b58f596d") devicePhysicalIds | any string value used by Autopilot, such as all Autopilot devices, OrderID, or PurchaseOrderID | (device.devicePhysicalIDs -any _ -contains "[ZTDId]") (device.devicePhysicalIds -any _ -eq "[OrderID]:179887111881") (device.devicePhysicalIds -any _ -eq "[PurchaseOrderId]:76222342342") systemLabels | any string matching the Intune device property for tagging Modern Workplace devices | (device.systemLabels -contains "M365Managed")
-> [!Note]
+> [!NOTE]
> When you create dynamic groups for devices, set the deviceOwnership value equal to "Company". In Intune, device ownership is represented as Corporate instead. For more information, see [OwnerTypes](/intune/reports-ref-devices#ownertypes). ## Next steps
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md
Azure Active Directory (Azure AD) Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. It provides you with capabilities to ensure that the right people have the right access to the right resources. These and related Azure AD and Enterprise Mobility + Security features allow you to mitigate access risk by protecting, monitoring, and auditing access to critical assets -- while ensuring employee and business partner productivity.
-Identity Governance give organizations the ability to do the following tasks across employees, business partners and vendors, and across services and applications both on-premises and in clouds:
+Identity Governance gives organizations the ability to do the following tasks across employees, business partners and vendors, and across services and applications both on-premises and in clouds:
- Govern the identity lifecycle - Govern access lifecycle
It's a best practice to use the least privileged role to perform administrative
- [What is Azure AD entitlement management?](entitlement-management-overview.md) - [What are Azure AD access reviews?](access-reviews-overview.md) - [What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md)-- [What can I do with Terms of use?](../conditional-access/terms-of-use.md)
+- [What can I do with Terms of use?](../conditional-access/terms-of-use.md)
active-directory Add Application Portal Assign Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-assign-users.md
If you are planning to complete the next quickstart, keep the application that y
Learn how to set up single sign-on for an enterprise application. > [!div class="nextstepaction"]
-> [Enable single sign-on](add-application-portal-setup-sso.md)
+> [Enable single sign-on](what-is-single-sign-on.md)
active-directory Admin Consent Workflow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
Last updated 03/30/2022 - #customer intent: As an admin, I want to learn about the admin consent workflow and how it affects end-user and admin consent experience
active-directory App Management Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-powershell-samples.md
Title: PowerShell samples in Application Management description: These PowerShell samples are used for apps you manage in your Azure Active Directory tenant. You can use these sample scripts to find expiration information about secrets and certificates. -+ Last updated 02/18/2021-+
active-directory Application Management Certs Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-management-certs-faq.md
Title: Application Management certificates frequently asked questions description: Learn answers to frequently asked questions (FAQ) about managing certificates for apps using Azure Active Directory as an Identity Provider (IdP). -+ Last updated 03/19/2021-+
active-directory Application Sign In Problem Application Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
Last updated 07/11/2017 -
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
Last updated 10/23/2021 - #customer intent: As an admin, I want to manage user assignment for an app in Azure Active Directory using PowerShell
active-directory Certificate Signing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/certificate-signing-options.md
Title: Advanced certificate signing options in a SAML token description: Learn how to use advanced certificate signing options in the SAML token for pre-integrated apps in Azure Active Directory -+ Last updated 07/30/2021-+
active-directory Cloud App Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloud-app-security.md
Title: App visibility and control with Microsoft Defender for Cloud Apps description: Learn ways to identify app risk levels, stop breaches and leaks in real time, and use app connectors to take advantage of provider APIs for visibility and governance. -+ Last updated 07/29/2021-+
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
Last updated 05/27/2022 - #customer intent: As an admin, I want to configure the admin consent workflow.
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
Title: 'Quickstart: Delete an enterprise application' description: Delete an enterprise application in Azure Active Directory. -+ Last updated 03/24/2022-+ #Customer intent: As an administrator of an Azure AD tenant, I want to delete an enterprise application.
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
Last updated 09/23/2021 - #customer intent: As an admin, I want to disable the way a user signs in for an application so that no user can sign in to it in Azure Active Directory.
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
Last updated 10/23/2021 -
active-directory Plan An Application Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/plan-an-application-integration.md
Last updated 04/05/2021 - # Integrating Azure Active Directory with applications getting started guide
active-directory Troubleshoot App Publishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-app-publishing.md
Title: Your sign-in was blocked description: Troubleshoot a blocked sign-in to the Microsoft Application Network portal. -+ Last updated 1/18/2022-+ #Customer intent: As a publisher of an application, I want troubleshoot a blocked sign-in to the Microsoft Application Network portal.
active-directory Ways Users Get Assigned To Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md
Last updated 01/07/2021 - # Understand how users are assigned to apps
active-directory What Is Access Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-access-management.md
Last updated 09/23/2021 - # Manage access to an application
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md
Previously updated : 05/09/2022 Last updated : 06/01/2022
$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $directoryScope
### Microsoft Graph API
+Use the [Add a scopedRoleMember](/graph/api/administrativeunit-post-scopedrolemembers) API to assign a role with administrative unit scope.
+ Request ```http
Get-AzureADMSScopedRoleMembership -Id $adminUnit.Id | fl *
### Microsoft Graph API
+Use the [List scopedRoleMembers](/graph/api/administrativeunit-list-scopedrolemembers) API to list role assignments with administrative unit scope.
+ Request ```http
active-directory Admin Units Members List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-list.md
Previously updated : 03/22/2022 Last updated : 06/01/2022
In Azure Active Directory (Azure AD), you can list the users, groups, or devices
- Azure AD Premium P1 or P2 license for each administrative unit administrator - Azure AD Free licenses for administrative unit members-- Privileged Role Administrator or Global Administrator - AzureAD module when using PowerShell - AzureADPreview module when using PowerShell for devices - Admin consent when using Graph explorer for Microsoft Graph API
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Users in this role can read settings and administrative information across Micro
>- [Privileged Access Management (PAM)](/office365/securitycompliance/privileged-access-management-overview) doesn't support the Global Reader role. >- [Azure Information Protection](/azure/information-protection/what-is-information-protection) - Global Reader is supported [for central reporting](/azure/information-protection/reports-aip) only, and when your Azure AD organization isn't on the [unified labeling platform](/azure/information-protection/faqs#how-can-i-determine-if-my-tenant-is-on-the-unified-labeling-platform). > - [SharePoint](https://admin.microsoft.com/sharepoint) - Global Reader currently can't access SharePoint using PowerShell.
+> - [Power Platform admin center](https://admin.powerplatform.microsoft.com) - Global Reader is not yet supported in the Power Platform admin center.
> > These features are currently in development. >
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
The maximum number of pods per node in an AKS cluster is 250. The *default* maxi
| -- | :--: | :--: | -- | | Azure CLI | 110 | 30 | Yes (up to 250) | | Resource Manager template | 110 | 30 | Yes (up to 250) |
-| Portal | 110 | 110 (configured in the Node Pools tab) | No |
+| Portal | 110 | 110 (configurable in the Node Pools tab) | Yes (up to 250) |
### Configure maximum - new clusters
A minimum value for maximum pods per node is enforced to guarantee space for sys
| Networking | Minimum | Maximum | | -- | :--: | :--: | | Azure CNI | 10 | 250 |
-| Kubenet | 10 | 110 |
+| Kubenet | 10 | 250 |
> [!NOTE] > The minimum value in the table above is strictly enforced by the AKS service. You can not set a maxPods value lower than the minimum shown as doing so can prevent the cluster from starting.
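As a hedged sketch, the maximum can be set at cluster creation time with the Azure CLI; the resource names and value below are illustrative.

```azurecli
# Create an Azure CNI cluster with a custom maximum of pods per node.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --max-pods 100 \
    --generate-ssh-keys
```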
The planning of IPs for Kubernetes services and Docker bridge remains unchanged.
The pods per node values when using Azure CNI with dynamic allocation of IPs have changed slightly from the traditional CNI behavior:
-|CNI|Deployment Method|Default|Configurable at deployment|
-|--|--| :--: |--|
-|Traditional Azure CNI|Azure CLI|30|Yes (up to 250)|
-|Azure CNI with dynamic allocation of IPs|Azure CLI|250|Yes (up to 250)|
+|CNI|Default|Configurable at deployment|
+|--| :--: |--|
+|Traditional Azure CNI|30|Yes (up to 250)|
+|Azure CNI with dynamic allocation of IPs|250|Yes (up to 250)|
All other guidance related to configuring the maximum pods per node remains the same.
application-gateway Tutorial Ssl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-cli.md
az network application-gateway create \
--frontend-port 443 \ --http-settings-port 80 \ --http-settings-protocol Http \
+ --priority "1" \
--public-ip-address myAGPublicIPAddress \ --cert-file appgwcert.pfx \ --cert-password "Azure123456!"
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Title: "Use Cluster Connect to connect to Azure Arc-enabled Kubernetes clusters" Previously updated : 10/31/2021 Last updated : 06/03/2022
description: "Use Cluster Connect to securely connect to Azure Arc-enabled Kuber
# Use Cluster Connect to connect to Azure Arc-enabled Kubernetes clusters
-With Cluster Connect, you can securely connect to Azure Arc-enabled Kubernetes clusters without requiring any inbound port to be enabled on the firewall. Access to the `apiserver` of the Azure Arc-enabled Kubernetes cluster enables the following scenarios:
-* Enable interactive debugging and troubleshooting.
-* Provide cluster access to Azure services for [custom locations](custom-locations.md) and other resources created on top of it.
+With Cluster Connect, you can securely connect to Azure Arc-enabled Kubernetes clusters without requiring any inbound port to be enabled on the firewall.
-A conceptual overview of this feature is available in [Cluster connect - Azure Arc-enabled Kubernetes](conceptual-cluster-connect.md) article.
+Access to the `apiserver` of the Azure Arc-enabled Kubernetes cluster enables the following scenarios:
-## Prerequisites
+- Interactive debugging and troubleshooting.
+- Cluster access to Azure services for [custom locations](custom-locations.md) and other resources created on top of it.
+
+A conceptual overview of this feature is available in [Cluster connect - Azure Arc-enabled Kubernetes](conceptual-cluster-connect.md).
+
+## Prerequisites
+
+### [Azure CLI](#tab/azure-cli)
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- [Install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) Azure CLI to version >= 2.16.0. - Install the `connectedk8s` Azure CLI extension of version >= 1.2.5:
- ```azurecli
- az extension add --name connectedk8s
- ```
-
- If you've already installed the `connectedk8s` extension, update the extension to the latest version:
-
- ```azurecli
- az extension update --name connectedk8s
- ```
+ ```azurecli
+ az extension add --name connectedk8s
+ ```
+
+ If you've already installed the `connectedk8s` extension, update the extension to the latest version:
+
+ ```azurecli
+ az extension update --name connectedk8s
+ ```
- An existing Azure Arc-enabled Kubernetes connected cluster.
- - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.5.3.
+ - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.5.3.
- Enable the following endpoints for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md#meet-network-requirements):
- | Endpoint | Port |
- |-|-|
- |`*.servicebus.windows.net` | 443 |
- |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 |
+ | Endpoint | Port |
+ |-|-|
+ |`*.servicebus.windows.net` | 443 |
+ |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 |
- Replace the placeholders and run the following command to set the environment variables used in this document:
- ```azurecli
- CLUSTER_NAME=<cluster-name>
- RESOURCE_GROUP=<resource-group-name>
- ARM_ID_CLUSTER=$(az connectedk8s show -n $CLUSTER_NAME -g $RESOURCE_GROUP --query id -o tsv)
- ```
+ ```azurecli
+ CLUSTER_NAME=<cluster-name>
+ RESOURCE_GROUP=<resource-group-name>
+ ARM_ID_CLUSTER=$(az connectedk8s show -n $CLUSTER_NAME -g $RESOURCE_GROUP --query id -o tsv)
+ ```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- Install [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-az-ps).
+
+- An existing Azure Arc-enabled Kubernetes connected cluster.
+ - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.5.3.
+
+- Enable the following endpoints for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md#meet-network-requirements):
+ | Endpoint | Port |
+ |-|-|
+ |`*.servicebus.windows.net` | 443 |
+ |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 |
+
+- Replace the placeholders and run the following command to set the environment variables used in this document:
+
+ ```azurepowershell
+ $CLUSTER_NAME = <cluster-name>
+ $RESOURCE_GROUP = <resource-group-name>
+ $ARM_ID_CLUSTER = (az connectedk8s show -n $CLUSTER_NAME -g $RESOURCE_GROUP --query id -o tsv)
+ ```
++ ## Enable Cluster Connect feature
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
## Azure Active Directory authentication option
-1. Get the `objectId` associated with your Azure AD entity:
+### [Azure CLI](#tab/azure-cli)
+
+1. Get the `objectId` associated with your Azure AD entity.
+
+ - For an Azure AD user account:
+
+ ```azurecli
+ AAD_ENTITY_OBJECT_ID=$(az ad signed-in-user show --query objectId -o tsv)
+ ```
+
+ - For an Azure AD application:
+
+ ```azurecli
+ AAD_ENTITY_OBJECT_ID=$(az ad sp show --id <id> --query objectId -o tsv)
+ ```
+
+1. Authorize the entity with appropriate permissions.
+
+ - If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Azure AD entity (service principal or user) that needs to access this cluster. Example:
+
+ ```console
+ kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
+ ```
+
+ - If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Azure AD entity. Example:
+
+ ```azurecli
+ az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER
+ ```
- - For Azure AD user account:
+### [Azure PowerShell](#tab/azure-powershell)
- ```azurecli
- AAD_ENTITY_OBJECT_ID=$(az ad signed-in-user show --query objectId -o tsv)
- ```
+1. Get the `objectId` associated with your Azure AD entity.
- - For Azure AD application:
+ - For an Azure AD user account:
- ```azurecli
- AAD_ENTITY_OBJECT_ID=$(az ad sp show --id <id> --query objectId -o tsv)
- ```
+ ```azurepowershell
+ $AAD_ENTITY_OBJECT_ID = (az ad signed-in-user show --query objectId -o tsv)
+ ```
-1. Authorize the entity with appropriate permissions:
+ - For an Azure AD application:
- - If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Azure AD entity (service principal or user) that needs to access this cluster. Example:
-
- ```console
- kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
- ```
+ ```azurepowershell
+ $AAD_ENTITY_OBJECT_ID = (az ad sp show --id <id> --query objectId -o tsv)
+ ```
- - If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Azure AD entity. Example:
+1. Authorize the entity with appropriate permissions.
- ```azurecli
- az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER
- ```
+ - If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Azure AD entity (service principal or user) that needs to access this cluster. Example:
+
+ ```console
+ kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
+ ```
+
+ - If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Azure AD entity. Example:
+
+ ```azurecli
+ az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER
+ ```
++ ## Service account token authentication option
-1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, create a service account in any namespace (following command creates it in the default namespace):
+### [Azure CLI](#tab/azure-cli)
- ```console
- kubectl create serviceaccount admin-user
- ```
+1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, create a service account in any namespace (the following command creates it in the default namespace):
+
+ ```console
+ kubectl create serviceaccount admin-user
+ ```
1. Create ClusterRoleBinding or RoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). Example:
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --serviceaccount default:admin-user ```
-1. Get the service account's token using the following commands
+1. Get the service account's token using the following commands:
```console SECRET_NAME=$(kubectl get serviceaccount admin-user -o jsonpath='{$.secrets[0].name}')
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
TOKEN=$(kubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g') ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, create a service account in any namespace (the following command creates it in the default namespace):
+
+ ```console
+ kubectl create serviceaccount admin-user
+ ```
+
+1. Create ClusterRoleBinding or RoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). Example:
+
+ ```console
+ kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --serviceaccount default:admin-user
+ ```
+
+1. Get the service account's token using the following commands:
+
+ ```console
+ $SECRET_NAME = (kubectl get serviceaccount admin-user -o jsonpath='{$.secrets[0].name}')
+ ```
+
+ ```console
+ $TOKEN = ([System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String((kubectl get secret $SECRET_NAME -o jsonpath='{$.data.token}'))))
+ ```
+++ ## Access your cluster 1. Set up the Cluster Connect based kubeconfig needed to access your cluster based on the authentication option used:
- - If using Azure Active Directory authentication option, after logging into Azure CLI using the Azure AD entity of interest, get the Cluster Connect `kubeconfig` needed to communicate with the cluster from anywhere (from even outside the firewall surrounding the cluster):
+ - If using Azure Active Directory authentication option, after logging into Azure CLI using the Azure AD entity of interest, get the Cluster Connect `kubeconfig` needed to communicate with the cluster from anywhere (from even outside the firewall surrounding the cluster):
- ```azurecli
- az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP
- ```
+ ```azurecli
+ az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP
+ ```
- - If using the service account authentication option, get the Cluster Connect `kubeconfig` needed to communicate with the cluster from anywhere:
+ - If using the service account authentication option, get the Cluster Connect `kubeconfig` needed to communicate with the cluster from anywhere:
- ```azurecli
- az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP --token $TOKEN
- ```
+ ```azurecli
+ az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP --token $TOKEN
+ ```
1. Use `kubectl` to send requests to the cluster:
- ```console
- kubectl get pods
- ```
-
- You should now see a response from the cluster containing the list of all pods under the `default` namespace.
+ ```console
+ kubectl get pods
+ ```
+
+You should now see a response from the cluster containing the list of all pods under the `default` namespace.
## Known limitations When making requests to the Kubernetes cluster, if the Azure AD entity used is part of more than 200 groups, the following error occurs because of a known limitation:
-```console
-You must be logged in to the server (Error:Error while retrieving group info. Error:Overage claim (users with more than 200 group membership) is currently not supported.
-```
+`You must be logged in to the server (Error:Error while retrieving group info. Error:Overage claim (users with more than 200 group membership) is currently not supported.`
To get past this error:+ 1. Create a [service principal](/cli/azure/create-an-azure-service-principal-azure-cli), which is less likely to be a member of more than 200 groups.
-1. [Sign in](/cli/azure/create-an-azure-service-principal-azure-cli#sign-in-using-a-service-principal) to Azure CLI with the service principal before running `az connectedk8s proxy` command.
+1. [Sign in](/cli/azure/create-an-azure-service-principal-azure-cli#sign-in-using-a-service-principal) to Azure CLI with the service principal before running the `az connectedk8s proxy` command.
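For example, a hedged sketch of those two steps (variable names are illustrative placeholders):

```azurecli
# Sign in with the service principal, then start the Cluster Connect proxy.
az login --service-principal --username $SP_APP_ID --password $SP_SECRET --tenant $TENANT_ID
az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP
```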
## Next steps
-> [!div class="nextstepaction"]
-> Set up [Azure AD RBAC](azure-rbac.md) on your clusters
+- Set up [Azure AD RBAC](azure-rbac.md) on your clusters.
+- Deploy and manage [cluster extensions](extensions.md).
azure-arc Onboard Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-portal.md
The script to automate the download and installation, and to establish the conne
1. On the **Servers - Azure Arc** page, select **Add** at the upper left.
-1. On the **Select a method** page, select the **Add servers using interactive script** tile, and then select **Generate script**.
+1. On the **Select a method** page, select the **Add a single server** tile, and then select **Generate script**.
1. On the **Generate script** page, select the subscription and resource group where you want the machine to be managed within Azure. Select an Azure location where the machine metadata will be stored. This location can be the same as, or different from, the resource group's location.
azure-fluid-relay Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/resources/faq.md
+
+ Title: Azure Fluid Relay FAQ
+description: Frequently asked questions about Fluid Relay
++ Last updated : 6/1/2022++++
+# Azure Fluid Relay FAQ
+
+The following are frequently asked questions about Azure Fluid Relay.
+
+## Which Azure regions currently provide Fluid Relay?
+
+For a complete list of available regions, see [Azure Fluid Relay regions and availability](https://azure.microsoft.com/global-infrastructure/services/?products=fluid-relay).
+
+## Can I move my Fluid Relay resource from one Azure resource group to another?
+
+Yes. You can move the Fluid Relay resource from one resource group to another.
+
+## Can I move my Fluid Relay resource from one Azure subscription to another?
+
+Yes. You can move the Fluid Relay resource from one subscription to another.
+
+## Can I move Fluid Relay resource between Azure regions?
+
+No. Moving the Fluid Relay resource from one region to another isnΓÇÖt supported.
+
+## Does Azure Fluid Relay have industry certifications?
+
+We adhere to the security and privacy policies and practices that other Azure services follow to help achieve those industry and regional certifications. Once Azure Fluid Relay is in General Availability, we'll be pursuing those certifications. We'll be updating our certification posture as we achieve the different certifications. For more information, see the [Microsoft Trust Center](https://www.microsoft.com/trust-center).
+
+## What network protocols does Fluid Relay use?
+
+Fluid Relay, like the Fluid Framework technology, uses both HTTP and WebSockets for communication between clients and the service.
+
+## Will Azure Fluid Relay work in environments where web sockets are blocked?
+
+Yes. The Fluid Framework uses the socket.io library for communication with the service. In environments where WebSockets are blocked, the client falls back to long-polling over HTTP.
+
+## Where does Azure Fluid Relay store customer data?
+
+Azure Fluid Relay stores customer data. By default, customer data is replicated to the paired region. However, you can choose to keep it within the same region by selecting the Basic SKU during provisioning. This option is available in select regions where the paired region is outside the country boundary of the primary region where data is stored. For more information, go to [Data storage in Azure Fluid Relay](../concepts/data-storage.md).
+
+## Does Azure Fluid Relay support offline mode?
+
+Offline mode is when end users of your application are disconnected from the network. The Fluid Framework client accumulates operations locally and sends them to the service when reconnected. Currently, Azure Fluid Relay doesn't support extended periods of offline mode beyond 1 minute. We highly recommend that you listen for disconnect signals and update your user experience to avoid accumulating many local operations that can be lost.
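As a hedged sketch of listening for those signals (assumes an `IFluidContainer` from the `fluid-framework` client package; the event names are an assumption, not taken from this FAQ):

```typescript
import type { IFluidContainer } from "fluid-framework";

// Surface connection state changes so users can pause edits while offline.
function watchConnection(container: IFluidContainer): void {
    container.on("disconnected", () => {
        // Show an "offline" indicator and discourage further edits so that
        // local operations don't pile up beyond the ~1 minute window.
        console.warn("Disconnected from Azure Fluid Relay");
    });
    container.on("connected", () => {
        console.info("Reconnected to Azure Fluid Relay");
    });
}
```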
+
azure-functions Create First Function Vs Code Other https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-other.md
The *function.json* file in the *HttpExample* folder declares an HTTP trigger fu
Err(_) => 3000, };
- warp::serve(example1).run((Ipv4Addr::UNSPECIFIED, port)).await
+ warp::serve(example1).run((Ipv4Addr::LOCALHOST, port)).await
} ```
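For context, a hedged sketch of a complete entry point consistent with the snippet above; the `example1` route and the crate setup are assumptions, not taken from this entry.

```rust
use std::env;
use std::net::Ipv4Addr;
use warp::Filter;

#[tokio::main]
async fn main() {
    // Illustrative route for the HttpExample function.
    let example1 = warp::get()
        .and(warp::path!("api" / "HttpExample"))
        .map(|| "Hello from the custom handler!");

    // The Functions host passes the port in FUNCTIONS_CUSTOMHANDLER_PORT;
    // fall back to 3000 when running the handler on its own.
    let port: u16 = match env::var("FUNCTIONS_CUSTOMHANDLER_PORT") {
        Ok(val) => val.parse().expect("Custom Handler port is not a number"),
        Err(_) => 3000,
    };

    // Bind to localhost, matching the updated line above.
    warp::serve(example1).run((Ipv4Addr::LOCALHOST, port)).await
}
```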
azure-functions Durable Functions Dotnet Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-dotnet-entities.md
We also enforce some additional rules:
* Entity interface methods must not have more than one parameter. * Entity interface methods must return `void`, `Task`, or `Task<T>`.
-If any of these rules are violated, an `InvalidOperationException` is thrown at runtime when the interface is used as a type argument to `SignalEntity` or `CreateProxy`. The exception message explains which rule was broken.
+If any of these rules are violated, an `InvalidOperationException` is thrown at runtime when the interface is used as a type argument to `SignalEntity`, `SignalEntityAsync`, or `CreateEntityProxy`. The exception message explains which rule was broken.
> [!NOTE] > Interface methods returning `void` can only be signaled (one-way), not called (two-way). Interface methods returning `Task` or `Task<T>` can be either called or signaled. If called, they return the result of the operation, or re-throw exceptions thrown by the operation. However, when signaled, they do not return the actual result or exception from the operation, but just the default value.
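As a hedged sketch of an interface that follows these rules (the `ICounter` name and its operations are illustrative, not from the article):

```csharp
using System.Threading.Tasks;

// Each method has at most one parameter and returns void, Task, or Task<T>.
public interface ICounter
{
    void Add(int amount);   // returns void: can only be signaled (one-way)
    Task Reset();           // returns Task: can be signaled or called
    Task<int> Get();        // returns Task<T>: can be called to read a result
}
```

Such an interface would typically be supplied as the type argument to `SignalEntity` or `CreateEntityProxy`, which is where the runtime check described above throws if one of the rules is violated.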
azure-functions Durable Functions Phone Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-phone-verification.md
The **E4_SendSmsChallenge** function uses the Twilio binding to send the SMS mes
[!code-csharp[Main](~/samples-durable-functions/samples/precompiled/PhoneVerification.cs?range=72-89)] > [!NOTE]
-> You will need to install the `Microsoft.Azure.WebJobs.Extensions.Twilio` Nuget package to run the sample code.
+> You must first install the `Microsoft.Azure.WebJobs.Extensions.Twilio` NuGet package for Functions to run the sample code. Don't also install the main [Twilio NuGet package](https://www.nuget.org/packages/Twilio/) because this can cause versioning problems that result in build errors.
# [JavaScript](#tab/javascript)
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
description: Understand how to use Azure SQL bindings in Azure Functions.
Previously updated : 5/24/2022 Last updated : 6/3/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
You can add the preview extension bundle by adding or replacing the following co
::: zone pivot="programming-language-python"
+## Functions runtime
+ > [!NOTE]
-> Python language support for the SQL bindings extension is only available for v4 of the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) and requires runtime v4.5.0 for deployment in Azure. Learn more about determining the runtime in the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) documentation. Please see the tracking [GitHub issue](https://github.com/Azure/azure-functions-sql-extension/issues/250) for the latest update on availability.
+> Python language support for the SQL bindings extension is only available for v4 of the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) and requires runtime v4.5.0 or greater for deployment in Azure. Learn more about determining the runtime in the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) documentation. Please see the tracking [GitHub issue](https://github.com/Azure/azure-functions-sql-extension/issues/250) for the latest update on availability.
-## Install bundle
+The Functions runtime required for local development and testing of Python functions isn't included in the current release of Azure Functions Core Tools and must be installed independently. The latest instructions on installing a preview version of Core Tools are available in the tracking [GitHub issue](https://github.com/Azure/azure-functions-sql-extension/issues/250).
+
+Alternatively, a VS Code [development container](https://code.visualstudio.com/docs/remote/containers) definition can be used to expedite your environment setup. The definition components are available in the SQL bindings [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-python/.devcontainer).
++
+## Install bundle
The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
You can add the preview extension bundle by adding or replacing the following co
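A hedged sketch of such a host.json entry for the v4.x preview bundle; the bundle ID and version range below are assumptions, not taken from this entry.

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
    "version": "[4.*, 5.0.0)"
  }
}
```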
# [Preview Bundle v3.x](#tab/extensionv3)
-Python support is not available with the SQL bindings extension in the v3 version of the functions runtime.
+Python support isn't available with the SQL bindings extension in the v3 version of the functions runtime.
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
See the [Example section](#example) for complete examples.
::: zone pivot="programming-language-csharp"
-The usage of the Blob trigger depends on the extension package version, and the C# modality used in your function app, which can be one of the following:
+The usage of the Queue trigger depends on the extension package version, and the C# modality used in your function app, which can be one of the following:
# [In-process class library](#tab/in-process)
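As a hedged in-process sketch (the queue name, connection setting, and function name are illustrative):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class QueueExample
{
    // Runs whenever a message arrives on the "myqueue-items" queue.
    [FunctionName("QueueExample")]
    public static void Run(
        [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string message,
        ILogger log)
    {
        log.LogInformation($"Queue message received: {message}");
    }
}
```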
The host.json file contains settings that control queue trigger behavior. See th
<!-- LINKS --> [CloudQueueMessage]: /dotnet/api/microsoft.azure.storage.queue.cloudqueuemessage
-[QueueMessage]: /dotnet/api/azure.storage.queues.models.queuemessage
+[QueueMessage]: /dotnet/api/azure.storage.queues.models.queuemessage
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
There is also a [Visual Studio Code-based version](create-first-function-vs-code
## Prerequisites
-+ [Visual Studio 2022](https://azure.microsoft.com/downloads/), which supports .NET 6.0. Make sure to select the **Azure development** workload during installation.
++ [Visual Studio 2022](https://visualstudio.microsoft.com/vs/), which supports .NET 6.0. Make sure to select the **Azure development** workload during installation. + [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing). If you don't already have an account [create a free one](https://azure.microsoft.com/free/dotnet/) before you begin.
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
For more information, see [Local settings file](#local-settings).
#### <a name="debugging-functions-locally"></a>Debug functions locally
-To debug your functions, select F5. If you haven't already downloaded [Core Tools][Azure Functions Core Tools], you're prompted to do so. When Core Tools is installed and running, output is shown in the Terminal. This step is the same as running the `func host start` Core Tools command from the Terminal, but with extra build tasks and an attached debugger.
+To debug your functions, select F5. If you haven't already downloaded [Core Tools][Azure Functions Core Tools], you're prompted to do so. When Core Tools is installed and running, output is shown in the Terminal. This step is the same as running the `func start` Core Tools command from the Terminal, but with extra build tasks and an attached debugger.
When the project is running, you can use the **Execute Function Now...** feature of the extension to trigger your functions as you would when the project is deployed to Azure. With the project running in debug mode, breakpoints are hit in Visual Studio Code as you would expect.
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
recommendations: false Previously updated : 05/12/2022 Last updated : 06/02/2022 # Azure guidance for secure isolation
When a managed HSM is created, the requestor also provides a list of data plane
> [!IMPORTANT] > Unlike with key vaults, granting your users management plane access to a managed HSM doesn't grant them any data plane access to keys or to data plane role assignments (managed HSM local RBAC). This isolation is implemented by design to prevent inadvertent expansion of privileges affecting access to keys stored in managed HSMs.
-As mentioned previously, managed HSM supports [importing keys generated](../key-vault/managed-hsm/hsm-protected-keys-byok.md) in your on-premises HSMs, ensuring the keys never leave the HSM protection boundary, also known as *bring your own key (BYOK)* scenario. Managed HSM supports integration with Azure services such as [Azure Storage](../storage/common/customer-managed-keys-overview.md), [Azure SQL Database](/azure/azure-sql/database/transparent-data-encryption-byok-overview), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and others.
+As mentioned previously, managed HSM supports [importing keys generated](../key-vault/managed-hsm/hsm-protected-keys-byok.md) in your on-premises HSMs, ensuring the keys never leave the HSM protection boundary, also known as *bring your own key (BYOK)* scenario. Managed HSM supports integration with Azure services such as [Azure Storage](../storage/common/customer-managed-keys-overview.md), [Azure SQL Database](/azure/azure-sql/database/transparent-data-encryption-byok-overview), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and others. For a more complete list of Azure services that work with Managed HSM, see [Data encryption models](../security/fundamentals/encryption-models.md#supporting-services).
Managed HSM enables you to use the established Azure Key Vault API and management interfaces. You can use the same application development and deployment patterns for all your applications irrespective of the key management solution: multi-tenant vault or single-tenant managed HSM.
Drive encryption through BitLocker and DM-Crypt is a data protection feature tha
For managed disks, Azure Disk encryption allows you to encrypt the OS and data disks used by an IaaS virtual machine; however, data disks can't be encrypted without first encrypting the OS volume. The solution relies on Azure Key Vault to help you control and manage the disk encryption keys in key vaults. You can supply your own encryption keys, which are safeguarded in Azure Key Vault to support *bring your own key (BYOK)* scenarios, as described previously in the *[Data encryption key management](#data-encryption-key-management)* section.
-Azure Disk encryption isn't supported by Managed HSM or an on-premises key management service. Only key vaults managed by the Azure Key Vault service can be used to safeguard customer-managed encryption keys for Azure Disk encryption.
+Azure Disk encryption isn't supported by Managed HSM or an on-premises key management service. Only key vaults managed by the Azure Key Vault service can be used to safeguard customer-managed encryption keys for Azure Disk encryption. See [Encryption at host](#encryption-at-host) for other options involving Managed HSM.
> [!NOTE] > Detailed instructions are available for creating and configuring a key vault for Azure Disk encryption with both **[Windows](../virtual-machines/windows/disk-encryption-key-vault.md)** and **[Linux](../virtual-machines/linux/disk-encryption-key-vault.md)** VMs.
For [Windows VMs](../virtual-machines/windows/disk-encryption-faq.yml), Azure Di
Customer-managed keys (CMK) enable you to have [full control](../virtual-machines/disk-encryption.md#full-control-of-your-keys) over your encryption keys. You can grant access to managed disks in your Azure Key Vault so that your keys can be used for encrypting and decrypting the DEK. You can also disable your keys or revoke access to managed disks at any time. Finally, you have full audit control over key usage with Azure Key Vault monitoring to ensure that only managed disks or other authorized resources are accessing your encryption keys. ##### *Encryption at host*
-Encryption at host ensures that data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. Disks with encryption at host enabled aren't encrypted with Azure Storage encryption; instead, the server hosting your VM provides the encryption for your data, and that encrypted data flows into Azure Storage. For more information, see [Encryption at host - End-to-end encryption for your VM data](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data).
+Encryption at host ensures that data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. Disks with encryption at host enabled aren't encrypted with Azure Storage encryption; instead, the server hosting your VM provides the encryption for your data, and that encrypted data flows into Azure Storage. For more information, see [Encryption at host - End-to-end encryption for your VM data](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data). As mentioned previously, [Azure Disk encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) for VM and VMSS isn't supported by Managed HSM. However, encryption at host with CMK is supported by Managed HSM.
You're [always in control of your customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. You can access, extract, and delete your customer data stored in Azure at will. When you terminate your Azure subscription, Microsoft takes the necessary steps to ensure that you continue to own your customer data. A common concern upon data deletion or subscription termination is whether another customer or Azure administrator can access your deleted data. The following sections explain how data deletion, retention, and destruction work in Azure.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 04/29/2022 Last updated : 06/02/2022 # Compare Azure Government and global Azure
Microsoft Azure Government uses same underlying technologies as global Azure, wh
## Export control implications
-You are responsible for designing and deploying your applications to meet [US export control requirements](./documentation-government-overview-itar.md) such as the requirements prescribed in the EAR, ITAR, and DoE 10 CFR Part 810. In doing so, you should not include sensitive or restricted information in Azure resource names, as explained in [Considerations for naming Azure resources](./documentation-government-concept-naming-resources.md).
+You're responsible for designing and deploying your applications to meet [US export control requirements](./documentation-government-overview-itar.md) such as the requirements prescribed in the EAR, ITAR, and DoE 10 CFR Part 810. In doing so, you shouldn't include sensitive or restricted information in Azure resource names, as explained in [Considerations for naming Azure resources](./documentation-government-concept-naming-resources.md).
## Guidance for developers
-Azure Government services operate the same way as the corresponding services in global Azure, which is why most of the existing online Azure documentation applies equally well to Azure Government. However, there are some key differences that developers working on applications hosted in Azure Government must be aware of. For more information, see [Guidance for developers](./documentation-government-developer-guide.md). As a developer, you must know how to connect to Azure Government and once you connect you will mostly have the same experience as in global Azure.
+Azure Government services operate the same way as the corresponding services in global Azure, which is why most of the existing online Azure documentation applies equally well to Azure Government. However, there are some key differences that developers working on applications hosted in Azure Government must be aware of. For more information, see [Guidance for developers](./documentation-government-developer-guide.md). As a developer, you must know how to connect to Azure Government and once you connect you'll mostly have the same experience as in global Azure.
> [!NOTE] > This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM compatibility, see [**Introducing the new Azure PowerShell Az module**](/powershell/azure/new-azureps-module-az). For Az module installation instructions, see [**Install the Azure Az PowerShell module**](/powershell/azure/install-az-ps).
Table below lists API endpoints in Azure vs. Azure Government for accessing and
||Custom Vision|cognitiveservices.azure.com|cognitiveservices.azure.us </br>[Portal](https://www.customvision.azure.us/)|| ||Content Moderator|cognitiveservices.azure.com|cognitiveservices.azure.us|| ||Face API|cognitiveservices.azure.com|cognitiveservices.azure.us||
-||Language Understanding|cognitiveservices.azure.com|cognitiveservices.azure.us </br>[Portal](https://luis.azure.us/)||
+||Language Understanding|cognitiveservices.azure.com|cognitiveservices.azure.us </br>[Portal](https://luis.azure.us/)|Part of [Cognitive Services for Language](../cognitive-services/language-service/index.yml)|
||Personalizer|cognitiveservices.azure.com|cognitiveservices.azure.us||
-||QnA Maker|cognitiveservices.azure.com|cognitiveservices.azure.us||
+||QnA Maker|cognitiveservices.azure.com|cognitiveservices.azure.us|Part of [Cognitive Services for Language](../cognitive-services/language-service/index.yml)|
||Speech service|See [STT API docs](../cognitive-services/speech-service/rest-speech-to-text-short.md#regions-and-endpoints)|[Speech Studio](https://speech.azure.us/)</br></br>See [Speech service endpoints](../cognitive-services/Speech-Service/sovereign-clouds.md)</br></br>**Speech translation endpoints**</br>Virginia: `https://usgovvirginia.s2s.speech.azure.us`</br>Arizona: `https://usgovarizona.s2s.speech.azure.us`</br>||
-||Text Analytics|cognitiveservices.azure.com|cognitiveservices.azure.us||
+||Text Analytics|cognitiveservices.azure.com|cognitiveservices.azure.us|Part of [Cognitive Services for Language](../cognitive-services/language-service/index.yml)|
||Translator|See [Translator API docs](../cognitive-services/translator/reference/v3-0-reference.md#base-urls)|cognitiveservices.azure.us|| |**Analytics**|Azure HDInsight|azurehdinsight.net|azurehdinsight.us|| ||Event Hubs|servicebus.windows.net|servicebus.usgovcloudapi.net||
Table below lists API endpoints in Azure vs. Azure Government for accessing and
## Service availability
-Microsoft's goal for Azure Government is to match service availability in Azure. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). Services available in Azure Government are listed by category and whether they are Generally Available or available through Preview. If a service is available in Azure Government, that fact is not reiterated in the rest of this article. Instead, you are encouraged to review [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true) for the latest, up-to-date information on service availability.
+Microsoft's goal for Azure Government is to match service availability in Azure. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). Services available in Azure Government are listed by category and whether they're Generally Available or available through Preview. If a service is available in Azure Government, that fact isn't reiterated in the rest of this article. Instead, you're encouraged to review [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true) for the latest, up-to-date information on service availability.
In general, service availability in Azure Government implies that all corresponding service features are available to you. Variations to this approach and other applicable limitations are tracked and explained in this article based on the main service categories outlined in the [online directory of Azure services](https://azure.microsoft.com/services/). Other considerations for service deployment and usage in Azure Government are also provided. ## AI + machine learning
-This section outlines variations and considerations when using **Azure Bot Service**, **Azure Machine Learning**, and **Cognitive Services** in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=machine-learning-service,bot-service,cognitive-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using **Azure Bot Service**, **Azure Machine Learning**, and **Cognitive Services** in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=machine-learning-service,bot-service,cognitive-services&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure Bot Service](/azure/bot-service/)
-The following Azure Bot Service **features are not currently available** in Azure Government (updated 8/16/2021):
+The following Azure Bot Service **features aren't currently available** in Azure Government (updated 16 August 2021):
- Bot Framework Composer integration - Channels (due to availability of dependent services)
For feature variations and limitations, see [Azure Machine Learning feature avai
### [Cognitive
-The following Content Moderator **features are not currently available** in Azure Government:
+The following Content Moderator **features aren't currently available** in Azure Government:
- Review UI and Review APIs. ### [Cognitive
-The following Language Understanding **features are not currently available** in Azure Government:
+The following Language Understanding **features aren't currently available** in Azure Government:
- Speech Requests - Prebuilt Domains
For feature variations and limitations, including API endpoints, see [Speech ser
### [Cognitive
-The following Translator **features are not currently available** in Azure Government:
+The following Translator **features aren't currently available** in Azure Government:
- Custom Translator - Translator Hub ## Analytics
-This section outlines variations and considerations when using Analytics services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-share,power-bi-embedded,analysis-services,event-hubs,data-lake-analytics,storage,data-catalog,data-factory,synapse-analytics,stream-analytics,databricks,hdinsight&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using Analytics services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=data-share,power-bi-embedded,analysis-services,event-hubs,data-lake-analytics,storage,data-catalog,data-factory,synapse-analytics,stream-analytics,databricks,hdinsight&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure HDInsight](../hdinsight/index.yml)
-For secured virtual networks, you will want to allow network security groups (NSGs) access to certain IP addresses and ports. For Azure Government, you should allow the following IP addresses (all with an Allowed port of 443):
+For secured virtual networks, you'll want to allow network security groups (NSGs) access to certain IP addresses and ports. For Azure Government, you should allow the following IP addresses (all with an Allowed port of 443):
|**Region**|**Allowed IP addresses**|**Allowed port**| ||--||
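Where these addresses must be allowed, the corresponding NSG rule can be scripted rather than added by hand. A minimal Azure CLI sketch, assuming placeholder resource group, NSG, and IP values (substitute the Azure Government addresses from the table):

```azurecli
# Allow inbound HTTPS (443) from the HDInsight health and management service addresses.
# The resource group, NSG name, and IP addresses below are placeholders.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myHdiSubnetNsg \
  --name AllowHDInsightManagement \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.10 203.0.113.11 \
  --destination-port-ranges 443
```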
To learn how to embed analytical content within your business process applicatio
## Databases
-This section outlines variations and considerations when using Databases services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir,data-factory,sql-server-stretch-database,redis-cache,database-migration,synapse-analytics,postgresql,mariadb,mysql,sql-database,cosmos-db&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using Databases services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir,data-factory,sql-server-stretch-database,redis-cache,database-migration,synapse-analytics,postgresql,mariadb,mysql,sql-database,cosmos-db&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure Database for MySQL](../mysql/index.yml)
-The following Azure Database for MySQL **features are not currently available** in Azure Government:
+The following Azure Database for MySQL **features aren't currently available** in Azure Government:
- Advanced Threat Protection ### [Azure Database for PostgreSQL](../postgresql/index.yml)
-The following Azure Database for PostgreSQL **features are not currently available** in Azure Government:
+The following Azure Database for PostgreSQL **features aren't currently available** in Azure Government:
- Hyperscale (Citus) deployment option - The following features of the Single server deployment option
The following Azure Database for PostgreSQL **features are not currently availab
### [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview)
-The following Azure SQL Managed Instance **features are not currently available** in Azure Government:
+The following Azure SQL Managed Instance **features aren't currently available** in Azure Government:
- Long-term retention ## Developer tools
-This section outlines variations and considerations when using Developer tools in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=load-testing,app-configuration,devtest-lab,lab-services,azure-devops&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using Developer tools in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=load-testing,app-configuration,devtest-lab,lab-services,azure-devops&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Enterprise Dev/Test subscription offer](https://azure.microsoft.com/offers/ms-azr-0148p/)
This section outlines variations and considerations when using Developer tools i
## Identity
-This section outlines variations and considerations when using Identity services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=information-protection,active-directory-ds,active-directory&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using Identity services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=information-protection,active-directory-ds,active-directory&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure Active Directory Premium P1 and P2](../active-directory/index.yml)
The following features have known limitations in Azure Government:
- Limitations with B2B Collaboration in supported Azure US Government tenants: - For more information about B2B collaboration limitations in Azure Government and to find out if B2B collaboration is available in your Azure Government tenant, see [Azure AD B2B in government and national clouds](../active-directory/external-identities/b2b-government-national-clouds.md).
- - B2B collaboration via Power BI is not supported. When you invite a guest user from within Power BI, the B2B flow is not used and the guest user won't appear in the tenant's user list. If a guest user is invited through other means, they'll appear in the Power BI user list, but any sharing request to the user will fail and display a 403 Forbidden error.
+ - B2B collaboration via Power BI isn't supported. When you invite a guest user from within Power BI, the B2B flow isn't used and the guest user won't appear in the tenant's user list. If a guest user is invited through other means, they'll appear in the Power BI user list, but any sharing request to the user will fail and display a 403 Forbidden error.
- Limitations with multi-factor authentication:
- - Trusted IPs are not supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multi-factor authentication should and should not be required based off the user's current IP address.
+ - Trusted IPs aren't supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multi-factor authentication should and shouldn't be required based on the user's current IP address.
## Management and governance
-This section outlines variations and considerations when using Management and Governance services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=managed-applications,azure-policy,network-watcher,monitor,traffic-manager,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using Management and Governance services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=managed-applications,azure-policy,network-watcher,monitor,traffic-manager,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Automation](../automation/index.yml)
-The following Automation **features are not currently available** in Azure Government:
+The following Automation **features aren't currently available** in Azure Government:
- Automation analytics solution ### [Azure Advisor](../advisor/index.yml)
-The following Azure Advisor recommendation **features are not currently available** in Azure Government:
+The following Azure Advisor recommendation **features aren't currently available** in Azure Government:
- Cost - (Preview) Consider App Service stamp fee reserved capacity to save over your on-demand costs.
The following Azure Advisor recommendation **features are not currently availabl
- Enforce 'Add or replace a tag on resources' using Azure Policy. - Enforce 'Allowed locations' using Azure Policy. - Enforce 'Allowed virtual machine SKUs' using Azure Policy.
- - Enforce 'Audit VMs that do not use managed disks' using Azure Policy.
+ - Enforce 'Audit VMs that do not use managed disks' using Azure Policy.
- Enforce 'Inherit a tag from the resource group' using Azure Policy. - Update Azure Spring Cloud API Version. - Update your outdated Azure Spring Cloud SDK to the latest version.
If you want to be more aggressive at identifying underutilized virtual machines,
### [Azure Lighthouse](../lighthouse/index.yml)
-The following Azure Lighthouse **features are not currently available** in Azure Government:
+The following Azure Lighthouse **features aren't currently available** in Azure Government:
- Managed Service offers published to Azure Marketplace
-- Delegation of subscriptions across a national cloud and the Azure public cloud, or across two separate national clouds, is not supported
-- Privileged Identity Management (PIM) feature is not enabled, for example, just-in-time (JIT) / eligible authorization capability
+- Delegation of subscriptions across a national cloud and the Azure public cloud, or across two separate national clouds, isn't supported
+- The Privileged Identity Management (PIM) feature isn't enabled, for example, just-in-time (JIT) / eligible authorization capability
### [Azure Monitor](../azure-monitor/index.yml)
For more information, see [Connect Operations Manager to Azure Monitor](../azure
**Frequently asked questions** - Can I migrate data from Azure Monitor logs in Azure to Azure Government?
- - No. It is not possible to move data or your workspace from Azure to Azure Government.
+ - No. It isn't possible to move data or your workspace from Azure to Azure Government.
- Can I switch between Azure and Azure Government workspaces from the Operations Management Suite portal?
- - No. The portals for Azure and Azure Government are separate and do not share information.
+ - No. The portals for Azure and Azure Government are separate and don't share information.
#### [Application Insights](../azure-monitor/app/app-insights-overview.md)
Application Insights (part of Azure Monitor) enables the same features in both A
**Visual Studio** - In Azure Government, you can enable monitoring on your ASP.NET, ASP.NET Core, Java, and Node.js based applications running on Azure App Service. For more information, see [Application monitoring for Azure App Service overview](../azure-monitor/app/azure-web-apps.md). In Visual Studio, go to Tools|Options|Accounts|Registered Azure Clouds|Add New Azure Cloud and select Azure US Government as the Discovery endpoint. After that, adding an account in File|Account Settings will prompt you for which cloud you want to add from.
-**SDK endpoint modifications** - In order to send data from Application Insights to an Azure Government region, you will need to modify the default endpoint addresses that are used by the Application Insights SDKs. Each SDK requires slightly different modifications, as described in [Application Insights overriding default endpoints](../azure-monitor/app/custom-endpoints.md).
+**SDK endpoint modifications** - In order to send data from Application Insights to an Azure Government region, you'll need to modify the default endpoint addresses that are used by the Application Insights SDKs. Each SDK requires slightly different modifications, as described in [Application Insights overriding default endpoints](../azure-monitor/app/custom-endpoints.md).
-**Firewall exceptions** - Application Insights uses several IP addresses. You might need to know these addresses if the app that you are monitoring is hosted behind a firewall. For more information, see [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) from where you can download Azure Government IP addresses.
+**Firewall exceptions** - Application Insights uses several IP addresses. You might need to know these addresses if the app that you're monitoring is hosted behind a firewall. For more information, see [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) from where you can download Azure Government IP addresses.
>[!NOTE]
->Although these addresses are static, it's possible that we will need to change them from time to time. All Application Insights traffic represents outbound traffic except for availability monitoring and webhooks, which require inbound firewall rules.
+>Although these addresses are static, it's possible that we'll need to change them from time to time. All Application Insights traffic represents outbound traffic except for availability monitoring and webhooks, which require inbound firewall rules.
You need to open some **outgoing ports** in your server's firewall to allow the Application Insights SDK and/or Status Monitor to send data to the portal:
You need to open some **outgoing ports** in your server's firewall to allow the
### [Cost Management and Billing](../cost-management-billing/index.yml)
-The following Azure Cost Management + Billing **features are not currently available** in Azure Government:
+The following Azure Cost Management + Billing **features aren't currently available** in Azure Government:
- Cost Management + Billing for cloud solution providers (CSPs) ## Media
-This section outlines variations and considerations when using Media services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=cdn,media-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using Media services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=cdn,media-services&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Media Services](/azure/media-services/)
For Azure Media Services v3 feature variations in Azure Government, see [Azure M
## Migration
-This section outlines variations and considerations when using Migration services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration,cost-management,azure-migrate,site-recovery&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using Migration services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration,cost-management,azure-migrate,site-recovery&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure Migrate](../migrate/index.yml)
-The following Azure Migrate **features are not currently available** in Azure Government:
+The following Azure Migrate **features aren't currently available** in Azure Government:
- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on App Service. - Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on Azure Kubernetes Service (AKS).
For more information, see [Azure Migrate support matrix](../migrate/migrate-supp
## Networking
-This section outlines variations and considerations when using Networking services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-bastion,frontdoor,virtual-wan,dns,ddos-protection,cdn,azure-firewall,network-watcher,load-balancer,vpn-gateway,expressroute,application-gateway,virtual-network&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using Networking services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-bastion,frontdoor,virtual-wan,dns,ddos-protection,cdn,azure-firewall,network-watcher,load-balancer,vpn-gateway,expressroute,application-gateway,virtual-network&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure ExpressRoute](../expressroute/index.yml)
For an overview of ExpressRoute, see [What is Azure ExpressRoute?](../expressrou
### [Private Link](../private-link/index.yml)
-For Private Link services availability, see [Azure Private Link availability](../private-link/availability.md).
+- For Private Link services availability, see [Azure Private Link availability](../private-link/availability.md).
+- For Private DNS zone names, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md#government).
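As an illustration of the Azure Government difference, private DNS zones for private endpoints use the US Government DNS suffixes rather than the public-cloud names. A minimal Azure CLI sketch for a blob storage private endpoint zone; the resource group is a placeholder and the zone name should be confirmed against the DNS configuration article linked above:

```azurecli
# Create the private DNS zone used by blob storage private endpoints in Azure Government.
# Resource group is a placeholder; verify the zone name for your service in the linked article.
az network private-dns zone create \
  --resource-group myResourceGroup \
  --name "privatelink.blob.core.usgovcloudapi.net"
```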
### [Traffic Manager](../traffic-manager/index.yml)
Traffic Manager health checks can originate from certain IP addresses for Azure
## Security
-This section outlines variations and considerations when using Security services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sentinel,azure-dedicated-hsm,information-protection,application-gateway,vpn-gateway,security-center,key-vault,active-directory-ds,ddos-protection,active-directory&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using Security services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sentinel,azure-dedicated-hsm,information-protection,application-gateway,vpn-gateway,security-center,key-vault,active-directory-ds,ddos-protection,active-directory&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Microsoft Defender for IoT](../defender-for-iot/index.yml)
For feature variations and limitations, see [Cloud feature availability for US G
## Storage
-This section outlines variations and considerations when using Storage services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache,managed-disks,storsimple,backup,storage&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using Storage services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache,managed-disks,storsimple,backup,storage&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [Azure managed disks](../virtual-machines/managed-disks-overview.md)
-The following Azure managed disks **features are not currently available** in Azure Government:
+The following Azure managed disks **features aren't currently available** in Azure Government:
- Zone-redundant storage (ZRS)
With Import/Export jobs for US Gov Arizona or US Gov Texas, the mailing address
## Web
-This section outlines variations and considerations when using Web services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud,signalr-service,api-management,notification-hubs,search,cdn,app-service-linux,app-service&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
+This section outlines variations and considerations when using Web services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud,signalr-service,api-management,notification-hubs,search,cdn,app-service-linux,app-service&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
### [API Management](../api-management/index.yml)
-The following API Management **features are not currently available** in Azure Government:
+The following API Management **features aren't currently available** in Azure Government:
- Azure AD B2C integration ### [App Service](../app-service/index.yml)
-The following App Service **resources are not currently available** in Azure Government:
+The following App Service **resources aren't currently available** in Azure Government:
- App Service Certificate - App Service Managed Certificate - App Service Domain
-The following App Service **features are not currently available** in Azure Government:
+The following App Service **features aren't currently available** in Azure Government:
- Deployment - Deployment options: only Local Git Repository and External Repository are available
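Because only Local Git Repository and External Repository are available as deployment options, scripted setups typically enable one of them explicitly. A minimal Azure CLI sketch for turning on local Git deployment; the app and resource group names are placeholders:

```azurecli
# Enable local Git deployment for an App Service app; names are placeholders.
# The command returns the Git remote URL to push to.
az webapp deployment source config-local-git \
  --name myGovWebApp \
  --resource-group myResourceGroup
```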
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Bitscape](https://www.bitscape.com)| |[Bio Automation Support](https://www.stacsdna.com/)| |[Blackwood Associates, Inc. (dba BAI Federal)](https://www.blackwoodassociates.com/)|
-|[Blue Source Group, Inc.](https://www.blackwoodassociates.com/)|
+|[Blue Source Group, Inc.](https://bluesourcegroup.com/)|
|[Blueforce Development Corporation](https://www.blueforcedev.com/)| |[Booz Allen Hamilton](https://www.boozallen.com/)| |[Bridge Partners LLC](https://www.bridgepartnersllc.com)|
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Title: Create and manage action groups in the Azure portal
description: Learn how to create and manage action groups in the Azure portal. Previously updated : 2/23/2022 Last updated : 6/2/2022
Under **Instance details**:
> [!NOTE] > When you configure an action to notify a person by email or SMS, they receive a confirmation indicating they have been added to the action group.+ ### Test an action group in the Azure portal (Preview) When creating or updating an action group in the Azure portal, you can **test** the action group.
-1. After creating an action rule, click on **Review + create**. Select *Test action group*.
+1. After defining an action, click on **Review + create**. Select *Test action group*.
![The Test Action Group](./media/action-groups/test-action-group.png)
To allow you to check the action groups are working as expected before you enabl
All the details and links in Test email notifications for the alerts fired are a sample set for reference.
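If you prefer to script the action group you're about to test, it can be created with the Azure CLI before running the portal test. A minimal sketch, assuming placeholder resource group, action group, and email receiver values:

```azurecli
# Create an action group with a single email receiver.
# Resource group, names, and the email address are placeholders.
az monitor action-group create \
  --resource-group myResourceGroup \
  --name myActionGroup \
  --short-name myAG \
  --action email ops-team ops@example.com
```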
+#### Azure Resource Manager role membership requirements
+The following table describes the role membership requirements to use the *test actions* functionality.
+
+| User's role membership | Existing Action Group | Existing Resource Group and new Action Group | New Resource Group and new Action Group |
+| - | - | -- | - |
+| Subscription Contributor | Supported | Supported | Supported |
+| Resource Group Contributor | Supported | Supported | Not Applicable |
+| Action Group resource Contributor | Supported | Not Applicable | Not Applicable |
+| Azure Monitor Contributor | Supported | Supported | Not Applicable |
+| Custom role | Supported | Supported | Not Applicable |
++ > [!NOTE]
-> You may have a limited number of actions in a test Action Group. See the [rate limiting information](./alerts-rate-limiting.md) article.
+> You may perform only a limited number of tests in a given time period. See the [rate limiting information](./alerts-rate-limiting.md) article.
> > You can opt in to or opt out of the common alert schema through Action Groups in the portal. You can [find common schema samples for test action groups for all the sample types](./alerts-common-schema-test-action-definitions.md).
-> You can opt in or opt out to the non-common alert schema through Action Groups, on the portal. You can [find non-common schema alert definitions](./alerts-non-common-schema-definitions.md).
+> You can [find non-common schema alert definitions](./alerts-non-common-schema-definitions.md).
## Manage your action groups
azure-monitor Tutorial Custom Logs Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs-api.md
Last updated 01/19/2022
# Tutorial: Send custom logs to Azure Monitor Logs using Resource Manager templates (preview) [Custom logs](custom-logs-overview.md) in Azure Monitor allow you to send custom data to tables in a Log Analytics workspace with a REST API. This tutorial walks through configuration of a new table and a sample application to send custom logs to Azure Monitor using Resource Manager templates. + > [!NOTE] > This tutorial uses Resource Manager templates and REST API to configure custom logs. See [Tutorial: Send custom logs to Azure Monitor Logs using the Azure portal (preview)](tutorial-custom-logs.md) for a similar tutorial using the Azure portal.
The cache that drives IntelliSense may take up to 24 hours to update.
- [Complete a similar tutorial using the Azure portal.](tutorial-custom-logs.md) - [Read more about custom logs.](custom-logs-overview.md)-- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
+- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
azure-monitor Tutorial Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs.md
Last updated 01/19/2022
# Tutorial: Send custom logs to Azure Monitor Logs using the Azure portal (preview) [Custom logs](custom-logs-overview.md) in Azure Monitor allow you to send external data to a Log Analytics workspace with a REST API. This tutorial walks through configuration of a new table and a sample application to send custom logs to Azure Monitor. + > [!NOTE] > This tutorial uses the Azure portal. See [Tutorial: Send custom logs to Azure Monitor Logs using resource manager templates (preview)](tutorial-custom-logs-api.md) for a similar tutorial using resource manager templates.
azure-resource-manager Tutorial Custom Providers Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-create.md
Title: Create and use a custom provider
description: This tutorial shows how to create and use an Azure Custom Provider. Use custom providers to change workflows on Azure. Previously updated : 06/19/2019 Last updated : 05/06/2022
You can deploy the previous custom provider by using an Azure Resource Manager t
```JSON {
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "resources": [ {
You can deploy the previous custom provider by using an Azure Resource Manager t
## Use custom actions and resources
-After you create a custom provider, you can use the new Azure APIs. The following tabs explain how to call and use a custom provider.
+After you create a custom provider, you can use the new Azure APIs. The following sections explain how to call and use a custom provider.
### Custom actions
-# [Azure CLI](#tab/azure-cli)
+#### Azure CLI
> [!NOTE] > You must replace the `{subscriptionId}` and `{resourceGroupName}` placeholders with the subscription and resource group of where you deployed the custom provider.
az resource invoke-action --action myCustomAction \
Parameter | Required | Description ||
-*action* | Yes | The name of the action defined in the custom provider
-*ids* | Yes | The resource ID of the custom provider
-*request-body* | No | The request body that will be sent to the endpoint
-
-# [Template](#tab/template)
-
-None.
--
+*action* | Yes | The name of the action defined in the custom provider.
+*ids* | Yes | The resource ID of the custom provider.
+*request-body* | No | The request body that will be sent to the endpoint.
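Putting those parameters together, a complete call might look like the following sketch; the subscription ID, resource group, custom provider name, and request body are placeholders:

```azurecli
# Invoke a custom action on a custom provider resource; all IDs and the body are placeholders.
az resource invoke-action --action myCustomAction \
  --ids "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.CustomProviders/resourceProviders/myCustomProvider" \
  --request-body '{"hello": "world"}'
```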
### Custom resources
A sample Resource Manager template:
```JSON {
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "resources": [ {
Parameter | Required | Description
In this article, you learned about custom providers. For more information, see: -- [How to: Adding custom actions to Azure REST API](./custom-providers-action-endpoint-how-to.md)-- [How to: Adding custom resources to Azure REST API](./custom-providers-resources-endpoint-how-to.md)
+- [How to: Add custom actions to Azure REST API](./custom-providers-action-endpoint-how-to.md)
+- [How to: Add custom resources to Azure REST API](./custom-providers-resources-endpoint-how-to.md)
azure-resource-manager Tutorial Custom Providers Function Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-setup.md
Title: Set up Azure Functions
description: This tutorial goes over how to create a function app in Azure Functions and set it up to work with Azure Custom Providers. Previously updated : 06/19/2019 Last updated : 05/06/2022
-# Set up Azure Functions for Azure Custom Providers
+# Set up Azure Functions for custom providers
A custom provider is a contract between Azure and an endpoint. With custom providers, you can change workflows in Azure. This tutorial shows how to set up a function app in Azure Functions to work as a custom provider endpoint.
A custom provider is a contract between Azure and an endpoint. With custom provi
> [!NOTE] > In this tutorial, you create a simple service endpoint that uses a function app in Azure Functions. However, a custom provider can use any publicly accessible endpoint. Alternatives include Azure Logic Apps, Azure API Management, and the Web Apps feature of Azure App Service.
-To start this tutorial, you should first follow the tutorial [Create your first function app in the Azure portal](../../azure-functions/functions-get-started.md). That tutorial creates a .NET core webhook function that can be modified in the Azure portal. It is also the foundation for the current tutorial.
+To start this tutorial, you should first follow the tutorial [Create your first function app in the Azure portal](../../azure-functions/functions-get-started.md). That tutorial creates a .NET Core webhook function that can be modified in the Azure portal. It's also the foundation for the current tutorial.
## Install Azure Table storage bindings
To install the Azure Table storage bindings:
1. Select **+ New Input**. 1. Select **Azure Table Storage**. 1. Install the Microsoft.Azure.WebJobs.Extensions.Storage extension if it isn't already installed.
-1. In the **Table parameter name** box, enter **tableStorage**.
-1. In the **Table name** box, enter **myCustomResources**.
+1. In the **Table parameter name** box, enter *tableStorage*.
+1. In the **Table name** box, enter *myCustomResources*.
1. Select **Save** to save the updated input parameter. ![Custom provider overview showing table bindings](./media/create-custom-provider/azure-functions-table-bindings.png)
To set up the Azure function to include the custom provider RESTful request meth
## Add Azure Resource Manager NuGet packages > [!NOTE]
-> If your C# project file is missing from the project directory, you can add it manually. Or it will appear after the Microsoft.Azure.WebJobs.Extensions.Storage extension is installed on the function app.
+> If your C# project file is missing from the project directory, you can add it manually, or it will appear after the Microsoft.Azure.WebJobs.Extensions.Storage extension is installed on the function app.
Next, update the C# project file to include helpful NuGet libraries. These libraries make it easier to parse incoming requests from custom providers. Follow the steps to [add extensions from the portal](../../azure-functions/functions-bindings-register.md) and update the C# project file to include the following package references:
The following XML element is an example C# project file:
## Next steps
-In this tutorial, you set up a function app in Azure Functions to work as an Azure custom provider endpoint.
+In this tutorial, you set up a function app in Azure Functions to work as an Azure Custom Provider endpoint.
To learn how to author a RESTful custom provider endpoint, see [Tutorial: Authoring a RESTful custom provider endpoint](./tutorial-custom-providers-function-authoring.md).
azure-resource-manager Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/extension-resource-types.md
Title: Extension resource types description: Lists the Azure resource types are used to extend the capabilities of other resource types. Previously updated : 04/20/2022 Last updated : 06/03/2022 # Resource types that extend capabilities of other resources
An extension resource is a resource that adds to another resource's capabilities
* artifactSetDefinitions * artifactSetSnapshots
-* chaosProviderConfigurations
-* chaosTargets
* targets ## Microsoft.Consumption
An extension resource is a resource that adds to another resource's capabilities
* backupInstances
-## Microsoft.Diagnostics
-
-* apollo
-* insights
-* solutions
- ## Microsoft.EventGrid * eventSubscriptions
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.KubernetesConfiguration * extensions
+* extensionTypes
* fluxConfigurations * namespaces * sourceControlConfigurations
An extension resource is a resource that adds to another resource's capabilities
* Compliances * dataCollectionAgents * deviceSecurityGroups
+* governanceRules
* InformationProtectionPolicies * insights * jitPolicies
An extension resource is a resource that adds to another resource's capabilities
* automationRules * bookmarks * cases
+* dataConnectorDefinitions
* dataConnectors * enrichment * entities
An extension resource is a resource that adds to another resource's capabilities
* metadata * MitreCoverageRecords * onboardingStates
+* overview
* securityMLAnalyticsSettings * settings * sourceControls
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 04/20/2022 Last updated : 06/03/2022 # Resources not limited to 800 instances per resource group
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.ContainerInstance
+* containerGroupProfiles
* containerGroups
+* containerScaleSets
## Microsoft.ContainerRegistry
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 05/13/2022 Last updated : 06/03/2022 # Tag support for Azure resources
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | accounts | Yes | Yes | > | accounts / datapools | No | No |
+> | workspaces | Yes | Yes |
## Microsoft.AutonomousSystems
To get the same data as a file of comma-separated values, download [tag-support.
> | DataControllers | Yes | Yes | > | DataControllers / ActiveDirectoryConnectors | No | No | > | PostgresInstances | Yes | Yes |
-> | sqlManagedInstances | Yes | Yes |
+> | SqlManagedInstances | Yes | Yes |
> | SqlServerInstances | Yes | Yes | ## Microsoft.AzureCIS
To get the same data as a file of comma-separated values, download [tag-support.
> | clusters | Yes | Yes | > | clusters / arcSettings | No | No | > | clusters / arcSettings / extensions | No | No |
-> | galleryimages | Yes | Yes |
+> | clusters / offers | No | No |
+> | clusters / publishers | No | No |
+> | clusters / publishers / offers | No | No |
+> | galleryImages | Yes | Yes |
> | networkinterfaces | Yes | Yes | > | virtualharddisks | Yes | Yes | > | virtualmachines | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | batchAccounts | Yes | Yes | > | batchAccounts / certificates | No | No |
+> | batchAccounts / detectors | No | No |
> | batchAccounts / pools | No | No | ## Microsoft.Billing
To get the same data as a file of comma-separated values, download [tag-support.
> | profiles / endpoints / origins | No | No | > | profiles / origingroups | No | No | > | profiles / origingroups / origins | No | No |
+> | profiles / policies | No | No |
> | profiles / rulesets | No | No | > | profiles / rulesets / rules | No | No | > | profiles / secrets | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | artifactSetDefinitions | No | No | > | artifactSetSnapshots | No | No |
-> | chaosExperiments | Yes | Yes |
-> | chaosProviderConfigurations | No | No |
-> | chaosTargets | No | No |
> | experiments | Yes | Yes | > | targets | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | containerGroupProfiles | Yes | Yes |
> | containerGroups | Yes | Yes |
+> | containerScaleSets | Yes | Yes |
> | serviceAssociationLinks | No | No | ## Microsoft.ContainerRegistry
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | grafana | Yes | Yes |
+> | grafana / privateEndpointConnections | No | No |
+> | grafana / privateLinkResources | No | No |
## Microsoft.DataBox
To get the same data as a file of comma-separated values, download [tag-support.
> | labs / virtualMachines | Yes | Yes | > | schedules | Yes | Yes |
-## Microsoft.Diagnostics
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | apollo | No | No |
-> | azureKB | No | No |
-> | insights | No | No |
-> | solutions | No | No |
- ## Microsoft.DigitalTwins > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | jobs | Yes | Yes |
-## Microsoft.IndustryDataLifecycle
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | baseModels | Yes | Yes |
-> | baseModels / entities | No | No |
-> | baseModels / relationships | No | No |
-> | builtInModels | No | No |
-> | builtInModels / entities | No | No |
-> | builtInModels / relationships | No | No |
-> | collaborativeInvitations | No | No |
-> | custodianCollaboratives | Yes | Yes |
-> | custodianCollaboratives / collaborativeImage | No | No |
-> | custodianCollaboratives / dataModels | No | No |
-> | custodianCollaboratives / dataModels / mergePipelines | No | No |
-> | custodianCollaboratives / invitations | No | No |
-> | custodianCollaboratives / invitations / termsOfUseDocuments | No | No |
-> | custodianCollaboratives / receivedDataPackages | No | No |
-> | custodianCollaboratives / termsOfUseDocuments | No | No |
-> | dataConsumerCollaboratives | Yes | Yes |
-> | dataproviders | No | No |
-> | derivedModels | Yes | Yes |
-> | derivedModels / entities | No | No |
-> | derivedModels / relationships | No | No |
-> | generateMappingTemplate | No | No |
-> | memberCollaboratives | Yes | Yes |
-> | memberCollaboratives / sharedDataPackages | No | No |
-> | modelMappings | Yes | Yes |
-> | pipelineSets | Yes | Yes |
- ## microsoft.insights > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | extensions | No | No |
+> | extensionTypes | No | No |
> | fluxConfigurations | No | No | > | namespaces | No | No | > | privateLinkScopes | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | workspaces / components / versions | No | No | > | workspaces / computes | No | No | > | workspaces / data | No | No |
+> | workspaces / data / versions | No | No |
> | workspaces / datasets | No | No | > | workspaces / datastores | No | No | > | workspaces / environments | No | No |
+> | workspaces / environments / versions | No | No |
> | workspaces / eventGridFilters | No | No | > | workspaces / jobs | No | No | > | workspaces / labelingJobs | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | privateStoreClient | No | No | > | privateStores | No | No | > | privateStores / AdminRequestApprovals | No | No |
+> | privateStores / anyExistingOffersInTheCollections | No | No |
> | privateStores / billingAccounts | No | No | > | privateStores / bulkCollectionsAction | No | No | > | privateStores / collections | No | No |
+> | privateStores / collections / approveAllItems | No | No |
+> | privateStores / collections / disableApproveAllItems | No | No |
> | privateStores / collections / offers | No | No |
+> | privateStores / collections / offers / upsertOfferWithMultiContext | No | No |
> | privateStores / collections / transferOffers | No | No | > | privateStores / collectionsToSubscriptionsMapping | No | No | > | privateStores / fetchAllSubscriptionsInTenant | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | privateStores / queryApprovedPlans | No | No | > | privateStores / queryNotificationsState | No | No | > | privateStores / queryOffers | No | No |
+> | privateStores / queryUserOffers | No | No |
> | privateStores / RequestApprovals | No | No | > | privateStores / requestApprovals / query | No | No | > | privateStores / requestApprovals / withdrawPlan | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | frontdoorWebApplicationFirewallPolicies | Yes, but limited (see [note below](#network-limitations)) | Yes | > | getDnsResourceReference | No | No | > | internalNotify | No | No |
+> | internalPublicIpAddresses | No | No |
> | ipGroups | Yes | Yes | > | loadBalancers | Yes | Yes | > | localNetworkGateways | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | bareMetalMachines | Yes | Yes | > | clusterManagers | Yes | Yes | > | clusters | Yes | Yes |
+> | hybridAksClusters | Yes | Yes |
+> | hybridAksManagementDomains | Yes | Yes |
+> | hybridAksVirtualMachines | Yes | Yes |
> | rackManifests | Yes | Yes | > | racks | Yes | Yes | > | virtualMachines | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | azureTrafficCollectors | Yes | Yes |
+> | azureTrafficCollectors / collectorPolicies | Yes | Yes |
> | meshVpns | Yes | Yes | > | meshVpns / connectionPolicies | Yes | Yes | > | meshVpns / privateEndpointConnectionProxies | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | playeraccountpools | Yes | Yes |
+> | playerAccountPools | Yes | Yes |
> | titles | Yes | Yes | > | titles / segments | No | No |
-> | titles / titledatakeyvalues | No | No |
-> | titles / titleinternaldatakeyvalues | No | No |
+> | titles / titleDataSets | No | No |
+> | titles / titleInternalDataKeyValues | No | No |
+> | titles / titleInternalDataSets | No | No |
## Microsoft.PolicyInsights
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | availabilitysets | Yes | Yes |
+> | AvailabilitySets | Yes | Yes |
> | Clouds | Yes | Yes | > | VirtualMachines | Yes | Yes | > | VirtualMachineTemplates | Yes | Yes | > | VirtualNetworks | Yes | Yes |
-> | vmmservers | Yes | Yes |
+> | VMMServers | Yes | Yes |
> | VMMServers / InventoryItems | No | No | ## Microsoft.Search
To get the same data as a file of comma-separated values, download [tag-support.
> | MdeOnboardings | No | No | > | policies | No | No | > | pricings | No | No |
+> | query | No | No |
> | regulatoryComplianceStandards | No | No | > | regulatoryComplianceStandards / regulatoryComplianceControls | No | No | > | regulatoryComplianceStandards / regulatoryComplianceControls / regulatoryComplianceAssessments | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | subAssessments | No | No | > | tasks | No | No | > | topologies | No | No |
+> | vmScanners | Yes | Yes |
> | workspaceSettings | No | No | ## Microsoft.SecurityDetonation
To get the same data as a file of comma-separated values, download [tag-support.
> | automationRules | No | No | > | bookmarks | No | No | > | cases | No | No |
+> | dataConnectorDefinitions | No | No |
> | dataConnectors | No | No | > | enrichment | No | No | > | entities | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | metadata | No | No | > | MitreCoverageRecords | No | No | > | onboardingStates | No | No |
+> | overview | No | No |
> | securityMLAnalyticsSettings | No | No | > | settings | No | No | > | sourceControls | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | storageAccounts / queueServices | No | No | > | storageAccounts / services | No | No | > | storageAccounts / services / metricDefinitions | No | No |
+> | storageAccounts / storageTaskAssignments | No | No |
> | storageAccounts / tableServices | No | No |
+> | storageTasks | Yes | Yes |
> | usages | No | No | ## Microsoft.StorageCache
To get the same data as a file of comma-separated values, download [tag-support.
> | sourceControls | No | No | > | staticSites | Yes | Yes | > | staticSites / builds | No | No |
+> | staticSites / builds / linkedBackends | No | No |
> | staticSites / builds / userProvidedFunctionApps | No | No |
+> | staticSites / linkedBackends | No | No |
> | staticSites / userProvidedFunctionApps | No | No | > | validate | No | No | > | verifyHostingEnvironmentVnet | No | No |
azure-resource-manager Deployment Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md
Title: Complete mode deletion description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates. Previously updated : 04/20/2022 Last updated : 06/03/2022 # Deletion of Azure resources for complete mode deployments
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | accounts | Yes | > | accounts / datapools | No |
+> | workspaces | Yes |
## Microsoft.AutonomousSystems
The resources are listed by resource provider namespace. To match a resource pro
> | DataControllers | Yes | > | DataControllers / ActiveDirectoryConnectors | No | > | PostgresInstances | Yes |
-> | sqlManagedInstances | Yes |
+> | SqlManagedInstances | Yes |
> | SqlServerInstances | Yes | ## Microsoft.AzureCIS
The resources are listed by resource provider namespace. To match a resource pro
> | clusters | Yes | > | clusters / arcSettings | No | > | clusters / arcSettings / extensions | No |
-> | galleryimages | Yes |
+> | clusters / offers | No |
+> | clusters / publishers | No |
+> | clusters / publishers / offers | No |
+> | galleryImages | Yes |
> | networkinterfaces | Yes | > | virtualharddisks | Yes | > | virtualmachines | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | batchAccounts | Yes | > | batchAccounts / certificates | No |
+> | batchAccounts / detectors | No |
> | batchAccounts / pools | No | ## Microsoft.Billing
The resources are listed by resource provider namespace. To match a resource pro
> | profiles / endpoints / origins | No | > | profiles / origingroups | No | > | profiles / origingroups / origins | No |
+> | profiles / policies | No |
> | profiles / rulesets | No | > | profiles / rulesets / rules | No | > | profiles / secrets | No |
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | artifactSetDefinitions | No | > | artifactSetSnapshots | No |
-> | chaosExperiments | Yes |
-> | chaosProviderConfigurations | No |
-> | chaosTargets | No |
> | experiments | Yes | > | targets | No |
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | containerGroupProfiles | Yes |
> | containerGroups | Yes |
+> | containerScaleSets | Yes |
> | serviceAssociationLinks | No | ## Microsoft.ContainerRegistry
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | grafana | Yes |
+> | grafana / privateEndpointConnections | No |
+> | grafana / privateLinkResources | No |
## Microsoft.DataBox
The resources are listed by resource provider namespace. To match a resource pro
> | labs / virtualMachines | Yes | > | schedules | Yes |
-## Microsoft.Diagnostics
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | apollo | No |
-> | azureKB | No |
-> | insights | No |
-> | solutions | No |
- ## Microsoft.DigitalTwins > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | jobs | Yes |
-## Microsoft.IndustryDataLifecycle
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | baseModels | Yes |
-> | baseModels / entities | No |
-> | baseModels / relationships | No |
-> | builtInModels | No |
-> | builtInModels / entities | No |
-> | builtInModels / relationships | No |
-> | collaborativeInvitations | No |
-> | custodianCollaboratives | Yes |
-> | custodianCollaboratives / collaborativeImage | No |
-> | custodianCollaboratives / dataModels | No |
-> | custodianCollaboratives / dataModels / mergePipelines | No |
-> | custodianCollaboratives / invitations | No |
-> | custodianCollaboratives / invitations / termsOfUseDocuments | No |
-> | custodianCollaboratives / receivedDataPackages | No |
-> | custodianCollaboratives / termsOfUseDocuments | No |
-> | dataConsumerCollaboratives | Yes |
-> | dataproviders | No |
-> | derivedModels | Yes |
-> | derivedModels / entities | No |
-> | derivedModels / relationships | No |
-> | generateMappingTemplate | No |
-> | memberCollaboratives | Yes |
-> | memberCollaboratives / sharedDataPackages | No |
-> | modelMappings | Yes |
-> | pipelineSets | Yes |
- ## microsoft.insights > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | extensions | No |
+> | extensionTypes | No |
> | fluxConfigurations | No | > | namespaces | No | > | privateLinkScopes | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | workspaces / components / versions | No | > | workspaces / computes | No | > | workspaces / data | No |
+> | workspaces / data / versions | No |
> | workspaces / datasets | No | > | workspaces / datastores | No | > | workspaces / environments | No |
+> | workspaces / environments / versions | No |
> | workspaces / eventGridFilters | No | > | workspaces / jobs | No | > | workspaces / labelingJobs | No |
The resources are listed by resource provider namespace. To match a resource pro
> | privateStoreClient | No | > | privateStores | No | > | privateStores / AdminRequestApprovals | No |
+> | privateStores / anyExistingOffersInTheCollections | No |
> | privateStores / billingAccounts | No | > | privateStores / bulkCollectionsAction | No | > | privateStores / collections | No |
+> | privateStores / collections / approveAllItems | No |
+> | privateStores / collections / disableApproveAllItems | No |
> | privateStores / collections / offers | No |
+> | privateStores / collections / offers / upsertOfferWithMultiContext | No |
> | privateStores / collections / transferOffers | No | > | privateStores / collectionsToSubscriptionsMapping | No | > | privateStores / fetchAllSubscriptionsInTenant | No |
The resources are listed by resource provider namespace. To match a resource pro
> | privateStores / queryApprovedPlans | No | > | privateStores / queryNotificationsState | No | > | privateStores / queryOffers | No |
+> | privateStores / queryUserOffers | No |
> | privateStores / RequestApprovals | No | > | privateStores / requestApprovals / query | No | > | privateStores / requestApprovals / withdrawPlan | No |
The resources are listed by resource provider namespace. To match a resource pro
> | frontdoorWebApplicationFirewallPolicies | Yes | > | getDnsResourceReference | No | > | internalNotify | No |
+> | internalPublicIpAddresses | No |
> | ipGroups | Yes | > | loadBalancers | Yes | > | localNetworkGateways | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | bareMetalMachines | Yes | > | clusterManagers | Yes | > | clusters | Yes |
+> | hybridAksClusters | Yes |
+> | hybridAksManagementDomains | Yes |
+> | hybridAksVirtualMachines | Yes |
> | rackManifests | Yes | > | racks | Yes | > | virtualMachines | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | azureTrafficCollectors | Yes |
+> | azureTrafficCollectors / collectorPolicies | Yes |
> | meshVpns | Yes | > | meshVpns / connectionPolicies | Yes | > | meshVpns / privateEndpointConnectionProxies | No |
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | playeraccountpools | Yes |
+> | playerAccountPools | Yes |
> | titles | Yes | > | titles / segments | No |
-> | titles / titledatakeyvalues | No |
-> | titles / titleinternaldatakeyvalues | No |
+> | titles / titleDataSets | No |
+> | titles / titleInternalDataKeyValues | No |
+> | titles / titleInternalDataSets | No |
## Microsoft.PolicyInsights
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | availabilitysets | Yes |
+> | AvailabilitySets | Yes |
> | Clouds | Yes | > | VirtualMachines | Yes | > | VirtualMachineTemplates | Yes | > | VirtualNetworks | Yes |
-> | vmmservers | Yes |
+> | VMMServers | Yes |
> | VMMServers / InventoryItems | No | ## Microsoft.Search
The resources are listed by resource provider namespace. To match a resource pro
> | MdeOnboardings | No | > | policies | No | > | pricings | No |
+> | query | No |
> | regulatoryComplianceStandards | No | > | regulatoryComplianceStandards / regulatoryComplianceControls | No | > | regulatoryComplianceStandards / regulatoryComplianceControls / regulatoryComplianceAssessments | No |
The resources are listed by resource provider namespace. To match a resource pro
> | subAssessments | No | > | tasks | No | > | topologies | No |
+> | vmScanners | Yes |
> | workspaceSettings | No | ## Microsoft.SecurityDetonation
The resources are listed by resource provider namespace. To match a resource pro
> | automationRules | No | > | bookmarks | No | > | cases | No |
+> | dataConnectorDefinitions | No |
> | dataConnectors | No | > | enrichment | No | > | entities | No |
The resources are listed by resource provider namespace. To match a resource pro
> | metadata | No | > | MitreCoverageRecords | No | > | onboardingStates | No |
+> | overview | No |
> | securityMLAnalyticsSettings | No | > | settings | No | > | sourceControls | No |
The resources are listed by resource provider namespace. To match a resource pro
> | storageAccounts / queueServices | No | > | storageAccounts / services | No | > | storageAccounts / services / metricDefinitions | No |
+> | storageAccounts / storageTaskAssignments | No |
> | storageAccounts / tableServices | No |
+> | storageTasks | Yes |
> | usages | No | ## Microsoft.StorageCache
The resources are listed by resource provider namespace. To match a resource pro
> | sourceControls | No | > | staticSites | Yes | > | staticSites / builds | No |
+> | staticSites / builds / linkedBackends | No |
> | staticSites / builds / userProvidedFunctionApps | No |
+> | staticSites / linkedBackends | No |
> | staticSites / userProvidedFunctionApps | No | > | validate | No | > | verifyHostingEnvironmentVnet | No |
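The "Complete mode deletion" column shows whether a resource of that type is deleted when it's missing from a template deployed in complete mode. As a minimal, hedged sketch of what that means in practice (the resource group name and template file below are placeholders, not values from these tables), a complete-mode deployment with the Azure CLI looks like this:

```azurecli
# Preview which existing resources a complete-mode deployment would delete.
az deployment group what-if \
  --resource-group myResourceGroup \
  --mode Complete \
  --template-file azuredeploy.json

# Deploy in complete mode: resources in the resource group that aren't defined in
# azuredeploy.json, and whose type is marked "Yes" above, are deleted.
az deployment group create \
  --resource-group myResourceGroup \
  --mode Complete \
  --template-file azuredeploy.json
```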
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
The following list shows the insights you can retrieve from your videos using Az
* **Speaker enumeration**: Maps and understands which speaker spoke which words and when. Sixteen speakers can be detected in a single audio-file. * **Speaker statistics**: Provides statistics for speakers' speech ratios. * **Textual content moderation**: Detects explicit text in the audio transcript.
-* **Audio effects** (preview): Detects the following audio effects in the non-speech segments of the content: Gunshot, Glass shatter, Alarm, Siren, Explosion, Dog Bark, Screaming, Laughter, Crowd reactions (cheering, clapping, and booing) and Silence. Note: the full set of events is available only when choosing 'Advanced Audio Analysis' in upload preset, otherwise only 'Silence' and 'Crowd reaction' will be available.
* **Emotion detection**: Identifies emotions based on speech (what's being said) and voice tonality (how it's being said). The emotion could be joy, sadness, anger, or fear. * **Translation**: Creates translations of the audio transcript to 54 different languages. * **Audio effects detection** (preview): Detects the following audio effects in the non-speech segments of the content: alarm or siren, dog barking, crowd reactions (cheering, clapping, and booing), gunshot or explosion, laughter, breaking glass, and silence.
azure-vmware Azure Vmware Solution Horizon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-horizon.md
Title: Deploy Horizon on Azure VMware Solution description: Learn how to deploy VMware Horizon on Azure VMware Solution. Previously updated : 09/29/2020 Last updated : 04/11/2022
Horizon 2006 and later versions on the Horizon 8 release line supports both on-p
## Deploy Horizon in a hybrid cloud
-You can deploy Horizon in a hybrid cloud environment by using Horizon Cloud Pod Architecture (CPA) to interconnect on-premises and Azure datacenters. CPA scales up your deployment, builds a hybrid cloud, and provides redundancy for Business Continuity and Disaster Recovery. For more information, see [Expanding Existing Horizon 7 Environments](https://techzone.vmware.com/resource/business-continuity-vmware-horizon#_Toc41650874).
+You can deploy Horizon in a hybrid cloud environment by using Horizon Cloud Pod Architecture (CPA) to interconnect on-premises and Azure data centers. CPA scales up your deployment, builds a hybrid cloud, and provides redundancy for Business Continuity and Disaster Recovery. For more information, see [Expanding Existing Horizon 7 Environments](https://techzone.vmware.com/resource/business-continuity-vmware-horizon#_Toc41650874).
>[!IMPORTANT] >CPA is not a stretched deployment; each Horizon pod is distinct, and all Connection Servers that belong to each of the individual pods are required to be located in a single location and run on the same broadcast domain from a network perspective.
-Like on-premises or private datacenter, you can deploy Horizon in an Azure VMware Solution private cloud. We'll discuss key differences in deploying Horizon on-premises and Azure VMware Solution in the following sections.
+Like on-premises or private data centers, you can deploy Horizon in an Azure VMware Solution private cloud. We'll discuss key differences in deploying Horizon on-premises and Azure VMware Solution in the following sections.
The _Azure private cloud_ is conceptually the same as the _VMware SDDC_, a term typically used in Horizon documentation. The rest of this document uses both terms interchangeably.
The Horizon Cloud Connector is required for Horizon on Azure VMware Solution to
>[!IMPORTANT] >Horizon Control Plane support for Horizon on Azure VMware Solution is not yet available. Be sure to download the VHD version of Horizon Cloud Connector.
-## vCenter Cloud Admin role
+## vCenter Server Cloud Admin role
-Since Azure VMware Solution is an SDDC service and Azure manages the lifecycle of the SDDC on Azure VMware Solution, the vCenter permission model on Azure VMware Solution is limited by design.
+Since Azure VMware Solution is an SDDC service and Azure manages the lifecycle of the SDDC on Azure VMware Solution, the vCenter Server permission model on Azure VMware Solution is limited by design.
-Customers are required to use the Cloud Admin role, which has a limited set of vCenter permissions. The Horizon product was modified to work with the Cloud Admin role on Azure VMware Solution, specifically:
+Customers are required to use the Cloud Admin role, which has a limited set of vCenter Server permissions. The Horizon product was modified to work with the Cloud Admin role on Azure VMware Solution, specifically:
* Instant clone provisioning was modified to run on Azure VMware Solution.
Customers are required to use the Cloud Admin role, which has a limited set of v
## Horizon on Azure VMware Solution deployment architecture
-A typical Horizon architecture design uses a pod and block strategy. A block is a single vCenter, while multiple blocks combined make a pod. A Horizon pod is a unit of organization determined by Horizon scalability limits. Each Horizon pod has a separate management portal, and so a standard design practice is to minimize the number of pods.
+A typical Horizon architecture design uses a pod and block strategy. A block is a single vCenter Server, while multiple blocks combined make a pod. A Horizon pod is a unit of organization determined by Horizon scalability limits. Each Horizon pod has a separate management portal, and so a standard design practice is to minimize the number of pods.
-Every cloud has its own network connectivity scheme. Combined with VMware SDDC networking / NSX Edge, the Azure VMware Solution network connectivity presents unique requirements for deploying Horizon that is different from on-premises.
+Every cloud has its own network connectivity scheme. Combined with VMware SDDC networking / NSX-T Data Center, the Azure VMware Solution network connectivity presents unique requirements for deploying Horizon that is different from on-premises.
Each Azure private cloud and SDDC can handle 4,000 desktop or application sessions, assuming:
You connect your AD domain controller in Azure Virtual Network with your on-prem
A variation on the basic example might be to support connectivity for on-premises resources. For example, users access desktops and generate virtual desktop application traffic or connect to an on-premises Horizon pod using CPA.
-The diagram shows how to support connectivity for on-premises resources. To connect to your corporate network to the Azure Virtual Network, you'll need an ExpressRoute circuit. You'll also need to connect your corporate network with each of the private cloud and SDDCs using ExpressRoute Global Reach. It allows the connectivity from the SDDC to the ExpressRoute circuit and on-premises resources.
+The diagram shows how to support connectivity for on-premises resources. To connect your corporate network to the Azure Virtual Network, you'll need an ExpressRoute circuit. You'll also need to connect your corporate network with each of the private clouds and SDDCs using ExpressRoute Global Reach, which allows connectivity from the SDDC to the ExpressRoute circuit and on-premises resources.
:::image type="content" source="media/vmware-horizon/connect-corporate-network-azure-virtual-network.png" alt-text="Diagram showing the connection of a corporate network to an Azure Virtual Network." border="false"::: ### Multiple Horizon pods on Azure VMware Solution across multiple regions
-Another scenario is scaling Horizon across multiple pods. In this scenario, you deploy two Horizon pods in two different regions and federate them using CPA. It's similar to the network configuration in the previous example, but with some additional cross-regional links.
+Another scenario is scaling Horizon across multiple pods. In this scenario, you deploy two Horizon pods in two different regions and federate them using CPA. It's similar to the network configuration in the previous example, but with some additional cross-regional links.
You'll connect the Azure Virtual Network in each region to the private clouds/SDDCs in the other region. It allows Horizon connection servers part of the CPA federation to connect to all desktops under management. Adding extra private clouds/SDDCs to this configuration would allow you to scale to 24,000 sessions overall. 
-The same principles apply if you deploy two Horizon pods in the same region. Make sure to deploy the second Horizon pod in a *separate Azure Virtual Network*. Just like the single pod example, you can connect your corporate network and on-premises pod to this multi-pod/region example using ExpressRoute and Global Reach.
+The same principles apply if you deploy two Horizon pods in the same region. Make sure to deploy the second Horizon pod in a *separate Azure Virtual Network*. Just like the single pod example, you can connect your corporate network and on-premises pod to this multi-pod/region example using ExpressRoute and Global Reach.
:::image type="content" source="media/vmware-horizon/multiple-horizon-pod-azure-vmware-solution.png" alt-text=" Diagram showing multiple Horizon pods on Azure VMware Solution across multiple regions." border="false"::: ## Size Azure VMware Solution hosts for Horizon deployments
-Horizon's sizing methodology on a host running in Azure VMware Solution is simpler than Horizon on-premises. That's because the Azure VMware Solution host is standardized. Exact host sizing helps determine the number of hosts needed to support your VDI requirements. It's central to determining the cost-per-desktop.
+Horizon's sizing methodology on a host running in Azure VMware Solution is simpler than Horizon on-premises. That's because the Azure VMware Solution host is standardized. Exact host sizing helps determine the number of hosts needed to support your VDI requirements. It's central to determining the cost-per-desktop.
### Sizing tables
-Specific vCPU/vRAM requirements for Horizon virtual desktops depend on the customer's specific workload profile. Work with your MSFT and VMware sales team to help determine your vCPU/vRAM requirements for your virtual desktops.
+Specific vCPU/vRAM requirements for Horizon virtual desktops depend on the customer's specific workload profile. Work with your MSFT and VMware sales team to help determine your vCPU/vRAM requirements for your virtual desktops.
| vCPU per VM | vRAM per VM (GB) | Instance | 100 VMs | 200 VMs | 300 VMs | 400 VMs | 500 VMs | 600 VMs | 700 VMs | 800 VMs | 900 VMs | 1000 VMs | 2000 VMs | 3000 VMs | 4000 VMs | 5000 VMs | 6000 VMs | 6400 VMs | |:--:|:-:|:--:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
azure-vmware Bitnami Appliances Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/bitnami-appliances-deployment.md
Title: Deploy Bitnami virtual appliances description: Learn about the virtual appliances packed by Bitnami to deploy in your Azure VMware Solution private cloud. Previously updated : 09/15/2021 Last updated : 04/11/2022
In this article, you'll learn how to install and configure the following virtual
-## Step 2. Access the local vCenter of your private cloud
+## Step 2. Access the local vCenter Server of your private cloud
1. Sign in to the [Azure portal](https://portal.azure.com), select your private cloud, and then **Manage** > **Identity**.
-1. Copy the vCenter URL, username, and password. You'll use them to access your virtual machine (VM).
+1. Copy the vCenter Server URL, username, and password. You'll use them to access your virtual machine (VM).
1. Select **Overview**, select the VM, and then connect to it through RDP. If you need help with connecting, see [connect to a virtual machine](../virtual-machines/windows/connect-logon.md#connect-to-the-virtual-machine) for details.
In this article, you'll learn how to install and configure the following virtual
-## Step 3. Install the Bitnami OVA/OVF file in vCenter
+## Step 3. Install the Bitnami OVA/OVF file in vCenter Server
1. Right-click the cluster that you want to install the LAMP virtual appliance and select **Deploy OVF Template**.
In this article, you'll learn how to install and configure the following virtual
1. After the installation finishes, under **Actions**, select **Power on** to turn on the appliance.
-1. From the vCenter console, select **Launch Web Console** and sign in to the Bitnami virtual appliance. Check the [Bitnami virtual appliance support documentation](https://docs.bitnami.com/vmware-marketplace/faq/get-started/find-credentials/) for the default username and password.
+1. From the vCenter Server console, select **Launch Web Console** and sign in to the Bitnami virtual appliance. Check the [Bitnami virtual appliance support documentation](https://docs.bitnami.com/vmware-marketplace/faq/get-started/find-credentials/) for the default username and password.
>[!NOTE] >You can change the default password to a more secure one. For more information, see ...
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
Title: Deploy Arc for Azure VMware Solution (Preview) description: Learn how to set up and enable Arc for your Azure VMware Solution private cloud. Previously updated : 01/31/2022 Last updated : 04/11/2022 # Deploy Arc for Azure VMware Solution (Preview)
-In this article, you'll learn how to deploy Arc for Azure VMware Solution. Once you've set up the components needed for this public preview, you'll be ready to execute operations in Azure VMware Solution vCenter from the Azure portal. Operations are related to Create, Read, Update, and Delete (CRUD) virtual machines (VMs) in an Arc-enabled Azure VMware Solution private cloud. Users can also enable guest management and install Azure extensions once the private cloud is Arc-enabled.
+In this article, you'll learn how to deploy Arc for Azure VMware Solution. Once you've set up the components needed for this public preview, you'll be ready to execute operations in Azure VMware Solution vCenter Server from the Azure portal. Operations are related to Create, Read, Update, and Delete (CRUD) virtual machines (VMs) in an Arc-enabled Azure VMware Solution private cloud. Users can also enable guest management and install Azure extensions once the private cloud is Arc-enabled.
Before you begin checking off the prerequisites, verify the following actions have been done: - You deployed an Azure VMware Solution private cluster. - You have a connection to the Azure VMware Solution private cloud through your on-prem environment or your native Azure Virtual Network. -- There should be an isolated NSX-T segment for deploying the Arc for Azure VMware Solution Open Virtualization Appliance (OVA). If an isolated NSX-T segment doesn't exist, one will be created.
+- There should be an isolated NSX-T Data Center segment for deploying the Arc for Azure VMware Solution Open Virtualization Appliance (OVA). If an isolated NSX-T Data Center segment doesn't exist, one will be created.
## Prerequisites The following items are needed to ensure you're set up to begin the onboarding process to deploy Arc for Azure VMware Solution (Preview). - A jump box virtual machine (VM) with network access to the Azure VMware Solution vCenter.
- - From the jump-box VM, verify you have access to [vCenter and NSX-T portals](./tutorial-configure-networking.md).
+ - From the jump-box VM, verify you have access to [vCenter Server and NSX-T Manager portals](./tutorial-configure-networking.md).
- Verify that your Azure subscription has been enabled or you have connectivity to Azure end points, mentioned in the [Appendices](#appendices). - Resource group in the subscription where you have owner or contributor role. - A minimum of three free non-overlapping IPs addresses.
The following items are needed to ensure you're set up to begin the onboarding p
> [!NOTE] > Only the default port of 443 is supported. If you use a different port, Appliance VM creation will fail.
-At this point, you should have already deployed an Azure VMware Solution private cluster. You need to have a connection from your on-prem environment or your native Azure Virtual Network to the Azure VMware Solution private cloud.
+At this point, you should have already deployed an Azure VMware Solution private cloud. You need to have a connection from your on-prem environment or your native Azure Virtual Network to the Azure VMware Solution private cloud.
For Network planning and setup, use the [Network planning checklist - Azure VMware Solution | Microsoft Docs](./tutorial-network-checklist.md)
Use the following steps to guide you through the process to onboard in Arc for A
When Arc appliance is successfully deployed on your private cloud, you can do the following actions. - View the status from within the private cloud under **Operations > Azure Arc**, located in the left navigation. -- View the VMware infrastructure resources from the private cloud left navigation under **Private cloud** then select **Azure Arc vCenter resources**.-- Discover your VMware infrastructure resources and project them to Azure using the same browser experience, **Private cloud > Arc vCenter resources > Virtual Machines**.
+- View the VMware vSphere infrastructure resources from the private cloud left navigation under **Private cloud** then select **Azure Arc vCenter resources**.
+- Discover your VMware vSphere infrastructure resources and project them to Azure using the same browser experience, **Private cloud > Arc vCenter resources > Virtual Machines**.
- Similar to VMs, customers can enable networks, templates, resource pools, and data-stores in Azure. After you've enabled VMs to be managed from Azure, you can install guest management and do the following actions.
After you've enabled VMs to be managed from Azure, you can install guest managem
- To enable guest management, customers will be required to use admin credentials - VMtools should already be running on the VM > [!NOTE]
-> Azure VMware Solution vCenter will be available in global search but will NOT be available in the list of vCenters for Arc for VMware.
+> Azure VMware Solution vCenter Server will be available in global search but will NOT be available in the list of vCenter Servers for Arc for VMware.
- Customers can view the list of VM extensions available in public preview. - Change tracking
When the script has run successfully, you can check the status to see if Azure A
:::image type="content" source="media/deploy-arc-for-azure-vmware-solution/arc-private-cloud-configured.png" alt-text="Image showing navigation to Azure Arc state to verify it's configured."lightbox="media/deploy-arc-for-azure-vmware-solution/arc-private-cloud-configured.png":::
-**Arc enabled VMware resources**
+**Arc enabled VMware vSphere resources**
After the private cloud is Arc-enabled, vCenter resources should appear under **Virtual machines**. - From the left navigation, under **Azure Arc VMware resources (preview)**, locate **Virtual machines**.-- Choose **Virtual machines** to view the vCenter resources.
+- Choose **Virtual machines** to view the vCenter Server resources.
### Manage access to VMware resources through Azure Role-Based Access Control
-After your Azure VMware Solution vCenter resources have been enabled for access through Azure, there's one final step in setting up a self-service experience for your teams. You'll need to provide your teams with access to: compute, storage, networking, and other vCenter resources used to configure VMs.
+After your Azure VMware Solution vCenter resources have been enabled for access through Azure, there's one final step in setting up a self-service experience for your teams. You'll need to provide your teams with access to: compute, storage, networking, and other vCenter Server resources used to configure VMs.
-This section will demonstrate how to use custom roles to manage granular access to VMware resources through Azure.
+This section will demonstrate how to use custom roles to manage granular access to VMware vSphere resources through Azure.
#### Arc-enabled VMware vSphere custom roles Three custom roles are provided to meet your Role-based access control (RBAC) requirements. These roles can be applied to a whole subscription, resource group, or a single resource. -- Azure Arc VMware Administrator role-- Azure Arc VMware Private Cloud User role-- Azure Arc VMware VM Contributor role
+- Azure Arc VMware vSphere Administrator role
+- Azure Arc VMware vSphere Private Cloud User role
+- Azure Arc VMware vSphere VM Contributor role
The first role is for an Administrator. The other two roles apply to anyone who needs to deploy or manage a VM.
We recommend assigning this role at the subscription level or resource group you
1. Navigate to the Azure portal. 1. Locate the subscription, resource group, or the resource at the scope you want to provide for the custom role.
-1. Find the Arc-enabled Azure VMware Solution vCenter resources.
+1. Find the Arc-enabled Azure VMware Solution vCenter Server resources.
1. Navigate to the resource group and select the **Show hidden types** checkbox. 1. Search for "Azure VMware Solution". 1. Select **Access control (IAM)** in the table of contents located on the left navigation.
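If you'd rather script the assignment than use the portal, the following is a hedged sketch using the Azure CLI. The assignee, subscription ID, and resource group are placeholders, and the role name must match one of the custom roles listed above exactly (check the role names available in your tenant before running):

```azurecli
# Assign an Arc-enabled VMware vSphere custom role at resource group scope.
# Replace the assignee, scope, and role name with your own values.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Azure Arc VMware Administrator role" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup"
```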
We recommend assigning this role at the subscription level or resource group you
## Create Arc-enabled Azure VMware Solution virtual machine
-This section shows users how to create a virtual machine (VM) on VMware vCenter using Azure Arc. Before you begin, check the following prerequisite list to ensure you're set up and ready to create an Arc-enabled Azure VMware Solution VM.
+This section shows users how to create a virtual machine (VM) on VMware vCenter Server using Azure Arc. Before you begin, check the following prerequisite list to ensure you're set up and ready to create an Arc-enabled Azure VMware Solution VM.
### Prerequisites
Near the top of the **Virtual machines** page, you'll find five tabs labeled: **
1. The connectivity method defaults to **Public endpoint**. Create a **Username**, **Password**, and **Confirm password**. **Disks**
- - You can opt to change the disks configured in the template, add more disks, or update existing disks. These disks will be created on the default datastore per the VMware vCenter storage policies.
+ - You can opt to change the disks configured in the template, add more disks, or update existing disks. These disks will be created on the default datastore per the VMware vCenter Server storage policies.
- You can change the network interfaces configured in the template, add Network interface cards (NICs), or update existing NICs. You can also change the network that the NIC will be attached to provided you have permissions to the network resource. **Networking**
Near the top of the **Virtual machines** page, you'll find five tabs labeled: **
## Enable guest management and extension installation
-The guest management must be enabled on the VMware virtual machine (VM) before you can install an extension. Use the following prerequisite steps to enable guest management.
+Guest management must be enabled on the VMware vSphere virtual machine (VM) before you can install an extension. Use the following prerequisite steps to enable guest management.
**Prerequisite** 1. Navigate to [Azure portal](https://ms.portal.azure.com/).
-1. Locate the VMware VM you want to check for guest management and install extensions on, select the name of the VM.
+1. Locate the VMware vSphere VM you want to check for guest management and install extensions on, select the name of the VM.
1. Select **Configuration** from the left navigation for a VMware VM. 1. Verify **Enable guest management** has been checked.
The guest management must be enabled on the VMware virtual machine (VM) before y
> The following conditions are necessary to enable guest management on a VM. - The machine must be running a [Supported operating system](../azure-arc/servers/agent-overview.md).-- The machine needs to connect through the firewall to communicate over the Internet. Make sure the [URLs](../azure-arc/servers/agent-overview.md) listed aren't blocked.
+- The machine needs to connect through the firewall to communicate over the internet. Make sure the [URLs](../azure-arc/servers/agent-overview.md) listed aren't blocked.
- The machine can't be behind a proxy, it's not supported yet. - If you're using Linux VM, the account must not prompt to sign in on pseudo commands.
When the extension installation steps are completed, they trigger deployment and
Use the following guide to change your Arc appliance credential once you've changed your SDDC credentials.
-Use the **`Set Credential`** command to update the provider credentials for appliance resource. When **cloud admin** credentials are updated, use the following steps to update the credentials in the appliance store.
+Use the **`Set Credential`** command to update the provider credentials for the appliance resource. When **cloudadmin** credentials are updated, use the following steps to update the credentials in the appliance store.
1. Log into the jumpbox VM from where onboarding was performed. Change the directory to **onboarding directory**. 1. Run the following command for Windows-based jumpbox VM.
The following command invokes the set credential for the specified appliance res
Use the following steps to perform a manual upgrade for Arc appliance virtual machine (VM).
-1. Log into vCenter.
+1. Log into vCenter Server.
1. Locate the Arc appliance VM, which should be in the resource pool that was configured during onboarding. 1. Power off the VM. 1. Delete the VM.
Use the following steps to perform a manual upgrade for Arc appliance virtual ma
## Off board from Azure Arc-enabled Azure VMware Solution
-This section demonstrates how to remove your VMware virtual machines (VMs) from Azure management services.
+This section demonstrates how to remove your VMware vSphere virtual machines (VMs) from Azure management services.
If you've enabled guest management on your Arc-enabled Azure VMware Solution VMs and onboarded them to Azure management services by installing VM extensions on them, you'll need to uninstall the extensions to prevent continued billing. For example, if you installed an MMA extension to collect and send logs to an Azure Log Analytics workspace, you'll need to uninstall that extension. You'll also need to uninstall the Azure Connected Machine agent to avoid any problems installing the agent in future.
To avoid problems onboarding the same VM to **Guest management**, we recommend y
## Remove Arc-enabled Azure VMware Solution vSphere resources from Azure
-When you activate Arc-enabled Azure VMware Solution resources in Azure, a representation is created for them in Azure. Before you can delete the vCenter resource in Azure, you'll need to delete all of the Azure resource representations you created for your vSphere resources. To delete the Azure resource representations you created, do the following steps:
+When you activate Arc-enabled Azure VMware Solution resources in Azure, a representation is created for them in Azure. Before you can delete the vCenter Server resource in Azure, you'll need to delete all of the Azure resource representations you created for your vSphere resources. To delete the Azure resource representations you created, do the following steps:
1. Go to the Azure portal.
-1. Choose **Virtual machines** from Arc-enabled VMware resources in the private cloud.
+1. Choose **Virtual machines** from Arc-enabled VMware vSphere resources in the private cloud.
1. Select all the VMs that have an Azure Enabled value as **Yes**.
-1. Select **Remove from Azure**. This step will start deployment and remove these resources from Azure. The resources will remain in your vCenter.
+1. Select **Remove from Azure**. This step will start deployment and remove these resources from Azure. The resources will remain in your vCenter Server.
1. Repeat steps 2, 3 and 4 for **Resourcespools/clusters/hosts**, **Templates**, **Networks**, and **Datastores**. 1. When the deletion completes, select **Overview**. 1. Note the Custom location and the Azure Arc Resource bridge resources in the Essentials section. 1. Select **Remove from Azure** to remove the vCenter resource from Azure.
-1. Go to vCenter resource in Azure and delete it.
+1. Go to vCenter Server resource in Azure and delete it.
1. Go to the Custom location resource and select **Delete**. 1. Go to the Azure Arc Resource bridge resources and select **Delete**. At this point, all of your Arc-enabled VMware vSphere resources have been removed from Azure.
-## Delete Arc resources from vCenter
+## Delete Arc resources from vCenter Server
For the final step, you'll need to delete the resource bridge VM and the VM template that were created during the onboarding process. Once that step is done, Arc won't work on the Azure VMware Solution SDDC. When you delete Arc resources from vCenter, it won't affect the Azure VMware Solution private cloud for the customer.
azure-vmware Deploy Vm Content Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vm-content-library.md
Title: Create a content library to deploy VMs in Azure VMware Solution description: Create a content library to deploy a VM in an Azure VMware Solution private cloud. Previously updated : 06/28/2021 Last updated : 04/11/2022 # Create a content library to deploy VMs in Azure VMware Solution
In this article, you'll create a content library in the vSphere Client and then
## Prerequisites
-An NSX-T segment and a managed DHCP service are required to complete this tutorial. For more information, see [Configure DHCP for Azure VMware Solution](configure-dhcp-azure-vmware-solution.md).
+An NSX-T Data Center segment and a managed DHCP service are required to complete this tutorial. For more information, see [Configure DHCP for Azure VMware Solution](configure-dhcp-azure-vmware-solution.md).
## Create a content library
An NSX-T segment and a managed DHCP service are required to complete this tutori
:::image type="content" source="media/content-library/create-new-content-library.png" alt-text="Screenshot showing how to create a new content library in vSphere.":::
-1. Provide a name and confirm the IP address of the vCenter server and select **Next**.
+1. Provide a name and confirm the IP address of the vCenter Server and select **Next**.
:::image type="content" source="media/content-library/new-content-library-step-1.png" alt-text="Screenshot showing the name and vCenter Server IP for the new content library.":::
azure-vmware Disaster Recovery Using Vmware Site Recovery Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disaster-recovery-using-vmware-site-recovery-manager.md
Title: Deploy disaster recovery with VMware Site Recovery Manager description: Deploy disaster recovery with VMware Site Recovery Manager (SRM) in your Azure VMware Solution private cloud. Previously updated : 10/04/2021 Last updated : 04/11/2022 # Deploy disaster recovery with VMware Site Recovery Manager
In this article, you'll implement disaster recovery for on-premises VMware virtu
SRM helps you plan, test, and run the recovery of VMs between a protected vCenter Server site and a recovery vCenter Server site. You can use SRM with Azure VMware Solution with the following two DR scenarios: -- On-premise VMware to Azure VMware Solution private cloud disaster recovery
+- On-premises VMware to Azure VMware Solution private cloud disaster recovery
- Primary Azure VMware Solution to Secondary Azure VMware Solution private cloud disaster recovery The diagram shows the deployment of the primary Azure VMware Solution to secondary Azure VMware Solution scenario.
You can use SRM to implement different types of recovery, such as:
## Deployment workflow
-The workflow diagram shows the Primary Azure VMware Solution to secondary workflow. In addition, it shows steps to take within the Azure portal and the VMware environments of Azure VMware Solution to achieve the end-to-end protection of VMs.
+The workflow diagram shows the Primary Azure VMware Solution to secondary workflow. In addition, it shows steps to take within the Azure portal and the VMware vSphere environments of Azure VMware Solution to achieve the end-to-end protection of VMs.
:::image type="content" source="media/vmware-srm-vsphere-replication/site-recovery-manager-workflow.png" alt-text="Diagram showing the deployment workflow for VMware Site Recovery Manager on Azure VMware Solution." border="false"::: ## Prerequisites
-Make sure you've explicitly provided the remote user the VRM administrator and SRM administrator roles in the remote vCenter.
+Make sure you've explicitly provided the remote user the VRM administrator and SRM administrator roles in the remote vCenter Server.
### Scenario: On-premises to Azure VMware Solution
Make sure you've explicitly provided the remote user the VRM administrator and S
## Install SRM in Azure VMware Solution
-1. In your on-premises datacenter, install VMware SRM and vSphere.
+1. In your on-premises datacenter, install VMware SRM and vSphere Replication.
>[!NOTE] >Use the [Two-site Topology with one vCenter Server instance per PSC](https://docs.vmware.com/en/Site-Recovery-Manager/8.4/com.vmware.srm.install_config.doc/GUID-F474543A-88C5-4030-BB86-F7CC51DADE22.html) deployment model. Also, make sure that the [required vSphere Replication Network ports](https://kb.VMware.com/s/article/2087769) are opened.
After the SRM appliance installs successfully, you'll need to install the vSpher
:::image type="content" source="media/vmware-srm-vsphere-replication/vsphere-replication-3.png" alt-text="Screenshot showing that both SRM and the replication appliance are installed.":::
-## Configure site pairing in vCenter
+## Configure site pairing in vCenter Server
-After installing VMware SRM and vSphere Replication, you need to complete the configuration and site pairing in vCenter.
+After installing VMware SRM and vSphere Replication, you need to complete the configuration and site pairing in vCenter Server.
-1. Sign in to vCenter as cloudadmin@vsphere.local.
+1. Sign in to vCenter Server as cloudadmin@vsphere.local.
1. Navigate to **Site Recovery**, check the status of both vSphere Replication and VMware SRM, and then select **OPEN Site Recovery** to launch the client.
After installing VMware SRM and vSphere Replication, you need to complete the co
1. Enter the remote site details, and then select **NEXT**. >[!NOTE]
- >An Azure VMware Solution private cloud operates with an embedded Platform Services Controller (PSC), so only one local vCenter can be selected. If the remote vCenter is using an embedded Platform Service Controller (PSC), use the vCenter's FQDN (or its IP address) and port to specify the PSC.
+ >An Azure VMware Solution private cloud operates with an embedded Platform Services Controller (PSC), so only one local vCenter can be selected. If the remote vCenter Server is using an embedded Platform Service Controller (PSC), use the vCenter Server's FQDN (or its IP address) and port to specify the PSC.
>
- >The remote user must have sufficient permissions to perform the pairings. An easy way to ensure this is to give that user the VRM administrator and SRM administrator roles in the remote vCenter. For a remote Azure VMware Solution private cloud, cloudadmin is configured with those roles.
+ >The remote user must have sufficient permissions to perform the pairings. An easy way to ensure this is to give that user the VRM administrator and SRM administrator roles in the remote vCenter Server. For a remote Azure VMware Solution private cloud, cloudadmin is configured with those roles.
:::image type="content" source="media/vmware-srm-vsphere-replication/pair-the-sites-specify-details.png" alt-text="Screenshot showing the Site details for the new site pair." border="true" lightbox="media/vmware-srm-vsphere-replication/pair-the-sites-specify-details.png":::
-1. Select **CONNECT** to accept the certificate for the remote vCenter.
+1. Select **CONNECT** to accept the certificate for the remote vCenter Server.
At this point, the client should discover the VRM and SRM appliances on both sides as services to pair.
After installing VMware SRM and vSphere Replication, you need to complete the co
:::image type="content" source="media/vmware-srm-vsphere-replication/pair-the-sites-new-site.png" alt-text="Screenshot showing the vCenter Server and services details for the new site pair." border="true" lightbox="media/vmware-srm-vsphere-replication/pair-the-sites-new-site.png":::
-1. Select **CONNECT** to accept the certificates for the remote VMware SRM and the remote vCenter (again).
+1. Select **CONNECT** to accept the certificates for the remote VMware SRM and the remote vCenter Server (again).
-1. Select **CONNECT** to accept the certificates for the local VMware SRM and the local vCenter.
+1. Select **CONNECT** to accept the certificates for the local VMware SRM and the local vCenter Server.
1. Review the settings and then select **FINISH**.
After installing VMware SRM and vSphere Replication, you need to complete the co
>[!NOTE] >The SR client sometimes takes a long time to refresh. If an operation seems to take too long or appears "stuck", select the refresh icon on the menu bar.
-1. Select **VIEW DETAILS** to open the panel for remote site pairing, which opens a dialog to sign in to the remote vCenter.
+1. Select **VIEW DETAILS** to open the panel for remote site pairing, which opens a dialog to sign in to the remote vCenter Server.
:::image type="content" source="media/vmware-srm-vsphere-replication/view-details-remote-pairing.png" alt-text="Screenshot showing the new site pair details for Site Recovery Manager and vSphere Replication." border="true" lightbox="media/vmware-srm-vsphere-replication/view-details-remote-pairing.png":::
After installing VMware SRM and vSphere Replication, you need to complete the co
For pairing, the login, which is often a different user, is a one-time action to establish pairing. The SR client requires this login every time the client is launched to work with the pairing. >[!NOTE]
- >The user with sufficient permissions should have **VRM administrator** and **SRM administrator** roles given to them in the remote vCenter. The user should also have access to the remote vCenter inventory, like folders and datastores. For a remote Azure VMware Solution private cloud, the cloudadmin user has the appropriate permissions and access.
+ >The user with sufficient permissions should have **VRM administrator** and **SRM administrator** roles given to them in the remote vCenter Server. The user should also have access to the remote vCenter Server inventory, like folders and datastores. For a remote Azure VMware Solution private cloud, the cloudadmin user has the appropriate permissions and access.
:::image type="content" source="media/vmware-srm-vsphere-replication/sign-into-remote-vcenter.png" alt-text="Screenshot showing the vCenter Server credentials." border="true":::
If you no longer require SRM, you must uninstall it in a clean manner. Before yo
## Support
-VMware SRM is a Disaster Recovery solution from VMware.
+VMware Site Recovery Manager (SRM) is a Disaster Recovery solution from VMware.
Microsoft only supports install/uninstall of SRM and vSphere Replication Manager and scale up/down of vSphere Replication appliances within Azure VMware Solution.
azure-vmware Move Azure Vmware Solution Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/move-azure-vmware-solution-across-regions.md
Title: Move Azure VMware Solution resources across regions
description: This article describes how to move Azure VMware Solution resources from one Azure region to another. Previously updated : 06/01/2021 Last updated : 04/11/2022 # Customer intent: As an Azure service administrator, I want to move my Azure VMware Solution resources from Azure Region A to Azure Region B.
Before you can move the source configuration, you'll need to [deploy the target
### Back up the source configuration
-Back up the Azure VMware Solution (source) configuration that includes VC, NSX-T, and firewall policies and rules.
+Back up the Azure VMware Solution (source) configuration that includes vCenter Server, NSX-T Data Center, and firewall policies and rules.
-- **Compute:** Export existing inventory configuration. For Inventory backup, you can use RVtool (an open-source app).
+- **Compute:** Export existing inventory configuration. For Inventory backup, you can use RVtools (an open-source app).
- **Network and firewall policies and rules:** On the Azure VMware Solution target, create the same network segments as the source environment.
Now that you have the ExpressRoute circuit IDs and authorization keys for both e
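As a hedged sketch of scripting that connectivity step (this assumes the `az vmware` extension's `global-reach-connection` command group; verify the exact command and parameter names with `az vmware global-reach-connection create --help`, and treat the resource names, circuit ID, and authorization key below as placeholders):

```azurecli
# Create an ExpressRoute Global Reach connection from the target private cloud
# to the source private cloud's circuit, using its circuit ID and authorization key.
az vmware global-reach-connection create \
  --resource-group targetResourceGroup \
  --private-cloud targetPrivateCloud \
  --name source-to-target \
  --peer-express-route-circuit "/subscriptions/<subscription-id>/resourceGroups/<source-rg>/providers/Microsoft.Network/expressRouteCircuits/<source-circuit>" \
  --authorization-key "<authorization-key>"
```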
After you establish connectivity, you'll create a VMware HCX site pairing between the private clouds to facilitate the migration of your VMs. You can connect or pair the VMware HCX Cloud Manager in Azure VMware Solution with the VMware HCX Connector in your data center.
-1. Sign in to your source's vCenter, and under **Home**, select **HCX**.
+1. Sign in to your source's vCenter Server, and under **Home**, select **HCX**.
1. Under **Infrastructure**, select **Site Pairing** and select the **Connect To Remote Site** option (in the middle of the screen).
-1. Enter the Azure VMware Solution HCX Cloud Manager URL or IP address you noted earlier `https://x.x.x.9`, the Azure VMware Solution cloudadmin\@vsphere.local username, and the password. Then select **Connect**.
+1. Enter the Azure VMware Solution HCX Cloud Manager URL or IP address you noted earlier `https://x.x.x.9`, the Azure VMware Solution cloudadmin@vsphere.local username, and the password. Then select **Connect**.
> [!NOTE] > To successfully establish a site pair: > * Your VMware HCX Connector must be able to route to your HCX Cloud Manager IP over port 443. >
- > * Use the same password that you used to sign in to vCenter. You defined this password on the initial deployment screen.
+ > * Use the same password that you used to sign in to vCenter Server. You defined this password on the initial deployment screen.
You'll see a screen showing that your VMware HCX Cloud Manager in Azure VMware Solution and your on-premises VMware HCX Connector are connected (paired).
In this section, you'll migrate the:
In this step, you'll copy the source's vSphere configuration and move it to the target environment.
-1. From the source's vCenter, use the same resource pool configuration and [create the same resource pool configuration](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-0F6C6709-A5DA-4D38-BE08-6CB1002DD13D.html#example-creating-resource-pools-4) on the target's vCenter.
+1. From the source's vCenter Server, use the same resource pool configuration and [create the same resource pool configuration](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-0F6C6709-A5DA-4D38-BE08-6CB1002DD13D.html#example-creating-resource-pools-4) on the target's vCenter Server.
-2. From the source's vCenter, use the same VM folder name and [create the same VM folder](https://docs.vmware.com/en/VMware-Validated-Design/6.1/sddc-deployment-of-cloud-operations-and-automation-in-the-first-region/GUID-9D935BBC-1228-4F9D-A61D-B86C504E469C.html) on the target's vCenter under **Folders**.
+2. From the source's vCenter Server, use the same VM folder name and [create the same VM folder](https://docs.vmware.com/en/VMware-Validated-Design/6.1/sddc-deployment-of-cloud-operations-and-automation-in-the-first-region/GUID-9D935BBC-1228-4F9D-A61D-B86C504E469C.html) on the target's vCenter Server under **Folders**.
-3. Use VMware HCX to migrate all VM templates from the source's vCenter to the target's vCenter.
+3. Use VMware HCX to migrate all VM templates from the source's vCenter Server to the target's vCenter Server.
1. From the source, convert the existing templates to VMs and then migrate them to the target.
In this step, you'll copy the source's vSphere configuration and move it to the
4. From the source environment, use the same VM Tags name and [create them on the target's vCenter](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vcenterhost.doc/GUID-05323758-1EBF-406F-99B6-B1A33E893453.html).
-5. From the source's vCenter Content Library, use the subscribed library option to copy the ISO, OVF, OVA, and VM Templates to the target content library:
+5. From the source's vCenter Server Content Library, use the subscribed library option to copy the ISO, OVF, OVA, and VM Templates to the target content library:
1. If the content library isn't already published, select the **Enable publishing** option.
In this step, you'll copy the source's vSphere configuration and move it to the
4. Select **Sync Now**.
-### Configure the target NSX-T environment
+### Configure the target NSX-T Data Center environment
-In this step, you'll use the source NSX-T configuration to configure the target NSX-T environment.
+In this step, you'll use the source NSX-T Data Center configuration to configure the target NSX-T Data Center environment.
>[!NOTE]
->You'll have multiple features configured on the source NSX-T, so you must copy or read from the source NXS-T and recreate it in the target private cloud. Use L2 Extension to keep same IP address and Mac Address of the VM while migrating Source to target AVS Private Cloud to avoid downtime due to IP change and related configuration.
+>You'll have multiple features configured on the source NSX-T Data Center, so you must copy or read from the source NSX-T Data Center and recreate it in the target private cloud. Use L2 Extension to keep the same IP address and MAC address of a VM while migrating it from the source to the target Azure VMware Solution private cloud, avoiding downtime caused by IP changes and related reconfiguration.
-1. [Configure NSX network components](tutorial-nsx-t-network-segment.md) required in the target environment under default Tier-1 gateway.
+1. [Configure NSX-T Data Center network components](tutorial-nsx-t-network-segment.md) required in the target environment under default Tier-1 gateway.
1. [Create the security group configuration](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-41CC06DF-1CD4-4233-B43E-492A9A3AD5F6.html).
In this step, you'll use the source NSX-T configuration to configure the target
1. [Configure DNS forwarder](configure-dns-azure-vmware-solution.md).
-1. [Configure a new Tier-1 gateway (other than default)](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A6042263-374F-4292-892E-BC86876325A4.html). This configuration is based on the NSX-T configured on the source.
+1. [Configure a new Tier-1 gateway (other than default)](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A6042263-374F-4292-892E-BC86876325A4.html). This configuration is based on the NSX-T Data Center configured on the source.
### Migrate the VMs from the source
azure-vmware Rotate Cloudadmin Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/rotate-cloudadmin-credentials.md
Title: Rotate the cloudadmin credentials for Azure VMware Solution description: Learn how to rotate the vCenter Server credentials for your Azure VMware Solution private cloud. Previously updated : 09/10/2021 Last updated : 04/11/2022
-#Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter CloudAdmin credentials.
+#Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter Server CloudAdmin credentials.
Last updated 09/10/2021
In this article, you'll rotate the cloudadmin credentials (vCenter Server *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time. >[!CAUTION]
->If you use your cloudadmin credentials to connect services to vCenter in your private cloud, those connections will stop working once you rotate your password. Those connections will also lock out the cloudadmin account unless you stop those services before rotating the password.
+>If you use your cloudadmin credentials to connect services to vCenter Server in your private cloud, those connections will stop working once you rotate your password. Those connections will also lock out the cloudadmin account unless you stop those services before rotating the password.
## Prerequisites
-Consider and determine which services connect to vCenter as *cloudadmin@vsphere.local* before you rotate the password. These services may include VMware services such as HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other third-party tools used for monitoring or provisioning.
+Consider and determine which services connect to vCenter Server as *cloudadmin@vsphere.local* before you rotate the password. These services may include VMware services such as HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other third-party tools used for monitoring or provisioning.
-One way to determine which services authenticate to vCenter with the cloudadmin user is to inspect vSphere events using the vSphere Client for your private cloud. After you identify such services, and before rotating the password, you must stop these services. Otherwise, the services won't work after you rotate the password. You'll also experience temporary locks on your vCenter CloudAdmin account, as these services continuously attempt to authenticate using a cached version of the old credentials.
+One way to determine which services authenticate to vCenter Server with the cloudadmin user is to inspect vSphere events using the vSphere Client for your private cloud. After you identify such services, and before rotating the password, you must stop these services. Otherwise, the services won't work after you rotate the password. You'll also experience temporary locks on your vCenter Server CloudAdmin account, as these services continuously attempt to authenticate using a cached version of the old credentials.
Instead of using the cloudadmin user to connect services to vCenter, we recommend individual accounts for each service. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
-## Reset your vCenter credentials
+## Reset your vCenter Server credentials
### [Portal](#tab/azure-portal)
Instead of using the cloudadmin user to connect services to vCenter, we recommen
1. Select **Generate new password**.
- :::image type="content" source="media/rotate-cloudadmin-credentials/reset-vcenter-credentials-1.png" alt-text="Screenshot showing the vCenter credentials and a way to copy them or generate a new password." lightbox="media/rotate-cloudadmin-credentials/reset-vcenter-credentials-1.png":::
+ :::image type="content" source="media/rotate-cloudadmin-credentials/reset-vcenter-credentials-1.png" alt-text="Screenshot showing the vCenter Server credentials and a way to copy them or generate a new password." lightbox="media/rotate-cloudadmin-credentials/reset-vcenter-credentials-1.png":::
1. Select the confirmation checkbox and then select **Generate password**.
To begin using Azure CLI:
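If you manage the private cloud with Azure CLI, the rotation itself can also be triggered from the command line. The following is a minimal sketch, assuming the `vmware` CLI extension is installed and using placeholder names; the exact command surface can vary by extension version.

```azurecli
# Install the Azure VMware Solution extension (one time).
az extension add --name vmware

# Rotate the vCenter Server cloudadmin password for the private cloud.
az vmware private-cloud rotate-vcenter-password \
    --resource-group <resourceGroupName> \
    --private-cloud <privateCloudName>
```

Remember to stop any services that authenticate with the cloudadmin account before running the rotation, just as with the portal flow.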
3. Select the correct connection to Azure VMware Solution and select **Edit Connection**.
-4. Provide the new vCenter user credentials and select **Edit**, which saves the credentials. Save should show successful.
+4. Provide the new vCenter Server user credentials and select **Edit**, which saves the credentials. The save operation should show as successful.
## Next steps
-Now that you've covered resetting your vCenter credentials for Azure VMware Solution, you may want to learn about:
+Now that you've covered resetting your vCenter Server credentials for Azure VMware Solution, you may want to learn about:
- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md)
- [Deploying disaster recovery for Azure VMware Solution workloads using VMware HCX](deploy-disaster-recovery-using-vmware-hcx.md)
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Title: Configure vRealize Operations for Azure VMware Solution description: Learn how to set up vRealize Operations for your Azure VMware Solution private cloud. Previously updated : 01/26/2021 Last updated : 04/11/2022 # Configure vRealize Operations for Azure VMware Solution
-vRealize Operations Manager is an operations management platform that allows VMware infrastructure administrators to monitor system resources. These system resources could be application-level or infrastructure level (both physical and virtual) objects. Most VMware administrators have used vRealize Operations to monitor and manage the VMware private cloud components – vCenter, ESXi, NSX-T, vSAN, and VMware HCX. Each provisioned Azure VMware Solution private cloud includes a dedicated vCenter, NSX-T, vSAN, and HCX deployment.
+vRealize Operations is an operations management platform that allows VMware infrastructure administrators to monitor system resources. These system resources could be application-level or infrastructure level (both physical and virtual) objects. Most VMware administrators have used vRealize Operations to monitor and manage the VMware private cloud components – vCenter Server, ESXi, NSX-T Data Center, vSAN, and VMware HCX. Each provisioned Azure VMware Solution private cloud includes a dedicated vCenter Server, NSX-T Data Center, vSAN, and HCX deployment.
Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#prerequisites) first. Then, we'll walk you through the two typical deployment topologies:
Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#pre
## On-premises vRealize Operations managing Azure VMware Solution deployment
-Most customers have an existing on-premise deployment of vRealize Operations to manage one or more on-premise vCenters domains. When they provision an Azure VMware Solution private cloud, they connect their on-premises environment with their private cloud using an Azure ExpressRoute or a Layer 3 VPN solution.
+Most customers have an existing on-premises deployment of vRealize Operations to manage one or more on-premises vCenter Server domains. When they provision an Azure VMware Solution private cloud, they connect their on-premises environment with their private cloud using an Azure ExpressRoute or a Layer 3 VPN solution.
:::image type="content" source="media/vrealize-operations-manager/vrealize-operations-deployment-option-1.png" alt-text="Diagram showing the on-premises vRealize Operations managing Azure VMware Solution deployment." border="false":::
-To extend the vRealize Operations capabilities to the Azure VMware Solution private cloud, you create an adapter [instance for the private cloud resources](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.config.doc/GUID-640AD750-301E-4D36-8293-1BFEB67E2600.html). It collects data from the Azure VMware Solution private cloud and brings it into on-premises vRealize Operations. The on-premises vRealize Operations Manager instance can directly connect to the vCenter and NSX-T manager on Azure VMware Solution. Optionally, you can deploy a vRealize Operations Remote Collector on the Azure VMware Solution private cloud. The collector compresses and encrypts the data collected from the private cloud before it's sent over the ExpressRoute or VPN network to the vRealize Operations Manager running on-premise.
+To extend the vRealize Operations capabilities to the Azure VMware Solution private cloud, you create an adapter [instance for the private cloud resources](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.config.doc/GUID-640AD750-301E-4D36-8293-1BFEB67E2600.html). It collects data from the Azure VMware Solution private cloud and brings it into on-premises vRealize Operations. The on-premises vRealize Operations Manager instance can directly connect to the vCenter Server and NSX-T Manager on Azure VMware Solution. Optionally, you can deploy a vRealize Operations Remote Collector on the Azure VMware Solution private cloud. The collector compresses and encrypts the data collected from the private cloud before it's sent over the ExpressRoute or VPN network to the vRealize Operations Manager running on-premises.
> [!TIP] > Refer to the [VMware documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) for a step-by-step guide to installing vRealize Operations Manager.
Another option is to deploy an instance of vRealize Operations Manager on a vSph
:::image type="content" source="media/vrealize-operations-manager/vrealize-operations-deployment-option-2.png" alt-text="Diagram showing the vRealize Operations running on Azure VMware Solution." border="false":::
-Once the instance has been deployed, you can configure vRealize Operations to collect data from vCenter, ESXi, NSX-T, vSAN, and HCX.
+Once the instance has been deployed, you can configure vRealize Operations to collect data from vCenter Server, ESXi, NSX-T Data Center, vSAN, and HCX.
## Known limitations -- The **cloudadmin\@vsphere.local** user in Azure VMware Solution has [limited privileges](concepts-identity.md). Virtual machines (VMs) on Azure VMware Solution doesn't support in-guest memory collection using VMware tools. Active and consumed memory utilization continues to work in this case.
+- The **cloudadmin@vsphere.local** user in Azure VMware Solution has [limited privileges](concepts-identity.md). Virtual machines (VMs) on Azure VMware Solution don't support in-guest memory collection using VMware Tools. Active and consumed memory utilization continues to work in this case.
- Workload optimization for host-based business intent doesn't work because Azure VMware Solution manages cluster configurations, including DRS settings.
- Workload optimization for the cross-cluster placement within the SDDC using the cluster-based business intent is fully supported with vRealize Operations Manager 8.0 and onwards. However, workload optimization isn't aware of resource pools and places the VMs at the cluster level. A user can manually correct it in the Azure VMware Solution vCenter Server interface.
- You can't sign in to vRealize Operations Manager using your Azure VMware Solution vCenter Server credentials.
When you connect the Azure VMware Solution vCenter to vRealize Operations Manage
:::image type="content" source="./media/vrealize-operations-manager/warning-adapter-instance-creation-succeeded.png" alt-text="Screenshot showing a Warning message that states the adapter instance was created successfully.":::
-The warning occurs because the **cloudadmin\@vsphere.local** user in Azure VMware Solution doesn't have sufficient privileges to do all vCenter Server actions required for registration. However, the privileges are sufficient for the adapter instance to do data collection, as seen below:
+The warning occurs because the **cloudadmin@vsphere.local** user in Azure VMware Solution doesn't have sufficient privileges to do all vCenter Server actions required for registration. However, the privileges are sufficient for the adapter instance to do data collection, as seen below:
:::image type="content" source="./media/vrealize-operations-manager/adapter-instance-to-perform-data-collection.png" alt-text="Screenshot showing the adapter instance to collect data.":::
-For more information, see [Privileges Required for Configuring a vCenter Adapter Instance](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.core.doc/GUID-3BFFC92A-9902-4CF2-945E-EA453733B426.html).
+For more information, see [Privileges Required for Configuring a vCenter Server Adapter Instance](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.core.doc/GUID-3BFFC92A-9902-4CF2-945E-EA453733B426.html).
<!-- LINKS - external -->
backup Backup Azure Manage Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-manage-vms.md
Title: Manage and monitor Azure VM backups description: Learn how to manage and monitor Azure VM backups by using the Azure Backup service. Previously updated : 09/17/2021 Last updated : 06/03/2022+++ # Manage Azure VM backups with Azure Backup service
In the Azure portal, the Recovery Services vault dashboard provides access to va
You can manage backups by using the dashboard and by drilling down to individual VMs. To begin machine backups, open the vault on the dashboard: [!INCLUDE [backup-center.md](../../includes/backup-center.md)]
To view VMs on the vault dashboard:
1. For ease of use, select the pin icon next to your vault name and select **Pin to dashboard**.
1. Open the vault dashboard.
- :::image type="content" source="./media/backup-azure-manage-vms/full-view-rs-vault.png" alt-text="Screenshot showing to open the vault dashboard and Settings pane.":::
+ :::image type="content" source="./media/backup-azure-manage-vms/full-view-rs-vault-inline.png" alt-text="Screenshot showing to open the vault dashboard and Settings pane." lightbox="./media/backup-azure-manage-vms/full-view-rs-vault-expanded.png":::
1. On the **Backup Items** tile, select **Azure Virtual Machine**.
- :::image type="content" source="./media/backup-azure-manage-vms/azure-virtual-machine.png" alt-text="Screenshot showing to open the Backup Items tile.":::
+ :::image type="content" source="./media/backup-azure-manage-vms/azure-virtual-machine-inline.png" alt-text="Screenshot showing to open the Backup Items tile." lightbox="./media/backup-azure-manage-vms/azure-virtual-machine-expanded.png":::
1. On the **Backup Items** pane, you can view the list of protected VMs. In this example, the vault protects one virtual machine: *myVMR1*.
- :::image type="content" source="./media/backup-azure-manage-vms/backup-items-blade-select-item.png" alt-text="Screenshot showing to view the Backup Items pane.":::
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-items-blade-select-item-inline.png" alt-text="Screenshot showing to view the Backup Items pane." lightbox="./media/backup-azure-manage-vms/backup-items-blade-select-item-expanded.png":::
1. From the vault item's dashboard, you can modify backup policies, run an on-demand backup, stop or resume protection of VMs, delete backup data, view restore points, and run a restore.
- :::image type="content" source="./media/backup-azure-manage-vms/item-dashboard-settings.png" alt-text="Screenshot showing the Backup Items dashboard and the Settings pane.":::
+ :::image type="content" source="./media/backup-azure-manage-vms/item-dashboard-settings-inline.png" alt-text="Screenshot showing the Backup Items dashboard and the Settings pane." lightbox="./media/backup-azure-manage-vms/item-dashboard-settings-expanded.png":::
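The same protected-item inventory can be listed from the command line. A minimal sketch with Azure CLI, assuming an existing Recovery Services vault and placeholder names:

```azurecli
# List the Azure VM backup items protected by a Recovery Services vault.
az backup item list \
    --resource-group <resourceGroupName> \
    --vault-name <vaultName> \
    --backup-management-type AzureIaasVM \
    --output table
```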
## Manage backup policy for a VM
To manage a backup policy:
1. Sign in to the [Azure portal](https://portal.azure.com/). Open the vault dashboard. 2. On the **Backup Items** tile, select **Azure Virtual Machine**.
- :::image type="content" source="./media/backup-azure-manage-vms/azure-virtual-machine.png" alt-text="Screenshot showing to open the Backup Items tile.":::
+ :::image type="content" source="./media/backup-azure-manage-vms/azure-virtual-machine-inline.png" alt-text="Screenshot showing to open the Backup Items tile." lightbox="./media/backup-azure-manage-vms/azure-virtual-machine-expanded.png":::
3. On the **Backup Items** pane, you can view the list of protected VMs and last backup status with latest restore points time.
- :::image type="content" source="./media/backup-azure-manage-vms/backup-items-blade-select-item.png" alt-text="Screenshot showing to view the Backup Items pane.":::
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-items-blade-select-item-inline.png" alt-text="Screenshot showing to view the Backup Items pane." lightbox="./media/backup-azure-manage-vms/backup-items-blade-select-item-expanded.png":::
4. From the vault item's dashboard, you can select a backup policy.
- * To switch policies, select a different policy and then select **Save**. The new policy is immediately applied to the vault.
+ To switch policies, select a different policy and then select **Save**. The new policy is immediately applied to the vault.
- :::image type="content" source="./media/backup-azure-manage-vms/backup-policy-create-new.png" alt-text="Screenshot showing to choose a backup policy.":::
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-policy-create-new-inline.png" alt-text="Screenshot showing to choose a backup policy." lightbox="./media/backup-azure-manage-vms/backup-policy-create-new-expanded.png":::
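Switching policies can also be scripted. A hedged Azure CLI sketch, with placeholder names for the vault, container, item, and policy:

```azurecli
# Assign a different backup policy to a protected Azure VM.
az backup item set-policy \
    --resource-group <resourceGroupName> \
    --vault-name <vaultName> \
    --container-name <containerName> \
    --name <backupItemName> \
    --policy-name <newPolicyName> \
    --backup-management-type AzureIaasVM
```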
## Run an on-demand backup
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md
Use the example that corresponds to the type of target VM to which you want to c
If you're signing in to an Azure AD login-enabled VM, use the following command. For more information, see [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md). ```azurecli
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "AAD"
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "AAD"
``` **SSH:**
Use the example that corresponds to the type of target VM to which you want to c
The extension can be installed by running ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example. ```azurecli
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
``` **Username/password:**
Use the example that corresponds to the type of target VM to which you want to c
If you're signing in using a local username and password, use the following command. You'll then be prompted for the password for the target VM. ```azurecli
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "password" --username "<Username>"
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "password" --username "<Username>"
``` 1. Once you sign in to your target VM, the native client on your computer will open up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
This connection supports file upload from the local computer to the target VM. F
1. Open the tunnel to your target VM using the following command. ```azurecli
- az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
+ az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
``` 1. Connect to your target VM using SSH or RDP, the native client of your choice, and the local machine port you specified in Step 2.
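For example, once the tunnel is listening on the local port, a typical SSH session from the local machine might look like the following sketch (placeholder values):

```sh
# Connect through the Bastion tunnel that forwards <LocalMachinePort> to the target VM.
ssh <username>@127.0.0.1 -p <LocalMachinePort>
```

An RDP client can be pointed at `127.0.0.1:<LocalMachinePort>` in the same way.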
batch Batch Account Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-account-create-portal.md
For background about Batch accounts and scenarios, see [Batch service workflow a
:::image type="content" source="media/batch-account-create-portal/storage_account.png" alt-text="Screenshot of the options when creating a storage account.":::
-1. If desired, select **Advanced** to specify **Identity type**, **Public network access** or **Pool allocation mode**. For most scenarios, the default options are fine.
+1. If desired, select **Advanced** to specify **Identity type**, **Pool allocation mode** or **Authentication mode**. For most scenarios, the default options are fine.
+
+1. If desired, select **Networking** to configure [public network access](public-network-access.md) with your Batch account.
+
+ :::image type="content" source="media/batch-account-create-portal/batch-account-networking.png" alt-text="Screenshot of the networking options when creating a Batch account.":::
1. Select **Review + create**, then select **Create** to create the account.
Once the account has been created, select the account to access its settings and
> [!NOTE] > The name of the Batch account is its ID and can't be changed. If you need to change the name of a Batch account, you'll need to delete the account and create a new one with the intended name. When you develop an application with the [Batch APIs](batch-apis-tools.md#azure-accounts-for-batch-development), you need an account URL and key to access your Batch resources. (Batch also supports Azure Active Directory authentication.) To view the Batch account access information, select **Keys**.
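The access keys can also be read from the command line. A brief sketch with Azure CLI, using placeholder names:

```azurecli
# Show the primary and secondary keys for a Batch account.
az batch account keys list \
    --name <batchAccountName> \
    --resource-group <resourceGroupName>
```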
Make sure to set the following parameters based on your Batch pool's configurati
For example: ```powershell
-Get-AzMarketplaceTerms -Publisher 'microsoft-azure-batch' -Product 'ubuntu-server-container' -Name '20-04-lts' | Set-AzMarketplaceTerms -Accept
+Get-AzMarketplaceTerms -Publisher 'microsoft-azure-batch' -Product 'ubuntu-server-container' -Name '20-04-lts' | Set-AzMarketplaceTerms -Accept
```
batch Batch Cli Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-cli-templates.md
The following is an example of a template that creates a pool of Linux VMs with
"imageReference": { "publisher": "Canonical", "offer": "UbuntuServer",
- "sku": "16.04-LTS",
+ "sku": "18.04-LTS",
"version": "latest" },
- "nodeAgentSKUId": "batch.node.ubuntu 16.04"
+ "nodeAgentSKUId": "batch.node.ubuntu 18.04"
}, "vmSize": "STANDARD_D3_V2", "targetDedicatedNodes": "[parameters('nodeCount')]",
batch Batch Parallel Node Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-parallel-node-tasks.md
For more information on adding pools by using the REST API, see [Add a pool to a
"offer": "ubuntuserver", "sku": "18.04-lts" },
- "nodeAgentSKUId": "batch.node.ubuntu 16.04"
+ "nodeAgentSKUId": "batch.node.ubuntu 18.04"
}, "targetDedicatedComputeNodes":2, "taskSlotsPerNode":4,
batch Batch Pool No Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-no-public-ip-address.md
Last updated 01/11/2022
-# Create an Azure Batch pool without public IP addresses (preview)
+# Create a Batch pool without public IP addresses (preview)
> [!IMPORTANT]
-> Support for pools without public IP addresses in Azure Batch is currently in public preview for the following regions: France Central, East Asia, West Central US, South Central US, West US 2, East US, North Europe, East US 2, Central US, West Europe, North Central US, West US, Australia East, Japan East, Japan West.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> - Support for pools without public IP addresses in Azure Batch is currently in public preview for the following regions: France Central, East Asia, West Central US, South Central US, West US 2, East US, North Europe, East US 2, Central US, West Europe, North Central US, West US, Australia East, Japan East, Japan West.
+> - This preview version will be replaced by [Simplified node communication pool without public IP addresses](simplified-node-communication-pool-no-public-ip.md).
+> - This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
When you create an Azure Batch pool, you can provision the virtual machine configuration pool without a public IP address. This article explains how to set up a Batch pool without public IP addresses.
-## Why use a pool without public IP Addresses?
+## Why use a pool without public IP addresses?
By default, all the compute nodes in an Azure Batch virtual machine configuration pool are assigned a public IP address. This address is used by the Batch service to schedule tasks and for communication with compute nodes, including outbound access to the internet.
To restrict access to these nodes and reduce the discoverability of these nodes
## Use the Batch REST API to create a pool without public IP addresses
-The example below shows how to use the [Azure Batch REST API](/rest/api/batchservice/pool/add) to create a pool that uses public IP addresses.
+The example below shows how to use the [Batch Service REST API](/rest/api/batchservice/pool/add) to create a pool without public IP addresses.
### REST API URI
client-request-id: 00000000-0000-0000-0000-000000000000
"imageReference": { "publisher": "Canonical", "offer": "UbuntuServer",
- "sku": "16.040-LTS"
+ "sku": "18.04-lts"
},
- "nodeAgentSKUId": "batch.node.ubuntu 16.04"
+ "nodeAgentSKUId": "batch.node.ubuntu 18.04"
} "networkConfiguration": { "subnetId": "/subscriptions/<your_subscription_id>/resourceGroups/<your_resource_group>/providers/Microsoft.Network/virtualNetworks/<your_vnet_name>/subnets/<your_subnet_name>",
client-request-id: 00000000-0000-0000-0000-000000000000
"enableAutoScale": false, "enableInterNodeCommunication": true, "metadata": [
- {
- "name": "myproperty",
- "value": "myvalue"
- }
- ]
+ {
+ "name": "myproperty",
+ "value": "myvalue"
+ }
+ ]
} ```
batch Create Pool Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-availability-zones.md
Request body
"imageReference": { "publisher": "Canonical", "offer": "UbuntuServer",
- "sku": "16.040-LTS"
+ "sku": "18.04-lts"
}, "nodePlacementConfiguration": { "policy": "Zonal" }
- "nodeAgentSKUId": "batch.node.ubuntu 16.04"
+ "nodeAgentSKUId": "batch.node.ubuntu 18.04"
}, "resizeTimeout": "PT15M", "targetDedicatedNodes": 5,
Request body
- Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks. - Learn about [creating a pool in a subnet of an Azure virtual network](batch-virtual-network.md).-- Learn about [creating an Azure Batch pool without public IP addresses](./batch-pool-no-public-ip-address.md).
+- Learn about [creating an Azure Batch pool without public IP addresses](./simplified-node-communication-pool-no-public-ip.md).
batch Create Pool Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-public-ip.md
Last updated 12/20/2021
# Create an Azure Batch pool with specified public IP addresses
-In Azure Batch, you can [create a Batch pool in a subnet of an Azure virtual network (VNet)](batch-virtual-network.md). Virtual machines (VMs) in the Batch pool are accessible through public IP addresses that Batch creates. These public IP addresses can change over the lifetime of the pool. If the IP addresses aren't refreshed, your network settings might become outdated.
+In Azure Batch, you can [create a Batch pool in a subnet of an Azure virtual network (VNet)](batch-virtual-network.md). Virtual machines (VMs) in the Batch pool are accessible through public IP addresses that Batch creates. These public IP addresses can change over the lifetime of the pool. If the IP addresses aren't refreshed, your network settings might become outdated.
-You can create a list of static public IP addresses to use with the VMs in your pool instead. In some cases, you might need to control the list of public IP addresses to make sure they don't change unexpectedly. For example, you might be working with an external service, such as a database, which restricts access to specific IP addresses.
+You can create a list of static public IP addresses to use with the VMs in your pool instead. In some cases, you might need to control the list of public IP addresses to make sure they don't change unexpectedly. For example, you might be working with an external service, such as a database, which restricts access to specific IP addresses.
-For information about creating pools without public IP addresses, read [Create an Azure Batch pool without public IP addresses](./batch-pool-no-public-ip-address.md).
+For information about creating pools without public IP addresses, read [Create an Azure Batch pool without public IP addresses](./simplified-node-communication-pool-no-public-ip.md).
## Prerequisites
For information about creating pools without public IP addresses, read [Create a
Create one or more public IP addresses through one of these methods: - Use the [Azure portal](../virtual-network/ip-services/virtual-network-public-ip-address.md#create-a-public-ip-address) - Use the [Azure Command-Line Interface (Azure CLI)](/cli/azure/network/public-ip#az-network-public-ip-create)-- Use [Azure PowerShell](/powershell/module/az.network/new-azpublicipaddress).
+- Use [Azure PowerShell](/powershell/module/az.network/new-azpublicipaddress).
Make sure your public IP addresses meet the following requirements:
Make sure your public IP addresses meet the following requirements:
- Set the **IP address assignment** to **Static**. - Set the **SKU** to **Standard**. - Specify a DNS name.-- Make sure no other resources use these public IP addresses, or the pool might experience allocation failures. Only use these public IP addresses for the VM configuration pools.
+- Make sure no other resources use these public IP addresses, or the pool might experience allocation failures. Only use these public IP addresses for the VM configuration pools.
- Make sure that no security policies or resource locks restrict user access to the public IP address.-- Create enough public IP addresses for the pool to accommodate the number of target VMs.
- - This number must equal at least the sum of the **targetDedicatedNodes** and **targetLowPriorityNodes** properties of the pool.
- - If you don't create enough IP addresses, the pool partially allocates the compute nodes, and a resize error happens.
+- Create enough public IP addresses for the pool to accommodate the number of target VMs.
+ - This number must equal at least the sum of the **targetDedicatedNodes** and **targetLowPriorityNodes** properties of the pool.
+ - If you don't create enough IP addresses, the pool partially allocates the compute nodes, and a resize error happens.
- Currently, Batch uses one public IP address for every 100 VMs. - Also create a buffer of public IP addresses. A buffer helps Batch with internal optimization for scaling down. A buffer also allows quicker scaling up after an unsuccessful scale up or scale down. We recommend adding one of the following amounts of buffer IP addresses; choose whichever number is greater. - Add at least one more IP address.
Request body:
"imageReference": { "publisher": "Canonical", "offer": "UbuntuServer",
- "sku": "16.04.0-LTS"
+ "sku": "18.04-LTS"
},
- "nodeAgentSKUId": "batch.node.ubuntu 16.04"
+ "nodeAgentSKUId": "batch.node.ubuntu 18.04"
}, "networkConfiguration": { "subnetId": "/subscriptions/<subId>/resourceGroups/<rgId>/providers/Microsoft.Network/virtualNetworks/<vNetId>/subnets/<subnetId>",
batch Manage Private Endpoint Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/manage-private-endpoint-connections.md
+
+ Title: Manage private endpoint connections with Azure Batch accounts
+description: Learn how to manage private endpoint connections with Azure Batch accounts, including how to list, approve, reject, and remove them.
+ Last updated : 05/26/2022++
+# Manage private endpoint connections with Azure Batch accounts
+
+You can query and manage all existing private endpoint connections for your Batch account. Supported management operations include:
+
+- Approve a pending connection.
+- Reject a connection (either in pending or approved state).
+- Remove a connection, which removes it from the Batch account and marks the associated private endpoint resource as disconnected.
+
+## Azure portal
+
+1. Go to your Batch account in the Azure portal.
+1. In **Settings**, select **Networking**, and then go to the **Private Access** tab.
+1. Select the private connection, and then approve, reject, or remove it.
+
+ :::image type="content" source="media/private-connectivity/manage-private-connections.png" alt-text="Screenshot of managing private endpoint connections.":::
+
+## Az PowerShell module
+
+Examples using Az PowerShell module [`Az.Network`](/powershell/module/az.network#networking):
+
+```PowerShell
+$accountResourceId = "/subscriptions/<subscription>/resourceGroups/<rg>/providers/Microsoft.Batch/batchAccounts/<account>"
+$pecResourceId = "$accountResourceId/privateEndpointConnections/<pe-connection-name>"
+
+# List all private endpoint connections for Batch account
+Get-AzPrivateEndpointConnection -PrivateLinkResourceId $accountResourceId
+
+# Show the specified private endpoint connection
+Get-AzPrivateEndpointConnection -ResourceId $pecResourceId
+
+# Approve connection
+Approve-AzPrivateEndpointConnection -Description "Approved!" -ResourceId $pecResourceId
+
+# Reject connection
+Deny-AzPrivateEndpointConnection -Description "Rejected!" -ResourceId $pecResourceId
+
+# Remove connection
+Remove-AzPrivateEndpointConnection -ResourceId $pecResourceId
+```
+
+## Azure CLI
+
+Examples using Azure CLI ([`az network private-endpoint-connection`](/cli/azure/network/private-endpoint-connection)):
+
+```sh
+accountResourceId="/subscriptions/<subscription>/resourceGroups/<rg>/providers/Microsoft.Batch/batchAccounts/<account>"
+pecResourceId="$accountResourceId/privateEndpointConnections/<pe-connection-name>"
+
+# List all private endpoint connections for Batch account
+az network private-endpoint-connection list --id $accountResourceId
+
+# Show the specified private endpoint connection
+az network private-endpoint-connection show --id $pecResourceId
+
+# Approve connection
+az network private-endpoint-connection approve --description "Approved!" --id $pecResourceId
+
+# Reject connection
+az network private-endpoint-connection reject --description "Rejected!" --id $pecResourceId
+
+# Remove connection
+az network private-endpoint-connection delete --id $pecResourceId
+```
batch Private Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/private-connectivity.md
Title: Use private endpoints with Azure Batch accounts
-description: Learn how to connect privately to an Azure Batch account by using private endpoints.
+description: Learn how to connect privately to an Azure Batch account by using private endpoints.
Previously updated : 08/03/2021 Last updated : 05/26/2022 # Use private endpoints with Azure Batch accounts
-By default, [Azure Batch accounts](accounts.md) have a public endpoint and are publicly accessible. The Batch service offers the ability to create private Batch accounts, disabling the public network access.
+By default, [Azure Batch accounts](accounts.md) have public endpoints and are publicly accessible. The Batch service offers the ability to create private endpoints for Batch accounts, allowing private network access to the Batch service.
By using [Azure Private Link](../private-link/private-link-overview.md), you can connect to an Azure Batch account via a [private endpoint](../private-link/private-endpoint-overview.md). The private endpoint is a set of private IP addresses in a subnet within your virtual network. You can then limit access to an Azure Batch account over private IP addresses. Private Link allows users to access an Azure Batch account from within the virtual network or from any peered virtual network. Resources mapped to Private Link are also accessible on-premises over private peering through VPN or [Azure ExpressRoute](../expressroute/expressroute-introduction.md). You can connect to an Azure Batch account configured with Private Link by using the [automatic or manual approval method](../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow).
-> [!IMPORTANT]
-> Support for private connectivity in Azure Batch is currently available for all regions except Germany Central and Germany Northeast.
+This article describes the steps to create a private endpoint to access Batch account endpoints.
+
+## Private endpoint sub-resources supported for Batch account
+
+Batch account resource has two endpoints supported to access with private endpoints:
+
+- Account endpoint (sub-resource: **batchAccount**): the endpoint for the [Batch Service REST API](/rest/api/batchservice/) (data plane), which is used for managing pools, compute nodes, jobs, tasks, and so on.
-This article describes the steps to create a private Batch account and access it using a private endpoint.
+- Node management endpoint (sub-resource: **nodeManagement**): used by Batch pool nodes to access the Batch node management service. This endpoint is only applicable when using [simplified compute node communication](simplified-compute-node-communication.md), which is currently in preview.
+
+> [!IMPORTANT]
+> - This preview sub-resource is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Azure portal
-Use the following steps to create a private Batch account using the Azure portal:
-
-1. From the **Create a resource** pane, choose **Batch Service** and then select **Create**.
-2. Enter the subscription, resource group, region and Batch account name in the **Basics** tab, then select **Next: Advanced**.
-3. In the **Advanced** tab, set **Public network access** to **Disabled**.
-4. In **Settings**, select **Private endpoint connections** and then select **+ Private endpoint**.
- :::image type="content" source="media/private-connectivity/private-endpoint-connections.png" alt-text="Private endpoint connections":::
-5. In the **Basics** pane, enter or select the subscription, resource group, private endpoint resource name and region details, then select **Next: Resource**.
-6. In the **Resource** pane, set the **Resource type** to **Microsoft.Batch/batchAccounts**. Select the private Batch account you want to access, then select **Next: Configuration**.
- :::image type="content" source="media/private-connectivity/create-private-endpoint.png" alt-text="Create a private endpoint - Resource pane":::
-7. In the **Configuration** pane, enter or select this information:
- - **Virtual network**: Select your virtual network.
- - **Subnet**: Select your subnet.
- - **Integrate with private DNS zone**: Select **Yes**. To connect privately with your private endpoint, you need a DNS record. We recommend that you integrate your private endpoint with a private DNS zone. You can also use your own DNS servers or create DNS records by using the host files on your virtual machines.
- - **Private DNS Zone**: Select privatelink.\<region\>.batch.azure.com. The private DNS zone is determined automatically. You can't change it by using the Azure portal.
-8. Select **Review + create**, then wait for Azure to validate your configuration.
-9. When you see the **Validation passed** message, select **Create**.
-
-After the private endpoint is provisioned, you can access the Batch account from VMs in the same virtual network using the private endpoint.
+Use the following steps to create a private endpoint with your Batch account using the Azure portal:
+
+1. Go to your Batch account in the Azure portal.
+2. In **Settings**, select **Networking** and go to the tab **Private Access**. Then, select **+ Private endpoint**.
+ :::image type="content" source="media/private-connectivity/private-endpoint-connections.png" alt-text="Screenshot of private endpoint connections.":::
+3. In the **Basics** pane, enter or select the subscription, resource group, private endpoint resource name and region details, then select **Next: Resource**.
+ :::image type="content" source="media/private-connectivity/create-private-endpoint-basics.png" alt-text="Screenshot of creating a private endpoint - Basics pane.":::
+4. In the **Resource** pane, set the **Resource type** to **Microsoft.Batch/batchAccounts**. Select the Batch account you want to access, select the target sub-resource, then select **Next: Configuration**.
+ :::image type="content" source="media/private-connectivity/create-private-endpoint.png" alt-text="Screenshot of creating a private endpoint - Resource pane.":::
+5. In the **Configuration** pane, enter or select this information:
+ - For **Virtual network**, select your virtual network.
+ - For **Subnet**, select your subnet.
+ - For **Private IP configuration**, select the default **Dynamically allocate IP address**.
+ - For **Integrate with private DNS zone**, select **Yes**. To connect privately with your private endpoint, you need a DNS record. We recommend that you integrate your private endpoint with a private DNS zone. You can also use your own DNS servers or create DNS records by using the host files on your virtual machines.
+ - For **Private DNS Zone**, select **privatelink.batch.azure.com**. The private DNS zone is determined automatically. You can't change this setting by using the Azure portal.
> [!IMPORTANT]
-> Performing operations outside of the virtual network where the private endpoint is provisioned will result in an "AuthorizationFailure" message in the Azure Portal.
+> If you have existing private endpoints created with previous private DNS zone `privatelink.<region>.batch.azure.com`, please follow [Migration with existing Batch account private endpoints](#migration-with-existing-batch-account-private-endpoints).
+
+6. Select **Review + create**, then wait for Azure to validate your configuration.
+7. When you see the **Validation passed** message, select **Create**.
-To view the IP address from the Azure portal:
+> [!NOTE]
+> You can also create the private endpoint from **Private Link Center** in Azure portal, or create a new resource by searching **private endpoint**.
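As another alternative, the same **batchAccount** private endpoint can be created with Azure CLI. This is a rough sketch with placeholder names; parameter spellings can vary slightly across CLI versions:

```azurecli
# Look up the Batch account resource ID.
accountId=$(az batch account show \
    --name <batchAccountName> \
    --resource-group <resourceGroupName> \
    --query id --output tsv)

# Create a private endpoint targeting the batchAccount sub-resource.
az network private-endpoint create \
    --name <privateEndpointName> \
    --resource-group <resourceGroupName> \
    --vnet-name <vnetName> \
    --subnet <subnetName> \
    --private-connection-resource-id "$accountId" \
    --group-id batchAccount \
    --connection-name <connectionName>
```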
+
+## Use the private endpoint
+
+After the private endpoint is provisioned, you can access the Batch account from within the same virtual network using the private endpoint.
+
+- Private endpoint for **batchAccount**: lets you access the Batch account data plane to manage pools, jobs, and tasks.
+
+- Private endpoint for **nodeManagement**: lets Batch pool compute nodes connect to and be managed by the Batch node management service.
+
+> [!IMPORTANT]
+> If [public network access](public-network-access.md) is disabled for the Batch account, performing account operations (for example, on pools or jobs) outside of the virtual network where the private endpoint is provisioned will result in an "AuthorizationFailure" message for the Batch account in the Azure portal.
+
+To view the IP addresses for the private endpoint from the Azure portal:
1. Select **All resources**. 2. Search for the private endpoint that you created earlier.
-3. Select the **Overview** tab to see the DNS settings and IP addresses.
+3. Select the **DNS Configuration** tab to see the DNS settings and IP addresses.
:::image type="content" source="media/private-connectivity/access-private.png" alt-text="Private endpoint DNS settings and IP addresses":::
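The same DNS details can be read with Azure CLI; a short sketch with placeholder names:

```azurecli
# Show the FQDNs and private IP addresses recorded on the private endpoint.
az network private-endpoint show \
    --name <privateEndpointName> \
    --resource-group <resourceGroupName> \
    --query customDnsConfigs
```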
-## Azure Resource Manager template
-
-When [creating a Batch account by using Azure Resource Manager template](quick-create-template.md), modify the template to set **publicNetworkAccess** to **Disabled** as shown below.
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "batchAccountName": {
- "type": "string",
- },
- "location": {
- "type": "string",
- }
- },
- "resources": [
- {
- "name": "[parameters('batchAccountName')]",
- "type": "Microsoft.Batch/batchAccounts",
- "apiVersion": "2020-03-01-preview",
- "location": "[parameters('location')]",
- "dependsOn": []
- "properties": {
- "poolAllocationMode": "BatchService"
- "publicNetworkAccess": "Disabled"
- }
- }
- ]
-}
-```
- ## Configure DNS zones Use a [private DNS zone](../dns/private-dns-privatednszone.md) within the subnet where you've created the private endpoint. Configure the endpoints so that each private IP address is mapped to a DNS entry. When you're creating the private endpoint, you can integrate it with a [private DNS zone](../dns/private-dns-privatednszone.md) in Azure. If you choose to instead use a [custom domain](../dns/dns-custom-domain.md), you must configure it to add DNS records for all private IP addresses reserved for the private endpoint.
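If you manage the zone yourself instead of using the portal integration, the setup can be scripted. The following is a sketch only, with placeholder names; the A record value must match the private IP address assigned to your private endpoint:

```azurecli
# Create the private DNS zone and link it to the virtual network.
az network private-dns zone create \
    --resource-group <resourceGroupName> \
    --name privatelink.batch.azure.com

az network private-dns link vnet create \
    --resource-group <resourceGroupName> \
    --zone-name privatelink.batch.azure.com \
    --name <dnsLinkName> \
    --virtual-network <vnetName> \
    --registration-enabled false

# Map the account endpoint name to the private endpoint's IP address.
az network private-dns record-set a add-record \
    --resource-group <resourceGroupName> \
    --zone-name privatelink.batch.azure.com \
    --record-set-name "<batchAccountName>.<region>" \
    --ipv4-address <privateEndpointIpAddress>
```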
+## Migration with existing Batch account private endpoints
+
+With the introduction of the new private endpoint sub-resource `nodeManagement` for the Batch node management endpoint, the default private DNS zone for the Batch account is simplified from `privatelink.<region>.batch.azure.com` to `privatelink.batch.azure.com`. Existing private endpoints for the sub-resource `batchAccount` will continue to work, and no action is needed.
+
+However, if you have existing `batchAccount` private endpoints that were enabled with automatic private DNS integration using the previous private DNS zone, extra configuration is needed when you create a new `batchAccount` private endpoint in the same virtual network:
+
+- If you don't need the previous private endpoint anymore, delete the private endpoint. Also unlink the previous private DNS zone from your virtual network. No more configuration is needed for the new private endpoint.
+
+- Otherwise, after the new private endpoint is created:
+
+ 1. make sure the automatic private DNS integration has a DNS A record created in the new private DNS zone `privatelink.batch.azure.com`. For example, `myaccount.<region> A <IPv4 address>`.
+
+ 1. Go to previous private DNS zone `privatelink.<region>.batch.azure.com`.
+
+ 1. Manually add a DNS CNAME record. For example, `myaccount CNAME => myaccount.<region>.privatelink.batch.azure.com`.
+
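A hedged Azure CLI sketch of the CNAME record described in the last step above, using placeholder names:

```azurecli
# Add a CNAME in the previous zone that points at the record in the new zone.
az network private-dns record-set cname set-record \
    --resource-group <resourceGroupName> \
    --zone-name "privatelink.<region>.batch.azure.com" \
    --record-set-name <batchAccountName> \
    --cname "<batchAccountName>.<region>.privatelink.batch.azure.com"
```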
+> [!IMPORTANT]
+> This manual mitigation is only needed when you create a new **batchAccount** private endpoint with private DNS integration in the same virtual network which has existing private endpoints.
+ ## Pricing For details on costs related to private endpoints, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/). ## Current limitations and best practices
-When creating your private Batch account, keep in mind the following:
+When creating a private endpoint with your Batch account, keep in mind the following:
-- Private endpoint resources must be created in the same subscription as the Batch account.-- To delete the private connection, you must delete the private endpoint resource.-- Once a Batch account is created with public network access, you can't change it to private access only.-- DNS records in the private DNS zone are not removed automatically when you delete a private endpoint or when you remove a region from the Batch account. You must manually remove the DNS records before adding a new private endpoint linked to this private DNS zone. If you don't clean up the DNS records, unexpected data plane issues might happen, such as data outages to regions added after private endpoint removal or region removal.
+- Private endpoint resources with the sub-resource **batchAccount** must be created in the same subscription as the Batch account.
+- Resource movement is not supported for private endpoints with Batch accounts.
+- If a Batch account resource is moved to a different resource group or subscription, the private endpoints can still work, but the association to the Batch account breaks. If you delete the private endpoint resource, its associated private endpoint connection still exists in your Batch account. You can manually remove the connection from your Batch account.
+- To delete the private connection, either delete the private endpoint resource, or delete the private connection in the Batch account (this action disconnects the related private endpoint resource).
+- DNS records in the private DNS zone are not removed automatically when you delete a private endpoint connection from the Batch account. You must manually remove the DNS records before adding a new private endpoint linked to this private DNS zone. If you don't clean up the DNS records, unexpected access issues might happen.
## Next steps - Learn how to [create Batch pools in virtual networks](batch-virtual-network.md).-- Learn how to [create Batch pools without public IP addresses](batch-pool-no-public-ip-address.md)-- Learn how to [create Batch pools with specified public IP addresses](create-pool-public-ip.md).
+- Learn how to [create Batch pools without public IP addresses](simplified-node-communication-pool-no-public-ip.md).
+- Learn how to [configure public network access for Batch accounts](public-network-access.md).
+- Learn how to [manage private endpoint connections for Batch accounts](manage-private-endpoint-connections.md).
- Learn about [Azure Private Link](../private-link/private-link-overview.md).
batch Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/public-network-access.md
+
+ Title: Configure public network access with Azure Batch accounts
+description: Learn how to configure public network access with Azure Batch accounts, for example enable, disable, or manage network rules for public network access.
+ Last updated : 05/26/2022++
+# Configure public network access with Azure Batch accounts
+
+By default, [Azure Batch accounts](accounts.md) have public endpoints and are publicly accessible. This article shows how to configure your Batch account to allow access from only specific public IP addresses or IP address ranges.
+
+IP network rules are configured on the public endpoints. IP network rules don't apply to private endpoints configured with [Private Link](private-connectivity.md).
+
+Each endpoint supports a maximum of 200 IP network rules.
+
+## Batch account public endpoints
+
+Batch accounts have two public endpoints:
+
+- The *Account endpoint* is the endpoint for [Batch Service REST API](/rest/api/batchservice/) (data plane). Use this endpoint for managing pools, compute nodes, jobs, tasks, etc.
+- The *Node management endpoint* is used by Batch pool nodes to access the Batch node management service. This endpoint is only applicable when using [simplified compute node communication](simplified-compute-node-communication.md).
+
+You can check both endpoints in account properties when you query the Batch account with [Batch Management REST API](/rest/api/batchmanagement/batch-account/get). You can also check them in the overview for your Batch account in the Azure portal:
+
+ :::image type="content" source="media/public-access/batch-account-endpoints.png" alt-text="Screenshot of Batch account endpoints.":::
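Both values can also be read with Azure CLI. A sketch with placeholder names; the `nodeManagementEndpoint` property is only returned by recent API versions, so treat that part of the query as an assumption:

```azurecli
# Show the account endpoint and, where exposed, the node management endpoint.
az batch account show \
    --name <batchAccountName> \
    --resource-group <resourceGroupName> \
    --query "{accountEndpoint:accountEndpoint, nodeManagementEndpoint:nodeManagementEndpoint}"
```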
+
+You can configure public network access to Batch account endpoints with the following options:
+
+- **All networks**: allow public network access with no restriction.
+- **Selected networks**: allow public network access with allowed network rules.
+- **Disabled**: public network access is disabled; private endpoints are required to access Batch account endpoints.
+
+## Access from selected public networks
+
+1. In the portal, navigate to your Batch account.
+1. Under **Settings**, select **Networking**.
+1. On the **Public access** tab, select to allow public access from **Selected networks**.
+1. Under the access settings for each endpoint, enter public IP addresses or address ranges in CIDR notation, one at a time.
+ :::image type="content" source="media/public-access/configure-public-access.png" alt-text="Screenshot of public access with Batch account.":::
+1. Select **Save**.
+
+> [!NOTE]
+> After adding a rule, it takes a few minutes for the rule to take effect.
+
+> [!TIP]
+> To configure IP network rules for the node management endpoint, you will need to know the public IP addresses or address ranges used by your Batch pool's outbound internet access. These can typically be determined for Batch pools created in a [virtual network](batch-virtual-network.md) or with [specified public IP addresses](create-pool-public-ip.md).
+
+## Disable public network access
+
+Optionally, disable public network access to Batch account endpoints. Disabling the public network access overrides all IP network rules configurations. For example, you might want to disable public access to a Batch account secured in a virtual network using [Private Link](private-connectivity.md).
+
+1. In the portal, navigate to your Batch account and select **Settings > Networking**.
+1. On the **Public access** tab, select **Disabled**.
+1. Select **Save**.
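The portal steps above are the documented path; as a rough scripted alternative, the same property can be flipped with the generic `az resource update` command (placeholder names):

```azurecli
# Disable public network access on a Batch account.
az resource update \
    --resource-group <resourceGroupName> \
    --name <batchAccountName> \
    --resource-type "Microsoft.Batch/batchAccounts" \
    --set properties.publicNetworkAccess=Disabled
```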
+
+## Restore public network access
+
+To re-enable public network access, update the networking settings to allow public access. Enabling public access overrides all IP network rule configurations and allows access from any IP address.
+
+1. In the portal, navigate to your Batch account and select **Settings > Networking**.
+1. On the **Public access** tab, select **All networks**.
+1. Select **Save**.
+
+## Next steps
+
+- Learn how to [use private endpoints with Batch accounts](private-connectivity.md).
+- Learn how to [use simplified compute node communication](simplified-compute-node-communication.md).
+- Learn more about [creating pools in a virtual network](batch-virtual-network.md).
batch Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-cli.md
az storage account create \
Create a Batch account with the [az batch account create](/cli/azure/batch/account#az-batch-account-create) command. You need an account to create compute resources (pools of compute nodes) and Batch jobs.
-The following example creates a Batch account named *mybatchaccount* in *QuickstartBatch-rg*, and links the storage account you created.
+The following example creates a Batch account named *mybatchaccount* in *QuickstartBatch-rg*, and links the storage account you created.
```azurecli-interactive az batch account create \
az batch account login \
## Create a pool of compute nodes
-Now that you have a Batch account, create a sample pool of Linux compute nodes using the [az batch pool create](/cli/azure/batch/pool#az-batch-pool-create) command. The following example creates a pool named *mypool* of two *Standard_A1_v2* nodes running Ubuntu 16.04 LTS. The suggested node size offers a good balance of performance versus cost for this quick example.
-
+Now that you have a Batch account, create a sample pool of Linux compute nodes using the [az batch pool create](/cli/azure/batch/pool#az-batch-pool-create) command. The following example creates a pool named *mypool* of two *Standard_A1_v2* nodes running Ubuntu 18.04 LTS. The suggested node size offers a good balance of performance versus cost for this quick example.
+ ```azurecli-interactive az batch pool create \ --id mypool --vm-size Standard_A1_v2 \ --target-dedicated-nodes 2 \
- --image canonical:ubuntuserver:16.04-LTS \
- --node-agent-sku-id "batch.node.ubuntu 16.04"
+ --image canonical:ubuntuserver:18.04-LTS \
+ --node-agent-sku-id "batch.node.ubuntu 18.04"
``` Batch creates the pool immediately, but it takes a few minutes to allocate and start the compute nodes. During this time, the pool is in the `resizing` state. To see the status of the pool, run the [az batch pool show](/cli/azure/batch/pool#az-batch-pool-show) command. This command shows all the properties of the pool, and you can query for specific properties. The following command gets the allocation state of the pool:
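A minimal sketch of that command (the exact JMESPath query expression is an assumption):

```azurecli-interactive
az batch pool show --pool-id mypool \
    --query "allocationState"
```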
batch Security Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-best-practices.md
By default, Azure Batch accounts have a public endpoint and are publicly accessi
:::image type="content" source="media/security-best-practices/typical-environment.png" alt-text="Diagram showing a typical Batch environment.":::
-Many features are available to help you create a more secure Azure Batch deployment. You can restrict access to nodes and reduce the discoverability of the nodes from the internet by [provisioning the pool without public IP addresses](batch-pool-no-public-ip-address.md). The compute nodes can securely communicate with other virtual machines or with an on-premises network by [provisioning the pool in a subnet of an Azure virtual network](batch-virtual-network.md). And you can enable [private access from virtual networks](private-connectivity.md) from a service powered by Azure Private Link.
+Many features are available to help you create a more secure Azure Batch deployment. You can restrict access to nodes and reduce the discoverability of the nodes from the internet by [provisioning the pool without public IP addresses](simplified-node-communication-pool-no-public-ip.md). The compute nodes can securely communicate with other virtual machines or with an on-premises network by [provisioning the pool in a subnet of an Azure virtual network](batch-virtual-network.md). And you can enable [private access from virtual networks](private-connectivity.md) from a service powered by Azure Private Link.
:::image type="content" source="media/security-best-practices/secure-environment.png" alt-text="Diagram showing a more secure Batch environment.":::
Batch management operations via Azure Resource Manager are encrypted using HTTPS
The Batch service communicates with a Batch node agent that runs on each node in the pool. For example, the service instructs the node agent to run a task, stop a task, or get the files for a task. Communication with the node agent is enabled by one or more load balancers, the number of which depends on the number of nodes in a pool. The load balancer forwards the communication to the desired node, with each node being addressed by a unique port number. By default, load balancers have public IP addresses associated with them. You can also remotely access pool nodes via RDP or SSH (this access is enabled by default, with communication via load balancers).
-### Restricting access to Batch endpoints
+### Restricting access to Batch endpoints
-Several capabilities are available to limit access to the various Batch endpoints, especially when the solution uses a virtual network.
+Several capabilities are available to limit access to the various Batch endpoints, especially when the solution uses a virtual network.
#### Use private endpoints
By default, all the compute nodes in an Azure Batch virtual machine configuratio
To restrict access to these nodes and reduce the discoverability of these nodes from the internet, you can provision the pool without public IP addresses.
-For more information, see [Create a pool without public IP addresses](batch-pool-no-public-ip-address.md).
+For more information, see [Create a pool without public IP addresses](simplified-node-communication-pool-no-public-ip.md).
#### Limit remote access to pool nodes
By default, Batch allows a node user with network connectivity to connect extern
To limit remote access to nodes, use one of the following methods: - Configure the [PoolEndpointConfiguration](/rest/api/batchservice/pool/add#poolendpointconfiguration) to deny access. The appropriate network security group (NSG) will be associated with the pool.-- Create your pool [without public IP addresses](batch-pool-no-public-ip-address.md). By default, these pools can't be accessed outside of the VNet.
+- Create your pool [without public IP addresses](simplified-node-communication-pool-no-public-ip.md). By default, these pools can't be accessed outside of the VNet.
- Associate an NSG with the VNet to deny access to the RDP or SSH ports. - Don't create any users on the node. Without any node users, remote access won't be possible.
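As an illustrative sketch of the `PoolEndpointConfiguration` approach, a `networkConfiguration` fragment in the pool add request body could define an inbound NAT pool for SSH whose network security group rule denies all source traffic. The rule name, ports, and priority below are placeholder assumptions, not values from this article:

```json
"networkConfiguration": {
  "endpointConfiguration": {
    "inboundNATPools": [
      {
        "name": "ssh-deny-all",
        "protocol": "tcp",
        "backendPort": 22,
        "frontendPortRangeStart": 15000,
        "frontendPortRangeEnd": 15100,
        "networkSecurityGroupRules": [
          {
            "priority": 150,
            "access": "deny",
            "sourceAddressPrefix": "*"
          }
        ]
      }
    ]
  }
}
```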
batch Simplified Compute Node Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-compute-node-communication.md
Title: Use simplified compute node communication
-description: Learn how the Azure Batch service is simplifying the way Batch pool infrastructure is managed and how to opt in or out of the .
+ Title: Use simplified compute node communication
+description: Learn how the Azure Batch service is simplifying the way Batch pool infrastructure is managed and how to opt in or out of the feature.
Previously updated : 10/21/2021 Last updated : 06/02/2022+ # Use simplified compute node communication
This document describes forthcoming changes with how the Azure Batch service com
> Support for simplified compute node communication in Azure Batch is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Opting in is not required at this time. However, in the future, using simplified compute node communication will be required for all Batch accounts. At that time, an official retirement notice will be provided, with an opportunity to migrate your Batch pools before that happens.
+Opting in isn't required at this time. However, in the future, using simplified compute node communication will be required for all Batch accounts. At that time, an official retirement notice will be provided, with an opportunity to migrate your Batch pools before that happens.
+
+## Supported regions
+
+Simplified compute node communication in Azure Batch is currently available for the following regions:
+
+- Public: Central US EUAP, East US 2 EUAP, West Central US, North Central US, South Central US, East US, East US 2, West US 2, West US, Central US, West US 3, East Asia, South East Asia, Australia East, Australia Southeast, Brazil Southeast, Brazil South, Canada Central, Canada East, North Europe, West Europe, Central India, Japan East, Japan West, Korea Central, Korea South, Switzerland North, UK West, UK South, UAE North, France Central, Germany West Central, Norway East, South Africa North.
+
+- Government: USGov Arizona, USGov Virginia, USGov Texas.
+
+- China: China North 3.
## Compute node communication changes
With the new model, Batch pools in accounts that use simplified compute node com
- Outbound: - Destination port 443 over TCP to BatchNodeManagement.*region*
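If you manage network security group rules for the pool subnet yourself, the outbound requirement could be expressed with a rule similar to the following sketch. The resource names, priority, and region-scoped service tag are placeholder assumptions:

```azurecli-interactive
# Allow outbound TCP 443 from the pool subnet to the regional BatchNodeManagement service tag.
az network nsg rule create --resource-group myResourceGroup --nsg-name myBatchNsg \
    --name AllowBatchNodeManagementOutbound --priority 200 \
    --direction Outbound --access Allow --protocol Tcp \
    --source-address-prefixes VirtualNetwork \
    --destination-address-prefixes BatchNodeManagement.eastus \
    --destination-port-ranges 443
```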
-Outbound requirements for a Batch account can be discovered using the [List Outbound Network Dependencies Endpoints API](/rest/api/batchmanagement/batch-account/list-outbound-network-dependencies-endpoints). This API will report the base set of dependencies, depending upon the Batch account pool communication model. User-specific workloads may need additional rules such as opening traffic to other Azure resources (such as Azure Storage for Application Packages, Azure Container Registry, etc.) or endpoints like the Microsoft package repository for virtual file system mounting functionality.
+Outbound requirements for a Batch account can be discovered using the [List Outbound Network Dependencies Endpoints API](/rest/api/batchmanagement/batch-account/list-outbound-network-dependencies-endpoints). This API will report the base set of dependencies, depending upon the Batch account pool communication model. User-specific workloads may need additional rules such as opening traffic to other Azure resources (such as Azure Storage for Application Packages, Azure Container Registry, etc.) or endpoints like the Microsoft package repository for virtual file system mounting functionality.
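One way to call that API from the command line is with `az rest`, as in the following sketch. The API version shown is an assumption; substitute a current Microsoft.Batch API version and your own identifiers:

```azurecli-interactive
az rest --method get \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Batch/batchAccounts/<account-name>/outboundNetworkDependenciesEndpoints?api-version=2022-01-01"
```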
## Benefits of the new model
Simplified compute node communication helps reduce security risks by removing th
The new model also provides more fine-grained data exfiltration control, since outbound communication to Storage.*region* is no longer required. You can explicitly lock down outbound communication to Azure Storage if required for your workflow (such as AppPackage storage accounts, other storage accounts for resource files or output files, or other similar scenarios).
-Even if your workloads are not currently impacted by the changes (as described in the next section), you may still want to [opt in to use simplified compute node communication](#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication) now. This will ensure your Batch workloads are ready for any future improvements enabled by this model.
+Even if your workloads aren't currently impacted by the changes (as described in the next section), you may still want to [opt in to use simplified compute node communication](#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication) now. This will ensure your Batch workloads are ready for any future improvements enabled by this model.
## Scope of impact
-In many cases, this new communication model will not directly affect your Batch workloads. However, simplified compute node communication will have an impact for the following cases:
+In many cases, this new communication model won't directly affect your Batch workloads. However, simplified compute node communication will have an impact for the following cases:
- Users who specify a Virtual Network as part of creating a Batch pool and do one or both of the following: - Explicitly disable outbound network traffic rules that are incompatible with simplified compute node communication.
If either of these cases applies to you, and you would like to opt in to the pre
### Required network configuration changes
-For impacted users, the following set of steps are required to migrate to the new communication model:
+For impacted users, the following set of steps is required to migrate to the new communication model:
1. Ensure your networking configuration as applicable to Batch pools (NSGs, UDRs, firewalls, etc.) includes a union of the models (that is, the network rules prior to simplified compute node communication and after). At a minimum, these rules would be: - Inbound:
For impacted users, the following set of steps are required to migrate to the ne
- Outbound: - Destination port 443 over TCP to Storage.*region* - Destination port 443 over TCP to BatchNodeManagement.*region*
-1. If you have any additional inbound or outbound scenarios required by your workflow, you will need to ensure that your rules reflect these requirements.
+1. If you have any additional inbound or outbound scenarios required by your workflow, you'll need to ensure that your rules reflect these requirements.
1. [Opt in to simplified compute node communication](#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication) as described below. 1. Use one of the following options to update your workloads to use the new communication model. Whichever method you use, keep in mind that pools without public IP addresses are unaffected and can't currently use simplified compute node communication. Please see the [Current limitations](#current-limitations) section. 1. Create new pools and validate that the new pools are working correctly. Migrate your workload to the new pools and delete any earlier pools.
Use the following options when creating your request.
1. For **Problem type**, select **Batch Accounts**. 1. For **Problem subtype**, select **Other issues with Batch Accounts**. 1. Select **Next**, then select **Next** again to go to the **Additional details** page.
-1. In **Additional details**, you can optionally specify that you want to enable all of the Batch accounts in your subscription, or across multiple subscription. If you do so, be sure to include the subscription IDs here.
+1. In **Additional details**, you can optionally specify that you want to enable all of the Batch accounts in your subscription, or across multiple subscriptions. If you do so, be sure to include the subscription IDs here.
1. Make any other required selections on the page, then select **Next**. 1. Review your request details, then select **Create** to submit your support request.
-After your request has been submitted, you will be notified once the account has been opted in (or out).
+After your request has been submitted, you'll be notified once the account has been opted in (or out).
## Current limitations The following are known limitations for accounts that opt in to simplified compute node communication: -- [Creating pools without public IP addresses](batch-pool-no-public-ip-address.md) isn't currently supported for accounts which have opted in.-- Previously created pools without public IP addresses won't use simplified compute node communication, even if the Batch account has opted in.-- [Private Batch accounts](private-connectivity.md) can opt in to simplified compute node communication, but Batch pools created by these Batch accounts must have public IP addresses in order to use simplified compute node communication.
+- Limited migration support for previously created pools without public IP addresses ([V1 preview](batch-pool-no-public-ip-address.md)). They can only be migrated if created in a [virtual network](batch-virtual-network.md); otherwise, they won't use simplified compute node communication, even if the Batch account has opted in.
- Cloud Service Configuration pools are currently not supported for simplified compute node communication and are generally deprecated. We recommend using Virtual Machine Configuration for your Batch pools. For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md). ## Next steps
+- Learn how to [use private endpoints with Batch accounts](private-connectivity.md).
- Learn more about [pools in virtual networks](batch-virtual-network.md).-- Learn how to [create a pool pool with specified public IP addresses](create-pool-public-ip.md).
+- Learn how to [create a pool with specified public IP addresses](create-pool-public-ip.md).
+- Learn how to [create a pool without public IP addresses](simplified-node-communication-pool-no-public-ip.md).
batch Simplified Node Communication Pool No Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md
+
+ Title: Create a simplified node communication pool without public IP addresses (preview)
+description: Learn how to create an Azure Batch simplified node communication pool without public IP addresses.
+ Last updated : 05/26/2022+++
+# Create a simplified node communication pool without public IP addresses (preview)
+
+> [!NOTE]
+> This replaces the previous preview version of [Azure Batch pool without public IP addresses](batch-pool-no-public-ip-address.md). This new version requires [using simplified compute node communication](simplified-compute-node-communication.md).
+
+> [!IMPORTANT]
+> - Support for pools without public IP addresses in Azure Batch is currently in public preview for [selected regions](simplified-compute-node-communication.md#supported-regions).
+> - This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+When you create an Azure Batch pool, you can provision the virtual machine (VM) configuration pool without a public IP address. This article explains how to set up a Batch pool without public IP addresses.
+
+## Why use a pool without public IP addresses?
+
+By default, all the compute nodes in an Azure Batch VM configuration pool are assigned a public IP address. This address is used by the Batch service to support outbound access to the internet, as well as inbound access to compute nodes from the internet.
+
+To restrict access to these nodes and reduce the discoverability of these nodes from the internet, you can provision the pool without public IP addresses.
+
+## Prerequisites
+
+> [!IMPORTANT]
+> The prerequisites have changed from the previous version of this preview. Make sure to review each item for changes before proceeding.
+
+- Use simplified compute node communication. For more information, see [Use simplified compute node communication](simplified-compute-node-communication.md).
+
+- The Batch client API must use Azure Active Directory (AD) authentication. Azure Batch support for Azure AD is documented in [Authenticate Batch service solutions with Active Directory](batch-aad-auth.md).
+
+- Create your pool in an [Azure virtual network (VNet)](batch-virtual-network.md), and follow these requirements and configurations. To prepare a VNet with one or more subnets in advance, you can use the Azure portal, Azure PowerShell, the Azure Command-Line Interface (Azure CLI), or other methods.
+
+ - The VNet must be in the same subscription and region as the Batch account you use to create your pool.
+
+ - The subnet specified for the pool must have enough unassigned IP addresses to accommodate the number of VMs targeted for the pool; that is, the sum of the `targetDedicatedNodes` and `targetLowPriorityNodes` properties of the pool. If the subnet doesn't have enough unassigned IP addresses, the pool partially allocates the compute nodes, and a resize error occurs.
+
+ - If you plan to use a [private endpoint with Batch accounts](private-connectivity.md), you must disable private endpoint network policies. Run the following Azure CLI command:
+
+ `az network vnet subnet update --vnet-name <vnetname> -n <subnetname> --resource-group <resourcegroup> --disable-private-endpoint-network-policies`
+
+- Enable outbound access for Batch node management. A pool with no public IP addresses doesn't have internet outbound access enabled by default. To allow compute nodes to access the Batch node management service (see [Use simplified compute node communication](simplified-compute-node-communication.md)) either:
+
+ - Use `nodeManagement` [private endpoint with Batch accounts](private-connectivity.md). This is the preferred method.
+
+ - Alternatively, provide your own internet outbound access support (see [Outbound access to the internet](#outbound-access-to-the-internet)).
+
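A sketch of creating the preferred `nodeManagement` private endpoint with the Azure CLI follows. The resource names and subscription ID are placeholder assumptions:

```azurecli-interactive
az network private-endpoint create --resource-group myResourceGroup --name myBatchNodeMgmtPE \
    --vnet-name myVNet --subnet mySubnet \
    --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Batch/batchAccounts/mybatchaccount" \
    --group-id nodeManagement --connection-name myBatchNodeMgmtConnection
```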
+## Current limitations
+
+1. Pools without public IP addresses must use Virtual Machine Configuration and not Cloud Services Configuration.
+1. [Custom endpoint configuration](pool-endpoint-configuration.md) for Batch compute nodes doesn't work with pools without public IP addresses.
+1. Because there are no public IP addresses, you can't [use your own specified public IP addresses](create-pool-public-ip.md) with this type of pool.
+
+## Create a pool without public IP addresses in the Azure portal
+
+1. Navigate to your Batch account in the Azure portal.
+1. In the **Settings** window on the left, select **Pools**.
+1. In the **Pools** window, select **Add**.
+1. On the **Add Pool** window, select the option you intend to use from the **Image Type** dropdown.
+1. Select the correct **Publisher/Offer/Sku** of your image.
+1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, as well as any desired optional settings.
+1. Select a virtual network and subnet you wish to use. This virtual network must be in the same location as the pool you are creating.
+1. In **IP address provisioning type**, select **NoPublicIPAddresses**.
+
+![Screenshot of the Add pool screen with NoPublicIPAddresses selected.](./media/batch-pool-no-public-ip-address/create-pool-without-public-ip-address.png)
+
+## Use the Batch REST API to create a pool without public IP addresses
+
+The example below shows how to use the [Batch Service REST API](/rest/api/batchservice/pool/add) to create a pool without public IP addresses.
+
+### REST API URI
+
+```http
+POST {batchURL}/pools?api-version=2020-03-01.11.0
+client-request-id: 00000000-0000-0000-0000-000000000000
+```
+
+### Request body
+
+```json
+"pool": {
+ "id": "pool2",
+ "vmSize": "standard_a1",
+ "virtualMachineConfiguration": {
+ "imageReference": {
+ "publisher": "Canonical",
+ "offer": "UbuntuServer",
+ "sku": "18.04-lts"
+ },
+ "nodeAgentSKUId": "batch.node.ubuntu 18.04"
+ },
+ "networkConfiguration": {
+ "subnetId": "/subscriptions/<your_subscription_id>/resourceGroups/<your_resource_group>/providers/Microsoft.Network/virtualNetworks/<your_vnet_name>/subnets/<your_subnet_name>",
+ "publicIPAddressConfiguration": {
+ "provision": "NoPublicIPAddresses"
+ }
+ },
+ "resizeTimeout": "PT15M",
+ "targetDedicatedNodes": 5,
+ "targetLowPriorityNodes": 0,
+ "taskSlotsPerNode": 3,
+ "taskSchedulingPolicy": {
+ "nodeFillType": "spread"
+ },
+ "enableAutoScale": false,
+ "enableInterNodeCommunication": true,
+ "metadata": [
+ {
+ "name": "myproperty",
+ "value": "myvalue"
+ }
+ ]
+}
+```
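The request must be authenticated. Because this preview requires Azure AD authentication for the Batch client API, one possible way to send the request is with an Azure AD access token, as in the following sketch. The account URL, token acquisition, and `pool.json` file name are assumptions:

```azurecli-interactive
# Get an Azure AD token for the Batch service (assumes you're signed in with the Azure CLI).
token=$(az account get-access-token --resource "https://batch.core.windows.net/" --query accessToken --output tsv)

# Post the request body (saved as pool.json) to your Batch account endpoint.
curl -v -X POST "https://<your-batch-account>.<region>.batch.azure.com/pools?api-version=2020-03-01.11.0" \
    -H "Authorization: Bearer $token" \
    -H "Content-Type: application/json" \
    -d @pool.json
```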
+
+## Outbound access to the internet
+
+In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-gateway/nat-overview.md). Note that NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated.
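A sketch of attaching a NAT gateway to the pool subnet with the Azure CLI follows; the resource names and region are placeholder assumptions:

```azurecli-interactive
# Create a standard public IP and a NAT gateway, then attach the gateway to the pool subnet.
az network public-ip create --resource-group myResourceGroup --name myNatIp --sku Standard --location eastus
az network nat gateway create --resource-group myResourceGroup --name myNatGateway \
    --location eastus --public-ip-addresses myNatIp --idle-timeout 10
az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNet \
    --name mySubnet --nat-gateway myNatGateway
```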
+
+Another way to provide outbound connectivity is to use a user-defined route (UDR). This lets you route traffic to a proxy machine that has public internet access, for example [Azure Firewall](../firewall/overview.md).
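For the UDR approach, a sketch with the Azure CLI might look like the following; the next-hop IP address (the firewall's private IP) and resource names are assumptions:

```azurecli-interactive
# Route all internet-bound traffic from the pool subnet through a firewall or proxy appliance.
az network route-table create --resource-group myResourceGroup --name myRouteTable
az network route-table route create --resource-group myResourceGroup --route-table-name myRouteTable \
    --name DefaultToFirewall --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4
az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNet \
    --name mySubnet --route-table myRouteTable
```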
+
+> [!IMPORTANT]
+> There is no extra network resource (load balancer, network security group) created for simplified node communication pools without public IP addresses. Since the compute nodes in the pool are not bound to any load balancer, Azure may provide [Default Outbound Access](../virtual-network/ip-services/default-outbound-access.md). However, Default Outbound Access is not suitable for production workloads, so it is strongly recommended to bring your own Internet outbound access.
+
+## Migration from previous preview version of No Public IP pools
+
+For existing pools that use the [previous preview version of Azure Batch No Public IP pool](batch-pool-no-public-ip-address.md), it's only possible to migrate pools created in a [virtual network](batch-virtual-network.md). To migrate the pool, follow the [opt-in process for simplified node communication](simplified-compute-node-communication.md):
+
+1. Opt in to use simplified node communication.
+1. Create a [private endpoint for Batch node management](private-connectivity.md) in the virtual network.
+1. Scale down the pool to zero nodes.
+1. Scale out the pool again. The pool is then automatically migrated to the new version of the preview.
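For the scale-down and scale-out steps, a sketch with the Azure CLI, assuming a pool named `mypool` that previously targeted 10 dedicated nodes, might look like this:

```azurecli-interactive
# Scale the pool down to zero nodes.
az batch pool resize --pool-id mypool --target-dedicated-nodes 0 --target-low-priority-nodes 0

# Scale the pool back out to its previous size; it's then migrated to the new preview version.
az batch pool resize --pool-id mypool --target-dedicated-nodes 10 --target-low-priority-nodes 0
```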
+
+## Next steps
+
+- Learn how to [use simplified compute node communication](simplified-compute-node-communication.md).
+- Learn more about [creating pools in a virtual network](batch-virtual-network.md).
+- Learn how to [use private endpoints with Batch accounts](private-connectivity.md).
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/whats-new.md
description: This article is regularly updated with news about the Azure Cogniti
Previously updated : 01/16/2022 Last updated : 06/03/2022 # What's new in Anomaly Detector
We've also added links to some user-generated content. Those items will be marke
## Release notes
+### May 2022
+
+* New blog released: [Detect anomalies in equipment with Multivariate Anomaly Detector in Azure Databricks](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/detect-anomalies-in-equipment-with-anomaly-detector-in-azure/ba-p/3390688).
+
+### April 2022
+* Univariate Anomaly Detector is now integrated in Azure Data Explorer(ADX). Check out this [announcement blog post](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/announcing-univariate-anomaly-detector-in-azure-data-explorer/ba-p/3285400) to learn more!
+
+### March 2022
+* Anomaly Detector (univariate) available in Sweden Central.
+
+### February 2022
+* **Multivariate Anomaly Detector API has been integrated with Synapse.** Check out this [blog](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/announcing-multivariate-anomaly-detector-in-synapseml/ba-p/3122486) to learn more!
+ ### January 2022 * **Multivariate Anomaly Detector API v1.1-preview.1 public preview on 1/18.** In this version, Multivariate Anomaly Detector supports synchronous API for inference and added new fields in API output interpreting the correlation change of variables. * Univariate Anomaly Detector added new fields in API output. - ### November 2021 * Multivariate Anomaly Detector available in six more regions: UAE North, France Central, North Central US, Switzerland North, South Africa North, Jio India West. Now in total 26 regions are supported.
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
Previously updated : 04/12/2022 Last updated : 06/02/2022 zone_pivot_groups: programming-languages-speech-sdk-cli # Captioning with speech to text
-In this guide, you learn how to create captions with speech to text. Concepts include how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios. This guide covers captioning for speech, but doesn't include speaker ID or sound effects such as bells ringing.
+In this guide, you learn how to create captions with speech to text. Captioning is the process of converting the audio content of a television broadcast, webcast, film, video, live event, or other production into text, and then displaying the text on a screen, monitor, or other visual display system.
+
+Concepts include how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios. This guide covers captioning for speech, but doesn't include speaker ID or sound effects such as bells ringing.
Here are some common captioning scenarios: - Online courses and instructional videos
There are some situations where [training a custom model](custom-speech-overview
## Next steps * [Captioning quickstart](captioning-quickstart.md)
-* [Get speech recognition results](get-speech-recognition-results.md)
+* [Get speech recognition results](get-speech-recognition-results.md)
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md
With Custom Speech, you can evaluate and improve the Microsoft speech-to-text accuracy for your applications and products.
-Out of the box, speech to text utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. This base model is pre-trained with dialects and phonetics representing a variety of common domains. As a result, consuming the base model requires no additional configuration and works very well in most scenarios. When you make a speech recognition request, the current base model for each [supported language](language-support.md) is used by default.
+Out of the box, speech to text utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md) is used by default. The base model works very well in most speech recognition scenarios.
-A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary specific to the application by providing text data to train the model. It can also be used to improve recognition based for the specific audio conditions of the application by providing audio data with reference transcriptions
-
-For more information, see [Choose a model for Custom Speech](how-to-custom-speech-choose-model.md).
+A custom model can be used to augment the base model to improve recognition of the application's domain-specific vocabulary by providing text data to train the model. It can also be used to improve recognition for the application's specific audio conditions by providing audio data with reference transcriptions.
## How does it work?
With Custom Speech, you can upload your own data, test and train a custom model,
Here's more information about the sequence of steps shown in the previous diagram:
-1. [Choose a model](how-to-custom-speech-choose-model.md) and create a Custom Speech project. Use a <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal.
+1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal.
1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the Microsoft speech-to-text offering for your applications, tools, and products. 1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data. 1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech-to-text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if additional training is required. 1. [Train a model](how-to-custom-speech-train-model.md). Provide written transcripts and related text, along with the corresponding audio data. Testing a model before and after training is optional but recommended.
-1. [Deploy a model](how-to-custom-speech-deploy-model.md). Once you're satisfied with the test results, deploy the model to a custom endpoint.
+1. [Deploy a model](how-to-custom-speech-deploy-model.md). Once you're satisfied with the test results, deploy the model to a custom endpoint. With the exception of [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.
-If you will train a custom model with audio data, choose a Speech resource [region](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation) with dedicated hardware for training audio data. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After a model is trained, you can copy it to a Speech resource that's in another region as needed for deployment.
+If you will train a custom model with audio data, choose a Speech resource [region](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation) with dedicated hardware for training audio data. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After a model is trained, you can copy it to a Speech resource in another region as needed.
## Next steps
-* [Choose a model](how-to-custom-speech-choose-model.md)
+* [Create a project](how-to-custom-speech-create-project.md)
* [Upload test data](./how-to-custom-speech-upload-data.md) * [Train a model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Commands Integrate Remote Skills https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-integrate-remote-skills.md
- Title: 'How To: Export Custom Commands application as a remote skill - Speech service'-
-description: In this article, you learn how to export Custom Commands application as a skill
------ Previously updated : 09/30/2020----
-# Export Custom Commands application as a remote skill
-
-In this article, you will learn how to export a Custom Commands application as a remote skill.
-
-> [!NOTE]
-> Exporting a Custom Commands application as a remote skill is a limited preview feature.
-
-## Prerequisites
-> [!div class="checklist"]
-> * [Understanding of Bot Framework Skill](/azure/bot-service/skills-conceptual)
-> * [Understanding of Skill Manifest](https://aka.ms/speech/cc-skill-manifest)
-> * [How to invoke a skill from a Bot Framework Bot](/azure/bot-service/skills-about-skill-consumers)
-> * An existing Custom Commands application. In case you don't have any Custom Commands application, try out with - [Quickstart: Create a voice assistant using Custom Commands](quickstart-custom-commands-application.md)
-
-## Custom Commands as remote skills
-* Bot Framework Skills are re-usable conversational skill building-blocks covering conversational use-cases enabling you to add extensive functionality to a Bot within minutes. To read more on this, go to [Bot Framework Skill](https://microsoft.github.io/botframework-solutions/overview/skills/).
-* A Custom Commands application can be exported as a skill. This skill can then be invoked over the remote skills protocol from a Bot Framework bot.
-
-## Configure an application to be exposed as a remote skill
-
-### Application level settings
-1. In the left panel, select **Settings** > **Remote skills**.
-1. Set **Remote skills enabled** toggle to on.
-
-### Authentication to skills
-1. If you want to enable authentication, add Microsoft Application Ids of the Bot Framework Bots you want to configure to call the custom commands application.
- > [!div class="mx-imgBorder"]
- > ![Add a MSA id to skill](media/custom-commands/skill-add-msa-id.png)
-
-1. If you have at least one entry added to the list, authentication will be enabled on the application, and only the allowed bots will be able to call the application.
-> [!TIP]
-> To disable authentication, delete all the Microsoft Application Ids from the allowed list.
-
- ### Enable/disable commands to be exposed as skills
-
-You have the option to choose which commands you want to export over Remote Skills.
-
-1. To expose a command over skills, select **Enable a new command** under the **Enable commands for skills**.
-1. From the dropdown, select the command you intend to add.
-1. Select **Save**.
-
-### Configure triggering utterances for commands
-Custom Commands uses the example sentences which are configured for the commands in order to generate the skills triggering utterances. These **triggering utterances** will be used to generate the **dispatcher** section [**skill manifest**](https://microsoft.github.io/botframework-solutions/skills/handbook/manifest/).
-
-As an author, you might want to control which of your **example sentences** are used to generate the triggering utterances for skills.
-1. By default, all the **Triggering examples** from a command will be included the manifest file.
-1. If you want to explicitly eliminate any one example, select **Edit** icon on the command from **Enabled commands for skills** section.
- > [!div class="mx-imgBorder"]
- > ![Edit an enabled command for skill](media/custom-commands/skill-edit-enabled-command.png)
-
-1. Next, on the example sentences you want to omit, **right click** > **Disable Example Sentence**.
- > [!div class="mx-imgBorder"]
- > ![Disable examples](media/custom-commands/skill-disable-example-sentences.png)
-
-1. Select **Save**.
-1. You will notice that you can't add a new example in this window. If there's a need to do so, proceed to the exit out of the settings section and select the relevant command from **Commands** accordion. At this point, you can add the new entry in the **Example sentences** section. This change will be automatically reflected in the remote skills settings value for the command.
-
-> [!IMPORTANT]
-> In case your existing example sentences have references to **String > Catalog** data-type, those sentences will be automatically omitted from the skills triggering utterances list.
-
-## Download skill manifest
-1. After, you have **published** your application, you can download the skill manifest file.
-1. Use the skill manifest to configure your Bot Framework consumer bot to call in to the Custom Commands skill.
-> [!IMPORTANT]
-> You must **publish** your Custom Commands application in order to download the skill manifest. </br>
-> Additionally, if you made **any changes** to the application, you need to publish the application again for the latest changes to be reflected in the manifest file.
-
-> [!NOTE]
-> If you face any issues with publishing the application and the error directs to skills triggering utterances, please re-check the configuration for **Enabled commands for skills**. Each of the exposed commands must have at least one valid triggering utterance.
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Update a command from the client](./how-to-custom-commands-update-command-from-client.md)
cognitive-services How To Custom Speech Choose Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-choose-model.md
- Title: Choose a model for Custom Speech - Speech service-
-description: Learn about how to choose a model for Custom Speech.
------ Previously updated : 05/08/2022----
-# Choose a model for Custom Speech
-
-Custom Speech models are created by customizing a base model with data from your particular customer scenario. Once you create a custom model, the speech recognition accuracy and quality will remain consistent, even when a new base model is released.
-
-Base models are updated periodically to improve accuracy and quality. We recommend that if you use base models, use the latest default base models. But with Custom Speech you can take a snapshot of a particular base model without training it. In this case, "custom" means that speech recognition is pinned to a base model from a particular point in time.
-
-New base models are released periodically to improve accuracy and quality. We recommend that you chose the latest base model when creating your custom model. If a required customization capability is only available with an older model, then you can choose an older base model.
-
-> [!NOTE]
-> The name of the base model corresponds to the date when it was released in YYYYMMDD format. The customization capabilities of the base model are listed in parenthesis after the model name in Speech Studio
-
-A model deployed to an endpoint using Custom Speech is fixed until you decide to update it. You can also choose to deploy a base model without training, which means that base model is fixed. This allows you to lock in the behavior of a specific model until you decide to use a newer model.
-
-Whether you train your own model or use a snapshot of a base model, you can use the model for a limited time. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
-
-## Choose your model
-
-There are a few approaches to using speech-to-text models:
-- The base model provides accurate speech recognition out of the box for a range of [scenarios](overview.md#speech-scenarios).-- A custom model augments the base model to include domain-specific vocabulary shared across all areas of the custom domain.-- Multiple custom models can be used when the custom domain has multiple areas, each with a specific vocabulary.-
-One recommended way to see if the base model will suffice is to analyze the transcription produced from the base model and compare it with a human-generated transcript for the same audio. You can use the Speech Studio, Speech CLI, or REST API to compare the transcripts and obtain a [word error rate (WER)](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate) score. If there are multiple incorrect word substitutions when evaluating the results, then training a custom model to recognize those words is recommended.
-
-Multiple models are recommended if the vocabulary varies across the domain areas. For instance, Olympic commentators report on various events, each associated with its own vernacular. Because each Olympic event vocabulary differs significantly from others, building a custom model specific to an event increases accuracy by limiting the utterance data relative to that particular event. As a result, the model doesn't need to sift through unrelated data to make a match. Regardless, training still requires a decent variety of training data. Include audio from various commentators who have different accents, gender, age, etcetera.
-
-## Create a Custom Speech project
-
-Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a country or language. For example, you might create a project for English in the United States.
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select the subscription and Speech resource to work with.
-1. Select **Custom speech** > **Create a new project**.
-1. Follow the instructions provided by the wizard to create your project.
-
-Select the new project by name or select **Go to project**. You will see these menu items in the left panel: **Speech datasets**, **Train custom models**, **Test models**, and **Deploy models**.
-
-If you want to use a base model right away, you can skip the training and testing steps. See [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md) to start using a base or custom model.
-
-## Next steps
-
-* [Training and testing datasets](./how-to-custom-speech-test-and-train.md)
-* [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
-* [Train a model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-create-project.md
+
+ Title: Create a Custom Speech project - Speech service
+
+description: Learn about how to create a project for Custom Speech.
++++++ Last updated : 05/22/2022+
+zone_pivot_groups: speech-studio-cli-rest
++
+# Create a Custom Speech project
+
+Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md). For example, you might create a project for English in the United States.
+
+## Create a project
++
+To create a Custom Speech project, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select the subscription and Speech resource to work with.
+1. Select **Custom speech** > **Create a new project**.
+1. Follow the instructions provided by the wizard to create your project.
+
+Select the new project by name or select **Go to project**. You will see these menu items in the left panel: **Speech datasets**, **Train custom models**, **Test models**, and **Deploy models**.
+++
+To create a project, use the `spx csr project create` command. Construct the request parameters according to the following instructions:
+
+- Set the required `language` parameter. The locale of the project and the contained datasets should be the same. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+
+Here's an example Speech CLI command that creates a project:
+
+```azurecli-interactive
+spx csr project create --name "My Project" --description "My Project Description" --language "en-US"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed",
+ "links": {
+ "evaluations": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/evaluations",
+ "datasets": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/datasets",
+ "models": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/models",
+ "endpoints": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/endpoints",
+ "transcriptions": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/transcriptions"
+ },
+ "properties": {
+ "datasetCount": 0,
+ "evaluationCount": 0,
+ "modelCount": 0,
+ "transcriptionCount": 0,
+ "endpointCount": 0
+ },
+ "createdDateTime": "2022-05-17T22:15:18Z",
+ "locale": "en-US",
+ "displayName": "My Project",
+ "description": "My Project Description"
+}
+```
+
+The top-level `self` property in the response body is the project's URI. Use this URI to get details about the project's evaluations, datasets, models, endpoints, and transcriptions. You also use this URI to update or delete a project.
+
+For Speech CLI help with projects, run the following command:
+
+```azurecli-interactive
+spx help csr project
+```
+++
+To create a project, use the [CreateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateProject) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the required `locale` property. This should be the locale of the contained datasets. The locale can't be changed later.
+- Set the required `displayName` property. This is the project name that will be displayed in the Speech Studio.
+
+Make an HTTP POST request using the URI as shown in the following [CreateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateProject) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "displayName": "My Project",
+ "description": "My Project Description",
+ "locale": "en-US"
+} ' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/projects"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed",
+ "links": {
+ "evaluations": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/evaluations",
+ "datasets": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/datasets",
+ "models": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/models",
+ "endpoints": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/endpoints",
+ "transcriptions": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/transcriptions"
+ },
+ "properties": {
+ "datasetCount": 0,
+ "evaluationCount": 0,
+ "modelCount": 0,
+ "transcriptionCount": 0,
+ "endpointCount": 0
+ },
+ "createdDateTime": "2022-05-17T22:15:18Z",
+ "locale": "en-US",
+ "displayName": "My Project",
+ "description": "My Project Description"
+}
+```
+
+The top-level `self` property in the response body is the project's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProject) details about the project's evaluations, datasets, models, endpoints, and transcriptions. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateProject) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteProject) a project.
++
+## Choose your model
+
+There are a few approaches to using Custom Speech models:
+- The base model provides accurate speech recognition out of the box for a range of [scenarios](overview.md#speech-scenarios). Base models are updated periodically to improve accuracy and quality. If you use base models, we recommend using the latest default base model. If a required customization capability is only available with an older model, then you can choose an older base model.
+- A custom model augments the base model to include domain-specific vocabulary shared across all areas of the custom domain.
+- Multiple custom models can be used when the custom domain has multiple areas, each with a specific vocabulary.
+
+One recommended way to see if the base model will suffice is to analyze the transcription produced from the base model and compare it with a human-generated transcript for the same audio. You can compare the transcripts and obtain a [word error rate (WER)](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate) score. If the WER score is high, training a custom model to recognize the incorrectly identified words is recommended.
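For reference, WER is typically computed as the number of substitution, deletion, and insertion errors divided by the number of words in the human-generated reference transcript. For example, a 100-word reference transcript with 5 substitutions, 2 deletions, and 1 insertion gives WER = (5 + 2 + 1) / 100 = 8%.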
+
+Multiple models are recommended if the vocabulary varies across the domain areas. For instance, Olympic commentators report on various events, each associated with its own vernacular. Because each Olympic event vocabulary differs significantly from others, building a custom model specific to an event increases accuracy by limiting the utterance data relative to that particular event. As a result, the model doesn't need to sift through unrelated data to make a match. Regardless, training still requires a decent variety of training data. Include audio from various commentators who have different accents, gender, age, etcetera.
+
+## Model stability and lifecycle
+
+A base model or custom model deployed to an endpoint using Custom Speech is fixed until you decide to update it. The speech recognition accuracy and quality will remain consistent, even when a new base model is released. This allows you to lock in the behavior of a specific model until you decide to use a newer model.
+
+Whether you train your own model or use a snapshot of a base model, you can use the model for a limited time. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+## Next steps
+
+* [Training and testing datasets](./how-to-custom-speech-test-and-train.md)
+* [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
+* [Train a model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md
Last updated 05/08/2022
+zone_pivot_groups: speech-studio-cli-rest
# Deploy a Custom Speech model
-In this article, you'll learn how to deploy an endpoint for a Custom Speech model. A custom endpoint that you deploy is required to use a Custom Speech model.
+In this article, you'll learn how to deploy an endpoint for a Custom Speech model. With the exception of [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.
+
+You can deploy an endpoint for a base or custom model, and then [update](#change-model-and-redeploy-endpoint) the endpoint later to use a better trained model.
> [!NOTE]
-> You can deploy an endpoint to use a base model without training or testing. For example, you might want to quickly create a custom endpoint to start testing your application. The endpoint can be [updated](#change-model-and-redeploy-endpoint) later to use a trained and tested model.
+> Endpoints used by `F0` Speech resources are deleted after seven days.
## Add a deployment endpoint + To create a custom endpoint, follow these steps:
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Deploy models**. If this is your first endpoint, you'll notice that there are no endpoints listed in the table. After you create an endpoint, you use this page to track each deployed endpoint.
To create a custom endpoint, follow these steps:
:::image type="content" source="./media/custom-speech/custom-speech-deploy-model.png" alt-text="Screenshot of the New endpoint page that shows the checkbox to enable logging."::: 1. Select **Add** to save and deploy the endpoint.
-
- > [!NOTE]
- > Endpoints used by `F0` Speech resources are deleted after seven days.
On the main **Deploy models** page, details about the new endpoint are displayed in a table, such as name, description, status, and expiration date. It can take up to 30 minutes to instantiate a new endpoint that uses your custom models. When the status of the deployment changes to **Succeeded**, the endpoint is ready to use.
-Note the model expiration date, and update the endpoint's model before that date to ensure uninterrupted service. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+> [!IMPORTANT]
+> Take note of the model expiration date. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
Select the endpoint link to view information specific to it, such as the endpoint key, endpoint URL, and sample code. ++
+To create an endpoint and deploy a model, use the `spx csr endpoint create` command. Construct the request parameters according to the following instructions:
+
+- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the endpoint in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the required `model` parameter to the ID of the model that you want deployed to the endpoint.
+- Set the required `language` parameter. The endpoint locale must match the locale of the model. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+- Optionally, you can set the `logging` parameter. Set this to `enabled` to enable audio and diagnostic [logging](#view-logging-data) of the endpoint's traffic. Logging is disabled by default.
+
+Here's an example Speech CLI command to create an endpoint and deploy a model:
+
+```azurecli-interactive
+spx csr endpoint create --project YourProjectId --model YourModelId --name "My Endpoint" --description "My Endpoint Description" --language "en-US"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
+ },
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/d40f2eb8-1abf-4f72-9008-a5ae8add82a4"
+ },
+ "properties": {
+ "loggingEnabled": true
+ },
+ "lastActionDateTime": "2022-05-19T15:27:51Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-19T15:27:51Z",
+ "locale": "en-US",
+ "displayName": "My Endpoint",
+ "description": "My Endpoint Description"
+}
+```
+
+The top-level `self` property in the response body is the endpoint's URI. Use this URI to get details about the endpoint's project, model, and logs. You also use this URI to update the endpoint.
+
+For Speech CLI help with endpoints, run the following command:
+
+```azurecli-interactive
+spx help csr endpoint
+```
+++
+To create an endpoint and deploy a model, use the [CreateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEndpoint) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the endpoint in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
+- Set the required `model` property to the URI of the model that you want deployed to the endpoint.
+- Set the required `locale` property. The endpoint locale must match the locale of the model. The locale can't be changed later.
+- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+- Optionally, you can set the `loggingEnabled` property within `properties`. Set this to `true` to enable audio and diagnostic [logging](#view-logging-data) of the endpoint's traffic. The default is `false`.
+
+Make an HTTP POST request using the URI as shown in the following [CreateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEndpoint) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/d40f2eb8-1abf-4f72-9008-a5ae8add82a4"
+ },
+ "properties": {
+ "loggingEnabled": true
+ },
+ "displayName": "My Endpoint",
+ "description": "My Endpoint Description",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
+ },
+ "locale": "en-US",
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
+ },
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/d40f2eb8-1abf-4f72-9008-a5ae8add82a4"
+ },
+ "properties": {
+ "loggingEnabled": true
+ },
+ "lastActionDateTime": "2022-05-19T15:27:51Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-19T15:27:51Z",
+ "locale": "en-US",
+ "displayName": "My Endpoint",
+ "description": "My Endpoint Description"
+}
+```
+
+The top-level `self` property in the response body is the endpoint's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoint) details about the endpoint's project, model, and logs. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEndpoint) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpoint) the endpoint.
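+
+For example, here's a minimal sketch (with placeholder values) of retrieving the endpoint's details later by making an HTTP GET request to that URI:
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/YourEndpointId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```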
++ ## Change model and redeploy endpoint An endpoint can be updated to use another model that was created by the same Speech resource. As previously mentioned, you must update the endpoint's model before the [model expires](./how-to-custom-speech-model-and-endpoint-lifecycle.md). + To use a new model and redeploy the custom endpoint:
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Deploy models**. 1. Select the link to an endpoint by name, and then select **Change model**. 1. Select the new model that you want the endpoint to use. 1. Select **Done** to save and redeploy the endpoint. ++
+To redeploy the custom endpoint with a new model, use the `spx csr endpoint update` command. Construct the request parameters according to the following instructions:
+
+- Set the required `endpoint` parameter to the ID of the endpoint that you want to redeploy.
+- Set the required `model` parameter to the ID of the model that you want deployed to the endpoint.
+
+Here's an example Speech CLI command that redeploys the custom endpoint with a new model:
+
+```azurecli-interactive
+spx csr endpoint update --endpoint YourEndpointId --model YourModelId
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
+ },
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/639d5280-8995-40cc-9329-051fd0fddd46"
+ },
+ "properties": {
+ "loggingEnabled": true
+ },
+ "lastActionDateTime": "2022-05-19T23:01:34Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-19T15:41:27Z",
+ "locale": "en-US",
+ "displayName": "My Endpoint",
+ "description": "My Updated Endpoint Description"
+}
+```
+
+For Speech CLI help with endpoints, run the following command:
+
+```azurecli-interactive
+spx help csr endpoint
+```
+++
+To redeploy the custom endpoint with a new model, use the [UpdateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEndpoint) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `model` property to the URI of the model that you want deployed to the endpoint.
+
+Make an HTTP PATCH request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, replace `YourEndpointId` with your endpoint ID, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
+ }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/YourEndpointId"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
+ },
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/639d5280-8995-40cc-9329-051fd0fddd46"
+ },
+ "properties": {
+ "loggingEnabled": true
+ },
+ "lastActionDateTime": "2022-05-19T23:01:34Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-19T15:41:27Z",
+ "locale": "en-US",
+ "displayName": "My Endpoint",
+ "description": "My Updated Endpoint Description"
+}
+```
++ The redeployment takes several minutes to complete. In the meantime, your endpoint will use the previous model without interruption of service. ## View logging data Logging data is available for export if you configured it while creating the endpoint. + To download the endpoint logs:
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select **Custom Speech** > Your project name > **Deploy models**. 1. Select the link by endpoint name. 1. Under **Content logging**, select **Download log**. ++
+To get logs for an endpoint, use the `spx csr endpoint list` command. Construct the request parameters according to the following instructions:
+
+- Set the required `endpoint` parameter to the ID of the endpoint for which you want to get logs.
+
+Here's an example Speech CLI command that gets logs for an endpoint:
+
+```azurecli-interactive
+spx csr endpoint list --endpoint YourEndpointId
+```
+
+The location of each log file, along with more details, is returned in the response body.
+++
+To get logs for an endpoint, start by using the [GetEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoint) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+
+Make an HTTP GET request using the URI as shown in the following example. Replace `YourEndpointId` with your endpoint ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/YourEndpointId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
+ },
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/2f78cdb7-58ac-4bd9-9bc6-170e31483b26"
+ },
+ "properties": {
+ "loggingEnabled": true
+ },
+ "lastActionDateTime": "2022-05-19T23:41:05Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-19T23:41:05Z",
+ "locale": "en-US",
+ "displayName": "My Endpoint",
+ "description": "My Updated Endpoint Description"
+}
+```
+
+Make an HTTP GET request using the "logs" URI from the previous response body. Replace `YourEndpointId` with your endpoint ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
++
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/YourEndpointId/files/logs" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+The location of each log file, along with more details, is returned in the response body.
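+
+Each log file entry in that response should include a `links.contentUrl` value, which is a time-limited download URL. As a minimal sketch (the output file name here is a placeholder), you can download a log file directly; the SAS token embedded in the URL authorizes the request, so no subscription key header is needed:
+
+```azurecli-interactive
+# Replace the URL with the "links.contentUrl" value of a log file entry from the previous response.
+curl -v -X GET "YourLogFileContentUrl" -o endpoint-log-file
+```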
++ Logging data is available on Microsoft-owned storage for 30 days, after which it will be removed. If your own storage account is linked to the Cognitive Services subscription, the logging data won't be automatically deleted. ## Next steps
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
Last updated 05/08/2022
+zone_pivot_groups: speech-studio-cli-rest
+show_latex: true
+no-loc: [$$, '\times', '\over']
# Test accuracy of a Custom Speech model
-In this article, you learn how to quantitatively measure and improve the accuracy of the Microsoft speech-to-text model or your own custom models. [Audio + human-labeled transcript](how-to-custom-speech-test-and-train.md#audio--human-labeled-transcript-data-for-training-or-testing) data is required to test accuracy, and 30 minutes to 5 hours of representative audio should be provided.
+In this article, you learn how to quantitatively measure and improve the accuracy of the Microsoft speech-to-text model or your own custom models. [Audio + human-labeled transcript](how-to-custom-speech-test-and-train.md#audio--human-labeled-transcript-data-for-training-or-testing) data is required to test accuracy. You should provide from 30 minutes to 5 hours of representative audio.
+ ## Create a test
-You can test the accuracy of your custom model by creating a test. A test requires a collection of audio files and their corresponding transcriptions. You can compare a custom model's accuracy a Microsoft speech-to-text base model or another custom model.
+You can test the accuracy of your custom model by creating a test. A test requires a collection of audio files and their corresponding transcriptions. You can compare a custom model's accuracy against a Microsoft speech-to-text base model or another custom model. After you [get](#get-test-results) the test results, [evaluate](#evaluate-word-error-rate) the word error rate (WER) of the speech recognition results.
+ Follow these steps to create a test:
Follow these steps to create a test:
1. Enter the test name and description, and then select **Next**. 1. Review the test details, and then select **Save and close**.
-After your test has been successfully created, you can compare the [word error rate (WER)](#evaluate-word-error-rate) and recognition results side by side.
-## Side-by-side comparison
-After the test is complete, as indicated by the status change to *Succeeded*, you'll find a WER number for both models included in your test. Select the test name to view the test details page. This page lists all the utterances in your dataset and the recognition results of the two models, alongside the transcription from the submitted dataset.
-To inspect the side-by-side comparison, you can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, which display the human-labeled transcription and the results for two speech-to-text models, you can decide which model meets your needs and determine where additional training and improvements are required.
+To create a test, use the `spx csr evaluation create` command. Construct the request parameters according to the following instructions:
-## Evaluate word error rate
+- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view the test in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the required `model1` parameter to the ID of a model that you want to test.
+- Set the required `model2` parameter to the ID of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
+- Set the required `dataset` parameter to the ID of a dataset that you want to use for the test.
+- Set the `language` parameter; otherwise, the Speech CLI sets "en-US" by default. This should be the locale of the dataset contents. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+
+Here's an example Speech CLI command that creates a test:
+
+```azurecli-interactive
+spx csr evaluation create --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Evaluation" --description "My Evaluation Description"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": -1.0,
+ "wordErrorRate1": -1.0,
+ "sentenceErrorRate2": -1.0,
+ "sentenceCount2": -1,
+ "wordCount2": -1,
+ "correctWordCount2": -1,
+ "wordSubstitutionCount2": -1,
+ "wordDeletionCount2": -1,
+ "wordInsertionCount2": -1,
+ "sentenceErrorRate1": -1.0,
+ "sentenceCount1": -1,
+ "wordCount1": -1,
+ "correctWordCount1": -1,
+ "wordSubstitutionCount1": -1,
+ "wordDeletionCount1": -1,
+ "wordInsertionCount1": -1
+ },
+ "lastActionDateTime": "2022-05-20T16:42:43Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Evaluation",
+ "description": "My Evaluation Description"
+}
+```
+
+The top-level `self` property in the response body is the evaluation's URI. Use this URI to get details about the project and test results. You also use this URI to update or delete the evaluation.
+
+For Speech CLI help with evaluations, run the following command:
+
+```azurecli-interactive
+spx help csr evaluation
+```
+++
+To create a test, use the [CreateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEvaluation) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `project` property to the URI of an existing project. This is recommended so that you can also view the test in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
+- Set the `testingKind` property to `Evaluation` within `customProperties`. If you don't specify `Evaluation`, the test is treated as a quality inspection test. Whether the `testingKind` property is set to `Evaluation` or `Inspection`, or not set, you can access the accuracy scores via the API, but not in the Speech Studio.
+- Set the required `model1` property to the URI of a model that you want to test.
+- Set the required `model2` property to the URI of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
+- Set the required `dataset` property to the URI of a dataset that you want to use for the test.
+- Set the required `locale` property. This should be the locale of the dataset contents. The locale can't be changed later.
+- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+
+Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "displayName": "My Evaluation",
+ "description": "My Evaluation Description",
+ "customProperties": {
+ "testingKind": "Evaluation"
+ },
+ "locale": "en-US"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": -1.0,
+ "wordErrorRate1": -1.0,
+ "sentenceErrorRate2": -1.0,
+ "sentenceCount2": -1,
+ "wordCount2": -1,
+ "correctWordCount2": -1,
+ "wordSubstitutionCount2": -1,
+ "wordDeletionCount2": -1,
+ "wordInsertionCount2": -1,
+ "sentenceErrorRate1": -1.0,
+ "sentenceCount1": -1,
+ "wordCount1": -1,
+ "correctWordCount1": -1,
+ "wordSubstitutionCount1": -1,
+ "wordDeletionCount1": -1,
+ "wordInsertionCount1": -1
+ },
+ "lastActionDateTime": "2022-05-20T16:42:43Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Evaluation",
+ "description": "My Evaluation Description",
+ "customProperties": {
+ "testingKind": "Evaluation"
+ }
+}
+```
+
+The top-level `self` property in the response body is the evaluation's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation) details about the evaluation's project and test results. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEvaluation) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEvaluation) the evaluation.
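+
+For example, when you no longer need a test, here's a minimal sketch (with placeholder values) of removing it via the [DeleteEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEvaluation) operation:
+
+```azurecli-interactive
+curl -v -X DELETE "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```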
++
+## Get test results
+
+You should get the test results and [evaluate](#evaluate-word-error-rate) the word error rate (WER) of the speech recognition results.
++
+Follow these steps to get test results:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Test models**.
+1. Select the link by test name.
+1. After the test is complete, as indicated by the status set to *Succeeded*, you should see results that include the WER number for each tested model.
-The industry standard for measuring model accuracy is [word error rate (WER)](https://en.wikipedia.org/wiki/Word_error_rate). WER counts the number of incorrect words identified during recognition, divides the sum by the total number of words provided in the human-labeled transcript (shown in the following formula as N), and then multiplies that quotient by 100 to calculate the error rate as a percentage.
+This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where additional training and improvements are required.
-![Screenshot showing the WER formula.](./media/custom-speech/custom-speech-wer-formula.png)
++
+To get test results, use the `spx csr evaluation status` command. Construct the request parameters according to the following instructions:
+
+- Set the required `evaluation` parameter to the ID of the evaluation for which you want to get test results.
+
+Here's an example Speech CLI command that gets test results:
+
+```azurecli-interactive
+spx csr evaluation status --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca
+```
+
+The word error rates and more details are returned in the response body.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": 4.62,
+ "wordErrorRate1": 4.6,
+ "sentenceErrorRate2": 66.7,
+ "sentenceCount2": 3,
+ "wordCount2": 173,
+ "correctWordCount2": 166,
+ "wordSubstitutionCount2": 7,
+ "wordDeletionCount2": 0,
+ "wordInsertionCount2": 1,
+ "sentenceErrorRate1": 66.7,
+ "sentenceCount1": 3,
+ "wordCount1": 174,
+ "correctWordCount1": 166,
+ "wordSubstitutionCount1": 7,
+ "wordDeletionCount1": 1,
+ "wordInsertionCount1": 0
+ },
+ "lastActionDateTime": "2022-05-20T16:42:56Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Evaluation",
+ "description": "My Evaluation Description",
+ "customProperties": {
+ "testingKind": "Evaluation"
+ }
+}
+```
+
+For Speech CLI help with evaluations, run the following command:
+
+```azurecli-interactive
+spx help csr evaluation
+```
+++
+To get test results, start by using the [GetEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+
+Make an HTTP GET request using the URI as shown in the following example. Replace `YourEvaluationId` with your evaluation ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+The word error rates and more details are returned in the response body.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": 4.62,
+ "wordErrorRate1": 4.6,
+ "sentenceErrorRate2": 66.7,
+ "sentenceCount2": 3,
+ "wordCount2": 173,
+ "correctWordCount2": 166,
+ "wordSubstitutionCount2": 7,
+ "wordDeletionCount2": 0,
+ "wordInsertionCount2": 1,
+ "sentenceErrorRate1": 66.7,
+ "sentenceCount1": 3,
+ "wordCount1": 174,
+ "correctWordCount1": 166,
+ "wordSubstitutionCount1": 7,
+ "wordDeletionCount1": 1,
+ "wordInsertionCount1": 0
+ },
+ "lastActionDateTime": "2022-05-20T16:42:56Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Evaluation",
+ "description": "My Evaluation Description",
+ "customProperties": {
+ "testingKind": "Evaluation"
+ }
+}
+```
+++
+## Evaluate word error rate
+
+The industry standard for measuring model accuracy is [word error rate (WER)](https://en.wikipedia.org/wiki/Word_error_rate). WER counts the number of incorrect words identified during recognition, and divides the sum by the total number of words provided in the human-labeled transcript (N).
Incorrectly identified words fall into three categories:
Incorrectly identified words fall into three categories:
* Deletion (D): Words that are undetected in the hypothesis transcript * Substitution (S): Words that were substituted between reference and hypothesis
-Here's an example:
+In the Speech Studio, the quotient is multiplied by 100 and shown as a percentage. The Speech CLI and REST API results aren't multiplied by 100.
+
+$$
+WER = {{I+D+S}\over N} \times 100
+$$
+
+Here's an example that shows incorrectly identified words, when compared to the human-labeled transcript:
![Screenshot showing an example of incorrectly identified words.](./media/custom-speech/custom-speech-dis-words.png)
+The speech recognition result erred as follows:
+* Insertion (I): Added the word "a"
+* Deletion (D): Deleted the word "are"
+* Substitution (S): Substituted the word "Jones" for "John"
+
+The word error rate from the previous example is 60%.
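+
+Working through the formula with these counts (the 60% figure implies that the human-labeled transcript in this example contains five words):
+
+$$
+WER = {{1+1+1}\over 5} \times 100 = 60\%
+$$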
+ If you want to replicate WER measurements locally, you can use the sclite tool from the [NIST Scoring Toolkit (SCTK)](https://github.com/usnistgov/SCTK). ## Resolve errors and improve WER
cognitive-services How To Custom Speech Inspect Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-inspect-data.md
Last updated 05/08/2022
+zone_pivot_groups: speech-studio-cli-rest
# Test recognition quality of a Custom Speech model You can inspect the recognition quality of a Custom Speech model in the [Speech Studio](https://aka.ms/speechstudio/customspeech). You can play back uploaded audio and determine if the provided recognition result is correct. After a test has been successfully created, you can see how a model transcribed the audio dataset, or compare results from two models side by side.
-> [!TIP]
-> You can also use the [online transcription editor](how-to-custom-speech-transcription-editor.md) to create and refine labeled audio datasets.
+Side-by-side model testing is useful to validate which speech recognition model is best for an application. For an objective measure of accuracy, which requires transcription datasets as input, see [Test model quantitatively](how-to-custom-speech-evaluate-data.md).
+ ## Create a test + Follow these instructions to create a test: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
Follow these instructions to create a test:
1. Select **Inspect quality (Audio-only data)** > **Next**. 1. Choose an audio dataset that you'd like to use for testing, and then select **Next**. If there aren't any datasets available, cancel the setup, and then go to the **Speech datasets** menu to [upload datasets](how-to-custom-speech-upload-data.md).
- :::image type="content" source="media/custom-speech/custom-speech-choose-test-data.png" alt-text="Review your keyword":::
+ :::image type="content" source="media/custom-speech/custom-speech-choose-test-data.png" alt-text="Screenshot of choosing a dataset dialog":::
1. Choose one or two models to evaluate and compare accuracy. 1. Enter the test name and description, and then select **Next**. 1. Review your settings, and then select **Save and close**. ++
+To create a test, use the `spx csr evaluation create` command. Construct the request parameters according to the following instructions:
+
+- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view the test in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the required `model1` parameter to the ID of a model that you want to test.
+- Set the required `model2` parameter to the ID of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
+- Set the required `dataset` parameter to the ID of a dataset that you want to use for the test.
+- Set the `language` parameter; otherwise, the Speech CLI sets "en-US" by default. This should be the locale of the dataset contents. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+
+Here's an example Speech CLI command that creates a test:
+
+```azurecli-interactive
+spx csr evaluation create --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Inspection" --description "My Inspection Description"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": -1.0,
+ "wordErrorRate1": -1.0,
+ "sentenceErrorRate2": -1.0,
+ "sentenceCount2": -1,
+ "wordCount2": -1,
+ "correctWordCount2": -1,
+ "wordSubstitutionCount2": -1,
+ "wordDeletionCount2": -1,
+ "wordInsertionCount2": -1,
+ "sentenceErrorRate1": -1.0,
+ "sentenceCount1": -1,
+ "wordCount1": -1,
+ "correctWordCount1": -1,
+ "wordSubstitutionCount1": -1,
+ "wordDeletionCount1": -1,
+ "wordInsertionCount1": -1
+ },
+ "lastActionDateTime": "2022-05-20T16:42:43Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Inspection",
+ "description": "My Inspection Description"
+}
+```
+
+The top-level `self` property in the response body is the evaluation's URI. Use this URI to get details about the project and test results. You also use this URI to update or delete the evaluation.
+
+For Speech CLI help with evaluations, run the following command:
+
+```azurecli-interactive
+spx help csr evaluation
+```
+++
+To create a test, use the [CreateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEvaluation) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `project` property to the URI of an existing project. This is recommended so that you can also view the test in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
+- Set the required `model1` property to the URI of a model that you want to test.
+- Set the required `model2` property to the URI of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
+- Set the required `dataset` property to the URI of a dataset that you want to use for the test.
+- Set the required `locale` property. This should be the locale of the dataset contents. The locale can't be changed later.
+- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+
+Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "displayName": "My Inspection",
+ "description": "My Inspection Description",
+ "locale": "en-US"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": -1.0,
+ "wordErrorRate1": -1.0,
+ "sentenceErrorRate2": -1.0,
+ "sentenceCount2": -1,
+ "wordCount2": -1,
+ "correctWordCount2": -1,
+ "wordSubstitutionCount2": -1,
+ "wordDeletionCount2": -1,
+ "wordInsertionCount2": -1,
+ "sentenceErrorRate1": -1.0,
+ "sentenceCount1": -1,
+ "wordCount1": -1,
+ "correctWordCount1": -1,
+ "wordSubstitutionCount1": -1,
+ "wordDeletionCount1": -1,
+ "wordInsertionCount1": -1
+ },
+ "lastActionDateTime": "2022-05-20T16:42:43Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Inspection",
+ "description": "My Inspection Description"
+}
+```
+
+The top-level `self` property in the response body is the evaluation's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation) details about the evaluation's project and test results. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEvaluation) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEvaluation) the evaluation.
+++
+## Get test results
+
+You should get the test results and [inspect](#compare-transcription-with-audio) each model's transcription results against the audio dataset.
++
+Follow these steps to get test results:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Test models**.
+1. Select the link by test name.
+1. After the test is complete, as indicated by the status set to *Succeeded*, you should see results that include the WER number for each tested model.
+
+This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where additional training and improvements are required.
+++
+To get test results, use the `spx csr evaluation status` command. Construct the request parameters according to the following instructions:
+
+- Set the required `evaluation` parameter to the ID of the evaluation for which you want to get test results.
+
+Here's an example Speech CLI command that gets test results:
+
+```azurecli-interactive
+spx csr evaluation status --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca
+```
+
+The models, audio dataset, transcriptions, and more details are returned in the response body.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": 4.62,
+ "wordErrorRate1": 4.6,
+ "sentenceErrorRate2": 66.7,
+ "sentenceCount2": 3,
+ "wordCount2": 173,
+ "correctWordCount2": 166,
+ "wordSubstitutionCount2": 7,
+ "wordDeletionCount2": 0,
+ "wordInsertionCount2": 1,
+ "sentenceErrorRate1": 66.7,
+ "sentenceCount1": 3,
+ "wordCount1": 174,
+ "correctWordCount1": 166,
+ "wordSubstitutionCount1": 7,
+ "wordDeletionCount1": 1,
+ "wordInsertionCount1": 0
+ },
+ "lastActionDateTime": "2022-05-20T16:42:56Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Inspection",
+ "description": "My Inspection Description"
+}
+```
+
+For Speech CLI help with evaluations, run the following command:
+
+```azurecli-interactive
+spx help csr evaluation
+```
+++
+To get test results, start by using the [GetEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+
+Make an HTTP GET request using the URI as shown in the following example. Replace `YourEvaluationId` with your evaluation ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+The models, audio dataset, transcriptions, and more details are returned in the response body.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": 4.62,
+ "wordErrorRate1": 4.6,
+ "sentenceErrorRate2": 66.7,
+ "sentenceCount2": 3,
+ "wordCount2": 173,
+ "correctWordCount2": 166,
+ "wordSubstitutionCount2": 7,
+ "wordDeletionCount2": 0,
+ "wordInsertionCount2": 1,
+ "sentenceErrorRate1": 66.7,
+ "sentenceCount1": 3,
+ "wordCount1": 174,
+ "correctWordCount1": 166,
+ "wordSubstitutionCount1": 7,
+ "wordDeletionCount1": 1,
+ "wordInsertionCount1": 0
+ },
+ "lastActionDateTime": "2022-05-20T16:42:56Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Inspection",
+ "description": "My Inspection Description"
+}
+```
++
+## Compare transcription with audio
+
+You can inspect the transcription output of each tested model against the audio input dataset. If you included two models in the test, you can compare their transcription quality side by side.
++
+To review the quality of transcriptions:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Test models**.
+1. Select the link by test name.
+1. Play an audio file while reading the corresponding transcription by a model.
+
+If the test dataset included multiple audio files, you'll see multiple rows in the table. If you included two models in the test, transcriptions are shown in side-by-side columns. Transcription differences between models are shown in blue text.
++++
+The audio test dataset, transcriptions, and models tested are returned in the [test results](#get-test-results). If only one model was tested, the `model1` value will match `model2`, and the `transcription1` value will match `transcription2`.
+
+To review the quality of transcriptions:
+1. Download the audio test dataset, unless you already have a copy.
+1. Download the output transcriptions.
+1. Play an audio file while reading the corresponding transcription by a model.
+
+If you're comparing quality between two models, pay particular attention to differences between each model's transcriptions.
+
-## Side-by-side model comparisons
+The audio test dataset, transcriptions, and models tested are returned in the [test results](#get-test-results). If only one model was tested, the `model1` value will match `model2`, and the `transcription1` value will match `transcription2`.
-When the test status is *Succeeded*, select the test item name to see details of the test. This detail page lists all the utterances in your dataset, and shows the recognition results of the two models you are comparing.
+To review the quality of transcriptions:
+1. Download the audio test dataset, unless you already have a copy.
+1. Download the output transcriptions.
+1. Play an audio file while reading the corresponding transcription by a model.
-To help inspect the side-by-side comparison, you can toggle various error types including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column (showing human-labeled transcription and the results of two speech-to-text models), you can decide which model meets your needs and where improvements are needed.
+If you're comparing quality between two models, pay particular attention to differences between each model's transcriptions.
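+
+For example, here's a minimal sketch (with placeholder values) that lists the test's result files by using the `links.files` URI from the [test results](#get-test-results); each file entry in the response includes a link that you can use to download the corresponding transcription output:
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/YourEvaluationId/files" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```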
-Side-by-side model testing is useful to validate which speech recognition model is best for an application. For an objective measure of accuracy, requiring transcribed audio, see [Test model quantitatively](how-to-custom-speech-evaluate-data.md).
## Next steps
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
Last updated 05/08/2022
+zone_pivot_groups: speech-studio-cli-rest
# Custom Speech model lifecycle
-Speech recognition models that are provided by Microsoft are referred to as base models. When you make a speech recognition request, the current base model for each [supported language](language-support.md) is used by default. Base models are updated periodically to improve accuracy and quality.
+You can use a Custom Speech model for some time after it's deployed to your custom endpoint. But when new base models are made available, the older models expire. You must periodically recreate and train your custom model from the latest base model to take advantage of the improved accuracy and quality.
-You can use a custom model for some time after it's trained and deployed. You must periodically recreate and train your custom model from the latest base model to take advantage of the improved accuracy and quality.
-
-Some key terms related to the model lifecycle include:
+Here are some key terms related to the model lifecycle:
* **Training**: Taking a base model and customizing it to your domain/scenario by using text data and/or audio data. In some contexts such as the REST API properties, training is also referred to as **adaptation**. * **Transcription**: Using a model and performing speech recognition (decoding audio into text).
Some key terms related to the model lifecycle include:
## Expiration timeline
-When new models are made available, the older models are retired. Here are timelines for model adaptation and transcription expiration:
+Here are timelines for model adaptation and transcription expiration:
- Training is available for one year after the quarter when the base model was created by Microsoft. - Transcription with a base model is available for two years after the quarter when the base model was created by Microsoft.
When new models are made available, the older models are retired. Here are timel
In this context, quarters end on January 15th, April 15th, July 15th, and October 15th.
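For example, under the timelines above, a base model created on March 1st falls in the quarter that ends on April 15th, so you can use it for training until April 15th of the following year, and for transcription until April 15th of the year after that.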
-## What happens when models expire and how to update them
+## What to do when a model expires
+
+When a custom model or base model expires, it is no longer available for transcription. You can change the model that is used by your custom speech endpoint without downtime.
+
+|Transcription route |Expired model result |Recommendation |
+||||
+|Custom endpoint|Speech recognition requests will fall back to the most recent base model for the same [locale](language-support.md). You will get results, but recognition might not accurately transcribe your domain data. |Update the endpoint's model as described in the [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md) guide. |
+|Batch transcription |[Batch transcription](batch-transcription.md) requests for expired models will fail with a 4xx error. |In each [CreateTranscription](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) REST API request body, set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. |
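+
+For reference, here's a minimal sketch (with placeholder URLs and IDs) of a batch transcription request that pins the `model` property to a model that hasn't expired; omit the `model` property entirely to use the latest base model. See the [batch transcription](batch-transcription.md) guide for the full set of request properties.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+  "contentUrls": [
+    "https://YourStorageAccount.blob.core.windows.net/YourContainer/YourAudioFile.wav"
+  ],
+  "model": {
+    "self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/YourModelId"
+  },
+  "locale": "en-US",
+  "displayName": "My Batch Transcription"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"
+```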
++
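For the batch transcription row above, here's a minimal sketch of a [CreateTranscription](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) request that pins the `model` property to a specific model URI. The audio URL, display name, and IDs are illustrative placeholders, not values from this article:

```azurecli-interactive
# Sketch only: pin a batch transcription to a model that hasn't expired yet.
# Replace YourServiceRegion, YourSubscriptionKey, YourModelId, and the audio URL with your own values.
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
  "displayName": "My Transcription",
  "locale": "en-US",
  "contentUrls": [ "https://contoso.com/myaudiofile.wav" ],
  "model": {
    "self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/YourModelId"
  }
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"
```

Omitting the `model` object entirely makes the request use the latest base model for the locale, as the table notes.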
+## Get base model expiration dates
+
-When a custom model or base model expires, typically speech recognition requests will fall back to the most recent base model for the same language. In this case, your implementation won't break, but recognition might not accurately transcribe your domain data.
+The last date that you could use the base model for training was shown when you created the custom model. For more information, see [Train a Custom Speech model](how-to-custom-speech-train-model.md).
-You can change the model that is used by your custom speech endpoint without downtime:
+Follow these instructions to get the transcription expiration date for a base model:
-[Batch transcription](batch-transcription.md) requests for retired models will fail with a 4xx error. In the [`CreateTranscription`](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) REST API request body, update the `model` parameter to use a base model or custom model that hasn't yet retired. Otherwise you can remove the `model` entry from the JSON to always use the latest base model.
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. The expiration date for the model is shown in the **Expiration** column. This is the last date that you can use the model for transcription.
-## Find out when a model expires
-You can get the adaptation and transcription expiration dates for a model via the Speech Studio and REST API.
+ :::image type="content" source="media/custom-speech/custom-speech-model-expiration.png" alt-text="Screenshot of the deploy models page that shows the transcription expiration date.":::
-### Model expiration dates via Speech Studio
-Here's an example adaptation expiration date shown on the train new model dialog:
-Here's an example transcription expiration date shown on the deployment detail page:
+To get the training and transcription expiration dates for a base model, use the `spx csr model status` command. Construct the request parameters according to the following instructions:
-### Model expiration dates via REST API
-You can also check the expiration dates via the [`GetBaseModel`](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel) and [`GetModel`](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel) REST API. The `deprecationDates` property in the JSON response includes the adaptation and transcription expiration dates for each model
+- Set the `model` parameter to the URI of the base model that you want to get, as shown in the `--model` option of the example command below. You can run the `spx csr list --base` command to get available base models for all locales; a minimal example follows.
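For example, to list the base models by using the command named above (output omitted):

```azurecli-interactive
# List the available base models for all locales, then copy the URI of the one you want to check.
spx csr list --base
```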
-Here's an example base model retrieved via [`GetBaseModel`](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel):
+Here's an example Speech CLI command to get the training and transcription expiration dates for a base model:
+
+```azurecli-interactive
+spx csr model status --model https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/b0bbc1e0-78d5-468b-9b7c-a5a43b2bb83f
+```
+
+In the response, take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. Also take note of the date in the `transcriptionDateTime` property. This is the last date that you can use the base model for transcription.
+
+You should receive a response body in the following format:
```json {
- "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/e065c68b-21d3-4b28-ae61-eb4c7e797789",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d",
"datasets": [], "links": {
- "manifest": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/e065c68b-21d3-4b28-ae61-eb4c7e797789/manifest"
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d/manifest"
}, "properties": { "deprecationDates": {
Here's an example base model retrieved via [`GetBaseModel`](https://westus2.dev.
"transcriptionDateTime": "2024-01-15T00:00:00Z" } },
- "lastActionDateTime": "2021-10-29T07:19:01Z",
+ "lastActionDateTime": "2022-05-06T10:52:02Z",
"status": "Succeeded",
- "createdDateTime": "2021-10-29T06:58:14Z",
+ "createdDateTime": "2021-10-13T00:00:00Z",
"locale": "en-US",
- "displayName": "20211012 (CLM public preview)",
+ "displayName": "20210831 + Audio file adaptation",
"description": "en-US base model" } ```
-Here's an example custom model retrieved via [`GetModel`](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel). The custom model was trained from the previously mentioned base model (`e065c68b-21d3-4b28-ae61-eb4c7e797789`):
+For Speech CLI help with models, run the following command:
+
+```azurecli-interactive
+spx help csr model
+```
+++
+To get the training and transcription expiration dates for a base model, use the [GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). You can make a [GetBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModels) request to get available base models for all locales.
+
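If you don't already have a base model URI, a [GetBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModels) request to list the base models might look like the following sketch. It assumes the same endpoint pattern and header as the single-model request that follows; `YourSubscriptionKey` and `YourServiceRegion` are placeholders:

```azurecli-interactive
# Sketch: list the base models for your Speech resource region, then pick a model URI from the response.
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/base" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
```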
+Make an HTTP GET request using the model URI as shown in the following example. Replace `BaseModelId` with your model ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/BaseModelId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+In the response, take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. Also take note of the date in the `transcriptionDateTime` property. This is the last date that you can use the base model for transcription.
+
+You should receive a response body in the following format:
```json {
- "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/{custom-model-id}",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d",
+ "datasets": [],
+ "links": {
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d/manifest"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-01-15T00:00:00Z"
+ }
+ },
+ "lastActionDateTime": "2022-05-06T10:52:02Z",
+ "status": "Succeeded",
+ "createdDateTime": "2021-10-13T00:00:00Z",
+ "locale": "en-US",
+ "displayName": "20210831 + Audio file adaptation",
+ "description": "en-US base model"
+}
+```
+++
+## Get custom model expiration dates
++
+Follow these instructions to get the transcription expiration date for a custom model:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Train custom models**.
+1. The expiration date for the custom model is shown in the **Expiration** column. This is the last date that you can use the custom model for transcription. Base models are not shown on the **Train custom models** page.
+
+ :::image type="content" source="media/custom-speech/custom-speech-custom-model-expiration.png" alt-text="Screenshot of the train custom models page that shows the transcription expiration date.":::
+
+You can also follow these instructions to get the transcription expiration date for a custom model:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. The expiration date for the model is shown in the **Expiration** column. This is the last date that you can use the model for transcription.
+
+ :::image type="content" source="media/custom-speech/custom-speech-model-expiration.png" alt-text="Screenshot of the deploy models page that shows the transcription expiration date.":::
++++
+To get the transcription expiration date for your custom model, use the `spx csr model status` command. Construct the request parameters according to the following instructions:
+
+- Set the `model` parameter to the URI of the model that you want to get. Replace `YourModelId` with your model ID and replace `YourServiceRegion` with your Speech resource region.
+
+Here's an example Speech CLI command to get the transcription expiration date for your custom model:
+
+```azurecli-interactive
+spx csr model status --model https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/YourModelId
+```
+
+In the response, take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for transcription. The `adaptationDateTime` property is not applicable, since custom models are not used to train other custom models.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
"baseModel": {
- "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/e065c68b-21d3-4b28-ae61-eb4c7e797789"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
}, "datasets": [ {
- "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/f1a72db2-1e89-496d-859f-f1af7a363bb5"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
} ], "links": {
- "manifest": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/{custom-model-id}/manifest",
- "copyTo": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/{custom-model-id}/copyto"
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
+ "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/copyto"
}, "project": {
- "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/ee3b1c83-c194-490c-bdb1-b6b1a6be6f59"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
}, "properties": { "deprecationDates": { "adaptationDateTime": "2023-01-15T00:00:00Z",
- "transcriptionDateTime": "2024-04-15T00:00:00Z"
+ "transcriptionDateTime": "2024-07-15T00:00:00Z"
} },
- "lastActionDateTime": "2022-02-27T13:03:54Z",
+ "lastActionDateTime": "2022-05-21T13:21:01Z",
"status": "Succeeded",
- "createdDateTime": "2022-02-27T13:03:46Z",
+ "createdDateTime": "2022-05-22T16:37:01Z",
"locale": "en-US",
- "displayName": "Custom model A",
- "description": "My first custom model",
- "customProperties": {
- "PortalAPIVersion": "3"
- }
+ "displayName": "My Model",
+ "description": "My Model Description"
} ```
+For Speech CLI help with models, run the following command:
+
+```azurecli-interactive
+spx help csr model
+```
+++
+To get the transcription expiration date for your custom model, use the [GetModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+
+Make an HTTP GET request using the model URI as shown in the following example. Replace `YourModelId` with your model ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/YourModelId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+In the response, take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for transcription. The `adaptationDateTime` property is not applicable, since custom models are not used to train other custom models.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
+ "baseModel": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "datasets": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
+ }
+ ],
+ "links": {
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
+ "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/copyto"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-07-15T00:00:00Z"
+ }
+ },
+ "lastActionDateTime": "2022-05-21T13:21:01Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-22T16:37:01Z",
+ "locale": "en-US",
+ "displayName": "My Model",
+ "description": "My Model Description"
+}
+```
++ ## Next steps - [Train a model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
Last updated 05/08/2022
+zone_pivot_groups: speech-studio-cli-rest
# Train a Custom Speech model
-In this article, you'll learn how to train a Custom Speech model to improve recognition accuracy from the Microsoft base model. Training a model is typically an iterative process. You will first select a base model that is the starting point for a new model. You train a model with [datasets](./how-to-custom-speech-test-and-train.md) that can include text and audio, and then you test and refine the model with more data.
+In this article, you'll learn how to train a custom model to improve recognition accuracy from the Microsoft base model. The speech recognition accuracy and quality of a Custom Speech model will remain consistent, even when a new base model is released.
-You can use a custom model for a limited time after it's trained and [deployed](how-to-custom-speech-deploy-model.md). You must periodically recreate and adapt your custom model from the latest base model to take advantage of the improved accuracy and quality. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+Training a model is typically an iterative process. You will first select a base model that is the starting point for a new model. You train a model with [datasets](./how-to-custom-speech-test-and-train.md) that can include text and audio, and then you test. If the recognition quality or accuracy doesn't meet your requirements, you can create a new model with additional or modified training data, and then test again.
+
+You can use a custom model for a limited time after it's trained. You must periodically recreate and adapt your custom model from the latest base model to take advantage of the improved accuracy and quality. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
> [!NOTE] > You pay to use Custom Speech models, but you are not charged for training a model.
-## Train the model
+If you plan to train a model with audio data, use a Speech resource in a [region](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation) with dedicated hardware for training. After a model is trained, you can [copy it to a Speech resource](#copy-a-model) in another region as needed.
+
+## Create a model
-If you plan to train a model with audio data, use a Speech resource in a [region](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation) with dedicated hardware for training.
After you've uploaded [training datasets](./how-to-custom-speech-test-and-train.md), follow these instructions to start training your model: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech). 1. Select **Custom Speech** > Your project name > **Train custom models**. 1. Select **Train a new model**.
-1. On the **Select a baseline model** page, select a base model, and then select **Next**. If you aren't sure, select the most recent model from the top of the list.
+1. On the **Select a baseline model** page, select a base model, and then select **Next**. If you aren't sure, select the most recent model from the top of the list. The name of the base model corresponds to the date when it was released in YYYYMMDD format. The customization capabilities of the base model are listed in parentheses after the model name in Speech Studio.
> [!IMPORTANT] > Take note of the **Expiration for adaptation** date. This is the last date that you can use the base model for training. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
After you've uploaded [training datasets](./how-to-custom-speech-test-and-train.
1. Enter a name and description for your custom model, and then select **Next**. 1. Optionally, check the **Add test in the next step** box. If you skip this step, you can run the same tests later. For more information, see [Test recognition quality](how-to-custom-speech-inspect-data.md) and [Test model quantitatively](how-to-custom-speech-evaluate-data.md). 1. Select **Save and close** to kick off the build for your custom model.
+1. Return to the **Train custom models** page.
+
+ > [!IMPORTANT]
+ > Take note of the **Expiration** date. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+++
+To create a model with datasets for training, use the `spx csr model create` command. Construct the request parameters according to the following instructions:
+
+- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can run the `spx csr project list` command to get available projects (a short example follows this list).
+- Set the required `dataset` parameter to the ID of a dataset that you want used for training. To specify multiple datasets, set the `datasets` (plural) parameter and separate the IDs with a semicolon.
+- Set the required `language` parameter. The dataset locale must match the locale of the project. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+- Optionally, you can set the `baseModel` parameter. If you don't specify the `baseModel`, the default base model for the locale is used.
+
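As noted in the `project` bullet above, you can list the projects for your Speech resource to find the ID to pass as `YourProjectId` (a minimal sketch; output omitted):

```azurecli-interactive
# List the projects for your Speech resource and note the ID of the project you want to train in.
spx csr project list
```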
+Here's an example Speech CLI command that creates a model with datasets for training:
+
+```azurecli-interactive
+spx csr model create --project YourProjectId --name "My Model" --description "My Model Description" --dataset YourDatasetId --language "en-US"
+```
+> [!NOTE]
+> In this example, the `baseModel` isn't set, so the default base model for the locale is used. The base model URI is returned in the response.
+
+You should receive a response body in the following format:
-On the main **Train custom models** page, details about the new model are displayed in a table, such as name, description, status (*Processing*, *Succeeded*, or *Failed*), and expiration date.
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
+ "baseModel": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "datasets": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
+ }
+ ],
+ "links": {
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
+ "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/copyto"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-07-15T00:00:00Z"
+ }
+ },
+ "lastActionDateTime": "2022-05-21T13:21:01Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-21T13:21:01Z",
+ "locale": "en-US",
+ "displayName": "My Model",
+ "description": "My Model Description"
+}
+```
> [!IMPORTANT]
-> Take note of the date in the **Expiration** column. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+> Take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+>
+> Take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+The top-level `self` property in the response body is the model's URI. Use this URI to get details about the model's project, manifest, and deprecation dates. You also use this URI to update or delete a model.
+
+For Speech CLI help with models, run the following command:
+
+```azurecli-interactive
+spx help csr model
+```
+++
+To create a model with datasets for training, use the [CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
+- Set the required `datasets` property to the URI of the datasets that you want used for training.
+- Set the required `locale` property. The model locale must match the locale of the project and base model. The locale can't be changed later.
+- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+- Optionally, you can set the `baseModel` property. If you don't specify the `baseModel`, the default base model for the locale is used.
+
+Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
+ },
+ "displayName": "My Model",
+ "description": "My Model Description",
+ "baseModel": null,
+ "datasets": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
+ }
+ ],
+ "locale": "en-US"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models"
+```
+
+> [!NOTE]
+> In this example, the `baseModel` isn't set, so the default base model for the locale is used. The base model URI is returned in the response.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
+ "baseModel": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "datasets": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
+ }
+ ],
+ "links": {
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
+ "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/copyto"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-07-15T00:00:00Z"
+ }
+ },
+ "lastActionDateTime": "2022-05-21T13:21:01Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-21T13:21:01Z",
+ "locale": "en-US",
+ "displayName": "My Model",
+ "description": "My Model Description"
+}
+```
+
+> [!IMPORTANT]
+> Take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+>
+> Take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+The top-level `self` property in the response body is the model's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel) details about the model's project, manifest, and deprecation dates. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteModel) the model.
+++
+## Copy a model
+
+You can copy a model to another project that uses the same locale. For example, after a model is trained with audio data in a [region](regions.md#speech-to-text-pronunciation-assessment-text-to-speech-and-translation) with dedicated hardware for training, you can copy it to a Speech resource in another region as needed.
++
+Follow these instructions to copy a model to a project in another region:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Train custom models**.
+1. Select **Copy to**.
+1. On the **Copy speech model** page, select a target region where you want to copy the model.
+ :::image type="content" source="./media/custom-speech/custom-speech-copy-to-zoom.png" alt-text="Screenshot of the copy speech model page in Speech Studio." lightbox="./media/custom-speech/custom-speech-copy-to-full.png":::
+1. Select a Speech resource in the target region, or create a new Speech resource.
+1. Select a project where you want to copy the model, or create a new project.
+1. Select **Copy**.
+
+After the model is successfully copied, you'll be notified and can view it in the target project.
+++
+Copying a model directly to a project in another region is not supported with the Speech CLI. You can copy a model to a project in another region using the [Speech Studio](https://aka.ms/speechstudio/customspeech) or [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+++
+To copy a model to another Speech resource, use the [CopyModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModel) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the required `targetSubscriptionKey` property to the key of the destination Speech resource.
+
+Make an HTTP POST request using the URI as shown in the following example. Use the region and URI of the model you want to copy from. Replace `YourModelId` with the model ID, replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "targetSubscriptionKey": "ModelDestinationSpeechResourceKey"
+} ' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/YourModelId/copyto"
+```
+
+> [!NOTE]
+> Only the `targetSubscriptionKey` property in the request body has information about the destination Speech resource.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae",
+ "baseModel": {
+ "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/eb5450a7-3ca2-461a-b2d7-ddbb3ad96540"
+ },
+ "links": {
+ "manifest": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/manifest",
+ "copyTo": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/copyto"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-07-15T00:00:00Z"
+ }
+ },
+ "lastActionDateTime": "2022-05-22T23:15:27Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-22T23:15:27Z",
+ "locale": "en-US",
+ "displayName": "My Model",
+ "description": "My Model Description",
+ "customProperties": {
+ "PortalAPIVersion": "3",
+ "Purpose": "",
+ "VadKind": "None",
+ "ModelClass": "None",
+ "UsesHalide": "False",
+ "IsDynamicGrammarSupported": "False"
+ }
+}
+```
+++
+## Connect a model
+
+A model that you copy by using the Speech CLI or REST API isn't automatically connected to a project in the destination Speech resource. Connecting a model is a matter of updating the model with a reference to the project.
++
+If you're prompted in Speech Studio, you can connect the copied model by selecting the **Connect** button.
++++
+To connect a model to a project, use the `spx csr model update` command. Construct the request parameters according to the following instructions:
+
+- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the required `model` parameter to the ID of the model that you want to connect to the project.
+
+Here's an example Speech CLI command that connects a model to a project:
+
+```azurecli-interactive
+spx csr model update --model YourModelId --project YourProjectId
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "project": {
+ "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+ }
+}
+```
+
+For Speech CLI help with models, run the following command:
+
+```azurecli-interactive
+spx help csr model
+```
+++
+To connect a new model to a project of the Speech resource where the model was copied, use the [UpdateModel](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [GetProjects](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
+
+Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [CopyModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModel) response body. Replace `YourModelId` with the new model ID, replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "project": {
+ "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+ }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/YourModelId"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "project": {
+ "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+ }
+}
+```
++ ## Next steps
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
Last updated 05/08/2022
+zone_pivot_groups: speech-studio-cli-rest
# Upload training and testing datasets for Custom Speech You need audio or text data for testing the accuracy of Microsoft speech recognition or training your custom models. For information about the data types supported for testing or training your model, see [Training and testing datasets](how-to-custom-speech-test-and-train.md).
-## Upload datasets in Speech Studio
+> [!TIP]
+> You can also use the [online transcription editor](how-to-custom-speech-transcription-editor.md) to create and refine labeled audio datasets.
+
+## Upload datasets
+ To upload your own datasets in Speech Studio, follow these steps:
To upload your own datasets in Speech Studio, follow these steps:
After your dataset is uploaded, go to the **Train custom models** page to [train a custom model](how-to-custom-speech-train-model.md)
-### Upload datasets via REST API
+++
+To create a dataset and connect it to an existing project, use the `spx csr dataset create` command. Construct the request parameters according to the following instructions:
-You can use [Speech-to-text REST API v3.0](rest-speech-to-text.md) to upload a dataset by using the [CreateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) request.
+- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the dataset in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the required `kind` parameter. The possible values for dataset kind are Language, Acoustic, Pronunciation, and AudioFiles.
+- Set the required `content` parameter. This is the location of the dataset. The Speech CLI `content` parameter corresponds to the `contentUrl` property in the JSON request and response.
+- Set the required `language` parameter. The dataset locale must match the locale of the project. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
-To connect the dataset to an existing project, fill out the request body according to the following format:
+Here's an example Speech CLI command that creates a dataset and connects it to an existing project:
+
+```azurecli-interactive
+spx csr dataset create --kind "Acoustic" --name "My Acoustic Dataset" --description "My Acoustic Dataset Description" --project YourProjectId --content YourContentUrl --language "en-US"
+```
+
+You should receive a response body in the following format:
```json {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c",
"kind": "Acoustic", "contentUrl": "https://contoso.com/mydatasetlocation",
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c/files"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/70ccbffc-cafb-4301-aa9f-ef658559d96e"
+ },
+ "properties": {
+ "acceptedLineCount": 0,
+ "rejectedLineCount": 0
+ },
+ "lastActionDateTime": "2022-05-20T14:07:11Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-20T14:07:11Z",
"locale": "en-US",
- "displayName": "My speech dataset name",
- "description": "My speech dataset description",
+ "displayName": "My Acoustic Dataset",
+ "description": "My Acoustic Dataset Description"
+}
+```
+
+The top-level `self` property in the response body is the dataset's URI. Use this URI to get details about the dataset's project and files. You also use this URI to update or delete a dataset.
+
+For Speech CLI help with datasets, run the following command:
+
+```azurecli-interactive
+spx help csr dataset
+```
++++
+To create a dataset and connect it to an existing project, use the [CreateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) operation of the [Speech-to-text REST API v3.0](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the dataset in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
+- Set the required `kind` property. The possible values for dataset kind are Language, Acoustic, Pronunciation, and AudioFiles.
+- Set the required `contentUrl` property. This is the location of the dataset.
+- Set the required `locale` property. The dataset locale must match the locale of the project. The locale can't be changed later.
+- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+
+Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "kind": "Acoustic",
+ "displayName": "My Acoustic Dataset",
+ "description": "My Acoustic Dataset Description",
"project": {
- "self": "https://westeurope.api.cognitive.microsoft.com/speechtotext/v3.0/projects/c1c643ae-7da5-4e38-9853-e56e840efcb2"
- }
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/70ccbffc-cafb-4301-aa9f-ef658559d96e"
+ },
+ "contentUrl": "https://contoso.com/mydatasetlocation",
+ "locale": "en-US",
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/datasets"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c",
+ "kind": "Acoustic",
+ "contentUrl": "https://contoso.com/mydatasetlocation",
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c/files"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/70ccbffc-cafb-4301-aa9f-ef658559d96e"
+ },
+ "properties": {
+ "acceptedLineCount": 0,
+ "rejectedLineCount": 0
+ },
+ "lastActionDateTime": "2022-05-20T14:07:11Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-20T14:07:11Z",
+ "locale": "en-US",
+ "displayName": "My Acoustic Dataset",
+ "description": "My Acoustic Dataset Description"
} ```
-You can get a list of existing project URLs that can be used in the `project` element by using the [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request.
+The top-level `self` property in the response body is the dataset's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDataset) details about the dataset's project and files. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateDataset) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteDataset) the dataset.
+
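As an example of using that URI, a request for the dataset's details might look like the following sketch. It follows the same header and placeholder conventions as the request above; `YourDatasetId` stands in for the ID from the dataset's `self` URI:

```azurecli-interactive
# Sketch: retrieve the dataset to check its status, project, and file links.
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/YourDatasetId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
```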
-> [!NOTE]
-> Connecting a dataset to a Custom Speech project isn't required to train and test a custom model using the REST API or Speech CLI. But if the dataset is not connected to any project, you won't be able to train or test a model in the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+> [!IMPORTANT]
+> Connecting a dataset to a Custom Speech project isn't required to train and test a custom model using the REST API or Speech CLI. But if the dataset is not connected to any project, you can't select it for training or testing in the [Speech Studio](https://aka.ms/speechstudio/customspeech).
## Next steps
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
The Speech service is available in these regions for speech-to-text, pronunciati
[!INCLUDE [](../../../includes/cognitive-services-speech-service-region-identifier.md)]
-If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. You can use the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to copy the fully trained model to another region later.
+If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. Then you can use the [Speech-to-text REST API v3.0](rest-speech-to-text.md) to [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region.
> [!TIP]
-> For pronunciation assessment feature, `en-US` and `en-GB` are available in all regions listed above, `zh-CN` is available in East Asia and Southeast Asia regions, `es-ES` and `fr-FR` are available in West Europe region, and `en-AU` is available in Australia East region.
+> For pronunciation assessment, `en-US` and `en-GB` are available in all regions listed above, `zh-CN` is available in East Asia and Southeast Asia regions, `es-ES` and `fr-FR` are available in West Europe region, and `en-AU` is available in Australia East region.
### Intent recognition
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
See below for information about changes to Speech services and resources.
## What's new?
-* Speech SDK 1.21.0 and Speech CLI 1.21.0 were released in April 2022. See details below.
+* Speech SDK 1.22.0 and Speech CLI 1.22.0 were released in June 2022. See details below.
* Custom speech-to-text container v3.1.0 released in March 2022, with support to get display models. * TTS Service March 2022, public preview of Cheerful and Sad styles with fr-FR-DeniseNeural.
-* TTS Service February 2022, public preview of Custom Neural Voice Lite, extended CNV language support to 49 locales.
## Release notes
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/call-api.md
Previously updated : 05/13/2022 Last updated : 06/03/2022
You can query the deployment programmatically through the [prediction API](https
## Test deployed model
-You can use the Language Studio to submit an utterance, get predictions and visualize the results.
+You can use Language Studio to submit an utterance, get predictions and visualize the results.
[!INCLUDE [Test model](../includes/language-studio/test-model.md)]
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/create-project.md
Previously updated : 05/13/2022 Last updated : 06/03/2022
Before you start using CLU, you will need an Azure Language resource.
[!INCLUDE [create a new resource from the Azure portal](../includes/resource-creation-azure-portal.md)] ## Sign in to Language Studio
cognitive-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/tag-utterances.md
Previously updated : 04/26/2022 Last updated : 06/03/2022
Use the following steps to label your utterances:
* *Unique utterances per labeled entity* where each utterance is counted if it contains at least one labeled instance of this entity. * *Utterances per intent* where you can view count of utterances per intent. > [!NOTE]
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/concepts/data-formats.md
Previously updated : 05/24/2022 Last updated : 06/03/2022 # Accepted custom NER data formats
-If you are trying to [import your data](../how-to/create-project.md#import-project) into custom NER, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use the Language Studio to [label your documents](../how-to/tag-data.md).
+If you are trying to [import your data](../how-to/create-project.md#import-project) into custom NER, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use Language Studio to [label your documents](../how-to/tag-data.md).
## Labels file format
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/call-api.md
Previously updated : 05/24/2022 Last updated : 06/03/2022 ms.devlang: csharp, python
You can query the deployment programmatically using the [Prediction API](https:/
## Test deployed model
-You can use the Language Studio to submit the custom entity recognition task and visualize the results.
+You can use Language Studio to submit the custom entity recognition task and visualize the results.
[!INCLUDE [Test model](../includes/language-studio/test-model.md)]
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/create-project.md
Previously updated : 05/24/2022 Last updated : 06/03/2022
You can create a resource in the following ways:
[!INCLUDE [create a new resource from the Azure portal](../includes/resource-creation-azure-portal.md)] [!INCLUDE [create a new resource with Azure PowerShell](../includes/resource-creation-powershell.md)]
If you have already labeled data, you can use it to get started with the service
### [Language Studio](#tab/language-studio) ### [Rest APIs](#tab/rest-api)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/quickstart.md
Previously updated : 04/25/2022 Last updated : 06/02/2022 zone_pivot_groups: usage-custom-language-features
zone_pivot_groups: usage-custom-language-features
Use this article to get started with creating a custom NER project where you can train custom models for custom entity recognition. A model is an object that's trained to do a certain task. For this system, the models extract named entities. Models are trained by learning from tagged data.
-In this article, we use the Language studio to demonstrate key concepts of custom Named Entity Recognition (NER). As an example we'll build a custom NER model to extract relevant entities from loan agreements.
+In this article, we use Language Studio to demonstrate key concepts of custom Named Entity Recognition (NER). As an example, we'll build a custom NER model to extract relevant entities from loan agreements, such as the:
+* Date of the agreement
+* Borrower's name, address, city and state
+* Lender's name, address, city and state
+* Loan and interest amounts
::: zone pivot="language-studio"
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/call-api.md
Previously updated : 03/15/2022 Last updated : 06/03/2022 ms.devlang: csharp, python
You can query the deployment programmatically [Prediction API](https://aka.ms/ct
## Test deployed model
-You can use the Language Studio to submit the custom text classification task and visualize the results.
+You can use Language Studio to submit the custom text classification task and visualize the results.
[!INCLUDE [Test model](../includes/language-studio/test-model.md)]
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/create-project.md
Previously updated : 05/06/2022 Last updated : 06/03/2022
You also will need an Azure storage account where you will upload your `.txt` do
### [Using Language Studio](#tab/language-studio) ### [Using Azure PowerShell](#tab/azure-powershell)
If you have already labeled data, you can use it to get started with the service
### [Language Studio](#tab/studio) ### [Rest APIs](#tab/apis)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/quickstart.md
Previously updated : 05/06/2022 Last updated : 06/02/2022 zone_pivot_groups: usage-custom-language-features
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/tutorials/cognitive-search.md
Previously updated : 05/24/2022 Last updated : 06/03/2022
Typically after you create a project, you go ahead and start [tagging the docume
## Deploy your model
-Generally after training a model you would review it's [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in the Language studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+Generally, after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
[!INCLUDE [Deploy a model using Language Studio](../includes/language-studio/deploy-model.md)]
Training could take some time, between 10 and 30 minutes, for this sample dataset.
## Deploy your model
-Generally after training a model you would review it's [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in the Language studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+Generally, after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
### Submit deployment job
cognitive-services None Intent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/concepts/none-intent.md
Previously updated : 05/19/2022 Last updated : 06/03/2022
The score should be set according to your own observations of prediction scores,
When you export a project's JSON file, the None score threshold is defined in the _**"settings"**_ parameter of the JSON as the _**"confidenceThreshold"**_, which accepts a decimal value between 0.0 and 1.0.
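For illustration, here's a trimmed sketch of how that setting can appear in an exported project file; all other project properties are omitted:

```json
{
  "settings": {
    "confidenceThreshold": 0.5
  }
}
```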
-The default score for Orchestration Workflow projects is set at **0.5** when creating new project in the language studio.
+The default score for Orchestration Workflow projects is set at **0.5** when creating a new project in Language Studio.
> [!NOTE] > During model evaluation of your test set, the None score threshold is not applied.
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/call-api.md
You can query the deployment programmatically [Prediction API](https://aka.ms/ct
## Test deployed model
-You can use the Language Studio to submit an utterance, get predictions and visualize the results.
+You can use Language Studio to submit an utterance, get predictions and visualize the results.
[!INCLUDE [Test model](../includes/language-studio/test-model.md)]
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/create-project.md
Before you start using orchestration workflow, you will need an Azure Language r
[!INCLUDE [create a new resource from the Azure portal](../includes/resource-creation-azure-portal.md)] ## Sign in to Language Studio
cognitive-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/best-practices.md
Previously updated : 01/26/2022 Last updated : 06/03/2022
Question answering allows users to collaborate on a project/knowledge base. User
## Active learning
-[Active learning](../tutorials/active-learning.md) does the best job of suggesting alternative questions when it has a wide range of quality and quantity of user-based queries. It's important to allow client-applications' user queries to participate in the active learning feedback loop without censorship. Once questions are suggested in the Language Studio portal, you can review and accept or reject those suggestions.
+[Active learning](../tutorials/active-learning.md) does the best job of suggesting alternative questions when it has a wide range of quality and quantity of user-based queries. It's important to allow client-applications' user queries to participate in the active learning feedback loop without censorship. Once questions are suggested in Language Studio, you can review and accept or reject those suggestions.
## Next steps
cognitive-services Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/plan.md
Previously updated : 11/02/2021 Last updated : 06/03/2022 # Plan your question answering app
Question answering also supports unstructured content. You can upload a file tha
Currently we do not support URLs for unstructured content.
-The ingestion process converts supported content types to markdown. All further editing of the *answer* is done with markdown. After you create a knowledge base, you can edit QnA pairs in the Language Studio portal with rich text authoring.
+The ingestion process converts supported content types to markdown. All further editing of the *answer* is done with markdown. After you create a knowledge base, you can edit QnA pairs in Language Studio with rich text authoring.
### Data format considerations
Question answering uses _active learning_ to improve your knowledge base by sugg
### Providing a default answer
-If your knowledge base doesn't find an answer, it returns the _default answer_. This answer is configurable on the **Settings** page.).
+If your knowledge base doesn't find an answer, it returns the _default answer_. This answer is configurable on the **Settings** page.
This default answer is different from the Azure bot default answer. You configure the default answer for your Azure bot in the Azure portal as part of configuration settings. It's returned when the score threshold isn't met.
cognitive-services Project Development Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/project-development-lifecycle.md
Previously updated : 11/02/2021 Last updated : 06/03/2022 # Question answering project lifecycle
Learn how to [create a knowledge base](../how-to/create-test-deploy.md).
## Testing and updating your project
-The project is ready for testing once it is populated with content, either editorially or through automatic extraction. Interactive testing can be done in the Language Studio portal, in the custom question answering menu through the **Test** panel. You enter common user queries. Then you verify that the responses returned with both the correct response and a sufficient confidence score.
+The project is ready for testing once it is populated with content, either editorially or through automatic extraction. Interactive testing can be done in Language Studio, in the custom question answering menu through the **Test** panel. You enter common user queries. Then you verify that each query returns the correct response with a sufficient confidence score.
* **To fix low confidence scores**: add alternate questions. * **When a query incorrectly returns the [default response](../How-to/change-default-answer.md)**: add new answers to the correct question.
Based on what you learn from your analytics, make appropriate updates to your pr
## Version control for data in your knowledge base
-Version control for data is provided through the import/export features on the project page in the question answering section of the Language Studio portal.
+Version control for data is provided through the import/export features on the project page in the question answering section of Language Studio.
You can back up a project/knowledge base by exporting the project, in either `.tsv` or `.xls` format. Once exported, include this file as part of your regular source control check.
A project/knowledge base has two states: *test* and *published*.
### Test project/knowledge base
-The *test knowledge base* is the version currently edited and saved. The test version has been tested for accuracy, and for completeness of responses. Changes made to the test knowledge base don't affect the end user of your application or chat bot. The test knowledge base is known as `test` in the HTTP request. The `test` knowledge is available with the Language Studio's interactive **Test** pane.
+The *test knowledge base* is the version currently edited and saved. The test version has been tested for accuracy and for completeness of responses. Changes made to the test knowledge base don't affect the end user of your application or chat bot. The test knowledge base is known as `test` in the HTTP request. The `test` knowledge base is available with Language Studio's interactive **Test** pane.
### Production project/knowledge base
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/encrypt-data-at-rest.md
Previously updated : 11/02/2021 Last updated : 06/03/2022
Customer-managed keys are available in all Azure Search regions.
## Encryption of data in transit
-The language studio portal runs in the user's browser. Every action triggers a direct call to the respective Cognitive Service API. Hence, question answering is compliant for data in transit.
+Language Studio runs in the user's browser. Every action triggers a direct call to the respective Cognitive Service API. Hence, question answering is compliant for data in transit.
## Next steps
cognitive-services Manage Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/manage-knowledge-base.md
description: Custom question answering allows you to manage projects by providin
Previously updated : 11/02/2021 Last updated : 06/03/2022
From the **Edit knowledge base page** you can:
## Delete project
-Deleting a project is a permanent operation. It can't be undone. Before deleting a project, you should export the project from the main question answering page within the Language Studio.
+Deleting a project is a permanent operation. It can't be undone. Before deleting a project, you should export the project from the main question answering page within Language Studio.
If you share your project with collaborators and then later delete it, everyone loses access to the project.
cognitive-services Migrate Qnamaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-qnamaker.md
You can follow the steps below to migrate knowledge bases:
> [!div class="mx-imgBorder"] > ![Migrate QnAMaker with red selection box around the knowledge base selection option with a drop-down displaying three knowledge base names](../media/migrate-qnamaker/select-knowledge-bases.png)
-8. You can review the knowledge bases you plan to migrate. There could be some validation errors in project names as we follow stricter validation rules for custom question answering projects. To resolve these errors occuring due to invalid characters, select the checkbox (in red) and click **Next**. This is a one-click method to replace the problematic charcaters in the name with the accepted characters. If there's a duplicate, a new unique project name is generated by the system.
+8. You can review the knowledge bases you plan to migrate. There could be some validation errors in project names as we follow stricter validation rules for custom question answering projects. To resolve these errors occurring due to invalid characters, select the checkbox (in red) and click **Next**. This is a one-click method to replace the problematic characters in the name with the accepted characters. If there's a duplicate, a new unique project name is generated by the system.
> [!CAUTION] > If you migrate a knowledge base with the same name as a project that already exists in the target language resource, **the content of the project will be overridden** by the content of the selected knowledge base.
You can follow the steps below to migrate knowledge bases:
10. It will take a few minutes for the migration to occur. Do not cancel the migration while it is in progress. You can navigate to the migrated projects within the [Language Studio](https://language.azure.com/) post migration. > [!div class="mx-imgBorder"]
- > ![Screenshot of successfully migrated knowledge bases with information that you can publish by using the Language Studio](../media/migrate-qnamaker/migration-success.png)
+ > ![Screenshot of successfully migrated knowledge bases with information that you can publish by using Language Studio](../media/migrate-qnamaker/migration-success.png)
If any knowledge bases fail to migrate to custom question answering projects, an error will be displayed. The most common migration errors occur when:
cognitive-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/network-isolation.md
This will establish a private endpoint connection between language resource and
Follow the steps below to restrict public access to question answering language resources. Protect a Cognitive Services resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal).
-After restricting access to Cognitive Service resource based on VNet, To browse knowledge bases on the Language Studio portal from your on-premises network or your local browser.
+After you restrict access to the Cognitive Services resource based on a virtual network, use the following steps to browse knowledge bases in Language Studio from your on-premises network or your local browser. A CLI alternative for the firewall IP rule is sketched after the list.
- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks). - Grant access to your [local browser/machine](../../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules). - Add the **public IP address of the machine under the Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**.
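If you prefer the CLI to the portal for the firewall step above, a rule along these lines should work; treat it as a sketch, with placeholder values for the resource name, resource group, and IP address.

```azurecli
az cognitiveservices account network-rule add \
  --name <your-language-resource> \
  --resource-group <your-resource-group> \
  --ip-address <your-public-ip-address>
```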
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/overview.md
recommendations: false Previously updated : 11/02/2021 Last updated : 06/03/2022 keywords: "qna maker, low code chat bots, multi-turn conversations"
Once a question answering knowledge base is published, a client application send
## Build low code chat bots
-The language studio portal provides the complete project/knowledge base authoring experience. You can import documents, in their current form, to your knowledge base. These documents (such as an FAQ, product manual, spreadsheet, or web page) are converted into question and answer pairs. Each pair is analyzed for follow-up prompts and connected to other pairs. The final _markdown_ format supports rich presentation including images and links.
+Language Studio provides the complete project/knowledge base authoring experience. You can import documents, in their current form, to your knowledge base. These documents (such as an FAQ, product manual, spreadsheet, or web page) are converted into question and answer pairs. Each pair is analyzed for follow-up prompts and connected to other pairs. The final _markdown_ format supports rich presentation including images and links.
Once your knowledge base is edited, publish the knowledge base to a working [Azure Web App bot](https://azure.microsoft.com/services/bot-service/) without writing any code. Test your bot in the [Azure portal](https://portal.azure.com) or download it and continue development.
cognitive-services Document Format Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/reference/document-format-guidelines.md
Question answering can process semi-structured support web pages, such as web ar
## Import and export knowledge base
-**TSV and XLS files**, from exported knowledge bases, can only be used by importing the files from the **Settings** page in the language studio. They cannot be used as data sources during knowledge base creation or from the **+ Add file** or **+ Add URL** feature on the **Settings** page.
+**TSV and XLS files**, from exported knowledge bases, can only be used by importing the files from the **Settings** page in Language Studio. They cannot be used as data sources during knowledge base creation or from the **+ Add file** or **+ Add URL** feature on the **Settings** page.
When you import the knowledge base through these **TSV and XLS files**, the question answer pairs get added to the editorial source and not the sources from which the question and answers were extracted in the exported knowledge base.
cognitive-services Bot Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/bot-service.md
After deploying your project/knowledge base, you can create a bot from the **Dep
* When you make changes to the knowledge base and redeploy, you don't need to take further action with the bot. It's already configured to work with the knowledge base, and works with all future changes to the knowledge base. Every time you publish a knowledge base, all the bots connected to it are automatically updated.
-1. In the Language Studio portal, on the question answering **Deploy knowledge base** page, select **Create bot**.
+1. In Language Studio, on the question answering **Deploy knowledge base** page, select **Create bot**.
> [!div class="mx-imgBorder"] > ![Screenshot of UI with option to create a bot in Azure.](../media/bot-service/create-bot-in-azure.png)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
Previously updated : 06/02/2022 Last updated : 06/03/2022 # What is document and conversation summarization (preview)?
-Summarization is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
+Summarization is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
# [Document summarization](#tab/document-summarization)
To use this feature, you submit raw unstructured text for analysis and handle th
|Development option |Description | Links | ||||
-| Language Studio | A web-based platform that enables you to try document summarization without needing writing code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/summarization) <br> • [Quickstart: Use the Language studio](../language-studio.md) |
+| Language Studio | A web-based platform that enables you to try document summarization without needing to write code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/summarization) <br> • [Quickstart: Use Language Studio](../language-studio.md) |
| REST API or Client library (Azure SDK) | Integrate document summarization into your applications using the REST API, or the client library available in a variety of languages. | • [Quickstart: Use document summarization](quickstart.md) |
An AI system includes not only the technology, but also the people who will use
* [Transparency note for Azure Cognitive Service for Language](/legal/cognitive-services/language-service/transparency-note?context=/azure/cognitive-services/language-service/context/context) * [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-summarization?context=/azure/cognitive-services/language-service/context/context) * [Characteristics and limitations of summarization](/legal/cognitive-services/language-service/characteristics-and-limitations-summarization?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
-
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/overview.md
To use this feature, you submit raw unstructured text for analysis and handle th
|Development option |Description | Links | ||||
-| Language Studio | A web-based platform that enables you to try Text Analytics for health without needing writing code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/healthAnalysis) <br> • [Quickstart: Use the Language studio](../language-studio.md) |
+| Language Studio | A web-based platform that enables you to try Text Analytics for health without needing to write code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/healthAnalysis) <br> • [Quickstart: Use Language Studio](../language-studio.md) |
| REST API or Client library (Azure SDK) | Integrate Text Analytics for health into your applications using the REST API, or the client library available in a variety of languages. | • [Quickstart: Use Text Analytics for health](quickstart.md) | | Docker container | Use the available Docker container to deploy this feature on-premises, letting you bring the service closer to your data for compliance, security, or other operational reasons. | • [How to deploy on-premises](how-to/use-containers.md) |
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/known-issues.md
Occasionally, microphone or camera devices won't be released on time, and that c
Incoming video streams won't stop rendering if the user is on iOS 15.2+ and is using SDK version 1.4.1-beta.1+, the unmute/start video steps will still be required to re-start outgoing audio and video.
+For iOS 15.4+, audio and video should recover automatically in most cases. In some edge cases, the application must call an API to 'unmute' (which can be the result of a user action) to recover the outgoing audio.
+ ### iOS with Safari crashes and refreshes the page if a user tries to switch from front camera to back camera. Azure Communication Services Calling SDK version 1.2.3-beta.1 introduced a bug that affects all of the calls made from iOS Safari. The problem occurs when a user tries to switch the camera video stream from front to back. Switching camera results in Safari browser to crash and reload the page.
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
Alice has ordered a product from Contoso and struggles to set it up. Alice calls
- One participant (Bob) x 30 minutes x $0.004 per participant per minute = $0.12 [both video and audio are charged at the same rate] - One participant (Charlie) x 25 minutes x $0.000 per participant per minute = $0.0*.
-*Charlie's participation is covered by her Teams license.
+*Charlie's participation is covered by his Teams license.
**Total cost of the visit**: - Teams cost for a user joining using the Communication Services JavaScript SDK: 25 minutes from Teams minute pool
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
The SQL Server connector has different versions, based on [logic app type and ho
| Logic app | Environment | Connector version | |--|-|-|
-| **Consumption** | Multi-tenant Azure Logic Apps | [Managed connector - Standard class](managed.md). For more information, review the [SQL Server managed connector reference](/connectors/sql). |
-| **Consumption** | Integration service environment (ISE) | [Managed connector - Standard class](managed.md) and ISE version. For more information, review the [SQL Server managed connector reference](/connectors/sql). <br><br>**Note**: The ISE version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits), not the managed version's message limits. |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | [Managed connector - Standard class](managed.md) and [built-in connector](built-in.md), which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). <br><br>The built-in version differs in the following ways: <br><br>- The built-in version has no triggers. <br><br>- The built-in version has a single **Execute Query** action. The action can directly connect to Azure virtual networks without the on-premises data gateway. <br><br>For the managed version, review the [SQL Server managed connector reference](/connectors/sql/). |
+| **Consumption** | Multi-tenant Azure Logic Apps | [Managed connector - Standard class](managed.md). For operations, limits, and other information, review the [SQL Server managed connector reference](/connectors/sql). |
+| **Consumption** | Integration service environment (ISE) | [Managed connector - Standard class](managed.md) and ISE version. For operations, managed connector limits, and other information, review the [SQL Server managed connector reference](/connectors/sql). For ISE-versioned limits, review the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits), not the managed connector's message limits. |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | [Managed connector - Standard class](managed.md) and [built-in connector](built-in.md), which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). <br><br>The built-in version differs in the following ways: <br><br>- The built-in version has no triggers. <br><br>- The built-in version has a single **Execute Query** action. The action can directly access Azure virtual networks with a connection string and doesn't need the on-premises data gateway. <br><br>For managed connector operations, limits, and other information, review the [SQL Server managed connector reference](/connectors/sql/). |
|||| ## Prerequisites
The SQL Server connector has different versions, based on [logic app type and ho
`Server={your-server-address};Database={your-database-name};User Id={your-user-name};Password={your-password};`
-* The logic app workflow where you want to access your SQL database. If you want to start your workflow with a SQL Server trigger operation, you have to start with a blank workflow.
+* The logic app workflow where you want to access your SQL database. To start your workflow with a SQL Server trigger, you have to start with a blank workflow. To use a SQL Server action, start your workflow with any trigger.
<a name="multi-tenant-or-ise"></a>
The SQL Server connector has different versions, based on [logic app type and ho
* Standard logic app workflow
- You can use the SQL Server built-in connector, which requires a connection string. If you want to use the SQL Server managed connector, you need follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
+ You can use the SQL Server built-in connector, which requires a connection string. To use the SQL Server managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
+
+For other connector requirements, review the [SQL Server connector reference](/connectors/sql/).
+
+## Limitations
+
+For more information, review the [SQL Server connector reference](/connectors/sql/).
<a name="add-sql-trigger"></a>
container-apps Communicate Between Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/communicate-between-microservices.md
zone_pivot_groups: container-apps-image-build-type
-# Tutorial: Communication between microservices in Azure Container Apps Preview
+# Tutorial: Communication between microservices in Azure Container Apps
Azure Container Apps exposes each container app through a domain name if [ingress](ingress.md) is enabled. Ingress endpoints for container apps within an external environment can be either publicly accessible or only available to other container apps in the same [environment](environment.md).
Output from the `az acr build` command shows the upload progress of the source c
::: zone pivot="docker-local"
-1. The following command builds a container image for the album UI and tags it with the fully qualified name of the ACR log in server. The `.` at the end of the command represents the docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
+1. The following command builds a container image for the album UI and tags it with the fully qualified name of the ACR login server. The `.` at the end of the command represents the docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
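A minimal sketch of the kind of command this step describes, with a hypothetical registry name standing in for the ACR login server:

```bash
# Run from the src folder that contains the Dockerfile; the trailing '.' is the build context
docker build --tag <your-registry>.azurecr.io/albumui:latest .
```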
# [Bash](#tab/bash)
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md
While publishing ADF resources, the azure pipeline triggers twice or more instea
#### Cause
-Azure DevOps has the 20 MB Rest API limit. When the ARM template exceeds this size, ADF internally splits the template file into multiple files with linked templates to solve this issue. As a side effect, this split could result in customer's triggers being run more than once.
+Azure DevOps has a 20-MB REST API limit. When the ARM template exceeds this size, ADF internally splits the template file into multiple files with linked templates to solve this issue. As a side effect, this split could result in a customer's triggers being run more than once.
#### Resolution
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md
Specifically, this Salesforce connector supports:
- Salesforce Developer, Professional, Enterprise, or Unlimited editions. - Copying data from and to Salesforce production, sandbox, and custom domain.
+>[!NOTE]
+>This connector supports copying any schema from the above-mentioned Salesforce environments, including the [Nonprofit Success Pack](https://www.salesforce.org/products/nonprofit-success-pack/) (NPSP). This allows you to bring your Salesforce nonprofit data into Azure, work with it in Azure data services, unify it with other data sets, and visualize it in Power BI for rapid insights.
+ The Salesforce connector is built on top of the Salesforce REST/Bulk API. When copying data from Salesforce, the connector automatically chooses between REST and Bulk APIs based on the data size – when the result set is large, Bulk API is used for better performance; You can explicitly set the API version used to read/write data via [`apiVersion` property](#linked-service-properties) in linked service. When copying data to Salesforce, the connector uses BULK API v1. >[!NOTE]
data-factory Control Flow Wait Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-wait-activity.md
When you use a Wait activity in a pipeline, the pipeline waits for the specified
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-## Create a Fail activity with UI
+## Create a Wait activity with UI
To use a Wait activity in a pipeline, complete the following steps:
data-factory Transform Data Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-script.md
Last updated 04/20/2022
You use data transformation activities in a Data Factory or Synapse [pipeline](concepts-pipelines-activities.md) to transform and process raw data into predictions and insights. The Script activity is one of the transformation activities that pipelines support. This article builds on the [transform data article](transform-data.md), which presents a general overview of data transformation and the supported transformation activities.
-Using the script activity, you can execute common operations with Data Manipulation Language (DML), and Data Definition Language (DDL). DML statements like SELECT, UPDATE, and INSERT let users retrieve, store, modify, delete, insert and update data in the database. DDL statements like CREATE, ALTER and DROP allow a database manager to create, modify, and remove database objects such as tables, indexes, and users.
+Using the script activity, you can execute common operations with Data Manipulation Language (DML), and Data Definition Language (DDL). DML statements like INSERT, UPDATE, DELETE and SELECT let users insert, modify, delete and retrieve data in the database. DDL statements like CREATE, ALTER and DROP allow a database manager to create, modify, and remove database objects such as tables, indexes, and users.
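As a hedged illustration only, a Script activity in a pipeline definition might look roughly like this; the property names (`scripts`, `type`, `text`) and the linked service reference are recalled from memory and should be treated as assumptions, and the SQL statements are placeholders.

```json
{
  "name": "RunMaintenanceScript",
  "type": "Script",
  "linkedServiceName": {
    "referenceName": "<your-sql-linked-service>",
    "type": "LinkedServiceReference"
  },
  "typeProperties": {
    "scripts": [
      {
        "type": "NonQuery",
        "text": "TRUNCATE TABLE dbo.Staging;"
      },
      {
        "type": "Query",
        "text": "SELECT COUNT(*) AS StagedRows FROM dbo.Staging;"
      }
    ]
  }
}
```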
You can use the Script activity to invoke a SQL script in one of the following data stores in your enterprise or on an Azure virtual machine (VM):
You can use the Script activity to invoke a SQL script in one of the following d
The script may contain either a single SQL statement or multiple SQL statements that run sequentially. You can use the Execute SQL task for the following purposes: -- Truncate a table or view in preparation for inserting data.
+- Truncate a table in preparation for inserting data.
- Create, alter, and drop database objects such as tables and views. - Re-create fact and dimension tables before loading data into them. - Run stored procedures. If the SQL statement invokes a stored procedure that returns results from a temporary table, use the WITH RESULT SETS option to define metadata for the result set.
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Subnets used for DNS resolver have the following limitations:
Outbound endpoints have the following limitations: - An outbound endpoint can't be deleted unless the DNS forwarding ruleset and the virtual network links under it are deleted.
+### Ruleset restrictions
+
+- Rulesets can have no more than 25 rules in Public Preview.
+- Rulesets can't be linked across different subscriptions in Public Preview.
+ ### Other restrictions - IPv6 enabled subnets aren't supported in Public Preview.-- Currently, rulesets can't be linked across different subscriptions.+ ## Next steps
event-hubs Event Hubs Availability And Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-availability-and-consistency.md
We recommend sending events to an event hub without setting partition informatio
- [Send events using .NET](event-hubs-dotnet-standard-getstarted-send.md) - [Send events using Java](event-hubs-java-get-started-send.md)-- [Send events using JavaScript](event-hubs-python-get-started-send.md)
+- [Send events using JavaScript](event-hubs-node-get-started-send.md)
- [Send events using Python](event-hubs-python-get-started-send.md)
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-erdirect.md
Once enrolled, verify that the **Microsoft.Network** resource provider is regist
} ] ```
+ > [!NOTE]
+ > If bandwidth is unavailable in the target location, open a [support request in the Azure Portal](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview) and select the ExpressRoute Direct Support Topic.
+ >
5. Create an ExpressRoute Direct resource based on the location chosen above ExpressRoute Direct supports both QinQ and Dot1Q encapsulation. If QinQ is selected, each ExpressRoute circuit will be dynamically assigned an S-Tag and will be unique throughout the ExpressRoute Direct resource. Each C-Tag on the circuit must be unique on the circuit, but not across the ExpressRoute Direct.
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
To learn more about Azure Firewall Premium Intermediate CA certificate requireme
A network intrusion detection and prevention system (IDPS) allows you to monitor your network for malicious activity, log information about this activity, report it, and optionally attempt to block it.
-Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 4-7), they're fully managed, and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network. You can configure your IDPS private IP address ranges using the **Private IP ranges** preview feature. For more information, see [Azure Firewall preview features](firewall-preview.md#idps-private-ip-ranges-preview).
+Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 3-7), they're fully managed, and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network. You can configure your IDPS private IP address ranges using the **Private IP ranges** preview feature. For more information, see [Azure Firewall preview features](firewall-preview.md#idps-private-ip-ranges-preview).
The Azure Firewall signatures/rulesets include: - An emphasis on fingerprinting actual malware, Command and Control, exploit kits, and in the wild malicious activity missed by traditional prevention methods.
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
The following Resource Provider modes are fully supported:
- `Microsoft.Kubernetes.Data` for managing your Kubernetes clusters on or off Azure. Definitions using this Resource Provider mode use effects _audit_, _deny_, and _disabled_. This mode supports custom definitions as a _public preview_. See
- [Create policy definition from constraint template](../how-to/extension-for-vscode.md) to create a
+ [Create policy definition from constraint template](https://docs.microsoft.com/azure/governance/policy/how-to/extension-for-vscode#create-policy-definition-from-constraint-template) to create a
custom definition from an existing [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) GateKeeper v3 [constraint template](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates). Use
hpc-cache Hpc Cache Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-manage.md
Title: Manage and update Azure HPC Cache description: How to manage and update Azure HPC Cache using the Azure portal or Azure CLI-+ Previously updated : 01/19/2022- Last updated : 06/02/2022+ # Manage your cache
If a new software version is available, the **Upgrade** button becomes active. Y
Client access is not interrupted during a software upgrade, but cache performance slows. Plan to upgrade software during non-peak usage hours or in a planned maintenance period.
-The software update can take several hours. Caches configured with higher throughput take longer to upgrade than caches with smaller peak throughput values.
+The software update can take several hours. Caches configured with higher throughput take longer to upgrade than caches with smaller peak throughput values. The cache status changes to **Upgrading** until the operation completes.
-When a software upgrade is available, you will have a week or so to apply it manually. The end date is listed in the upgrade message. If you don't upgrade during that time, Azure automatically applies the update to your cache. The timing of the automatic upgrade is not configurable. If you are concerned about the cache performance impact, you should upgrade the software yourself before the time period expires.
+When a software upgrade is available, you will have a week or so to apply it manually. The end date is listed in the upgrade message. If you don't upgrade during that time, Azure automatically applies the new software to your cache.
+
+You can use the Azure portal to schedule a more convenient time for the upgrade. Follow the instructions in the **Portal** tab below.
If your cache is stopped when the end date passes, the cache will automatically upgrade software the next time it is started. (The update might not start immediately, but it will start in the first hour.) ### [Portal](#tab/azure-portal)
-Click the **Upgrade** button to begin the software update. The cache status changes to **Upgrading** until the operation completes.
+Click the **Upgrade** button to configure your software update. You have the option to upgrade the software immediately, or to schedule the upgrade for a specific date and time.
+
+![Screenshot of the Schedule software upgrade blade showing radio buttons with "Schedule later" selected and fields to select a new date and time.](media/upgrade-schedule.png)
+
+To upgrade immediately, select **Upgrade now** and click the **Save** button.
+
+To schedule a different upgrade time, select **Schedule later** and select a new date and time.
+
+* The date and time are shown in the browser's local time zone.
+* You can't choose a later time than the deadline in the original message.
+
+When you save the custom date, the banner message will change to show the date you chose.
+
+If you want to revise your scheduled upgrade date, click the **Upgrade** button again. Click the **Reset date** link. This immediately removes your scheduled date.
+
+![Screenshot of the Schedule software upgrade blade with a custom date selected. A text link appears at the left of the date labeled "Reset date".](media/upgrade-reset-date.png)
+
+After you reset the previously scheduled value, the date selector resets to the latest available date and time. You can choose a new date and save it, or click **Discard** to keep the latest date.
+
+You can't change the schedule if there are fewer than 15 minutes remaining before the upgrade.
### [Azure CLI](#tab/azure-cli)
Click the **Upgrade** button to begin the software update. The cache status chan
On the Azure CLI, new software information is included at the end of the cache status report. (Use [az hpc-cache show](/cli/azure/hpc-cache#az-hpc-cache-show) to check.) Look for the string "upgradeStatus" in the message.
-Use [az hpc-cache upgrade-firmware](/cli/azure/hpc-cache#az-hpc-cache-upgrade-firmware) to apply the update, if any exists.
+Use [az hpc-cache upgrade-firmware](/cli/azure/hpc-cache#az-hpc-cache-upgrade-firmware) to apply the software upgrade, if any exists.
If no update is available, this operation has no effect.
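A sketch of that CLI call with placeholder names; the `--resource-group` and `--name` parameters mirror the other `az hpc-cache` commands in this article and are assumed here.

```azurecli
az hpc-cache upgrade-firmware --resource-group <your-resource-group> --name <your-cache-name>
```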
-This example shows the cache status (no update is available) and the results of the upgrade-firmware command.
+This example shows the cache status (no upgrade is available) and the results of the upgrade-firmware command.
```azurecli $ az hpc-cache show --name doc-cache0629
hpc-cache Manage Storage Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/manage-storage-targets.md
Title: Manage Azure HPC Cache storage targets description: How to suspend, remove, force delete, and flush Azure HPC Cache storage targets, and how to understand the storage target state-+ Previously updated : 01/26/2022- Last updated : 05/29/2022+ # View and manage storage targets The storage targets settings page shows information about each storage target for your HPC Cache, and gives options to manage individual storage targets.
+This page also has a utility for customizing the amount of cache space allocated to each individual storage target. Read [Allocate cache storage](#allocate-cache-storage) for details.
+ > [!TIP] > Instructions for listing storage targets using Azure CLI are included in the [Add storage targets](hpc-cache-add-storage.md#view-storage-targets) article. Other actions listed here might not yet be available in Azure CLI. ![Screenshot of the Settings > Storage targets page in the Azure portal. There are multiple storage targets in the list, and column headings show Name, Type, State, Provisioning state, Address/Container, and Usage model for each one.](media/storage-targets-list-states.png)
+<!-- to do: update all storage target list screenshots -->
+ ## Manage storage targets You can perform management actions on individual storage targets. These actions supplement the cache-level options discussed in [Manage your cache](hpc-cache-manage.md).
The **State** value affects which management options you can use. Here's a short
* **Suspended** - The storage target has been taken offline. You can still flush, delete, or force remove this storage target. Choose **Resume** to put the target back in service. * **Flushing** - The storage target is writing data to the back-end storage. The target can't process client requests while flushing, but it will automatically go back to its previous state after it finishes writing data.
+## Allocate cache storage
+
+Optionally, you can configure the amount of cache storage that can be used by each storage target. This feature lets you plan ahead so that space is available to store a particular storage system's files.
+
+If you do not customize the storage allocation, each storage target receives an equal share of the available cache space.
+
+Click the **Allocate storage** button to customize the cache allocation.
+
+![Screenshot of the storage targets page in the Azure portal. The mouse pointer is over the 'Allocate storage' button.](media/allocate-storage-button.png)
+
+On the **Allocate storage** blade, enter the percentage of cache space you want to assign to each storage target. The storage allocations must total 100%.
+
+Remember that some cache space is used for overhead, so the total amount of space available for cached files is not exactly the same as the capacity you chose when you created your HPC Cache.
+
+![Screenshot of the 'Allocate storage' panel at the right side of the storage targets list. Text fields next to each storage target name allow you to enter a new percent value for each target. The screenshot has target 'blob01' set to 75% and target 'blob02' set to 50%. The total is calculated underneath as 125% and an error message explains that the total must be 100%. The Save button is inactive; the Discard button is active.](media/allocate-storage-blade.png)
+
+Click **Save** to complete the allocation.
+ ## Next steps * Learn about [cache-level management actions](hpc-cache-manage.md)
hpc-cache Prime Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/prime-cache.md
Title: Pre-load files in Azure HPC Cache (Preview)
-description: Use the cache priming feature (preview) to populate or preload cache contents before files are requested
-
+ Title: Pre-load files in Azure HPC Cache
+description: Use the cache priming feature to populate or preload cache contents before files are requested
+ Previously updated : 02/03/2022- Last updated : 06/01/2022+
-# Pre-load files in Azure HPC Cache (preview)
+# Pre-load files in Azure HPC Cache
-> [!IMPORTANT]
-> Cache priming is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-Azure HPC CacheΓÇÖs priming feature (preview) allows customers to pre-load files in the cache.
+Azure HPC CacheΓÇÖs priming feature allows customers to pre-load files in the cache.
You can use this feature to fetch your expected working set of files and populate the cache before work begins. This technique is sometimes called cache warming.
The cache accesses the manifest file once when the priming job starts. The SAS U
Use the Azure portal to create a priming job. View your Azure HPC Cache in the portal and select the **Prime cache** page under the **Settings** heading.
-![screenshot of the Priming page in the portal, with several completed jobs.](media/priming-preview.png)
-<!-- to do: screenshot with 'preview' on GUI heading, screenshot with more diverse jobs and statuses -->
+![screenshot of the Priming page in the portal, with several jobs in various states.](media/prime-overview.png)
-Click the **Add priming job** text at the top of the table to define a new job.
+Click the **Start priming job** text at the top of the table to define a new job.
In the **Job name** field, type a unique name for the priming job. Use the **Priming file** field to select your priming manifest file. Select the storage account, container, and file where your priming manifest is stored.
-![screenshot of the Add priming job page, with a job name and priming file path filled in. Below the Priming file field is a link labeled "Select from existing blob location".](media/create-priming-job.png)
+![screenshot of the Start priming job page, with a job name and priming file path filled in. Below the Priming file field is a link labeled "Select from existing blob location".](media/create-priming-job.png)
To select the priming manifest file, click the link to select a storage target. Then select the container where your .json manifest file is stored.
If you can't find the manifest file, your cache might not be able to access th
Priming jobs are listed in the **Prime cache** page in the Azure portal.
-![screenshot of the priming jobs list in the portal, with jobs in various states (running, paused, and success). The cursor has clicked the ... symbol at the right side of one job's row, and a context menu shows options to pause or resume.](media/prime-cache-list.png)
+This page shows each job's name, its state, its current status, and summary statistics about the priming progress. The summary in the **Details** column updates periodically as the job progresses. The **Job status** field is populated when a priming job starts; this field also gives basic error information like **Invalid manifest** if a problem occurs.
+
+While a job is running, the **Percentage complete** column shows an estimate of the progress.
-This page shows each job's name, its state, its current status, and summary statistics about the priming progress. The summary in the **Details** column updates periodically as the job progresses. The **Status** field is populated when a priming job starts; this field also gives basic error information like **Invalid manifest** if a problem occurs.
+Before a priming job starts, it has the state **Queued**. Its **Job status**, **Percentage complete**, and **Details** fields are empty.
-Before a priming job starts, it has the state **Queued**. Its **Status** and **Details** fields are empty.
+![screenshot of the priming jobs list in the portal, with jobs in various states (running, paused, and success). The cursor has clicked the ... symbol at the right side of one job's row, and a context menu shows options to pause or resume.](media/prime-cache-context.png)
-Click the **...** section at the right of the table to pause or resume a priming job.
+Click the **...** section at the right of the table to pause or resume a priming job. (It might take a few minutes for the status to update.)
-To delete a priming job, select it in the list and use the delete control at the top of the table.
+To delete a priming job, select it in the list and use the **Stop** control at the top of the table. You can use the **Stop** control to delete a job in any state.
## Azure REST APIs
-You can use these REST API endpoints to create an HPC Cache priming job. These are part of the `2021-10-01-preview` version of the REST API, so make sure you use that string in the *api_version* term.
+You can use these REST API endpoints to create and manage HPC Cache priming jobs. These are part of the `2022-05-01` version of the REST API, so make sure you use that string in the *api_version* term.
-Read the [Azure REST API reference](/rest/api/azure/) to learn how to use this interface.
+Read the [Azure REST API reference](/rest/api/azure/) to learn how to use these tools.
-### Add a priming job
+### Add a new priming job
+
+The `startPrimingJob` interface creates and queues a priming job. The job starts automatically when resources are available.
```rest URL: POST
- https://MY-ARM-HOST/subscriptions/MY-SUBSCRIPTION-ID/resourceGroups/MY-RESOURCE-GROUP-NAME/providers/Microsoft.StorageCache/caches/MY-CACHE-NAME/addPrimingJob?api-version=2021-10-01-preview
+ https://MY-ARM-HOST/subscriptions/MY-SUBSCRIPTION-ID/resourceGroups/MY-RESOURCE-GROUP-NAME/providers/Microsoft.StorageCache/caches/MY-CACHE-NAME/startPrimingJob?api-version=2022-05-01
BODY: {
URL: POST
For the `primingManifestUrl` value, pass the file's SAS URL or other HTTPS URL that is accessible to the cache. Read [Upload the priming manifest file](#upload-the-priming-manifest-file) to learn more.
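For illustration, a complete request might look like the following. The URL matches the endpoint shown earlier and `primingManifestUrl` is the value described above; `primingJobName` is an assumed property name, not a confirmed schema.

```rest

URL: POST
  https://MY-ARM-HOST/subscriptions/MY-SUBSCRIPTION-ID/resourceGroups/MY-RESOURCE-GROUP-NAME/providers/Microsoft.StorageCache/caches/MY-CACHE-NAME/startPrimingJob?api-version=2022-05-01

BODY:
  {
    "primingJobName": "MY-PRIMING-JOB-NAME",
    "primingManifestUrl": "MY-MANIFEST-SAS-URL"
  }

```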
-### Remove a priming job
+### Stop a priming job
+
+The `stopPrimingJob` interface cancels a job (if it is running) and removes it from the job list. Use this interface to delete a priming job in any state.
```rest URL: POST
- https://MY-ARM-HOST/subscriptions/MY-SUBSCRIPTION-ID/resourceGroups/MY-RESOURCE-GROUP-NAME/providers/Microsoft.StorageCache/caches/MY-CACHE-NAME/removePrimingJob/MY-JOB-ID-TO-REMOVE?api-version=2021-10-01-preview
+ https://MY-ARM-HOST/subscriptions/MY-SUBSCRIPTION-ID/resourceGroups/MY-RESOURCE-GROUP-NAME/providers/Microsoft.StorageCache/caches/MY-CACHE-NAME/stopPrimingJob?api-version=2022-05-01
BODY:
+ {
+ "primingJobId": "MY-JOB-ID-TO-REMOVE"
+ }
+ ``` ### Get priming jobs
Priming job names and IDs are returned, along with other information.
```rest URL: GET
- https://MY-ARM-HOST/subscriptions/MY-SUBSCRIPTION-ID/resourceGroups/MY-RESOURCE-GROUP-NAME/providers/Microsoft.StorageCache/caches/MY-CACHE-NAME?api-version=2021-10-01-preview
+ https://MY-ARM-HOST/subscriptions/MY-SUBSCRIPTION-ID/resourceGroups/MY-RESOURCE-GROUP-NAME/providers/Microsoft.StorageCache/caches/MY-CACHE-NAME?api-version=2022-05-01
BODY: ```
+### Pause a priming job
+
+The `pausePrimingJob` interface suspends a running job.
+
+```rest
+
+URL: POST
+ https://MY-ARM-HOST/subscriptions/MY-SUBSCRIPTION-ID/resourceGroups/MY-RESOURCE-GROUP-NAME/providers/Microsoft.StorageCache/caches/MY-CACHE-NAME/pausePrimingJob?api-version=2022-05-01
+
+BODY:
+ {
+ "primingJobId": "MY-JOB-ID-TO-PAUSE"
+ }
+
+```
+
+### Resume a priming job
+
+Use the `resumePrimingJob` interface to reactivate a suspended priming job.
+
+```rest
+
+URL: POST
+ https://MY-ARM-HOST/subscriptions/MY-SUBSCRIPTION-ID/resourceGroups/MY-RESOURCE-GROUP-NAME/providers/Microsoft.StorageCache/caches/MY-CACHE-NAME/resumePrimingJob?api-version=2022-05-01
+
+BODY:
+ {
+ "primingJobId": "MY-JOB-ID-TO-RESUME"
+ }
+
+```
+ ## Frequently asked questions * Can I reuse a priming job?
BODY:
* How long does a failed or completed priming job stay in the list?
- Priming jobs persist in the list until you delete them. On the portal **Prime cache** page, check the checkbox next to the job and select the **Delete** control at the top of the list.
+ Priming jobs persist in the list until you delete them. On the portal **Prime cache** page, check the checkbox next to the job and select the **Stop** control at the top of the list to delete the job.
* What happens if the content I'm pre-loading is larger than my cache storage?
BODY:
## Next steps
-* For help with HPC Cache priming (preview) or to report a problem, use the standard Azure support process, described in [Get help with Azure HPC Cache](hpc-cache-support-ticket.md).
+* For more help with HPC Cache priming, follow the process in [Get help with Azure HPC Cache](hpc-cache-support-ticket.md).
* Learn more about [Azure REST APIs](/rest/api/azure/)
iot-dps How To Manage Enrollments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-manage-enrollments.md
To remove an enrollment entry:
4. In the **Settings** menu, select **Manage enrollments**.
-5. Select the enrollment entry you want to remove.
+5. Select the enrollment entry you want to remove.
6. At the top of the page, select **Delete**. 7. When prompted to confirm, select **Yes**. 8. Once the action is completed, you'll see that your entry has been removed from the list of device enrollments.+
+> [!NOTE]
+> Deleting an enrollment group doesn't delete the registration records for devices in the group. DPS uses the registration records to determine whether the maximum number of registrations has been reached for the DPS instance. Orphaned registration records still count against this quota. For the current maximum number of registrations supported for a DPS instance, see [Quotas and limits](about-iot-dps.md#quotas-and-limits).
+>
+>You may want to delete the registration records for the enrollment group before deleting the enrollment group itself. You can see and manage the registration records for an enrollment group manually on the **Registration Records** tab for the group in Azure portal. You can retrieve and manage the registration records programmatically using the [Device Registration State REST APIs](/rest/api/iot-dps/service/device-registration-state) or equivalent APIs in the [DPS service SDKs](libraries-sdks.md), or using the [az iot dps enrollment-group registration Azure CLI commands](/cli/azure/iot/dps/enrollment-group/registration).
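As a rough sketch of the CLI route, the command might look like this; the subcommand and parameter names are assumptions based on the linked command group, not verified syntax.

```azurecli
# List the registration records for an enrollment group
az iot dps enrollment-group registration list --dps-name <your-dps-name> --resource-group <your-resource-group> --enrollment-id <your-enrollment-group-id>
```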
iot-dps How To Revoke Device Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-revoke-device-access-portal.md
After you finish the procedure, you should see your entry removed from the list
> [!NOTE] > If you delete an enrollment group for a certificate, devices that have the certificate in their certificate chain might still be able to enroll if an enabled enrollment group for the root certificate or another intermediate certificate higher up in their certificate chain exists.
+> [!NOTE]
+> Deleting an enrollment group doesn't delete the registration records for devices in the group. DPS uses the registration records to determine whether the maximum number of registrations has been reached for the DPS instance. Orphaned registration records still count against this quota. For the current maximum number of registrations supported for a DPS instance, see [Quotas and limits](about-iot-dps.md#quotas-and-limits).
+>
+>You may want to delete the registration records for the enrollment group before deleting the enrollment group itself. You can see and manage the registration records for an enrollment group manually on the **Registration Records** tab for the group in Azure portal. You can retrieve and manage the registration records programmatically using the [Device Registration State REST APIs](/rest/api/iot-dps/service/device-registration-state) or equivalent APIs in the [DPS service SDKs](libraries-sdks.md), or using the [az iot dps enrollment-group registration Azure CLI commands](/cli/azure/iot/dps/enrollment-group/registration).
+ ## Disallow specific devices in an enrollment group Devices that implement the X.509 attestation mechanism use the device's certificate chain and private key to authenticate. When a device connects and authenticates with Device Provisioning Service, the service first looks for an individual enrollment with a registration ID that matches the common name (CN) of the device (end-entity) certificate. The service then searches enrollment groups to determine whether the device can be provisioned. If the service finds a disabled individual enrollment for the device, it prevents the device from connecting. The service prevents the connection even if an enabled enrollment group for an intermediate or root CA in the device's certificate chain exists.
iot-dps How To Troubleshoot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-troubleshoot-dps.md
Title: Diagnose and troubleshoot disconnects with Azure IoT Hub DPS
-description: Learn to diagnose and troubleshoot common errors with device connectivity for Azure IoT Hub Device Provisioning Service (DPS)
--
+ Title: Diagnose and troubleshoot provisioning errors with Azure IoT Hub DPS
+description: Learn to diagnose and troubleshoot common errors for Azure IoT Hub Device Provisioning Service (DPS)
+ Previously updated : 04/15/2022-
-#Customer intent: As an operator for Azure IoT Hub DPS, I need to know how to find out when devices are disconnecting unexpectedly and troubleshoot resolve those issues right away.
Last updated : 05/25/2022+
+#Customer intent: As an operator for Azure IoT Hub DPS, I need to know how to find out when devices are not being provisioned and troubleshoot and resolve those issues right away.
# Troubleshooting with Azure IoT Hub Device Provisioning Service
-Connectivity issues for IoT devices can be difficult to troubleshoot because there are many possible points of failures such as attestation failures, registration failures etc. This article provides guidance on how to detect and troubleshoot device connectivity issues via Azure Monitor. To learn more about using Azure Monitor with DPS, see [Monitor Device Provisioning Service](monitor-iot-dps.md).
+Provisioning issues for IoT devices can be difficult to troubleshoot because there are many possible points of failure, such as attestation failures and registration failures. This article provides guidance on how to detect and troubleshoot device provisioning issues via Azure Monitor. To learn more about using Azure Monitor with DPS, see [Monitor Device Provisioning Service](monitor-iot-dps.md).
## Using Azure Monitor to view metrics and set up alerts
To view and set up alerts on IoT Hub Device Provisioning Service metrics:
4. Select the desired metric. For supported metrics, see [Metrics](monitor-iot-dps-reference.md#metrics).
-5. Select desired aggregation method to create a visual view of the metric.
+5. Select the desired aggregation method to create a visual view of the metric.
6. To set up an alert on a metric, select **New alert rules** from the top right of the metric blade. Similarly, you can go to the **Alert** blade and select **New alert rules**.
Use this table to understand and resolve common errors.
| 429 | Operations are being throttled by the service. For specific service limits, see [IoT Hub Device Provisioning Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#iot-hub-device-provisioning-service-limits). | 429 Too many requests | | 500 | An internal error occurred. | 500 Internal Server Error|
+If an IoT Edge device fails to start with error message `failed to provision with IoT Hub, and no valid device backup was found dps client error.`, see [DPS Client error](/azure/iot-edge/troubleshoot-common-errors?view=iotedge-2018-06&preserve-view=true#dps-client-error) in the IoT Edge (1.1) documentation.
+ ## Next Steps - To learn more about using Azure Monitor with DPS, see [Monitor Device Provisioning Service](monitor-iot-dps.md).
iot-dps How To Unprovision Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-unprovision-devices.md
With X.509 attestation, devices can also be provisioned through an enrollment gr
To see a list of devices that have been provisioned through an enrollment group, you can view the enrollment group's details. This is an easy way to understand which IoT hub each device has been provisioned to. To view the device list:
-1. Log in to the Azure portal and click **All resources** on the left-hand menu.
-2. Click your provisioning service in the list of resources.
-3. In your provisioning service, click **Manage enrollments**, then select **Enrollment Groups** tab.
-4. Click the enrollment group to open it.
+1. Log in to the Azure portal and select **All resources** on the left-hand menu.
+2. Select your provisioning service in the list of resources.
+3. In your provisioning service, select **Manage enrollments**, then select the **Enrollment Groups** tab.
+4. Select the enrollment group to open it.
+5. Select the **Registration Records** tab to view the registration records for the enrollment group.
- ![View enrollment group entry in the portal](./media/how-to-unprovision-devices/view-enrollment-group.png)
+ ![Screenshot showing the registration records for an enrollment group in the portal.](./media/how-to-unprovision-devices/view-registration-records.png)
With enrollment groups, there are two scenarios to consider:
With enrollment groups, there are two scenarios to consider:
2. Use the list of provisioned devices for that enrollment group to disable or delete each device from the identity registry of its respective IoT hub. 3. After disabling or deleting all devices from their respective IoT hubs, you can optionally delete the enrollment group. Be aware, though, that, if you delete the enrollment group and there is an enabled enrollment group for a signing certificate higher up in the certificate chain of one or more of the devices, those devices can re-enroll.
+ > [!NOTE]
+ > Deleting an enrollment group doesn't delete the registration records for devices in the group. DPS uses the registration records to determine whether the maximum number of registrations has been reached for the DPS instance. Orphaned registration records still count against this quota. For the current maximum number of registrations supported for a DPS instance, see [Quotas and limits](about-iot-dps.md#quotas-and-limits).
+ >
+ >You may want to delete the registration records for the enrollment group before deleting the enrollment group itself. You can see and manage the registration records for an enrollment group manually on the **Registration Records** tab for the group in the Azure portal. You can retrieve and manage the registration records programmatically by using the [Device Registration State REST APIs](/rest/api/iot-dps/service/device-registration-state) or equivalent APIs in the [DPS service SDKs](libraries-sdks.md), or by using the [az iot dps enrollment-group registration Azure CLI commands](/cli/azure/iot/dps/enrollment-group/registration).
+ - To deprovision a single device from an enrollment group: 1. Create a disabled individual enrollment for the device.
iot-hub-device-update Import Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-schema.md
Update payload file, e.g. binary, firmware, script, etc. Must be unique within u
||||| |**filename**|`string`|Update payload file name.|Yes| |**sizeInBytes**|`number`|File size in number of bytes.|Yes|
-|**hashes**|`fileHashes`|Base64-encoded file hashes with algorithm name as key. At least SHA-256 algorithm must be specified, and additional algorithm may be specified if supported by agent.|Yes|
+|**hashes**|`fileHashes`|Base64-encoded file hashes with the algorithm name as key. At least the SHA-256 algorithm must be specified, and additional algorithms may be specified if supported by the agent. See below for details on how to calculate the hash. |Yes|
Additional properties are not allowed.
File hashes.
### fileHashes object
-Base64-encoded file hashes with algorithm name as key. At least SHA-256 algorithm must be specified, and additional algorithm may be specified if supported by agent. For an example of how to calculate the hash correctly, see the [AduUpdate.psm1 script](https://github.com/Azure/iot-hub-device-update/blob/main/tools/AduCmdlets/AduUpdate.psm1).
+Base64-encoded file hashes with the algorithm name as key. At least the SHA-256 algorithm must be specified, and additional algorithms may be specified if supported by the agent. For an example of how to calculate the hash correctly, see the Get-AduFileHashes function in the [AduUpdate.psm1 script](https://github.com/Azure/iot-hub-device-update/blob/main/tools/AduCmdlets/AduUpdate.psm1).
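+If you'd rather not import the script, a minimal PowerShell sketch of the same calculation (a base64-encoded SHA-256 digest of the payload file) looks like the following; the file path is a placeholder:
+
+```powershell
+# Compute the base64-encoded SHA-256 hash of an update payload file (placeholder path)
+$filePath = ".\update-payload.bin"
+$sha256 = [System.Security.Cryptography.SHA256]::Create()
+$stream = [System.IO.File]::OpenRead($filePath)
+try {
+    $hashBytes = $sha256.ComputeHash($stream)
+}
+finally {
+    $stream.Dispose()
+}
+# Use this value as the SHA-256 entry in the fileHashes object
+[Convert]::ToBase64String($hashBytes)
+```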
**Properties**
iot-hub Horizontal Arm Route Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/horizontal-arm-route-messages.md
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites
-If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+- If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+
+- The sample application you run in this quickstart is written using C#. You need the .NET SDK 6.0 or greater on your development machine.
+
+ You can download the .NET SDK for multiple platforms from [.NET](https://dotnet.microsoft.com/download).
+
+ You can verify the current version of the .NET SDK on your development machine by using the following command:
+
+ ```cmd/sh
+ dotnet --version
+ ```
+
+- Download and unzip the [IoT C# Samples](/samples/azure-samples/azure-iot-samples-csharp/azure-iot-samples-for-csharp-net/).
## Review the template
This section provides the steps to deploy the template, create a virtual device,
[![Deploy To Azure](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.devices%2Fiothub-auto-route-messages%2Fazuredeploy.json)
-1. Download and unzip the [IoT C# Samples](/samples/azure-samples/azure-iot-samples-csharp/azure-iot-samples-for-csharp-net/).
- 1. Open a command window and go to the folder where you unzipped the IoT C# Samples. Find the folder with the arm-read-write.csproj file. You create the environment variables in this command window. Log into the [Azure portal](https://portal.azure.com) to get the keys. Select **Resource Groups** then select the resource group used for this quickstart. ![Select the resource group](./media/horizontal-arm-route-messages/01-select-resource-group.png)
key-vault Monitor Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/monitor-key-vault.md
If you are creating or running an application which runs on Azure Key Vault, [Az
Here are some common and recommended alert rules for Azure Key Vault - - Key Vault Availability drops below 100% (Static Threshold)-- Key Vault Latency is greater than 500ms (Static Threshold)
+- Key Vault Latency is greater than 1000ms (Static Threshold)
- Overall Vault Saturation is greater than 75% (Static Threshold) - Overall Vault Saturation exceeds average (Dynamic Threshold) - Total Error Codes higher than average (Dynamic Threshold)
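As an illustration, here's a hedged Az PowerShell (Az.Monitor) sketch for the static latency rule above. The resource IDs and action group are placeholders, and the `ServiceApiLatency` metric name is an assumption to verify against the Key Vault metrics reference before relying on it.

```powershell
# Sketch: alert when average Key Vault latency exceeds 1000 ms over a 5-minute window
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "ServiceApiLatency" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 1000

Add-AzMetricAlertRuleV2 -Name "keyvault-latency-over-1000ms" `
    -ResourceGroupName "<resource-group>" `
    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>" `
    -Condition $criteria `
    -WindowSize (New-TimeSpan -Minutes 5) `
    -Frequency (New-TimeSpan -Minutes 5) `
    -Severity 2 `
    -ActionGroupId "<action-group-resource-id>"
```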
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/how-to-configure-key-rotation.md
Key rotation policy settings:
- Rotation time: key rotation interval, the minimum value is seven days from creation and seven days from expiration time - Notification time: key near expiry event interval for Event Grid notification. It requires 'Expiry Time' set on rotation policy and 'Expiration Date' set on the key. - > [!IMPORTANT] > Key rotation generates a new key version of an existing key with new key material. Ensure that your data encryption solution uses versioned key uri to point to the same key material for encrypt/decrypt, wrap/unwrap operations to avoid disruption to your services. All Azure services are currently following that pattern for data encryption. + ## Configure key rotation policy Configure key rotation policy during key creation.
Set rotation policy on a key passing previously saved file using Azure CLI [az k
az keyvault key rotation-policy update --vault-name <vault-name> --name <key-name> --value </path/to/policy.json> ```
+### Azure PowerShell
+
+Set the rotation policy by using the Azure PowerShell [Set-AzKeyVaultKeyRotationPolicy](/powershell/module/az.keyvault/set-azkeyvaultkeyrotationpolicy) cmdlet.
+
+```powershell
+# Retrieve the key so it can be passed to -InputObject
+$key = Get-AzKeyVaultKey -VaultName <vault-name> -Name <key-name>
+# Rotate automatically 540 days after the key is created
+$action = [Microsoft.Azure.Commands.KeyVault.Models.PSKeyRotationLifetimeAction]::new()
+$action.Action = "Rotate"
+$action.TimeAfterCreate = New-TimeSpan -Days 540
+# New key versions expire 720 days after rotation
+$expiresIn = New-TimeSpan -Days 720
+Set-AzKeyVaultKeyRotationPolicy -InputObject $key -KeyRotationLifetimeAction $action -ExpiresIn $expiresIn
+```
+ ## Rotation on demand Key rotation can be invoked manually.
Use Azure CLI [az keyvault key rotate](/cli/azure/keyvault/key#az-keyvault-key-r
az keyvault key rotate --vault-name <vault-name> --name <key-name> ```
+### Azure PowerShell
+
+Use Azure PowerShell [Invoke-AzKeyVaultKeyRotation](/powershell/module/az.keyvault/invoke-azkeyvaultkeyrotation) cmdlet.
+
+```powershell
+Invoke-AzKeyVaultKeyRotation -VaultName <vault-name> -Name <key-name>
+```
+ ## Configure key near expiry notification Configure an expiry notification for the Event Grid key near expiry event. You can configure the notification in days, months, and years before expiry to trigger the near expiry event.
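+As a hedged illustration, the same `PSKeyRotationLifetimeAction` type used above can be pointed at the notification action; the `Notify` action name and the `TimeBeforeExpiry` property are assumptions to verify against the [Set-AzKeyVaultKeyRotationPolicy](/powershell/module/az.keyvault/set-azkeyvaultkeyrotationpolicy) reference.
+
+```powershell
+# Sketch: raise the Event Grid near expiry event 30 days before a key version expires
+$key = Get-AzKeyVaultKey -VaultName <vault-name> -Name <key-name>
+$notify = [Microsoft.Azure.Commands.KeyVault.Models.PSKeyRotationLifetimeAction]::new()
+$notify.Action = "Notify"                            # assumed action name for notification
+$notify.TimeBeforeExpiry = New-TimeSpan -Days 30     # assumed property for the notification interval
+Set-AzKeyVaultKeyRotationPolicy -InputObject $key -KeyRotationLifetimeAction $notify -ExpiresIn (New-TimeSpan -Days 720)
+```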
load-testing How To Create Manage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-manage-test.md
+
+ Title: Create and manage tests
+
+description: 'Learn how to create and manage tests in your Azure Load Testing Preview resource.'
++++ Last updated : 05/30/2022++
+<!-- Intent: As a user I want to configure the test plan for a load test, so that I can successfully run a load test -->
+
+# Create and manage tests in Azure Load Testing Preview
+
+Learn how to create and manage [tests](./concept-load-testing-concepts.md#test) in your Azure Load Testing Preview resource.
+
+> [!IMPORTANT]
+> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* An Azure Load Testing resource. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create_resource).
+
+## Create a test
+
+There are two options to create a load test for an Azure Load Testing resource in the Azure portal:
+
+- Create a quick test by using a web application URL.
+- Create a test by uploading a JMeter test script (JMX).
++
+### Create a quick test by using a URL
+
+To load test a single web endpoint, use the quick test experience in the Azure portal. Specify the application endpoint URL and basic load parameters to create and run a load test. For more information, see our [quickstart for creating and running a test by using a URL](./quickstart-create-and-run-load-test.md).
+
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+
+1. Select **Quick test** on the **Overview** page.
+
+ Alternatively, select **Tests** in the left pane, select **+ Create**, and then select **Create a quick test**.
+
+1. Enter the URL and load parameters.
+
+ :::image type="content" source="media/how-to-create-manage-test/create-quick-test.png" alt-text="Screenshot that shows the page for creating a quick test in the Azure portal.":::
+
+1. Select **Run test** to start the load test.
+
+ Azure Load Testing automatically generates a JMeter test script, and configures your test to scale across multiple test engines, based on your load parameters.
+
+ You can edit the test configuration at any time after creating it. For example, to [monitor server-side metrics](./how-to-monitor-server-side-metrics.md), [configure high-scale load](./how-to-high-scale-load.md), or to edit the generated JMX file.
+
+### Create a test by using a JMeter script
+
+To reuse an existing JMeter test script, or for more advanced test scenarios, create a test by uploading a JMX file. For example, to [read data from a CSV input file](./how-to-read-csv-data.md), or to [configure JMeter user properties](./how-to-configure-user-properties.md).
+
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+
+1. Select **Create** on the **Overview** page.
+
+ Alternatively, select **Tests** in the left pane, select **+ Create**, and then select **Upload a JMeter script**.
+
+1. On the **Basics** page, enter the basic test information.
+
+ If you select **Run test after creation**, the test will start automatically. You can start your test manually at any time after creating it.
+
+ :::image type="content" source="media/how-to-create-manage-test/create-jmeter-test.png" alt-text="Screenshot that shows the page for creating a test with a J Meter script in the Azure portal.":::
+
+## Test plan
+
+The test plan contains all files that are needed for running your load test. At a minimum, the test plan should contain one `*.jmx` JMeter script. Azure Load Testing only supports one JMX file per load test. In addition, you can include a user property file, configuration files, or input data files.
+
+1. Go to the **Test plan** page.
+1. Select all files from your local machine, and upload them to Azure.
+
+ :::image type="content" source="media/how-to-create-manage-test/test-plan-upload-files.png" alt-text="Screenshot that shows the test plan page for creating a test in the Azure portal, highlighting the upload functionality.":::
+
+<!-- 1. Optionally, upload a zip archive instead of uploading the individual data and configuration files.
+
+ Azure Load Testing will unpack the zip archive on the test engine(s) when provisioning the test.
+
+ > [!IMPORTANT]
+ > The JMX file and user properties file can't be placed in the zip archive.
+ >
+ > The maximum upload size for a zip archive is 50 MB.
+
+ :::image type="content" source="media/how-to-create-manage-test/test-plan-upload-zip.png" alt-text="Screenshot that shows the test plan page for creating a test in the Azure portal, highlighting an uploaded zip archive.":::
+ -->
+If you've previously created a quick test, you can edit the test plan at any time. You can add files to the test plan, or download and edit the generated JMeter script. Download a file by selecting the file name in the list.
+
+### Split CSV input data across test engines
+
+By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. You don't have to make any modifications to the JMX test script.
+
+For example, if you have a large customer CSV input file, and the load test runs on 10 parallel test engines, then each instance will process 1/10th of the customers.
+
+If you have multiple CSV files, each file will be split evenly.
+
+To configure your load test:
+
+1. Go to the **Test plan** page for your load test.
+1. Select **Split CSV evenly between Test engines**.
+
+ :::image type="content" source="media/how-to-create-manage-test/configure-test-split-csv.png" alt-text="Screenshot that shows the checkbox to enable splitting input C S V files when configuring a test in the Azure portal.":::
+
+## Parameters
+
+You can use parameters to make your test plan configurable. Specify key-value pairs in the load test configuration, and then reference their value in the JMeter script by using the parameter name.
+
+There are two types of parameters:
+
+- Environment variables. For example, to specify the domain name of the web application.
+- Secrets, backed by Azure Key Vault. For example, to pass an authentication token in an HTTP request.
+
+You can specify the managed identity to use for accessing your key vault.
+
+For more information, see [Parameterize a load test with environment variables and secrets](./how-to-parameterize-load-tests.md).
++
+## Load
+
+Configure the number of test engine instances, and Azure Load Testing automatically scales your load test across all instances. You configure the number of virtual users, or threads, in the JMeter script; each engine instance then runs the script in parallel. For more information, see [Configure a test for high-scale load](./how-to-high-scale-load.md).
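+For example, if the JMeter script defines 250 virtual users and you configure four engine instances, the load test simulates up to 4 x 250 = 1,000 concurrent virtual users.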
++
+## Test criteria
+
+You can specify test failure criteria based on a number of client metrics. When a load test surpasses the threshold for a metric, the load test has a **Failed** status. For more information, see [Configure test failure criteria](./how-to-define-test-criteria.md).
+
+You can use the following client metrics:
+
+- Average **Response time**.
+- **Error** percentage.
++
+## Monitoring
+
+For Azure-hosted applications, Azure Load Testing can capture detailed resource metrics for the Azure app components. These metrics enable you to [analyze application performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md).
+
+When you edit a load test, you can select the Azure app components that you want to monitor. Azure Load Testing selects the most relevant resource metrics. You can add or remove resource metrics for each of the app components at any time.
++
+When the load test finishes, the test result dashboard shows a graph for each of the Azure app components and resource metrics.
++
+For more information, see [Configure server-side monitoring](./how-to-monitor-server-side-metrics.md).
+
+## Manage
+
+If you already have a load test, you can start a new run, delete the load test, edit the test configuration, or compare test runs.
+
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+1. On the left pane, select **Tests** to view the list of load tests, and then select your test.
++
+You can perform the following actions:
+
+- Refresh the list of test runs.
+- Start a new test run. The run uses the current test configuration settings.
+- Delete the load test. All test runs for the load test are also deleted.
+- Edit the test configuration:
+ - Configure the test plan. You can add or remove any of the files for the load test. If you want to update a file, first remove it and then add the updated version.
+ - Add or remove Azure app components.
+ - Configure resource metrics for the app components. Azure Load Testing automatically selects the relevant resource metrics for each app component. Add or remove metrics for any of the app components in the load test.
+- [Compare test runs](./how-to-compare-multiple-test-runs.md). Select two or more test runs in the list to visually compare them in the results dashboard.
+
+## Next steps
+
+- [Identify performance bottlenecks with Azure Load Testing in the Azure portal](./quickstart-create-and-run-load-test.md)
+- [Set up automated load testing with CI/CD in Azure Pipelines](./tutorial-cicd-azure-pipelines.md)
+- [Set up automated load testing with CI/CD in GitHub Actions](./tutorial-cicd-github-actions.md)
load-testing How To Export Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-export-test-results.md
In this article, you'll learn how to download the test results from Azure Load Testing Preview in the Azure portal. You might use these results for reporting in third-party tools.
-The test results contain a comma-separated values (CSV) file with details of each application request. See [Apache JMeter CSV log format](https://jmeter.apache.org/usermanual/listeners.html#csvlogformat) and the [Apache JMeter Glossary](https://jmeter.apache.org/usermanual/glossary.html) for details about the different fields.
+The test results contain one or more comma-separated values (CSV) files with details of each application request. See [Apache JMeter CSV log format](https://jmeter.apache.org/usermanual/listeners.html#csvlogformat) and the [Apache JMeter Glossary](https://jmeter.apache.org/usermanual/glossary.html) for details about the different fields.
You can also use the test results to diagnose errors during a load test. The `responseCode` and `responseMessage` fields give you more information about failed requests. For more information about investigating errors, see [Troubleshoot test execution errors](./how-to-find-download-logs.md).
-In addition, all files for running the Apache JMeter dashboard locally are included.
-
-> [!NOTE]
-> The Apache JMeter dashboard generation is temporarily disabled. You can download the CSV files with the test results.
-
+You can generate the Apache JMeter dashboard from the CSV log file by following the steps in the [Apache JMeter documentation](https://jmeter.apache.org/usermanual/generating-dashboard.html#report).
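+For instance, a minimal sketch of generating the dashboard locally from one of the downloaded results files (the file and output folder names are placeholders, and the output folder must not already contain a report):
+
+```powershell
+# Generate an HTML dashboard report (-o) from an existing JMeter CSV results file (-g)
+jmeter -g .\engine1_results.csv -o .\dashboard-report
+```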
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
In this section, you'll retrieve and download the Azure Load Testing results fil
:::image type="content" source="media/how-to-export-test-results/test-results-zip.png" alt-text="Screenshot that shows the test results zip file in the downloads list.":::
- The *testreport.csv* file contains details of each request that the test engine executed during the load test. The Apache JMeter dashboard, which is also included in the zip file, uses this file for its graphs.
+ The folder contains a separate CSV file for every test engine, with details of the requests that the engine executed during the load test.
## Next steps
load-testing How To Read Csv Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-read-csv-data.md
Title: Read CSV data in an Apache JMeter load test
-description: Learn how to read external data from a CSV file in Apache JMeter and Azure Load Testing.
+description: Learn how to read external data from a CSV file in Apache JMeter with Azure Load Testing.
Previously updated : 12/15/2021 Last updated : 05/23/2022
+zone_pivot_groups: load-testing-config
-# Read data from a CSV file in JMeter and Azure Load Testing Preview
+# Read data from a CSV file in JMeter with Azure Load Testing Preview
-In this article, you'll learn how to read data from a comma-separated value (CSV) file in JMeter and Azure Load Testing Preview.
+In this article, you'll learn how to read data from a comma-separated value (CSV) file in JMeter with Azure Load Testing Preview. You can use the JMeter [CSV Data Set Config element](https://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config) in your test script.
-You can make an Apache JMeter test script configurable by reading settings from an external CSV file. To do this, you can use the [CSV Data Set Config element](https://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config) in JMeter. For example, to test a search API, you might retrieve the various query parameters from an external file.
+Use data from an external CSV file to make your JMeter test script configurable. For example, you might invoke an API for each entry in a customers CSV file.
-When you configure your Azure load test, you can upload any additional files that the JMeter script requires. For example, CSV files that contain configuration settings or binary files to send in the body of an HTTP request. You then update the JMeter script to reference the external files.
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Configure your JMeter script to read the CSV file.
+> * Add the CSV file to your load test.
+> * Optionally, split the CSV file evenly across all test engine instances.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
When you configure your Azure load test, you can upload any additional files tha
## Configure your JMeter script
-In this section, you'll configure your Apache JMeter test script to reference an external file. You'll use a CSV Data Set Config element to read data from a CSV file.
+In this section, you'll configure your Apache JMeter script to reference the external CSV file. You'll use a [CSV Data Set Config element](https://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config) to read data from a CSV file.
-Azure Load Testing uploads the JMX file and all related files in a single folder. Verify that you refer to the external files in the JMX script by using only the file name.
+Azure Load Testing uploads the JMX file and all related files in a single folder. When you reference an external file in your JMeter script, verify that you only use the file name and remove any file path references.
To edit your JMeter script by using the Apache JMeter GUI:
- 1. Select the CSV Data Set Config element in your test plan.
+ 1. Select the **CSV Data Set Config** element in your test plan.
1. Update the **Filename** information and remove any file path reference.
To edit your JMeter script by using Visual Studio Code or your editor of prefere
``` 1. Save the JMeter script.
-
-## Add a CSV file to your load test
-
-In this section, you'll configure your Azure load test to include a CSV file. You can then use this CSV file in the JMeter test script. If you reference other external files in your script, you can add them in the same way.
-You can add a CSV file to your load test in two ways:
+## Add a CSV file to your load test
-* Configure the load test by using the Azure portal
-* If you have a CI/CD workflow, update the test configuration YAML file
+When you reference an external file in your JMeter script, upload this file to your load test. When the load test starts, Azure Load Testing copies all files to a single folder on each of the test engine instances.
-### Add a CSV file by using the Azure portal
To add a CSV file to your load test by using the Azure portal:
To add a CSV file to your load test by using the Azure portal:
1. Select **Apply** to modify the test and to use the new configuration when you rerun it.
-### Add a CSV file to the test configuration YAML file
+ If you run a load test within your CI/CD workflow, you can add a CSV file to the test configuration YAML file. For more information about running a load test in a CI/CD workflow, see the [Automated regression testing tutorial](./tutorial-cicd-azure-pipelines.md).
-To add a CSV file in the test configuration YAML file:
+To add a CSV file to your load test:
+
+ 1. Commit the CSV file to the source control repository that contains the JMX file and YAML test configuration file.
1. Open your YAML test configuration file in Visual Studio Code or your editor of choice.
To add a CSV file in the test configuration YAML file:
The next time the CI/CD workflow runs, it will use the updated configuration. +
+## Split CSV input data across test engines
+
+By default, Azure Load Testing copies and processes your input files unmodified across all test engine instances. Azure Load Testing enables you to split the CSV input data evenly across all engine instances. You don't have to make any modifications to the JMX test script.
+
+For example, if you have a large customer CSV input file, and the load test runs on 10 parallel test engines, then each instance will process 1/10th of the customers.
+
+If you have multiple CSV files, each file will be split evenly.
+
+To configure your load test to split input CSV files:
++
+1. Go to the **Test plan** page for your load test.
+1. Select **Split CSV evenly between Test engines**.
+
+ :::image type="content" source="media/how-to-read-csv-data/configure-test-split-csv.png" alt-text="Screenshot that shows the checkbox to enable splitting input C S V files when configuring a test in the Azure portal.":::
+
+1. Select **Apply** to confirm the configuration changes.
+
+ The next time you run the test, Azure Load Testing splits and processes the CSV file evenly across the test engines.
++
+1. Open your YAML test configuration file in Visual Studio Code or your editor of choice.
+
+1. Add the `splitAllCSVs` setting and set its value to **True**.
+
+ ```yaml
+ testName: MyTest
+ testPlan: SampleApp.jmx
+ description: Run a load test for my sample web app
+ engineInstances: 1
+ configurationFiles:
+ - customers.csv
+ splitAllCSVs: True
+ ```
+
+1. Save the YAML configuration file and commit it to your source control repository.
+
+ The next time you run the test, Azure Load Testing splits and processes the CSV file evenly across the test engines.
+ ## Next steps - For information about high-scale load tests, see [Set up a high-scale load test](./how-to-high-scale-load.md).
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
Learn how to configure your load test in Azure Load Testing Preview by using [YA
A test configuration uses the following keys:
-| Key | Type | Description |
-| -- | -- | -- |
-| `version` | string | Version of the YAML configuration file that the service uses. Currently, the only valid value is `v0.1`. |
-| `testName` | string | *Required*. Name of the test to run. The results of various test runs will be collected under this test name in the Azure portal. |
-| `testPlan` | string | *Required*. Relative path to the Apache JMeter test script to run. |
-| `engineInstances` | integer | *Required*. Number of parallel instances of the test engine to execute the provided test plan. You can update this property to increase the amount of load that the service can generate. |
-| `configurationFiles` | array | List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script. <BR><BR>Azure Load Testing currently doesn't support the use of file paths in the JMX file. When you reference an external file in the test script, make sure to only specify the file name. |
-| `description` | string | Short description of the test run. |
-| `failureCriteria` | object | Criteria that indicate failure of the test. Each criterion is in the form of:<BR>`[Aggregate_function] ([client_metric]) > [value]`<BR><BR>- `[Aggregate function] ([client_metric])` is either `avg(response_time_ms)` or `percentage(error).`<BR>- `value` is an integer number. |
-| `properties` | object | List of properties to configure the load test. |
-| `properties.userPropertyFile` | string | File to use as an Apache JMeter [user properties file](https://jmeter.apache.org/usermanual/test_plan.html#properties). The file will be uploaded to the Azure Load Testing resource alongside the JMeter test script and other configuration files. If the file is in a subfolder on your local machine, use a path relative to the location of the test script. |
-| `secrets` | object | List of secrets that the Apache JMeter script references. |
-| `secrets.name` | string | Name of the secret. This name should match the secret name that you use in the Apache JMeter script. |
-| `secrets.value` | string | URI for the Azure Key Vault secret. |
-| `env` | object | List of environment variables that the Apache JMeter script references. |
-| `env.name` | string | Name of the environment variable. This name should match the secret name that you use in the Apache JMeter script. |
-| `env.value` | string | Value of the environment variable. |
-| `keyVaultReferenceIdentity` | string | Resource ID of the user-assigned managed identity for accessing the secrets from your Azure Key Vault. If you use a system-managed identity, this information isn't needed. Make sure to grant this user-assigned identity access to your Azure key vault. |
+| Key | Type | Default value | Description |
+| -- | -- | -- | - |
+| `version` | string | | Version of the YAML configuration file that the service uses. Currently, the only valid value is `v0.1`. |
+| `testName` | string | | *Required*. Name of the test to run. The results of various test runs will be collected under this test name in the Azure portal. |
+| `testPlan` | string | | *Required*. Relative path to the Apache JMeter test script to run. |
+| `engineInstances` | integer | | *Required*. Number of parallel instances of the test engine to execute the provided test plan. You can update this property to increase the amount of load that the service can generate. |
+| `configurationFiles` | array | | List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script. <BR><BR>Azure Load Testing currently doesn't support the use of file paths in the JMX file. When you reference an external file in the test script, make sure to only specify the file name. |
+| `description` | string | | Short description of the test run. |
+| `failureCriteria` | object | | Criteria that indicate failure of the test. Each criterion is in the form of:<BR>`[Aggregate_function] ([client_metric]) > [value]`<BR><BR>- `[Aggregate function] ([client_metric])` is either `avg(response_time_ms)` or `percentage(error).`<BR>- `value` is an integer number. |
+| `properties` | object | | List of properties to configure the load test. |
+| `properties.userPropertyFile` | string | | File to use as an Apache JMeter [user properties file](https://jmeter.apache.org/usermanual/test_plan.html#properties). The file will be uploaded to the Azure Load Testing resource alongside the JMeter test script and other configuration files. If the file is in a subfolder on your local machine, use a path relative to the location of the test script. |
+| `splitAllCSVs` | boolean | False | Split the input CSV files evenly across all test engine instances. For more information, see [Read a CSV file in load tests](./how-to-read-csv-data.md#split-csv-input-data-across-test-engines). |
+| `secrets` | object | | List of secrets that the Apache JMeter script references. |
+| `secrets.name` | string | | Name of the secret. This name should match the secret name that you use in the Apache JMeter script. |
+| `secrets.value` | string | | URI for the Azure Key Vault secret. |
+| `env` | object | | List of environment variables that the Apache JMeter script references. |
+| `env.name` | string | | Name of the environment variable. This name should match the secret name that you use in the Apache JMeter script. |
+| `env.value` | string | | Value of the environment variable. |
+| `keyVaultReferenceIdentity` | string | | Resource ID of the user-assigned managed identity for accessing the secrets from your Azure Key Vault. If you use a system-managed identity, this information isn't needed. Make sure to grant this user-assigned identity access to your Azure key vault. |
The following YAML snippet contains an example load test configuration:
configurationFiles:
failureCriteria: - avg(response_time_ms) > 300 - percentage(error) > 50
+splitAllCSVs: True
env: - name: my-variable value: my-value
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
Previously updated : 05/24/2022 Last updated : 06/01/2022
The following table shows additional limits in the platform. Please reach out to
Azure Machine Learning managed online endpoints have limits described in the following table.
-To determine the current usage for an endpoint, [view the metrics](how-to-monitor-online-endpoints.md#view-metrics). To request an exception from the Azure Machine Learning product team, please open a technical support ticket.
- | **Resource** | **Limit** | | | | | Endpoint name| Endpoint names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>1</sup> |
To determine the current usage for an endpoint, [view the metrics](how-to-monito
<sup>3</sup> If you request a limit increase, be sure to calculate related limit increases you might need. For example, if you request a limit increase for requests per second, you might also want to compute the required connections and bandwidth limits and include these limit increases in the same request.
+To determine the current usage for an endpoint, [view the metrics](how-to-monitor-online-endpoints.md#view-metrics).
+
+To request an exception from the Azure Machine Learning product team, use the steps in the [Request quota increases](#request-quota-increases) section and provide the following information:
+
+1. Provide the Azure __subscriptions__ and __regions__ where you want to increase the quota.
+1. Provide the __tenant ID__ and __customer name__.
+1. Provide the __quota type__ and __new limit__. Use the following table as a guide:
+
+ | Quota Type | New Limit |
+ | -- | -- |
+ | MaxEndpointsPerSub (Number of endpoints per subscription) | ? |
+ | MaxDeploymentsPerSub (Number of deployments per subscription) | ? |
+ | MaxDeploymentsPerEndpoint (Number of deployments per endpoint) | ? |
+ | MaxInstancesPerDeployment (Number of instances per deployment) | ? |
+ | EndpointRequestRateLimitPerSec (Total requests per second at endpoint level for all deployments) | ? |
+ | EndpointConnectionRateLimitPerSec (Total connections per second at endpoint level for all deployments) | ? |
+ | EndpointConnectionLimit (Total connections active at endpoint level for all deployments) | ? |
+ | EndpointBandwidthLimitKBps (Total bandwidth at endpoint level for all deployments (MBPS)) | ? |
+ ### Azure Machine Learning pipelines [Azure Machine Learning pipelines](concept-ml-pipelines.md) have the following limits.
marketplace Orders Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/orders-dashboard.md
Previously updated : 04/28/2022 Last updated : 06/06/2022 # Orders dashboard in commercial marketplace analytics
This table displays a numbered list of the 500 top orders sorted by date of acqu
| Marketplace Subscription ID | Marketplace Subscription ID | The unique identifier associated with the Azure subscription the customer used to purchase your commercial marketplace offer. For infrastructure offers, this is the customer's Azure subscription GUID. For SaaS offers, this is shown as zeros since SaaS purchases do not require an Azure subscription. | Marketplace Subscription ID | | MonthStartDate | Month Start Date | Month Start Date represents month of Purchase. The format is yyyy-mm-dd. | MonthStartDate | | Offer Type | Offer Type | The type of commercial marketplace offering. | OfferType |
-| Azure License Type | Azure License Type | The type of licensing agreement used by customers to purchase Azure. Also known as Channel. The possible values are:<ul><li>[Cloud Solution Provider](cloud-solution-providers.md)</li><li>Enterprise</li><li>Enterprise through Reseller</li><li>Pay as You Go</li><li>GTM</li></ul> | AzureLicenseType |
+| Azure License Type | Azure License Type | The type of licensing agreement used by customers to purchase Azure. Also known as Channel. The possible values are:<ul><li>Cloud Solution Provider</li><li>Enterprise</li><li>Enterprise through Reseller</li><li>Pay as You Go</li><li>GTM</li></ul> | AzureLicenseType |
| Marketplace License Type | Marketplace License Type | The billing method of the commercial marketplace offer. The possible values are:<ul><li>Billed through Azure</li><li>Bring Your Own License</li><li>Free</li><li>Microsoft as Reseller</li></ul> | MarketplaceLicenseType | | SKU | SKU | The plan associated with the offer | SKU | | Customer Country | Customer Country/Region | The country/region name provided by the customer. Country/region could be different than the country/region in a customer's Azure subscription. | CustomerCountry |
This table displays a numbered list of the 500 top orders sorted by date of acqu
| Customer Company Name | Customer Company Name | The company name provided by the customer. Name could be different than the city in a customer's Azure subscription. | CustomerCompanyName | | Order Purchase Date | Order Purchase Date | The date the commercial marketplace order was created. The format is yyyy-mm-dd. | OrderPurchaseDate | | Offer Name | Offer Name | The name of the commercial marketplace offering. | OfferName |
-| Is Private Offer | Is Private Offer | Indicates whether a marketplace offer is private or a public offer<ul><li>0 value indicates false</li><li>1 value indicates true</li</ul> | Is Private Offer |
-| Term Start Date | TermStartDate | Indicates the start date of a term for an order. | TermStartDate |
-| Term End Date | TermEndDate | Indicates the end date of a term for an order. | TermEndDate |
+| Is Private Offer | Is Private Offer | Indicates whether a marketplace offer is private or a public offer<ul><li>0 value indicates false</li><li>1 value indicates true</li></ul>**Note:** [Private plans are different from Private offers](isv-customer-faq.yml). | Is Private Offer |
+| Not available | BillingTerm | Indicates the term duration of the offer purchased by the customer | BillingTerm |
+| Not available | BillingPlan | Indicates the billing frequency of the offer purchased by the customer | BillingPlan |
+| Term Start Date | TermStartDate | Indicates the start date of a term for an order | TermStartDate |
+| Term End Date | TermEndDate | Indicates the end date of a term for an order | TermEndDate |
| Not available | purchaseRecordId | The identifier of the purchase record for an order purchase | purchaseRecordId | | Not available | purchaseRecordLineItemId | The identifier of the purchase record line item related to this order. | purchaseRecordLineItemId | | Billed Revenue USD | EstimatedCharges | The price the customer will be charged for all order units before taxation. This is calculated in customer transaction currency. In tax-inclusive countries, this price includes the tax, otherwise it does not. | EstimatedCharges |
This table displays a numbered list of the 500 top orders sorted by date of acqu
| Trial End Date | Trial End Date | The date the trial period for this order will end or has ended. | TrialEndDate | | Customer ID | Customer ID | The unique identifier assigned to a customer. A customer may have zero or more Azure Marketplace subscriptions. | CustomerID | | Billing Account ID | Billing Account ID | The identifier of the account on which billing is generated. Map **Billing Account ID** to **customerID** to connect your Payout Transaction Report with the Customer, Order, and Usage Reports. | BillingAccountId |
+| Reference Id | ReferenceId | A key that links orders to their usage details in the usage report. Map this field value to the value of the UsageReference key in the usage report. This is applicable for SaaS with custom meters and VM software reservation offer types | ReferenceId |
| PlanId | PlanId | The display name of the plan entered when the offer was created in Partner Center. Note that PlanId was originally a numeric number. | PlanId |
+| Auto Renew | Auto Renew | Indicates whether a subscription is due for an automatic renewal. Possible values are:<br><ul><li>TRUE: Indicates that on the TermEnd the subscription will renew automatically.</li><li>FALSE: Indicates that on the TermEnd the subscription will expire.</li><li>NULL: The product does not support renewals. Indicates that on the TermEnd the subscription will expire. This is displayed as "-" in the UI.</li></ul> | AutoRenew |
+| Not available | Event Timestamp | Indicates the timestamp of an order management event, such as an order purchase, cancelation, renewal, and so on | EventTimestamp |
### Orders page filters
marketplace Pc Saas Fulfillment Subscription Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-subscription-api.md
description: Learn how to use the Subscription APIs, which are part of the the
Previously updated : 03/07/2022 Last updated : 06/03/2022
Response body example:
"emailId": "test@test.com", "objectId": "<guid>", "tenantId": "<guid>",
- "pid": "<ID of the user>"
+ "puid": "<ID of the user>"
}, "purchaser": { "emailId": "test@test.com", "objectId": "<guid>", "tenantId": "<guid>",
- "pid": "<ID of the user>"
+ "puid": "<ID of the user>"
}, "planId": "silver", "term": {
Returns the list of all existing subscriptions for all offers made by this publi
"emailId": " test@contoso.com", "objectId": "<guid>", "tenantId": "<guid>",
- "pid": "<ID of the user>"
+ "puid": "<ID of the user>"
}, "purchaser": { "emailId": "purchase@csp.com ", "objectId": "<guid>", "tenantId": "<guid>",
- "pid": "<ID of the user>"
+ "puid": "<ID of the user>"
}, "term": { "startDate": "2019-05-31",
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-business-continuity.md
The table below illustrates the features that Flexible server offers.
| **Backup & Recovery** | Flexible server automatically performs daily backups of your database files and continuously backs up transaction logs. Backups can be retained for any period between 1 to 35 days. You'll be able to restore your database server to any point in time within your backup retention period. Recovery time will be dependent on the size of the data to restore + the time to perform log recovery. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details. |Backup data remains within the region | | **Local redundant backup** | Flexible server backups are automatically and securely stored in a local redundant storage within a region and in same availability zone. The locally redundant backups replicate the server backup data files three times within a single physical location in the primary region. Locally redundant backup storage provides at least 99.999999999% (11 nines) durability of objects over a given year. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details.| Applicable in all regions | | **Geo-redundant backup** | Flexible server backups can be configured as geo-redundant at create time. Enabling Geo-redundancy replicates the server backup data files in the primary region’s paired region to provide regional resiliency. Geo-redundant backup storage provides at least 99.99999999999999% (16 nines) durability of objects over a given year. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details.| Available in all [Azure paired regions](overview.md#azure-regions) |
-| **Zone redundant high availability** | Flexible server can be deployed in high availability mode, which deploys primary and standby servers in two different availability zones within a region. This protects from zone-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is synchronously replicated to the standby replica. During any downtime event, the database server is automatically failed over to the standby replica. Refer to [Concepts - High availability](./concepts-high-availability.md) for more details. | Supported in general purpose and Business Critical compute tiers. Available only in regions where multiple zones are available.|
+| **Zone redundant high availability** | Flexible server can be deployed in high availability mode, which deploys primary and standby servers in two different availability zones within a region. Zone redundant high availability protects against zone-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is synchronously replicated to the standby replica. During any downtime event, the database server is automatically failed over to the standby replica. Refer to [Concepts - High availability](./concepts-high-availability.md) for more details. | Supported in general purpose and Business Critical compute tiers. Available only in regions where multiple zones are available.|
| **Premium file shares** | Database files are stored in a highly durable and reliable Azure premium file shares that provide data redundancy with three copies of replica stored within an availability zone with automatic data recovery capabilities. Refer to [Premium File shares](../../storage/files/storage-how-to-create-file-share.md) for more details. | Data stored within an availability zone | ## Planned downtime mitigation
Here are some unplanned failure scenarios and the recovery process:
| **Scenario** | **Recovery process [non-HA]** | **Recovery process [HA]** | | :- | - | - |
-| **Database server failure** |If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. Azure will attempt to restart the database server. If that succeeds, then the database recovery is performed. If the restart fails, the database server will be attempted to restart on another physical node.<br /> <br />The recovery time (RTO) is dependent on various factors including the activity at the time of fault such as large transaction and the amount of recovery to be performed during the database server start-up process. The RPO will be zero as no data loss is expected for the committed transactions. Applications using the MySQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the connections are directed to the newly created database server.<br /> <br />Other available options are restore from backup. You can use both PITR or Geo restore from paired region. <br /> **PITR** : RTO: Varies, RPO=0sec <br /> **Geo Restore :** RTO: Varies RPO <15Mins. <br /> <br />You can also use [read replica](./concepts-read-replicas.md) as DR solution. You can [stop the replication](./concepts-read-replicas.md#stop-replication) which make the read replica read-write(standalone and then redirect the application traffic to this database. The RTO in most cases will be few minutes and RPO < 5 min. RTO and RPO can be much higher in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload. | If the database server failure or non-recoverable errors is detected, the standby database server is activated, thus reducing downtime. Refer to HA concepts page for more details. RTO is expected to be 60-120 s, with RPO=0. <br /> <br /> **Note:** *The options for Recovery process [non-HA] is also applicable here. Read replica are currently not supported for HA enabled servers.*|
-| **Storage failure** |Applications do not see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in three copies, the copy of the data is served by the surviving storage. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created.<br /> <br />In a rare or worst-case scenario if all copies are corrupted, we can use restore from Geo restore (paired region). RPO would be <15 mins and RTO would vary.<br /> <br />You can also use read replica as DR solution as detailed above. | For this scenario, the options are same as for Recovery process [non-HA] . Read replica are currently not supported for HA enabled servers. |
+| **Database server failure** |If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. Azure will attempt to restart the database server. If that succeeds, then the database recovery is performed. If the restart fails, the database server will be attempted to restart on another physical node.<br /> <br />The recovery time (RTO) is dependent on various factors including the activity at the time of fault such as large transaction and the amount of recovery to be performed during the database server start-up process. The RPO will be zero as no data loss is expected for the committed transactions. Applications using the MySQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the connections are directed to the newly created database server.<br /> <br />Other available options are restore from backup. You can use both PITR or Geo restore from paired region. <br /> **PITR** : RTO: Varies, RPO=0sec <br /> **Geo Restore :** RTO: Varies RPO <1 h. <br /> <br />You can also use [read replica](./concepts-read-replicas.md) as DR solution. You can [stop the replication](./concepts-read-replicas.md#stop-replication) which make the read replica read-write(standalone and then redirect the application traffic to this database. The RTO in most cases will be few minutes and RPO < 1 h. RTO and RPO can be much higher in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload. | If the database server failure or non-recoverable errors is detected, the standby database server is activated, thus reducing downtime. Refer to HA concepts page for more details. RTO is expected to be 60-120 s, with RPO=0. <br /> <br /> **Note:** *The options for Recovery process [non-HA] is also applicable here. Read replica are currently not supported for HA enabled servers.*|
+| **Storage failure** |Applications do not see any impact from storage-related issues such as a disk failure or a physical block corruption. As the data is stored in three copies, the copy of the data is served by the surviving storage. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created.<br /> <br />In a rare or worst-case scenario where all copies are corrupted, you can restore using Geo restore (from the paired region). RPO would be < 1 h and RTO would vary.<br /> <br />You can also use a read replica as a DR solution as detailed above. | For this scenario, the options are the same as for Recovery process [non-HA]. Read replicas are currently not supported for HA-enabled servers. |
| **Logical/user errors** | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](concepts-backup-restore.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> You can recover a deleted MySQL flexible server resource within five days from the time of server deletion. For a detailed guide on how to restore a deleted server, refer to the [documented steps](../flexible-server/how-to-restore-dropped-server.md). To protect server resources post deployment from accidental deletion or unexpected changes, administrators can leverage [management locks](../../azure-resource-manager/management/lock-resources.md). | These user errors aren't protected with high availability because all user operations are replicated to the standby too. For this scenario, the options are the same as for Recovery process [non-HA]. |
-| **Availability zone failure** | While it's a rare event, if you want to recover from a zone-level failure, you can perform Geo restore from to a paired region. RPO would be <15 mins and RTO would vary. <br /> <br /> You can also use [read replica](./concepts-read-replicas.md) as DR solution by creating replica in other availability zone. RTO\RPO is like what is detailed above. | If you have enabled Zone redundant HA, Flexible server performs automatic failover to the standby site. Refer to [HA concepts](./concepts-high-availability.md) for more details. RTO is expected to be 60-120 s, with RPO=0.<br /> <br />Other available options are restored from backup. You can use both PITR or Geo restore from paired region.<br />**PITR :** RTO: Varies, RPO=0 sec <br />**Geo Restore :** RTO: Varies, RPO <15Mins <br /> <br /> **Note:** *If you have same zone HA enabled the options are same as what we have for Recovery process [non-HA]* |
-| **Region failure** |While it's a rare event, if you want to recover from a region-level failure, you can perform database recovery by creating a new server using the latest geo-redundant backup available under the same subscription to get to the latest data. A new flexible server will be deployed to the selected region. The time taken to restore depends on the previous backup and the number of transaction logs to recover. RPO in most cases would be <15 mins and RTO would vary. | For this scenario, the options are same as for Recovery process [non-HA] . |
+| **Availability zone failure** | Though it's a rare event, if you want to recover from a zone-level failure, you can perform a Geo restore to the paired region. RPO would be <1 h and RTO would vary. <br /> <br /> You can also use a [read replica](./concepts-read-replicas.md) as a DR solution by creating the replica in another availability zone. RTO/RPO is similar to what is detailed above. | If you have enabled zone-redundant HA, Flexible Server performs an automatic failover to the standby site. Refer to [HA concepts](./concepts-high-availability.md) for more details. RTO is expected to be 60-120 s, with RPO=0.<br /> <br />Other available options are to restore from a backup. You can use either PITR or Geo restore from the paired region.<br />**PITR**: RTO: Varies, RPO=0 sec <br />**Geo Restore**: RTO: Varies, RPO <1 h <br /> <br /> **Note:** *If you have same-zone HA enabled, the options are the same as for Recovery process [non-HA].* |
+| **Region failure** |Though it's a rare event, if you want to recover from a region-level failure, you can perform database recovery by creating a new server using the latest geo-redundant backup available under the same subscription to get to the latest data. A new flexible server is deployed to the selected region. The time taken to restore depends on the previous backup and the number of transaction logs to recover. RPO in most cases would be <1 h and RTO would vary. | For this scenario, the options are the same as for Recovery process [non-HA]. |
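
To use a read replica for DR as described above, you promote it by stopping replication. A minimal Azure CLI sketch, assuming a hypothetical replica named `my-mysql-replica` in resource group `my-resource-group`:

```azurecli
# Promote the read replica to a standalone read-write server (this cannot be undone),
# then redirect the application's connection string to the promoted server.
az mysql flexible-server replica stop-replication \
  --resource-group my-resource-group \
  --name my-mysql-replica
```
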
## Next steps
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-read-replicas.md
If GTID is enabled on a source server (`gtid_mode` = ON), newly created replicas
| Deleted source and standalone servers | When a source server is deleted, replication is stopped to all read replicas. These replicas automatically become standalone servers and can accept both reads and writes. The source server itself is deleted. | | User accounts | Users on the source server are replicated to the read replicas. You can only connect to a read replica using the user accounts available on the source server. | | Server parameters | To prevent data from becoming out of sync and to avoid potential data loss or corruption, some server parameters are locked from being updated when using read replicas. <br> The following server parameters are locked on both the source and replica servers:<br> - [`innodb_file_per_table`](https://dev.mysql.com/doc/refman/8.0/en/innodb-file-per-table-tablespaces.html) <br> - [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) <br> The [`event_scheduler`](https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_event_scheduler) parameter is locked on the replica servers. <br> To update one of the above parameters on the source server, delete replica servers, update the parameter value on the source, and recreate replicas.
-<br> When configuring session level parameters such as 'foreign_keys_checks' on the read replica, ensure the parameter values being set on the read replica are consistent with that of the source server.|
+|Session level parameters | When configuring session level parameters such as 'foreign_keys_checks' on the read replica, ensure the parameter values being set on the read replica are consistent with those of the source server.|
+|Adding an AUTO_INCREMENT primary key column to an existing table on the source server.|We don't recommend altering a table to add an AUTO_INCREMENT column after read replica creation, because it breaks replication. If you need to add an auto increment column after creating a replica server, we recommend one of these two approaches: <br> - Create a new table with the same schema as the table you want to modify. In the new table, alter the column with AUTO_INCREMENT, and then restore the data from the original table. Drop the old table and rename the new one on the source; this doesn't require deleting the replica server, but it may incur a large insert cost to create the backup table. <br> - The quicker method is to recreate the replica after adding all auto increment columns.|
| Other | - Creating a replica of a replica is not supported. <br> - In-memory tables may cause replicas to become out of sync. This is a limitation of the MySQL replication technology. See the [MySQL reference documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features-memory.html) for more information. <br>- Ensure the source server tables have primary keys. Lack of primary keys may result in replication latency between the source and replicas.<br>- Review the full list of MySQL replication limitations in the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html) |
## Next steps
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
Last updated 05/24/2022
This article summarizes new releases and features in Azure Database for MySQL - Flexible Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+## June 2022
+
+**Known Issues**
+
On a few servers where audit or slow logs are enabled, you may no longer see logs being uploaded to the data sinks configured under diagnostics settings. Verify whether your logs have the latest updated timestamp for the events, based on the [data sink](./tutorial-query-performance-insights.md#set-up-diagnostics) you have configured. If your server is affected by this issue, open a [support ticket](https://azure.microsoft.com/support/create-ticket/) so that we can apply a quick fix on the server to resolve the issue. Alternatively, you can wait for our next deployment cycle, during which we will apply a permanent fix in all regions.
+ ## May 2022 - **Announcing Azure Database for MySQL - Flexible Server for business-critical workloads** Azure Database for MySQL – Flexible Server Business Critical service tier is now generally available. The Business Critical service tier is ideal for Tier 1 production workloads that require low latency, high concurrency, fast failover, and high scalability, such as gaming, e-commerce, and Internet-scale applications. To learn more, see [Business Critical service tier](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/announcing-azure-database-for-mysql-flexible-server-for-business/ba-p/3361718). - **Announcing the addition of new Burstable compute instances for Azure Database for MySQL - Flexible Server**
- We are announcing the addition of new Burstable compute instances to support customers' auto-scaling compute requirements from 1 vCore up to 20 vCores. learn more about [Compute Option for Azure Database for MySQL - Flexible Server](https://docs.microsoft.com/azure/mysql/flexible-server/concepts-compute-storage).
+ We are announcing the addition of new Burstable compute instances to support customers' auto-scaling compute requirements from 1 vCore up to 20 vCores. Learn more about [Compute Option for Azure Database for MySQL - Flexible Server](./concepts-compute-storage.md).
- **Known issues** - The Reserved instances (RI) feature in Azure Database for MySQL – Flexible server is not working properly for the Business Critical service tier, after its rebranding from the Memory Optimized service tier. Specifically, instance reservation has stopped working, and we are currently working to fix the issue.
notification-hubs Notification Hubs Windows Store Dotnet Get Started Wns Push Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md
To send push notifications to UWP apps, associate your app to the Windows Store.
### Configure WNS settings for the hub 1. In the **NOTIFICATION SETTINGS** category, select **Windows (WNS)**.
-2. Enter values for **Package SID** and **Security Key** (the **Application Secret**) you noted from the previous section.
+2. Enter values for **Package SID** (for example, `ms-app://<Your Package SID>`) and **Security Key** (the **Application Secret**) you noted from the previous section.
3. Click **Save** on the toolbar. ![The Package SID and Security Key boxes](./media/notification-hubs-windows-store-dotnet-get-started/notification-hub-configure-wns.png)
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
+
+ Title: "Migrate from Azure Database for PostgreSQL Single Server to Flexible Server - Concepts"
+
+description: Concepts about migrating your Single server to Azure database for PostgreSQL Flexible server.
++++ Last updated : 05/11/2022+++
+# Migrate from Azure Database for PostgreSQL Single Server to Flexible Server (Preview)
+
+>[!NOTE]
+> Single Server to Flexible Server migration tool is in public preview.
+
+Azure Database for PostgreSQL Flexible Server provides zone-redundant high availability, control over price, and control over the maintenance window. The Single to Flexible Server migration tool enables customers to migrate their databases from Single Server to Flexible Server. See this [documentation](../flexible-server/concepts-compare-single-server-flexible-server.md) to understand the differences between Single and Flexible servers. Customers can initiate migrations for multiple servers and databases in a repeatable fashion using this migration tool. The tool automates most of the steps needed to do the migration, making the migration journey across Azure platforms as seamless as possible. The tool is provided to customers free of cost.
+
+Single to Flexible server migration is enabled in **Preview** in Australia Southeast, Canada Central, Canada East, East Asia, North Central US, South Central US, Switzerland North, UAE North, UK South, UK West, West US, and Central US.
+
+## Overview
+
+Single to Flexible server migration tool provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target).
+
+You choose the source server and can select up to **8** databases from it. This limitation is per migration task. The migration tool automates the following steps:
+
+1. Creates the migration infrastructure in the region of the target flexible server
+2. Creates public IP address and attaches it to the migration infrastructure
+3. Allow-lists the migration infrastructure's IP address on the firewall rules of both the source and target servers
+4. Creates a migration project with both source and target types as Azure database for PostgreSQL
+5. Creates a migration activity to migrate the databases specified by the user from source to target.
+6. Migrates schema from source to target
+7. Creates databases with the same name on the target Flexible server
+8. Migrates data from source to target
+
+The following is the flow diagram for the Single to Flexible migration tool.
+
+**Steps:**
+1. Create a Flex PG server
+2. Invoke migration
+3. Migration infrastructure provisioned (DMS)
+4. Initiates migration – (4a) Initial dump/restore (online & offline) (4b) streaming the changes (online only)
+5. Cutover to the target
+
+The migration tool is exposed through the **Azure portal** and via easy-to-use **Azure CLI** commands. It allows you to create migrations, list migrations, display migration details, modify the state of a migration, and delete migrations.
+
+## Migration modes comparison
+
+Single to Flexible Server migration supports online and offline modes of migration. The online option provides a reduced-downtime migration, subject to logical replication restrictions, while the offline option offers a simpler migration but may incur extended downtime depending on the size of the databases.
+
+The following table summarizes the differences between these two modes of migration.
+
+| Capability | Online | Offline |
+|:|:-|:--|
+| Database availability for reads during migration | Available | Available |
+| Database availability for writes during migration | Available | Generally not recommended. Any writes initiated after the migration starts are not captured or migrated |
+| Application Suitability | Applications that need maximum uptime | Applications that can afford a planned downtime window |
+| Environment Suitability | Production environments | Usually Development, Testing environments and some production that can afford downtime |
+| Suitability for Write-heavy workloads | Suitable but expected to reduce the workload during migration | Not Applicable. Writes at source after migration begins are not replicated to target. |
+| Manual Cutover | Required | Not required |
+| Downtime Required | Less | More |
+| Logical replication limitations | Applicable | Not Applicable |
+| Migration time required | Depends on Database size and the write activity until cutover | Depends on Database size |
+
+**Migration steps involved for Offline mode** = Dump of the source Single Server database followed by the Restore at the target Flexible server.
+
+The following table shows the approximate time taken to perform offline migrations for databases of various sizes.
+
+>[!NOTE]
+> Add ~15 minutes for the migration infrastructure to get deployed for each migration task, where each task can migrate up to 8 databases.
+
+| Database Size | Approximate Time Taken (HH:MM) |
+|:|:-|
+| 1 GB | 00:01 |
+| 5 GB | 00:05 |
+| 10 GB | 00:10 |
+| 50 GB | 00:45 |
+| 100 GB | 06:00 |
+| 500 GB | 08:00 |
+| 1000 GB | 09:30 |
+
+**Migration steps involved for Online mode** = Dump of the source Single Server database(s), Restore of that dump in the target Flexible server, followed by Replication of ongoing changes (change data capture using logical decoding).
+
+The time taken for an online migration to complete is dependent on the incoming writes to the source server. The higher the write workload on the source, the more time it takes for the data to be replicated to the target flexible server.
+
+Based on the above differences, pick the mode that best works for your workloads.
+++
+## Migration steps
+
+### Pre-requisites
+
+Follow the steps provided in this section before you get started with the single to flexible server migration tool.
+
+- **Target Server Creation** - You need to create the target PostgreSQL Flexible Server before using the migration tool. Use the creation [QuickStart guide](../flexible-server/quickstart-create-server-portal.md) to create one.
+
+- **Source Server pre-requisites** - You must [enable logical replication](../single-server/concepts-logical.md) on the source server (a CLI sketch follows this list).
+
+ :::image type="content" source="./media/concepts-single-to-flexible/logical-replication-support.png" alt-text="Screenshot of logical replication support in Azure portal." lightbox="./media/concepts-single-to-flexible/logical-replication-support.png":::
+
+>[!NOTE]
+> Enabling logical replication will require a server reboot for the change to take effect.
+
+- **Azure Active Directory App set up** - It is a critical component of the migration tool. Azure AD App helps with role-based access control, as the migration tool needs access to both the source and target servers. See [How to setup and configure Azure AD App](./how-to-setup-azure-ad-app-portal.md) for a step-by-step guide.
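+
+As one possible way to satisfy the logical replication prerequisite above, you can set the Single Server `azure.replication_support` parameter with the Azure CLI and then restart the server. A minimal sketch, using hypothetical resource names:
+
+```azurecli
+# Set replication support to LOGICAL on the source single server (hypothetical names).
+az postgres server configuration set \
+  --resource-group my-resource-group \
+  --server-name mysingleserver \
+  --name azure.replication_support \
+  --value logical
+# A restart is required for the change to take effect.
+az postgres server restart \
+  --resource-group my-resource-group \
+  --name mysingleserver
+```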
+
+### Data and schema migration
+
+Once all these pre-requisites are taken care of, you can perform the migration. This automated step involves schema and data migration using the Azure portal or Azure CLI.
+
+- [Migrate using Azure portal](../migrate/how-to-migrate-single-to-flexible-portal.md)
+- [Migrate using Azure CLI](../migrate/how-to-migrate-single-to-flexible-cli.md)
+
+### Post migration
+
+- All the resources created by this migration tool will be automatically cleaned up irrespective of whether the migration has **succeeded/failed/cancelled**. There is no action required from you.
+
+- If your migration has failed and you want to retry the migration, then you need to create a new migration task with a different name and retry the operation.
+
+- If you have more than eight databases on your single server and if you want to migrate them all, then it is recommended to create multiple migration tasks with each task migrating up to eight databases.
+
+- The migration does not move the database users and roles of the source server. These have to be manually created and applied to the target server post migration (see the sketch after this list).
+
+- For security reasons, it is highly recommended to delete the Azure Active Directory app once the migration completes.
+
+- After validating the data and pointing your application to the flexible server, you can consider deleting your single server.
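+
+One possible way to copy role definitions is sketched below; it assumes the `pg_dumpall`/`psql` client tools, network access to both servers, and placeholder server and user names. You may need to edit the generated script to remove Azure-managed roles (for example, `azure_superuser`) before applying it.
+
+```bash
+# Dump only the role definitions from the source single server (prompts for the password).
+pg_dumpall --roles-only \
+  --host=mysingleserver.postgres.database.azure.com \
+  --username=admin_user@mysingleserver \
+  --file=roles.sql
+# Review and edit roles.sql, then apply it to the target flexible server.
+psql --host=myflexibleserver.postgres.database.azure.com \
+  --username=admin_user --dbname=postgres --file=roles.sql
+```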
+
+## Limitations
+
+### Size limitations
+
+- Databases of sizes up to 1 TB can be migrated using this tool. To migrate larger databases or heavy write workloads, reach out to your account team or contact us at AskAzureDBforPGS2F@microsoft.com.
+
+- In one migration attempt, you can migrate up to eight user databases from a single server to a flexible server. If you have more databases to migrate, you can create multiple migrations between the same single and flexible servers.
+
+### Performance limitations
+
+- The migration infrastructure is deployed on a 4-vCore VM, which may limit the migration performance.
+
+- The deployment of migration infrastructure takes ~10-15 minutes before the actual data migration starts - irrespective of the size of data or the migration mode (online or offline).
+
+### Replication limitations
+
+- The Single to Flexible Server migration tool uses the logical decoding feature of PostgreSQL to perform the online migration, and it comes with the following limitations. See the PostgreSQL documentation for [logical replication limitations](https://www.postgresql.org/docs/10/logical-replication-restrictions.html).
+ - **DDL commands** are not replicated.
+ - **Sequence** data is not replicated.
+ - **Truncate** commands are not replicated. (**Workaround**: use DELETE instead of TRUNCATE. To avoid accidental TRUNCATE invocations, you can revoke the TRUNCATE privilege from tables; see the sketch after this list.)
+
+ - Views, materialized views, partition root tables, and foreign tables are not migrated.
+
+- Logical decoding will use resources in the source single server. Consider reducing the workload or plan to scale CPU/memory resources at the Source Single Server during the migration.
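+
+For illustration only, the TRUNCATE workaround mentioned above could be applied with `psql` as follows, assuming a hypothetical application role `app_user` and placeholder server names:
+
+```bash
+# Revoke TRUNCATE from the application role on the source for the duration of the migration.
+psql "host=mysingleserver.postgres.database.azure.com user=admin_user@mysingleserver dbname=mydb" \
+  -c "REVOKE TRUNCATE ON ALL TABLES IN SCHEMA public FROM app_user;"
+# After cutover, grant it back on the target if the application needs it.
+psql "host=myflexibleserver.postgres.database.azure.com user=admin_user dbname=mydb" \
+  -c "GRANT TRUNCATE ON ALL TABLES IN SCHEMA public TO app_user;"
+```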
+
+### Other limitations
+
+- The migration tool migrates only data and schema of the single server databases to flexible server. It does not migrate other features such as server parameters, connection security details, firewall rules, users, roles and permissions. In other words, everything except data and schema must be manually configured in the target flexible server.
+
+- It does not validate the data in the flexible server post migration; customers must do this manually.
+
+- The migration tool migrates only user databases (including the postgres database), not system/maintenance databases.
+
+- For failed migrations, there is no option to retry the same migration task. A new migration task with a unique name has to be created.
+
+- The migration tool does not include assessment of your single server.
+
+## Best practices
+
+- As part of discovery and assessment, take the server SKU, CPU usage, storage, database sizes, and extensions usage as some of the critical data to help with migrations.
+- Plan the mode of migration for each database. For less complex migrations and smaller databases, consider offline mode of migrations.
+- Batch similar sized databases in a migration task.
+- Perform large database migrations with one or two databases at a time to avoid source-side load and migration failures.
+- Perform test migrations before migrating for production.
+ - **Testing migrations** is a very important aspect of database migration to ensure that all aspects of the migration are taken care of, including application testing. The best practice is to begin by running a migration entirely for testing purposes. Start a migration, and after it enters the continuous replication (CDC) phase with minimal lag, make your flexible server the primary database server and use it for testing the application to ensure expected performance and results. If you are migrating to a higher Postgres version, test your application for compatibility.
+
+ - **Production migrations** - Once testing is completed, you can migrate the production databases. At this point you need to finalize the day and time of production migration. Ideally, there is low application use at this time. In addition, all stakeholders that need to be involved should be available and ready. The production migration would require close monitoring. It is important that for an online migration, the replication is completed before performing the cutover to prevent data loss.
+
+- Cut over all dependent applications to access the new primary database and open the applications for production usage.
+- Once the application starts running on flexible server, monitor the database performance closely to see if performance tuning is required.
+
+## Next steps
+
+- [Migrate to Flexible Server using Azure portal](../migrate/how-to-migrate-single-to-flexible-portal.md).
+- [Migrate to Flexible Server using Azure CLI](../migrate/how-to-migrate-single-to-flexible-cli.md)
postgresql How To Migrate From Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-from-oracle.md
+
+ Title: "Oracle to Azure Database for PostgreSQL: Migration guide"
+description: This guide helps you to migrate your Oracle schema to Azure Database for PostgreSQL.
+++++ Last updated : 03/18/2021++
+# Migrate Oracle to Azure Database for PostgreSQL
+
+This guide helps you to migrate your Oracle schema to Azure Database for PostgreSQL.
+
+For detailed and comprehensive migration guidance, see the [Migration guide resources](https://github.com/microsoft/OrcasNinjaTeam/blob/master/Oracle%20to%20PostgreSQL%20Migration%20Guide/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Guide.pdf).
+
+## Prerequisites
+
+To migrate your Oracle schema to Azure Database for PostgreSQL, you need to:
+
+- Verify your source environment is supported.
+- Download the latest version of [ora2pg](https://ora2pg.darold.net/).
+- Have the latest version of the [DBD module](https://www.cpan.org/modules/by-module/DBD/).
++
+## Overview
+
+PostgreSQL is one of the world's most advanced open-source databases. This article describes how to use the free ora2pg tool to migrate an Oracle database to PostgreSQL. You can use ora2pg to migrate an Oracle database or MySQL database to a PostgreSQL-compatible schema.
+
+The ora2pg tool connects to your Oracle database, scans it automatically, and extracts its structure or data. Then ora2pg generates SQL scripts that you can load into your PostgreSQL database. You can use ora2pg for tasks such as reverse-engineering an Oracle database, migrating a huge enterprise database, or simply replicating some Oracle data into a PostgreSQL database. The tool is easy to use and requires no Oracle database knowledge besides the ability to provide the parameters needed to connect to the Oracle database.
+
+> [!NOTE]
+> For more information about using the latest version of ora2pg, see the [ora2pg documentation](https://ora2pg.darold.net/documentation.html).
+
+### Typical ora2pg migration architecture
+
+![Screenshot of the ora2pg migration architecture.](media/how-to-migrate-from-oracle/ora2pg-migration-architecture.png)
+
+After you provision the VM and Azure Database for PostgreSQL, you need two configurations to enable connectivity between them: **Allow access to Azure services** and **Enforce SSL Connection**:
+
+- **Connection Security** blade > **Allow access to Azure services** > **ON**
+
+- **Connection Security** blade > **SSL Settings** > **Enforce SSL Connection** > **DISABLED**
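+
+If you prefer the Azure CLI over the portal, the same two settings can be applied roughly as follows (resource names are placeholders; the portal's **Allow access to Azure services** toggle corresponds to the special 0.0.0.0 firewall rule):
+
+```azurecli
+# Allow access from Azure services by creating the 0.0.0.0 firewall rule.
+az postgres server firewall-rule create \
+  --resource-group my-resource-group --server-name server1-server \
+  --name AllowAllAzureIps \
+  --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
+# Disable SSL enforcement on the server.
+az postgres server update \
+  --resource-group my-resource-group --name server1-server \
+  --ssl-enforcement Disabled
+```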
+
+### Recommendations
+
+- To improve the performance of the assessment or export operations in the Oracle server, collect statistics:
+
+ ```
+ BEGIN
+   -- Gather optimizer statistics so the assessment and export queries run faster.
+   -- USER targets the connected schema; replace it with a specific schema name if needed.
+   DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER);
+   DBMS_STATS.GATHER_DATABASE_STATS;
+   DBMS_STATS.GATHER_DICTIONARY_STATS;
+ END;
+ /
+ ```
+
+- Export data by using the `COPY` command instead of `INSERT`.
+
+- Avoid exporting tables with their foreign keys (FKs), constraints, and indexes. These elements slow down the process of importing data into PostgreSQL.
+
+- Create materialized views by using the *no data clause*. Then refresh the views later.
+
+- If possible, use unique indexes in materialized views. These indexes can speed up the refresh when you use the syntax `REFRESH MATERIALIZED VIEW CONCURRENTLY`.
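+
+As an illustration of the last two recommendations, the following `psql` sketch (with hypothetical object names, and the placeholder connection details used later in this article) creates a materialized view without data, adds a unique index, and then refreshes it. Note that the very first refresh must be non-concurrent because the view is not yet populated:
+
+```
+psql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database <<'SQL'
+CREATE MATERIALIZED VIEW mv_order_totals AS
+  SELECT order_id, SUM(amount) AS total
+  FROM orders
+  GROUP BY order_id
+WITH NO DATA;
+CREATE UNIQUE INDEX mv_order_totals_uidx ON mv_order_totals (order_id);
+REFRESH MATERIALIZED VIEW mv_order_totals;              -- first (blocking) population
+REFRESH MATERIALIZED VIEW CONCURRENTLY mv_order_totals; -- later refreshes, non-blocking
+SQL
+```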
++
+## Pre-migration
+
+After you verify that your source environment is supported and that you've addressed any prerequisites, you're ready to start the premigration stage. To begin:
+
+1. **Discover**: Inventory the databases that you need to migrate.
+2. **Assess**: Assess those databases for potential migration issues or blockers.
+3. **Convert**: Resolve any items you uncovered.
+
+For heterogenous migrations such as Oracle to Azure Database for PostgreSQL, this stage also involves making the source database schemas compatible with the target environment.
+
+### Discover
+
+The goal of the discovery phase is to identify existing data sources and details about the features that are being used. This phase helps you better understand and plan for the migration. The process involves scanning the network to identify all your organization's Oracle instances together with the version and features in use.
+
+Microsoft pre-assessment scripts for Oracle run against the Oracle database. The pre-assessment scripts query the Oracle metadata. The scripts provide:
+
+- A database inventory, including counts of objects by schema, type, and status.
+- A rough estimate of the raw data in each schema, based on statistics.
+- The size of tables in each schema.
+- The number of code lines per package, function, procedure, and so on.
+
+Download the related scripts from [GitHub](https://github.com/microsoft/DataMigrationTeam/tree/master/Whitepapers).
+
+### Assess
+
+After you inventory the Oracle databases, you'll have an idea of the database size and potential challenges. The next step is to run the assessment.
+
+Estimating the cost of a migration from Oracle to PostgreSQL isn't easy. To assess the migration cost, ora2pg checks all database objects, functions, and stored procedures for objects and PL/SQL code that it can't automatically convert.
+
+The ora2pg tool has a content analysis mode that inspects the Oracle database to generate a text report. The report describes what the Oracle database contains and what can't be exported.
+
+To activate the *analysis and report* mode, use the exported type `SHOW_REPORT` as shown in the following command:
+
+```
+ora2pg -t SHOW_REPORT
+```
+
+The ora2pg tool can convert SQL and PL/SQL code from Oracle syntax to PostgreSQL. So after the database is analyzed, ora2pg can estimate the code difficulties and the time necessary to migrate a full database.
+
+To estimate the migration cost in human-days, ora2pg allows you to use a configuration directive called `ESTIMATE_COST`. You can also enable this directive at a command prompt:
+
+```
+ora2pg -t SHOW_REPORT --estimate_cost
+```
+
+The default migration unit represents around five minutes for a PostgreSQL expert. If this migration is your first, you can increase the default migration unit by using the configuration directive `COST_UNIT_VALUE` or the `--cost_unit_value` command-line option.
+
+The last line of the report shows the total estimated migration code in human-days. The estimate follows the number of migration units estimated for each object.
+
+In the following code example, you see some assessment variations:
+* Tables assessment
+* Columns assessment
+* Schema assessment that uses a default cost unit of 5 minutes
+* Schema assessment that uses a cost unit of 10 minutes
+
+```
+ora2pg -t SHOW_TABLE -c c:\ora2pg\ora2pg_hr.conf > c:\ts303\hr_migration\reports\tables.txt
+ora2pg -t SHOW_COLUMN -c c:\ora2pg\ora2pg_hr.conf > c:\ts303\hr_migration\reports\columns.txt
+ora2pg -t SHOW_REPORT -c c:\ora2pg\ora2pg_hr.conf --dump_as_html --estimate_cost > c:\ts303\hr_migration\reports\report.html
+ora2pg -t SHOW_REPORT -c c:\ora2pg\ora2pg_hr.conf --cost_unit_value 10 --dump_as_html --estimate_cost > c:\ts303\hr_migration\reports\report2.html
+```
+
+Here's how to interpret the schema assessment output; in this example, the assessed migration level is B-5:
+
+* Migration levels:
+
+ * A - Migration that can be run automatically
+
+ * B - Migration with code rewrite and a human-days cost up to 5 days
+
+ * C - Migration with code rewrite and a human-days cost over 5 days
+
+* Technical levels:
+
+ * 1 = Trivial: No stored functions and no triggers
+
+ * 2 = Easy: No stored functions, but triggers; no manual rewriting
+
+ * 3 = Simple: Stored functions and/or triggers; no manual rewriting
+
+ * 4 = Manual: No stored functions, but triggers or views with code rewriting
+
+ * 5 = Difficult: Stored functions and/or triggers with code rewriting
+
+The assessment consists of:
+* A letter (A or B) to specify whether the migration needs manual rewriting.
+
+* A number from 1 to 5 to indicate the technical difficulty.
+
+Another option, `-human_days_limit`, specifies the limit of human-days. Here, set the migration level to C to indicate that the migration needs a large amount of work, full project management, and migration support. The default is 10 human-days. You can use the configuration directive `HUMAN_DAYS_LIMIT` to change this default value permanently.
+
+This schema assessment was developed to help users decide which database to migrate first and which teams to mobilize.
+
+### Convert
+
+In minimal-downtime migrations, your migration source changes. It drifts from the target in terms of data and schema after the one-time migration. During the *Data sync* phase, ensure that all changes in the source are captured and applied to the target in near real time. After you verify that all changes are applied to the target, you can *cut over* from the source to the target environment.
+
+In this step of the migration, the Oracle code and DDL scripts are converted or translated to PostgreSQL. The ora2pg tool exports the Oracle objects in a PostgreSQL format automatically. Some of the generated objects can't be compiled in the PostgreSQL database without manual changes.
+
+To understand which elements need manual intervention, first compile the files generated by ora2pg against the PostgreSQL database. Check the log, and then make any necessary changes until the schema structure is compatible with PostgreSQL syntax.
++
+#### Create a migration template
+
+We recommend using the migration template that ora2pg provides. When you use the options `--project_base` and `--init_project`, ora2pg creates a project template with a work tree, a configuration file, and a script to export all objects from the Oracle database. For more information, see the [ora2pg documentation](https://ora2pg.darold.net/documentation.html).
+
+Use the following command:
+
+```
+ora2pg --project_base /app/migration/ --init_project test_project
+```
+
+Here's the example output:
+
+```
+ora2pg --project_base /app/migration/ --init_project test_project
+ Creating project test_project.
+ /app/migration/test_project/
+ schema/
+ dblinks/
+ directories/
+ functions/
+ grants/
+ mviews/
+ packages/
+ partitions/
+ procedures/
+ sequences/
+ synonyms/
+ tables/
+ tablespaces/
+ triggers/
+ types/
+ views/
+ sources/
+ functions/
+ mviews/
+ packages/
+ partitions/
+ procedures/
+ triggers/
+ types/
+ views/
+ data/
+ config/
+ reports/
+
+ Generating generic configuration file
+ Creating script export_schema.sh to automate all exports.
+ Creating script import_all.sh to automate all imports.
+```
+
+The `sources/` directory contains the Oracle code. The `schema/` directory contains the code ported to PostgreSQL. And the `reports/` directory contains the HTML reports and the migration cost assessment.
++
+After the project structure is created, a generic config file is created. Define the Oracle database connection and the relevant config parameters in the config file. For more information about the config file, see the [ora2pg documentation](https://ora2pg.darold.net/documentation.html).
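+
+For orientation, a few of the commonly used directives in `config/ora2pg.conf` look like the following excerpt (all values are placeholders, not defaults):
+
+```
+# Oracle source connection
+ORACLE_DSN      dbi:Oracle:host=oracle-host;service_name=ORCL;port=1521
+ORACLE_USER     system
+ORACLE_PWD      <password>
+# Schema to export
+SCHEMA          HR
+# Optional: target connection for direct import into Azure Database for PostgreSQL
+PG_DSN          dbi:Pg:dbname=database;host=server1-server.postgres.database.azure.com;port=5432
+PG_USER         username@server1-server
+PG_PWD          <password>
+```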
++
+#### Export Oracle objects
+
+Next, export the Oracle objects as PostgreSQL objects by running the file *export_schema.sh*.
+
+```
+cd /app/migration/mig_project
+./export_schema.sh
+```
+
+Run the following command manually.
+
+```
+SET namespace="/app/migration/mig_project"
+
+ora2pg -p -t DBLINK -o dblink.sql -b %namespace%/schema/dblinks -c %namespace%/config/ora2pg.conf
+ora2pg -p -t DIRECTORY -o directory.sql -b %namespace%/schema/directories -c %namespace%/config/ora2pg.conf
+ora2pg -p -t FUNCTION -o functions2.sql -b %namespace%/schema/functions -c %namespace%/config/ora2pg.conf
+ora2pg -p -t GRANT -o grants.sql -b %namespace%/schema/grants -c %namespace%/config/ora2pg.conf
+ora2pg -p -t MVIEW -o mview.sql -b %namespace%/schema/mviews -c %namespace%/config/ora2pg.conf
+ora2pg -p -t PACKAGE -o packages.sql -b %namespace%/schema/packages -c %namespace%/config/ora2pg.conf
+ora2pg -p -t PARTITION -o partitions.sql -b %namespace%/schema/partitions -c %namespace%/config/ora2pg.conf
+ora2pg -p -t PROCEDURE -o procs.sql -b %namespace%/schema/procedures -c %namespace%/config/ora2pg.conf
+ora2pg -p -t SEQUENCE -o sequences.sql -b %namespace%/schema/sequences -c %namespace%/config/ora2pg.conf
+ora2pg -p -t SYNONYM -o synonym.sql -b %namespace%/schema/synonyms -c %namespace%/config/ora2pg.conf
+ora2pg -p -t TABLE -o table.sql -b %namespace%/schema/tables -c %namespace%/config/ora2pg.conf
+ora2pg -p -t TABLESPACE -o tablespaces.sql -b %namespace%/schema/tablespaces -c %namespace%/config/ora2pg.conf
+ora2pg -p -t TRIGGER -o triggers.sql -b %namespace%/schema/triggers -c %namespace%/config/ora2pg.conf
+ora2pg -p -t TYPE -o types.sql -b %namespace%/schema/types -c %namespace%/config/ora2pg.conf
+ora2pg -p -t VIEW -o views.sql -b %namespace%/schema/views -c %namespace%/config/ora2pg.conf
+```
+
+To extract the data, use the following command.
+
+```
+ora2pg -t COPY -o data.sql -b %namespace%/data -c %namespace%/config/ora2pg.conf
+```
+
+#### Compile files
+
+Finally, compile all files against the Azure Database for PostgreSQL server. You can choose to load the manually generated DDL files or use the second script *import_all.sh* to import those files interactively.
+
+```
+psql -f %namespace%\schema\sequences\sequence.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database -L %namespace%\schema\sequences\create_sequences.log
+
+psql -f %namespace%\schema\tables\table.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database -L %namespace%\schema\tables\create_table.log
+```
+
+Here's the data import command:
+
+```
+psql -f %namespace%\data\table1.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database -L %namespace%\data\table1.log
+
+psql -f %namespace%\data\table2.sql -h server1-server.postgres.database.azure.com -p 5432 -U username@server1-server -d database -L %namespace%\data\table2.log
+```
+
+While the files are being compiled, check the logs and correct any syntax that ora2pg couldn't convert on its own.
+
+For more information, see [Oracle to Azure Database for PostgreSQL migration workarounds](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Workarounds.pdf).
+
+## Migrate
+
+After you have the necessary prerequisites and you've completed the premigration steps, you can start the schema and data migration.
+
+### Migrate schema and data
+
+When you've made the necessary fixes, a stable build of the database is ready to deploy. Run the `psql` import commands, pointing to the files that contain the modified code. This task compiles the database objects against the PostgreSQL database and imports the data.
+
+In this step, you can implement a level of parallelism on importing the data.
+
+### Sync data and cut over
+
+In online (minimal-downtime) migrations, the migration source continues to change. It drifts from the target in terms of data and schema after the one-time migration.
+
+During the *Data sync* phase, ensure that all changes in the source are captured and applied to the target in near real time. After you verify that all changes are applied, you can cut over from the source to the target environment.
+
+To do an online migration, contact AskAzureDBforPostgreSQL@service.microsoft.com for support.
+
+In a *delta/incremental* migration that uses ora2pg, for each table, use a query that filters (*cuts*) by date, time, or another parameter. Then finish the migration by using a second query that migrates the remaining data.
+
+In the source data table, migrate all the historical data first. Here's an example:
+
+```
+select * from table1 where filter_data < '01/01/2019'
+```
+
+You can query the changes since the initial migration by running a command like this one:
+
+```
+select * from table1 where filter_data >= '01/01/2019'
+```
+
+In this case, we recommend that you enhance validation by checking data parity on both sides, the source and the target.
+
+## Post-migration
+
+After the *Migration* stage, complete the post-migration tasks to ensure that everything is functioning as smoothly and efficiently as possible.
+
+### Remediate applications
+
+After the data is migrated to the target environment, all the applications that formerly consumed the source need to start consuming the target. The setup sometimes requires changes to the applications.
+
+### Test
+
+After the data is migrated to the target, run tests against the databases to verify that the applications work well with the target. Make sure the source and target are properly migrated by running the manual data validation scripts against the Oracle source and PostgreSQL target databases.
+
+Ideally, if the source and target databases have a networking path, ora2pg should be used for data validation. You can use the `TEST` action to ensure that all objects from the Oracle database have been created in PostgreSQL.
+
+Run this command:
+
+```
+ora2pg -t TEST -c config/ora2pg.conf > migration_diff.txt
+```
+
+### Optimize
+
+The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness. In this phase, you also address performance issues with the workload.
+
+## Migration assets
+
+For more information about this migration scenario, see the following resources. They support real-world migration project engagement.
+
+| Resource | Description |
+| -- | |
+| [Oracle to Azure PostgreSQL migration cookbook](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20PostgreSQL%20Migration%20Cookbook.pdf) | This document helps architects, consultants, database administrators, and related roles quickly migrate workloads from Oracle to Azure Database for PostgreSQL by using ora2pg. |
+| [Oracle to Azure PostgreSQL migration workarounds](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Workarounds.pdf) | This document helps architects, consultants, database administrators, and related roles quickly fix or work around issues while migrating workloads from Oracle to Azure Database for PostgreSQL. |
+| [Steps to install ora2pg on Windows or Linux](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Steps%20to%20Install%20ora2pg%20on%20Windows%20and%20Linux.pdf) | This document provides a quick installation guide for migrating schema and data from Oracle to Azure Database for PostgreSQL by using ora2pg on Windows or Linux. For more information, see the [ora2pg documentation](http://ora2pg.darold.net/documentation.html). |
+
+The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to the Microsoft Azure data platform.
+
+## More support
+
+For migration help beyond the scope of ora2pg tooling, contact [@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com).
+
+## Next steps
+
+For a matrix of services and tools for database and data migration and for specialty tasks, see [Services and tools for data migration](../../dms/dms-tools-matrix.md).
+
+Documentation:
+- [Azure Database for PostgreSQL documentation](../index.yml)
+- [ora2pg documentation](https://ora2pg.darold.net/documentation.html)
+- [PostgreSQL website](https://www.postgresql.org/)
+- [Autonomous transaction support in PostgreSQL](http://blog.dalibo.com/2016/08/19/Autonoumous_transactions_support_in_PostgreSQL.html) 
postgresql How To Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-online.md
+
+ Title: Minimal-downtime migration to Azure Database for PostgreSQL - Single Server
+description: This article describes how to perform a minimal-downtime migration of a PostgreSQL database to Azure Database for PostgreSQL - Single Server by using the Azure Database Migration Service.
+++++ Last updated : 5/6/2019++
+# Minimal-downtime migration to Azure Database for PostgreSQL - Single Server
+
+You can perform PostgreSQL migrations to Azure Database for PostgreSQL with minimal downtime by using the newly introduced **continuous sync capability** for the [Azure Database Migration Service](https://aka.ms/get-dms) (DMS). This functionality limits the amount of downtime that is incurred by the application.
+
+## Overview
+Azure DMS performs an initial load of your on-premises database to Azure Database for PostgreSQL, and then continuously syncs any new transactions to Azure while the application remains running. After the data catches up on the target Azure side, you stop the application for a brief moment (minimum downtime), wait for the last batch of data (from the time you stop the application until the application is effectively unavailable to take any new traffic) to catch up on the target, and then update your connection string to point to Azure. When you are finished, your application will be live on Azure!
++
+## Next steps
+- View the video [App Modernization with Microsoft Azure](https://medius.studios.ms/Embed/Video/BRK2102?sid=BRK2102), which contains a demo showing how to migrate PostgreSQL apps to Azure Database for PostgreSQL.
+- See the tutorial [Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS](../../dms/tutorial-postgresql-azure-postgresql-online.md).
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
+
+ Title: "Migrate PostgreSQL Single Server to Flexible Server using the Azure CLI"
+
+description: Learn about migrating your Single server databases to Azure database for PostgreSQL Flexible server using CLI.
++++ Last updated : 05/09/2022++
+# Migrate Single Server to Flexible Server PostgreSQL using Azure CLI
+
+>[!NOTE]
+> Single Server to Flexible Server migration tool is in public preview.
+
+This quickstart shows you how to use the Single to Flexible Server migration tool to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
+
+## Before you begin
+
+1. If you are new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
+2. Register your subscription for the Azure Database Migration Service (DMS). If you've already done so, you can skip this step. Go to the Azure portal homepage and navigate to your subscription as shown below.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-dms.png" alt-text="Screenshot of C L I Database Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-dms.png":::
+
+3. In your subscription, navigate to **Resource Providers** from the left navigation menu. Search for **Microsoft.DataMigration**, as shown below, and select **Register**.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-dms-register.png" alt-text="Screenshot of C L I Database Migration Service register button." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-dms-register.png":::
+
+## Pre-requisites
+
+### Setup Azure CLI
+
+1. Install the latest Azure CLI for your operating system from the [Azure CLI install page](/cli/azure/install-azure-cli).
+2. If Azure CLI is already installed, check the version by issuing the **az version** command. The version should be **2.28.0 or above** to use the migration CLI commands. If not, update your Azure CLI using this [link](/cli/azure/update-azure-cli.md).
+3. Once you have the right Azure CLI version, run the **az login** command. A browser window opens with the Azure sign-in page to authenticate. Provide your Azure credentials to complete the authentication. For other ways to sign in with Azure CLI, visit this [link](/cli/azure/authenticate-azure-cli.md).
+
+ ```bash
+ az login
+ ```
+4. Take care of the pre-requisites listed in this [**document**](./concepts-single-to-flexible.md#pre-requisites), which are necessary to get started with the Single to Flexible migration tool.
+
+## Migration CLI commands
+
+The Single to Flexible Server migration tool comes with a list of easy-to-use CLI commands for migration-related tasks. All the CLI commands start with **az postgres flexible-server migration**. You can use the **help** parameter to understand the options associated with a command and to frame the right syntax for it.
+
+```azurecli-interactive
+az postgres flexible-server migration --help
+```
+
+ gives you the following output.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-help.png" alt-text="Screenshot of C L I help." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-help.png":::
+
+It lists the set of migration commands that are supported, along with their actions. Let's look at these commands in detail.
+
+### Create migration
+
+The create migration command helps in creating a migration from a source server to a target server.
+
+```azurecli-interactive
+az postgres flexible-server migration create --help
+```
+
+gives the following result
++
+It calls out the expected arguments and has an example syntax that needs to be used to create a successful migration from the source to the target server. The CLI command to create a migration is given below:
+
+```azurecli
+az postgres flexible-server migration create [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+ [--properties]
+```
+
+| Parameter | Description |
+| - | - |
+|**subscription** | Subscription ID of the target flexible server |
+| **resource-group** | Resource group of the target flexible server |
+| **name** | Name of the target flexible server |
+| **migration-name** | Unique identifier for migrations attempted to the flexible server. This field accepts only alphanumeric characters; the only special character it accepts is a hyphen (**-**). The name cannot start with **-**, and no two migrations to a flexible server can have the same name. |
+| **properties** | Absolute path to a JSON file that has the information about the source single server |
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration create --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --properties "C:\Users\Administrator\Documents\migrationBody.JSON"
+```
+
+The **migration-name** argument used in the **create migration** command will be used in other CLI commands, such as **update, delete, show**, to uniquely identify the migration attempt and to perform the corresponding actions.
+
+The migration tool offers online and offline modes of migration. To know more about the migration modes and their differences, visit this [link](./concepts-single-to-flexible.md).
+
+Create a migration between a source and target server with a migration mode of your choice. The **create** command needs a JSON file to be passed as part of its **properties** argument.
+
+The structure of the JSON is given below.
+
+```json
+{
+  "properties": {
+    "SourceDBServerResourceId": "subscriptions/<subscriptionid>/resourceGroups/<src_rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
+
+    "SourceDBServerFullyQualifiedDomainName": "fqdn of the source server as per the custom DNS server",
+    "TargetDBServerFullyQualifiedDomainName": "fqdn of the target server as per the custom DNS server",
+
+    "SecretParameters": {
+      "AdminCredentials": {
+        "SourceServerPassword": "<password>",
+        "TargetServerPassword": "<password>"
+      },
+      "AADApp": {
+        "ClientId": "<client id>",
+        "TenantId": "<tenant id>",
+        "AadSecret": "<secret>"
+      }
+    },
+
+    "MigrationResourceGroup": {
+      "ResourceId": "subscriptions/<subscriptionid>/resourceGroups/<temp_rg_name>",
+      "SubnetResourceId": "/subscriptions/<subscriptionid>/resourceGroups/<rg_name>/providers/Microsoft.Network/virtualNetworks/<Vnet_name>/subnets/<subnet_name>"
+    },
+
+    "DBsToMigrate": [
+      "<db1>", "<db2>"
+    ],
+
+    "SetupLogicalReplicationOnSourceDBIfNeeded": "true",
+
+    "OverwriteDBsInTarget": "true"
+  }
+}
+```
+
+Create migration parameters:
+
+| Parameter | Type | Description |
+| - | - | - |
+| **SourceDBServerResourceId** | Required | Resource ID of the single server. This parameter is mandatory. |
+| **SourceDBServerFullyQualifiedDomainName** | Optional | Used when a custom DNS server is used for name resolution for a virtual network. The FQDN of the single server as per the custom DNS server should be provided for this property. |
+| **TargetDBServerFullyQualifiedDomainName** | Optional | Used when a custom DNS server is used for name resolution inside a virtual network. The FQDN of the flexible server as per the custom DNS server should be provided for this property. <br> **_SourceDBServerFullyQualifiedDomainName_** and **_TargetDBServerFullyQualifiedDomainName_** should be included as a part of the JSON only in the rare scenario where a custom DNS server is used for name resolution instead of Azure-provided DNS. Otherwise, these parameters should not be included as a part of the JSON file. |
+| **SecretParameters** | Required | Passwords for the admin user of both the single server and the flexible server, along with the Azure AD app credentials. They help to authenticate against the source and target servers and to check proper authorization access to the resources. |
+| **MigrationResourceGroup** | Optional | This section consists of two properties. <br> **ResourceID (optional)**: The migration infrastructure and other network infrastructure components are created to migrate data and schema from the source to the target. By default, all the components created by this tool are provisioned under the resource group of the target server. If you wish to deploy them under a different resource group, then you can assign the resource ID of that resource group to this property. <br> **SubnetResourceID (optional)**: If your source has public access turned OFF, or if your target server is deployed inside a VNet, specify a subnet under which the migration infrastructure needs to be created so that it can connect to both the source and target servers. |
+| **DBsToMigrate** | Required | Specify the list of databases you want to migrate to the flexible server. You can include a maximum of 8 database names at a time. |
+| **SetupLogicalReplicationOnSourceDBIfNeeded** | Optional | Logical replication can be enabled on the source server automatically by setting this property to **true**. This change in the server settings requires a server restart with a downtime of a few minutes (~2-3 minutes). |
+| **OverwriteDBsinTarget** | Optional | If the target server happens to have an existing database with the same name as the one you are trying to migrate, the migration will pause until you acknowledge that overwrites in the target DBs are allowed. This pause can be avoided by giving the migration tool permission to automatically overwrite databases by setting the value of this property to **true**. |
+
+### Mode of migrations
+
+The default migration mode for migrations created using CLI commands is **online**. With the above properties filled out in your JSON file, an online migration would be created from your single server to flexible server.
+
+If you want to migrate in **offline** mode, you need to add an additional property **"TriggerCutover":"true"** to your properties JSON file before initiating the create command.
+
+### List migrations
+
+The **list** command shows the migration attempts that were made to a flexible server. The CLI command to list migrations is given below:
+
+```azurecli
+az postgres flexible-server migration list [--subscription]
+ [--resource-group]
+ [--name]
+ [--filter]
+```
+
+The **filter** parameter accepts two values: **Active** and **All**.
+
+- **Active** - Lists the current active migration attempts for the target server. It does not include migrations that have reached a failed, canceled, or succeeded state.
+- **All** - Lists all the migration attempts to the target server. This includes both active and past migrations, irrespective of state.
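+
+For example, to list only the currently active migrations on the target flexible server, the command looks like the following. The subscription, resource group, and server names are the same illustrative placeholders used in the other examples in this article:
+
+```azurecli-interactive
+az postgres flexible-server migration list --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --filter Active
+```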
+
+For more information about this command, run:
+
+```azurecli-interactive
+az postgres flexible-server migration list --help
+```
+
+### Show Details
+
+The **show** command gets the details of a specific migration. This includes information on the current state and substate of the migration. The CLI command to show the details of a migration is given below:
+
+```azurecli
+az postgres flexible-server migration show [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+```
+
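+For example, reusing the illustrative subscription, resource group, server, and migration names from the other examples in this article, the command looks like the following:
+
+```azurecli-interactive
+az postgres flexible-server migration show --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1
+```
+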
+The **migration-name** is the name assigned to the migration during the **create migration** command. Here is a snapshot of the sample response from the **Show Details** CLI command.
++
+Some important points to note on the command response:
+
+- As soon as the **create** migration command is triggered, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes up to 15 minutes for the migration workflow to deploy the migration infrastructure, configure firewall rules with source and target servers, and to perform a few maintenance tasks.
+- After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place.
+- Each DB being migrated has its own section with all migration details such as table count, incremental inserts, deletes, pending bytes, etc.
+- The time taken for **Migrating Data** substate to complete is dependent on the size of databases that are being migrated.
+- For **Offline** mode, the migration moves to the **Succeeded** state as soon as the **Migrating Data** substate completes successfully. If there is an issue at the **Migrating Data** substate, the migration moves into a **Failed** state.
+- For **Online** mode, the migration moves to the state of **WaitingForUserAction** and a substate of **WaitingForCutoverTrigger** after the **Migrating Data** state completes successfully. The details of **WaitingForUserAction** state are covered in detail in the next section.
+
+For more information about this command, run:
+
+```azurecli-interactive
+az postgres flexible-server migration show --help
+```
+
+### Update migration
+
+As soon as the infrastructure setup is complete, the migration activity pauses, with appropriate messages shown in the **show details** CLI command response, if some pre-requisites are missing or if the migration is ready to perform a cutover. At this point, the migration goes into the **WaitingForUserAction** state. The **update migration** command is used to set values for parameters, which helps the migration move to the next stage in the process. Let us look at each of the substates.
+
+- **WaitingForLogicalReplicationSetupRequestOnSourceDB** - If logical replication is not set on the source server, or if it was not included as a part of the JSON file, the migration waits for logical replication to be enabled at the source. You can enable the logical replication setting manually by changing the replication flag to **Logical** on the portal, which requires a server restart. Logical replication can also be enabled by the following CLI command:
+
+```azurecli
+az postgres flexible-server migration update [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+ [--initiate-data-migration]
+```
+
+You need to pass the value **true** to the **initiate-data-migration** property to set logical replication on your source server.
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --initiate-data-migration true
+```
+
+If you have enabled it manually, **you still need to issue the above update command** for the migration to move out of the **WaitingForUserAction** state. The server does not need another restart, since that was already done via the portal action.
+
+- **WaitingForTargetDBOverwriteConfirmation** - In this state, the migration is waiting for confirmation to overwrite the target, because data is already present in the target server for a database that is being migrated. You can provide this confirmation using the following CLI command:
+
+```azurecli
+az postgres flexible-server migration update [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+ [--overwrite-dbs]
+```
+
+You need to pass the value **true** to the **overwrite-dbs** property to give the permissions to the migration to overwrite any existing data in the target server.
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --overwrite-dbs true
+```
+
+- **WaitingForCutoverTrigger** - Migration gets to this state when the dump and restore of the databases have been completed, and the ongoing writes at your source single server are being replicated to the target flexible server. You should wait for the replication to complete so that the target is in sync with the source. You can monitor the replication lag by using the response from the **show migration** command. There is a metric called **Pending Bytes** associated with each database that is being migrated, and it gives you an indication of the difference between the source and target database in bytes. This should be nearing zero over time. Once it reaches zero for all the databases, stop any further writes to your single server. Then validate the data and schema on your flexible server to make sure they match exactly with the source server. After completing the above steps, you can trigger **cutover** by using the following CLI command:
+
+```azurecli
+az postgres flexible-server migration update [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+ [--cutover]
+```
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --cutover
+```
+
+After issuing the above command, use the **show details** command to monitor whether the cutover has completed successfully. Upon successful cutover, the migration moves to the **Succeeded** state. Update your application to point to the new target flexible server.
+
+For more information about this command, run:
+
+```azurecli-interactive
+az postgres flexible-server migration update --help
+```
+
+### Delete/Cancel Migration
+
+Any ongoing migration attempts can be deleted or canceled using the **delete migration** command. This command stops all migration activities in that task, but does not drop or roll back any changes on your target server. Below is the CLI command to delete a migration:
+
+```azurecli
+az postgres flexible-server migration delete [--subscription]
+ [--resource-group]
+ [--name]
+ [--migration-name]
+```
+
+**For example:**
+
+```azurecli-interactive
+az postgres flexible-server migration delete --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1
+```
+
+For more information about this command, run:
+
+```azurecli-interactive
+az postgres flexible-server migration delete --help
+```
+
+## Monitoring Migration
+
+The **create migration** command starts a migration between the source and target servers. The migration goes through a set of states and substates before eventually reaching a terminal state of **Succeeded**, **Failed**, or **Canceled**. The **show** command helps to monitor ongoing migrations, since it gives the current state and substate of the migration.
+
+Migration **states**:
+
+| Migration State | Description |
+| - | - |
+| **InProgress** | The migration infrastructure is being set up, or the actual data migration is in progress. |
+| **Canceled** | The migration has been canceled or deleted. |
+| **Failed** | The migration has failed. |
+| **Succeeded** | The migration has succeeded and is complete. |
+| **WaitingForUserAction** | Migration is waiting on a user action. This state has a list of substates that were discussed in detail in the previous section. |
+
+Migration **substates**:
+
+| Migration substates | Description |
+| - | - |
+| **PerformingPreRequisiteSteps** | Infrastructure is being set up and is being prepped for data migration. |
+| **MigratingData** | Data is being migrated. |
+| **CompletingMigration** | Migration cutover in progress. |
+| **WaitingForLogicalReplicationSetupRequestOnSourceDB** | Waiting for logical replication enablement. You can enable this manually or via the **update migration** CLI command covered in the previous section. |
+| **WaitingForCutoverTrigger** | Migration is ready for cutover. You can start the cutover when ready. |
+| **WaitingForTargetDBOverwriteConfirmation** | Waiting for confirmation on target overwrite as data is present in the target server being migrated into. <br> You can enable this via the **update migration** CLI command. |
+| **Completed** | Cutover was successful, and migration is complete. |
++
+## How to find if custom DNS is used for name resolution?
+Navigate to the virtual network where you deployed your source or target server and select **DNS servers**. It indicates whether the virtual network uses a custom DNS server or the default Azure-provided DNS.
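+
+If you prefer to check from the command line, the virtual network's custom DNS servers (if any) can also be listed with the Azure CLI. This is only a convenience sketch; the resource group and virtual network names are placeholders, and an empty result generally means the default Azure-provided DNS is in use:
+
+```azurecli-interactive
+az network vnet show --resource-group <resource-group-name> --name <vnet-name> --query dhcpOptions.dnsServers
+```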
++
+## Post Migration Steps
+
+Make sure the post-migration steps listed [here](./concepts-single-to-flexible.md) are followed for a successful end-to-end migration.
+
+## Next steps
+
+- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
+
+ Title: "Migrate PostgreSQL Single Server to Flexible Server using the Azure portal"
+
+description: Learn about migrating your Single server databases to Azure database for PostgreSQL Flexible server using Portal.
++++ Last updated : 05/09/2022++
+# Migrate Single Server to Flexible Server PostgreSQL using the Azure portal
+
+This guide shows you how to use the Single to Flexible Server migration tool to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
+
+## Before you begin
+
+1. If you are new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
+2. Register your subscription for the Azure Database Migration Service
+
+Go to Azure portal homepage and navigate to your subscription as shown below.
++
+In your subscription, navigate to **Resource Providers** from the left navigation menu. Search for **Microsoft.DataMigration** as shown below and click on **Register**.
++
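+Alternatively, you can register the resource provider from the command line instead of the portal. A minimal Azure CLI equivalent is shown below; registration can take a few minutes to complete:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.DataMigration
+az provider show --namespace Microsoft.DataMigration --query registrationState
+```
+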
+## Pre-requisites
+
+Complete the [pre-requisites](./concepts-single-to-flexible.md#pre-requisites) to get started with the migration tool.
+
+## Configure migration task
+
+The Single to Flexible Server migration tool comes with a simple, wizard-based portal experience. The following steps show how to use the tool from the portal.
+
+1. **Sign into the Azure portal -** Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard.
+
+2. Navigate to your Azure Database for PostgreSQL flexible server. If you have not created an Azure Database for PostgreSQL flexible server, create one using this [link](../flexible-server/quickstart-create-server-portal.md).
+
+3. In the **Overview** tab of your flexible server, use the left navigation menu and scroll down to the option of **Migration (preview)** and click on it.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png" alt-text="Screenshot of Migration Preview Tab details." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png":::
+
+4. Click the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you are using the migration tool, you will see an empty grid with a prompt to begin your first migration.
+
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png" alt-text="Screenshot of Migrate from Single Server tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png":::
+
+ If you have already created migrations to your flexible server, you should see the grid populated with information of the list of migrations that were attempted to this flexible server from single servers.
+
+5. Click on the **Migrate from Single Server** button. You will be taken through a wizard-based user interface to create a migration to this flexible server from any single server.
+
+### Setup tab
+
+The first tab is the **Setup** tab, which has basic information about the migration and the list of pre-requisites needed to get started with migrations. The list of pre-requisites is the same as the one in the pre-requisites section [here](./concepts-single-to-flexible.md). Click on the provided link to learn more.
++
+- The **Migration name** is the unique identifier for each migration to this flexible server. This field accepts only alphanumeric characters and does not accept any special characters except **&#39;-&#39;**. The name cannot start with a **&#39;-&#39;** and should be unique for a target server. No two migrations to the same flexible server can have the same name.
+- The **Migration resource group** is where all the migration-related components will be created by this migration tool.
+
+By default, it is the resource group of the target flexible server, and all the components will be cleaned up automatically once the migration completes. If you want to create a temporary resource group for migration-related purposes, create a resource group and select it from the dropdown.
+
+- For the **Azure Active Directory App**, click the **select** option and pick the app that was created as a part of the pre-requisite step. Once the Azure AD App is chosen, paste the client secret that was generated for the Azure AD app to the **Azure Active Directory Client Secret** field.
++
+Click on the **Next** button.
+
+### Source tab
++
+The source tab prompts you for details of the source single server from which databases need to be migrated. As soon as you pick the **Subscription** and **Resource Group**, the dropdown for server names shows the list of single servers under that resource group across regions. It is recommended to migrate databases from a single server to a flexible server in the same region.
+
+Choose the single server from which you want to migrate databases in the dropdown.
+
+Once the single server is chosen, the fields such as **Location, PostgreSQL version, Server admin login name** are automatically pre-populated. The server admin login name is the admin username that was used to create the single server. Enter the password for the **server admin login name**. This is required for the migration tool to log in to the single server to initiate the dump and migration.
+
+You should also see the list of user databases inside the single server that you can pick for migration. You can select up to eight databases that can be migrated in a single migration attempt. If there are more than eight user databases, create multiple migrations using the same experience between the source and target servers.
+
+The final property in the source tab is the migration mode. The migration tool offers online and offline modes of migration. The concepts page talks more about the [migration modes and their differences](./concepts-single-to-flexible.md).
+
+Once you pick the migration mode, the restrictions associated with the mode are displayed.
+
+After filling out all the fields, please click the **Next** button.
+
+### Target tab
++
+This tab displays metadata of the flexible server like the **Subscription**, **Resource Group**, **Server name**, **Location**, and **PostgreSQL version**. It displays the **server admin login name**, which is the admin username that was used during the creation of the flexible server. Enter the corresponding password for the admin user. This is required for the migration tool to log in to the flexible server to perform restore operations.
+
+Choose an option **yes/no** for **Authorize DB overwrite**.
+
+- If you set the option to **Yes**, you give this migration service permission to overwrite existing data when a database that is being migrated is already present on the flexible server.
+- If set to **No**, it goes into a waiting state and asks you for permission either to overwrite the data or to cancel the migration.
+
+Click on the **Next** button
+
+### Networking tab
+
+The content on the Networking tab depends on the networking topology of your source and target servers.
+
+- If both source and target servers are in public access, then you are going to see the message below.
++
+In this case, you need not do anything and can just click on the **Next** button.
+
+- If either the source or target server is configured in private access, then the content of the networking tab is different. Let us understand what private access means for single server and flexible server:
+
+- **Single Server Private Access** - **Deny public network access** set to **Yes** and a private endpoint configured
+- **Flexible Server Private Access** - When the flexible server is deployed inside a VNet.
+
+If either source or target is configured in private access, then the networking tab looks like the following:
++
+All the fields will be automatically populated with subnet details. This is the subnet in which the migration tool will deploy Azure DMS to move data between the source and target.
+
+You can go ahead with the suggested subnet or choose a different subnet. But make sure that the selected subnet can connect to both the source and target servers.
+
+After picking a subnet, click on the **Next** button.
+
+### Review + create tab
+
+This tab gives a summary of all the details for creating the migration. Review the details and click on the **Create** button to start the migration.
++
+## Monitoring migrations
+
+After clicking on the **Create** button, you should see a notification in a few seconds saying the migration was successfully created.
++
+You should automatically be redirected to the **Migrations (Preview)** page of the flexible server, which will have a new entry for the recently created migration.
++
+The grid displaying the migrations has various columns including **Name**, **Status**, **Source server name**, **Region**, **Version**, **Database names**, and the **Migration start time**. By default, the grid shows the list of migrations in the decreasing order of migration start time. In other words, recent migrations appear on top of the grid.
+
+You can use the refresh button to refresh the status of the migrations.
+
+You can click on the migration name in the grid to see the details of that migration.
++
+- As soon as the migration is created, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes up to 10 minutes for the migration workflow to move out of this substate, since it takes time to create and deploy DMS, add its IP address to the firewall lists of the source and target servers, and perform a few maintenance tasks.
+- After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place.
+- The time taken for **Migrating Data** substate to complete is dependent on the size of databases that are being migrated.
+- You can click on each of the DBs that are being migrated and a fan-out blade appears that has all migration details such as table count, incremental inserts, deletes, pending bytes, etc.
+- For **Offline** mode, the migration moves to the **Succeeded** state as soon as the **Migrating Data** substate completes successfully. If there is an issue at the **Migrating Data** substate, the migration moves into a **Failed** state.
+- For **Online** mode, the migration moves to the **WaitingForUserAction** state and the **WaitingForCutover** substate after the **Migrating Data** substate completes successfully.
++
+You can click on the migration name to go into the migration details page and should see the substate of **WaitingForCutover**.
++
+At this stage, the ongoing writes at your source single server are replicated to the target flexible server using the logical decoding feature of PostgreSQL. You should wait until the replication reaches a state where the target is almost in sync with the source. You can monitor the replication lag by clicking on each of the databases that are being migrated. It opens a fan-out blade with a set of metrics. Look for the value of the **Pending Bytes** metric; it should be nearing zero over time. Once it reaches a few MB for all the databases, stop any further writes to your single server and wait until the metric reaches 0. Then validate the data and schema on your flexible server to make sure they match exactly with the source server.
+
+After completing the above steps, click on the **Cutover** button. You should see the following message
++
+Click on the **Yes** button to start cutover.
+
+In a few seconds after starting cutover, you should see the following notification
++
+Once the cutover is complete, the migration moves to the **Succeeded** state, and the migration of schema and data from your single server to the flexible server is now complete. You can use the refresh button on the page to check if the cutover was successful.
+
+After completing the above steps, you can make changes to your application code to point database connection strings to the flexible server and start using it as the primary database server.
+
+Possible migration states include
+
+- **InProgress**: The migration infrastructure is being set up, or the actual data migration is in progress.
+- **Canceled**: The migration has been canceled or deleted.
+- **Failed**: The migration has failed.
+- **Succeeded**: The migration has succeeded and is complete.
+- **WaitingForUserAction**: Migration is waiting on a user action.
+
+Possible migration substates include
+
+- **PerformingPreRequisiteSteps**: Infrastructure is being set up and is being prepped for data migration
+- **MigratingData**: Data is being migrated
+- **CompletingMigration**: Migration cutover in progress
+- **WaitingForLogicalReplicationSetupRequestOnSourceDB**: Waiting for logical replication enablement.
+- **WaitingForCutoverTrigger**: Migration is ready for cutover.
+- **WaitingForTargetDBOverwriteConfirmation**: Waiting for confirmation on target overwrite as data is present in the target server being migrated into.
+- **Completed**: Cutover was successful, and migration is complete.
+
+## Cancel migrations
+
+You also have the option to cancel any ongoing migrations. For a migration to be canceled, it must be in **InProgress** or **WaitingForUserAction** state. You cannot cancel a migration that has either already **Succeeded** or **Failed**.
+
+You can choose multiple ongoing migrations at once and can cancel them.
++
+Note that **cancel migration** just stops any further migration activity on your target server. It will not drop or roll back any changes on your target server that were made by the migration attempts. Make sure to drop the databases involved in a canceled migration on your target server.
+
+## Post migration steps
+
+Make sure the post-migration steps listed [here](./concepts-single-to-flexible.md) are followed for a successful end-to-end migration.
+
+## Next steps
+- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
postgresql How To Migrate Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-using-dump-and-restore.md
+
+ Title: Dump and restore - Azure Database for PostgreSQL - Single Server
+description: You can extract a PostgreSQL database into a dump file. Then, you can restore from a file created by pg_dump in Azure Database for PostgreSQL Single Server.
+++++ Last updated : 09/22/2020++
+# Migrate your PostgreSQL database by using dump and restore
+
+You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a dump file. Then use [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) to restore the PostgreSQL database from an archive file created by `pg_dump`.
+
+## Prerequisites
+
+To step through this how-to guide, you need:
+- An [Azure Database for PostgreSQL server](../single-server/quickstart-create-server-database-portal.md), including firewall rules to allow access.
+- [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) command-line utilities installed.
+
+## Create a dump file that contains the data to be loaded
+
+To back up an existing PostgreSQL database on-premises or in a VM, run the following command:
+
+```bash
+pg_dump -Fc -v --host=<host> --username=<name> --dbname=<database name> -f <database>.dump
+```
+For example, if you have a local server and a database called **testdb** in it, run:
+
+```bash
+pg_dump -Fc -v --host=localhost --username=masterlogin --dbname=testdb -f testdb.dump
+```
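+
+If the source database is hosted on an Azure Database for PostgreSQL single server rather than on a local machine, the same command applies; only the host and username take the Azure forms used elsewhere in this article. A sketch with placeholder server, login, and database names:
+
+```bash
+pg_dump -Fc -v --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin@mydemoserver --dbname=testdb -f testdb.dump
+```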
+
+## Restore the data into the target database
+
+After you've created the target database, you can use the `pg_restore` command and the `--dbname` parameter to restore the data into the target database from the dump file.
+
+```bash
+pg_restore -v --no-owner --host=<server name> --port=<port> --username=<user-name> --dbname=<target database name> <database>.dump
+```
+
+Including the `--no-owner` parameter causes all objects created during the restore to be owned by the user specified with `--username`. For more information, see the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/app-pgrestore.html).
+
+> [!NOTE]
+> On Azure Database for PostgreSQL servers, TLS/SSL connections are on by default. If your PostgreSQL server requires TLS/SSL connections, but doesn't have them, set an environment variable `PGSSLMODE=require` so that the pg_restore tool connects with TLS. Without TLS, the error might read: "FATAL: SSL connection is required. Please specify SSL options and retry." In the Windows command line, run the command `SET PGSSLMODE=require` before running the `pg_restore` command. In Linux or Bash, run the command `export PGSSLMODE=require` before running the `pg_restore` command.
+>
+
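+For example, in Bash you might set the variable and then run the restore; the sketch below reuses the Single Server example shown later in this article:
+
+```bash
+export PGSSLMODE=require
+pg_restore -v --no-owner --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin@mydemoserver --dbname=mypgsqldb testdb.dump
+```
+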
+In this example, restore the data from the dump file **testdb.dump** into the database **mypgsqldb**, on target server **mydemoserver.postgres.database.azure.com**.
+
+Here's an example for how to use this `pg_restore` for Single Server:
+
+```bash
+pg_restore -v --no-owner --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin@mydemoserver --dbname=mypgsqldb testdb.dump
+```
+
+Here's an example for how to use this `pg_restore` for Flexible Server:
+
+```bash
+pg_restore -v --no-owner --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin --dbname=mypgsqldb testdb.dump
+```
+
+## Optimize the migration process
+
+One way to migrate your existing PostgreSQL database to Azure Database for PostgreSQL is to back up the database on the source and restore it in Azure. To minimize the time required to complete the migration, consider using the following parameters with the backup and restore commands.
+
+> [!NOTE]
+> For detailed syntax information, see [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html).
+>
+
+### For the backup
+
+Take the backup with the `-Fc` switch, so that you can perform the restore in parallel to speed it up. For example:
+
+```bash
+pg_dump -h my-source-server-name -U source-server-username -Fc -d source-databasename -f Z:\Data\Backups\my-database-backup.dump
+```
+
+### For the restore
+
+- Move the backup file to an Azure VM in the same region as the Azure Database for PostgreSQL server you are migrating to. Perform the `pg_restore` from that VM to reduce network latency. Create the VM with [accelerated networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) enabled.
+
+- Open the dump file to verify that the create index statements are after the insert of the data. If it isn't the case, move the create index statements after the data is inserted. This should already be done by default, but it's a good idea to confirm.
+
+- Restore with the switches `-Fc` and `-j` (with a number) to parallelize the restore. The number you specify is the number of cores on the target server. You can also set it to twice the number of cores of the target server to see the impact.
+
+ Here's an example for how to use this `pg_restore` for Single Server:
+
+ ```bash
+ pg_restore -h my-target-server.postgres.database.azure.com -U azure-postgres-username@my-target-server -Fc -j 4 -d my-target-databasename Z:\Data\Backups\my-database-backup.dump
+ ```
+
+ Here's an example for how to use this `pg_restore` for Flexible Server:
+
+ ```bash
+ pg_restore -h my-target-server.postgres.database.azure.com -U azure-postgres-username -Fc -j 4 -d my-target-databasename Z:\Data\Backups\my-database-backup.dump
+ ```
+
+- You can also edit the dump file by adding the command `set synchronous_commit = off;` at the beginning, and the command `set synchronous_commit = on;` at the end. Not turning it on at the end, before the apps change the data, might result in subsequent loss of data.
+
+- On the target Azure Database for PostgreSQL server, consider doing the following before the restore:
+
+ - Turn off query performance tracking. These statistics aren't needed during the migration. You can do this by setting `pg_stat_statements.track`, `pg_qs.query_capture_mode`, and `pgms_wait_sampling.query_capture_mode` to `NONE` (see the example after this list).
+
+ - Use a high compute and high memory SKU, like 32 vCore Memory Optimized, to speed up the migration. You can easily scale back down to your preferred SKU after the restore is complete. The higher the SKU, the more parallelism you can achieve by increasing the corresponding `-j` parameter in the `pg_restore` command.
+
+ - More IOPS on the target server might improve the restore performance. You can provision more IOPS by increasing the server's storage size. This setting isn't reversible, but consider whether a higher IOPS would benefit your actual workload in the future.
+
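+The query-tracking parameters mentioned in the list above can also be changed with the Azure CLI instead of the portal. The following is only a sketch for a Single Server target; the resource group and server names are placeholders, and on a Flexible Server target the equivalent command is `az postgres flexible-server parameter set`:
+
+```azurecli-interactive
+az postgres server configuration set --resource-group <resource-group-name> --server-name <target-server-name> --name pg_stat_statements.track --value NONE
+az postgres server configuration set --resource-group <resource-group-name> --server-name <target-server-name> --name pg_qs.query_capture_mode --value NONE
+az postgres server configuration set --resource-group <resource-group-name> --server-name <target-server-name> --name pgms_wait_sampling.query_capture_mode --value NONE
+```
+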
+Remember to test and validate these commands in a test environment before you use them in production.
+
+## Next steps
+
+- To migrate a PostgreSQL database by using export and import, see [Migrate your PostgreSQL database using export and import](how-to-migrate-using-export-and-import.md).
+- For more information about migrating databases to Azure Database for PostgreSQL, see the [Database Migration Guide](/data-migration/).
postgresql How To Migrate Using Export And Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-using-export-and-import.md
+
+ Title: Migrate a database - Azure Database for PostgreSQL - Single Server
+description: Describes how to extract a PostgreSQL database into a script file and import the data into the target database from that file.
+++++ Last updated : 09/22/2020+
+# Migrate your PostgreSQL database using export and import
+
+You can use [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) to extract a PostgreSQL database into a script file and [psql](https://www.postgresql.org/docs/current/static/app-psql.html) to import the data into the target database from that file.
+
+## Prerequisites
+To step through this how-to guide, you need:
+- An [Azure Database for PostgreSQL server](../single-server/quickstart-create-server-database-portal.md) with firewall rules to allow access, and a database created under it.
+- [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) command-line utility installed
+- [psql](https://www.postgresql.org/docs/current/static/app-psql.html) command-line utility installed
+
+Follow these steps to export and import your PostgreSQL database.
+
+## Create a script file using pg_dump that contains the data to be loaded
+To export your existing PostgreSQL database on-premises or in a VM to a SQL script file, run the following command in your existing environment:
+
+```bash
+pg_dump --host=<host> --username=<name> --dbname=<database name> --file=<database>.sql
+```
+For example, if you have a local server and a database called **testdb** in it:
+```bash
+pg_dump --host=localhost --username=masterlogin --dbname=testdb --file=testdb.sql
+```
+
+## Import the data on target Azure Database for PostgreSQL
+You can use the psql command line and the `--dbname` parameter (`-d`) to import the data into the Azure Database for PostgreSQL server and load data from the SQL file.
+
+```bash
+psql --file=<database>.sql --host=<server name> --port=5432 --username=<user> --dbname=<target database name>
+```
+This example uses the psql utility and the script file named **testdb.sql** from the previous step to import data into the database **mypgsqldb** on the target server **mydemoserver.postgres.database.azure.com**.
+
+For **Single Server**, use this command
+```bash
+psql --file=testdb.sql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin@mydemoserver --dbname=mypgsqldb
+```
+
+For **Flexible Server**, use this command
+```bash
+psql --file=testdb.sql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=mylogin --dbname=mypgsqldb
+```
+++
+## Next steps
+- To migrate a PostgreSQL database using dump and restore, see [Migrate your PostgreSQL database using dump and restore](how-to-migrate-using-dump-and-restore.md).
+- For more information about migrating databases to Azure Database for PostgreSQL, see the [Database Migration Guide](/data-migration/).
postgresql How To Setup Azure Ad App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-setup-azure-ad-app-portal.md
+
+ Title: "Set up Azure AD app to use with Single to Flexible migration"
+
+description: Learn about setting up Azure AD App to be used with Single to Flexible Server migration feature.
++++ Last updated : 05/09/2022++
+# Set up Azure AD app to use with Single to Flexible server Migration
+
+This quickstart shows you how to set up an Azure Active Directory (Azure AD) app to use with the Single to Flexible Server migration. It's an important component of the Single to Flexible migration feature. See [Azure Active Directory app](../../active-directory/develop/howto-create-service-principal-portal.md) for details. The Azure AD app helps with role-based access control (RBAC): the migration infrastructure requires access to both the source and target servers, and that access is restricted by the roles assigned to the Azure AD app. Once created, the Azure AD app instance can be used to manage multiple migrations. To get started, create a new Azure Active Directory enterprise app by following these steps:
+
+## Create Azure AD App
+
+1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
+2. Search for Azure Active Directory in the search bar on the top in the portal.
+3. Within the Azure Active Directory portal, under **Manage** on the left, choose **App Registrations**.
+4. Click on **New Registration**
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-new-registration.png" alt-text="New Registration for Azure Active Directory App." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-new-registration.png":::
+
+5. Give the app registration a name, choose an option that suits your needs for account types, and click **Register**.
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-application-registration.png" alt-text="Azure AD App Name screen." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-application-registration.png":::
+
+6. Once the app is created, you can copy the client ID and tenant ID required for later steps in the migration. Next, click on **Add a certificate or secret**.
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-secret-screen.png" alt-text="Add a certificate screen." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-secret-screen.png":::
+
+7. In the next screen, click on **New client secret**.
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-new-client-secret.png" alt-text="New Client Secret screen." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-new-client-secret.png":::
+
+8. In the fan-out blade that opens, add a description, and select the drop-down to pick the life span of your Azure Active Directory App. Once all the migrations are complete, the Azure Active Directory App that was created for Role Based Access Control can be deleted. The default option is six months. If you don't need Azure Active Directory App for six months, choose three months and click **Add**.
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-client-secret-description.png" alt-text="Client Secret Description." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-client-secret-description.png":::
+
+9. In the next screen, copy the **Value** column that has the details of the Azure Active Directory app secret. The secret can be copied only at creation time. If you miss copying the secret, you will need to delete it and create a new one for future attempts.
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-client-secret-value.png" alt-text="Copying client secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-client-secret-value.png":::
+
+10. Once Azure Active Directory App is created, you will need to add contributor privileges for this Azure Active Directory app to the following resources:
+
+ | Resource | Type | Description |
+ | - | - | - |
+ | Single Server | Required | Source single server you're migrating from. |
+ | Flexible Server | Required | Target flexible server you're migrating into. |
+ | Azure Resource Group | Required | Resource group for the migration. By default, this is the target flexible server resource group. If you're using a temporary resource group to create the migration infrastructure, the Azure Active Directory App will require contributor privileges to this resource group. |
+ | VNET | Required (if used) | If the source or the target happens to have private access, then the Azure Active Directory App will require contributor privileges to corresponding VNet. If you're using public access, you can skip this step. |
++
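+If you prefer to script this setup instead of using the portal, an app registration with a client secret and a **Contributor** role assignment can also be created in a single Azure CLI step. The sketch below is only illustrative: the display name and scope are placeholders, and the role assignment should be repeated for each resource listed in the table above. The command output contains the client (app) ID, the client secret, and the tenant ID that are needed later during migration setup.
+
+```azurecli-interactive
+az ad sp create-for-rbac --name "single-to-flex-migration-app" --role Contributor --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>
+```
+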
+## Add contributor privileges to an Azure resource
+
+Repeat the steps listed below for the source single server, the target flexible server, the resource group, and the VNet (if used).
+
+1. For the target flexible server, select the target flexible server in the Azure portal. Click on Access Control (IAM) on the top left.
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-iam-screen.png" alt-text="Access Control I A M screen." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-iam-screen.png":::
+
+2. Click **Add** and choose **Add role assignment**.
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-role-assignment.png" alt-text="Add role assignment here." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-role-assignment.png":::
+
+> [!NOTE]
+> The Add role assignment capability is only enabled for users in the subscription with the **Owner** role. Users with other roles do not have permission to add role assignments.
+
+3. Under the **Role** tab, click on **Contributor** and click the **Next** button.
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-contributor-privileges.png" alt-text="Choosing Contributor Screen." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-contributor-privileges.png":::
+
+4. Under the Members tab, keep the default option of **Assign access to** User, group or service principal and click **Select Members**. Search for your Azure Active Directory App and click on **Select**.
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-review-and-assign.png" alt-text="Review and Assign Screen." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-review-and-assign.png":::
+
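+The same role assignment can also be granted from the Azure CLI, which may be quicker when repeating these steps for several resources. This is only a sketch; the app (client) ID and resource ID are placeholders for your own Azure AD app and target resource:
+
+```azurecli-interactive
+az role assignment create --assignee <app-client-id> --role Contributor --scope <resource-id>
+```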
+
+## Next steps
+
+- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
+- [Migrate to Flexible server using Azure portal](./how-to-migrate-single-to-flexible-portal.md)
+- [Migrate to Flexible server using Azure CLI](./how-to-migrate-single-to-flexible-cli.md)
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
# Azure Private Link availability
-Azure Private Link enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a [private endpoint](private-endpoint-overview.md) in your virtual network.
+Azure Private Link enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a [private endpoint](private-endpoint-overview.md) in your virtual network.
> [!IMPORTANT]
-> Azure Private Link is now generally available. Both Private Endpoint and Private Link service (service behind standard load balancer) are generally available. Different Azure PaaS will onboard to Azure Private Link at different schedules. For known limitations, see [Private Endpoint](private-endpoint-overview.md#limitations) and [Private Link Service](private-link-service-overview.md#limitations).
+> Azure Private Link is now generally available. Both Private Endpoint and Private Link service (service behind standard load balancer) are generally available. Different Azure PaaS will onboard to Azure Private Link at different schedules. For known limitations, see [Private Endpoint](private-endpoint-overview.md#limitations) and [Private Link Service](private-link-service-overview.md#limitations).
## Service availability
-The following tables list the Private Link services and the regions where they're available.
+The following tables list the Private Link services and the regions where they're available.
### AI + Machine Learning
The following tables list the Private Link services and the regions where they'r
|:-|:--|:-|:--| |Azure App Configuration | All public regions | | GA </br> [Learn how to create a private endpoint for Azure App Configuration](../azure-app-configuration/concept-private-endpoint.md) | |Azure-managed Disks | All public regions<br/> All Government regions<br/>All China regions | [Select for known limitations](../virtual-machines/disks-enable-private-links-for-import-export-portal.md#limitations) | GA <br/> [Learn how to create a private endpoint for Azure Managed Disks.](../virtual-machines/disks-enable-private-links-for-import-export-portal.md) |
+| Azure Batch (batchAccount) | All public regions<br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Batch.](../batch/private-connectivity.md) |
+| Azure Batch (nodeManagement) | [Selected regions](../batch/simplified-compute-node-communication.md#supported-regions) | Supported for [simplified compute node communication](../batch/simplified-compute-node-communication.md) | Preview <br/> [Learn how to create a private endpoint for Azure Batch.](../batch/private-connectivity.md) |
### Containers
The following tables list the Private Link services and the regions where they'r
| Azure File Sync | All public regions | | GA <br/> [Learn how to create Azure Files network endpoints.](../storage/file-sync/file-sync-networking-endpoints.md) | | Azure Queue storage | All public regions<br/> All Government regions | Supported only on Account Kind General Purpose V2 | GA <br/> [Learn how to create a private endpoint for queue storage.](tutorial-private-endpoint-storage-portal.md) | | Azure Table storage | All public regions<br/> All Government regions | Supported only on Account Kind General Purpose V2 | GA <br/> [Learn how to create a private endpoint for table storage.](tutorial-private-endpoint-storage-portal.md) |
-| Azure Batch | All public regions except: Germany CENTRAL, Germany NORTHEAST <br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Batch.](../batch/private-connectivity.md) |
### Web |Supported services |Available regions | Other considerations | Status |
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
It's important to correctly configure your DNS settings to resolve the private endpoint IP address to the fully qualified domain name (FQDN) of the connection string.
-Existing Microsoft Azure services might already have a DNS configuration for a public endpoint. This configuration must be overridden to connect using your private endpoint.
-
-The network interface associated with the private endpoint contains the information to configure your DNS. The network interface information includes FQDN and private IP addresses for your private link resource.
-
-You can use the following options to configure your DNS settings for private endpoints:
-- **Use the host file (only recommended for testing)**. You can use the host file on a virtual machine to override the DNS.
+Existing Microsoft Azure services might already have a DNS configuration for a public endpoint. This configuration must be overridden to connect using your private endpoint.
+
+The network interface associated with the private endpoint contains the information to configure your DNS. The network interface information includes FQDN and private IP addresses for your private link resource.
+
+You can use the following options to configure your DNS settings for private endpoints:
+- **Use the host file (only recommended for testing)**. You can use the host file on a virtual machine to override the DNS.
- **Use a private DNS zone**. You can use [private DNS zones](../dns/private-dns-privatednszone.md) to override the DNS resolution for a private endpoint. A private DNS zone can be linked to your virtual network to resolve specific domains. - **Use your DNS forwarder (optional)**. You can use your DNS forwarder to override the DNS resolution for a private link resource. Create a DNS forwarding rule to use a private DNS zone on your [DNS server](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) hosted in a virtual network. > [!IMPORTANT]
-> It is not recommended to override a zone that's actively in use to resolve public endpoints. Connections to resources won't be able to resolve correctly without DNS forwarding to the public DNS. To avoid issues, create a different domain name or follow the suggested name for each service below.
+> It is not recommended to override a zone that's actively in use to resolve public endpoints. Connections to resources won't be able to resolve correctly without DNS forwarding to the public DNS. To avoid issues, create a different domain name or follow the suggested name for each service below.
> [!IMPORTANT] > Existing Private DNS Zones tied to a single service should not be associated with two different Private Endpoints as it will not be possible to properly resolve two different A-Records that point to the same service. However, Private DNS Zones tied to multiple services would not face this resolution constraint. ## Azure services DNS zone configuration
-Azure creates a canonical name DNS record (CNAME) on the public DNS. The CNAME record redirects the resolution to the private domain name. You can override the resolution with the private IP address of your private endpoints.
-
-Your applications don't need to change the connection URL. When resolving to a public DNS service, the DNS server will resolve to your private endpoints. The process doesn't affect your existing applications.
+Azure creates a canonical name DNS record (CNAME) on the public DNS. The CNAME record redirects the resolution to the private domain name. You can override the resolution with the private IP address of your private endpoints.
+
+Your applications don't need to change the connection URL. When resolving to a public DNS service, the DNS server will resolve to your private endpoints. The process doesn't affect your existing applications.
> [!IMPORTANT]
-> Private networks already using the private DNS zone for a given type, can only connect to public resources if they don't have any private endpoint connections, otherwise a corresponding DNS configuration is required on the private DNS zone in order to complete the DNS resolution sequence.
+> Private networks already using the private DNS zone for a given type, can only connect to public resources if they don't have any private endpoint connections, otherwise a corresponding DNS configuration is required on the private DNS zone in order to complete the DNS resolution sequence.
For Azure services, use the recommended zone names as described in the following table:
For Azure services, use the recommended zone names as described in the following
| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Cassandra | privatelink.cassandra.cosmos.azure.com | cassandra.cosmos.azure.com | | Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Gremlin | privatelink.gremlin.cosmos.azure.com | gremlin.cosmos.azure.com | | Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Table | privatelink.table.cosmos.azure.com | table.cosmos.azure.com |
-| Azure Batch (Microsoft.Batch/batchAccounts) / batchAccount | privatelink.{region}.batch.azure.com | {region}.batch.azure.com |
+| Azure Batch (Microsoft.Batch/batchAccounts) / batchAccount | privatelink.batch.azure.com | {region}.batch.azure.com |
+| Azure Batch (Microsoft.Batch/batchAccounts) / nodeManagement | privatelink.batch.azure.com | {region}.service.batch.azure.com |
| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) / postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com | | Azure Database for MySQL (Microsoft.DBforMySQL/servers) / mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com | | Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) / mariadbServer | privatelink.mariadb.database.azure.com | mariadb.database.azure.com |
For Azure services, use the recommended zone names as described in the following
| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Cassandra | privatelink.cassandra.cosmos.azure.cn | cassandra.cosmos.azure.cn | | Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Gremlin | privatelink.gremlin.cosmos.azure.cn | gremlin.cosmos.azure.cn | | Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Table | privatelink.table.cosmos.azure.cn | table.cosmos.azure.cn |
+| Azure Batch (Microsoft.Batch/batchAccounts) / batchAccount | privatelink.batch.chinacloudapi.cn | {region}.batch.chinacloudapi.cn |
+| Azure Batch (Microsoft.Batch/batchAccounts) / nodeManagement | privatelink.batch.chinacloudapi.cn | {region}.service.batch.chinacloudapi.cn |
| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) / postgresqlServer | privatelink.postgres.database.chinacloudapi.cn | postgres.database.chinacloudapi.cn | | Azure Database for MySQL (Microsoft.DBforMySQL/servers) / mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn | | Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) / mariadbServer | privatelink.mariadb.database.chinacloudapi.cn | mariadb.database.chinacloudapi.cn |
DNS is a critical component to make the application work correctly by successful
Based on your preferences, the following scenarios are available with DNS resolution integrated: -- [Virtual network workloads without custom DNS server](#virtual-network-workloads-without-custom-dns-server)-- [On-premises workloads using a DNS forwarder](#on-premises-workloads-using-a-dns-forwarder)-- [Virtual network and on-premises workloads using a DNS forwarder](#virtual-network-and-on-premises-workloads-using-a-dns-forwarder)
+- [Azure Private Endpoint DNS configuration](#azure-private-endpoint-dns-configuration)
+ - [Azure services DNS zone configuration](#azure-services-dns-zone-configuration)
+ - [China](#china)
+ - [DNS configuration scenarios](#dns-configuration-scenarios)
+ - [Virtual network workloads without custom DNS server](#virtual-network-workloads-without-custom-dns-server)
+ - [On-premises workloads using a DNS forwarder](#on-premises-workloads-using-a-dns-forwarder)
+ - [Virtual network and on-premises workloads using a DNS forwarder](#virtual-network-and-on-premises-workloads-using-a-dns-forwarder)
+ - [Private DNS zone group](#private-dns-zone-group)
+ - [Next steps](#next-steps)
> [!NOTE] > [Azure Firewall DNS proxy](../firewall/dns-settings.md#dns-proxy) can be used as DNS forwarder for [On-premises workloads](#on-premises-workloads-using-a-dns-forwarder) and [Virtual network workloads using a DNS forwarder](#virtual-network-and-on-premises-workloads-using-a-dns-forwarder).
You can extend this model to peered virtual networks associated to the same priv
> [!IMPORTANT] > If you're using a private endpoint in a hub-and-spoke model from a different subscription or even within the same subscription, link the same private DNS zones to all spokes and hub virtual networks that contain clients that need DNS resolution from the zones.
-In this scenario, there's a [hub and spoke](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) networking topology. The spoke networks share a private endpoint. The spoke virtual networks are linked to the same private DNS zone.
+In this scenario, there's a [hub and spoke](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) networking topology. The spoke networks share a private endpoint. The spoke virtual networks are linked to the same private DNS zone.
:::image type="content" source="media/private-endpoint-dns/hub-and-spoke-azure-dns.png" alt-text="Hub and spoke with Azure-provided DNS":::
In this scenario, there's a [hub and spoke](/azure/architecture/reference-archit
For on-premises workloads to resolve the FQDN of a private endpoint, use a DNS forwarder to resolve the Azure service [public DNS zone](#azure-services-dns-zone-configuration) in Azure. A [DNS forwarder](/windows-server/identity/ad-ds/plan/reviewing-dns-concepts#resolving-names-by-using-forwarding) is a Virtual Machine running on the Virtual Network linked to the Private DNS Zone that can proxy DNS queries coming from other Virtual Networks or from on-premises. This is required as the query must be originated from the Virtual Network to Azure DNS. A few options for DNS proxies are: Windows running DNS services, Linux running DNS services, [Azure Firewall](../firewall/dns-settings.md).
-The following scenario is for an on-premises network that has a DNS forwarder in Azure. This forwarder resolves DNS queries via a server-level forwarder to the Azure provided DNS [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md).
+The following scenario is for an on-premises network that has a DNS forwarder in Azure. This forwarder resolves DNS queries via a server-level forwarder to the Azure provided DNS [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md).
> [!NOTE] > This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone configuration](#azure-services-dns-zone-configuration).
The following diagram shows the DNS resolution for both networks, on-prem
## Private DNS zone group
-If you choose to integrate your private endpoint with a private DNS zone, a private DNS zone group is also created. The DNS zone group is a strong association between the private DNS zone and the private endpoint that helps auto-updating the private DNS zone when there is an update on the private endpoint. For example, when you add or remove regions, the private DNS zone is automatically updated.
+If you choose to integrate your private endpoint with a private DNS zone, a private DNS zone group is also created. The DNS zone group is a strong association between the private DNS zone and the private endpoint that helps auto-updating the private DNS zone when there is an update on the private endpoint. For example, when you add or remove regions, the private DNS zone is automatically updated.
Previously, the DNS records for the private endpoint were created via scripting (retrieving certain information about the private endpoint and then adding it on the DNS zone). With the DNS zone group, there is no need to write any additional CLI/PowerShell lines for every DNS zone. Also, when you delete the private endpoint, all the DNS records within the DNS zone group will be deleted as well.
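Here's a minimal sketch of attaching a DNS zone group to an existing private endpoint, assuming the private DNS zone already exists; all resource names are placeholders.

```azurecli
# Attach a DNS zone group to an existing private endpoint so its DNS records
# are created and removed automatically (names are placeholders).
az network private-endpoint dns-zone-group create \
    --resource-group MyResourceGroup \
    --endpoint-name MyPrivateEndpoint \
    --name MyZoneGroup \
    --private-dns-zone "privatelink.database.windows.net" \
    --zone-name sql
```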
-A common scenario for DNS zone group is in a hub-and-spoke topology, where it allows the private DNS zones to be created only once in the hub and allows the spokes to register to it, rather than creating differents zones in each spoke.
+A common scenario for DNS zone group is in a hub-and-spoke topology, where it allows the private DNS zones to be created only once in the hub and allows the spokes to register to it, rather than creating different zones in each spoke.
> [!NOTE] > Each DNS zone group can support up to 5 DNS zones. > [!NOTE]
-> Adding multiple DNS zone groups to a single Private Endpoint is not supported.
+> Adding multiple DNS zone groups to a single Private Endpoint is not supported.
## Next steps - [Learn about private endpoints](private-endpoint-overview.md)
purview Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/disaster-recovery.md
Previously updated : 04/23/2021 Last updated : 06/03/2022 # Disaster recovery for Microsoft Purview
purview How To Data Owner Policies Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-arc-sql-server.md
Previously updated : 05/25/2022 Last updated : 06/03/2022 # Provision access by data owner for SQL Server on Azure Arc-enabled servers (preview)
This how-to guide describes how a data owner can delegate authoring policies in
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]-- SQL server version 2022 CTP 2.0 or later
+- SQL Server 2022 CTP 2.0 or later. For more information, see [SQL Server 2022](https://www.microsoft.com/sql-server/sql-server-2022).
- Complete process to onboard that SQL server with Azure Arc and enable Azure AD Authentication. [Follow this guide to learn how](https://aka.ms/sql-on-arc-AADauth). **Enforcement of policies for this data source is available only in the following regions for Microsoft Purview**
role-based-access-control Role Assignments Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-cli.md
Previously updated : 09/28/2020 Last updated : 06/03/2022
You can assign a role to a user, group, service principal, or managed identity.
For an Azure AD user, get the user principal name, such as *patlong\@contoso.com* or the user object ID. To get the object ID, you can use [az ad user show](/cli/azure/ad/user#az-ad-user-show). ```azurecli
-az ad user show --id "{principalName}" --query "objectId" --output tsv
+az ad user show --id "{principalName}" --query "id" --output tsv
``` **Group**
az ad user show --id "{principalName}" --query "objectId" --output tsv
For an Azure AD group, you need the group object ID. To get the object ID, you can use [az ad group show](/cli/azure/ad/group#az-ad-group-show) or [az ad group list](/cli/azure/ad/group#az-ad-group-list). ```azurecli
-az ad group show --group "{groupName}" --query "objectId" --output tsv
+az ad group show --group "{groupName}" --query "id" --output tsv
``` **Service principal**
az ad group show --group "{groupName}" --query "objectId" --output tsv
For an Azure AD service principal (identity used by an application), you need the service principal object ID. To get the object ID, you can use [az ad sp list](/cli/azure/ad/sp#az-ad-sp-list). For a service principal, use the object ID and **not** the application ID. ```azurecli
-az ad sp list --all --query "[].{displayName:displayName, objectId:objectId}" --output tsv
+az ad sp list --all --query "[].{displayName:displayName, id:id}" --output tsv
az ad sp list --display-name "{displayName}" ```
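Once you have the object ID, the assignment itself follows the same pattern for all principal types. Here's a hedged sketch; the object ID, role, and scope values are placeholders.

```azurecli
# Assign a role to the principal identified by its object ID (placeholder values).
az role assignment create \
    --assignee-object-id "11111111-1111-1111-1111-111111111111" \
    --assignee-principal-type ServicePrincipal \
    --role "Contributor" \
    --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyResourceGroup"
```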
role-based-access-control Role Assignments List Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-cli.md
na Previously updated : 10/30/2020 Last updated : 06/03/2022
az role assignment list --scope /providers/Microsoft.Management/managementGroups
To get the principal ID of a user-assigned managed identity, you can use [az ad sp list](/cli/azure/ad/sp#az-ad-sp-list) or [az identity list](/cli/azure/identity#az-identity-list). ```azurecli
- az ad sp list --display-name "{name}" --query [].objectId --output tsv
+ az ad sp list --display-name "{name}" --query [].id --output tsv
``` To get the principal ID of a system-assigned managed identity, you can use [az ad sp list](/cli/azure/ad/sp#az-ad-sp-list). ```azurecli
- az ad sp list --display-name "{vmname}" --query [].objectId --output tsv
+ az ad sp list --display-name "{vmname}" --query [].id --output tsv
``` 1. To list the role assignments, use [az role assignment list](/cli/azure/role/assignment#az-role-assignment-list).
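For example, with a placeholder principal ID:

```azurecli
# List all role assignments for the managed identity's principal ID (placeholder value).
az role assignment list \
    --assignee "22222222-2222-2222-2222-222222222222" \
    --all \
    --output table
```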
role-based-access-control Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-template.md
Previously updated : 01/21/2021 Last updated : 06/03/2022 ms.devlang: azurecli
$objectid = (Get-AzADUser -DisplayName "{name}").id
``` ```azurecli
-objectid=$(az ad user show --id "{email}" --query objectId --output tsv)
+objectid=$(az ad user show --id "{email}" --query id --output tsv)
``` ### Group
$objectid = (Get-AzADGroup -DisplayName "{name}").id
``` ```azurecli
-objectid=$(az ad group show --group "{name}" --query objectId --output tsv)
+objectid=$(az ad group show --group "{name}" --query id --output tsv)
``` ### Managed identities
$objectid = (Get-AzADServicePrincipal -DisplayName <Azure resource name>).id
``` ```azurecli
-objectid=$(az ad sp list --display-name <Azure resource name> --query [].objectId --output tsv)
+objectid=$(az ad sp list --display-name <Azure resource name> --query [].id --output tsv)
``` ### Application
$objectid = (Get-AzADServicePrincipal -DisplayName "{name}").id
``` ```azurecli
-objectid=$(az ad sp list --display-name "{name}" --query [].objectId --output tsv)
+objectid=$(az ad sp list --display-name "{name}" --query [].id --output tsv)
``` ## Assign an Azure role
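With `$objectid` captured, here's a hedged sketch of the deployment step; the template file name and parameter name are assumptions for illustration.

```azurecli
# Deploy a template that accepts the principal's object ID as a parameter
# (template file and parameter names are assumptions).
az deployment group create \
    --resource-group MyResourceGroup \
    --template-file rbac-template.json \
    --parameters principalId=$objectid
```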
security Best Practices And Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/best-practices-and-patterns.md
na Previously updated : 5/03/2019 Last updated : 6/02/2022 # Azure security best practices and patterns
-The articles below contain security best practices to use when youΓÇÖre designing, deploying, and managing your cloud solutions by using Azure. These best practices come from our experience with Azure security and the experiences of customers like you.
+The articles below contain security best practices to use when you're designing, deploying, and managing your cloud solutions by using Azure. These best practices come from our experience with Azure security and the experiences of customers like you.
The best practices are intended to be a resource for IT pros. This might include designers, architects, developers, and testers who build and deploy secure Azure solutions.
The best practices are intended to be a resource for IT pros. This might include
* [Securing PaaS web and mobile applications using Azure Storage](paas-applications-using-storage.md) * [Security best practices for IaaS workloads in Azure](iaas.md)
-The white paper [Security best practices for Azure solutions](https://azure.microsoft.com/resources/security-best-practices-for-azure-solutions) is a collection of the security best practices found in the articles listed above.
+## Next steps
-[Download the white paper](https://azure.microsoft.com/mediahandler/files/resourcefiles/security-best-practices-for-azure-solutions/Azure%20Security%20Best%20Practices.pdf)
+Microsoft has found that using security benchmarks can help you quickly secure cloud deployments. Benchmark recommendations from your cloud service provider give you a starting point for selecting specific security configuration settings in your environment and allow you to quickly reduce risk to your organization. See the [Azure Security Benchmark](/security/benchmark/azure/introduction) for a collection of high-impact security recommendations you can use to help secure the services you use in Azure.
service-fabric Service Fabric Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started.md
If you only need the SDK, you can install this package:
The current versions are:
-* Service Fabric SDK and Tools 6.0.1017
-* Service Fabric runtime 9.0.1017
+* Service Fabric SDK and Tools 6.0.1028
+* Service Fabric runtime 9.0.1028
For a list of supported versions, see [Service Fabric versions](service-fabric-versions.md)
service-fabric Service Fabric Reliable Services Communication Remoting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-communication-remoting.md
Title: Service remoting by using C# in Service Fabric description: Service Fabric remoting allows clients and services to communicate with C# services by using a remote procedure call. Previously updated : 05/17/2022 Last updated : 06/03/2022 # Service remoting in C# with Reliable Services
class MyService : StatelessService, IMyService
## Call remote service methods
+> [!NOTE]
+> If your service uses more than one partition, you must provide the appropriate ServicePartitionKey to ServiceProxy.Create(). This isn't needed for a single-partition scenario.
+ Calling methods on a service by using the remoting stack is done by using a local proxy to the service through the `Microsoft.ServiceFabric.Services.Remoting.Client.ServiceProxy` class. The `ServiceProxy` method creates a local proxy by using the same interface that the service implements. With that proxy, you can call methods on the interface remotely. ```csharp
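// A hedged sketch (the application and service names here are assumed, not from the article):
// for a partitioned service, pass the target partition's ServicePartitionKey when creating
// the proxy, as called out in the note above.
IMyService partitionedProxy = ServiceProxy.Create<IMyService>(
    new Uri("fabric:/MyApplication/MyService"),
    new ServicePartitionKey(0)); // Int64Range key; use a string value for named partitions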
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
The tables in this article outline the Service Fabric and platform versions that
| Service Fabric runtime |Can upgrade directly from|Can downgrade to*|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support | | | | | | | | |
+| 9.0 CU1<br>9.0.1028.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
| 9.0 RTO<br>9.0.1017.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
+| 8.2 CU3<br>8.2.1620.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
| 8.2 CU2.1<br>8.2.1571.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 | | 8.2 CU2<br>8.2.1486.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 6.0 (Preview), .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 | | 8.2 CU1<br>8.2.1363.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 | | 8.2 RTO<br>8.2.1235.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
+| 8.1 CU4<br>8.1.388.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
| 8.1 CU3.1<br>8.1.337.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 | | 8.1 CU3<br>8.1.335.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 | | 8.1 CU2<br>8.1.329.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1 (GA), <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
Support for Service Fabric on a specific OS ends when support for the OS version
| Service Fabric runtime | Can upgrade directly from |Can downgrade to*|Compatible SDK or NuGet package version | Supported .NET runtimes** | OS version | End of support | | | | | | | | |
+| 9.0 CU1<br>9.0.1035.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
| 9.0 RTO<br>9.0.1018.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
+| 8.2 CU3<br>8.2.1434.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 |
| 8.2 CU2.1<br>8.2.1397.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 | | 8.2 CU2<br>8.2.1285.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 | | 8.2 CU1<br>8.2.1204.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 | | 8.2 RTO<br>8.2.1124.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 |
+| 8.1 CU4<br>8.1.360.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 |
| 8.1 CU3.1<br>8.1.340.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 | | 8.1 CU3<br>8.1.334.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 | | 8.1 CU2<br>8.1.328.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 |
The following table lists the version names of Service Fabric and their correspo
| Version name | Windows version number | Linux version number | | | | |
+| 9.0 CU1 | 9.0.1028.9590 | 9.0.1035.1 |
| 9.0 RTO | 9.0.1017.9590 | 9.0.1018.1 |
+| 8.2 CU3 | 8.2.1620.9590 | 8.2.1434.1 |
| 8.2 CU2.1 | 8.2.1571.9590 | 8.2.1397.1 | | 8.2 CU2 | 8.2.1486.9590 | 8.2.1285.1 | | 8.2 CU1 | 8.2.1363.9590 | 8.2.1204.1 | | 8.2 RTO | 8.2.1235.9590 | 8.2.1124.1 |
+| 8.1 CU4 | 8.1.388.9590 | 8.1.360.1 |
| 8.1 CU3.1 | 8.1.337.9590 | 8.1.340.1 | | 8.1 CU3 | 8.1.335.9590 | 8.1.334.1 | | 8.1 CU2 | 8.1.329.9590 | 8.1.328.1 |
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
Previously updated : 03/04/2022 Last updated : 06/03/2022
This article describes limitations and known issues of SFTP support for Azure Bl
The following clients are known to be incompatible with SFTP for Azure Blob Storage (preview). See [Supported algorithms](secure-file-transfer-protocol-support.md#supported-algorithms) for more information.
-- Axway
- Five9
- Kemp
-- Moveit
- Mule
- paramiko 1.16.0
-- Salesforce
- SSH.NET 2016.1.0
-- XFB.Gateway
> [!NOTE] > The unsupported client list above is not exhaustive and may change over time.
The following clients are known to be incompatible with SFTP for Azure Blob Stor
| Random writes and appends | <li>Operations that include both READ and WRITE flags. For example: [SSH.NET create API](https://github.com/sshnet/SSH.NET/blob/develop/src/Renci.SshNet/SftpClient.cs#:~:text=public%20SftpFileStream-,Create,-(string%20path))<li>Operations that include APPEND flag. For example: [SSH.NET append API](https://github.com/sshnet/SSH.NET/blob/develop/src/Renci.SshNet/SftpClient.cs#:~:text=public%20void-,AppendAllLines,-(string%20path%2C%20IEnumerable%3Cstring%3E%20contents)). | | Links |<li>`symlink` - creating symbolic links<li>`ln` - creating hard links<li>Reading links not supported | | Capacity Information | `df` - usage info for filesystem |
-| Extensions | Unsupported extensions include but are not limited to: fsync@openssh.com, limits@openssh.com, lsetstat@openssh.com, statvfs@openssh.com |
+| Extensions | Unsupported extensions include but aren't limited to: fsync@openssh.com, limits@openssh.com, lsetstat@openssh.com, statvfs@openssh.com |
| SSH Commands | SFTP is the only supported subsystem. Shell requests after the completion of the key exchange will fail. |
-| Multi-protocol writes | Random writes and appends (`PutBlock`,`PutBlockList`, `GetBlockList`, `AppendBlock`, `AppendFile`) are not allowed from other protocols on blobs that are created by using SFTP. Full overwrites are allowed.|
+| Multi-protocol writes | Random writes and appends (`PutBlock`,`PutBlockList`, `GetBlockList`, `AppendBlock`, `AppendFile`) aren't allowed from other protocols on blobs that are created by using SFTP. Full overwrites are allowed.|
## Authentication and authorization - _Local users_ is the only form of identity management that is currently supported for the SFTP endpoint. -- Azure Active Directory (Azure AD) is not supported for the SFTP endpoint.
+- Azure Active Directory (Azure AD) isn't supported for the SFTP endpoint.
-- POSIX-like access control lists (ACLs) are not supported for the SFTP endpoint.
+- POSIX-like access control lists (ACLs) aren't supported for the SFTP endpoint.
> [!NOTE] > After your data is ingested into Azure Storage, you can use the full breadth of Azure storage security settings. While authorization mechanisms such as role-based access control (RBAC) and access control lists aren't supported as a means to authorize a connecting SFTP client, they can be used to authorize access via Azure tools (such as the Azure portal, Azure CLI, Azure PowerShell commands, and AzCopy) as well as Azure SDKs and Azure REST APIs. -- Account and container level operations are not supported for the SFTP endpoint.
+- Account and container level operations aren't supported for the SFTP endpoint.
## Networking - To access the storage account using SFTP, your network must allow traffic on port 22. -- When a firewall is configured, connections from non-allowed IPs are not rejected as expected. However, if there is a successful connection for an authenticated user then all data plane operations will be rejected.--- There's a 4 minute timeout for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically.
+- There's a 4-minute timeout for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically.
## Security
The following clients are known to be incompatible with SFTP for Azure Blob Stor
## Integrations -- Change feed and Event Grid notifications are not supported.
+- Change feed and Event Grid notifications aren't supported.
- Network File System (NFS) 3.0 and SFTP can't be enabled on the same storage account.
For performance issues and considerations, see [SSH File Transfer Protocol (SFTP
## Other -- Special containers such as $logs, $blobchangefeed, $root, $web are not accessible via the SFTP endpoint.
+- Special containers such as $logs, $blobchangefeed, $root, $web aren't accessible via the SFTP endpoint.
-- Symbolic links are not supported.
+- Symbolic links aren't supported.
-- `ssh-keyscan` is not supported.
+- `ssh-keyscan` isn't supported.
-- SSH and SCP commands, that are not SFTP, are not supported.
+- SSH and SCP commands that aren't SFTP aren't supported.
-- FTPS and FTP are not supported.
+- FTPS and FTP aren't supported.
## Troubleshooting
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Previously updated : 03/04/2022 Last updated : 06/03/2022
# SSH File Transfer Protocol (SFTP) support for Azure Blob Storage (preview)
-Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to leverage SFTP for file access, file transfer, as well as file management.
+Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to use SFTP for file access, file transfer, and file management.
> [!IMPORTANT] > SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts. Complete [this form](https://forms.office.com/r/gZguN0j65Y) BEFORE using the feature in preview. Registration via 'preview features' is NOT required and confirmation email will NOT be sent after filling out the form. You can IMMEDIATELY access the feature.
Azure allows secure data transfer to Blob Storage accounts using Azure Blob serv
Prior to the release of this feature, if you wanted to use SFTP to transfer data to Azure Blob Storage you would have to either purchase a third party product or orchestrate your own solution. You would have to create a virtual machine (VM) in Azure to host an SFTP server, and then figure out a way to move data into the storage account.
-Now, with SFTP support for Azure Blob Storage, you can enable an SFTP endpoint for Blob Storage accounts with a single setting. Then you can set up local user identities for authentication to transfer data securely without the need to do any additional work.
+Now, with SFTP support for Azure Blob Storage, you can enable an SFTP endpoint for Blob Storage accounts with a single setting. Then you can set up local user identities for authentication to transfer data securely without the need to do any more work.
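Here's a minimal sketch of that single setting, assuming an Azure CLI version that supports the SFTP preview; the account and resource group names are placeholders.

```azurecli
# Enable the SFTP endpoint on an existing storage account (placeholder names;
# requires a CLI version that supports the SFTP preview).
az storage account update \
    --name mystorageaccount \
    --resource-group MyResourceGroup \
    --enable-sftp true
```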
This article describes SFTP support for Azure Blob Storage. To learn how to enable SFTP for your storage account, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP) (preview)](secure-file-transfer-protocol-support-how-to.md).
Different protocols extend from the hierarchical namespace. The SFTP is one of t
## SFTP permission model
-Azure Blob Storage does not support Azure Active Directory (Azure AD) authentication or authorization via SFTP. Instead, SFTP utilizes a new form of identity management called _local users_.
+Azure Blob Storage doesn't support Azure Active Directory (Azure AD) authentication or authorization via SFTP. Instead, SFTP utilizes a new form of identity management called _local users_.
Local users must use either a password or a Secure Shell (SSH) private key credential for authentication. You can have a maximum of 1000 local users for a storage account.
-To set up access permissions, you will create a local user, and choose authentication methods. Then, for each container in your account, you can specify the level of access you want to give that user.
+To set up access permissions, you'll create a local user, and choose authentication methods. Then, for each container in your account, you can specify the level of access you want to give that user.
> [!CAUTION] > Local users do not interoperate with other Azure Storage permission models such as RBAC (role based access control), ABAC (attribute based access control), and ACLs (access control lists).
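Here's a rough sketch of creating a local user with a generated password and access to a single container, assuming your Azure CLI version includes the `az storage account local-user` command group; the names and permission letters are placeholders and the exact flags may differ by CLI version.

```azurecli
# Create a local user with a generated password and scoped container access
# (assumes the local-user command group is available; names are placeholders).
az storage account local-user create \
    --account-name mystorageaccount \
    --resource-group MyResourceGroup \
    -n sftpuser1 \
    --home-directory mycontainer \
    --permission-scope permissions=rwl service=blob resource-name=mycontainer \
    --has-ssh-password true
```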
For SFTP enabled storage accounts, you can use the full breadth of Azure Blob St
## Authentication methods
-You can authenticate local users connecting via SFTP by using a password or a Secure Shell (SSH) public-private keypair. You can configure both forms of authentication and let connecting local users choose which one to use. However, multifactor authentication, whereby both a valid password and a valid public-private key pair are required for successful authentication is not supported.
+You can authenticate local users connecting via SFTP by using a password or a Secure Shell (SSH) public-private key pair. You can configure both forms of authentication and let connecting local users choose which one to use. However, multifactor authentication, whereby both a valid password and a valid public-private key pair are required for successful authentication, isn't supported.
#### Passwords
-Passwords are generated for you. If you choose password authentication, then your password will be provided after you finish configuring a local user. Make sure to copy that password and save it in a location where you can find it later. You won't be able to retrieve that password from Azure again. If you lose the password, you will have to generate a new one. For security reasons, you can't set the password yourself.
+Passwords are generated for you. If you choose password authentication, then your password will be provided after you finish configuring a local user. Make sure to copy that password and save it in a location where you can find it later. You won't be able to retrieve that password from Azure again. If you lose the password, you'll have to generate a new one. For security reasons, you can't set the password yourself.
#### SSH key pairs
If you choose to authenticate with private-public key pair, you can either gener
## Container permissions
-In the current release, you can specify only container-level permissions. Directory-level permissions are not supported. You can choose which containers you want to grant access to and what level of access you want to provide (Read, Write, List, Delete, and Create). Those permissions apply to all directories and subdirectories in the container. You can grant each local user access to as many as 100 containers. Container permissions can also be updated after creating a local user. The following table describes each permission in more detail.
+In the current release, you can specify only container-level permissions. Directory-level permissions aren't supported. You can choose which containers you want to grant access to and what level of access you want to provide (Read, Write, List, Delete, and Create). Those permissions apply to all directories and subdirectories in the container. You can grant each local user access to as many as 100 containers. Container permissions can also be updated after creating a local user. The following table describes each permission in more detail.
| Permission | Symbol | Description | ||||
sftp myaccount.myusername@myaccount.blob.core.windows.net
put logfile.txt ```
-If you set the home directory of a user to `mycontainer/mydirectory`, then they would connect to that directory. Then, the `logfile.txt` file would be uploaded to `mycontainer/mydirectory`. If you did not set the home directory, then the connection attempt would fail. Instead, connecting users would have to specify a container along with the request and then use SFTP commands to navigate to the target directory before uploading a file. The following example shows this:
+If you set the home directory of a user to `mycontainer/mydirectory`, then they would connect to that directory. Then, the `logfile.txt` file would be uploaded to `mycontainer/mydirectory`. If you didn't set the home directory, then the connection attempt would fail. Instead, connecting users would have to specify a container along with the request and then use SFTP commands to navigate to the target directory before uploading a file. The following example shows this:
```powershell sftp myaccount.mycontainer.myusername@myaccount.blob.core.windows.net
SFTP support for Azure Blob Storage currently limits its cryptographic algorithm
### Known supported clients
-The following clients have compatible algorithm support with SFTP for Azure Blob Storage (preview). See [Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md) if you are having trouble connecting.
+The following clients have compatible algorithm support with SFTP for Azure Blob Storage (preview). See [Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md) if you're having trouble connecting.
- AsyncSSH 2.1.0+
+- Axway
- Cyberduck 7.8.2+ - edtFTPjPRO 7.0.0+ - FileZilla 3.53.0+ - libssh 0.9.5+ - Maverick Legacy 1.7.15+
+- Moveit 12.7
- OpenSSH 7.4+ - paramiko 2.8.1+ - PuTTY 0.74+ - QualysML 12.3.41.1+ - RebexSSH 5.0.7119.0+
+- Salesforce
- ssh2js 0.1.20+ - sshj 0.27.0+ - SSH.NET 2020.0.0+ - WinSCP 5.10+ - Workday
+- XFB.Gateway
> [!NOTE] > The supported client list above is not exhaustive and may change over time.
storage Storage Use Azurite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azurite.md
description: The Azurite open-source emulator provides a free local environment
Previously updated : 12/03/2021 Last updated : 06/03/2022
You can pass the following connection strings to the [Azure SDKs](https://aka.ms
The full connection string is:
-`DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;`
+`DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;`
To connect to the blob service only, the connection string is:
To connect to the queue service only, the connection string is:
`DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;`
+To connect to the table service only, the connection string is:
+
+`DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;`
+ #### HTTPS connection strings The full HTTPS connection string is:
-`DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;`
+`DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;TableEndpoint=https://127.0.0.1:10002/devstoreaccount1;`
To use the blob service only, the HTTPS connection string is:
To use the queue service only, the HTTPS connection string is:
`DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;`
+To use the table service only, the HTTPS connection string is:
+
+`DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;TableEndpoint=https://127.0.0.1:10002/devstoreaccount1;`
+ If you used `dotnet dev-certs` to generate your self-signed certificate, use the following connection string.
-`DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://localhost:10000/devstoreaccount1;QueueEndpoint=https://localhost:10001/devstoreaccount1;`
+`DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://localhost:10000/devstoreaccount1;QueueEndpoint=https://localhost:10001/devstoreaccount1;TableEndpoint=https://localhost:10002/devstoreaccount1;`
Update the connection string when using [custom storage accounts and keys](#custom-storage-accounts-and-keys).
var client = new BlobContainerClient(
// With connection string var client = new BlobContainerClient(
- "DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;", "container-name"
+ "DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;", "container-name"
); // With account name and key
var client = new QueueClient(
// With connection string var client = new QueueClient(
- "DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;", "queue-name"
+ "DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;", "queue-name"
); // With account name and key
var client = new QueueClient(
); ```
+#### Azure Table Storage
+
+You can also instantiate a TableClient or TableServiceClient.
+
+```csharp
+// With table URL and DefaultAzureCredential
+var client = new TableClient(
+ new Uri("https://127.0.0.1:10001/devstoreaccount1/table-name"), new DefaultAzureCredential()
+ );
+
+// With connection string
+var client = new TableClient(
+ "DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;TableEndpoint=https://127.0.0.1:10001/devstoreaccount1;", "table-name"
+ );
+
+// With account name and key
+var client = new TableClient(
+ new Uri("https://127.0.0.1:10001/devstoreaccount1/table-name"),
+ new TableSharedKeyCredential("devstoreaccount1", "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==")
+ );
+```
+ ### Microsoft Azure Storage Explorer You can use Storage Explorer to view the data stored in Azurite.
The following files and folders may be created in the workspace location when in
- `__blobstorage__` - Directory containing Azurite blob service persisted binary data - `__queuestorage__` - Directory containing Azurite queue service persisted binary data
+- `__tablestorage__` - Directory containing Azurite table service persisted binary data
- `__azurite_db_blob__.json` - Azurite blob service metadata file - `__azurite_db_blob_extent__.json` - Azurite blob service extent metadata file - `__azurite_db_queue__.json` - Azurite queue service metadata file - `__azurite_db_queue_extent__.json` - Azurite queue service extent metadata file
+- `__azurite_db_table__.json` - Azurite table service metadata file
+- `__azurite_db_table_extent__.json` - Azurite table service extent metadata file
To clean up Azurite, delete above files and folders and restart the emulator.
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
description: Learn how to deploy Azure File Sync, from start to finish, using th
Previously updated : 05/27/2022 Last updated : 06/03/2022
We strongly recommend that you read [Planning for an Azure Files deployment](../
1. An **Azure file share** in the same region that you want to deploy Azure File Sync. For more information, see: - [Region availability](file-sync-planning.md#azure-file-sync-region-availability) for Azure File Sync. - [Create a file share](../files/storage-how-to-create-file-share.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) for a step-by-step description of how to create a file share.
-2. **SMB security settings** on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
+2. The following **storage account** settings must be enabled to allow Azure File Sync access to the storage account:
+ - **SMB security settings** must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
+ - **Allow storage account key access** must be **Enabled**. To check this setting, navigate to your storage account and select Configuration under the Settings section.
3. At least one supported instance of **Windows Server** to sync with Azure File Sync. For more information about supported versions of Windows Server and recommended system resources, see [Windows file server considerations](file-sync-planning.md#windows-file-server-considerations). 4. **Optional**: If you intend to use Azure File Sync with a Windows Server Failover Cluster, the **File Server for general use** role must be configured prior to installing the Azure File Sync agent on each node in the cluster. For more information on how to configure the **File Server for general use** role on a Failover Cluster, see [Deploying a two-node clustered file server](/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
We strongly recommend that you read [Planning for an Azure Files deployment](../
1. An **Azure file share** in the same region that you want to deploy Azure File Sync. For more information, see: - [Region availability](file-sync-planning.md#azure-file-sync-region-availability) for Azure File Sync. - [Create a file share](../files/storage-how-to-create-file-share.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) for a step-by-step description of how to create a file share.
-2. **SMB security settings** on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
+2. The following **storage account** settings must be enabled to allow Azure File Sync access to the storage account:
+ - **SMB security settings** must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
+ - **Allow storage account key access** must be **Enabled**. To check this setting, navigate to your storage account and select Configuration under the Settings section.
3. At least one supported instance of **Windows Server** to sync with Azure File Sync. For more information about supported versions of Windows Server and recommended system resources, see [Windows file server considerations](file-sync-planning.md#windows-file-server-considerations). 4. **Optional**: If you intend to use Azure File Sync with a Windows Server Failover Cluster, the **File Server for general use** role must be configured prior to installing the Azure File Sync agent on each node in the cluster. For more information on how to configure the **File Server for general use** role on a Failover Cluster, see [Deploying a two-node clustered file server](/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
We strongly recommend that you read [Planning for an Azure Files deployment](../
1. An **Azure file share** in the same region that you want to deploy Azure File Sync. For more information, see: - [Region availability](file-sync-planning.md#azure-file-sync-region-availability) for Azure File Sync. - [Create a file share](../files/storage-how-to-create-file-share.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) for a step-by-step description of how to create a file share.
-2. **SMB security settings** on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
+2. The following **storage account** settings must be enabled to allow Azure File Sync access to the storage account:
+ - **SMB security settings** must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
+ - **Allow storage account key access** must be **Enabled**. To check this setting, navigate to your storage account and select Configuration under the Settings section, or use the CLI sketch after this list.
3. At least one supported instance of **Windows Server** to sync with Azure File Sync. For more information about supported versions of Windows Server and recommended system resources, see [Windows file server considerations](file-sync-planning.md#windows-file-server-considerations). 4. **Optional**: If you intend to use Azure File Sync with a Windows Server Failover Cluster, the **File Server for general use** role must be configured prior to installing the Azure File Sync agent on each node in the cluster. For more information on how to configure the **File Server for general use** role on a Failover Cluster, see [Deploying a two-node clustered file server](/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
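Here's a minimal Azure CLI sketch for checking and, if needed, enabling the shared key access setting mentioned in the prerequisites above; the account and resource group names are placeholders.

```azurecli
# Check whether shared key access is allowed, and enable it if it isn't
# (account and resource group names are placeholders).
az storage account show \
    --name mystorageaccount \
    --resource-group MyResourceGroup \
    --query allowSharedKeyAccess

az storage account update \
    --name mystorageaccount \
    --resource-group MyResourceGroup \
    --allow-shared-key-access true
```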
storage File Sync How To Manage Tiered Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-how-to-manage-tiered-files.md
There are several ways to check whether a file has been tiered to your Azure fil
When an application accesses a file, the last access time for the file is updated in the cloud tiering database. Applications that scan the file system like anti-virus cause all files to have the same last access time, which impacts when files are tiered.
-To exclude applications from last access time tracking, add the process name to the appropriate registry setting that is located under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync.
+To exclude applications from last access time tracking, add the process exclusions to the HeatTrackingProcessNamesExclusionList registry setting.
-For v11 and v12 release, add the process exclusions to the HeatTrackingProcessNameExclusionList registry setting.
-Example: reg ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync" /v HeatTrackingProcessNameExclusionList /t REG_MULTI_SZ /d "SampleApp.exe\0AnotherApp.exe" /f
-
-For v13 release and newer, add the process exclusions to the HeatTrackingProcessNamesExclusionList registry setting.
Example: reg ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync" /v HeatTrackingProcessNamesExclusionList /t REG_SZ /d "SampleApp.exe|AnotherApp.exe" /f > [!NOTE]
Optional parameters:
- `-Order CloudTieringPolicy` will recall the most recently modified or accessed files first and is allowed by the current tiering policy. * If volume free space policy is configured, files will be recalled until the volume free space policy setting is reached. For example if the volume free policy setting is 20%, recall will stop once the volume free space reaches 20%. * If volume free space and date policy is configured, files will be recalled until the volume free space or date policy setting is reached. For example, if the volume free policy setting is 20% and the date policy is 7 days, recall will stop once the volume free space reaches 20% or all files accessed or modified within 7 days are local.-- `-ThreadCount` determines how many files can be recalled in parallel.
+- `-ThreadCount` determines how many files can be recalled in parallel (thread count limit is 32).
- `-PerFileRetryCount` determines how many times a recall of a currently blocked file will be attempted. - `-PerFileRetryDelaySeconds` determines the time in seconds between recall retry attempts and should always be used in combination with the previous parameter.
storage File Sync Server Endpoint Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-endpoint-create.md
As part of this section, a choice can be made for how content from the Azure fil
:::column-end::: :::row-end:::
-Once you selected an initial download option, you cannot change it after you confirm to create the server endpoint. How files appear on the server after initial download finishes, depends on your use of the cloud tiering feature and whether or not you opted to [proactively recall changes in the cloud](file-sync-cloud-tiering-overview.md#proactive-recalling). The latter is a feature useful for sync groups with multiple server endpoints in different geographic locations.
+Once you select an initial download option, you cannot change it after you confirm the creation of the server endpoint.
+
+> [!NOTE]
+> To improve the file download performance when adding a server endpoint to a sync group, use the [Invoke-StorageSyncFileRecall](file-sync-how-to-manage-tiered-files.md#how-to-recall-a-tiered-file-to-disk) cmdlet.
+
+### File download behavior once initial download completes
+
+How files appear on the server after the initial download finishes depends on your use of the cloud tiering feature and whether or not you opted to [proactively recall changes in the cloud](file-sync-cloud-tiering-overview.md#proactive-recalling). The latter is useful for sync groups with multiple server endpoints in different geographic locations.
* **Cloud tiering is enabled** </br> New and changed files from other server endpoints will appear as tiered files on this server endpoint. These changes will only come down as full files if you opted for [proactive recall](file-sync-cloud-tiering-overview.md#proactive-recalling) of changes in the Azure file share by other server endpoints. * **Cloud tiering is disabled** </br> New and changed files from other server endpoints will appear as full files on this server endpoint. They will not appear as tiered files first and then recalled. Tiered files with cloud tiering off are a fast disaster recovery feature and appear only during initial provisioning.
storage File Sync Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot.md
description: Troubleshoot common issues in a deployment on Azure File Sync, whic
Previously updated : 11/2/2021 Last updated : 6/2/2022
Antivirus, backup, and other applications that read large numbers of files cause
Consult with your software vendor to learn how to configure their solution to skip reading offline files.
-Unintended recalls also might occur in other scenarios, like when you are browsing files in File Explorer. Opening a folder that has cloud-tiered files in File Explorer on the server might result in unintended recalls. This is even more likely if an antivirus solution is enabled on the server.
+Unintended recalls also might occur in other scenarios, like when you are browsing cloud-tiered files in File Explorer. This is likely to occur on Windows Server 2016 if the folder contains executable files. File Explorer was improved for Windows Server 2019 and later to better handle offline files.
> [!NOTE] >Use Event ID 9059 in the Telemetry event log to determine which application(s) is causing recalls. This event provides application recall distribution for a server endpoint and is logged once an hour.
stream-analytics Data Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/data-error-codes.md
Title: Data error codes - Azure Stream Analytics
-description: Troubleshoot Azure Stream Analytics issues with data error codes.
+description: Troubleshoot Azure Stream Analytics issues with data error codes, which occur when there's bad data in the stream.
Previously updated : 05/07/2020 Last updated : 05/25/2022 + # Azure Stream Analytics data error codes
-You can use activity logs and resource logs to help debug unexpected behaviors from your Azure Stream Analytics job. This article lists the description for every data error error code. Data errors occur when there is bad data in the stream, such as an unexpected record schema.
+You can use activity logs and resource logs to help debug unexpected behaviors from your Azure Stream Analytics job. This article lists the description for every data error code. Data errors occur when there's bad data in the stream, such as an unexpected record schema.
## InputDeserializationError
You can use activity logs and resource logs to help debug unexpected behaviors f
## InputEventTimestampNotFound
-* **Cause**: Stream Analytics is unable to get a timestamp for resource.
+* **Cause**: Stream Analytics is unable to get a timestamp for a resource.
## InputEventTimestampByOverValueNotFound
-* **Cause**: Stream Analytics is unable to get value of `TIMESTAMP BY OVER COLUMN`.
+* **Cause**: Stream Analytics is unable to get the value of `TIMESTAMP BY OVER COLUMN`.
## InputEventLateBeyondThreshold
You can use activity logs and resource logs to help debug unexpected behaviors f
## EventHubOutputRecordExceedsSizeLimit
-* **Cause**: An output record exceeds the maximum size limit when writing to Event Hub.
+* **Cause**: An output record exceeds the maximum size limit when writing to Azure Event Hubs.
## CosmosDBOutputInvalidId
You can use activity logs and resource logs to help debug unexpected behaviors f
## CosmosDBOutputMissingId
-* **Cause**: The output record doesn't contain the column \[id] to use as the primary key property.
+* **Cause**: The output record doesn't contain the column `[id]` to use as the primary key property.
## CosmosDBOutputMissingIdColumn * **Cause**: The output record doesn't contain the Document ID property.
-* **Recommendation**: Ensure the query output contains the column with a unique non-empty string less than '255' characters.
+* **Recommendation**: Ensure the query output contains the column with a unique non-empty string of no more than 255 characters.
## CosmosDBOutputMissingPartitionKey
-* **Cause**: The output record is missing the a column to use as the partition key property.
+* **Cause**: The output record is missing a column to use as the partition key property.
## CosmosDBOutputSingleRecordTooLarge
-* **Cause**: A single record write to Cosmos DB is too large.
+* **Cause**: A single record write to Azure Cosmos DB is too large.
## SQLDatabaseOutputDataError
-* **Cause**: Stream Analytics can't write event(s) to SQL Database due to issues in the data.
+* **Cause**: Stream Analytics can't write event(s) to Azure SQL Database due to issues in the data.
## Next steps
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/1-design-performance-migration.md
Title: "Design and performance for Netezza migrations"
-description: Learn how Netezza and Azure Synapse Analytics SQL databases differ in their approach to high query performance on exceptionally large data volumes.
+description: Learn how Netezza and Azure Synapse SQL databases differ in their approach to high query performance on exceptionally large data volumes.
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Design and performance for Netezza migrations
Due to end of support from IBM, many existing users of Netezza data warehouse sy
Although Netezza and Azure Synapse Analytics are both SQL databases designed to use massively parallel processing (MPP) techniques to achieve high query performance on exceptionally large data volumes, there are some basic differences in approach: -- Legacy Netezza systems are often installed on-premises and use proprietary hardware, while Azure Synapse is cloud based and uses Azure storage and compute resources.
+- Legacy Netezza systems are often installed on-premises and use proprietary hardware, while Azure Synapse is cloud-based and uses Azure storage and compute resources.
- Upgrading a Netezza configuration is a major task involving additional physical hardware and potentially lengthy database reconfiguration, or dump and reload. Since storage and compute resources are separate in the Azure environment, these resources can be scaled upwards or downwards independently, leveraging the elastic scaling capability. - Azure Synapse can be paused or resized as required to reduce resource utilization and cost.
-Microsoft Azure is a globally available, highly secure, scalable cloud environment, that includes Azure Synapse and an ecosystem of supporting tools and capabilities. The next diagram summarizes the Azure Synapse ecosystem.
+Microsoft Azure is a globally available, highly secure, scalable cloud environment that includes Azure Synapse and an ecosystem of supporting tools and capabilities. The next diagram summarizes the Azure Synapse ecosystem.
:::image type="content" source="../media/1-design-performance-migration/azure-synapse-ecosystem.png" border="true" alt-text="Chart showing the Azure Synapse ecosystem of supporting tools and capabilities.":::
Legacy Netezza environments have typically evolved over time to encompass multip
- Create a template for further migrations specific to the source Netezza environment and the current tools and processes that are already in place.
-A good candidate for an initial migration from the Netezza environment that would enable the items above, is typically one that implements a BI/Analytics workload, rather than an online transaction processing (OLTP) workload, with a data model that can be migrated with minimal modifications&mdash;normally a star or snowflake schema.
+A good candidate for an initial migration from the Netezza environment that would enable the preceding items is typically one that implements a BI/Analytics workload, rather than an online transaction processing (OLTP) workload, with a data model that can be migrated with minimal modification, normally a star or snowflake schema.
-The migration data volume for the initial exercise should be large enough to demonstrate the capabilities and benefits of the Azure Synapse environment while quickly demonstrating the value&mdash;typically in the 1-10TB range.
+The migration data volume for the initial exercise should be large enough to demonstrate the capabilities and benefits of the Azure Synapse environment while quickly demonstrating the value&mdash;typically in the 1-10 TB range.
To minimize the risk and reduce implementation time for the initial migration project, confine the scope of the migration to just the data marts. However, this won't address the broader topics such as ETL migration and historical data migration as part of the initial migration project. Address these topics in later phases of the project, once the migrated data mart layer is backfilled with the data and processes required to build them. #### Lift and shift as-is versus a phased approach incorporating changes > [!TIP]
-> 'Lift and shift' is a good starting point, even if subsequent phases will implement changes to the data model.
+> "Lift and shift" is a good starting point, even if subsequent phases will implement changes to the data model.
Whatever the drive and scope of the intended migration, there are&mdash;broadly speaking&mdash;two types of migration:
In a Netezza environment, there are often multiple separate databases for indivi
> [!TIP] > Replace Netezza-specific features with Azure Synapse features.
-Querying within the Azure Synapse environment is limited to a single database. Schemas are used to separate the tables into logically separate groups. Therefore, we recommend using a series of schemas within the target Azure Synapse to mimic any separate databases migrated from the Netezza environment. If the Netezza environment already uses schemas, you may need to use a new naming convention to move the existing Netezza tables and views to the new environment&mdash;for example, concatenate the existing Netezza schema and table names into the new Azure Synapse table name and use schema names in the new environment to maintain the original separate database names. Schema consolidation naming can have dots&mdash;however, Azure Synapse Spark may have issues. You can use SQL views over the underlying tables to maintain the logical structures, but there are some potential downsides to this approach:
+Querying within the Azure Synapse environment is limited to a single database. Schemas are used to separate the tables into logically separate groups. Therefore, we recommend using a series of schemas within the target Azure Synapse database to mimic any separate databases migrated from the Netezza environment. If the Netezza environment already uses schemas, you may need to use a new naming convention to move the existing Netezza tables and views to the new environment&mdash;for example, concatenate the existing Netezza schema and table names into the new Azure Synapse table name and use schema names in the new environment to maintain the original separate database names. Schema consolidation naming can have dots&mdash;however, Azure Synapse Spark may have issues. You can use SQL views over the underlying tables to maintain the logical structures, but there are some potential downsides to this approach:
- Views in Azure Synapse are read-only, so any updates to the data must take place on the underlying base tables.
Querying within the Azure Synapse environment is limited to a single database. S
When migrating tables between different technologies, only the raw data and the metadata that describes it gets physically moved between the two environments. Other database elements from the source system&mdash;such as indexes&mdash;aren't migrated as these may not be needed or may be implemented differently within the new target environment.
-However, it's important to understand where performance optimizations such as indexes have been used in the source environment, as this can indicate where to add performance optimization in the new target environment. For example, if queries in the source Netezza environment frequently use zone maps, it may indicate that a non-clustered index should be created within the migrated Azure Synapse. Other native performance optimization techniques (such as table replication) may be more applicable that a straight 'like for like' index creation.
+However, it's important to understand where performance optimizations such as indexes have been used in the source environment, as this can indicate where to add performance optimization in the new target environment. For example, if queries in the source Netezza environment frequently use zone maps, it may indicate that a non-clustered index should be created within the migrated Azure Synapse database. Other native performance optimization techniques, such as table replication, may be more applicable than a straight "like-for-like" index creation.
#### Unsupported Netezza database object types
However, it's important to understand where performance optimizations such as in
Netezza implements some database objects that aren't directly supported in Azure Synapse, but there are methods to achieve the same functionality within the new environment: -- Zone Maps: In Netezza, zone maps are automatically created and maintained for some column types and are used at query time to restrict the amount of data to be scanned. Zone Maps are created on the following column types:
+- Zone maps: in Netezza, zone maps are automatically created and maintained for some column types and are used at query time to restrict the amount of data to be scanned. Zone maps are created on the following column types:
- `INTEGER` columns of length 8 bytes or less. - Temporal columns. For instance, `DATE`, `TIME`, and `TIMESTAMP`. - `CHAR` columns, if these are part of a materialized view and mentioned in the `ORDER BY` clause. You can find out which columns have zone maps by using the `nz_zonemap` utility, which is part of the NZ Toolkit. Azure Synapse doesn't include zone maps, but you can achieve similar results by using other user-defined index types and/or partitioning. -- Clustered Base tables (CBT): In Netezza, CBTs are commonly used for fact tables, which can have billions of records. Scanning such a huge table requires a lot of processing time, since a full table scan might be needed to get relevant records. Organizing records on restrictive CBT via allows Netezza to group records in same or nearby extents. This process also creates zone maps that improve the performance by reducing the amount of data to be scanned.
+- Clustered base tables (CBT): in Netezza, CBTs are commonly used for fact tables, which can have billions of records. Scanning such a huge table requires a lot of processing time, since a full table scan might be needed to get relevant records. Organizing records on restrictive CBT allows Netezza to group records in the same or nearby extents. This process also creates zone maps that improve performance by reducing the amount of data to be scanned.
In Azure Synapse, you can achieve a similar effect by use of partitioning and/or use of other indexes. -- Materialized views: Netezza supports materialized views and recommends creating one or more of these over large tables having many columns where only a few of those columns are regularly used in queries. The system automatically maintains materialized views when data in the base table is updated.
+- Materialized views: Netezza supports materialized views and recommends creating one or more of these over large tables having many columns where only a few of those columns are regularly used in queries. The system automatically maintains materialized views when data in the base table is updated.
Azure Synapse supports materialized views, with the same functionality as Netezza.
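As an illustration of the equivalent object in Azure Synapse, the following is a minimal sketch of a materialized view in a dedicated SQL pool; the table and column names are hypothetical.

```sql
-- Minimal sketch: a materialized view in an Azure Synapse dedicated SQL pool (hypothetical names).
-- Like a Netezza materialized view, it's maintained automatically as the base table changes.
CREATE MATERIALIZED VIEW dbo.mvSalesByRegion
WITH (DISTRIBUTION = HASH(RegionKey))
AS
SELECT RegionKey,
       COUNT_BIG(*)                AS SalesCount,
       SUM(ISNULL(SalesAmount, 0)) AS TotalSalesAmount
FROM dbo.FactSales
GROUP BY RegionKey;
```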
Netezza implements some database objects that aren't directly supported in Azure
Most Netezza data types have a direct equivalent in Azure Synapse. This table shows these data types together with the recommended approach for handling them.
-| Netezza Data Type | Azure Synapse Data Type |
+| Netezza Data Type | Azure Synapse Data Type |
|--|-| | BIGINT | BIGINT | | BINARY VARYING(n) | VARBINARY(n) |
Most Netezza data types have a direct equivalent in Azure Synapse. This table sh
| BYTEINT | TINYINT | | CHARACTER VARYING(n) | VARCHAR(n) | | CHARACTER(n) | CHAR(n) |
-| DATE | DATE(DATE |
+| DATE | DATE |
| DECIMAL(p,s) | DECIMAL(p,s) | | DOUBLE PRECISION | FLOAT | | FLOAT(n) | FLOAT(n) | | INTEGER | INT |
-| INTERVAL | INTERVAL data types aren't currently directly supported in Azure Synapse but can be calculated using temporal functions such as DATEDIFF |
+| INTERVAL | INTERVAL data types aren't currently directly supported in Azure Synapse, but can be calculated using temporal functions such as DATEDIFF. |
| MONEY | MONEY | | NATIONAL CHARACTER VARYING(n) | NVARCHAR(n) | | NATIONAL CHARACTER(n) | NCHAR(n) | | NUMERIC(p,s) | NUMERIC(p,s) | | REAL | REAL | | SMALLINT | SMALLINT |
-| ST_GEOMETRY(n) | Spatial data types such as ST_GEOMETRY aren't currently supported in Azure Synapse, but the data could be stored as VARCHAR or VARBINARY |
+| ST_GEOMETRY(n) | Spatial data types such as ST_GEOMETRY aren't currently supported in Azure Synapse, but the data could be stored as VARCHAR or VARBINARY. |
| TIME | TIME | | TIME WITH TIME ZONE | DATETIMEOFFSET | | TIMESTAMP | DATETIME |
There are third-party vendors who offer tools and services to automate migration
There are a few differences in SQL Data Manipulation Language (DML) syntax between Netezza SQL and Azure Synapse (T-SQL) that you should be aware of during migration: -- `STRPOS`: In Netezza, the `STRPOS` function returns the position of a substring within a string. The equivalent function in Azure Synapse is `CHARINDEX`, with the order of the arguments reversed. For example, `SELECT STRPOS('abcdef','def')...` in Netezza is equivalent to `SELECT CHARINDEX('def','abcdef')...` in Azure Synapse.
+- `STRPOS`: in Netezza, the `STRPOS` function returns the position of a substring within a string. The equivalent function in Azure Synapse is `CHARINDEX`, with the order of the arguments reversed. For example, `SELECT STRPOS('abcdef','def')...` in Netezza is equivalent to `SELECT CHARINDEX('def','abcdef')...` in Azure Synapse.
- `AGE`: Netezza supports the `AGE` operator to give the interval between two temporal values, such as timestamps or dates. For example, `SELECT AGE('26-03-1956','01-01-2019') FROM...`. In Azure Synapse, `DATEDIFF` gives the interval. For example, `SELECT DATEDIFF(day, '1956-03-26','2019-01-01') FROM...`. Note the date representation sequence.
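The following sketch consolidates both conversions into runnable T-SQL; the literal values are illustrative only.

```sql
-- Netezza: SELECT STRPOS('abcdef','def') ...
-- Azure Synapse: CHARINDEX takes the substring first, then the string to search.
SELECT CHARINDEX('def', 'abcdef') AS substring_position;           -- returns 4

-- Netezza: SELECT AGE('26-03-1956','01-01-2019') ...
-- Azure Synapse: DATEDIFF returns the interval in the unit named by the first argument.
SELECT DATEDIFF(day, '1956-03-26', '2019-01-01') AS interval_days;
```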
If sufficient network bandwidth is available, extract data directly from an on-p
Recommended data formats for the extracted data include delimited text files (also called Comma Separated Values or CSV), Optimized Row Columnar (ORC), or Parquet files.
-For more information about the process of migrating data and ETL from a Netezza environment, see [Data migration, ETL, and load for Netezza migration](1-design-performance-migration.md).
+For more information about the process of migrating data and ETL from a Netezza environment, see [Data migration, ETL, and load for Netezza migrations](2-etl-load-migration-considerations.md).
## Performance recommendations for Netezza migrations
This article provides general information and guidelines about use of performanc
When moving from a Netezza environment, many of the performance tuning concepts for Azure Data Warehouse will be remarkably familiar. For example: -- Using data distribution to co-locate data to be joined onto the same processing node
+- Using data distribution to collocate data to be joined onto the same processing node.
-- Using the smallest data type for a given column will save storage space and accelerate query processing
+- Using the smallest data type for a given column will save storage space and accelerate query processing.
-- Ensuring data types of columns to be joined are identical will optimize join processing by reducing the need to transform data for matching
+- Ensuring data types of columns to be joined are identical will optimize join processing by reducing the need to transform data for matching.
-- Ensuring statistics are up to date will help the optimizer produce the best execution plan
+- Ensuring statistics are up to date will help the optimizer produce the best execution plan.
### Differences in performance tuning approach
This section highlights lower-level implementation differences between Netezza a
`CREATE TABLE` statements in both Netezza and Azure Synapse allow for specification of a distribution definition&mdash;via `DISTRIBUTE ON` in Netezza, and `DISTRIBUTION =` in Azure Synapse.
-Compared to Netezza, Azure Synapse provides an additional way to achieve local joins for small table-large table joins (typically dimension table to fact table in a start schema model) is to replicate the smaller dimension table across all nodes. This ensures that any value of the join key of the larger table will have a matching dimension row locally available. The overhead of replicating the dimension tables is relatively low, provided the tables aren't very large (see [Design guidance for replicated tables](../../sql-data-warehouse/design-guidance-for-replicated-tables.md))&mdash;in which case, the hash distribution approach as described previously is more appropriate. For more information, see [Distributed tables design](../../sql-data-warehouse/sql-data-warehouse-tables-distribute.md).
+Compared to Netezza, Azure Synapse provides an additional way to achieve local joins for small table-large table joins (typically dimension table to fact table in a star schema model), which is to replicate the smaller dimension table across all nodes. This ensures that any value of the join key of the larger table will have a matching dimension row locally available. The overhead of replicating the dimension tables is relatively low, provided the tables aren't very large (see [Design guidance for replicated tables](../../sql-data-warehouse/design-guidance-for-replicated-tables.md))&mdash;in which case, the hash distribution approach as described previously is more appropriate. For more information, see [Distributed tables design](../../sql-data-warehouse/sql-data-warehouse-tables-distribute.md).
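As a sketch of these two approaches, with hypothetical table and column names, a large fact table can be hash-distributed on its join key while a small dimension table is replicated:

```sql
-- Hash-distribute the large fact table on the join key so matching rows are co-located.
CREATE TABLE dbo.FactSales
(
    SaleId      BIGINT        NOT NULL,
    CustomerKey INT           NOT NULL,
    SaleAmount  DECIMAL(18,2) NOT NULL
)
WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX);

-- Replicate the small dimension table to every compute node to avoid data movement during joins.
CREATE TABLE dbo.DimCustomer
(
    CustomerKey  INT           NOT NULL,
    CustomerName NVARCHAR(200) NOT NULL
)
WITH (DISTRIBUTION = REPLICATE, CLUSTERED COLUMNSTORE INDEX);
```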
#### Data indexing
-Azure Synapse provides several user-definable indexing options, but these are different from the system managed zone maps in Netezza. To understand the different indexing options, see [table indexes](/azure/sql-data-warehouse/sql-data-warehouse-tables-index).
+Azure Synapse provides several user-definable indexing options, but these are different from the system-managed zone maps in Netezza. For more information about the different indexing options, see [table indexes](/azure/sql-data-warehouse/sql-data-warehouse-tables-index).
-The existing system managed zone maps within the source Netezza environment can indicate how the data is currently used. They can identify candidate columns for indexing within the Azure Synapse environment.
+The existing system-managed zone maps within the source Netezza environment can indicate how the data is currently used. They can identify candidate columns for indexing within the Azure Synapse environment.
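For example, if zone maps show that queries frequently restrict scans on a date column, a secondary index on that column might be appropriate in Azure Synapse. A minimal sketch, assuming a hypothetical fact table and column:

```sql
-- Create a nonclustered index on a column that Netezza zone maps show is used to restrict scans.
CREATE INDEX ix_FactSales_OrderDateKey
ON dbo.FactSales (OrderDateKey);
```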
#### Data partitioning In an enterprise data warehouse, fact tables can contain many billions of rows. Partitioning optimizes the maintenance and querying of these tables by splitting them into separate parts to reduce the amount of data processed. The `CREATE TABLE` statement defines the partitioning specification for a table.
-Only one field per table can be used for partitioning. This is frequently a date field since many queries are filtered by date or a date range. It's possible to change the partitioning of a table after initial load by recreating the table with the new distribution using the `CREATE TABLE AS` (or CTAS) statement. See [table partitions](/azure/sql-data-warehouse/sql-data-warehouse-tables-partition) for a detailed discussion of partitioning in Azure Synapse.
+Only one field per table can be used for partitioning. That field is frequently a date field since many queries are filtered by date or a date range. It's possible to change the partitioning of a table after initial load by recreating the table with the new distribution using the `CREATE TABLE AS` (or CTAS) statement. See [table partitions](/azure/sql-data-warehouse/sql-data-warehouse-tables-partition) for a detailed discussion of partitioning in Azure Synapse.
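A sketch of repartitioning via CTAS follows, with hypothetical table names and boundary values; the original table is then swapped out by renaming.

```sql
-- Recreate the table with monthly partitions on the date key.
CREATE TABLE dbo.FactSales_Partitioned
WITH
(
    DISTRIBUTION = HASH(CustomerKey),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION (OrderDateKey RANGE RIGHT FOR VALUES (20220101, 20220201, 20220301))
)
AS
SELECT * FROM dbo.FactSales;

-- Swap the new table in for the original.
RENAME OBJECT dbo.FactSales TO FactSales_Old;
RENAME OBJECT dbo.FactSales_Partitioned TO FactSales;
```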
#### Data table statistics
Use [workload management](../../sql-data-warehouse/sql-data-warehouse-workload-m
## Next steps
-To learn more about ETL and load for Netezza migration, see the next article in this series: [Data migration, ETL, and load for Netezza migration](2-etl-load-migration-considerations.md).
+To learn more about ETL and load for Netezza migration, see the next article in this series: [Data migration, ETL, and load for Netezza migrations](2-etl-load-migration-considerations.md).
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/2-etl-load-migration-considerations.md
Title: "Data migration, ETL, and load for Netezza migration"
+ Title: "Data migration, ETL, and load for Netezza migrations"
description: Learn how to plan your data migration from Netezza to Azure Synapse Analytics to minimize the risk and impact on users.
Previously updated : 05/24/2022 Last updated : 05/31/2022
-# Data migration, ETL, and load for Netezza migration
+# Data migration, ETL, and load for Netezza migrations
This article is part two of a seven part series that provides guidance on how to migrate from Netezza to Azure Synapse Analytics. This article provides best practices for ETL and load migration.
This article is part two of a seven part series that provides guidance on how to
When migrating a Netezza data warehouse, you need to ask some basic data-related questions. For example: -- Should unused table structures be migrated or not?
+- Should unused table structures be migrated?
-- What's the best migration approach to minimize risk and impact for users?
+- What's the best migration approach to minimize risk and user impact?
-- Migrating data marts&mdash;stay physical or go virtual?
+- When migrating data marts: stay physical or go virtual?
The next sections discuss these points within the context of migration from Netezza.
Even if a data model change is an intended part of the overall migration, it's g
When migrating from Netezza, often the existing data model is already suitable for as-is migration to Azure Synapse.
-#### Migrate data marts - stay physical or go virtual?
+#### Migrate data marts: stay physical or go virtual?
> [!TIP] > Virtualizing data marts can save on storage and processing resources.
If these data marts are implemented as physical tables, they'll require addition
With the advent of relatively low-cost scalable MPP architectures, such as Azure Synapse, and the inherent performance characteristics of such architectures, it may be that you can provide data mart functionality without having to instantiate the mart as a set of physical tables. This is achieved by effectively virtualizing the data marts via SQL views onto the main data warehouse, or via a virtualization layer using features such as views in Azure or the [visualization products of Microsoft partners](../../partner/data-integration.md). This approach simplifies or eliminates the need for additional storage and aggregation processing and reduces the overall number of database objects to be migrated.
-There's another potential benefit to this approach: by implementing the aggregation and join logic within a virtualization layer, and presenting external reporting tools via a virtualized view, the processing required to create these views is 'pushed down' into the data warehouse, which is generally the best place to run joins, aggregations, and other related operations, on large data volumes.
+There's another potential benefit to this approach. By implementing the aggregation and join logic within a virtualization layer, and presenting external reporting tools via a virtualized view, the processing required to create these views is "pushed down" into the data warehouse, which is generally the best place to run joins, aggregations, and other related operations on large data volumes.
The primary drivers for choosing a virtual data mart implementation over a physical data mart are: -- More agility, since a virtual data mart is easier to change than physical tables and the associated ETL processes.
+- More agility: a virtual data mart is easier to change than physical tables and the associated ETL processes.
-- Lower total cost of ownership, since a virtualized implementation requires fewer data stores and copies of data.
+- Lower total cost of ownership: a virtualized implementation requires fewer data stores and copies of data.
- Elimination of ETL jobs to migrate and simplify data warehouse architecture in a virtualized environment. -- Performance, since although physical data marts have historically been more performant, virtualization products now implement intelligent caching techniques to mitigate.
+- Performance: although physical data marts have historically been more performant, virtualization products now implement intelligent caching techniques to mitigate this difference.
### Data migration from Netezza #### Understand your data
-Part of migration planning is understanding in detail the volume of data that needs to be migrated since that can impact decisions about the migration approach. Use system metadata to determine the physical space taken up by the 'raw data' within the tables to be migrated. In this context, 'raw data' means the amount of space used by the data rows within a table, excluding overheads such as indexes and compression. This is especially true for the largest fact tables since these will typically comprise more than 95% of the data.
+Part of migration planning is understanding in detail the volume of data that needs to be migrated since that can impact decisions about the migration approach. Use system metadata to determine the physical space taken up by the "raw data" within the tables to be migrated. In this context, "raw data" means the amount of space used by the data rows within a table, excluding overheads such as indexes and compression. This is especially true for the largest fact tables since these will typically comprise more than 95% of the data.
-Get an accurate number for the volume of data to be migrated for a given table by extracting a representative sample of the data&mdash;for example, one million rows&mdash;to an uncompressed delimited flat ASCII data file. Then, use the size of that file to get an average raw data size per row of that table. Finally, multiply that average size by the total number of rows in the full table to give a raw data size for the table. Use that raw data size in your planning.
+You can get an accurate number for the volume of data to be migrated for a given table by extracting a representative sample of the data&mdash;for example, one million rows&mdash;to an uncompressed delimited flat ASCII data file. Then, use the size of that file to get an average raw data size per row of that table. Finally, multiply that average size by the total number of rows in the full table to give a raw data size for the table. Use that raw data size in your planning.
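As a worked illustration with hypothetical figures, assume a one-million-row sample extracts to a 250 MB uncompressed file and the full table holds two billion rows; the estimate is then simple arithmetic, shown here as a query:

```sql
-- Hypothetical figures: 250 MB sample file, 1,000,000 sample rows, 2,000,000,000 total rows.
SELECT
    (250.0 * 1024 * 1024) / 1000000.0                                   AS avg_raw_bytes_per_row,  -- ~262 bytes
    ((250.0 * 1024 * 1024) / 1000000.0) * 2000000000 / POWER(1024.0, 3) AS estimated_raw_size_gb;  -- ~488 GB
```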
#### Netezza data type mapping
Most Netezza data types have a direct equivalent in Azure Synapse. The following
| BYTEINT | TINYINT | | CHARACTER VARYING(n) | VARCHAR(n) | | CHARACTER(n) | CHAR(n) |
-| DATE | DATE(DATE |
+| DATE | DATE |
| DECIMAL(p,s) | DECIMAL(p,s) | | DOUBLE PRECISION | FLOAT | | FLOAT(n) | FLOAT(n) | | INTEGER | INT |
-| INTERVAL | INTERVAL data types aren't currently directly supported in ASA but can be calculated using temporal functions, such as DATEDIFF |
+| INTERVAL | INTERVAL data types aren't currently directly supported in Azure Synapse Analytics, but can be calculated using temporal functions, such as DATEDIFF. |
| MONEY | MONEY | | NATIONAL CHARACTER VARYING(n) | NVARCHAR(n) | | NATIONAL CHARACTER(n) | NCHAR(n) | | NUMERIC(p,s) | NUMERIC(p,s) | | REAL | REAL | | SMALLINT | SMALLINT |
-| ST_GEOMETRY(n) | Spatial data types such as ST_GEOMETRY aren't currently supported in Azure Synapse Analytics, but the data could be stored as VARCHAR or VARBINARY |
+| ST_GEOMETRY(n) | Spatial data types such as ST_GEOMETRY aren't currently supported in Azure Synapse Analytics, but the data could be stored as VARCHAR or VARBINARY. |
| TIME | TIME | | TIME WITH TIME ZONE | DATETIMEOFFSET | | TIMESTAMP | DATETIME |
The following sections discuss migration options and make recommendations for va
:::image type="content" source="../media/2-etl-load-migration-considerations/migration-options-flowchart.png" border="true" alt-text="Flowchart of migration options and recommendations.":::
-The first step is always to build an inventory of ETL/ELT processes that need to be migrated. As with other steps, it's possible that the standard 'built-in' Azure features make it unnecessary to migrate some existing processes. For planning purposes, it's important to understand the scale of the migration to be performed.
+The first step is always to build an inventory of ETL/ELT processes that need to be migrated. As with other steps, it's possible that the standard "built-in" Azure features make it unnecessary to migrate some existing processes. For planning purposes, it's important to understand the scale of the migration to be performed.
In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](../../../data-factory/concepts-pipelines-activities.md?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use. > [!TIP] > Leverage investment in existing third-party tools to reduce cost and risk.
-If a third-party ETL tool is already in use, and especially if there's a large investment in skills or several existing workflows and schedules use that tool, then decision 3 is whether the tool can efficiently support Azure Synapse as a target environment. Ideally, the tool will include 'native' connectors that can leverage Azure facilities like PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql), for the most efficient parallel data loading. There's a way to call an external process, such as PolyBase or `COPY INTO`, and pass in the appropriate parameters. In this case, leverage existing skills and workflows, with Azure Synapse as the new target environment.
+If a third-party ETL tool is already in use, and especially if there's a large investment in skills or several existing workflows and schedules use that tool, then decision 3 is whether the tool can efficiently support Azure Synapse as a target environment. Ideally, the tool will include "native" connectors that can leverage Azure facilities like PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql), for the most efficient data loading. There's a way to call an external process, such as PolyBase or `COPY INTO`, and pass in the appropriate parameters. In this case, leverage existing skills and workflows, with Azure Synapse as the new target environment.
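For reference, the following is a minimal COPY INTO sketch; the storage account, container, and table names are hypothetical, and the credential options vary by environment.

```sql
-- Load CSV files that an ETL tool (or AzCopy) has landed in Azure Blob Storage into a staging table.
COPY INTO dbo.StageSales
FROM 'https://mystorageaccount.blob.core.windows.net/exports/sales/*.csv'
WITH
(
    FILE_TYPE = 'CSV',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '0x0A',
    FIRSTROW = 2,
    CREDENTIAL = (IDENTITY = 'Managed Identity')
);
```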
-If you decide to retain an existing third-party ETL tool, there may be benefits to running that tool within the Azure environment (rather than on an existing on-premises ETL server) and having Azure Data Factory handle the overall orchestration of the existing workflows. One particular benefit is that less data needs to be downloaded from Azure, processed, and then uploaded back into Azure. So, decision 4 is whether to leave the existing tool running as-is or to move it into the Azure environment to achieve cost, performance, and scalability benefits.
+If you decide to retain an existing third-party ETL tool, there may be benefits to running that tool within the Azure environment (rather than on an existing on-premises ETL server) and having Azure Data Factory handle the overall orchestration of the existing workflows. One particular benefit is that less data needs to be downloaded from Azure, processed, and then uploaded back into Azure. So, decision 4 is whether to leave the existing tool running as-is or move it into the Azure environment to achieve cost, performance, and scalability benefits.
### Re-engineer existing Netezza-specific scripts
If some or all the existing Netezza warehouse ETL/ELT processing is handled by c
> [!TIP] > The inventory of ETL tasks to be migrated should include scripts and stored procedures.
-Some elements of the ETL process are easy to migrate. For example, by simple bulk data load into a staging table from an external file. It may even be possible to automate those parts of the process, for example, by using PolyBase instead of nzload. Other parts of the process that contain arbitrary complex SQL and/or stored procedures will take more time to re-engineer.
+Some elements of the ETL process are easy to migrate, for example a simple bulk data load into a staging table from an external file. It may even be possible to automate those parts of the process, for example by using PolyBase instead of nzload. Other parts of the process that contain arbitrarily complex SQL and/or stored procedures will take more time to re-engineer.
-One way of testing Netezza SQL for compatibility with Azure Synapse is to capture some representative SQL statements from Netezza query history, then prefix those queries with `EXPLAIN`, and then (assuming a like-for-like migrated data model in Azure Synapse) run those EXPLAIN statements in Azure Synapse. Any incompatible SQL will generate an error, and the error information can determine the scale of the recoding task.
+One way of testing Netezza SQL for compatibility with Azure Synapse is to capture some representative SQL statements from Netezza query history, then prefix those queries with `EXPLAIN`, and then&mdash;assuming a like-for-like migrated data model in Azure Synapse&mdash;run those `EXPLAIN` statements in Azure Synapse. Any incompatible SQL will generate an error, and the error information can determine the scale of the recoding task.
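A sketch of this compatibility check, using a hypothetical captured query:

```sql
-- Prefix a representative query captured from Netezza query history with EXPLAIN.
-- If the statement is incompatible with Azure Synapse, the resulting error indicates what needs recoding.
EXPLAIN
SELECT c.Region, SUM(s.SaleAmount) AS TotalSales
FROM dbo.FactSales AS s
JOIN dbo.DimCustomer AS c ON s.CustomerKey = c.CustomerKey
GROUP BY c.Region;
```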
[Microsoft partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration) offer tools and services to migrate Netezza SQL and stored procedures to Azure Synapse.
As described in the previous section, in many cases the existing legacy data war
> [!TIP] > Third-party tools can simplify and automate the migration process and therefore reduce risk.
-When it comes to migrating data from a Netezza data warehouse, there are some basic questions associated with data loading that need to be resolved. You'll need to decide how the data will be physically moved from the existing on-premises Netezza environment into Azure Synapse in the cloud, and which tools will be used to perform the transfer and load. Consider the following questions, which is discussed in the next sections.
+When it comes to migrating data from a Netezza data warehouse, there are some basic questions associated with data loading that need to be resolved. You'll need to decide how the data will be physically moved from the existing on-premises Netezza environment into Azure Synapse in the cloud, and which tools will be used to perform the transfer and load. Consider the following questions, which are discussed in the next sections.
- Will you extract the data to files, or move it directly via a network connection?
When it comes to migrating data from a Netezza data warehouse, there are some ba
> [!TIP] > Understand the data volumes to be migrated and the available network bandwidth since these factors influence the migration approach decision.
-Once the database tables to be migrated have been created in Azure Synapse, you can move the data to populate those tables out of the legacy Netezza system and loaded into the new environment. There are two basic approaches:
+Once the database tables to be migrated have been created in Azure Synapse, you can move the data that will populate those tables out of the legacy Netezza system and into the new environment. There are two basic approaches:
-- **File extract**: Extract the data from the Netezza tables to flat files, normally in CSV format, via nzsql with the -o option or via the `CREATE EXTERNAL TABLE` statement. Use an external table whenever possible since it's the most efficient in terms of data throughput. The following SQL example, creates a CSV file via an external table:
+- **File extract**: extract the data from the Netezza tables to flat files, normally in CSV format, via nzsql with the -o option or via the `CREATE EXTERNAL TABLE` statement. Use an external table whenever possible since it's the most efficient in terms of data throughput. The following SQL example creates a CSV file via an external table:
    ```sql
    CREATE EXTERNAL TABLE '/data/export.csv'
    USING (delimiter ',')
    AS SELECT col1, col2, expr1, expr2, col3, col1 || col2
    FROM your_table;
    ```
- Use an external table if you're exporting data to a mounted file system on a local Netezza host. If you're exporting data to a remote machine that has JDBC, ODBC, or OLEDB installed, then your 'remotesource odbc' option is the `USING` clause.
+ Use an external table if you're exporting data to a mounted file system on a local Netezza host. If you're exporting data to a remote machine that has JDBC, ODBC, or OLE DB installed, then use the "remotesource odbc" option in the `USING` clause.
This approach requires space to land the extracted data files. The space could be local to the Netezza source database (if sufficient storage is available), or remote in Azure Blob Storage. The best performance is achieved when a file is written locally, since that avoids network overhead. To minimize the storage and network transfer requirements, it's good practice to compress the extracted data files using a utility like gzip.
- Once extracted, the flat files can either be moved into Azure Blob Storage (co-located with the target Azure Synapse instance), or loaded directly into Azure Synapse using PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql). The method for physically moving data from local on-premises storage to the Azure cloud environment depends on the amount of data and the available network bandwidth.
+ Once extracted, the flat files can either be moved into Azure Blob Storage (collocated with the target Azure Synapse instance), or loaded directly into Azure Synapse using PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql). The method for physically moving data from local on-premises storage to the Azure cloud environment depends on the amount of data and the available network bandwidth.
Microsoft provides various options to move large volumes of data, including AzCopy for moving files across the network into Azure Storage, Azure ExpressRoute for moving bulk data over a private network connection, and Azure Data Box for moving files to a physical storage device that's then shipped to an Azure data center for loading. For more information, see [data transfer](/azure/architecture/data-guide/scenarios/data-transfer). -- **Direct extract and load across network**: The target Azure environment sends a data extract request, normally via a SQL command, to the legacy Netezza system to extract the data. The results are sent across the network and loaded directly into Azure Synapse, with no need to land the data into intermediate files. The limiting factor in this scenario is normally the bandwidth of the network connection between the Netezza database and the Azure environment. For very large data volumes, this approach may not be practical.
+- **Direct extract and load across network**: the target Azure environment sends a data extract request, normally via a SQL command, to the legacy Netezza system to extract the data. The results are sent across the network and loaded directly into Azure Synapse, with no need to land the data into intermediate files. The limiting factor in this scenario is normally the bandwidth of the network connection between the Netezza database and the Azure environment. For very large data volumes, this approach may not be practical.
There's also a hybrid approach that uses both methods. For example, you can use the direct network extract approach for smaller dimension tables and samples of the larger fact tables to quickly provide a test environment in Azure Synapse. For large volume historical fact tables, you can use the file extract and transfer approach using Azure Data Box.
To summarize, our recommendations for migrating data and associated ETL processe
- Understand the data volumes to be migrated, and the network bandwidth between the on-premises data center and Azure cloud environments. -- Leverage standard 'built-in' Azure features when appropriate, to minimize the migration workload.
+- Leverage standard "built-in" Azure features to minimize the migration workload.
-- Identify and understand the most efficient tools for data extract and load in both Netezza and Azure environments. Use the appropriate tools in each phase in the process.
+- Identify and understand the most efficient tools for data extraction and loading in both Netezza and Azure environments. Use the appropriate tools in each phase in the process.
- Use Azure facilities, such as [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779), to orchestrate and automate the migration process while minimizing impact on the Netezza system.
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/3-security-access-operations.md
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Security, access, and operations for Netezza migrations
This article is part three of a seven part series that provides guidance on how
## Security considerations
-This article discusses the methods of connection for existing legacy Netezza environments and how they can be migrated to Azure Synapse Analytics with minimal risk and user impact.
+This article discusses connection methods for existing legacy Netezza environments and how they can be migrated to Azure Synapse Analytics with minimal risk and user impact.
-We assume there's a requirement to migrate the existing methods of connection and user, role, and permission structure as is. If this isn't the case, then you can use Azure utilities from the Azure portal to create and manage a new security regime.
+This article assumes that there's a requirement to migrate the existing methods of connection and user/role/permission structure as-is. If not, use the Azure portal to create and manage a new security regime.
For more information on the [Azure Synapse security](../../sql-data-warehouse/sql-data-warehouse-overview-manage-security.md#authorization) options, see [Security whitepaper](../../guidance/security-white-paper-introduction.md).
For more information on the [Azure Synapse security](../../sql-data-warehouse/sq
#### Netezza authorization options
-The IBM&reg; Netezza&reg; system offers several authentication methods for Netezza database users:
+The IBM Netezza system offers several authentication methods for Netezza database users:
- **Local authentication**: Netezza administrators define database users and their passwords by using the `CREATE USER` command or through Netezza administrative interfaces. In local authentication, use the Netezza system to manage database accounts and passwords, and to add and remove database users from the system. This method is the default authentication method. -- **LDAP authentication**: Use an LDAP name server to authenticate database users, manage passwords, database account activations, and deactivations. The Netezza system uses a Pluggable Authentication Module (PAM) to authenticate users on the LDAP name server. Microsoft Active Directory conforms to the LDAP protocol, so it can be treated like an LDAP server for the purposes of LDAP authentication.
+- **LDAP authentication**: use an LDAP name server to authenticate database users, and manage passwords, database account activations, and deactivations. The Netezza system uses a Pluggable Authentication Module (PAM) to authenticate users on the LDAP name server. Microsoft Active Directory conforms to the LDAP protocol, so it can be treated like an LDAP server for the purposes of LDAP authentication.
-- **Kerberos authentication**: Use a Kerberos distribution server to authenticate database users, manage passwords, database account activations, and deactivations.
+- **Kerberos authentication**: use a Kerberos distribution server to authenticate database users, and manage passwords, database account activations, and deactivations.
Authentication is a system-wide setting. Users must be either locally authenticated or authenticated by using the LDAP or Kerberos method. If you choose LDAP or Kerberos authentication, create users with local authentication on a per-user basis. LDAP and Kerberos can't be used at the same time to authenticate users. Netezza host supports LDAP or Kerberos authentication for database user logins only, not for operating system logins on the host.
Azure Synapse supports two basic options for connection and authorization:
- **SQL authentication**: SQL authentication is via a database connection that includes a database identifier, user ID, and password plus other optional parameters. This is functionally equivalent to Netezza local connections. -- **Azure Active Directory (Azure AD) authentication**: With Azure Active Directory authentication, you can centrally manage the identities of database users and other Microsoft services in one central location. Central ID management provides a single place to manage SQL Data Warehouse users and simplifies permission management. Azure AD can also support connections to LDAP and Kerberos services&mdash;for example, Azure AD can be used to connect to existing LDAP directories if these are to remain in place after migration of the database.
+- **Azure Active Directory (Azure AD) authentication**: with Azure AD authentication, you can centrally manage the identities of database users and other Microsoft services in one central location. Central ID management provides a single place to manage Azure Synapse users and simplifies permission management. Azure AD can also support connections to LDAP and Kerberos services&mdash;for example, Azure AD can be used to connect to existing LDAP directories if these are to remain in place after migration of the database.
### Users, roles, and permissions
Azure Synapse supports two basic options for connection and authorization:
> [!TIP] > High-level planning is essential for a successful migration project.
-Both Netezza and Azure Synapse implement database access control via a combination of users, roles (groups in Netezza), and permissions. Both use standard `SQL CREATE USER` and `CREATE ROLE/GROUP` statements to define users and roles, and `GRANT` and `REVOKE` statements to assign or remove permissions to those users and/or roles.
+Both Netezza and Azure Synapse implement database access control via a combination of users, roles (groups in Netezza), and permissions. Both use standard SQL `CREATE USER` and `CREATE ROLE/GROUP` statements to define users and roles, and `GRANT` and `REVOKE` statements to assign or remove permissions to those users and/or roles.
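As a sketch of the equivalent definitions on the Azure Synapse side, assuming hypothetical names and an illustrative Azure AD account:

```sql
-- SQL authentication: the login is created first in the master database (not shown here),
-- then mapped to a database user in the dedicated SQL pool.
CREATE USER etl_service FOR LOGIN etl_service;

-- Azure AD authentication: create a user mapped to an Azure AD identity.
CREATE USER [analyst@contoso.com] FROM EXTERNAL PROVIDER;

-- Recreate a Netezza group as a database role, grant it permissions, and add members.
CREATE ROLE bi_readers;
GRANT SELECT ON SCHEMA::dbo TO bi_readers;
ALTER ROLE bi_readers ADD MEMBER etl_service;
ALTER ROLE bi_readers ADD MEMBER [analyst@contoso.com];
```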
> [!TIP] > Automation of migration processes is recommended to reduce elapsed time and scope for errors.
See the following sections for more details.
> [!TIP] > Migration of a data warehouse requires more than just tables, views, and SQL statements.
-The information about current users and groups in a Netezza system is held in system catalog views `_v_users` and `_v_groupusers`. Use the nzsql utility or tools such as the Netezza&reg; Performance, NzAdmin, or the Netezza Utility scripts to list user privileges. For example, use the `dpu` and `dpgu` commands in nzsql to display users or groups with their permissions.
+The information about current users and groups in a Netezza system is held in system catalog views `_v_users` and `_v_groupusers`. Use the nzsql utility or tools such as the Netezza Performance Portal, NzAdmin, or the Netezza utility scripts to list user privileges. For example, use the `dpu` and `dpgu` commands in nzsql to display users or groups with their permissions.
Use or edit the utility scripts `nz_get_users` and `nz_get_user_groups` to retrieve the same information in the required format.
nz_ddl_grant_user -usrobj dbname > output_file_dbname;
The output file can be modified to produce a script that is a series of `GRANT` statements for Azure Synapse.
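For instance, a grant captured in the output file might be rewritten as the following T-SQL; the object and group names are illustrative.

```sql
-- Netezza (illustrative): GRANT SELECT, INSERT ON SALES TO BI_USERS;
-- Azure Synapse equivalent, assuming BI_USERS has been recreated as a database role:
GRANT SELECT, INSERT ON OBJECT::dbo.SALES TO BI_USERS;
```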
-Netezza supports two classes of access rights,&mdash;Admin and Object. See the following table for a list of Netezza access rights and their equivalent in Azure Synapse.
+Netezza supports two classes of access rights, Admin and Object. See the following tables for a list of Netezza access rights and their equivalent in Azure Synapse.
| Admin Privilege | Description | Azure Synapse Equivalent | |-|-|--|
-| Backup | Allows user to create backups. The user can run backups. The user can run the command `nzbackup`. | \* |
-| [Create] Aggregate | Allows the user to create user-defined aggregates (UDAs). Permission to operate on existing UDAs is controlled by object privileges. | CREATE FUNCTION \*\*\* |
+| Backup | Allows user to create backups. The user can run backups. The user can run the command `nzbackup`. | <sup>1</sup> |
+| [Create] Aggregate | Allows the user to create user-defined aggregates (UDAs). Permission to operate on existing UDAs is controlled by object privileges. | CREATE FUNCTION <sup>3</sup> |
| [Create] Database | Allows the user to create databases. Permission to operate on existing databases is controlled by object privileges. | CREATE DATABASE | | [Create] External Table | Allows the user to create external tables. Permission to operate on existing tables is controlled by object privileges. | CREATE TABLE | | [Create] Function | Allows the user to create user-defined functions (UDFs). Permission to operate on existing UDFs is controlled by object privileges. | CREATE FUNCTION | | [Create] Group | Allows the user to create groups. Permission to operate on existing groups is controlled by object privileges. | CREATE ROLE | | [Create] Index | For system use only. Users can't create indexes. | CREATE INDEX |
-| [Create] Library | Allows the user to create shared libraries. Permission to operate on existing shared libraries is controlled by object privileges. | \* |
+| [Create] Library | Allows the user to create shared libraries. Permission to operate on existing shared libraries is controlled by object privileges. | <sup>1</sup> |
| [Create] Materialized View | Allows the user to create materialized views. | CREATE VIEW | | [Create] Procedure | Allows the user to create stored procedures. Permission to operate on existing stored procedures is controlled by object privileges. | CREATE PROCEDURE | | [Create] Schema | Allows the user to create schemas. Permission to operate on existing schemas is controlled by object privileges. | CREATE SCHEMA |
-| [Create] Sequence | Allows the user to create database sequences. | \* |
+| [Create] Sequence | Allows the user to create database sequences. | <sup>1</sup> |
| [Create] Synonym | Allows the user to create synonyms. | CREATE SYNONYM | | [Create] Table | Allows the user to create tables. Permission to operate on existing tables is controlled by object privileges. | CREATE TABLE | | [Create] Temp Table | Allows the user to create temporary tables. Permission to operate on existing tables is controlled by object privileges. | CREATE TABLE | | [Create] User | Allows the user to create users. Permission to operate on existing users is controlled by object privileges. | CREATE USER | | [Create] View | Allows the user to create views. Permission to operate on existing views is controlled by object privileges. | CREATE VIEW |
-| [Manage Hardware | Allows the user to do the following hardware-related operations: view hardware status, manage SPUs, manage topology and mirroring, and run diagnostic tests. The user can run these commands: nzhw and nzds. | \*\*\*\* |
-| [Manage Security | Allows the user to run commands and operations that relate to the following advanced security options such as: managing and configuring history databases, managing multi- level security objects, and specifying security for users and groups, managing database key stores and keys and key stores for the digital signing of audit data. | \*\*\*\* |
-| [Manage System | Allows the user to do the following management operations: start/stop/pause/resume the system, abort sessions, view the distribution map, system statistics, and logs. The user can use these commands: nzsystem, nzstate, nzstats, and nzsession. | \*\*\*\* |
-| Restore | Allows the user to restore the system. The user can run the nzrestore command. | \*\* |
-| Unfence | Allows the user to create or alter a user-defined function or aggregate to run in unfenced mode. | \* |
+| Manage Hardware | Allows the user to do the following hardware-related operations: view hardware status, manage SPUs, manage topology and mirroring, and run diagnostic tests. The user can run these commands: nzhw and nzds. | <sup>4</sup> |
+| Manage Security | Allows the user to run commands and operations that relate to advanced security options such as managing and configuring history databases, managing multi-level security objects, specifying security for users and groups, and managing database key stores and keys for the digital signing of audit data. | <sup>4</sup> |
+| Manage System | Allows the user to do the following management operations: start/stop/pause/resume the system, abort sessions, and view the distribution map, system statistics, and logs. The user can use these commands: nzsystem, nzstate, nzstats, and nzsession. | <sup>4</sup> |
+| Restore | Allows the user to restore the system. The user can run the nzrestore command. | <sup>2</sup> |
+| Unfence | Allows the user to create or alter a user-defined function or aggregate to run in unfenced mode. | <sup>1</sup> |
| Object Privilege Abort | Description | Azure Synapse Equivalent | |-|-|--|
Netezza supports two classes of access rights,&mdash;Admin and Object. See the f
| Delete | Allows the user to delete table rows. Applies only to tables. | DELETE | | Drop | Allows the user to drop objects. Applies to all object types. | DROP | | Execute | Allows the user to run user-defined functions, user-defined aggregates, or stored procedures. | EXECUTE |
-| GenStats | Allows the user to generate statistics on tables or databases. The user can run GENERATE STATISTICS command. | \*\* |
-| Groom | Allows the user to reclaim disk space for deleted or outdated rows, and reorganize a table by the organizing keys, or to migrate data for tables that have multiple stored versions. | \*\* |
+| GenStats | Allows the user to generate statistics on tables or databases. The user can run GENERATE STATISTICS command. | <sup>2</sup> |
+| Groom | Allows the user to reclaim disk space for deleted or outdated rows, and reorganize a table by the organizing keys, or to migrate data for tables that have multiple stored versions. | <sup>2</sup> |
| Insert | Allows the user to insert rows into a table. Applies only to tables. | INSERT | | List | Allows the user to display an object name, either in a list or in another manner. Applies to all objects. | LIST | | Select | Allows the user to select (or query) rows within a table. Applies to tables and views. | SELECT | | Truncate | Allows the user to delete all rows from a table. Applies only to tables. | TRUNCATE | | Update | Allows the user to modify table rows. Applies to tables only. | UPDATE |
-Comments on the preceding table:
+Table notes:
-\* There's no direct equivalent to this function in Azure Synapse.
+1. There's no direct equivalent to this function in Azure Synapse.
-\*\* These Netezza functions are handled automatically in Azure Synapse.
+1. These Netezza functions are handled automatically in Azure Synapse.
-\*\*\* The Azure Synapse `CREATE FUNCTION` feature incorporates Netezza aggregate functionality.
+1. The Azure Synapse `CREATE FUNCTION` feature incorporates Netezza aggregate functionality.
-\*\*\*\* These features are managed automatically by the system or via Azure portal in Azure Synapse&mdash;see the next section on Operational considerations.
+1. These features are managed automatically by the system or via the Azure portal in Azure Synapse. See the next section on Operational considerations.
Refer to [Azure Synapse Analytics security permissions](../../guidance/security-white-paper-introduction.md).
Netezza administration tasks typically fall into two categories:
- Database administration, which is managing user databases and their content, loading data, backing up data, restoring data, and controlling access to data and permissions.
-IBM&reg; Netezza&reg; offers several ways or interfaces that you can use to perform the various system and database management tasks:
+IBM Netezza offers several ways or interfaces that you can use to perform the various system and database management tasks:
-- Netezza commands (nz* commands) are installed in the /nz/kit/bin directory on the Netezza host. For many of the nz* commands, you must be able to sign into the Netezza system to access and run those commands. In most cases, users sign in as the default nz user account, but you can create other Linux user accounts on your system. Some commands require you to specify a database user account, password, and database to ensure that you have permission to do the task.
+- Netezza commands (`nz*` commands) are installed in the `/nz/kit/bin` directory on the Netezza host. For many of the `nz*` commands, you must be able to sign into the Netezza system to access and run those commands. In most cases, users sign in as the default `nz` user account, but you can create other Linux user accounts on your system. Some commands require you to specify a database user account, password, and database to ensure that you have permission to do the task.
-- The Netezza CLI client kits package a subset of the nz* commands that can be run from Windows and UNIX client systems. The client commands might also require you to specify a database user account, password, and database to ensure that you have database administrative and object permissions to perform the task.
+- The Netezza CLI client kits package a subset of the `nz*` commands that can be run from Windows and UNIX client systems. The client commands might also require you to specify a database user account, password, and database to ensure that you have database administrative and object permissions to perform the task.
- The SQL commands support administration tasks and queries within a SQL database session. You can run the SQL commands from the Netezza nzsql command interpreter or through SQL APIs such as ODBC, JDBC, and the OLE DB Provider. You must have a database user account to run the SQL commands with appropriate permissions for the queries and tasks that you perform. - The NzAdmin tool is a Netezza interface that runs on Windows client workstations to manage Netezza systems.
-While conceptually the management and operations tasks for different data warehouses are similar, the individual implementations may differ. In general, modern cloud-based products such as Azure Synapse tend to incorporate a more automated and "system managed" approach (as opposed to a more 'manual' approach in legacy data warehouses such as Netezza).
+While conceptually the management and operations tasks for different data warehouses are similar, the individual implementations may differ. In general, modern cloud-based products such as Azure Synapse tend to incorporate a more automated and "system managed" approach (as opposed to a more "manual" approach in legacy data warehouses such as Netezza).
The following sections compare Netezza and Azure Synapse options for various operational tasks.
> [!TIP] > Housekeeping tasks keep a production warehouse operating efficiently and optimize use of resources such as storage.
-In most legacy data warehouse environments, regular 'housekeeping' tasks are time-consuming. Reclaim disk storage space by removing old versions of updated or deleted rows or reorganizing data, log file or index blocks for efficiency (`GROOM` and `VACUUM` in Netezza). Collecting statistics is also a potentially time-consuming task, required after a bulk data ingest to provide the query optimizer with up-to-date data on which to base query execution plans.
+In most legacy data warehouse environments, regular "housekeeping" tasks are time-consuming. Reclaim disk storage space by removing old versions of updated or deleted rows or reorganizing data, log files, or index blocks for efficiency (`GROOM` and `VACUUM` in Netezza). Collecting statistics is also a potentially time-consuming task, required after a bulk data ingest to provide the query optimizer with up-to-date data on which to base query execution plans.
Netezza recommends collecting statistics as follows:
- Prototype phase, newly populated tables. -- Production phase, after a significant percentage of change to the table or partition (~10% rows). For high volumes of nonunique values, such as dates or timestamps, it may be advantageous to recollect at 7%.
+- Production phase, after a significant percentage of change to the table or partition (~10% of rows). For high volumes of nonunique values, such as dates or timestamps, it may be advantageous to recollect at 7%.
-- Recommendation: Collect production phase statistics after you've created users and applied real world query loads to the database (up to about three months of querying).
+- Recommendation: collect production phase statistics after you've created users and applied real world query loads to the database (up to about three months of querying).
- Collect statistics in the first few weeks after an upgrade or migration during periods of low CPU utilization.
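As a rough sketch of how these maintenance tasks map between the two platforms, the following commands illustrate the idea; the table name `sales` and the statistics name are invented for the example, and the Netezza commands are shown only as comments for comparison.

```sql
-- Netezza housekeeping on a hypothetical "sales" table:
-- GROOM TABLE sales;              -- reclaim space left by updated or deleted rows
-- GENERATE STATISTICS ON sales;   -- refresh optimizer statistics

-- Comparable maintenance on an Azure Synapse dedicated SQL pool:
CREATE STATISTICS stat_sales_date ON dbo.sales (sale_date);  -- one-time, per column
UPDATE STATISTICS dbo.sales;                                 -- refresh after bulk loads
```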
-Netezza Database contains many log tables in the Data Dictionary that accumulate data, either automatically or after certain features are enabled. Because log data grows over time, purge older information to avoid using up permanent space. There are options to automate the maintenance of these logs available.
+A Netezza database contains many log tables in the data dictionary that accumulate data, either automatically or after certain features are enabled. Because log data grows over time, purge older information to avoid using up permanent space. Options are available to automate the maintenance of these logs.
> [!TIP] > Automate and monitor housekeeping tasks in Azure.
Azure Synapse has an option to automatically create statistics so that they can
> [!TIP] > Netezza Performance Portal is the recommended method of monitoring and logging for Netezza systems.
-Netezza provides the Netezza Performance Portal to monitor various aspects of one or more Netezza systems including activity, performance, queuing, and resource utilization. Netezza Performance Portal is an interactive GUI which allows users to drill down into low-level details for any chart.
+Netezza provides the Netezza Performance Portal to monitor various aspects of one or more Netezza systems including activity, performance, queuing, and resource utilization. Netezza Performance Portal is an interactive GUI that allows users to drill down into low-level details for any chart.
> [!TIP]
-> Azure Portal provides a GUI to manage monitoring and auditing tasks for all Azure data and processes.
+> The Azure portal provides a UI to manage monitoring and auditing tasks for all Azure data and processes.
Similarly, Azure Synapse provides a rich monitoring experience within the Azure portal to provide insights into your data warehouse workload. The Azure portal is the recommended tool when monitoring your data warehouse as it provides configurable retention periods, alerts, recommendations, and customizable charts and dashboards for metrics and logs.
The portal also enables integration with other Azure monitoring services such as
Resource utilization statistics for Azure Synapse are automatically logged within the system. The metrics for each query include usage statistics for CPU, memory, cache, I/O, and temporary workspace, as well as connectivity information like failed connection attempts.
-Azure Synapse provides a set of [Dynamic management views](../../sql-data-warehouse/sql-data-warehouse-manage-monitor.md?msclkid=3e6eefbccfe211ec82d019ada29b1834) (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
+Azure Synapse provides a set of [Dynamic Management Views](../../sql-data-warehouse/sql-data-warehouse-manage-monitor.md?msclkid=3e6eefbccfe211ec82d019ada29b1834) (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
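For example, here's a minimal sketch of a DMV query that surfaces the longest-running recent requests on a dedicated SQL pool; it assumes nothing beyond access to the pool.

```sql
-- Top 10 longest-running recent requests on a dedicated SQL pool.
SELECT TOP 10
    request_id,
    [status],
    total_elapsed_time,   -- elapsed time in milliseconds
    command
FROM sys.dm_pdw_exec_requests
ORDER BY total_elapsed_time DESC;
```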
For more information, see [Azure Synapse operations and management options](/azure/sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance).

### High Availability (HA) and Disaster Recovery (DR)
-Netezza appliances are redundant, fault-tolerant systems and there are diverse options in a Netezza system to enable high availability and disaster recovery.
+Netezza appliances are redundant, fault-tolerant systems, and there are diverse options in a Netezza system to enable high availability and disaster recovery.
-Adding IBM&reg; Netezza Replication Services for disaster recovery improves fault tolerance by extending redundancy across local and wide area networks.
+Adding IBM Netezza Replication Services for disaster recovery improves fault tolerance by extending redundancy across local and wide area networks.
IBM Netezza Replication Services protects against data loss by synchronizing data on a primary system (the primary node) with data on one or more target nodes (subordinates). These nodes make up a replication set.
Azure Synapse automatically takes snapshots throughout the day, creating restore
User-defined restore points are also supported, allowing manual triggering of snapshots to create restore points of a data warehouse before and after large modifications. This capability ensures that restore points are logically consistent, which provides additional data protection in case of any workload interruptions or user errors for a desired RPO less than 8 hours.
-As well as the snapshots described previously, Azure Synapse also performs as standard a geo-backup once per day to a [paired data center.](/azure/best-practices-availability-paired-regions) The RPO for a geo-restore is 24 hours. You can restore the geo-backup to a server in any other region where Azure Synapse is supported. A geo-backup ensures that a data warehouse can be restored in case the restore points in the primary region aren't available.
+As well as the snapshots described previously, Azure Synapse also performs as standard a geo-backup once per day to a [paired data center](/azure/best-practices-availability-paired-regions). The RPO for a geo-restore is 24 hours. You can restore the geo-backup to a server in any other region where Azure Synapse is supported. A geo-backup ensures that a data warehouse can be restored in case the restore points in the primary region aren't available.
| Technique | Description |
|--|-|
In Azure Synapse, resource classes are pre-determined resource limits that gover
See [Resource classes for workload management](/azure/sql-data-warehouse/resource-classes-for-workload-management) for detailed information.
-This information can also be used for capacity planning, determining the resources required for additional users or application workload. This also applies to planning scale up/scale downs of compute resources for cost-effective support of 'peaky' workloads.
+This information can also be used for capacity planning, determining the resources required for additional users or application workload. This also applies to planning scale up/scale downs of compute resources for cost-effective support of "spiky" workloads, such as workloads with temporary, intense bursts of activity surrounded by periods of infrequent activity.
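As a brief sketch, resource classes in a dedicated SQL pool are assigned through database role membership; the user name `load_user` below is hypothetical.

```sql
-- Give a load user more memory per request by adding it to a larger resource class.
EXEC sp_addrolemember 'largerc', 'load_user';

-- Return the user to the default resource class once the heavy workload completes.
EXEC sp_droprolemember 'largerc', 'load_user';
```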
### Scale compute resources

> [!TIP]
> A major benefit of Azure is the ability to independently scale up and down compute resources on demand to handle peaky workloads cost-effectively.
-The architecture of Azure Synapse separates storage and compute, allowing each to scale independently. As a result, [compute resources can be scaled](../../sql-data-warehouse/quickstart-scale-compute-portal.md) to meet performance demands independent of data storage. You can also pause and resume compute resources. A natural benefit of this architecture is that billing for compute and storage is separate. If a data warehouse isn't in use, save on compute costs by pausing compute.
+The architecture of Azure Synapse separates storage and compute, allowing each to scale independently. As a result, [compute resources can be scaled](../../sql-data-warehouse/quickstart-scale-compute-portal.md) to meet performance demands independent of data storage. You can also pause and resume compute resources. A natural benefit of this architecture is that billing for compute and storage is separate. If a data warehouse isn't in use, you can save on compute costs by pausing compute.
Compute resources can be scaled up or scaled back by adjusting the data warehouse units setting for the data warehouse. Loading and query performance will increase linearly as you add more data warehouse units.
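For instance, here's a minimal sketch of scaling a dedicated SQL pool with T-SQL, run while connected to the `master` database; the pool name and target service objective are examples only.

```sql
-- Scale the pool to 300 cDWU.
ALTER DATABASE mydatawarehouse
MODIFY (SERVICE_OBJECTIVE = 'DW300c');

-- Confirm the current service objective.
SELECT db.[name], so.service_objective
FROM sys.databases AS db
JOIN sys.database_service_objectives AS so
    ON db.database_id = so.database_id;
```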
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/4-visualization-reporting.md
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Visualization and reporting for Netezza migrations
Almost every organization accesses data warehouses and data marts using a range
- Custom analytic applications that have embedded BI tool functionality inside the application. -- Operational applications that request BI on demand, by invoking queries and reports as-a-service on a BI platform, which in turn queries data in the data warehouse or data marts that are being migrated.
+- Operational applications that request BI on demand by invoking queries and reports as a service on a BI platform, which in turn queries data in the data warehouse or data marts that are being migrated.
-- Interactive data science development tools, such as Azure Synapse Spark Notebooks, Azure Machine Learning, RStudio, Jupyter Notebooks.
+- Interactive data science development tools, such as Azure Synapse Spark Notebooks, Azure Machine Learning, RStudio, and Jupyter Notebooks.
-The migration of visualization and reporting as part of a data warehouse migration program means that all the existing queries, reports, and dashboards generated and issued by these tools and applications, need to run on Azure Synapse and yield the same results as they did in the original data warehouse prior to migration.
+The migration of visualization and reporting as part of a data warehouse migration program means that all the existing queries, reports, and dashboards generated and issued by these tools and applications need to run on Azure Synapse and yield the same results as they did in the original data warehouse prior to migration.
> [!TIP] > Existing users, user groups, roles and assignments of access security privileges need to be migrated first for migration of reports and visualizations to succeed.
-To make that happen, everything that BI tools and applications depend on needs to work once you migrate your data warehouse schema and data to Azure Synapse. That includes the obvious and the not so obvious&mdash;such as access and security. Access and security are important considerations for data access in the migrated system, and are specifically discussed in [another guide](3-security-access-operations.md) in this series. When you address access and security, ensure that:
+To make that happen, everything that BI tools and applications depend on still needs to work once you migrate your data warehouse schema and data to Azure Synapse. That includes the obvious and the not so obvious&mdash;such as access and security. Access and security are important considerations for data access in the migrated system, and are specifically discussed in [another guide](3-security-access-operations.md) in this series. When you address access and security, ensure that:
- Authentication is migrated to let users sign in to the data warehouse and data mart databases on Azure Synapse.
In addition, all the required data needs to be migrated to ensure the same resul
> [!TIP] > Views and SQL queries using proprietary SQL query extensions are likely to result in incompatibilities that impact BI reports and dashboards.
-If BI tools are querying views in the underlying data warehouse or data mart database, then will these views still work? You might think yes, but if there are proprietary SQL extensions, specific to your legacy data warehouse DBMS in these views that have no equivalent in Azure Synapse, you'll need to know about them and find a way to resolve them.
+If BI tools are querying views in the underlying data warehouse or data mart database, then will these views still work? You might think yes, but if there are proprietary SQL extensions specific to your legacy data warehouse DBMS in these views that have no equivalent in Azure Synapse, you'll need to know about them and find a way to resolve them.
Other issues like the behavior of nulls or data type variations across DBMS platforms need to be tested, in case they cause slightly different calculation results. Obviously, you want to minimize these issues and take all necessary steps to shield business users from any kind of impact. Depending on your legacy data warehouse system (such as Netezza), there are [tools](../../partner/data-integration.md) that can help hide these differences so that BI tools and applications are kept unaware of them and can run unchanged.
There's a lot to think about here, so let's look at all this in more detail.
> [!TIP] > Data virtualization allows you to shield business users from structural changes during migration so that they remain unaware of changes.
-The temptation during data warehouse migration to the cloud is to take the opportunity to make changes during the migration to fulfill long-term requirements, such as opening business requests, missing data, new features, and more. However, if you're going to do that, it can affect BI tool business users and applications accessing your data warehouse, especially if it involves structural changes in your data model. Even if there were no new data structures because of new requirements, but you're considering adopting a different data modeling technique (like Data Vault) in your migrated data warehouse, you're likely to cause structural changes that impact BI reports and dashboards. If you want to adopt an agile data modeling technique, do so after migration. One way in which you can minimize the impact of things like schema changes on BI tools, users, and the reports they produce, is to introduce data virtualization between BI tools and your data warehouse and data marts. The following diagram shows how data virtualization can hide the migration from users.
+The temptation during data warehouse migration to the cloud is to take the opportunity to make changes during the migration to fulfill long-term requirements, such as open business requests, missing data, new features, and more. However, these changes can affect the BI tools accessing your data warehouse, especially if they involve structural changes in your data model. If you want to adopt an agile data modeling technique or implement structural changes, do so *after* migration.
+
+One way in which you can minimize the impact of things like schema changes on BI tools is to introduce data virtualization between BI tools and your data warehouse and data marts. The following diagram shows how data virtualization can hide the migration from users.
:::image type="content" source="../media/4-visualization-reporting/migration-data-virtualization.png" border="true" alt-text="Diagram showing how to hide the migration from users through data virtualization.":::
A key question when migrating your existing reports and dashboards to Azure Syna
These factors are discussed in more detail later in this article.
-Whatever the decision is, it must involve the business, since they produce the reports and dashboards, and consume the insights these artifacts provide in support of the decisions that are made around your business. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer up like-for-like results, simply by pointing your BI tool(s) at Azure Synapse, instead of your legacy data warehouse system, then everyone benefits. Therefore, if it's that straightforward and there's no reliance on legacy system proprietary SQL extensions, then there's no doubt that the above ease of migration option breeds confidence.
+Whatever the decision is, it must involve the business, since they produce the reports and dashboards, and consume the insights these artifacts provide in support of the decisions that are made around your business. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer up like-for-like results, simply by pointing your BI tool(s) at Azure Synapse, instead of your legacy data warehouse system, then everyone benefits.
### Migrate reports based on usage

Usage is interesting, since it's an indicator of business value. Reports and dashboards that are never used clearly aren't contributing to supporting any decisions and don't currently offer any value. So, do you have any mechanism for finding out which reports and dashboards are currently not used? Several BI tools provide statistics on usage, which would be an obvious place to start.
-If your legacy data warehouse has been up and running for many years, there's a high chance you could have hundreds, if not thousands, of reports in existence. In these situations, usage is an important indicator to the business value of a specific report or dashboard. In that sense, it's worth compiling an inventory of the reports and dashboards you have and defining their business purpose and usage statistics.
+If your legacy data warehouse has been up and running for many years, there's a high chance you could have hundreds, if not thousands, of reports in existence. In these situations, usage is an important indicator of the business value of a specific report or dashboard. In that sense, it's worth compiling an inventory of the reports and dashboards you have and defining their business purpose and usage statistics.
-For those that aren't used at all, it's an appropriate time to seek a business decision, to determine if it necessary to decommission those reports to optimize your migration efforts. A key question worth asking when deciding to decommission unused reports is: are they unused because people don't know they exist, or is it because they offer no business value, or have they been superseded by others?
+For those that aren't used at all, it's an appropriate time to seek a business decision, to determine if it's necessary to decommission those reports to optimize your migration efforts. A key question worth asking when deciding to decommission unused reports is: are they unused because people don't know they exist, or is it because they offer no business value, or have they been superseded by others?
### Migrate reports based on business value

Usage on its own isn't a clear indicator of business value. There needs to be a deeper business context to determine the value to the business. In an ideal world, we would like to know the contribution of the insights produced in a report to the bottom line of the business. That's exceedingly difficult to determine, since every decision made, and its dependency on the insights in a specific report, would need to be recorded along with the contribution that each decision makes to the bottom line of the business. You would also need to do this over time.
-This level of detail is unlikely to be available in most organizations. One way in which you can get deeper on business value to drive migration order is to look at alignment with business strategy. A business strategy set by your executive typically lays out strategic business objectives, key performance indicators (KPIs), and KPI targets that need to be achieved and who is accountable for achieving them. In that sense, classifying your reports and dashboards by strategic business objectives&mdash;for example, reduce fraud, improve customer engagement, and optimize business operations&mdash;will help understand business purpose and show what objective(s), specific reports, and dashboards these are contributing to. Reports and dashboards associated with high priority objectives in the business strategy can then be highlighted so that migration is focused on delivering business value in a strategic high priority area.
+This level of detail is unlikely to be available in most organizations. One way in which you can get deeper on business value to drive migration order is to look at alignment with business strategy. A business strategy set by your executive typically lays out strategic business objectives, key performance indicators (KPIs), KPI targets that need to be achieved, and who is accountable for achieving them. In that sense, classifying your reports and dashboards by strategic business objectives&mdash;for example, reduce fraud, improve customer engagement, and optimize business operations&mdash;will help understand business purpose and show what objective(s), specific reports, and dashboards these are contributing to. Reports and dashboards associated with high priority objectives in the business strategy can then be highlighted so that migration is focused on delivering business value in a strategic high priority area.
-It's also worthwhile to classify reports and dashboards as operational, tactical, or strategic, to understand the level in the business where they're used. Delivering strategic business objectives requires contribution at all these levels. Knowing which reports and dashboards are used, at what level, and what objectives they're associated with, helps to focus migration on high priority business value that will drive the company forward. Business contribution of reports and dashboards is needed to understand this, perhaps like what is shown in the following **Business strategy objective** table.
+It's also worthwhile to classify reports and dashboards as operational, tactical, or strategic, to understand the level in the business where they're used. Delivering strategic business objectives requires contribution at all these levels. Knowing which reports and dashboards are used, at what level, and what objectives they're associated with helps to focus migration on high priority business value that will drive the company forward. Business contribution of reports and dashboards is needed to understand this, perhaps like what is shown in the following **business strategy objective** table.
| **Level** | **Report / dashboard name** | **Business purpose** | **Department used** | **Usage frequency** | **Business priority** |
|-|-|-|-|-|-|
BI tool reports and dashboards, and other visualizations, are produced by issuin
- Non-standard table types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse. -- Data types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse.
+- Data types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse.
-In many cases, where there are incompatibilities, there may be ways around them. For example, the data in unsupported table types can be migrated into a standard table with appropriate data types and indexed or partitioned on a date/time column. Similarly, it's possible to represent unsupported data types in another type of column and perform calculations in Azure Synapse to achieve the same. Either way, it will need refactoring.
+In many cases, where there are incompatibilities, there may be ways around them. For example, the data in unsupported table types can be migrated into a standard table with appropriate data types and indexed or partitioned on a date/time column. Similarly, it may be possible to represent unsupported data types in another type of column and perform calculations in Azure Synapse to achieve the same. Either way, it will need refactoring.
> [!TIP] > Querying the system catalog of your legacy warehouse DBMS is a quick and straightforward way to identify schema incompatibilities with Azure Synapse.
The impact may be less than you think, because many BI tools don't support such
### The impact of SQL incompatibilities and differences
-Additionally, any report, dashboard, or other visualization in an application or tool that makes use of proprietary SQL extensions associated with your legacy data warehouse DBMS, is likely to be impacted when migrating to Azure Synapse. This could happen because the BI tool or application:
+Additionally, any report, dashboard, or other visualization in an application or tool that makes use of proprietary SQL extensions associated with your legacy data warehouse DBMS is likely to be impacted when migrating to Azure Synapse. This could happen because the BI tool or application:
- Accesses legacy data warehouse DBMS views that include proprietary SQL functions that have no equivalent in Azure Synapse.
You can't rely on documentation associated with reports, dashboards, and other v
> [!TIP] > Gauge the impact of SQL incompatibilities by harvesting your DBMS log files and running `EXPLAIN` statements.
-One way is to get a hold of the SQL log files of your legacy data warehouse. Use a script to pull out a representative set of SQL statements into a file, prefix each SQL statement with an `EXPLAIN` statement, and then run all the `EXPLAIN` statements in Azure Synapse. Any SQL statements containing proprietary SQL extensions from your legacy data warehouse that are unsupported will be rejected by Azure Synapse when the `EXPLAIN` statements are executed. This approach would at least give you an idea of how significant or otherwise the use of incompatible SQL is.
+One way is to view the recent SQL activity of your legacy Netezza data warehouse. Query the `_v_qryhist` system table to view recent history data and extract a representative set of SQL statements into a file. For more information, see [Query history table](https://www.ibm.com/docs/en/psfa/7.2.1?topic=tables-query-history-table). Then prefix each SQL statement with an `EXPLAIN` statement and run all the `EXPLAIN` statements in Azure Synapse. Any SQL statements containing proprietary SQL extensions from your legacy data warehouse that are unsupported will be rejected by Azure Synapse when the `EXPLAIN` statements are executed. This approach at least gives you an idea of how significant the use of incompatible SQL is.
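A sketch of the harvesting step follows. The column names (`qh_sessionid`, `qh_tsubmit`, `qh_sql`) are assumptions based on typical query history schemas, so verify them against the query history table documentation for your Netezza release.

```sql
-- Pull the SQL text of queries submitted in the last 30 days (column names assumed).
SELECT qh_sessionid,
       qh_tsubmit,
       qh_sql
FROM _v_qryhist
WHERE qh_tsubmit > CURRENT_DATE - 30
ORDER BY qh_tsubmit;
```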
Metadata from your legacy data warehouse DBMS will also help you when it comes to views. Again, you can capture and view SQL statements, and `EXPLAIN` them as described previously to identify incompatible SQL in views.
A key element in data warehouse migration is the testing of reports and dashboards against Azure Synapse to verify that the migration has worked. To do this, you need to define a series of tests and a set of required outcomes for each test that needs to be run to verify success. It's important to ensure that reports and dashboards are tested and compared across your existing and migrated data warehouse systems to: -- Identify whether schema changes made during migration such as data types to be converted, have impacted reports in terms of ability to run, results, and corresponding visualizations.
+- Identify whether schema changes made during migration, such as data types to be converted, have impacted reports in terms of their ability to run, their results, and the corresponding visualizations.
- Verify all users are migrated.
- Ensure consistent results of all known queries, reports, and dashboards. -- Ensure that data and ETL migration is complete and error free.
+- Ensure that data and ETL migration is complete and error-free.
- Ensure data privacy is upheld.
Ad-hoc analysis and reporting are more challenging and require a set of tests to
In terms of security, the best way to do this is to create roles, assign access privileges to roles, and then attach users to roles. To access your newly migrated data warehouse, set up an automated process to create new users, and to do role assignment. To detach users from roles, you can follow the same steps.
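A minimal sketch of that pattern in Azure Synapse SQL follows; the role, schema, and user names are hypothetical.

```sql
-- Create a role and grant it read access to the reporting schema.
CREATE ROLE report_reader;
GRANT SELECT ON SCHEMA::dbo TO report_reader;

-- Create a migrated Azure AD user and attach it to the role.
CREATE USER [analyst@contoso.com] FROM EXTERNAL PROVIDER;
ALTER ROLE report_reader ADD MEMBER [analyst@contoso.com];

-- Detach the user from the role when access is no longer required.
ALTER ROLE report_reader DROP MEMBER [analyst@contoso.com];
```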
-It's also important to communicate the cut-over to all users, so they know what's changing and what to expect.
+It's also important to communicate the cutover to all users, so they know what's changing and what to expect.
## Analyze lineage to understand dependencies between reports, dashboards, and data
A critical success factor in migrating reports and dashboards is understanding l
In multi-vendor data warehouse environments, business analysts in BI teams may map out data lineage. For example, if you have Informatica for your ETL, Oracle for your data warehouse, and Tableau for reporting, each of which have their own metadata repository, figuring out where a specific data element in a report came from can be challenging and time consuming.
-To migrate seamlessly from a legacy data warehouse to Azure Synapse, end-to-end data lineage helps prove like-for-like migration when comparing reports and dashboards against your legacy environment. That means that metadata from several tools needs to be captured and integrated to show the end to end journey. Having access to tools that support automated metadata discovery and data lineage will let you see duplicate reports and ETL processes and reports that rely on data sources that are obsolete, questionable, or even non-existent. With this information, you can reduce the number of reports and ETL processes that you migrate.
+To migrate seamlessly from a legacy data warehouse to Azure Synapse, end-to-end data lineage helps prove like-for-like migration when comparing reports and dashboards against your legacy environment. That means that metadata from several tools needs to be captured and integrated to show the end-to-end journey. Having access to tools that support automated metadata discovery and data lineage will let you see duplicate reports and ETL processes and reports that rely on data sources that are obsolete, questionable, or even non-existent. With this information, you can reduce the number of reports and ETL processes that you migrate.
-You can also compare end-to-end lineage of a report in Azure Synapse against the end-to-end lineage, for the same report in your legacy data warehouse environment, to see if there are any differences that have occurred inadvertently during migration. This helps enormously with testing and verifying migration success.
+You can also compare end-to-end lineage of a report in Azure Synapse against the end-to-end lineage for the same report in your legacy data warehouse environment, to see if there are any differences that have occurred inadvertently during migration. This helps enormously with testing and verifying migration success.
Data lineage visualization not only reduces time, effort, and error in the migration process, but also enables faster execution of the migration project.
A good way to get everything consistent across multiple BI tools is to create a
> [!TIP] > Use data virtualization to create a common semantic layer to guarantee consistency across all BI tools in an Azure Synapse environment.
-In this way, you get consistency across all BI tools, while at the same time breaking the dependency between BI tools and applications, and the underlying physical data structures in Azure Synapse. Use [Microsoft partners](../../partner/data-integration.md) on Azure to implement this. The following diagram shows how a common vocabulary in the Data Virtualization server lets multiple BI tools see a common semantic layer.
+In this way, you get consistency across all BI tools, while at the same time breaking the dependency between BI tools and applications and the underlying physical data structures in Azure Synapse. Use [Microsoft partners](../../partner/data-integration.md) on Azure to implement this. The following diagram shows how a common vocabulary in the data virtualization server lets multiple BI tools see a common semantic layer.
:::image type="content" source="../media/4-visualization-reporting/data-virtualization-semantics.png" border="true" alt-text="Diagram with common data names and definitions that relate to the data virtualization server.":::
> [!TIP] > Identify incompatibilities early to gauge the extent of the migration effort. Migrate your users, group roles and privilege assignments. Only migrate the reports and visualizations that are used and are contributing to business value.
-In a lift-and-shift data warehouse migration to Azure Synapse, most reports and dashboards should migrate easily.
+In a lift and shift data warehouse migration to Azure Synapse, most reports and dashboards should migrate easily.
-However, if data structures change, then data is stored in unsupported data types or access to data in the data warehouse or data mart is via a view that includes proprietary SQL that's unsupported in your Azure Synapse environment. You'll need to deal with those issues if they arise.
+However, if data structures change, if data is stored in unsupported data types, or if access to data in the data warehouse or data mart is via a view that includes proprietary SQL that's unsupported in your Azure Synapse environment, you'll need to deal with those issues as they arise.
You can't rely on documentation to find out where the issues are likely to be. Making use of `EXPLAIN` statements is a pragmatic and quick way to identify incompatibilities in SQL. Rework these to achieve similar results in Azure Synapse. In addition, it's recommended that you make use of automated metadata discovery and lineage tools to help you identify duplicate reports, reports that are no longer valid because they're using data from data sources that you no longer use, and to understand dependencies. Some of these tools help compare lineage to verify that reports running in your legacy data warehouse environment are produced identically in Azure Synapse.
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/5-minimize-sql-issues.md
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Minimize SQL issues for Netezza migrations
By creating metadata to list the data tables to be migrated and their location,
> [!TIP] > SQL DDL commands `CREATE TABLE` and `CREATE VIEW` have standard core elements but are also used to define implementation-specific options.
-The ANSI SQL standard defines the basic syntax for DDL commands such as `CREATE TABLE` and `CREATE VIEW`. These commands are used within both Netezza and Azure Synapse, but they've also been extended to allow definition of implementation-specific features such as indexing, table distribution and partitioning options.
+The ANSI SQL standard defines the basic syntax for DDL commands such as `CREATE TABLE` and `CREATE VIEW`. These commands are used within both Netezza and Azure Synapse, but they've also been extended to allow definition of implementation-specific features such as indexing, table distribution, and partitioning options.
-The following sections discuss Netezza-specific options to consider during a migration to Azure Synapse.
+The following sections discuss Netezza-specific options to consider during a migration to Azure Synapse.
### Table considerations

> [!TIP]
> Use existing indexes to give an indication of candidates for indexing in the migrated warehouse.
-When migrating tables between different technologies, only the raw data and its descriptive metadata gets physically moved between the two environments. Other database elements from the source system, such as indexes and log files, aren't directly migrated as these may not be needed or may be implemented differently within the new target environment. For example, the `TEMPORARY` option within Netezza's `CREATE TABLE` syntax is equivalent to prefixing the table name with a "#" character in Azure Synapse.
+When migrating tables between different technologies, only the raw data and its descriptive metadata get physically moved between the two environments. Other database elements from the source system, such as indexes and log files, aren't directly migrated as these may not be needed or may be implemented differently within the new target environment. For example, the `TEMPORARY` option within Netezza's `CREATE TABLE` syntax is equivalent to prefixing the table name with a "#" character in Azure Synapse.
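As a small sketch of that mapping, with invented table and column names:

```sql
-- Netezza: CREATE TEMPORARY TABLE stage_sales (sale_id INTEGER, amount NUMERIC(10,2));

-- Azure Synapse: prefix the name with "#" to create a session-scoped temporary table.
CREATE TABLE #stage_sales
(
    sale_id INT,
    amount  DECIMAL(10,2)
);
```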
-It's important to understand where performance optimizations&mdash;such as indexes&mdash;were used in the source environment. This indicates where performance optimization can be added in the new target environment. For example, if zone maps were created in the source Netezza environment, this might indicate that a non-clustered index should be created in the migrated Azure Synapse. Other native performance optimization techniques, such as table replication, may be more applicable than a straight 'like for like' index creation.
+It's important to understand where performance optimizations&mdash;such as indexes&mdash;were used in the source environment. This indicates where performance optimization can be added in the new target environment. For example, if zone maps were created in the source Netezza environment, this might indicate that a non-clustered index should be created in the migrated Azure Synapse database. Other native performance optimization techniques, such as table replication, may be more applicable than a straight "like-for-like" index creation.
### Unsupported Netezza database object types
Netezza implements some database objects that aren't directly supported in Azure Synapse, but there are methods to achieve the same functionality within the new environment: -- Zone Maps: In Netezza, zone maps are automatically created and maintained for some column types and are used at query time to restrict the amount of data to be scanned. Zone Maps are created on the following column types:
+- Zone maps: in Netezza, zone maps are automatically created and maintained for some column types and are used at query time to restrict the amount of data to be scanned. Zone maps are created on the following column types:
  - `INTEGER` columns of length 8 bytes or less.
  - Temporal columns. For instance, `DATE`, `TIME`, and `TIMESTAMP`.
  - `CHAR` columns, if these are part of a materialized view and mentioned in the `ORDER BY` clause.

  You can find out which columns have zone maps by using the `nz_zonemap` utility, which is part of the NZ Toolkit. Azure Synapse doesn't include zone maps, but you can achieve similar results by using other user-defined index types and/or partitioning.

-- Clustered Base tables (CBT): In Netezza, CBTs are commonly used for fact tables, which can have billions of records. Scanning such a huge table requires a lot of processing time, since a full table scan might be needed to get relevant records. Organizing records on restrictive CBT via allows Netezza to group records in same or nearby extents. This process also creates zone maps that improve the performance by reducing the amount of data to be scanned.
+- Clustered base tables (CBT): in Netezza, CBTs are commonly used for fact tables, which can have billions of records. Scanning such a huge table requires a lot of processing time, since a full table scan might be needed to get relevant records. Organizing records on a restrictive CBT allows Netezza to group records in the same or nearby extents. This process also creates zone maps that improve performance by reducing the amount of data to be scanned.
In Azure Synapse, you can achieve a similar effect by use of partitioning and/or use of other indexes.
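For example, here's a sketch of a date-partitioned, clustered columnstore fact table in a dedicated SQL pool; partition elimination and columnstore segment elimination play a role broadly similar to Netezza zone maps and CBTs. The table, columns, and partition boundaries are illustrative only.

```sql
CREATE TABLE dbo.fact_sales
(
    sale_date DATE          NOT NULL,
    store_id  INT           NOT NULL,
    amount    DECIMAL(12,2) NULL
)
WITH
(
    DISTRIBUTION = HASH (store_id),
    CLUSTERED COLUMNSTORE INDEX,
    -- Queries that filter on sale_date scan only the relevant partitions.
    PARTITION (sale_date RANGE RIGHT FOR VALUES ('20220101', '20220201', '20220301'))
);
```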
Most Netezza data types have a direct equivalent in Azure Synapse. The following
| BYTEINT | TINYINT |
| CHARACTER VARYING(n) | VARCHAR(n) |
| CHARACTER(n) | CHAR(n) |
-| DATE | DATE(DATE |
+| DATE | DATE |
| DECIMAL(p,s) | DECIMAL(p,s) |
| DOUBLE PRECISION | FLOAT |
| FLOAT(n) | FLOAT(n) |
| INTEGER | INT |
-| INTERVAL | INTERVAL data types aren't currently directly supported in Azure Synapse but can be calculated using temporal functions such as DATEDIFF |
+| INTERVAL | INTERVAL data types aren't currently directly supported in Azure Synapse but can be calculated using temporal functions such as DATEDIFF. |
| MONEY | MONEY |
| NATIONAL CHARACTER VARYING(n) | NVARCHAR(n) |
| NATIONAL CHARACTER(n) | NCHAR(n) |
| NUMERIC(p,s) | NUMERIC(p,s) |
| REAL | REAL |
| SMALLINT | SMALLINT |
-| ST_GEOMETRY(n) | Spatial data types such as ST_GEOMETRY aren't currently supported in Azure Synapse, but the data could be stored as VARCHAR or VARBINARY |
+| ST_GEOMETRY(n) | Spatial data types such as ST_GEOMETRY aren't currently supported in Azure Synapse, but the data could be stored as VARCHAR or VARBINARY. |
| TIME | TIME |
| TIME WITH TIME ZONE | DATETIMEOFFSET |
| TIMESTAMP | DATETIME |
### Data Definition Language (DDL) generation

> [!TIP]
-> Use existing Netezza metadata to automate the generation of `CREATE TABLE` and `CREATE VIEW DDL` for Azure Synapse.
+> Use existing Netezza metadata to automate the generation of `CREATE TABLE` and `CREATE VIEW` DDL for Azure Synapse.
Edit existing Netezza `CREATE TABLE` and `CREATE VIEW` scripts to create the equivalent definitions with modified data types as described previously if necessary. Typically, this involves removing or modifying any extra Netezza-specific clauses such as `ORGANIZE ON`.
There are [Microsoft partners](../../partner/data-integration.md) who offer tool
### SQL Data Manipulation Language (DML)

> [!TIP]
-> SQL DML commands `SELECT`, `INSERT` and `UPDATE` have standard core elements but may also implement different syntax options.
+> SQL DML commands `SELECT`, `INSERT`, and `UPDATE` have standard core elements but may also implement different syntax options.
The ANSI SQL standard defines the basic syntax for DML commands such as `SELECT`, `INSERT`, `UPDATE`, and `DELETE`. Both Netezza and Azure Synapse use these commands, but in some cases there are implementation differences.
The following sections discuss the Netezza-specific DML commands that you should
### SQL DML syntax differences
-Be aware of these differences in SQL DML syntax between Netezza SQL and Azure Synapse when migrating:
+Be aware of these differences in SQL Data Manipulation Language (DML) syntax between Netezza SQL and Azure Synapse when migrating:
-- `STRPOS`: In Netezza, the `STRPOS` function returns the position of a substring within a string. The equivalent function in Azure Synapse is `CHARINDEX`, with the order of the arguments reversed. For example, `SELECT STRPOS('abcdef','def')...` in Netezza is equivalent to `SELECT CHARINDEX('def','abcdef')...` in Azure Synapse.
+- `STRPOS`: in Netezza, the `STRPOS` function returns the position of a substring within a string. The equivalent function in Azure Synapse is `CHARINDEX`, with the order of the arguments reversed. For example, `SELECT STRPOS('abcdef','def')...` in Netezza is equivalent to `SELECT CHARINDEX('def','abcdef')...` in Azure Synapse.
- `AGE`: Netezza supports the `AGE` operator to give the interval between two temporal values, such as timestamps or dates. For example, `SELECT AGE('23-03-1956','01-01-2019') FROM...`. In Azure Synapse, `DATEDIFF` gives the interval. For example, `SELECT DATEDIFF(day, '1956-03-26','2019-01-01') FROM...`. Note the date representation sequence.
As with most database products, Netezza supports system functions and user-defin
Most modern database products allow for procedures to be stored within the database. Netezza provides the NZPLSQL language, which is based on Postgres PL/pgSQL. A stored procedure typically contains SQL statements and some procedural logic, and may return data or a status.
-SQL Azure Data Warehouse also supports stored procedures using T-SQL, so if you must migrate stored procedures, recode them accordingly.
+Azure Synapse Analytics also supports stored procedures using T-SQL, so if you must migrate stored procedures, recode them accordingly.
#### Sequences

In Netezza, a sequence is a named database object created via `CREATE SEQUENCE` that can provide a unique value via the `NEXT VALUE FOR` method. Use these to generate unique numbers for use as surrogate key values for primary keys.
-In Azure Synapse, there's no `CREATE SEQUENCE`. Sequences are handled using [identity to create surrogate keys](../../sql-data-warehouse/sql-data-warehouse-tables-identity.md) or [managed identity](../../../data-factory/data-factory-service-identity.md?tabs=data-factory) using SQL code to create the next sequence number in a series.
+In Azure Synapse, there's no `CREATE SEQUENCE`. Sequences are handled using [IDENTITY to create surrogate keys](../../sql-data-warehouse/sql-data-warehouse-tables-identity.md) or [managed identity](../../../data-factory/data-factory-service-identity.md?tabs=data-factory) using SQL code to create the next sequence number in a series.
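A minimal sketch of the `IDENTITY` approach, with invented dimension and column names:

```sql
-- Surrogate key generated by IDENTITY instead of a Netezza sequence.
CREATE TABLE dbo.dim_customer
(
    customer_sk   INT IDENTITY(1, 1) NOT NULL,
    customer_code NVARCHAR(20)       NOT NULL
)
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    CLUSTERED COLUMNSTORE INDEX
);
```

Note that identity values in a dedicated SQL pool aren't guaranteed to be contiguous, and the identity column can't also be the distribution column.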
### Use [EXPLAIN](/sql/t-sql/queries/explain-transact-sql?msclkid=91233fc1cff011ec9dff597671b7ae97) to validate legacy SQL

> [!TIP]
> Find potential migration issues by using real queries from the existing system query logs.
-Capture some representative SQL statements from the legacy query history logs to evaluate legacy Netezza SQL for compatibility with Azure Synapse. Then prefix those queries with `EXPLAIN` and&mdash;assuming a 'like for like' migrated data model in Azure Synapse with the same table and column names&mdash;run those `EXPLAIN` statements in Azure Synapse. Any incompatible SQL will return an error. Use this information to determine the scale of the recoding task. This approach doesn't require data to be loaded into the Azure environment, only that the relevant tables and views have been created.
+Capture some representative SQL statements from the legacy query history logs to evaluate legacy Netezza SQL for compatibility with Azure Synapse. Then prefix those queries with `EXPLAIN` and&mdash;assuming a "like-for-like" migrated data model in Azure Synapse with the same table and column names&mdash;run those `EXPLAIN` statements in Azure Synapse. Any incompatible SQL will return an error. Use this information to determine the scale of the recoding task. This approach doesn't require data to be loaded into the Azure environment, only that the relevant tables and views have been created.
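For example, here's a sketch of the validation step; the query is a stand-in for a harvested legacy statement and assumes the corresponding table has already been created.

```sql
-- EXPLAIN returns the query plan, or an error for unsupported syntax, without executing the query.
EXPLAIN
SELECT store_id,
       SUM(amount) AS total_amount
FROM dbo.fact_sales
GROUP BY store_id;
```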
#### IBM Netezza to T-SQL mapping
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/6-microsoft-third-party-migration-tools.md
Title: "Tools for Netezza data warehouse migration to Azure Synapse Analytics"
-description: Learn about Microsoft and third-party data and database migration tools that can help you migrate from Netezza to Azure Synapse.
+description: Learn about Microsoft and third-party data and database migration tools that can help you migrate from Netezza to Azure Synapse Analytics.
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Tools for Netezza data warehouse migration to Azure Synapse Analytics
Azure Data Factory is the recommended approach for implementing data integration
#### Azure ExpressRoute
-Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the internet, and they offer more reliability, faster speeds, and lower latencies than typical internet connections. In some cases, by using ExpressRoute connections to transfer data between on-premises systems and Azure, you gain significant cost benefits.
+Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the public internet, and they offer more reliability, faster speeds, and lower latencies than typical internet connections. In some cases, by using ExpressRoute connections to transfer data between on-premises systems and Azure, you gain significant cost benefits.
#### AzCopy
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/7-beyond-data-warehouse-migration.md
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Beyond Netezza migration, implementing a modern data warehouse in Microsoft Azure
This article is part seven of a seven part series that provides guidance on how
## Beyond data warehouse migration to Azure
-One of the key reasons to migrate your existing data warehouse to Azure Synapse Analytics is to utilize a globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database. Azure Synapse also lets you integrate your migrated data warehouse with the complete Microsoft Azure analytical ecosystem to take advantage of, and integrate with, other Microsoft technologies that help you modernize your migrated data warehouse. This includes integration with technologies like:
+One of the key reasons to migrate your existing data warehouse to Azure Synapse Analytics is to utilize a globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database. Azure Synapse also lets you integrate your migrated data warehouse with the complete Microsoft Azure analytical ecosystem to take advantage of, and integrate with, other Microsoft technologies that help you modernize your migrated data warehouse. This includes integrating with technologies like:
-- Azure Data Lake Storage, for cost effective data ingestion, staging, cleansing, and transformation to free up data warehouse capacity occupied by fast growing staging tables.
+- Azure Data Lake Storage for cost effective data ingestion, staging, cleansing, and transformation, to free up data warehouse capacity occupied by fast growing staging tables.
-- Azure Data Factory, for collaborative IT and self-service data integration [with connectors](../../../data-factory/connector-overview.md) to cloud and on-premises data sources and streaming data.
+- Azure Data Factory for collaborative IT and self-service data integration [with connectors](../../../data-factory/connector-overview.md) to cloud and on-premises data sources and streaming data.
-- [The Open Data Model Common Data Initiative](/common-data-model/), to share consistent trusted data across multiple technologies, including:
+- [The Open Data Model Common Data Initiative](/common-data-model/) to share consistent trusted data across multiple technologies, including:
- Azure Synapse - Azure Synapse Spark - Azure HDInsight
- ML.NET - .NET for Apache Spark to enable data scientists to use Azure Synapse data to train machine learning models at scale. -- [Azure HDInsight](../../../hdinsight/index.yml), to leverage big data analytical processing and join big data with Azure Synapse data by creating a logical data warehouse using PolyBase.
+- [Azure HDInsight](../../../hdinsight/index.yml) to leverage big data analytical processing and join big data with Azure Synapse data by creating a logical data warehouse using PolyBase.
-- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md), and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka), to integrate with live streaming data within Azure Synapse.
+- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md), and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka) to integrate with live streaming data from Azure Synapse.
There's often acute demand to integrate with [machine learning](../../machine-learning/what-is-machine-learning.md) to enable custom-built, trained machine learning models for use in Azure Synapse. This would enable in-database analytics to run at scale in-batch, on an event-driven basis and on-demand. The ability to exploit in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees that all get the same predictions and recommendations.
Let's look at these in more detail to understand how you can take advantage of t
## Offload data staging and ETL processing to Azure Data Lake and Azure Data Factory
-Enterprises today have a key problem resulting from digital transformation. So much new data is being generated and captured for analysis, and much of this data is finding its way into data warehouses. A good example is transaction data created by opening online transaction processing (OLTP) systems to self-service access from mobile devices. These OLTP systems are the main sources of data to a data warehouse, and with customers now driving the transaction rate rather than employees, data in data warehouse staging tables has been growing rapidly in volume.
+Enterprises today have a key problem resulting from digital transformation. So much new data is being generated and captured for analysis, and much of this data is finding its way into data warehouses. A good example is transaction data created by opening OLTP systems to self-service access from mobile devices. These OLTP systems are the main sources of data to a data warehouse, and with customers now driving the transaction rate rather than employees, data in data warehouse staging tables has been growing rapidly in volume.
The rapid influx of data into the enterprise, along with new sources of data like Internet of Things (IoT) streams, means that companies need to find a way to deal with unprecedented data growth and scale data integration ETL processing beyond current levels. One way to do this is to offload ingestion, data cleansing, transformation, and integration to a data lake and process it at scale there, as part of a data warehouse modernization program.
For ELT strategies, consider offloading ELT processing to Azure Data Lake to eas
> [!TIP] > Data Factory allows you to build scalable data integration pipelines code-free.
-[Data Factory](https://azure.microsoft.com/services/data-factory/) is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a simple web-based user interface to build data integration pipelines, in a code-free manner that can:
+[Data Factory](https://azure.microsoft.com/services/data-factory/) is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a simple web-based user interface to build data integration pipelines in a code-free manner that can:
-- Build scalable data integration pipelines code-free. Easily acquire data at scale. Pay only for what you use and connect to on premises, cloud, and SaaS-based data sources.
+- Build scalable data integration pipelines code-free. Easily acquire data at scale. Pay only for what you use, and connect to on-premises, cloud, and SaaS-based data sources.
- Ingest, move, clean, transform, integrate, and analyze cloud and on-premises data at scale. Take automatic action, such as a recommendation or alert. - Seamlessly author, monitor, and manage pipelines that span data stores both on-premises and in the cloud. -- Enable pay-as-you-go scale out in alignment with customer growth.
+- Enable pay-as-you-go scale-out in alignment with customer growth.
> [!TIP] > Data Factory can connect to on-premises, cloud, and SaaS data.
All of this can be done without writing any code. However, adding custom code to
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline.png" border="true" alt-text="Screenshot showing an example of an Azure Data Factory pipeline."::: > [!TIP]
-> Pipelines called data factories control the integration and analysis of data. Data Factory is enterprise class data integration software aimed at IT professionals with a data wrangling facility for business users.
+> Pipelines called data factories control the integration and analysis of data. Data Factory is enterprise-class data integration software aimed at IT professionals with a data wrangling facility for business users.
Implement Data Factory pipeline development from any of several places including:
Implement Data Factory pipeline development from any of several places including
- REST APIs
-Developers and data scientists who prefer to write code can easily author Data Factory pipelines in Java, Python, and .NET using the software development kits (SDKs) available for those programming languages. Data Factory pipelines can also be hybrid as they can connect, ingest, clean, transform and analyze data in on-premises data centers, Microsoft Azure, other clouds, and SaaS offerings.
+Developers and data scientists who prefer to write code can easily author Data Factory pipelines in Java, Python, and .NET using the software development kits (SDKs) available for those programming languages. Data Factory pipelines can also be hybrid since they can connect, ingest, clean, transform, and analyze data in on-premises data centers, Microsoft Azure, other clouds, and SaaS offerings.
Once you develop Data Factory pipelines to integrate and analyze data, deploy those pipelines globally and schedule them to run in batch, invoke them on demand as a service, or run them in real-time on an event-driven basis. A Data Factory pipeline can also run on one or more execution engines and monitor pipeline execution to ensure performance and track errors.
Data Factory can support multiple use cases, including:
#### Data sources
-Azure Data Factory lets you use [connectors](../../../data-factory/connector-overview.md) from both cloud and on-premises data sources. Agent software, known as a *self-hosted integration runtime*, securely accesses on-premises data sources and supports secure, scalable data transfer.
+Data Factory lets you use [connectors](../../../data-factory/connector-overview.md) from both cloud and on-premises data sources. Agent software, known as a *self-hosted integration runtime*, securely accesses on-premises data sources and supports secure, scalable data transfer.
#### Transform data using Azure Data Factory > [!TIP]
-> Professional ETL developers can use Azure Data Factory mapping data flows to clean, transform and integrate data without the need to write code.
+> Professional ETL developers can use Azure Data Factory mapping data flows to clean, transform, and integrate data without the need to write code.
-Within a Data Factory pipeline, ingest, clean, transform, integrate, and, if necessary, analyze any type of data from these sources. This includes structured, semi-structured&mdash;such as JSON or Avro&mdash;and unstructured data.
+Within a Data Factory pipeline, ingest, clean, transform, integrate, and, if necessary, analyze any type of data from these sources. This includes structured data, semi-structured data such as JSON or Avro, and unstructured data.
-Professional ETL developers can use Data Factory mapping data flows to filter, split, join (many types), lookup, pivot, unpivot, sort, union, and aggregate data without writing any code. In addition, Data Factory supports surrogate keys, multiple write processing options such as insert, upsert, update, table recreation, and table truncation, and several types of target data stores&mdash;also known as sinks. ETL developers can also create aggregations, including time series aggregations that require a window to be placed on data columns.
+Professional ETL developers can use Data Factory mapping data flows to filter, split, join (many types), lookup, pivot, unpivot, sort, union, and aggregate data without writing any code. In addition, Data Factory supports surrogate keys, multiple write processing options such as insert, upsert, update, table recreation, and table truncation, and several types of target data stores&mdash;also known as sinks. ETL developers can also create aggregations, including time-series aggregations that require a window to be placed on data columns.
> [!TIP] > Data Factory supports the ability to automatically detect and manage schema changes in inbound data, such as in streaming data.
Data engineers can profile data quality and view the results of individual data
> [!TIP] > Data Factory pipelines are also extensible since Data Factory allows you to write your own code and run it as part of a pipeline.
-Extend Data Factory transformational and analytical functionality by adding a linked service containing your own code into a pipeline. For example, an Azure Synapse Spark Pool Notebook containing Python code could use a trained model to score the data integrated by a mapping data flow.
+Extend Data Factory transformational and analytical functionality by adding a linked service containing your own code into a pipeline. For example, an Azure Synapse Spark pool notebook containing Python code could use a trained model to score the data integrated by a mapping data flow.
Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores such as Azure Data Lake Storage, Azure Synapse, or Azure HDInsight (Hive tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
Another new capability in Data Factory is wrangling data flows. This lets busine
:::image type="content" source="../media/6-microsoft-3rd-party-migration-tools/azure-data-factory-wrangling-dataflows.png" border="true" alt-text="Screenshot showing an example of Azure Data Factory wrangling dataflows.":::
-This differs from Excel and Power BI, as Data Factory wrangling data flows uses Power Query Online to generate M code and translate it into a massively parallel in-memory Spark job for cloud-scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets IT professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark Pool Notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
+This differs from Excel and Power BI, as Data Factory [wrangling data flows](/azure/data-factory/wrangling-tutorial) use Power Query to generate M code and translate it into a massively parallel in-memory Spark job for cloud-scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets IT professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark pool notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
#### Link data and analytics in analytical pipelines
-In addition to cleaning and transforming data, Azure Data Factory can combine data integration and analytics in the same pipeline. Use Data Factory to create both data integration and analytical pipelines&mdash;the latter being an extension of the former. Drop an analytical model into a pipeline so that clean, integrated data can be stored to provide predictions or recommendations. Act on this information immediately or store it in your data warehouse to provide you with new insights and recommendations that can be viewed in BI tools.
+In addition to cleaning and transforming data, Data Factory can combine data integration and analytics in the same pipeline. Use Data Factory to create both data integration and analytical pipelines&mdash;the latter being an extension of the former. Drop an analytical model into a pipeline so that clean, integrated data can be stored to provide predictions or recommendations. Act on this information immediately or store it in your data warehouse to provide you with new insights and recommendations that can be viewed in BI tools.
-Models developed code-free with Azure Machine Learning Studio or with the Azure Machine Learning SDK using Azure Synapse Spark Pool Notebooks or using R in RStudio can be invoked as a service from within a Data Factory pipeline to batch score your data. Analysis happens at scale by executing Spark machine learning pipelines on Azure Synapse Spark Pool Notebooks.
+Models developed code-free with Azure Machine Learning Studio, or with the Azure Machine Learning SDK using Azure Synapse Spark pool notebooks or using R in RStudio, can be invoked as a service from within a Data Factory pipeline to batch score your data. Analysis happens at scale by executing Spark machine learning pipelines on Azure Synapse Spark pool notebooks.
Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores, such as Azure Data Lake Storage, Azure Synapse, or Azure HDInsight (Hive tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
Store integrated data and any results from analytics included in a Data Factory
A key objective in any data integration setup is the ability to integrate data once and reuse it everywhere, not just in a data warehouse&mdash;for example, in data science. Reuse avoids reinvention and ensures consistent, commonly understood data that everyone can trust. > [!TIP]
-> Azure Data Lake is shared storage that underpins Microsoft Azure Synapse, Azure Machine Learning, Azure Synapse Spark, and Azure HDInsight.
+> Azure Data Lake Storage is shared storage that underpins Microsoft Azure Synapse, Azure Machine Learning, Azure Synapse Spark, and Azure HDInsight.
To achieve this goal, establish a set of common data names and definitions describing logical data entities that need to be shared across the enterprise&mdash;such as customer, account, product, supplier, orders, payments, returns, and so forth. Once this is done, IT and business professionals can use data integration software to create these common data assets and store them to maximize their reuse to drive consistency everywhere. > [!TIP] > Integrating data to create lake database logical entities in shared storage enables maximum reuse of common data assets.
-Microsoft has done this by creating a [lake database](../../database-designer/concepts-lake-database.md). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry specific database templates to help standardize data in the lake. [Lake database templates](../../database-designer/concepts-database-templates.md) provide schemas for predefined business areas, enabling data to the loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake Storage by using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse and Azure Machine Learning. The following diagram shows a lake database used in Azure Synapse Analytics.
+Microsoft has done this by creating a [lake database](../../database-designer/concepts-lake-database.md). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry-specific database templates to help standardize data in the lake. [Lake database templates](../../database-designer/concepts-database-templates.md) provide schemas for predefined business areas, enabling data to be loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake Storage by using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse, and Azure Machine Learning. The following diagram shows a lake database used in Azure Synapse Analytics.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-synapse-analytics-lake-database.png" border="true" alt-text="Screenshot showing how a lake database can be used in Azure Synapse Analytics.":::
Another key requirement in modernizing your migrated data warehouse is to integr
### Microsoft technologies for data science on Azure > [!TIP]
-> Develop machine learning models using a no/low code approach or from a range of programming languages like Python, R and .NET.
+> Develop machine learning models using a no/low-code approach or from a range of programming languages like Python, R, and .NET.
Microsoft offers a range of technologies to build predictive analytical models using machine learning, analyze unstructured data using deep learning, and perform other kinds of advanced analytics. This includes:
Microsoft offers a range of technologies to build predictive analytical models u
- Azure Machine Learning -- Azure Synapse Spark Pool Notebooks
+- Azure Synapse Spark pool notebooks
-- ML.NET (API, CLI or .NET Model Builder for Visual Studio)
+- ML.NET (API, CLI, or ML.NET Model Builder for Visual Studio)
- .NET for Apache Spark
Azure Machine Learning Studio is a fully managed cloud service that lets you eas
> [!TIP] > Azure Machine Learning provides an SDK for developing machine learning models using several open-source frameworks.
-Azure Machine Learning provides a software development kit (SDK) and services for Python to quickly prepare data, as well as train and deploy machine learning models. Use Azure Machine Learning from Azure notebooks (a Jupyter Notebook service) and utilize open-source frameworks, such as PyTorch, TensorFlow, Spark MLlib (Azure Synapse Spark Pool Notebooks), or scikit-learn. Azure Machine Learning provides an AutoML capability that automatically identifies the most accurate algorithms to expedite model development. You can also use it to build machine learning pipelines that manage end-to-end workflow, programmatically scale on the cloud, and deploy models both to the cloud and the edge. Azure Machine Learning uses logical containers called workspaces, which can be either created manually from the Azure portal or created programmatically. These workspaces keep compute targets, experiments, data stores, trained machine learning models, docker images, and deployed services all in one place to enable teams to work together. Use Azure Machine Learning from Visual Studio with a Visual Studio for AI extension.
+Azure Machine Learning provides a software development kit (SDK) and services for Python to quickly prepare data, as well as train and deploy machine learning models. Use Azure Machine Learning from Azure notebooks (a Jupyter Notebook service) and utilize open-source frameworks, such as PyTorch, TensorFlow, Spark MLlib (Azure Synapse Spark pool notebooks), or scikit-learn. Azure Machine Learning provides an AutoML capability that automatically identifies the most accurate algorithms to expedite model development. You can also use it to build machine learning pipelines that manage end-to-end workflow, programmatically scale on the cloud, and deploy models both to the cloud and the edge. Azure Machine Learning uses logical containers called workspaces, which can be either created manually from the Azure portal or created programmatically. These workspaces keep compute targets, experiments, data stores, trained machine learning models, Docker images, and deployed services all in one place to enable teams to work together. Use Azure Machine Learning from Visual Studio with a Visual Studio for AI extension.
> [!TIP]
-> Organize and manage related data stores, experiments, trained models, docker images and deployed services in workspaces.
+> Organize and manage related data stores, experiments, trained models, Docker images, and deployed services in workspaces.
-#### Azure Synapse Spark Pool Notebooks
+#### Azure Synapse Spark pool notebooks
> [!TIP]
-> Azure Synapse Spark is Microsoft's dynamically scalable Spark-as-a-service offering scalable execution of data preparation, model development and deployed model execution.
+> Azure Synapse Spark is Microsoft's dynamically scalable Spark-as-a-service, offering scalable execution of data preparation, model development, and deployed model execution.
-[Azure Synapse Spark Pool Notebooks](../../spark/apache-spark-development-using-notebooks.md?msclkid=cbe4b8ebcff511eca068920ea4bf16b9) is an Apache Spark service optimized to run on Azure which:
+[Azure Synapse Spark pool notebooks](../../spark/apache-spark-development-using-notebooks.md?msclkid=cbe4b8ebcff511eca068920ea4bf16b9) run on an Apache Spark service optimized for Azure, which:
-- Allows data engineers to build and execute scalable data preparation jobs using Azure Data Factory
+- Allows data engineers to build and execute scalable data preparation jobs using Azure Data Factory.
-- Allows data scientists to build and execute machine learning models at scale using notebooks written in languages such as Scala, R, Python, Java, and SQL; and to visualize results
+- Allows data scientists to build and execute machine learning models at scale using notebooks written in languages such as Scala, R, Python, Java, and SQL; and to visualize results.
> [!TIP] > Azure Synapse Spark can access data in a range of Microsoft analytical ecosystem data stores on Azure.
-Jobs running in Azure Synapse Spark Pool Notebook can retrieve, process, and analyze data at scale from Azure Blob Storage, Azure Data Lake Storage, Azure Synapse, Azure HDInsight, and streaming data services such as Kafka.
+Jobs running in Azure Synapse Spark pool notebooks can retrieve, process, and analyze data at scale from Azure Blob Storage, Azure Data Lake Storage, Azure Synapse, Azure HDInsight, and streaming data services such as Kafka.
Autoscaling and auto-termination are also supported to reduce total cost of ownership (TCO). Data scientists can use the MLflow open-source framework to manage the machine learning lifecycle.
Autoscaling and auto-termination are also supported to reduce total cost of owne
> [!TIP] > Microsoft has extended its machine learning capability to .NET developers.
-ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS), created by Microsoft for .NET developers so that they can use existing tools&mdash;like .NET Model Builder for Visual Studio&mdash;to develop custom machine learning models and integrate them into .NET applications.
+ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS), created by Microsoft for .NET developers so that they can use existing tools&mdash;like ML.NET Model Builder for Visual Studio&mdash;to develop custom machine learning models and integrate them into .NET applications.
#### .NET for Apache Spark
-.NET for Apache Spark aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark Pool Notebook.
+.NET for Apache Spark aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available in Azure Synapse Spark pool notebooks.
### Use Azure Synapse Analytics with your data warehouse > [!TIP]
-> Train, test, evaluate, and execute machine learning models at scale on Azure Synapse Spark Pool Notebook using data in Azure Synapse.
+> Train, test, evaluate, and execute machine learning models at scale in Azure Synapse Spark pool notebooks by using data in Azure Synapse.
-Combine machine learning models built using the tools with Azure Synapse by:
+Combine machine learning models with Azure Synapse by:
- Using machine learning models in batch mode or in real-time to produce new insights, and add them to what you already know in Azure Synapse.
Combine machine learning models built using the tools with Azure Synapse by:
> [!TIP] > Produce new insights using machine learning on Azure in batch or in real-time and add to what you know in your data warehouse.
-In terms of machine learning model development, data scientists can use RStudio, Jupyter Notebooks, and Azure Synapse Spark Pool notebooks together with Microsoft Azure Machine Learning to develop machine learning models that run at scale on Azure Synapse Spark Pool Notebooks using data in Azure Synapse. For example, they could create an unsupervised model to segment customers for use in driving different marketing campaigns. Use supervised machine learning to train a model to predict a specific outcome, such as predicting a customer's propensity to churn, or recommending the next best offer for a customer to try to increase their value. The next diagram shows how Azure Synapse Analytics can be leveraged for Machine Learning.
+In terms of machine learning model development, data scientists can use RStudio, Jupyter Notebooks, and Azure Synapse Spark pool notebooks together with Azure Machine Learning to develop machine learning models that run at scale on Azure Synapse Spark pool notebooks using data in Azure Synapse. For example, they could create an unsupervised model to segment customers for use in driving different marketing campaigns. Use supervised machine learning to train a model to predict a specific outcome, such as predicting a customer's propensity to churn, or recommending the next best offer for a customer to try to increase their value. The next diagram shows how Azure Synapse Analytics can be leveraged for Azure Machine Learning.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-synapse-train-predict.png" border="true" alt-text="Screenshot of an Azure Synapse Analytics train and predict model.":::
-In addition, you can ingest big data&mdash;such as social network data or review website data&mdash;into Azure Data Lake, then prepare and analyze it at scale on Azure Synapse Spark Pool Notebook, using natural language processing to score sentiment about your products or your brand. Add these scores to your data warehouse to understand the impact of&mdash;for example&mdash;negative sentiment on product sales, and to leverage big data analytics to add to what you already know in your data warehouse.
+In addition, you can ingest big data&mdash;such as social network data or review website data&mdash;into Azure Data Lake, then prepare and analyze it at scale in Azure Synapse Spark pool notebooks, using natural language processing to score sentiment about your products or your brand. Add these scores to your data warehouse to understand the impact of&mdash;for example&mdash;negative sentiment on product sales, and to leverage big data analytics to add to what you already know in your data warehouse.
## Integrate live streaming data into Azure Synapse Analytics
When analyzing data in a modern data warehouse, you must be able to analyze stre
Once you've successfully migrated your data warehouse to Azure Synapse, you can introduce this capability as part of a data warehouse modernization exercise. Do this by taking advantage of additional functionality in Azure Synapse. > [!TIP]
-> Ingest streaming data into Azure Data Lake Storage from Microsoft Event Hub or Kafka, and access it from Azure Synapse using PolyBase external tables.
+> Ingest streaming data into Azure Data Lake Storage from Azure Event Hubs or Kafka, and access it from Azure Synapse using PolyBase external tables.
-To do this, ingest streaming data via Microsoft Event Hubs or other technologies, such as Kafka, using Azure Data Factory (or using an existing ETL tool if it supports the streaming data sources). Store the data in Azure Data Lake Storage (ADLS). Next, create an external table in Azure Synapse using PolyBase and point it at the data being streamed into Azure Data Lake. Your migrated data warehouse will now contain new tables that provide access to real-time streaming data. Query this external table via standard TSQL from any BI tool that has access to Azure Synapse. You can also join this data to other tables containing historical data and create views that join live streaming data to historical data to make it easier for business users to access. In the following diagram, a real-time data warehouse on Azure Synapse Analytics is integrated with streaming data in Data Lake Storage.
+To do this, ingest streaming data via Azure Event Hubs or other technologies, such as Kafka, using Azure Data Factory (or using an existing ETL tool if it supports the streaming data sources). Store the data in Azure Data Lake Storage (ADLS). Next, create an external table in Azure Synapse using PolyBase and point it at the data being streamed into Azure Data Lake. Your migrated data warehouse will now contain new tables that provide access to real-time streaming data. Query this external table as if the data were in the data warehouse via standard T-SQL from any BI tool that has access to Azure Synapse. You can also create views that join the live streaming data to historical tables so that the combined data is easier for business users to access. In the following diagram, a real-time data warehouse on Azure Synapse Analytics is integrated with streaming data in ADLS.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-datalake-streaming-data.png" border="true" alt-text="Screenshot of Azure Synapse Analytics with streaming data in an Azure Data Lake.":::
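As an illustration of this pattern, the following T-SQL sketch defines a PolyBase external table over streamed Parquet files in ADLS for a dedicated SQL pool. The data source, file format, storage paths, and column names are all hypothetical, and the example assumes that a database scoped credential for the storage account already exists.

```sql
-- Minimal sketch (hypothetical names and paths): expose streamed files in ADLS
-- as a PolyBase external table in a dedicated SQL pool.
CREATE EXTERNAL DATA SOURCE StreamingLake
WITH (
    TYPE = HADOOP,
    LOCATION = 'abfss://streaming@mydatalake.dfs.core.windows.net',
    CREDENTIAL = AdlsCredential          -- assumes an existing database scoped credential
);

CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);

CREATE EXTERNAL TABLE dbo.SensorReadingsExternal
(
    DeviceId    INT,
    ReadingTime DATETIME2,
    Temperature FLOAT
)
WITH (
    DATA_SOURCE = StreamingLake,
    LOCATION    = '/sensor-readings/',   -- folder that the streaming pipeline writes to
    FILE_FORMAT = ParquetFormat
);
```

A view can then union this external table with a table of historical readings so that business users query a single object.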
To do this, ingest streaming data via Microsoft Event Hubs or other technologies
PolyBase offers the capability to create a logical data warehouse to simplify user access to multiple analytical data stores.
-This is attractive because many companies have adopted 'workload optimized' analytical data stores over the last several years in addition to their data warehouses. Examples of these platforms on Azure include:
+This is attractive because many companies have adopted "workload optimized" analytical data stores over the last several years in addition to their data warehouses. Examples of these platforms on Azure include:
-- Azure Data Lake Storage with Azure Synapse Spark Pool Notebook (Spark-as-a-service), for big data analytics
+- ADLS with Azure Synapse Spark pool notebooks (Spark-as-a-service), for big data analytics.
-- Azure HDInsight (Hadoop as-a-service), also for big data analytics
+- Azure HDInsight (Hadoop as-a-service), also for big data analytics.
-- NoSQL Graph databases for graph analysis, which could be done in Azure Cosmos DB
+- NoSQL Graph databases for graph analysis, which could be done in Azure Cosmos DB.
-- Azure Event Hubs and Azure Stream Analytics, for real-time analysis of data in motion
+- Azure Event Hubs and Azure Stream Analytics, for real-time analysis of data in motion.
You may have non-Microsoft equivalents of some of these. You may also have a master data management (MDM) system that needs to be accessed for consistent trusted data on customers, suppliers, products, assets, and more.
These additional analytical platforms have emerged because of the explosion of n
- Machine generated data, such as IoT sensor data and clickstream data. -- Human generated data, such as social network data, review web site data, customer in-bound email, image, and video.
+- Human-generated data, such as social network data, review website data, customer inbound email, images, and video.
- Other external data, such as open government data and weather data.
-This data is over and above the structured transaction data and master data sources that typically feed data warehouses. These new data sources include semi-structured data (like JSON, XML, or Avro) or unstructured data (like text, voice, image, or video) which is more complex to process and analyze. This data could be very high volume, high velocity, or both.
+This data is over and above the structured transaction data and master data sources that typically feed data warehouses. These new data sources include semi-structured data (like JSON, XML, or Avro) or unstructured data (like text, voice, image, or video), which is more complex to process and analyze. This data could be very high volume, high velocity, or both.
-As a result, the need for new kinds of more complex analysis has emerged, such as natural language processing, graph analysis, deep learning, streaming analytics, or complex analysis of large volumes of structured data. All of this is typically not happening in a data warehouse, so it's not surprising to see different analytical platforms for different types of analytical workloads, as shown in this diagram.
+As a result, the need for new kinds of more complex analysis has emerged, such as natural language processing, graph analysis, deep learning, streaming analytics, or complex analysis of large volumes of structured data. All of this is typically not happening in a data warehouse, so it's not surprising to see different analytical platforms for different types of analytical workloads, as shown in the following diagram.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/analytical-workload-platforms.png" border="true" alt-text="Screenshot of different analytical platforms for different types of analytical workloads in Azure Synapse Analytics.":::
Since these platforms are producing new insights, it's normal to see a requireme
> [!TIP] > The ability to make data in multiple analytical data stores look like it's all in one system and join it to Azure Synapse is known as a logical data warehouse architecture.
-By leveraging PolyBase data virtualization inside Azure Synapse, you can implement a logical data warehouse. Join data in Azure Synapse to data in other Azure and on-premises analytical data stores&mdash;like Azure HDInsight or Cosmos DB&mdash;or to streaming data flowing into Azure Data Lake Storage from Azure Stream Analytics and Event Hubs. Users access external tables in Azure Synapse, unaware that the data they're accessing is stored in multiple underlying analytical systems. The next diagram shows the complex data warehouse structure accessed through comparatively simpler but still powerful user interface methods.
+By leveraging PolyBase data virtualization inside Azure Synapse, you can implement a logical data warehouse. Join data in Azure Synapse to data in other Azure and on-premises analytical data stores&mdash;like Azure HDInsight or Azure Cosmos DB&mdash;or to streaming data flowing into ADLS from Azure Stream Analytics and Event Hubs. Users access external tables in Azure Synapse, unaware that the data they're accessing is stored in multiple underlying analytical systems. The next diagram shows the complex data warehouse structure accessed through comparatively simpler but still powerful user interface methods.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/complex-data-warehouse-structure.png" alt-text="Screenshot showing an example of a complex data warehouse structure accessed through user interface methods.":::
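As a minimal sketch of the logical data warehouse pattern (all object names are hypothetical), a view can hide the fact that one of the joined tables is a PolyBase external table over data held outside the dedicated SQL pool:

```sql
-- Hypothetical example: dbo.DimCustomer is stored in Azure Synapse, while
-- dbo.WebClickSummaryExternal is a PolyBase external table over data in ADLS.
CREATE VIEW dbo.CustomerActivity
AS
SELECT c.CustomerId,
       c.CustomerName,
       w.PageViews,
       w.LastVisit
FROM   dbo.DimCustomer             AS c
JOIN   dbo.WebClickSummaryExternal AS w
       ON w.CustomerId = c.CustomerId;
```

BI tools query `dbo.CustomerActivity` like any other table, unaware that part of the data is virtualized.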
-The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into Azure Data Lake Storage and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](../../database-designer/concepts-lake-database.md) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark Pool Notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
+The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into ADLS and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](../../database-designer/concepts-lake-database.md) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark pool notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
> [!TIP] > A logical data warehouse architecture simplifies business user access to data and adds new value to what you already know in your data warehouse.
The previous diagram shows how other technologies of the Microsoft analytical ec
> [!TIP] > Migrating your data warehouse to Azure Synapse lets you make use of a rich Microsoft analytical ecosystem running on Azure.
-Once you migrate your data warehouse to Azure Synapse, you can leverage other technologies in the Microsoft analytical ecosystem. You can't only modernize your data warehouse, but combine insights produced in other Azure analytical data stores into an integrated analytical architecture.
+Once you migrate your data warehouse to Azure Synapse, you can leverage other technologies in the Microsoft analytical ecosystem. You not only modernize your data warehouse, but also combine insights produced in other Azure analytical data stores into an integrated analytical architecture.
-Broaden your ETL processing to ingest data of any type into Azure Data Lake Storage. Prepare and integrate it at scale using Azure Data Factory to produce trusted, commonly understood data assets that can be consumed by your data warehouse and accessed by data scientists and other applications. Build real-time and batch-oriented analytical pipelines and create machine learning models to run in batch, in-real-time on streaming data and on-demand as a service.
+Broaden your ETL processing to ingest data of any type into ADLS. Prepare and integrate it at scale using Azure Data Factory to produce trusted, commonly understood data assets that can be consumed by your data warehouse and accessed by data scientists and other applications. Build real-time and batch-oriented analytical pipelines and create machine learning models to run in batch, in real-time on streaming data, and on-demand as a service.
Leverage PolyBase and `COPY INTO` to go beyond your data warehouse. Simplify access to insights from multiple underlying analytical platforms on Azure by creating holistic integrated views in a logical data warehouse. Easily access streaming, big data, and traditional data warehouse insights from BI tools and applications to drive new value in your business.
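As a hedged sketch of the `COPY INTO` approach (the storage URL, target table, and credential type are assumptions), a load of Parquet files from ADLS into a warehouse table might look like this:

```sql
-- Minimal sketch (hypothetical account, container, and table names).
COPY INTO dbo.SensorReadings
FROM 'https://mydatalake.blob.core.windows.net/streaming/sensor-readings/*.parquet'
WITH (
    FILE_TYPE  = 'PARQUET',
    CREDENTIAL = (IDENTITY = 'Managed Identity')
);
```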
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/1-design-performance-migration.md
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Design and performance for Teradata migrations
This article is part one of a seven part series that provides guidance on how to
## Overview
-Many existing users of Teradata data warehouse systems want to take advantage of the innovations provided by newer environments such as cloud, IaaS, or PaaS, and to delegate tasks like infrastructure maintenance and platform development to the cloud provider.
+Many existing users of Teradata data warehouse systems want to take advantage of the innovations provided by newer environments such as cloud, IaaS, and PaaS, and to delegate tasks like infrastructure maintenance and platform development to the cloud provider.
> [!TIP] > More than just a database&mdash;the Azure environment includes a comprehensive set of capabilities and tools. Although Teradata and Azure Synapse Analytics are both SQL databases designed to use massively parallel processing (MPP) techniques to achieve high query performance on exceptionally large data volumes, there are some basic differences in approach: -- Legacy Teradata systems are often installed on-premises and use proprietary hardware, while Azure Synapse is cloud based and uses Azure storage and compute resources.
+- Legacy Teradata systems are often installed on-premises and use proprietary hardware, while Azure Synapse is cloud-based and uses Azure Storage and compute resources.
-- Since storage and compute resources are separate in the Azure environment, these resources can be scaled upwards and downwards independently, leveraging the elastic scaling capability.
+- Since storage and compute resources are separate in the Azure environment, these resources can be scaled upwards or downwards independently, leveraging the elastic scaling capability.
- Azure Synapse can be paused or resized as required to reduce resource utilization and cost. - Upgrading a Teradata configuration is a major task involving additional physical hardware and potentially lengthy database reconfiguration or reload.
-Microsoft Azure is a globally available, highly secure, scalable cloud environment, that includes Azure Synapse and an ecosystem of supporting tools and capabilities. The next diagram summarizes the Azure Synapse ecosystem.
+Microsoft Azure is a globally available, highly secure, scalable cloud environment that includes Azure Synapse and an ecosystem of supporting tools and capabilities. The next diagram summarizes the Azure Synapse ecosystem.
:::image type="content" source="../media/1-design-performance-migration/azure-synapse-ecosystem.png" border="true" alt-text="Chart showing the Azure Synapse ecosystem of supporting tools and capabilities.":::
Legacy Teradata environments have typically evolved over time to encompass multi
- Prove the viability of migrating to Azure Synapse by quickly delivering the benefits of the new environment. -- Allow the in-house technical staff to gain relevant experience of the processes and tools involved which can be used in migrations to other areas.
+- Allow the in-house technical staff to gain relevant experience of the processes and tools involved, which can be used in migrations to other areas.
- Create a template for further migrations specific to the source Teradata environment and the current tools and processes that are already in place.
-A good candidate for an initial migration from the Teradata environment that would enable the items above, is typically one that implements a BI/Analytics workload, rather than an online transaction processing (OLTP) workload, with a data model that can be migrated with minimal modifications&mdash;normally a star or snowflake schema.
+A good candidate for an initial migration from the Teradata environment that would enable the preceding items is typically one that implements a BI/Analytics workload, rather than an online transaction processing (OLTP) workload, with a data model that can be migrated with minimal modification, normally a star or snowflake schema.
-The migration data volume for the initial exercise should be large enough to demonstrate the capabilities and benefits of the Azure Synapse environment while quickly demonstrating the value&mdash;typically in the 1-10TB range.
+The migration data volume for the initial exercise should be large enough to demonstrate the capabilities and benefits of the Azure Synapse environment while quickly demonstrating the value&mdash;typically in the 1-10 TB range.
To minimize the risk and reduce implementation time for the initial migration project, confine the scope of the migration to just the data marts, such as the OLAP DB part of a Teradata warehouse. However, this won't address the broader topics such as ETL migration and historical data migration. Address these topics in later phases of the project, once the migrated data mart layer is backfilled with the data and processes required to build them. #### Lift and shift as-is versus a phased approach incorporating changes > [!TIP]
-> 'Lift and shift' is a good starting point, even if subsequent phases will implement changes to the data model.
+> "Lift and shift" is a good starting point, even if subsequent phases will implement changes to the data model.
Whatever the drive and scope of the intended migration, there are&mdash;broadly speaking&mdash;two types of migration:
This is a good fit for existing Teradata environments where a single data mart i
##### Phased approach incorporating modifications
-In cases where a legacy warehouse has evolved over a long time, you may need to re-engineer to maintain the required performance levels or to support new data like IoT streams. Migrate to Azure Synapse to get the benefits of a scalable cloud environment as part of the re-engineering process. Migration could include a change in the underlying data model, such as a move from an Inmon model to a data vault.
+In cases where a legacy warehouse has evolved over a long time, you might need to re-engineer to maintain the required performance levels or to support new data, such as Internet of Things (IoT) streams. Migrate to Azure Synapse to get the benefits of a scalable cloud environment as part of the re-engineering process. Migration could include a change in the underlying data model, such as a move from an Inmon model to a data vault.
Microsoft recommends moving the existing data model as-is to Azure (optionally using a VM Teradata instance in Azure) and using the performance and flexibility of the Azure environment to apply the re-engineering changes, leveraging Azure's capabilities to make the changes without impacting the existing source system.
With this approach, standard Teradata utilities such as Teradata Parallel Data T
#### Use Azure Data Factory to implement a metadata-driven migration
-Automate and orchestrate the migration process by making use of the capabilities in the Azure environment. This approach minimizes the impact on the existing Teradata environment, which may already be running close to full capacity.
+Automate and orchestrate the migration process by using the capabilities of the Azure environment. This approach minimizes the impact on the existing Teradata environment, which may already be running close to full capacity.
-Data Factory is a cloud-based data integration service that allows creation of data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Using Data Factory, you can create and schedule data-driven workflows&mdash;called pipelines&mdash;to ingest data from disparate data stores. It can process and transform data by using compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning.
+Azure Data Factory is a cloud-based data integration service that allows creation of data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Using Data Factory, you can create and schedule data-driven workflows&mdash;called pipelines&mdash;to ingest data from disparate data stores. Data Factory can process and transform data by using compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning.
By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage the migration process.
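As one possible sketch of such metadata (the table and column names are hypothetical), a simple control table could list the Teradata tables to migrate; a Data Factory Lookup activity feeding a ForEach activity could then iterate over its rows to drive the copy process:

```sql
-- Illustrative control table only; adapt the columns to your own migration process.
CREATE TABLE dbo.MigrationControl
(
    SourceDatabase  VARCHAR(128) NOT NULL,   -- Teradata database name
    SourceTable     VARCHAR(128) NOT NULL,   -- Teradata table name
    TargetSchema    VARCHAR(128) NOT NULL,   -- schema in Azure Synapse
    TargetTable     VARCHAR(128) NOT NULL,
    MigrationStatus VARCHAR(20)  NOT NULL,   -- for example 'Pending', 'Copied', 'Validated'
    LastRunTime     DATETIME2    NULL
);
```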
By creating metadata to list the data tables to be migrated and their location,
In a Teradata environment, there are often multiple separate databases for individual parts of the overall environment. For example, there may be a separate database for data ingestion and staging tables, a database for the core warehouse tables, and another database for data marts, sometimes called a semantic layer. Processing these as ETL/ELT pipelines may implement cross-database joins and will move data between these separate databases.
-Querying within the Azure Synapse environment is limited to a single database. Schemas are used to separate the tables into logically separate groups. Therefore, we recommend using a series of schemas within the target Azure Synapse to mimic any separate databases migrated from the Teradata environment. If the Teradata environment already uses schemas, you may need to use a new naming convention to move the existing Teradata tables and views to the new environment&mdash;for example, concatenate the existing Teradata schema and table names into the new Azure Synapse table name and use schema names in the new environment to maintain the original separate database names. Schema consolidation naming can have dots&mdash;however, Azure Synapse Spark may have issues. You can use SQL views over the underlying tables to maintain the logical structures, but there are some potential downsides to this approach:
+Querying within the Azure Synapse environment is limited to a single database. Schemas are used to separate the tables into logically separate groups. Therefore, we recommend using a series of schemas within the target Azure Synapse database to mimic any separate databases migrated from the Teradata environment. If the Teradata environment already uses schemas, you may need to use a new naming convention to move the existing Teradata tables and views to the new environment&mdash;for example, concatenate the existing Teradata schema and table names into the new Azure Synapse table name and use schema names in the new environment to maintain the original separate database names. Consolidated schema and table names can contain dots; however, Azure Synapse Spark may have issues with names that contain dots. You can use SQL views over the underlying tables to maintain the logical structures, but there are some potential downsides to this approach:
- Views in Azure Synapse are read-only, so any updates to the data must take place on the underlying base tables.
Querying within the Azure Synapse environment is limited to a single database. S
> [!TIP] > Use existing indexes to indicate candidates for indexing in the migrated warehouse.
-When migrating tables between different technologies, only the raw data and the metadata that describes it gets physically moved between the two environments. Other database elements from the source system&mdash;such as indexes&mdash;aren't migrated, as these may not be needed or may be implemented differently within the new target environment.
+When migrating tables between different technologies, only the raw data and the metadata that describes it gets physically moved between the two environments. Other database elements from the source system&mdash;such as indexes&mdash;aren't migrated as these may not be needed or may be implemented differently within the new target environment.
-However, it's important to understand where performance optimizations such as indexes have been used in the source environment, as this can indicate where to add performance optimization in the new target environment. For example, if a non-unique secondary index (NUSI) has been created within the source Teradata environment, it may indicate that a non-clustered index should be created within the migrated Azure Synapse. Other native performance optimization techniques, such as table replication, may be more applicable than a straight 'like for like' index creation.
+However, it's important to understand where performance optimizations such as indexes have been used in the source environment, as this can indicate where to add performance optimization in the new target environment. For example, if a non-unique secondary index (NUSI) has been created within the source Teradata environment, it may indicate that a non-clustered index should be created within the migrated Azure Synapse. Other native performance optimization techniques, such as table replication, may be more applicable than a straight "like-for-like" index creation.
#### High availability for the database
-Teradata supports data replication across nodes via the FALLBACK option, where table rows that reside physically on a given node are replicated to another node within the system. This approach guarantees that data won't be lost if there's a node failure and provides the basis for failover scenarios.
+Teradata supports data replication across nodes via the `FALLBACK` option, where table rows that reside physically on a given node are replicated to another node within the system. This approach guarantees that data won't be lost if there's a node failure and provides the basis for failover scenarios.
-The goal of the high availability architecture in Azure SQL Database is to guarantee that your database is up and running 99.9% of time, without worrying about the impact of maintenance operations and outages. Azure automatically handles critical servicing tasks such as patching, backups, and Windows and SQL upgrades, as well as unplanned events such as underlying hardware, software, or network failures.
+The goal of the high availability architecture in Azure SQL Database is to guarantee that your database is up and running 99.9% of the time, without worrying about the impact of maintenance operations and outages. Azure automatically handles critical servicing tasks such as patching, backups, and Windows and SQL upgrades, as well as unplanned events such as underlying hardware, software, or network failures.
Data storage in Azure Synapse is automatically [backed up](../../sql-data-warehouse/backup-and-restore.md) with snapshots. These snapshots are a built-in feature of the service that creates restore points. You don't have to enable this capability. Users can't currently delete automatic restore points where the service uses these restore points to maintain SLAs for recovery.
-Azure Synapse Dedicated SQL pool takes snapshots of the data warehouse throughout the day creating restore points that are available for seven days. This retention period can't be changed. SQL Data Warehouse supports an eight-hour recovery point objective (RPO). You can restore your data warehouse in the primary region from any one of the snapshots taken in the past seven days. If you require more granular backups, other user-defined options are available.
+Azure Synapse Dedicated SQL pool takes snapshots of the data warehouse throughout the day, creating restore points that are available for seven days. This retention period can't be changed. Azure Synapse supports an eight-hour recovery point objective (RPO). You can restore your data warehouse in the primary region from any one of the snapshots taken in the past seven days. If you require more granular backups, other user-defined options are available.
#### Unsupported Teradata table types > [!TIP]
-> Standard tables in Azure Synapse can support migrated Teradata time series and temporal data.
+> Standard tables in Azure Synapse can support migrated Teradata time-series and temporal data.
-Teradata supports special table types for time series and temporal data. The syntax and some of the functions for these table types aren't directly supported in Azure Synapse, but the data can be migrated into a standard table with appropriate data types and indexing or partitioning on the date/time column.
+Teradata supports special table types for time-series and temporal data. The syntax and some of the functions for these table types aren't directly supported in Azure Synapse, but the data can be migrated into a standard table with appropriate data types and indexing or partitioning on the date/time column.
Teradata implements the temporal query functionality via query rewriting to add additional filters within a temporal query to limit the applicable date range. If this functionality is currently used in the source Teradata environment and is to be migrated, add this additional filtering into the relevant temporal queries.
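As a hedged sketch of this approach (table, column, and partition boundary values are hypothetical), temporal rows can be landed in a standard table partitioned on the date column, with the temporal filter written explicitly in queries:

```sql
-- Hypothetical example: Teradata temporal data stored as a standard partitioned table.
CREATE TABLE dbo.PolicyHistory
(
    PolicyId     INT         NOT NULL,
    PolicyStatus VARCHAR(20) NOT NULL,
    ValidFrom    DATE        NOT NULL,
    ValidTo      DATE        NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(PolicyId),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION (ValidFrom RANGE RIGHT FOR VALUES ('2021-01-01', '2022-01-01'))
);

-- Equivalent of a temporal "AS OF" query: the date-range filter is added explicitly.
SELECT PolicyId, PolicyStatus
FROM   dbo.PolicyHistory
WHERE  '2022-03-31' >= ValidFrom
  AND  '2022-03-31' <  ValidTo;
```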
Most modern database products allow for procedures to be stored within the datab
A stored procedure typically contains SQL statements and some procedural logic, and may return data or a status.
-Azure Synapse Analytics from Azure SQL Data Warehouse also supports stored procedures using T-SQL. If you must migrate stored procedures, recode these procedures for their new environment.
+Azure Synapse Analytics also supports stored procedures using T-SQL. If you must migrate stored procedures, recode these procedures for their new environment.
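A recoded procedure is usually standard T-SQL. The following is a minimal, hypothetical sketch rather than a translation of any specific Teradata procedure:

```sql
-- Hypothetical example of a stored procedure recoded in Azure Synapse T-SQL.
CREATE PROCEDURE dbo.usp_RefreshDailySales
AS
BEGIN
    -- Replace today's rows in the reporting table from the staged transactions.
    DELETE FROM dbo.DailySales
    WHERE  SalesDate = CAST(GETDATE() AS DATE);

    INSERT INTO dbo.DailySales (SalesDate, StoreId, TotalAmount)
    SELECT CAST(SaleTimestamp AS DATE), StoreId, SUM(Amount)
    FROM   stg.SalesTransactions
    WHERE  CAST(SaleTimestamp AS DATE) = CAST(GETDATE() AS DATE)
    GROUP BY CAST(SaleTimestamp AS DATE), StoreId;
END;
```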
##### Triggers
With Azure Synapse, sequences are handled in a similar way to Teradata. Use [IDE
> [!TIP] > Use existing Teradata metadata to automate the generation of CREATE TABLE and CREATE VIEW DDL for Azure Synapse Analytics.
-You can edit existing Teradata CREATE TABLE and CREATE VIEW scripts to create the equivalent definitions with modified data types, if necessary, as described in the previous section. Typically, this involves removing extra Teradata-specific clauses such as FALLBACK.
+You can edit existing Teradata `CREATE TABLE` and `CREATE VIEW` scripts to create the equivalent definitions with modified data types, if necessary, as described in the previous section. Typically, this involves removing extra Teradata-specific clauses such as `FALLBACK`.
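For illustration only (names, types, and the distribution choice are hypothetical), the edit typically turns a Teradata definition such as the commented-out version below into an Azure Synapse definition with the Teradata-specific clauses removed and a distribution option added:

```sql
-- Teradata (source), shown as a comment for comparison:
-- CREATE MULTISET TABLE Sales.FactOrders, FALLBACK
-- (
--     OrderId   INTEGER NOT NULL,
--     OrderDate DATE,
--     Amount    DECIMAL(18,2)
-- )
-- PRIMARY INDEX (OrderId);

-- Azure Synapse (target) equivalent:
CREATE TABLE Sales.FactOrders
(
    OrderId   INT           NOT NULL,
    OrderDate DATE          NULL,
    Amount    DECIMAL(18,2) NULL
)
WITH
(
    DISTRIBUTION = HASH(OrderId),
    CLUSTERED COLUMNSTORE INDEX
);
```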
However, all the information that specifies the current definitions of tables and views within the existing Teradata environment is maintained within system catalog tables. These tables are the best source of this information, as it's guaranteed to be up to date and complete. User-maintained documentation may not be in sync with the current table definitions.
-Access the information in these tables via views into the catalog such as `DBC.ColumnsV`, and generate the equivalent CREATE TABLE DDL statements for the equivalent tables in Azure Synapse.
+Access the information in these tables via views into the catalog such as `DBC.ColumnsV`, and generate the equivalent `CREATE TABLE` DDL statements for the equivalent tables in Azure Synapse.
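As a hedged sketch (the database name is hypothetical), a query like the following, run on the source Teradata system, returns the column metadata that a script or tool can translate into Azure Synapse `CREATE TABLE` statements:

```sql
-- Illustrative metadata query against the Teradata catalog view DBC.ColumnsV.
SELECT  DatabaseName,
        TableName,
        ColumnName,
        ColumnType,               -- Teradata type code, for example 'CV' = VARCHAR, 'I' = INTEGER
        ColumnLength,
        DecimalTotalDigits,
        DecimalFractionalDigits,
        Nullable
FROM    DBC.ColumnsV
WHERE   DatabaseName = 'Sales'    -- hypothetical source database
ORDER BY TableName, ColumnId;
```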
Third-party migration and ETL tools also use the catalog information to achieve the same result.
Call Teradata Parallel Transporter directly from Azure Data Factory. This is the
Recommended data formats for the extracted data include delimited text files (also called Comma Separated Values or CSV), Optimized Row Columnar (ORC), or Parquet files.
-For more detailed information on the process of migrating data and ETL from a Teradata environment, see [Data migration, ETL, and load for Teradata migration](2-etl-load-migration-considerations.md).
+For more information about the process of migrating data and ETL from a Teradata environment, see [Data migration, ETL, and load for Teradata migrations](2-etl-load-migration-considerations.md).
## Performance recommendations for Teradata migrations
This section highlights lower-level implementation differences between Teradata
Azure enables the specification of data distribution methods for individual tables. The aim is to reduce the amount of data that must be moved between processing nodes when executing a query.
-For large table-large table joins, hash distribute one or ideally both tables on one of the join columns&mdash;which has a wide range of values to help ensure an even distribution. Perform join processing locally, as the data rows to be joined will already be collocated on the same processing node.
+For large table-large table joins, hash distribute one or, ideally, both tables on one of the join columns&mdash;which has a wide range of values to help ensure an even distribution. Perform join processing locally, as the data rows to be joined will already be collocated on the same processing node.
-Another way to achieve local joins for small table-large table joins&mdash;typically dimension table to fact table in a star schema model&mdash;is to replicate the smaller dimension table across all nodes. This ensures that any value of the join key of the larger table will have a matching dimension row locally available. The overhead of replicating the dimension tables is relatively low, provided the tables aren't very large (see [Design guidance for replicated tables](../../sql-data-warehouse/design-guidance-for-replicated-tables.md))&mdash;in which case, the hash distribution approach as described above is more appropriate. For more information, see [Distributed tables design](../../sql-data-warehouse/sql-data-warehouse-tables-distribute.md).
+Another way to achieve local joins for small table-large table joins&mdash;typically dimension table to fact table in a star schema model&mdash;is to replicate the smaller dimension table across all nodes. This ensures that any value of the join key of the larger table will have a matching dimension row locally available. The overhead of replicating the dimension tables is relatively low, provided the tables aren't very large (see [Design guidance for replicated tables](../../sql-data-warehouse/design-guidance-for-replicated-tables.md))&mdash;in which case, the hash distribution approach as previously described is more appropriate. For more information, see [Distributed tables design](../../sql-data-warehouse/sql-data-warehouse-tables-distribute.md).
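As a sketch of these two distribution choices, with illustrative table and column names, a large fact table can be hash distributed on the commonly joined column while a small dimension table is replicated:

```sql
-- Large fact table: hash distribute on the column most often used in joins.
CREATE TABLE dbo.FactSales
(
    SaleId      BIGINT        NOT NULL,
    CustomerId  INT           NOT NULL,
    SaleDate    DATE          NOT NULL,
    SaleAmount  DECIMAL(18,2) NOT NULL
)
WITH (DISTRIBUTION = HASH (CustomerId), CLUSTERED COLUMNSTORE INDEX);

-- Small dimension table: replicate to every compute node so joins stay local.
CREATE TABLE dbo.DimCustomer
(
    CustomerId      INT          NOT NULL,
    CustomerRegion  NVARCHAR(50) NOT NULL
)
WITH (DISTRIBUTION = REPLICATE, CLUSTERED COLUMNSTORE INDEX);
```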
#### Data indexing
-Azure Synapse provides several indexing options, but these are different from the indexing options implemented in Teradata. More details of the different indexing options are described in [table indexes](/azure/sql-data-warehouse/sql-data-warehouse-tables-index).
+Azure Synapse provides several indexing options, but these are different from the indexing options implemented in Teradata. For more information about the different indexing options, see [table indexes](/azure/sql-data-warehouse/sql-data-warehouse-tables-index).
Existing indexes within the source Teradata environment can however provide a useful indication of how the data is currently used. They can identify candidates for indexing within the Azure Synapse environment.
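For example, a Teradata secondary index that supports frequent selective lookups on a particular column might become a nonclustered index on the migrated table, which otherwise defaults to a clustered columnstore index. The names below are illustrative.

```sql
-- Illustrative: add a nonclustered index for selective lookups on CustomerId.
CREATE INDEX IX_FactSales_CustomerId
    ON dbo.FactSales (CustomerId);
```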
Use [workload management](../../sql-data-warehouse/sql-data-warehouse-workload-m
## Next steps
-To learn more about ETL and load for Teradata migration, see the next article in this series: [Data migration, ETL, and load for Teradata migration](2-etl-load-migration-considerations.md).
+To learn more about ETL and load for Teradata migration, see the next article in this series: [Data migration, ETL, and load for Teradata migrations](2-etl-load-migration-considerations.md).
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/2-etl-load-migration-considerations.md
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Data migration, ETL, and load for Teradata migrations
When migrating a Teradata data warehouse, you need to ask some basic data-relate
- What's the best migration approach to minimize risk and user impact? -- When migrating data marts&mdash;stay physical or go virtual?
+- When migrating data marts: stay physical or go virtual?
The next sections discuss these points within the context of migration from Teradata.
When migrating from Teradata, consider creating a Teradata environment in a VM w
#### Use a VM Teradata instance as part of a migration
-One optional approach for migrating from an on-premises Teradata environment is to leverage the Azure environment to create a Teradata instance in a VM within Azure, co-located with the target Azure Synapse environment. This is possible because Azure provides cheap cloud storage and elastic scalability.
+One optional approach for migrating from an on-premises Teradata environment is to leverage the Azure environment to create a Teradata instance in a VM within Azure, collocated with the target Azure Synapse environment. This is possible because Azure provides cheap cloud storage and elastic scalability.
With this approach, standard Teradata utilities, such as Teradata Parallel Data Transporter&mdash;or third-party data replication tools, such as Attunity Replicate&mdash;can be used to efficiently move the subset of Teradata tables that need to be migrated to the VM instance. Then, all migration tasks can take place within the Azure environment. This approach has several benefits:
With this approach, standard Teradata utilities, such as Teradata Parallel Data
- The migration process is orchestrated and controlled entirely within the Azure environment.
-#### Migrate data marts - stay physical or go virtual?
+#### Migrate data marts: stay physical or go virtual?
> [!TIP] > Virtualizing data marts can save on storage and processing resources.
If these data marts are implemented as physical tables, they'll require addition
With the advent of relatively low-cost scalable MPP architectures, such as Azure Synapse, and the inherent performance characteristics of such architectures, you may be able to provide data mart functionality without having to instantiate the mart as a set of physical tables. This is achieved by effectively virtualizing the data marts via SQL views onto the main data warehouse, or via a virtualization layer using features such as views in Azure or the [virtualization products of Microsoft partners](../../partner/data-integration.md). This approach simplifies or eliminates the need for additional storage and aggregation processing and reduces the overall number of database objects to be migrated.
-There's another potential benefit to this approach: by implementing the aggregation and join logic within a virtualization layer, and presenting external reporting tools via a virtualized view, the processing required to create these views is pushed down into the data warehouse, which is generally the best place to run joins, aggregations, and other related operations on large data volumes.
+There's another potential benefit to this approach. By implementing the aggregation and join logic within a virtualization layer, and presenting external reporting tools via a virtualized view, the processing required to create these views is "pushed down" into the data warehouse, which is generally the best place to run joins, aggregations, and other related operations on large data volumes.
The primary drivers for choosing a virtual data mart implementation over a physical data mart are: -- More agility, since a virtual data mart is easier to change than physical tables and the associated ETL processes.
+- More agility: a virtual data mart is easier to change than physical tables and the associated ETL processes.
-- Lower total cost of ownership, since a virtualized implementation requires fewer data stores and copies of data.
+- Lower total cost of ownership: a virtualized implementation requires fewer data stores and copies of data.
- Elimination of ETL jobs to migrate and simplify data warehouse architecture in a virtualized environment. -- Performance, since although physical data marts have historically been more performant, virtualization products now implement intelligent caching techniques to mitigate.
+- Performance: although physical data marts have historically been more performant, virtualization products now implement intelligent caching techniques to mitigate.
### Data migration from Teradata #### Understand your data
-Part of migration planning is understanding in detail the volume of data that needs to be migrated, since that can impact decisions about the migration approach. Use system metadata to determine the physical space taken up by the raw data within the tables to be migrated. In this context, 'raw data' means the amount of space used by the data rows within a table, excluding overheads such as indexes and compression. This is especially true for the largest fact tables since these will typically comprise more than 95% of the data.
+Part of migration planning is understanding in detail the volume of data that needs to be migrated since that can impact decisions about the migration approach. Use system metadata to determine the physical space taken up by the "raw data" within the tables to be migrated. In this context, "raw data" means the amount of space used by the data rows within a table, excluding overheads such as indexes and compression. This is especially true for the largest fact tables since these will typically comprise more than 95% of the data.
-You can get an accurate number for the volume of data to be mitigated for a given table by extracting a representative sample of the data&mdash;for example, one million rows&mdash;to an uncompressed delimited flat ASCII data file. Then, use the size of that file to get an average raw data size per row of that table. Finally, multiply that average size by the total number of rows in the full table to give a raw data size for the table. Use that raw data size in your planning.
+You can get an accurate number for the volume of data to be migrated for a given table by extracting a representative sample of the data&mdash;for example, one million rows&mdash;to an uncompressed delimited flat ASCII data file. Then, use the size of that file to get an average raw data size per row of that table. Finally, multiply that average size by the total number of rows in the full table to give a raw data size for the table. Use that raw data size in your planning.
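As a complementary first approximation before sampling, you can query the Teradata catalog for the on-disk space each table currently occupies. Note that `CurrentPerm` reflects compressed storage including overheads such as indexes, so it supplements rather than replaces the raw data estimate described above; the database name is illustrative.

```sql
-- Teradata: on-disk space per table, summed across AMPs.
SELECT DatabaseName,
       TableName,
       SUM(CurrentPerm) / (1024 * 1024) AS CurrentPermMB
FROM DBC.TableSizeV
WHERE DatabaseName = 'SALES_DW'    -- illustrative database name
GROUP BY DatabaseName, TableName
ORDER BY 3 DESC;
```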
## ETL migration considerations
The following sections discuss migration options and make recommendations for va
:::image type="content" source="../media/2-etl-load-migration-considerations/migration-options-flowchart.png" border="true" alt-text="Flowchart of migration options and recommendations.":::
-The first step is always to build an inventory of ETL/ELT processes that need to be migrated. As with other steps, it's possible that the standard 'built-in' Azure features make it unnecessary to migrate some existing processes. For planning purposes, it's important to understand the scale of the migration to be performed.
+The first step is always to build an inventory of ETL/ELT processes that need to be migrated. As with other steps, it's possible that the standard "built-in" Azure features make it unnecessary to migrate some existing processes. For planning purposes, it's important to understand the scale of the migration to be performed.
In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](../../../data-factory/concepts-pipelines-activities.md?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use.
In the Teradata environment, some or all ETL processing may be performed by cust
> [!TIP] > Leverage investment in existing third-party tools to reduce cost and risk.
-If a third-party ETL tool is already in use, and especially if there's a large investment in skills or several existing workflows and schedules use that tool, then decision 3 is whether the tool can efficiently support Azure Synapse as a target environment. Ideally, the tool will include 'native' connectors that can leverage Azure facilities like PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql), for most efficient data loading. There's a way to call an external process, such as PolyBase or `COPY INTO`, and pass in the appropriate parameters. In this case, leverage existing skills and workflows, with Azure Synapse as the new target environment.
+If a third-party ETL tool is already in use, and especially if there's a large investment in skills or several existing workflows and schedules use that tool, then decision 3 is whether the tool can efficiently support Azure Synapse as a target environment. Ideally, the tool will include "native" connectors that can leverage Azure facilities like PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql), for the most efficient data loading. There's a way to call an external process, such as PolyBase or `COPY INTO`, and pass in the appropriate parameters. In this case, leverage existing skills and workflows, with Azure Synapse as the new target environment.
If you decide to retain an existing third-party ETL tool, there may be benefits to running that tool within the Azure environment (rather than on an existing on-premises ETL server) and having Azure Data Factory handle the overall orchestration of the existing workflows. One particular benefit is that less data needs to be downloaded from Azure, processed, and then uploaded back into Azure. So, decision 4 is whether to leave the existing tool running as-is or move it into the Azure environment to achieve cost, performance, and scalability benefits.
If some or all the existing Teradata warehouse ETL/ELT processing is handled by
> [!TIP] > The inventory of ETL tasks to be migrated should include scripts and stored procedures.
-Some elements of the ETL process are easy to migrate&mdash;for example, by simple bulk data load into a staging table from an external file. It may even be possible to automate those parts of the process, for example, by using PolyBase instead of fast load or MLOAD. If the exported files are Parquet, you can use a native Parquet reader, which is a faster option than PolyBase. Other parts of the process that contain arbitrary complex SQL and/or stored procedures will take more time to re-engineer.
+Some elements of the ETL process are easy to migrate, for example by simple bulk data load into a staging table from an external file. It may even be possible to automate those parts of the process, for example, by using PolyBase instead of fast load or MLOAD. If the exported files are Parquet, you can use a native Parquet reader, which is a faster option than PolyBase. Other parts of the process that contain arbitrary complex SQL and/or stored procedures will take more time to re-engineer.
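For instance, a bulk load of exported Parquet files into a staging table might use a `COPY INTO` statement along these lines; the storage account, container, path, and table names are placeholders.

```sql
-- Illustrative bulk load of exported Parquet files into a staging table.
COPY INTO dbo.Stage_FactSales
FROM 'https://<storageaccount>.blob.core.windows.net/<container>/fact_sales/*.parquet'
WITH (
    FILE_TYPE  = 'PARQUET',
    CREDENTIAL = (IDENTITY = 'Managed Identity')
);
```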
One way of testing Teradata SQL for compatibility with Azure Synapse is to capture some representative SQL statements from Teradata logs, then prefix those queries with `EXPLAIN`, and then&mdash;assuming a like-for-like migrated data model in Azure Synapse&mdash;run those `EXPLAIN` statements in Azure Synapse. Any incompatible SQL will generate an error, and the error information can determine the scale of the recoding task. [Microsoft partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration) offer tools and services to migrate Teradata SQL and stored procedures to Azure Synapse.
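A minimal illustration of this technique, using a hypothetical captured query:

```sql
-- Prefix a captured query with EXPLAIN and run it in Azure Synapse.
-- If the query relies on Teradata-specific syntax (for example QUALIFY or
-- Teradata date literals), the EXPLAIN fails and flags the statement for recoding.
EXPLAIN
SELECT c.CustomerRegion,
       SUM(s.SaleAmount) AS TotalSales
FROM dbo.FactSales AS s
JOIN dbo.DimCustomer AS c
    ON s.CustomerId = c.CustomerId
GROUP BY c.CustomerRegion;
```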
-### Use third party ETL tools
+### Use third-party ETL tools
As described in the previous section, in many cases the existing legacy data warehouse system will already be populated and maintained by third-party ETL products. For a list of Microsoft data integration partners for Azure Synapse, see [Data integration partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration).
As described in the previous section, in many cases the existing legacy data war
> [!TIP] > Third-party tools can simplify and automate the migration process and therefore reduce risk.
-When migrating data from a Teradata data warehouse, there are some basic questions associated with data loading that need to be resolved. You'll need to decide how the data will be physically moved from the existing on-premises Teradata environment into Azure Synapse in the cloud, and which tools will be used to perform the transfer and load. Consider the following questions, which are discussed in the next sections.
+When it comes to migrating data from a Teradata data warehouse, there are some basic questions associated with data loading that need to be resolved. You'll need to decide how the data will be physically moved from the existing on-premises Teradata environment into Azure Synapse in the cloud, and which tools will be used to perform the transfer and load. Consider the following questions, which are discussed in the next sections.
- Will you extract the data to files, or move it directly via a network connection?
When migrating data from a Teradata data warehouse, there are some basic questio
> [!TIP] > Understand the data volumes to be migrated and the available network bandwidth since these factors influence the migration approach decision.
-Once the database tables to be migrated have been created in Azure Synapse, you can move the data to populate those tables out of the legacy Teradata system and load it into the new environment. There are two basic approaches:
+Once the database tables to be migrated have been created in Azure Synapse, you can move the data to populate those tables out of the legacy Teradata system and into the new environment. There are two basic approaches:
-- **File extract**: Extract the data from the Teradata tables to flat files, normally in CSV format, via BTEQ, Fast Export, or Teradata Parallel Transporter (TPT). Use TPT whenever possible since it's the most efficient in terms of data throughput.
+- **File extract**: extract the data from the Teradata tables to flat files, normally in CSV format, via BTEQ, Fast Export, or Teradata Parallel Transporter (TPT). Use TPT whenever possible since it's the most efficient in terms of data throughput.
This approach requires space to land the extracted data files. The space could be local to the Teradata source database (if sufficient storage is available), or remote in Azure Blob Storage. The best performance is achieved when a file is written locally, since that avoids network overhead.
Once the database tables to be migrated have been created in Azure Synapse, you
Microsoft provides different options to move large volumes of data, including AzCopy for moving files across the network into Azure Storage, Azure ExpressRoute for moving bulk data over a private network connection, and Azure Data Box where the files are moved to a physical storage device that's then shipped to an Azure data center for loading. For more information, see [data transfer](/azure/architecture/data-guide/scenarios/data-transfer). -- **Direct extract and load across network**: The target Azure environment sends a data extract request, normally via a SQL command, to the legacy Teradata system to extract the data. The results are sent across the network and loaded directly into Azure Synapse, with no need to 'land' the data into intermediate files. The limiting factor in this scenario is normally the bandwidth of the network connection between the Teradata database and the Azure environment. For very large data volumes this approach may not be practical.
+- **Direct extract and load across network**: the target Azure environment sends a data extract request, normally via a SQL command, to the legacy Teradata system to extract the data. The results are sent across the network and loaded directly into Azure Synapse, with no need to land the data into intermediate files. The limiting factor in this scenario is normally the bandwidth of the network connection between the Teradata database and the Azure environment. For very large data volumes this approach may not be practical.
There's also a hybrid approach that uses both methods. For example, you can use the direct network extract approach for smaller dimension tables and samples of the larger fact tables to quickly provide a test environment in Azure Synapse. For the large volume historical fact tables, you can use the file extract and transfer approach using Azure Data Box.
To summarize, our recommendations for migrating data and associated ETL processe
- Consider using a Teradata instance in an Azure VM as a stepping stone to offload migration from the legacy Teradata environment. -- Leverage standard built-in Azure features to minimize the migration workload.
+- Leverage standard "built-in" Azure features to minimize the migration workload.
-- Identify and understand the most efficient tools for data extraction and loading in both Teradata and Azure environments. Use the appropriate tools at each phase in the process.
+- Identify and understand the most efficient tools for data extraction and loading in both Teradata and Azure environments. Use the appropriate tools in each phase in the process.
-- Use Azure facilities such as [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) to orchestrate and automate the migration process while minimizing impact on the Teradata system.
+- Use Azure facilities, such as [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779), to orchestrate and automate the migration process while minimizing impact on the Teradata system.
## Next steps
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/3-security-access-operations.md
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Security, access, and operations for Teradata migrations
This article is part three of a seven part series that provides guidance on how
This article discusses connection methods for existing legacy Teradata environments and how they can be migrated to Azure Synapse Analytics with minimal risk and user impact.
-We assume there's a requirement to migrate the existing methods of connection and user, role, and permission structure as is. If this isn't the case, then you can use Azure utilities from the Azure portal to create and manage a new security regime.
+This article assumes that there's a requirement to migrate the existing methods of connection and user/role/permission structure as-is. If not, use the Azure portal to create and manage a new security regime.
For more information on the [Azure Synapse security](../../sql-data-warehouse/sql-data-warehouse-overview-manage-security.md#authorization) options, see [Security whitepaper](../../guidance/security-white-paper-introduction.md).
Azure Synapse supports two basic options for connection and authorization:
- **SQL authentication**: SQL authentication is via a database connection that includes a database identifier, user ID, and password plus other optional parameters. This is functionally equivalent to Teradata TD1, TD2 and default connections. -- **Azure Active Directory (Azure AD) authentication**: With Azure Active Directory authentication, you can centrally manage the identities of database users and other Microsoft services in one central location. Central ID management provides a single place to manage SQL Data Warehouse users and simplifies permission management. Azure AD can also support connections to LDAP and Kerberos services&mdash;for example, Azure AD can be used to connect to existing LDAP directories if these are to remain in place after migration of the database.
+- **Azure Active Directory (Azure AD) authentication**: with Azure AD authentication, you can centrally manage the identities of database users and other Microsoft services in one central location. Central ID management provides a single place to manage SQL Data Warehouse users and simplifies permission management. Azure AD can also support connections to LDAP and Kerberos services&mdash;for example, Azure AD can be used to connect to existing LDAP directories if these are to remain in place after migration of the database.
### Users, roles, and permissions
Azure Synapse supports two basic options for connection and authorization:
> [!TIP] > High-level planning is essential for a successful migration project.
-Both Teradata and Azure Synapse implement database access control via a combination of users, roles, and permissions. Both use standard `SQL CREATE USER` and `CREATE ROLE` statements to define users and roles, and `GRANT` and `REVOKE` statements to assign or remove permissions to those users and/or roles.
+Both Teradata and Azure Synapse implement database access control via a combination of users, roles, and permissions. Both use standard SQL `CREATE USER` and `CREATE ROLE` statements to define users and roles, and `GRANT` and `REVOKE` statements to assign or remove permissions to those users and/or roles.
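As a simple sketch of the equivalent objects on the Azure Synapse side, with illustrative role, user, and schema names:

```sql
-- Create a role, grant it permissions, and add a user to it.
CREATE ROLE bi_developer;
GRANT SELECT ON SCHEMA::dbo TO bi_developer;

CREATE USER [report_user] WITHOUT LOGIN;   -- or FROM EXTERNAL PROVIDER for Azure AD
ALTER ROLE bi_developer ADD MEMBER [report_user];
```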
> [!TIP] > Automation of migration processes is recommended to reduce elapsed time and scope for errors.
WHERE RoleName='BI_DEVELOPER'
Order By 2,3,4,5; ```
-Modify these example `SELECT` statements to produce a result set which is a series of `GRANT` statements by including the appropriate text as a literal within the `SELECT` statement.
+Modify these example `SELECT` statements to produce a result set that's a series of `GRANT` statements by including the appropriate text as a literal within the `SELECT` statement.
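For example, the following sketch builds T-SQL `GRANT` statements from the SELECT (`R`) rights held by a Teradata role; the role name is illustrative, and other access-right abbreviations still need to be mapped to their Azure Synapse equivalents.

```sql
-- Teradata: emit GRANT statements for the SELECT ('R') rights held by a role.
SELECT 'GRANT SELECT ON ' || TRIM(DatabaseName) || '.' || TRIM(TableName) ||
       ' TO bi_developer;' AS GrantStatement
FROM DBC.AllRoleRightsV
WHERE RoleName    = 'BI_DEVELOPER'
  AND AccessRight = 'R';
```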
Use the table `AccessRightsAbbv` to look up the full text of the access right, as the join key is an abbreviated "type" field. See the following table for a list of Teradata access rights and their equivalent in Azure Synapse.
This section discusses how to implement typical Teradata operational tasks in Az
As with all data warehouse products, once in production there are ongoing management tasks that are necessary to keep the system running efficiently and to provide data for monitoring and auditing. Resource utilization and capacity planning for future growth also falls into this category, as does backup/restore of data.
-While conceptually the management and operations tasks for different data warehouses are similar, the individual implementations may differ. In general, modern cloud-based products such as Azure Synapse tend to incorporate a more automated and "system managed" approach (as opposed to a more manual approach in legacy data warehouses such as Teradata).
+While conceptually the management and operations tasks for different data warehouses are similar, the individual implementations may differ. In general, modern cloud-based products such as Azure Synapse tend to incorporate a more automated and "system managed" approach (as opposed to a more "manual" approach in legacy data warehouses such as Teradata).
The following sections compare Teradata and Azure Synapse options for various operational tasks.
The following sections compare Teradata and Azure Synapse options for various op
> [!TIP] > Housekeeping tasks keep a production warehouse operating efficiently and optimize use of resources such as storage.
-In most legacy data warehouse environments, there's a requirement to perform regular 'housekeeping' tasks such as reclaiming disk storage space that can be freed up by removing old versions of updated or deleted rows, or reorganizing data log files or index blocks for efficiency. Collecting statistics is also a potentially time-consuming task. Collecting statistics is required after a bulk data ingest to provide the query optimizer with up-to-date data to base generation of query execution plans.
+In most legacy data warehouse environments, there's a requirement to perform regular "housekeeping" tasks such as reclaiming disk storage space that can be freed up by removing old versions of updated or deleted rows, or reorganizing data log files or index blocks for efficiency. Collecting statistics is also a potentially time-consuming task. Collecting statistics is required after a bulk data ingest to provide the query optimizer with up-to-date data to base generation of query execution plans.
Teradata recommends collecting statistics as follows:
Teradata recommends collecting statistics as follows:
- Prototype phase, newly populated tables. -- Production phase, after a significant percentage of change to the table or partition (~10% rows). For high volumes of nonunique values, such as dates or timestamps, it may be advantageous to recollect at 7%.
+- Production phase, after a significant percentage of change to the table or partition (~10% of rows). For high volumes of nonunique values, such as dates or timestamps, it may be advantageous to recollect at 7%.
-- Recommendation: Collect production phase statistics after you've created users and applied real world query loads to the database (up to about three months of querying).
+- Recommendation: collect production phase statistics after you've created users and applied real world query loads to the database (up to about three months of querying).
- Collect statistics in the first few weeks after an upgrade or migration during periods of low CPU utilization.
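On the Azure Synapse side, the equivalent housekeeping after a bulk load is typically a statistics refresh; a minimal sketch with illustrative names:

```sql
-- Create single-column statistics on a commonly filtered column,
-- then refresh statistics on the table after a bulk load.
CREATE STATISTICS stat_FactSales_SaleDate ON dbo.FactSales (SaleDate);
UPDATE STATISTICS dbo.FactSales;
```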
Azure Synapse has an option to automatically create statistics so that they can
> [!TIP] > Over time, several different tools have been implemented to allow monitoring and logging of Teradata systems.
-Teradata provides several tools to monitor the operation including Teradata Viewpoint and Ecosystem Manager. For logging query history, the Database Query Log (DBQL) is a Teradata Database feature that provides a series of predefined tables that can store historical records of queries and their duration, performance, and target activity based on user-defined rules.
+Teradata provides several tools to monitor the operation including Teradata Viewpoint and Ecosystem Manager. For logging query history, the Database Query Log (DBQL) is a Teradata database feature that provides a series of predefined tables that can store historical records of queries and their duration, performance, and target activity based on user-defined rules.
Database administrators can use Teradata Viewpoint to determine system status, trends, and individual query status. By observing trends in system usage, system administrators are better able to plan project implementations, batch jobs, and maintenance to avoid peak periods of use. Business users can use Teradata Viewpoint to quickly access the status of reports and queries and drill down into details.
For more information, see [Azure Synapse operations and management options](/azu
### High Availability (HA) and Disaster Recovery (DR)
-Teradata implements features such as Fallback, Archive Restore Copy utility (ARC), and Data Stream Architecture (DSA) to provide protection against data loss and high availability (HA) via replication and archive of data. Disaster Recovery (DR) options include Dual Active Solution, DR as a service, or a replacement system depending on the recovery time requirement.
+Teradata implements features such as `FALLBACK`, Archive Restore Copy utility (ARC), and Data Stream Architecture (DSA) to provide protection against data loss and high availability (HA) via replication and archive of data. Disaster Recovery (DR) options include Dual Active Solution, DR as a service, or a replacement system depending on the recovery time requirement.
> [!TIP] > Azure Synapse creates snapshots automatically to ensure fast recovery times.
In Azure Synapse, resource classes are pre-determined resource limits that gover
See [Resource classes for workload management](/azure/sql-data-warehouse/resource-classes-for-workload-management) for detailed information.
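For example, a user that runs large load jobs can be assigned to a bigger static resource class through role membership; the user name is illustrative.

```sql
-- Give the load user a larger static resource class so that its loads
-- get more memory per query (at the cost of concurrency slots).
EXEC sp_addrolemember 'staticrc40', 'load_user';
```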
-This information can also be used for capacity planning, determining the resources required for additional users or application workload. This also applies to planning scale up/scale downs of compute resources for cost-effective support of 'peaky' workloads.
+This information can also be used for capacity planning, determining the resources required for additional users or application workload. This also applies to planning scale up/scale downs of compute resources for cost-effective support of "peaky" workloads.
### Scale compute resources
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/4-visualization-reporting.md
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Visualization and reporting for Teradata migrations
This article is part four of a seven part series that provides guidance on how t
## Access Azure Synapse Analytics using Microsoft and third-party BI tools
-Almost every organization accesses data warehouses and data marts by using a range of BI tools and applications, such as:
+Almost every organization accesses data warehouses and data marts using a range of BI tools and applications, such as:
- Microsoft BI tools, like Power BI.
Almost every organization accesses data warehouses and data marts by using a ran
- Custom analytic applications that have embedded BI tool functionality inside the application. -- Operational applications that request BI on demand by invoking queries and reports as-a-service on a BI platform, that in turn queries data in the data warehouse or data marts that are being migrated.
+- Operational applications that request BI on demand by invoking queries and reports as-a-service on a BI platform, which in turn queries data in the data warehouse or data marts that are being migrated.
-- Interactive data science development tools, for instance, Azure Synapse Spark Notebooks, Azure Machine Learning, RStudio, Jupyter Notebooks.
+- Interactive data science development tools, such as Azure Synapse Spark Notebooks, Azure Machine Learning, RStudio, and Jupyter Notebooks.
-The migration of visualization and reporting as part of a data warehouse migration program, means that all the existing queries, reports, and dashboards generated and issued by these tools and applications need to run on Azure Synapse and yield the same results as they did in the original data warehouse prior to migration.
+The migration of visualization and reporting as part of a data warehouse migration program means that all the existing queries, reports, and dashboards generated and issued by these tools and applications need to run on Azure Synapse and yield the same results as they did in the original data warehouse prior to migration.
> [!TIP] > Existing users, user groups, roles and assignments of access security privileges need to be migrated first for migration of reports and visualizations to succeed.
In addition, all the required data needs to be migrated to ensure the same resul
> [!TIP] > Views and SQL queries using proprietary SQL query extensions are likely to result in incompatibilities that impact BI reports and dashboards.
-If BI tools are querying views in the underlying data warehouse or data mart database, then will these views still work? You might think yes, but if there are proprietary SQL extensions, specific to your legacy data warehouse DBMS in these views that have no equivalent in Azure Synapse, you'll need to know about them and find a way to resolve them.
+If BI tools are querying views in the underlying data warehouse or data mart database, then will these views still work? You might think yes, but if there are proprietary SQL extensions specific to your legacy data warehouse DBMS in these views that have no equivalent in Azure Synapse, you'll need to know about them and find a way to resolve them.
Other issues like the behavior of nulls or data type variations across DBMS platforms need to be tested, in case they cause slightly different calculation results. Obviously, you want to minimize these issues and take all necessary steps to shield business users from any kind of impact. Depending on your legacy data warehouse system (such as Teradata), there are [tools](../../partner/data-integration.md) that can help hide these differences so that BI tools and applications are kept unaware of them and can run unchanged. > [!TIP]
-> Use repeatable tests to ensure reports, dashboards, and other visualizations migrate successfully,.
+> Use repeatable tests to ensure reports, dashboards, and other visualizations migrate successfully.
Testing is critical to visualization and report migration. You need a test suite and agreed-on test data to run and rerun tests in both environments. A test harness is also useful, and a few are mentioned later in this guide. In addition, it's important to have significant business involvement in this area of migration to keep confidence high and to keep business users engaged and part of the project.
There's a lot to think about here, so let's look at all this in more detail.
> [!TIP] > Data virtualization allows you to shield business users from structural changes during migration so that they remain unaware of changes.
-The temptation during data warehouse migration to the cloud is to take the opportunity to make changes during the migration to fulfill long-term requirements, such as opening business requests, missing data, new features, and more. However, if you're going to do that, it can affect BI tool business users and applications accessing your data warehouse, especially if it involves structural changes in your data model. Even if there were no new data structures because of new requirements, but you're considering adopting a different data modeling technique (like Data Vault) in your migrated data warehouse, you're likely to cause structural changes that impact BI reports and dashboards. If you want to adopt an agile data modeling technique, do so after migration. One way in which you can minimize the impact of things like schema changes on BI tools, users, and the reports they produce, is to introduce data virtualization between BI tools and your data warehouse and data marts. The following diagram shows how data virtualization can hide the migration from users.
+The temptation during data warehouse migration to the cloud is to take the opportunity to make changes during the migration to fulfill long-term requirements, such as open business requests, missing data, new features, and more. However, these changes can affect the BI tools accessing your data warehouse, especially if they involve structural changes in your data model. If you want to adopt an agile data modeling technique or implement structural changes, do so *after* migration.
+
+One way in which you can minimize the impact of things like schema changes on BI tools is to introduce data virtualization between BI tools and your data warehouse and data marts. The following diagram shows how data virtualization can hide the migration from users.
:::image type="content" source="../media/4-visualization-reporting/migration-data-virtualization.png" border="true" alt-text="Diagram showing how to hide the migration from users through data virtualization.":::
A key question when migrating your existing reports and dashboards to Azure Syna
These factors are discussed in more detail later in this article.
-Whatever the decision is, it must involve the business, since they produce the reports and dashboards, and consume the insights these artifacts provide in support of the decisions that are made around your business. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer up like-for-like results, simply by pointing your BI tool(s) at Azure Synapse, instead of your legacy data warehouse system, then everyone benefits. Therefore, if it's that straightforward and there's no reliance on legacy system proprietary SQL extensions, then there's no doubt that the above ease of migration option breeds confidence.
+Whatever the decision is, it must involve the business, since they produce the reports and dashboards, and consume the insights these artifacts provide in support of the decisions that are made around your business. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer up like-for-like results, simply by pointing your BI tool(s) at Azure Synapse, instead of your legacy data warehouse system, then everyone benefits.
### Migrate reports based on usage Usage is interesting, since it's an indicator of business value. Reports and dashboards that are never used clearly aren't contributing to supporting any decisions and don't currently offer any value. So, do you have any mechanism for finding out which reports and dashboards are currently not used? Several BI tools provide statistics on usage, which would be an obvious place to start.
-If your legacy data warehouse has been up and running for many years, there's a high chance you could have hundreds, if not thousands, of reports in existence. In these situations, usage is an important indicator to the business value of a specific report or dashboard. In that sense, it's worth compiling an inventory of the reports and dashboards you have and defining their business purpose and usage statistics.
+If your legacy data warehouse has been up and running for many years, there's a high chance you could have hundreds, if not thousands, of reports in existence. In these situations, usage is an important indicator of the business value of a specific report or dashboard. In that sense, it's worth compiling an inventory of the reports and dashboards you have and defining their business purpose and usage statistics.
-For those that aren't used at all, it's an appropriate time to seek a business decision, to determine if it necessary to decommission those reports to optimize your migration efforts. A key question worth asking when deciding to decommission unused reports is: are they unused because people don't know they exist, or is it because they offer no business value, or have they been superseded by others?
+For those that aren't used at all, it's an appropriate time to seek a business decision, to determine if it's necessary to decommission those reports to optimize your migration efforts. A key question worth asking when deciding to decommission unused reports is: are they unused because people don't know they exist, or is it because they offer no business value, or have they been superseded by others?
### Migrate reports based on business value Usage on its own isn't a clear indicator of business value. There needs to be a deeper business context to determine the value to the business. In an ideal world, we would like to know the contribution of the insights produced in a report to the bottom line of the business. That's exceedingly difficult to determine, since every decision made, and its dependency on the insights in a specific report, would need to be recorded along with the contribution that each decision makes to the bottom line of the business. You would also need to do this over time.
-This level of detail is unlikely to be available in most organizations. One way in which you can get deeper on business value to drive migration order is to look at alignment with business strategy. A business strategy set by your executive typically lays out strategic business objectives, key performance indicators (KPIs), and KPI targets that need to be achieved and who is accountable for achieving them. In that sense, classifying your reports and dashboards by strategic business objectives&mdash;for example, reduce fraud, improve customer engagement, and optimize business operations&mdash;will help understand business purpose and show what objective(s), specific reports, and dashboards these are contributing to. Reports and dashboards associated with high priority objectives in the business strategy can then be highlighted so that migration is focused on delivering business value in a strategic high priority area.
+This level of detail is unlikely to be available in most organizations. One way to get a deeper view of business value, and so drive migration order, is to look at alignment with business strategy. A business strategy set by your executive typically lays out strategic business objectives, key performance indicators (KPIs), KPI targets that need to be achieved, and who is accountable for achieving them. In that sense, classifying your reports and dashboards by strategic business objectives&mdash;for example, reduce fraud, improve customer engagement, and optimize business operations&mdash;will help you understand each report's business purpose and show which objective(s) specific reports and dashboards contribute to. Reports and dashboards associated with high priority objectives in the business strategy can then be highlighted so that migration is focused on delivering business value in a strategic high priority area.
-It's also worthwhile to classify reports and dashboards as operational, tactical, or strategic, to understand the level in the business where they're used. Delivering strategic business objectives requires contribution at all these levels. Knowing which reports and dashboards are used, at what level, and what objectives they're associated with, helps to focus migration on high priority business value that will drive the company forward. Business contribution of reports and dashboards is needed to understand this, perhaps like what is shown in the following **Business strategy objective** table.
+It's also worthwhile to classify reports and dashboards as operational, tactical, or strategic, to understand the level in the business where they're used. Delivering strategic business objectives requires contribution at all these levels. Knowing which reports and dashboards are used, at what level, and what objectives they're associated with helps to focus migration on high priority business value that will drive the company forward. Business contribution of reports and dashboards is needed to understand this, perhaps like what is shown in the following **business strategy objective** table.
| **Level** | **Report / dashboard name** | **Business purpose** | **Department used** | **Usage frequency** | **Business priority** | |-|-|-|-|-|-|
When it comes to migrating to Azure Synapse, there are several things that can i
BI tool reports and dashboards, and other visualizations, are produced by issuing SQL queries that access physical tables and/or views in your data warehouse or data mart. When it comes to migrating your data warehouse or data mart schema to Azure Synapse, there may be incompatibilities that can impact reports and dashboards, such as: -- Non-standard table types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse (like the Teradata Time-Series tables)
+- Non-standard table types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse, for example Teradata Time-Series tables.
-- Data types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse. For example, Teradata Geospatial or Interval data types.
+- Data types supported in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse, for example Teradata Geospatial or Interval data types.
-In many cases, where there are incompatibilities, there may be ways around them. For example, the data in unsupported table types can be migrated into a standard table with appropriate data types and indexed or partitioned on a date/time column. Similarly, it's possible to represent unsupported data types in another type of column and perform calculations in Azure Synapse to achieve the same. Either way, it will need refactoring.
+In many cases, where there are incompatibilities, there may be ways around them. For example, the data in unsupported table types can be migrated into a standard table with appropriate data types and indexed or partitioned on a date/time column. Similarly, it may be possible to represent unsupported data types in another type of column and perform calculations in Azure Synapse to achieve the same. Either way, it will need refactoring.
> [!TIP] > Querying the system catalog of your legacy warehouse DBMS is a quick and straightforward way to identify schema incompatibilities with Azure Synapse.
The impact may be less than you think, because many BI tools don't support such
### The impact of SQL incompatibilities and differences
-Additionally, any report, dashboard, or other visualization in an application or tool that makes use of proprietary SQL extensions associated with your legacy data warehouse DBMS, is likely to be impacted when migrating to Azure Synapse. This could happen because the BI tool or application:
+Additionally, any report, dashboard, or other visualization in an application or tool that makes use of proprietary SQL extensions associated with your legacy data warehouse DBMS is likely to be impacted when migrating to Azure Synapse. This could happen because the BI tool or application:
- Accesses legacy data warehouse DBMS views that include proprietary SQL functions that have no equivalent in Azure Synapse.
You can't rely on documentation associated with reports, dashboards, and other v
> [!TIP] > Gauge the impact of SQL incompatibilities by harvesting your DBMS log files and running `EXPLAIN` statements.
-One way is to get a hold of the SQL log files of your legacy data warehouse. Use a script to pull out a representative set of SQL statements into a file, prefix each SQL statement with an `EXPLAIN` statement, and then run all the `EXPLAIN` statements in Azure Synapse. Any SQL statements containing proprietary SQL extensions from your legacy data warehouse that are unsupported will be rejected by Azure Synapse when the `EXPLAIN` statements are executed. This approach would at least give you an idea of how significant or otherwise the use of incompatible SQL is.
+One way is to get hold of the SQL log files of your legacy data warehouse. Use a script to pull out a representative set of SQL statements into a file, prefix each SQL statement with an `EXPLAIN` statement, and then run all the `EXPLAIN` statements in Azure Synapse. Any SQL statements containing proprietary SQL extensions from your legacy data warehouse that are unsupported will be rejected by Azure Synapse when the `EXPLAIN` statements are executed. This approach would at least give you an idea of how significant or otherwise the use of incompatible SQL is.
Metadata from your legacy data warehouse DBMS will also help you when it comes to views. Again, you can capture and view SQL statements, and `EXPLAIN` them as described previously to identify incompatible SQL in views.
Metadata from your legacy data warehouse DBMS will also help you when it comes t
A key element in data warehouse migration is the testing of reports and dashboards against Azure Synapse to verify that the migration has worked. To do this, you need to define a series of tests and a set of required outcomes for each test that needs to be run to verify success. It's important to ensure that reports and dashboards are tested and compared across your existing and migrated data warehouse systems to: -- Identify whether schema changes made during migration such as data types to be converted, have impacted reports in terms of ability to run, results, and corresponding visualizations.
+- Identify whether schema changes made during migration, such as data types to be converted, have impacted reports in terms of their ability to run, their results, and the corresponding visualizations.
- Verify all users are migrated.
A key element in data warehouse migration is the testing of reports and dashboar
- Ensure consistent results of all known queries, reports, and dashboards. -- Ensure that data and ETL migration is complete and error free.
+- Ensure that data and ETL migration is complete and error-free.
- Ensure data privacy is upheld.
Ad-hoc analysis and reporting are more challenging and require a set of tests to
In terms of security, the best way to do this is to create roles, assign access privileges to roles, and then attach users to roles. To access your newly migrated data warehouse, set up an automated process to create new users, and to do role assignment. To detach users from roles, you can follow the same steps.
-It's also important to communicate the cut-over to all users, so they know what's changing and what to expect.
+It's also important to communicate the cutover to all users, so they know what's changing and what to expect.
## Analyze lineage to understand dependencies between reports, dashboards, and data
A critical success factor in migrating reports and dashboards is understanding l
In multi-vendor data warehouse environments, business analysts in BI teams may map out data lineage. For example, if you have Informatica for your ETL, Oracle for your data warehouse, and Tableau for reporting, each of which has its own metadata repository, figuring out where a specific data element in a report came from can be challenging and time-consuming.
-To migrate seamlessly from a legacy data warehouse to Azure Synapse, end-to-end data lineage helps prove like-for-like migration when comparing reports and dashboards against your legacy environment. That means that metadata from several tools needs to be captured and integrated to show the end to end journey. Having access to tools that support automated metadata discovery and data lineage will let you see duplicate reports and ETL processes and reports that rely on data sources that are obsolete, questionable, or even non-existent. With this information, you can reduce the number of reports and ETL processes that you migrate.
+To migrate seamlessly from a legacy data warehouse to Azure Synapse, end-to-end data lineage helps prove like-for-like migration when comparing reports and dashboards against your legacy environment. That means that metadata from several tools needs to be captured and integrated to show the end-to-end journey. Having access to tools that support automated metadata discovery and data lineage will let you see duplicate reports and ETL processes and reports that rely on data sources that are obsolete, questionable, or even non-existent. With this information, you can reduce the number of reports and ETL processes that you migrate.
-You can also compare end-to-end lineage of a report in Azure Synapse against the end-to-end lineage, for the same report in your legacy data warehouse environment, to see if there are any differences that have occurred inadvertently during migration. This helps enormously with testing and verifying migration success.
+You can also compare end-to-end lineage of a report in Azure Synapse against the end-to-end lineage for the same report in your legacy data warehouse environment, to see if there are any differences that have occurred inadvertently during migration. This helps enormously with testing and verifying migration success.
Data lineage visualization not only reduces time, effort, and error in the migration process, but also enables faster execution of the migration project.
A good way to get everything consistent across multiple BI tools is to create a
> [!TIP] > Use data virtualization to create a common semantic layer to guarantee consistency across all BI tools in an Azure Synapse environment.
-In this way, you get consistency across all BI tools, while at the same time breaking the dependency between BI tools and applications, and the underlying physical data structures in Azure Synapse. Use [Microsoft partners](../../partner/data-integration.md) on Azure to implement this. The following diagram shows how a common vocabulary in the Data Virtualization server lets multiple BI tools see a common semantic layer.
+In this way, you get consistency across all BI tools, while at the same time breaking the dependency between BI tools and applications and the underlying physical data structures in Azure Synapse. Use [Microsoft partners](../../partner/data-integration.md) on Azure to implement this. The following diagram shows how a common vocabulary in the data virtualization server lets multiple BI tools see a common semantic layer.
:::image type="content" source="../media/4-visualization-reporting/data-virtualization-semantics.png" border="true" alt-text="Diagram with common data names and definitions that relate to the data virtualization server.":::
In this way, you get consistency across all BI tools, while at the same time bre
> [!TIP] > Identify incompatibilities early to gauge the extent of the migration effort. Migrate your users, group roles and privilege assignments. Only migrate the reports and visualizations that are used and are contributing to business value.
-In a lift-and-shift data warehouse migration to Azure Synapse, most reports and dashboards should migrate easily.
+In a lift and shift data warehouse migration to Azure Synapse, most reports and dashboards should migrate easily.
-However, if data structures change, then data is stored in unsupported data types or access to data in the data warehouse or data mart is via a view that includes proprietary SQL that's unsupported in your Azure Synapse environment. You'll need to deal with those issues if they arise.
+However, you'll need to deal with issues that arise if data structures have changed, if data is stored in unsupported data types, or if access to data in the data warehouse or data mart is via a view that includes proprietary SQL that's unsupported in your Azure Synapse environment.
You can't rely on documentation to find out where the issues are likely to be. Using `EXPLAIN` statements is a pragmatic and quick way to identify incompatibilities in SQL. Rework any incompatible SQL to achieve similar results in Azure Synapse. In addition, it's recommended that you use automated metadata discovery and lineage tools to identify duplicate reports and reports that are no longer valid because they use data from data sources you no longer use, and to understand dependencies. Some of these tools help compare lineage to verify that reports running in your legacy data warehouse environment are produced identically in Azure Synapse.
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/5-minimize-sql-issues.md
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Minimize SQL issues for Teradata migrations
By creating metadata to list the data tables to be migrated and their location,
> [!TIP] > SQL DDL commands `CREATE TABLE` and `CREATE VIEW` have standard core elements but are also used to define implementation-specific options.
-The ANSI SQL standard defines the basic syntax for DDL commands such as `CREATE TABLE` and `CREATE VIEW`. These commands are used within both Teradata and Azure Synapse, but they've also been extended to allow definition of implementation-specific features such as indexing, table distribution and partitioning options.
+The ANSI SQL standard defines the basic syntax for DDL commands such as `CREATE TABLE` and `CREATE VIEW`. These commands are used within both Teradata and Azure Synapse, but they've also been extended to allow definition of implementation-specific features such as indexing, table distribution, and partitioning options.
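For illustration, here's a minimal sketch of an Azure Synapse `CREATE TABLE` that layers implementation-specific options on top of the ANSI core syntax. The table, column, and partition boundary values are hypothetical:

```sql
-- Hypothetical fact table in a dedicated SQL pool: the column definitions are ANSI-style,
-- while the WITH clause carries the Synapse-specific distribution, index, and partition options.
CREATE TABLE dbo.FactSales
(
    SaleDate    DATE          NOT NULL,
    CustomerKey INT           NOT NULL,
    Amount      DECIMAL(18,2) NULL
)
WITH
(
    DISTRIBUTION = HASH(CustomerKey),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION (SaleDate RANGE RIGHT FOR VALUES ('2021-01-01', '2022-01-01'))
);
```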
The following sections discuss Teradata-specific options to consider during a migration to Azure Synapse.
The following sections discuss Teradata-specific options to consider during a mi
> [!TIP] > Use existing indexes to give an indication of candidates for indexing in the migrated warehouse.
-When migrating tables between different technologies, only the raw data and its descriptive metadata gets physically moved between the two environments. Other database elements from the source system, such as indexes and log files, aren't directly migrated as these may not be needed or may be implemented differently within the new target environment. For example, there's no equivalent of the `MULTISET` option within Teradata's `CREATE TABLE` syntax.
+When migrating tables between different technologies, only the raw data and its descriptive metadata get physically moved between the two environments. Other database elements from the source system, such as indexes and log files, aren't directly migrated as these may not be needed or may be implemented differently within the new target environment. For example, there's no equivalent of the `MULTISET` option within Teradata's `CREATE TABLE` syntax.
-It's important to understand where performance optimizations&mdash;such as indexes&mdash;were used in the source environment. This indicates where performance optimization can be added in the new target environment. For example, if a non-unique secondary index (NUSI) has been created in the source Teradata environment, this might indicate that a non-clustered index should be created in the migrated Azure Synapse. Other native performance optimization techniques, such as table replication, may be more applicable than a straight 'like for like' index creation.
+It's important to understand where performance optimizations&mdash;such as indexes&mdash;were used in the source environment. This indicates where performance optimization can be added in the new target environment. For example, if a non-unique secondary index (NUSI) has been created in the source Teradata environment, this might indicate that a non-clustered index should be created in the migrated Azure Synapse database. Other native performance optimization techniques, such as table replication, may be more applicable than a straight "like-for-like" index creation.
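As a hedged illustration of that mapping (object names are invented), a Teradata non-unique secondary index and a corresponding nonclustered index on the migrated Azure Synapse table might look like this:

```sql
-- Teradata (source): non-unique secondary index (NUSI) on a common filter column
CREATE INDEX idx_region (region_code) ON sales_db.sales_history;

-- Azure Synapse dedicated SQL pool (target): a nonclustered index can play a similar role,
-- although replication or distribution choices may make it unnecessary.
CREATE INDEX idx_region ON dbo.sales_history (region_code);
```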
### Unsupported Teradata table types > [!TIP]
-> Standard tables within Azure Synapse can support migrated Teradata time series and temporal tables.
+> Standard tables within Azure Synapse can support migrated Teradata time-series and temporal tables.
-Teradata includes support for special table types for time series and temporal data. The syntax and some of the functions for these table types aren't directly supported within Azure Synapse, but the data can be migrated into a standard table with appropriate data types and indexing or partitioning on the date/time column.
+Teradata includes support for special table types for time-series and temporal data. The syntax and some of the functions for these table types aren't directly supported within Azure Synapse, but the data can be migrated into a standard table with appropriate data types and indexing or partitioning on the date/time column.
Teradata implements temporal query functionality via query rewriting, adding filters to a temporal query to limit the applicable date range. If this functionality is currently in use in the source Teradata environment and is to be migrated, then you'll need to add that filtering explicitly to the relevant migrated queries.
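As a minimal sketch of that explicit filtering (the history table and its validity columns are hypothetical), a point-in-time query against a migrated standard table might look like this:

```sql
-- After migration, the date-range filter that Teradata added implicitly via temporal
-- query rewriting is written out explicitly against ordinary date/time columns.
SELECT PolicyId, Premium
FROM dbo.PolicyHistory
WHERE '2022-01-01' >= ValidFrom
  AND '2022-01-01' <  ValidTo;
```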
There are [Microsoft partners](../../partner/data-integration.md) who offer tool
### SQL Data Manipulation Language (DML) > [!TIP]
-> SQL DML commands `SELECT`, `INSERT` and `UPDATE` have standard core elements but may also implement different syntax options.
+> SQL DML commands `SELECT`, `INSERT`, and `UPDATE` have standard core elements but may also implement different syntax options.
The ANSI SQL standard defines the basic syntax for DML commands such as `SELECT`, `INSERT`, `UPDATE`, and `DELETE`. Both Teradata and Azure Synapse use these commands, but in some cases there are implementation differences.
The following sections discuss the Teradata-specific DML commands that you shoul
### SQL DML syntax differences
-There are a few differences in SQL DML syntax between Teradata SQL and Azure Synapse (T-SQL) that you should be aware of during migration:
+Be aware of these differences in SQL Data Manipulation Language (DML) syntax between Teradata SQL and Azure Synapse (T-SQL) when migrating:
- `QUALIFY`: Teradata supports the `QUALIFY` operator. For example:
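A hedged sketch of the kind of rewrite involved (table and column names are invented): the Teradata `QUALIFY` filter on a window function becomes a derived table, with the window function result filtered in the outer query.

```sql
-- Teradata (source): keep the latest row per item using QUALIFY
SELECT item_id, sales_date, sales_amount
FROM sales_history
QUALIFY ROW_NUMBER() OVER (PARTITION BY item_id ORDER BY sales_date DESC) = 1;

-- Azure Synapse (T-SQL): equivalent logic using a derived table
SELECT item_id, sales_date, sales_amount
FROM (
    SELECT item_id, sales_date, sales_amount,
           ROW_NUMBER() OVER (PARTITION BY item_id ORDER BY sales_date DESC) AS rn
    FROM dbo.sales_history
) AS t
WHERE rn = 1;
```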
There are a few differences in SQL DML syntax between Teradata SQL and Azure Syn
> [!TIP] > Use real queries from the existing system query logs to find potential migration issues.
-One way of testing legacy Teradata SQL for compatibility with Azure Synapse is to capture some representative SQL statements from the legacy system query logs, prefix those queries with [EXPLAIN](/sql/t-sql/queries/explain-transact-sql?msclkid=91233fc1cff011ec9dff597671b7ae97), and (assuming a 'like for like' migrated data model in Azure Synapse with the same table and column names) run those `EXPLAIN` statements in Azure Synapse. Any incompatible SQL will throw an error&mdash;use this information to determine the scale of the recoding task. This approach doesn't require that data is loaded into the Azure environment, only that the relevant tables and views have been created.
+One way of testing legacy Teradata SQL for compatibility with Azure Synapse is to capture some representative SQL statements from the legacy system query logs, prefix those queries with [EXPLAIN](/sql/t-sql/queries/explain-transact-sql?msclkid=91233fc1cff011ec9dff597671b7ae97), and (assuming a "like-for-like" migrated data model in Azure Synapse with the same table and column names) run those `EXPLAIN` statements in Azure Synapse. Any incompatible SQL will throw an error&mdash;use this information to determine the scale of the recoding task. This approach doesn't require that data is loaded into the Azure environment, only that the relevant tables and views have been created.
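For example, a representative query captured from the Teradata query logs (names here are hypothetical) can be prefixed with `EXPLAIN` and run against the migrated schema; if the syntax isn't supported, the statement fails and flags the recoding work needed.

```sql
-- No data needs to be loaded: EXPLAIN only requires the tables and views to exist,
-- and returns the query plan if the statement is valid in Azure Synapse.
EXPLAIN
SELECT c.customer_id,
       SUM(s.sales_amount) AS total_sales
FROM dbo.sales AS s
JOIN dbo.customer AS c
  ON s.customer_id = c.customer_id
GROUP BY c.customer_id;
```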
### Functions, stored procedures, triggers, and sequences
Azure Synapse doesn't support the creation of triggers, but you can implement th
#### Sequences
-Azure Synapse sequences are handled in a similar way to Teradata, using [identity to create surrogate keys](../../sql-data-warehouse/sql-data-warehouse-tables-identity.md) or [managed identity](../../../data-factory/data-factory-service-identity.md?tabs=data-factory).
+Azure Synapse sequences are handled in a similar way to Teradata, using [IDENTITY to create surrogate keys](../../sql-data-warehouse/sql-data-warehouse-tables-identity.md) or [managed identity](../../../data-factory/data-factory-service-identity.md?tabs=data-factory).
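A minimal sketch of an `IDENTITY`-based surrogate key in a dedicated SQL pool (table and column names are hypothetical):

```sql
CREATE TABLE dbo.DimCustomer
(
    CustomerKey  INT IDENTITY(1,1) NOT NULL,  -- surrogate key generated by the pool
    CustomerId   NVARCHAR(20)      NOT NULL,  -- business key from the source system
    CustomerName NVARCHAR(100)     NULL
)
WITH
(
    DISTRIBUTION = HASH(CustomerId),  -- an IDENTITY column can't be the distribution column
    CLUSTERED COLUMNSTORE INDEX
);
```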
#### Teradata to T-SQL mapping
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/6-microsoft-third-party-migration-tools.md
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Tools for Teradata data warehouse migration to Azure Synapse Analytics
Azure Data Factory is the recommended approach for implementing data integration
#### Azure ExpressRoute
-Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the internet, and they offer more reliability, faster speeds, and lower latencies than typical internet connections. In some cases, by using ExpressRoute connections to transfer data between on-premises systems and Azure, you gain significant cost benefits.
+Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the public internet, and they offer more reliability, faster speeds, and lower latencies than typical internet connections. In some cases, by using ExpressRoute connections to transfer data between on-premises systems and Azure, you gain significant cost benefits.
#### AzCopy
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/7-beyond-data-warehouse-migration.md
Previously updated : 05/24/2022 Last updated : 05/31/2022 # Beyond Teradata migration, implementing a modern data warehouse in Microsoft Azure
This article is part seven of a seven part series that provides guidance on how
One of the key reasons to migrate your existing data warehouse to Azure Synapse Analytics is to utilize a globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database. Azure Synapse also lets you integrate your migrated data warehouse with the complete Microsoft Azure analytical ecosystem to take advantage of, and integrate with, other Microsoft technologies that help you modernize your migrated data warehouse. This includes integrating with technologies like: -- Azure Data Lake Storage, for cost effective data ingestion, staging, cleansing, and transformation to free up data warehouse capacity occupied by fast growing staging tables.
+- Azure Data Lake Storage for cost-effective data ingestion, staging, cleansing, and transformation, to free up data warehouse capacity occupied by fast-growing staging tables.
-- Azure Data Factory, for collaborative IT and self-service data integration [with connectors](../../../data-factory/connector-overview.md) to cloud and on-premises data sources and streaming data.
+- Azure Data Factory for collaborative IT and self-service data integration [with connectors](../../../data-factory/connector-overview.md) to cloud and on-premises data sources and streaming data.
-- [The Open Data Model Common Data Initiative](/common-data-model/), to share consistent trusted data across multiple technologies including:
+- [The Open Data Model Common Data Initiative](/common-data-model/) to share consistent trusted data across multiple technologies, including:
- Azure Synapse - Azure Synapse Spark - Azure HDInsight
One of the key reasons to migrate your existing data warehouse to Azure Synapse
- Azure IoT - Microsoft ISV Partners -- [Microsoft's data science technologies](/azure/architecture/data-science-process/platforms-and-tools) including:
+- [Microsoft's data science technologies](/azure/architecture/data-science-process/platforms-and-tools), including:
- Azure Machine Learning Studio - Azure Machine Learning - Azure Synapse Spark (Spark as a service)
One of the key reasons to migrate your existing data warehouse to Azure Synapse
- ML.NET - .NET for Apache Spark to enable data scientists to use Azure Synapse data to train machine learning models at scale. -- [Azure HDInsight](../../../hdinsight/index.yml), to leverage big data analytical processing and join big data with Azure Synapse data by creating a logical data warehouse using PolyBase.
+- [Azure HDInsight](../../../hdinsight/index.yml) to leverage big data analytical processing and join big data with Azure Synapse data by creating a logical data warehouse using PolyBase.
-- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md), and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka), to integrate with live streaming data within Azure Synapse.
+- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md), and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka) to integrate with live streaming data from Azure Synapse.
There's often acute demand to integrate with [machine learning](../../machine-learning/what-is-machine-learning.md) to enable custom-built, trained machine learning models for use in Azure Synapse. This would enable in-database analytics to run at scale in batch, on an event-driven basis, and on-demand. The ability to exploit in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees that they all get the same predictions and recommendations.
Let's look at these in more detail to understand how you can take advantage of t
## Offload data staging and ETL processing to Azure Data Lake and Azure Data Factory
-Enterprises today have a key problem resulting from digital transformation. So much new data is being generated and captured for analysis, and much of this data is finding its way into data warehouses. A good example is transaction data created by opening online transaction processing (OLTP) systems to self-service access from mobile devices. These OLTP systems are the main sources of data to a data warehouse, and with customers now driving the transaction rate rather than employees, data in data warehouse staging tables has been growing rapidly in volume.
+Enterprises today have a key problem resulting from digital transformation. So much new data is being generated and captured for analysis, and much of this data is finding its way into data warehouses. A good example is transaction data created by opening OLTP systems to self-service access from mobile devices. These OLTP systems are the main sources of data to a data warehouse, and with customers now driving the transaction rate rather than employees, data in data warehouse staging tables has been growing rapidly in volume.
The rapid influx of data into the enterprise, along with new sources of data like Internet of Things (IoT) streams, means that companies need to find a way to deal with unprecedented data growth and scale data integration ETL processing beyond current levels. One way to do this is to offload ingestion, data cleansing, transformation, and integration to a data lake and process it at scale there, as part of a data warehouse modernization program.
For ELT strategies, consider offloading ELT processing to Azure Data Lake to eas
> [!TIP] > Data Factory allows you to build scalable data integration pipelines code-free.
-[Data Factory](https://azure.microsoft.com/services/data-factory/) is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a simple web-based user interface to build data integration pipelines, in a code-free manner that can:
+[Data Factory](https://azure.microsoft.com/services/data-factory/) is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a simple web-based user interface to build data integration pipelines in a code-free manner that can:
-- Build scalable data integration pipelines code-free. Easily acquire data at scale. Pay only for what you use and connect to on-premises, cloud, and SaaS-based data sources.
+- Build scalable data integration pipelines code-free. Easily acquire data at scale. Pay only for what you use, and connect to on-premises, cloud, and SaaS-based data sources.
- Ingest, move, clean, transform, integrate, and analyze cloud and on-premises data at scale. Take automatic action, such as a recommendation or alert. - Seamlessly author, monitor, and manage pipelines that span data stores both on-premises and in the cloud. -- Enable pay-as-you-go scale out in alignment with customer growth.
+- Enable pay-as-you-go scale-out in alignment with customer growth.
> [!TIP] > Data Factory can connect to on-premises, cloud, and SaaS data.
All of this can be done without writing any code. However, adding custom code to
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-data-factory-pipeline.png" border="true" alt-text="Screenshot showing an example of an Azure Data Factory pipeline."::: > [!TIP]
-> Pipelines called data factories control the integration and analysis of data. Data Factory is enterprise class data integration software aimed at IT professionals with a data wrangling facility for business users.
+> Pipelines called data factories control the integration and analysis of data. Data Factory is enterprise-class data integration software aimed at IT professionals with a data wrangling facility for business users.
Implement Data Factory pipeline development from any of several places including:
Implement Data Factory pipeline development from any of several places including
- REST APIs
-Developers and data scientists who prefer to write code can easily author Data Factory pipelines in Java, Python, and .NET using the software development kits (SDKs) available for those programming languages. Data Factory pipelines can also be hybrid as they can connect, ingest, clean, transform and analyze data in on-premises data centers, Microsoft Azure, other clouds, and SaaS offerings.
+Developers and data scientists who prefer to write code can easily author Data Factory pipelines in Java, Python, and .NET using the software development kits (SDKs) available for those programming languages. Data Factory pipelines can also be hybrid since they can connect, ingest, clean, transform, and analyze data in on-premises data centers, Microsoft Azure, other clouds, and SaaS offerings.
Once you develop Data Factory pipelines to integrate and analyze data, deploy those pipelines globally and schedule them to run in batch, invoke them on demand as a service, or run them in real-time on an event-driven basis. A Data Factory pipeline can also run on one or more execution engines and monitor pipeline execution to ensure performance and track errors.
Data Factory can support multiple use cases, including:
#### Data sources
-Azure Data Factory lets you use [connectors](../../../data-factory/connector-overview.md) from both cloud and on-premises data sources. Agent software, known as a *self-hosted integration runtime*, securely accesses on-premises data sources and supports secure, scalable data transfer.
+Data Factory lets you use [connectors](../../../data-factory/connector-overview.md) from both cloud and on-premises data sources. Agent software, known as a *self-hosted integration runtime*, securely accesses on-premises data sources and supports secure, scalable data transfer.
#### Transform data using Azure Data Factory > [!TIP]
-> Professional ETL developers can use Azure Data Factory mapping data flows to clean, transform and integrate data without the need to write code.
+> Professional ETL developers can use Azure Data Factory mapping data flows to clean, transform, and integrate data without the need to write code.
-Within a Data Factory pipeline, ingest, clean, transform, integrate, and, if necessary, analyze any type of data from these sources. This includes structured, semi-structured&mdash;such as JSON or Avro&mdash;and unstructured data.
+Within a Data Factory pipeline, ingest, clean, transform, integrate, and, if necessary, analyze any type of data from these sources. This includes structured data, semi-structured data such as JSON or Avro, and unstructured data.
-Professional ETL developers can use Data Factory mapping data flows to filter, split, join (many types), lookup, pivot, unpivot, sort, union, and aggregate data without writing any code. In addition, Data Factory supports surrogate keys, multiple write processing options such as insert, upsert, update, table recreation, and table truncation, and several types of target data stores&mdash;also known as sinks. ETL developers can also create aggregations, including time series aggregations that require a window to be placed on data columns.
+Professional ETL developers can use Data Factory mapping data flows to filter, split, join (many types), lookup, pivot, unpivot, sort, union, and aggregate data without writing any code. In addition, Data Factory supports surrogate keys, multiple write processing options such as insert, upsert, update, table recreation, and table truncation, and several types of target data stores&mdash;also known as sinks. ETL developers can also create aggregations, including time-series aggregations that require a window to be placed on data columns.
> [!TIP] > Data Factory supports the ability to automatically detect and manage schema changes in inbound data, such as in streaming data.
Data engineers can profile data quality and view the results of individual data
> [!TIP] > Data Factory pipelines are also extensible since Data Factory allows you to write your own code and run it as part of a pipeline.
-Extend Data Factory transformational and analytical functionality by adding a linked service containing your own code into a pipeline. For example, an Azure Synapse Spark Pool Notebook containing Python code could use a trained model to score the data integrated by a mapping data flow.
+Extend Data Factory transformational and analytical functionality by adding a linked service containing your own code into a pipeline. For example, an Azure Synapse Spark pool notebook containing Python code could use a trained model to score the data integrated by a mapping data flow.
Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores such as Azure Data Lake Storage, Azure Synapse, or Azure HDInsight (Hive tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
Another new capability in Data Factory is wrangling data flows. This lets busine
:::image type="content" source="../media/6-microsoft-3rd-party-migration-tools/azure-data-factory-wrangling-dataflows.png" border="true" alt-text="Screenshot showing an example of Azure Data Factory wrangling dataflows.":::
-This differs from Excel and Power BI, as Data Factory wrangling data flows uses Power Query Online to generate M code and translate it into a massively parallel in-memory Spark job for cloud-scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets IT professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark Pool Notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
+This differs from Excel and Power BI, as Data Factory [wrangling data flows](/azure/data-factory/wrangling-tutorial) use Power Query Online to generate M code and translate it into a massively parallel in-memory Spark job for cloud-scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets IT professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark pool notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
#### Link data and analytics in analytical pipelines
-In addition to cleaning and transforming data, Azure Data Factory can combine data integration and analytics in the same pipeline. Use Data Factory to create both data integration and analytical pipelines&mdash;the latter being an extension of the former. Drop an analytical model into a pipeline so that clean, integrated data can be stored to provide predictions or recommendations. Act on this information immediately or store it in your data warehouse to provide you with new insights and recommendations that can be viewed in BI tools.
+In addition to cleaning and transforming data, Data Factory can combine data integration and analytics in the same pipeline. Use Data Factory to create both data integration and analytical pipelines&mdash;the latter being an extension of the former. Drop an analytical model into a pipeline so that clean, integrated data can be stored to provide predictions or recommendations. Act on this information immediately or store it in your data warehouse to provide you with new insights and recommendations that can be viewed in BI tools.
-Models developed code-free with Azure Machine Learning Studio or with the Azure Machine Learning SDK using Azure Synapse Spark Pool Notebooks or using R in RStudio can be invoked as a service from within a Data Factory pipeline to batch score your data. Analysis happens at scale by executing Spark machine learning pipelines on Azure Synapse Spark Pool Notebooks.
+Models developed code-free with Azure Machine Learning Studio, or with the Azure Machine Learning SDK using Azure Synapse Spark pool notebooks or using R in RStudio, can be invoked as a service from within a Data Factory pipeline to batch score your data. Analysis happens at scale by executing Spark machine learning pipelines on Azure Synapse Spark pool notebooks.
Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores, such as Azure Data Lake Storage, Azure Synapse, or Azure HDInsight (Hive tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
Store integrated data and any results from analytics included in a Data Factory
A key objective in any data integration setup is the ability to integrate data once and reuse it everywhere, not just in a data warehouse&mdash;for example, in data science. Reuse avoids reinvention and ensures consistent, commonly understood data that everyone can trust. > [!TIP]
-> Azure Data Lake is shared storage that underpins Microsoft Azure Synapse, Azure Machine Learning, Azure Synapse Spark, and Azure HDInsight.
+> Azure Data Lake Storage is shared storage that underpins Microsoft Azure Synapse, Azure Machine Learning, Azure Synapse Spark, and Azure HDInsight.
To achieve this goal, establish a set of common data names and definitions describing logical data entities that need to be shared across the enterprise&mdash;such as customer, account, product, supplier, orders, payments, returns, and so forth. Once this is done, IT and business professionals can use data integration software to create these common data assets and store them to maximize their reuse to drive consistency everywhere. > [!TIP] > Integrating data to create lake database logical entities in shared storage enables maximum reuse of common data assets.
-Microsoft has done this by creating a [lake database](../../database-designer/concepts-lake-database.md). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry specific database templates to help standardize data in the lake. [Lake database templates](../../database-designer/concepts-database-templates.md) provide schemas for predefined business areas, enabling data to the loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake Storage using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse and Azure Machine Learning. The following diagram shows a lake database used in Azure Synapse Analytics.
+Microsoft has done this by creating a [lake database](../../database-designer/concepts-lake-database.md). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry-specific database templates to help standardize data in the lake. [Lake database templates](../../database-designer/concepts-database-templates.md) provide schemas for predefined business areas, enabling data to be loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake Storage by using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse, and Azure Machine Learning. The following diagram shows a lake database used in Azure Synapse Analytics.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-synapse-analytics-lake-database.png" border="true" alt-text="Screenshot showing how a lake database can be used in Azure Synapse Analytics.":::
Another key requirement in modernizing your migrated data warehouse is to integr
### Microsoft technologies for data science on Azure > [!TIP]
-> Develop machine learning models using a no/low code approach or from a range of programming languages like Python, R and .NET.
+> Develop machine learning models using a no/low-code approach or from a range of programming languages like Python, R, and .NET.
Microsoft offers a range of technologies to build predictive analytical models using machine learning, analyze unstructured data using deep learning, and perform other kinds of advanced analytics. This includes:
Microsoft offers a range of technologies to build predictive analytical models u
- Azure Machine Learning -- Azure Synapse Spark Pool Notebooks
+- Azure Synapse Spark pool notebooks
-- ML.NET (API, CLI or .NET Model Builder for Visual Studio)
+- ML.NET (API, CLI, or ML.NET Model Builder for Visual Studio)
- .NET for Apache Spark
Azure Machine Learning Studio is a fully managed cloud service that lets you eas
> [!TIP] > Azure Machine Learning provides an SDK for developing machine learning models using several open-source frameworks.
-Azure Machine Learning provides a software development kit (SDK) and services for Python to quickly prepare data, as well as train and deploy machine learning models. Use Azure Machine Learning from Azure notebooks (a Jupyter Notebook service) and utilize open-source frameworks, such as PyTorch, TensorFlow, Spark MLlib (Azure Synapse Spark Pool Notebooks), or scikit-learn. Azure Machine Learning provides an AutoML capability that automatically identifies the most accurate algorithms to expedite model development. You can also use it to build machine learning pipelines that manage end-to-end workflow, programmatically scale on the cloud, and deploy models both to the cloud and the edge. Azure Machine Learning uses logical containers called workspaces, which can be either created manually from the Azure portal or created programmatically. These workspaces keep compute targets, experiments, data stores, trained machine learning models, docker images, and deployed services all in one place to enable teams to work together. Use Azure Machine Learning from Visual Studio with a Visual Studio for AI extension.
+Azure Machine Learning provides a software development kit (SDK) and services for Python to quickly prepare data, as well as train and deploy machine learning models. Use Azure Machine Learning from Azure notebooks (a Jupyter Notebook service) and utilize open-source frameworks, such as PyTorch, TensorFlow, Spark MLlib (Azure Synapse Spark pool notebooks), or scikit-learn. Azure Machine Learning provides an AutoML capability that automatically identifies the most accurate algorithms to expedite model development. You can also use it to build machine learning pipelines that manage end-to-end workflow, programmatically scale on the cloud, and deploy models both to the cloud and the edge. Azure Machine Learning uses logical containers called workspaces, which can be either created manually from the Azure portal or created programmatically. These workspaces keep compute targets, experiments, data stores, trained machine learning models, Docker images, and deployed services all in one place to enable teams to work together. Use Azure Machine Learning from Visual Studio with a Visual Studio for AI extension.
> [!TIP]
-> Organize and manage related data stores, experiments, trained models, docker images and deployed services in workspaces.
+> Organize and manage related data stores, experiments, trained models, Docker images, and deployed services in workspaces.
-#### Azure Synapse Spark Pool Notebooks
+#### Azure Synapse Spark pool notebooks
> [!TIP]
-> Azure Synapse Spark is Microsoft's dynamically scalable Spark-as-a-service offering scalable execution of data preparation, model development and deployed model execution.
+> Azure Synapse Spark is Microsoft's dynamically scalable Spark-as-a-service, offering scalable execution of data preparation, model development, and deployed model execution.
-[Azure Synapse Spark Pool Notebooks](../../spark/apache-spark-development-using-notebooks.md?msclkid=cbe4b8ebcff511eca068920ea4bf16b9) is an Apache Spark service optimized to run on Azure which:
+[Azure Synapse Spark pool notebooks](../../spark/apache-spark-development-using-notebooks.md?msclkid=cbe4b8ebcff511eca068920ea4bf16b9) is an Apache Spark service optimized to run on Azure, which:
-- Allows data engineers to build and execute scalable data preparation jobs using Azure Data Factory
+- Allows data engineers to build and execute scalable data preparation jobs using Azure Data Factory.
-- Allows data scientists to build and execute machine learning models at scale using notebooks written in languages such as Scala, R, Python, Java, and SQL; and to visualize results
+- Allows data scientists to build and execute machine learning models at scale using notebooks written in languages such as Scala, R, Python, Java, and SQL; and to visualize results.
> [!TIP] > Azure Synapse Spark can access data in a range of Microsoft analytical ecosystem data stores on Azure.
-Jobs running in Azure Synapse Spark Pool Notebook can retrieve, process, and analyze data at scale from Azure Blob Storage, Azure Data Lake Storage, Azure Synapse, Azure HDInsight, and streaming data services such as Kafka.
+Jobs running in Azure Synapse Spark pool notebooks can retrieve, process, and analyze data at scale from Azure Blob Storage, Azure Data Lake Storage, Azure Synapse, Azure HDInsight, and streaming data services such as Kafka.
Autoscaling and auto-termination are also supported to reduce total cost of ownership (TCO). Data scientists can use the MLflow open-source framework to manage the machine learning lifecycle.
Autoscaling and auto-termination are also supported to reduce total cost of owne
> [!TIP] > Microsoft has extended its machine learning capability to .NET developers.
-ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS), created by Microsoft for .NET developers so that they can use existing tools&mdash;like .NET Model Builder for Visual Studio&mdash;to develop custom machine learning models and integrate them into .NET applications.
+ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS), created by Microsoft for .NET developers so that they can use existing tools&mdash;like ML.NET Model Builder for Visual Studio&mdash;to develop custom machine learning models and integrate them into .NET applications.
#### .NET for Apache Spark
-.NET for Apache Spark aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark Pool Notebook.
+.NET for Apache Spark aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark pool notebooks.
### Use Azure Synapse Analytics with your data warehouse > [!TIP]
-> Train, test, evaluate, and execute machine learning models at scale on Azure Synapse Spark Pool Notebook using data in Azure Synapse.
+> Train, test, evaluate, and execute machine learning models at scale on Azure Synapse Spark pool notebooks by using data in Azure Synapse.
-Combine machine learning models built using the tools with Azure Synapse by:
+Combine machine learning models with Azure Synapse by:
- Using machine learning models in batch mode or in real-time to produce new insights, and add them to what you already know in Azure Synapse.
Combine machine learning models built using the tools with Azure Synapse by:
> [!TIP] > Produce new insights using machine learning on Azure in batch or in real-time and add to what you know in your data warehouse.
-In terms of machine learning model development, data scientists can use RStudio, Jupyter Notebooks, and Azure Synapse Spark Pool notebooks together with Microsoft Azure Machine Learning to develop machine learning models that run at scale on Azure Synapse Spark Pool Notebooks using data in Azure Synapse. For example, they could create an unsupervised model to segment customers for use in driving different marketing campaigns. Use supervised machine learning to train a model to predict a specific outcome, such as predicting a customer's propensity to churn, or recommending the next best offer for a customer to try to increase their value. The next diagram shows how Azure Synapse Analytics can be leveraged for Machine Learning.
+In terms of machine learning model development, data scientists can use RStudio, Jupyter Notebooks, and Azure Synapse Spark pool notebooks together with Azure Machine Learning to develop machine learning models that run at scale on Azure Synapse Spark pool notebooks using data in Azure Synapse. For example, they could create an unsupervised model to segment customers for use in driving different marketing campaigns. Use supervised machine learning to train a model to predict a specific outcome, such as predicting a customer's propensity to churn, or recommending the next best offer for a customer to try to increase their value. The next diagram shows how Azure Synapse Analytics can be leveraged for Azure Machine Learning.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-synapse-train-predict.png" border="true" alt-text="Screenshot of an Azure Synapse Analytics train and predict model.":::
-In addition, you can ingest big data&mdash;such as social network data or review website data&mdash;into Azure Data Lake, then prepare and analyze it at scale on Azure Synapse Spark Pool Notebook, using natural language processing to score sentiment about your products or your brand. Add these scores to your data warehouse to understand the impact of&mdash;for example&mdash;negative sentiment on product sales, and to leverage big data analytics to add to what you already know in your data warehouse.
+In addition, you can ingest big data&mdash;such as social network data or review website data&mdash;into Azure Data Lake, then prepare and analyze it at scale on Azure Synapse Spark pool notebooks, using natural language processing to score sentiment about your products or your brand. Add these scores to your data warehouse to understand the impact of&mdash;for example&mdash;negative sentiment on product sales, and to leverage big data analytics to add to what you already know in your data warehouse.
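One way to realize this kind of in-database scoring in a dedicated SQL pool is the T-SQL `PREDICT` function over a model exported to ONNX. The sketch below is hypothetical: it assumes the trained model has already been loaded into a `dbo.Models` table and that a `dbo.CustomerFeatures` table holds the input columns the model expects.

```sql
-- Score customers in-database with a previously trained ONNX model (hypothetical names).
SELECT d.*, p.ChurnProbability
FROM PREDICT(
        MODEL = (SELECT Model FROM dbo.Models WHERE ModelName = 'customer-churn'),
        DATA = dbo.CustomerFeatures AS d,
        RUNTIME = ONNX)
WITH (ChurnProbability FLOAT) AS p;
```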
## Integrate live streaming data into Azure Synapse Analytics
When analyzing data in a modern data warehouse, you must be able to analyze stre
Once you've successfully migrated your data warehouse to Azure Synapse, you can introduce this capability as part of a data warehouse modernization exercise. Do this by taking advantage of additional functionality in Azure Synapse. > [!TIP]
-> Ingest streaming data into Azure Data Lake Storage from Microsoft Event Hub or Kafka, and access it from Azure Synapse using PolyBase external tables.
+> Ingest streaming data into Azure Data Lake Storage from Azure Event Hubs or Kafka, and access it from Azure Synapse using PolyBase external tables.
-To do this, ingest streaming data via Microsoft Event Hubs or other technologies, such as Kafka, using Azure Data Factory (or using an existing ETL tool if it supports the streaming data sources). Store the data in Azure Data Lake Storage (ADLS). Next, create an external table in Azure Synapse using PolyBase and point it at the data being streamed into Azure Data Lake. Your migrated data warehouse will now contain new tables that provide access to real-time streaming data. Query this external table as if the data was in the data warehouse via standard TSQL from any BI tool that has access to Azure Synapse. You can also join this data to other tables containing historical data and create views that join live streaming data to historical data to make it easier for business users to access. In the following diagram, a real-time data warehouse on Azure Synapse Analytics is integrated with streaming data in Azure Data Lake.
+To do this, ingest streaming data via Azure Event Hubs or other technologies, such as Kafka, using Azure Data Factory (or using an existing ETL tool if it supports the streaming data sources). Store the data in Azure Data Lake Storage (ADLS). Next, create an external table in Azure Synapse using PolyBase and point it at the data being streamed into Azure Data Lake. Your migrated data warehouse will now contain new tables that provide access to real-time streaming data. Query this external table as if the data was in the data warehouse via standard T-SQL from any BI tool that has access to Azure Synapse. You can also join this data to other tables containing historical data and create views that join live streaming data to historical data to make it easier for business users to access. In the following diagram, a real-time data warehouse on Azure Synapse Analytics is integrated with streaming data in ADLS.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-datalake-streaming-data.png" border="true" alt-text="Screenshot of Azure Synapse Analytics with streaming data in an Azure Data Lake.":::
To do this, ingest streaming data via Microsoft Event Hubs or other technologies
PolyBase offers the capability to create a logical data warehouse to simplify user access to multiple analytical data stores.
-This is attractive because many companies have adopted 'workload optimized' analytical data stores over the last several years in addition to their data warehouses. Examples of these platforms on Azure include:
+This is attractive because many companies have adopted "workload optimized" analytical data stores over the last several years in addition to their data warehouses. Examples of these platforms on Azure include:
-- Azure Data Lake Storage with Azure Synapse Spark Pool Notebook (Spark-as-a-service), for big data analytics.
+- ADLS with Azure Synapse Spark pool notebook (Spark-as-a-service), for big data analytics.
- Azure HDInsight (Hadoop as-a-service), also for big data analytics.
These additional analytical platforms have emerged because of the explosion of n
- Machine generated data, such as IoT sensor data and clickstream data. -- Human generated data, such as social network data, review web site data, customer in-bound email, image, and video.
+- Human-generated data, such as social network data, review website data, customer inbound email, images, and video.
- Other external data, such as open government data and weather data.
-This data is over and above the structured transaction data and master data sources that typically feed data warehouses. These new data sources include semi-structured data (like JSON, XML, or Avro) or unstructured data (like text, voice, image, or video) which is more complex to process and analyze. This data could be very high volume, high velocity, or both.
+This data is over and above the structured transaction data and master data sources that typically feed data warehouses. These new data sources include semi-structured data (like JSON, XML, or Avro) or unstructured data (like text, voice, image, or video), which is more complex to process and analyze. This data could be very high volume, high velocity, or both.
-As a result, the need for new kinds of more complex analysis has emerged, such as natural language processing, graph analysis, deep learning, streaming analytics, or complex analysis of large volumes of structured data. All of this is typically not happening in a data warehouse, so it's not surprising to see different analytical platforms for different types of analytical workloads, as shown in this diagram.
+As a result, the need for new kinds of more complex analysis has emerged, such as natural language processing, graph analysis, deep learning, streaming analytics, or complex analysis of large volumes of structured data. All of this is typically not happening in a data warehouse, so it's not surprising to see different analytical platforms for different types of analytical workloads, as shown in the following diagram.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/analytical-workload-platforms.png" border="true" alt-text="Screenshot of different analytical platforms for different types of analytical workloads in Azure Synapse Analytics.":::
Since these platforms are producing new insights, it's normal to see a requireme
> [!TIP] > The ability to make data in multiple analytical data stores look like it's all in one system and join it to Azure Synapse is known as a logical data warehouse architecture.
-By leveraging PolyBase data virtualization inside Azure Synapse, you can implement a logical data warehouse. Join data in Azure Synapse to data in other Azure and on-premises analytical data stores&mdash;like Azure HDInsight or Cosmos DB&mdash;or to streaming data flowing into Azure Data Lake Storage from Azure Stream Analytics and Event Hubs. Users access external tables in Azure Synapse, unaware that the data they're accessing is stored in multiple underlying analytical systems. The next diagram shows the complex data warehouse structure accessed through comparatively simpler but still powerful user interface methods.
+By leveraging PolyBase data virtualization inside Azure Synapse, you can implement a logical data warehouse. Join data in Azure Synapse to data in other Azure and on-premises analytical data stores&mdash;like Azure HDInsight or Azure Cosmos DB&mdash;or to streaming data flowing into ADLS from Azure Stream Analytics and Event Hubs. Users access external tables in Azure Synapse, unaware that the data they're accessing is stored in multiple underlying analytical systems. The next diagram shows the complex data warehouse structure accessed through comparatively simpler but still powerful user interface methods.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/complex-data-warehouse-structure.png" alt-text="Screenshot showing an example of a complex data warehouse structure accessed through user interface methods.":::
-The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into Azure Data Lake Storage and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](../../database-designer/concepts-lake-database.md) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark Pool Notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
+The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into ADLS and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](../../database-designer/concepts-lake-database.md) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark pool notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
> [!TIP] > A logical data warehouse architecture simplifies business user access to data and adds new value to what you already know in your data warehouse.
The previous diagram shows how other technologies of the Microsoft analytical ec
> [!TIP] > Migrating your data warehouse to Azure Synapse lets you make use of a rich Microsoft analytical ecosystem running on Azure.
-Once you migrate your data warehouse to Azure Synapse, you can leverage other technologies in the Microsoft analytical ecosystem. You can't only modernize your data warehouse, but combine insights produced in other Azure analytical data stores into an integrated analytical architecture.
+Once you migrate your data warehouse to Azure Synapse, you can leverage other technologies in the Microsoft analytical ecosystem. You can not only modernize your data warehouse, but also combine insights produced in other Azure analytical data stores into an integrated analytical architecture.
-Broaden your ETL processing to ingest data of any type into Azure Data Lake Storage. Prepare and integrate it at scale using Azure Data Factory to produce trusted, commonly understood data assets that can be consumed by your data warehouse and accessed by data scientists and other applications. Build real-time and batch-oriented analytical pipelines and create machine learning models to run in batch, in-real-time on streaming data and on-demand as a service.
+Broaden your ETL processing to ingest data of any type into ADLS. Prepare and integrate it at scale using Azure Data Factory to produce trusted, commonly understood data assets that can be consumed by your data warehouse and accessed by data scientists and other applications. Build real-time and batch-oriented analytical pipelines and create machine learning models to run in batch, in real-time on streaming data, and on-demand as a service.
Leverage PolyBase and `COPY INTO` to go beyond your data warehouse. Simplify access to insights from multiple underlying analytical platforms on Azure by creating holistic integrated views in a logical data warehouse. Easily access streaming, big data, and traditional data warehouse insights from BI tools and applications to drive new value in your business. ## Next steps
-To learn more about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
+To learn more about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
synapse-analytics Synapse Workspace Ip Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-ip-firewall.md
You can also add IP firewall rules to a Synapse workspace after the workspace is
You can connect to your Synapse workspace using Synapse Studio. You can also use SQL Server Management Studio (SSMS) to connect to the SQL resources (dedicated SQL pools and serverless SQL pool) in your workspace.
-Make sure that the firewall on your network and local computer allows outgoing communication on TCP ports 80, 443 and 1433 for Synapse Studio.
-For private endpoints of your workspace target resources (Sql, SqlOnDemand, Dev), allow outgoing communication on TCP port 443 and 1433, unless you have configured other custom ports.
+Make sure that the firewall on your network and local computer allows outgoing communication on TCP ports 80, 443 and 1443 for Synapse Studio.
Also, you need to allow outgoing communication on UDP port 53 for Synapse Studio. To connect using tools such as SSMS and Power BI, you must allow outgoing communication on TCP port 1433.
virtual-desktop Create Profile Container Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-profile-container-azure-ad.md
Previously updated : 04/05/2022 Last updated : 06/03/2022 # Create a profile container with Azure Files and Azure Active Directory (preview)
To enable Azure AD authentication on a storage account, you need to create an Az
} '@ $now = [DateTime]::UtcNow
- $json = $json -replace "<STORAGEACCOUNTSTARTDATE>", $now.AddDays(-1).ToString("s")
+ $json = $json -replace "<STORAGEACCOUNTSTARTDATE>", $now.AddHours(-12).ToString("s")
$json = $json -replace "<STORAGEACCOUNTENDDATE>", $now.AddMonths(6).ToString("s") $json = $json -replace "<STORAGEACCOUNTPASSWORD>", $password $Headers = @{'authorization' = "Bearer $($Token)"}
The service principal's password will expire every six months. To update the pas
'@ $now = [DateTime]::UtcNow
- $json = $json -replace "<STORAGEACCOUNTSTARTDATE>", $now.AddDays(-1).ToString("s")
+ $json = $json -replace "<STORAGEACCOUNTSTARTDATE>", $now.AddHours(-12).ToString("s")
$json = $json -replace "<STORAGEACCOUNTENDDATE>", $now.AddMonths(6).ToString("s") $json = $json -replace "<STORAGEACCOUNTPASSWORD>", $password
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Users can now make meetings more personalized and avoid unexpected distractions
### Multi-window and "Call me with Teams" features now generally available
-The multi-window feature gives users the option to pop out chats, meetings, calls, or documents into separate windows to streamline their workflow. The "Call me" feature lets users transfer a Teams call to their phone. Both features are now generally available in Teams on Azure Virtual Desktop. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/microsoft-teams-multi-window-support-and-call-me-are-now-in-ga/ba-p/3401830).
+The multi-window feature gives users the option to pop out chats, meetings, calls, or documents into separate windows to streamline their workflow. The "Call me with Teams" feature lets users transfer a Teams call to their phone. Both features are now generally available in Teams on Azure Virtual Desktop. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/microsoft-teams-multi-window-support-and-call-me-are-now-in-ga/ba-p/3401830).
### Japan metadata service in public preview
virtual-machine-scale-sets Cli Sample Enable Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/scripts/cli-sample-enable-autoscale.md
This script uses the commands outlined in the following table:
||| | [az group create](/cli/azure/ad/group) | Creates a resource group in which all resources are stored. | | [az vmss create](/cli/azure/vmss) | Creates the virtual machine scale set and connects it to the virtual network, subnet, and network security group. A load balancer is also created to distribute traffic to multiple VM instances. This command also specifies the VM image to be used and administrative credentials. |
-| [az monitor autoscale-settings create](/cli/azure/monitor/autoscale-settings) | Creates and applies autoscale rules to a virtual machine scale set. |
+| [az monitor autoscale-settings create](/cli/azure/monitor/autoscale) | Creates and applies autoscale rules to a virtual machine scale set. |
| [az group delete](/cli/azure/ad/group) | Deletes a resource group including all nested resources. | ## Next steps
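The table above links to the `az monitor autoscale` command group. As a rough sketch, not taken from the sample script itself, a default autoscale profile and a CPU-based scale-out rule could be created as follows; the resource group, scale set, and setting names are assumptions.

```azurecli-interactive
# Create a baseline autoscale profile for the scale set (resource names are assumptions).
az monitor autoscale create \
  --resource-group myResourceGroup \
  --resource myScaleSet \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name autoscaleSetting \
  --min-count 2 \
  --max-count 10 \
  --count 2

# Add a rule that scales out by 3 instances when average CPU exceeds 70% over 5 minutes.
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name autoscaleSetting \
  --condition "Percentage CPU > 70 avg 5m" \
  --scale out 3
```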
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
As the Azure Compute Gallery, definition, and version are all resources, they ca
We recommend sharing at the Gallery level for the best experience. We do not recommend sharing individual image versions. For more information about Azure RBAC, see [Assign Azure roles](../role-based-access-control/role-assignments-portal.md).
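As an illustration only, granting read access at the gallery scope with Azure CLI might look like the following; the user, subscription ID, resource group, and gallery names are placeholders.

```azurecli-interactive
# Assign the Reader role at the gallery scope (all identifiers below are placeholders).
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myGalleryResourceGroup/providers/Microsoft.Compute/galleries/myGallery"
```

Assigning the role at the gallery scope gives the user read access to every image definition and version inside that gallery.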
+## Activity Log
+The [Activity log](../azure-monitor/essentials/activity-log.md) displays recent activity on the gallery, image, or version, including any configuration changes and when it was created or deleted. View the activity log in the Azure portal, or create a [diagnostic setting to send it to a Log Analytics workspace](../azure-monitor/essentials/activity-log.md#send-to-log-analytics-workspace), where you can view events over time or analyze them with other collected data.
+
+The following table lists a few example entries that relate to gallery operations in the activity log. For a complete list of possible log entries, see [Microsoft.Compute Resource Provider options](../role-based-access-control/resource-provider-operations.md#compute).
+
+| Operation | Description |
+|:|:|
+| Microsoft.Compute/galleries/write | Creates a new Gallery or updates an existing one |
+| Microsoft.Compute/galleries/delete | Deletes the Gallery |
+| Microsoft.Compute/galleries/share/action | Shares a Gallery to different scopes |
+| Microsoft.Compute/galleries/images/read | Gets the properties of Gallery Image |
+| Microsoft.Compute/galleries/images/write | Creates a new Gallery Image or updates an existing one |
+| Microsoft.Compute/galleries/images/versions/read | Gets the properties of Gallery Image Version |
+| | |
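As a complement to the portal view, the following is a hedged Azure CLI sketch for listing recent gallery operations, such as those in the table above, from the Activity log; the resource group name is an assumption.

```azurecli-interactive
# List gallery-related Activity log entries from the last 7 days (resource group name is an assumption).
az monitor activity-log list \
  --resource-group myGalleryResourceGroup \
  --offset 7d \
  --query "[?contains(operationName.value, 'Microsoft.Compute/galleries')].{Operation:operationName.value, Time:eventTimestamp, Status:status.value}" \
  --output table
```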
+ ## Billing There is no extra charge for using the Azure Compute Gallery service. You will be charged for the following resources:
virtual-machines Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-cli.md
Previously updated : 03/30/2021 Last updated : 06/01/2022
The following example creates a VM named *myVM* and adds a user account named *a
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS \
+ --image Debian \
--admin-username azureuser \ --generate-ssh-keys ```
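If you want to confirm which Debian images are available before running the command above, a minimal sketch follows; it queries the live marketplace and can take a minute to return.

```azurecli-interactive
# Optional: list Debian marketplace images (slow, queries all versions).
az vm image list --publisher Debian --all --output table
```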
It takes a few minutes to create the VM and supporting resources. The following
} ```
-Note your own `publicIpAddress` in the output from your VM. This address is used to access the VM in the next steps.
+Make a note of the `publicIpAddress` to use later.
-
-## Open port 80 for web traffic
+## Install web server
-By default, only SSH connections are opened when you create a Linux VM in Azure. Use [az vm open-port](/cli/azure/vm) to open TCP port 80 for use with the NGINX web server:
+To see your VM in action, install the NGINX web server. Update your package sources and then install the latest NGINX package.
```azurecli-interactive
-az vm open-port --port 80 --resource-group myResourceGroup --name myVM
-```
-
-## Connect to virtual machine
-
-SSH to your VM as normal. Replace the IP address in the example with the public IP address of your VM as noted in the previous output:
-
-```bash
-ssh azureuser@40.68.254.142
+az vm run-command invoke \
+ -g myResourceGroup \
+ -n myVM \
+ --command-id RunShellScript \
+ --scripts "sudo apt-get update && sudo apt-get install -y nginx"
```
-## Install web server
+## Open port 80 for web traffic
-To see your VM in action, install the NGINX web server. Update your package sources and then install the latest NGINX package.
+By default, only SSH connections are opened when you create a Linux VM in Azure. Use [az vm open-port](/cli/azure/vm) to open TCP port 80 for use with the NGINX web server:
-```bash
-sudo apt-get -y update
-sudo apt-get -y install nginx
+```azurecli-interactive
+az vm open-port --port 80 --resource-group myResourceGroup --name myVM
```
-When done, type `exit` to leave the SSH session.
- ## View the web server in action Use a web browser of your choice to view the default NGINX welcome page. Use the public IP address of your VM as the web address. The following example shows the default NGINX web site:
-![View the NGINX welcome page](./media/quick-create-cli/view-the-nginx-welcome-page.png)
+![Screenshot showing the N G I N X default web page.](./media/quick-create-cli/nginix-welcome-page-debian.png)
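If you prefer to check from the command line instead of a browser, a minimal sketch follows, using the same resource names as above and assuming `curl` is available in your shell.

```azurecli-interactive
# Fetch the VM's public IP address and request the default NGINX page (assumes curl is available).
IP_ADDRESS=$(az vm list-ip-addresses \
  --resource-group myResourceGroup \
  --name myVM \
  --query "[0].virtualMachine.network.publicIpAddresses[0].ipAddress" \
  --output tsv)
curl --max-time 10 "http://$IP_ADDRESS"
```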
## Clean up resources
In this quickstart, you deployed a simple virtual machine, opened a network port
> [!div class="nextstepaction"] > [Azure Linux virtual machine tutorials](./tutorial-manage-vm.md)++
virtual-machines Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-powershell.md
Previously updated : 01/14/2022 Last updated : 06/01/2022
To open the Cloud Shell, just select **Try it** from the upper right corner of a
Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed: ```azurepowershell-interactive
-New-AzResourceGroup -Name "myResourceGroup" -Location "EastUS"
+New-AzResourceGroup -Name 'myResourceGroup' -Location 'EastUS'
``` - ## Create a virtual machine
-We will be automatically generating an SSH key pair to use for connecting to the VM. The public key that is created using `-GenerateSshKey` will be stored in Azure as a resource, using the name you provide as `SshKeyName`. The SSH key resource can be reused for creating additional VMs. Both the public and private keys will also downloaded for you. When you create your SSH key pair using the Cloud Shell, the keys are stored in a [storage account that is automatically created by Cloud Shell](../../cloud-shell/persisting-shell-storage.md). Don't delete the storage account, or the file share in it, until after you have retrieved your keys or you will lose access to the VM.
+We will be automatically generating an SSH key pair to use for connecting to the VM. The public key that is created using `-GenerateSshKey` will be stored in Azure as a resource, using the name you provide as `SshKeyName`. The SSH key resource can be reused for creating additional VMs. Both the public and private keys will also be downloaded for you. When you create your SSH key pair using the Cloud Shell, the keys are stored in a [storage account that is automatically created by Cloud Shell](../../cloud-shell/persisting-shell-storage.md). Don't delete the storage account, or the file share in it, until after you have retrieved your keys or you will lose access to the VM.
You will be prompted for a user name that will be used when you connect to the VM. You will also be asked for a password, which you can leave blank. Password login for the VM is disabled when using an SSH key.
In this example, you create a VM named *myVM*, in *East US*, using the *Standard
```azurepowershell-interactive New-AzVm `
- -ResourceGroupName "myResourceGroup" `
- -Name "myVM" `
- -Location "East US" `
- -Image UbuntuLTS `
+ -ResourceGroupName 'myResourceGroup' `
+ -Name 'myVM' `
+ -Location 'East US' `
+ -Image Debian `
-size Standard_B2s ` -PublicIpAddressName myPubIP `
- -OpenPorts 80,22 `
+ -OpenPorts 80 `
-GenerateSshKey ` -SshKeyName mySSHKey ```
Private key is saved to /home/user/.ssh/1234567891
Public key is saved to /home/user/.ssh/1234567891.pub ```
-Make a note of the path to your private key to use later.
- It will take a few minutes for your VM to be deployed. When the deployment is finished, move on to the next section.
+## Install NGINX
-## Connect to the VM
-
-You need to change the permission on the SSH key using `chmod`. Replace *~/.ssh/1234567891* in the following example with the private key name and path from the earlier output.
-
-```azurepowershell-interactive
-chmod 600 ~/.ssh/1234567891
-```
-
-Create an SSH connection with the VM using the public IP address. To see the public IP address of the VM, use the [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) cmdlet:
+To see your VM in action, install the NGINX web server.
```azurepowershell-interactive
-Get-AzPublicIpAddress -ResourceGroupName "myResourceGroup" | Select "IpAddress"
+Invoke-AzVMRunCommand `
+ -ResourceGroupName 'myResourceGroup' `
+ -Name 'myVM' `
+ -CommandId 'RunShellScript' `
+ -ScriptString 'sudo apt-get update && sudo apt-get install -y nginx'
```
-Using the same shell you used to create your SSH key pair, paste the the following command into the shell to create an SSH session. Replace *~/.ssh/1234567891* in the following example with the private key name and path from the earlier output. Replace *10.111.12.123* with the IP address of your VM and *azureuser* with the name you provided when you created the VM.
+The `-ScriptString` parameter requires version `4.27.0` or later of the `Az.Compute` module.
-```bash
-ssh -i ~/.ssh/1234567891 azureuser@10.111.12.123
-```
-## Install NGINX
+## View the web server in action
-To see your VM in action, install the NGINX web server. From your SSH session, update your package sources and then install the latest NGINX package.
+Get the public IP address of your VM:
-```bash
-sudo apt-get -y update
-sudo apt-get -y install nginx
+```azurepowershell-interactive
+Get-AzPublicIpAddress -Name myPubIP -ResourceGroupName myResourceGroup | select "IpAddress"
```
-When done, type `exit` to leave the SSH session.
--
-## View the web server in action
-
-Use a web browser of your choice to view the default NGINX welcome page. Enter the public IP address of the VM as the web address. The public IP address can be found on the VM overview page or as part of the SSH connection string you used earlier.
+Use a web browser of your choice to view the default NGINX welcome page. Enter the public IP address of the VM as the web address.
-![NGINX default Welcome page](./media/quick-create-cli/nginix-welcome-page.png)
+![Screenshot showing the N G I N X default web page.](./media/quick-create-cli/nginix-welcome-page-debian.png)
## Clean up resources When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to remove the resource group, VM, and all related resources: ```azurepowershell-interactive
-Remove-AzResourceGroup -Name "myResourceGroup"
+Remove-AzResourceGroup -Name 'myResourceGroup'
``` ## Next steps
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-system.md
# Configure SAP system parameters
-Configuration for the [SAP deployment automation framework on Azure](automation-deployment-framework.md)] happens through parameters files. You provide information about your SAP system properties in a tfvars file, which the automation framework uses for deployment.
+Configuration for the [SAP deployment automation framework on Azure](automation-deployment-framework.md) happens through parameter files. You provide information about your SAP system properties in a tfvars file, which the automation framework uses for deployment. You can find examples of the variable file in the `samples/WORKSPACES/SYSTEM` folder.
The automation supports both creating resources (green field deployment) or using existing resources (brownfield deployment).
The table below contains the parameters that define the environment settings.
> | `use_prefix` | Controls if the resource naming includes the prefix | Optional | DEV-WEEU-SAP01-X00_xxxx | > | 'name_override_file' | Name override file | Optional | see [Custom naming](automation-naming-module.md) | - ## Resource group parameters The table below contains the parameters that define the resource group.
The table below contains the parameters that define the resource group.
> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional | ++
+## SAP Virtual Hostname parameters
+
+In the SAP deployment automation framework, the SAP virtual hostname is defined by specifying the `use_secondary_ips` parameter.
++
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type |
+> | -- | -- | - |
+> | `use_secondary_ips` | Boolean flag indicating if SAP should be installed using Virtual hostnames | Optional |
+ ### Database tier parameters The database tier defines the infrastructure for the database tier, supported database backends are:
The Virtual Machine and the operating system image is defined using the followin
publisher="SUSE" offer="sles-sap-15-sp3" sku="gen2"
- version="8.2.2021040902"
+ version="latest"
} ```
The table below contains the networking parameters.
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | -- | -- | | |
-> | `network_logical_name` | The logical name of the network. | Required | |
-> | `network_address_space` | The address range for the virtual network. | Mandatory | For new environment deployments |
-> | `admin_subnet_name` | The name of the 'admin' subnet. | Optional | |
-> | `admin_subnet_address_prefix` | The address range for the 'admin' subnet. | Mandatory | For new environment deployments |
-> | `admin_subnet_arm_id` | The Azure resource identifier for the 'admin' subnet. | Mandatory | For existing environment deployments |
-> | `admin_subnet_nsg_name` | The name of the 'admin' Network Security Group name. | Optional | |
-> | `admin_subnet_nsg_arm_id` | The Azure resource identifier for the 'admin' Network Security Group | Mandatory | For existing environment deployments |
-> | `db_subnet_name` | The name of the 'db' subnet. | Optional | |
-> | `db_subnet_address_prefix` | The address range for the 'db' subnet. | Mandatory | For new environment deployments |
-> | `db_subnet_arm_id` | The Azure resource identifier for the 'db' subnet. | Mandatory | For existing environment deployments |
-> | `db_subnet_nsg_name` | The name of the 'db' Network Security Group name. | Optional | |
-> | `db_subnet_nsg_arm_id` | The Azure resource identifier for the 'db' Network Security Group. | Mandatory | For existing environment deployments |
-> | `app_subnet_name` | The name of the 'app' subnet. | Optional | |
-> | `app_subnet_address_prefix` | The address range for the 'app' subnet. | Mandatory | For new environment deployments |
-> | `app_subnet_arm_id` | The Azure resource identifier for the 'app' subnet. | Mandatory | For existing environment deployments |
-> | `app_subnet_nsg_name` | The name of the 'app' Network Security Group name. | Optional | |
-> | `app_subnet_nsg_arm_id` | The Azure resource identifier for the 'app' Network Security Group. | Mandatory | For existing environment deployments |
-> | `web_subnet_name` | The name of the 'web' subnet. | Optional | |
-> | `web_subnet_address_prefix` | The address range for the 'web' subnet. | Mandatory | For new environment deployments |
-> | `web_subnet_arm_id` | The Azure resource identifier for the 'web' subnet. | Mandatory | For existing environment deployments |
-> | `web_subnet_nsg_name` | The name of the 'web' Network Security Group name. | Optional | |
-> | `web_subnet_nsg_arm_id` | The Azure resource identifier for the 'web' Network Security Group. | Mandatory | For existing environment deployments |
-
-\* = Required for existing environment deployments
+> | Variable | Description | Type | Notes |
+> | -- | -- | | - |
+> | `network_logical_name` | The logical name of the network. | Required | |
+> | | | Optional | |
+> | `admin_subnet_name` | The name of the 'admin' subnet. | Optional | |
+> | `admin_subnet_address_prefix` | The address range for the 'admin' subnet. | Mandatory | For green field deployments. |
+> | `admin_subnet_arm_id` * | The Azure resource identifier for the 'admin' subnet. | Mandatory | For brown field deployments. |
+> | `admin_subnet_nsg_name` | The name of the 'admin' Network Security Group name. | Optional | |
+> | `admin_subnet_nsg_arm_id` * | The Azure resource identifier for the 'admin' Network Security Group | Mandatory | For brown field deployments. |
+> | | | Optional | |
+> | `db_subnet_name` | The name of the 'db' subnet. | Optional | |
+> | `db_subnet_address_prefix` | The address range for the 'db' subnet. | Mandatory | For green field deployments. |
+> | `db_subnet_arm_id` * | The Azure resource identifier for the 'db' subnet. | Mandatory | For brown field deployments. |
+> | `db_subnet_nsg_name` | The name of the 'db' Network Security Group name. | Optional | |
+> | `db_subnet_nsg_arm_id` * | The Azure resource identifier for the 'db' Network Security Group. | Mandatory | For brown field deployments. |
+> | | | Optional | |
+> | `app_subnet_name` | The name of the 'app' subnet. | Optional | |
+> | `app_subnet_address_prefix` | The address range for the 'app' subnet. | Mandatory | For green field deployments. |
+> | `app_subnet_arm_id` * | The Azure resource identifier for the 'app' subnet. | Mandatory | For brown field deployments. |
+> | `app_subnet_nsg_name` | The name of the 'app' Network Security Group name. | Optional | |
+> | `app_subnet_nsg_arm_id` * | The Azure resource identifier for the 'app' Network Security Group. | Mandatory | For brown field deployments. |
+> | | | Optional | |
+> | `web_subnet_name` | The name of the 'web' subnet. | Optional | |
+> | `web_subnet_address_prefix` | The address range for the 'web' subnet. | Mandatory | For green field deployments. |
+> | `web_subnet_arm_id` * | The Azure resource identifier for the 'web' subnet. | Mandatory | For brown field deployments. |
+> | `web_subnet_nsg_name` | The name of the 'web' Network Security Group name. | Optional | |
+> | `web_subnet_nsg_arm_id` * | The Azure resource identifier for the 'web' Network Security Group. | Mandatory | For brown field deployments. |
+
+\* = Required for brown field deployments.
### Anchor virtual machine parameters
The table below contains the parameters related to the anchor virtual machine.
The Virtual Machine and the operating system image is defined using the following structure: ```python {
-os_type=""
-source_image_id=""
-publisher="Canonical"
-offer="0001-com-ubuntu-server-focal"
-sku="20_04-lts"
-version="latest"
+ os_type="linux"
+ source_image_id=""
+ publisher="SUSE"
+ offer="sles-sap-15-sp3"
+ sku="gen2"
+ version="latest"
} ```
By default the SAP System deployment uses the credentials from the SAP Workload
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | - | -| -- |
-> | `automation_username` | Administrator account name | Optional |
+> | `automation_username` | Administrator account name | Optional |
> | `automation_password` | Administrator password | Optional | > | `automation_path_to_public_key` | Path to existing public key | Optional | > | `automation_path_to_private_key` | Path to existing private key | Optional |
By default the SAP System deployment uses the credentials from the SAP Workload
### Azure NetApp Files Support > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | - | --| -- | |
-> | `ANF_use_for_HANA_data` | Create Azure NetApp Files volume for HANA data. | Optional | |
-> | `ANF_use_existing_data_volume` | Use existing Azure NetApp Files volume for HANA data. | Optional | Use for pre-created volumes |
-> | `ANF_data_volume_name` | Azure NetApp Files volume name for HANA data. | Optional | |
-> | `ANF_HANA_data_volume_size` | Azure NetApp Files volume size in GB for HANA data. | Optional | default size 256 |
-> | | | | |
-> | `ANF_use_for_HANA_log` | Create Azure NetApp Files volume for HANA log. | Optional | |
-> | `ANF_use_existing_log_volume` | Use existing Azure NetApp Files volume for HANA log. | Optional | Use for pre-created volumes |
-> | `ANF_log_volume_name` | Azure NetApp Files volume name for HANA log. | Optional | |
-> | `ANF_HANA_log_volume_size` | Azure NetApp Files volume size in GB for HANA log. | Optional | default size 128 |
-> | | | | |
-> | `ANF_use_for_HANA_shared` | Create Azure NetApp Files volume for HANA shared. | Optional | |
-> | `ANF_use_existing_shared_volume` | Use existing Azure NetApp Files volume for HANA shared. | Optional | Use for pre-created volumes |
-> | `ANF_shared_volume_name` | Azure NetApp Files volume name for HANA shared. | Optional | |
-> | `ANF_HANA_shared_volume_size` | Azure NetApp Files volume size in GB for HANA shared. | Optional | default size 128 |
-> | | | | |
-> | `ANF_use_for_sapmnt` | Create Azure NetApp Files volume for sapmnt. | Optional | |
-> | `ANF_use_existing_sapmnt_volume` | Use existing Azure NetApp Files volume for sapmnt. | Optional | Use for pre-created volumes |
-> | `ANF_sapmnt_volume_name` | Azure NetApp Files volume name for sapmnt. | Optional | |
-> | `ANF_sapmnt_volume_size` | Azure NetApp Files volume size in GB for sapmnt. | Optional | default size 128 |
-> | | | | |
-> | `ANF_use_for_usrsap` | Create Azure NetApp Files volume for usrsap. | Optional | |
-> | `ANF_use_existing_usrsap_volume` | Use existing Azure NetApp Files volume for usrsap. | Optional | Use for pre-created volumes |
-> | `ANF_usrsap_volume_name` | Azure NetApp Files volume name for usrsap. | Optional | |
-> | `ANF_usrsap_volume_size` | Azure NetApp Files volume size in GB for usrsap. | Optional | default size 128 |
+> | Variable | Description | Type | Notes |
+> | - | --| -- | |
+> | `ANF_use_for_HANA_data` | Create Azure NetApp Files volume for HANA data. | Optional | |
+> | `ANF_use_existing_data_volume` | Use existing Azure NetApp Files volume for HANA data. | Optional | Use for pre-created volumes |
+> | `ANF_data_volume_name` | Azure NetApp Files volume name for HANA data. | Optional | |
+> | `ANF_HANA_data_volume_size` | Azure NetApp Files volume size in GB for HANA data. | Optional | default size 256 |
+> | | | | |
+> | `ANF_use_for_HANA_log` | Create Azure NetApp Files volume for HANA log. | Optional | |
+> | `ANF_use_existing_log_volume` | Use existing Azure NetApp Files volume for HANA log. | Optional | Use for pre-created volumes |
+> | `ANF_log_volume_name` | Azure NetApp Files volume name for HANA log. | Optional | |
+> | `ANF_HANA_log_volume_size` | Azure NetApp Files volume size in GB for HANA log. | Optional | default size 128 |
+> | | | | |
+> | `ANF_use_for_HANA_shared` | Create Azure NetApp Files volume for HANA shared. | Optional | |
+> | `ANF_use_existing_shared_volume` | Use existing Azure NetApp Files volume for HANA shared. | Optional | Use for pre-created volumes |
+> | `ANF_shared_volume_name` | Azure NetApp Files volume name for HANA shared. | Optional | |
+> | `ANF_HANA_shared_volume_size` | Azure NetApp Files volume size in GB for HANA shared. | Optional | default size 128 |
+> | | | | |
+> | `ANF_use_for_sapmnt` | Create Azure NetApp Files volume for sapmnt. | Optional | |
+> | `ANF_use_existing_sapmnt_volume` | Use existing Azure NetApp Files volume for sapmnt. | Optional | Use for pre-created volumes |
+> | `ANF_sapmnt_volume_name` | Azure NetApp Files volume name for sapmnt. | Optional | |
+> | `ANF_sapmnt_volume_size` | Azure NetApp Files volume size in GB for sapmnt. | Optional | default size 128 |
+> | | | | |
+> | `ANF_use_for_usrsap` | Create Azure NetApp Files volume for usrsap. | Optional | |
+> | `ANF_use_existing_usrsap_volume` | Use existing Azure NetApp Files volume for usrsap. | Optional | Use for pre-created volumes |
+> | `ANF_usrsap_volume_name` | Azure NetApp Files volume name for usrsap. | Optional | |
+> | `ANF_usrsap_volume_size` | Azure NetApp Files volume size in GB for usrsap. | Optional | default size 128 |
## Oracle parameters
virtual-machines Automation Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-workload-zone.md
An [SAP application](automation-deployment-framework.md#sap-concepts) typically
## Workload zone deployment configuration
-The configuration of the SAP workload zone is done via a Terraform tfvars variable file.
-
-## Terraform Parameters
-
-The table below contains the Terraform parameters, these parameters need to be entered manually if not using the deployment scripts.
+The configuration of the SAP workload zone is done via a Terraform tfvars variable file. You can find examples of the variable file in the `samples/WORKSPACES/LANDSCAPE` folder.
-
-| Variable | Type | Description |
-| -- | - | - |
-| `tfstate_resource_id` | Required * | Azure resource identifier for the Storage account in the SAP Library that will contain the Terraform state files |
-| `deployer_tfstate_key` | Required * | The name of the state file for the Deployer |
+The sections below describe the different parts of the variable file.
## Environment parameters
app_subnet_address_prefix = "10.110.32.0/19"
The table below defines the credentials used for defining the Virtual Machine authentication > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | -| -- |
-> | `automation_username` | Administrator account name | Optional |
-> | `automation_password` | Administrator password | Optional |
-> | `automation_path_to_public_key` | Path to existing public key | Optional |
-> | `automation_path_to_private_key` | Path to existing private key | Optional |
+> | Variable | Description | Type | Notes |
+> | - | -| -- | - |
+> | `automation_username` | Administrator account name | Optional | Default: 'azureadm' |
+> | `automation_password` | Administrator password | Optional | |
+> | `automation_path_to_public_key` | Path to existing public key | Optional | |
+> | `automation_path_to_private_key` | Path to existing private key | Optional | |
**Minimum required authentication definition**
ANF_service_level = "Ultra"
```
+## Other Parameters
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type | Notes |
+> | | - | -- | - |
+> | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. | Optional | Use only for test deployments |
+> | `use_private_endpoint` | Boolean flag controlling if private endpoints are used for storage accounts and key vaults. | Optional | |
+> | `diagnostics_storage_account_arm_id` | The Azure resource identifier for the diagnostics storage account | Required | For brown field deployments. |
+> | `witness_storage_account_arm_id` | The Azure resource identifier for the witness storage account | Required | For brown field deployments. |
++ ## ISCSI Parameters
ANF_service_level = "Ultra"
> | `iscsi_nic_ips` | IP addresses for the iSCSI Virtual Machines | Optional | ignored if `iscsi_use_DHCP` is defined |
-## Other Parameters
+## Terraform Parameters
-> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | | - | -- | - |
-> | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. | Optional | Use only for test deployments |
-> | `use_private_endpoint` | Boolean flag controlling if private endpoints are used for storage accounts and key vaults. | Optional | |
-> | `diagnostics_storage_account_arm_id` | The Azure resource identifier for the diagnostics storage account | Required | For brown field deployments. |
-> | `witness_storage_account_arm_id` | The Azure resource identifier for the witness storage account | Required | For brown field deployments. |
+The table below contains the Terraform parameters. These parameters need to be entered manually if not using the deployment scripts.
++
+| Variable | Type | Description |
+| -- | - | - |
+| `tfstate_resource_id` | Required * | Azure resource identifier for the Storage account in the SAP Library that will contain the Terraform state files |
+| `deployer_tfstate_key` | Required * | The name of the state file for the Deployer |
## Next Step
virtual-machines Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/deployment-guide.md
vm-linux Previously updated : 07/16/2020 Last updated : 06/02/2022 # Azure Virtual Machines deployment for SAP NetWeaver
For more information about user-defined routes, see [User-defined routes and IP
When you've prepared the VM as described in [Deployment scenarios of VMs for SAP on Azure][deployment-guide-3], the Azure VM Agent is installed on the virtual machine. The next step is to deploy the Azure Extension for SAP, which is available in the Azure Extension Repository in the global Azure datacenters. For more information, see [Configure the Azure Extension for SAP][deployment-guide-4.5].
+## Next steps
+
+Learn about [RHEL for SAP in-place upgrade](../redhat/redhat-in-place-upgrade.md#upgrade-sap-environments-from-rhel-7-vms-to-rhel-8-vms)
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 05/11/2022 Last updated : 06/02/2022
In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- June 02, 2022: Change in the [SAP Deployment Guide](deployment-guide.md) to add a link to RHEL in-place upgrade documentation
- June 02, 2022: Change in [HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md), [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) to add sizing considerations - May 11, 2022: Change in [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk in Azure](./sap-high-availability-guide-wsfc-shared-disk.md), [Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for SAP ASCS/SCS](./sap-high-availability-infrastructure-wsfc-shared-disk.md) and [SAP ASCS/SCS instance multi-SID high availability with Windows server failover clustering and Azure shared disk](./sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md) to update instruction about the usage of Azure shared disk for SAP deployment with PPG. - May 10, 2022: Changes in Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md), [HA for SAP HANA Scale-up with Azure NetApp Files on SLES](./sap-hana-high-availability-netapp-files-suse.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to adjust parameters per SAP note 3024346
virtual-wan How To Routing Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md
While Private Traffic includes both branch and Virtual Network address prefixes
> Inter-region traffic can be inspected by Azure Firewall or NVA for Virtual Hubs deployed in select Azure regions. For available regions, please contact previewinterhub@microsoft.com.
-* **Internet Traffic Routing Policy**: When an Internet Traffic Routing Policy is configured on a Virtual WAN hub, all branch (User VPN (Point-to-site VPN), Site-to-site VPN, and ExpressRoute) and Virtual Network connections to that Virtual WAN Hub will forward Internet-bound traffic to the Azure Firewall resource, Third-Party Security provider or Network Virtual Appliance specified as part of the Routing Policy.
+* **Internet Traffic Routing Policy**: When an Internet Traffic Routing Policy is configured on a Virtual WAN hub, all branch (User VPN (Point-to-site VPN), Site-to-site VPN, and ExpressRoute) and Virtual Network connections to that Virtual WAN Hub will forward Internet-bound traffic to the Azure Firewall resource, Third-Party Security provider or **Network Virtual Appliance** specified as part of the Routing Policy.
+
+   In other words, when a Traffic Routing Policy is configured on a Virtual WAN hub, the Virtual WAN will propagate a **default** route to all spokes and Gateways. In the case of a **Network Virtual Appliance**, this route is propagated through BGP via the vWAN Route Service and learned by the BGP speakers inside the **Network Virtual Appliance**.
* **Private Traffic Routing Policy**: When a Private Traffic Routing Policy is configured on a Virtual WAN hub, **all** branch and Virtual Network traffic in and out of the Virtual WAN Hub including inter-hub traffic will be forwarded to the Next Hop Azure Firewall resource or Network Virtual Appliance resource that was specified in the Private Traffic Routing Policy.
While Private Traffic includes both branch and Virtual Network address prefixes
10. Repeat steps 2-8 for other Secured Virtual WAN hubs that you want to configure Routing policies for. 11. At this point, you are ready to send test traffic. Please make sure your Firewall Policies are configured appropriately to allow/deny traffic based on your desired security configurations.
-## <a name="nva"></a> Configure routing policies (through Virtual WAN portal)
+## <a name="nva"></a> Configure routing policies for network virtual appliances (through Virtual WAN portal)
>[!NOTE] > The only Network Virtual Appliance deployed in the Virtual WAN hub compatible with routing intent and routing policies are listed in the [Partners section](about-nva-hub.md) as dual-role connectivity and Next-Generation Firewall solution providers.
While Private Traffic includes both branch and Virtual Network address prefixes
4. If you want to configure a Private Traffic Routing Policy and have branches or virtual networks using non-IANA RFC1918 Prefixes, select **Additional Prefixes** and specify the non-IANA RFC1918 prefix ranges in the text box that comes up. Select **Done**.
+ > [!NOTE]
+   > At this point in time, Routing Policies for **Network Virtual Appliances** don't allow you to edit the RFC 1918 prefixes. Azure Virtual WAN propagates the RFC 1918 address space to all spokes and gateways, as well as to the BGP speakers inside the **Network Virtual Appliances**. Be mindful of the implications of propagating these prefixes into your environment, and create the appropriate policies inside your **Network Virtual Appliance** to control routing behavior. If you want to propagate more specific RFC 1918 address spaces (for example, spoke address spaces), those prefixes need to be explicitly added in the text box below as well.
+   :::image type="content" source="./media/routing-policies/private-prefixes-nva.png" alt-text="Screenshot showing how to configure additional private prefixes for NVA routing policies." lightbox="./media/routing-policies/private-prefixes-nva.png"::: 5. If you want to configure an Internet Traffic Routing Policy, under **Internet traffic** select **Network Virtual Appliance** and under **Next Hop Resource** select the Network Virtual Appliance you want to send internet-bound traffic to.
virtual-wan Hub Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/hub-settings.md
You can create an empty virtual hub (a virtual hub that doesn't contain any gate
By default, the virtual hub router is automatically configured to deploy with a virtual hub capacity of 2 routing infrastructure units. This supports a minimum of 3 Gbps aggregate throughput, and 2000 connected VMs deployed in all virtual networks connected to that virtual hub.
-When you deploy a new virtual hub, you can specify additional routing infrastructure units to increase the default virtual hub capacity in increments of 1 Gbps and 1000 VMs. This feature gives you the ability to secure upfront capacity without having to wait for the virtual hub to scale out when more throughput is needed. The scale unit on which the virtual hub is created becomes the minimum capacity. You can view routing infrastructure units, router Gbps, and number of VMs supported, in the Azure portal **Virtual hub** pages for **Create virtual hub** and **Edit virtual hub**.
+When you deploy a new virtual hub, you can specify additional routing infrastructure units to increase the default virtual hub capacity in increments of 1 Gbps and 1000 VMs. This feature gives you the ability to secure upfront capacity without having to wait for the virtual hub to scale out when more throughput is needed. The scale unit on which the virtual hub is created becomes the minimum capacity. Creating a virtual hub without a gateway takes about 5 to 7 minutes, while creating a virtual hub and a gateway can take about 30 minutes to complete. You can view routing infrastructure units, router Gbps, and number of VMs supported, in the Azure portal **Virtual hub** pages for **Create virtual hub** and **Edit virtual hub**.
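For reference, a hedged sketch of creating a virtual hub in an existing virtual WAN with Azure CLI follows; the virtual WAN, hub, location, and address prefix are placeholders, and the routing infrastructure unit capacity is configured on the portal pages mentioned above.

```azurecli-interactive
# Create a Standard virtual hub in an existing virtual WAN (names, location, and prefix are placeholders).
az network vhub create \
  --name myVirtualHub \
  --resource-group myResourceGroup \
  --vwan myVirtualWan \
  --address-prefix 10.0.0.0/24 \
  --location eastus \
  --sku Standard
```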
### Configure virtual hub capacity
The following table shows the configurations available for each virtual WAN type
## Next steps
-For virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).
+For virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).