Updates from: 12/31/2020 04:04:46
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-token-exchange-saml-oauth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-token-exchange-saml-oauth.md
@@ -28,4 +28,4 @@ The general strategy is to add the OIDC/OAuth stack to your app. With your app t
> The recommended library for adding OIDC/OAuth behavior is the Microsoft Authentication Library (MSAL). To learn more about MSAL, see [Overview of Microsoft Authentication Library (MSAL)](msal-overview.md). The previous library was called Active Directory Authentication Library (ADAL), however it is not recommended as MSAL is replacing it. ## Next steps-- [authentication flows and application scenarios](authentication-flows-app-scenarios.md)
+- [Authentication flows and application scenarios](authentication-flows-app-scenarios.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/known-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/known-issues.md
@@ -16,7 +16,7 @@ ms.workload: identity
ms.date: 12/01/2020 ms.author: barclayn ms.collection: M365-identity-device-management
-ms.custom: has-adal-ref, devx-track-azurecli
+ms.custom: has-adal-ref
--- # FAQs and known issues with managed identities for Azure resources
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/new-relic-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/new-relic-tutorial.md
@@ -75,7 +75,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://rpm.newrelic.com/accounts/{acc_id}/sso/saml/login` - Be sure to substitute `acc_id` with your own Account ID of New Relic by Account.
+ `https://rpm.newrelic.com:443/accounts/{acc_id}/sso/saml/finalize` - Be sure to substitute `acc_id` with your own Account ID of New Relic by Account.
b. In the **Identifier (Entity ID)** text box, type a URL: `rpm.newrelic.com`
@@ -189,4 +189,4 @@ When you click the New Relic by Account tile in the Access Panel, you should be
- [Try New Relic by Account with Azure AD](https://aad.portal.azure.com/) -- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
aks https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-portal.md
@@ -6,7 +6,7 @@ services: container-service
ms.topic: quickstart ms.date: 10/06/2020
-ms.custom: mvc, seo-javascript-october2019, devx-track-azurecli
+ms.custom: mvc, seo-javascript-october2019
#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure. ---
aks https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-app-update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-app-update.md
@@ -5,7 +5,7 @@ services: container-service
ms.topic: tutorial ms.date: 09/30/2020
-ms.custom: mvc, devx-track-azurecli
+ms.custom: mvc
#Customer intent: As a developer, I want to learn how to update an existing application deployment in an Azure Kubernetes Service (AKS) cluster so that I can maintain the application lifecycle. ---
aks https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-scale.md
@@ -5,7 +5,7 @@ services: container-service
ms.topic: tutorial ms.date: 09/30/2020
-ms.custom: mvc, devx-track-azurecli
+ms.custom: mvc
#Customer intent: As a developer or IT pro, I want to learn how to scale my applications in an Azure Kubernetes Service (AKS) cluster so that I can provide high availability or respond to customer demand and application load. ---
app-service https://docs.microsoft.com/en-us/azure/app-service/app-service-hybrid-connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-hybrid-connections.md
@@ -7,7 +7,7 @@ ms.assetid: 66774bde-13f5-45d0-9a70-4e9536a4f619
ms.topic: article ms.date: 06/08/2020 ms.author: ccompy
-ms.custom: seodec18, fasttrack-edit, devx-track-azurecli
+ms.custom: seodec18, fasttrack-edit
--- # Azure App Service Hybrid Connections
app-service https://docs.microsoft.com/en-us/azure/app-service/configure-common https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-common.md
@@ -5,7 +5,7 @@ keywords: azure app service, web app, app settings, environment variables
ms.assetid: 9af8a367-7d39-4399-9941-b80cbc5f39a0 ms.topic: article ms.date: 12/07/2020
-ms.custom: "devx-track-csharp, seodec18, devx-track-azurecli"
+ms.custom: "devx-track-csharp, seodec18"
--- # Configure an App Service app in the Azure portal
app-service https://docs.microsoft.com/en-us/azure/app-service/environment/using-an-ase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/using-an-ase.md
@@ -7,7 +7,7 @@ ms.assetid: a22450c4-9b8b-41d4-9568-c4646f4cf66b
ms.topic: article ms.date: 5/10/2020 ms.author: ccompy
-ms.custom: seodec18, devx-track-azurecli
+ms.custom: seodec18
--- # Use an App Service Environment
app-service https://docs.microsoft.com/en-us/azure/app-service/overview-inbound-outbound-ips https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-inbound-outbound-ips.md
@@ -3,7 +3,7 @@ title: Inbound/Outbound IP addresses
description: Learn how inbound and outbound IP addresses are used in Azure App Service, when they change, and how to find the addresses for your app. ms.topic: article ms.date: 08/25/2020
-ms.custom: seodec18, devx-track-azurecli
+ms.custom: seodec18
---
app-service https://docs.microsoft.com/en-us/azure/app-service/troubleshoot-diagnostic-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-diagnostic-logs.md
@@ -4,7 +4,7 @@ description: Learn how to enable diagnostic logging and add instrumentation to y
ms.assetid: c9da27b2-47d4-4c33-a3cb-1819955ee43b ms.topic: article ms.date: 09/17/2019
-ms.custom: "devx-track-csharp, seodec18, devx-track-azurecli"
+ms.custom: "devx-track-csharp, seodec18"
--- # Enable diagnostics logging for apps in Azure App Service
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/quickstart-resource-manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-resource-manager.md
@@ -41,10 +41,10 @@ The quickstart uses the `copy` element to create multiple instances of key-value
Two Azure resources are defined in the template: -- [Microsoft.AppConfiguration/configurationStores](/azure/templates/microsoft.appconfiguration/2020-06-01/configurationstores): create an App Configuration store.-- Microsoft.AppConfiguration/configurationStores/keyValues: create a key-value inside the App Configuration store.
+- [Microsoft.AppConfiguration/configurationStores](/azure/templates/microsoft.appconfiguration/2020-07-01-preview/configurationstores): create an App Configuration store.
+- [Microsoft.AppConfiguration/configurationStores/keyValues](/azure/templates/microsoft.appconfiguration/2020-07-01-preview/configurationstores/keyvalues): create a key-value inside the App Configuration store.
-> [!NOTE]
+> [!TIP]
> The `keyValues` resource's name is a combination of key and label. The key and label are joined by the `$` delimiter. The label is optional. In the above example, the `keyValues` resource with name `myKey` creates a key-value without a label. > > Percent-encoding, also known as URL encoding, allows keys or labels to include characters that are not allowed in ARM template resource names. `%` is not an allowed character either, so `~` is used in its place. To correctly encode a name, follow these steps:
@@ -55,6 +55,13 @@ Two Azure resources are defined in the template:
> > For example, to create a key-value pair with key name `AppName:DbEndpoint` and label name `Test`, the resource name should be `AppName~3ADbEndpoint$Test`.
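For orientation, the encoded ARM resource name maps back to a plain key and label when you work with the store directly. A minimal Azure CLI sketch, assuming a store named `MyAppConfiguration` (the store name is a placeholder):

```azurecli-interactive
# The ARM resource name AppName~3ADbEndpoint$Test corresponds to the key "AppName:DbEndpoint" with label "Test"
az appconfig kv show --name MyAppConfiguration --key AppName:DbEndpoint --label Test
```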
+> [!NOTE]
+> App Configuration allows key-value data access over a [private link](concept-private-endpoint.md) from your virtual network. By default, when the feature is enabled, all requests for your App Configuration data over the public network are denied. Because the ARM template runs outside your virtual network, data access from an ARM template isn't allowed. To allow data access from an ARM template when a private link is used, you can enable public network access by using the following Azure CLI command. It's important to consider the security implications of enabling public network access in this scenario.
+>
+> ```azurecli-interactive
+> az appconfig update -g MyResourceGroup -n MyAppConfiguration --enable-public-network true
+> ```
+ ## Deploy the template Select the following image to sign in to Azure and open a template. The template creates an App Configuration store with two key-values inside.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/configure-monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/configure-monitoring.md
@@ -3,7 +3,7 @@ title: Configure monitoring for Azure Functions
description: Learn how to connect your function app to Application Insights for monitoring and how to configure data collection. ms.date: 8/31/2020 ms.topic: how-to
-ms.custom: "contperf-fy21q2, devx-track-azurecli"
+ms.custom: "contperf-fy21q2"
# Customer intent: As a developer, I want to understand how to correctly configure monitoring for my functions so I can collect the data that I need. ---
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-grid-trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-event-grid-trigger.md
@@ -6,7 +6,7 @@ author: craigshoemaker
ms.topic: reference ms.date: 02/14/2020 ms.author: cshoe
-ms.custom: "devx-track-csharp, fasttrack-edit, devx-track-python, devx-track-azurecli"
+ms.custom: "devx-track-csharp, fasttrack-edit, devx-track-python"
--- # Azure Event Grid trigger for Azure Functions
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/overview.md
@@ -27,7 +27,7 @@ If you're new to Azure Resource Manager, there are some terms you might not be f
* **resource** - A manageable item that is available through Azure. Virtual machines, storage accounts, web apps, databases, and virtual networks are examples of resources. Resource groups, subscriptions, management groups, and tags are also examples of resources. * **resource group** - A container that holds related resources for an Azure solution. The resource group includes those resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization. See [Resource groups](#resource-groups).
-* **resource provider** - A service that supplies Azure resources. For example, a common resource provider is Microsoft.Compute, which supplies the virtual machine resource. Microsoft.Storage is another common resource provider. See [Resource providers and types](resource-providers-and-types.md).
+* **resource provider** - A service that supplies Azure resources. For example, a common resource provider is `Microsoft.Compute`, which supplies the virtual machine resource. `Microsoft.Storage` is another common resource provider (see the CLI sketch after this list). See [Resource providers and types](resource-providers-and-types.md).
* **Resource Manager template** - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group, subscription, management group, or tenant. The template can be used to deploy the resources consistently and repeatedly. See [Template deployment overview](../templates/overview.md). * **declarative syntax** - Syntax that lets you state "Here is what I intend to create" without having to write the sequence of programming commands to create it. The Resource Manager template is an example of declarative syntax. In the file, you define the properties for the infrastructure to deploy to Azure. See [Template deployment overview](../templates/overview.md).
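The resource provider terms described above can be inspected directly with the Azure CLI; for example, a minimal sketch that lists the resource types supplied by `Microsoft.Compute`:

```azurecli-interactive
# List the resource types that the Microsoft.Compute resource provider supplies
az provider show --namespace Microsoft.Compute --query "resourceTypes[].resourceType" --output table
```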
@@ -55,7 +55,7 @@ Azure provides four levels of scope: [management groups](../../governance/manage
![Management levels](./media/overview/scope-levels.png)
-You apply management settings at any of these levels of scope. The level you select determines how widely the setting is applied. Lower levels inherit settings from higher levels. For example, when you apply a [policy](../../governance/policy/overview.md) to the subscription, the policy is applied to all resource groups and resources in your subscription. When you apply a policy on the resource group, that policy is applied the resource group and all its resources. However, another resource group doesn't have that policy assignment.
+You apply management settings at any of these levels of scope. The level you select determines how widely the setting is applied. Lower levels inherit settings from higher levels. For example, when you apply a [policy](../../governance/policy/overview.md) to the subscription, the policy is applied to all resource groups and resources in your subscription. When you apply a policy on the resource group, that policy is applied to the resource group and all its resources. However, another resource group doesn't have that policy assignment.
You can deploy templates to tenants, management groups, subscriptions, or resource groups.
@@ -93,7 +93,7 @@ There are some important factors to consider when defining your resource group:
## Resiliency of Azure Resource Manager
-The Azure Resource Manager service is designed for resiliency and continuous availability. Resource Manager and control plane operations (requests sent to management.azure.com) in the REST API are:
+The Azure Resource Manager service is designed for resiliency and continuous availability. Resource Manager and control plane operations (requests sent to `management.azure.com`) in the REST API are:
* Distributed across regions. Some services are regional.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/azure-hybrid-benefit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/azure-hybrid-benefit.md
@@ -5,7 +5,7 @@ description: Use existing SQL Server licenses for Azure SQL Database and SQL Man
services: sql-database ms.service: sql-db-mi ms.subservice: features
-ms.custom: sqldbrb=4, devx-track-azurecli
+ms.custom: sqldbrb=4
ms.topic: conceptual author: stevestein ms.author: sstein
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/tde-certificate-migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/tde-certificate-migrate.md
@@ -4,7 +4,7 @@ description: Migrate a certificate protecting the database encryption key of a d
services: sql-database ms.service: sql-managed-instance ms.subservice: security
-ms.custom: sqldbrb=1, devx-track-azurecli
+ms.custom: sqldbrb=1
ms.devlang: ms.topic: how-to author: MladjoA
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/user-initiated-failover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/user-initiated-failover.md
@@ -3,7 +3,7 @@ title: Manually initiate a failover on SQL Managed Instance
description: Learn how to manually failover primary and secondary replicas on Azure SQL Managed Instance. services: sql-database ms.service: sql-managed-instance
-ms.custom: seo-lt-2019, sqldbrb=1, devx-track-azurecli
+ms.custom: seo-lt-2019, sqldbrb=1
ms.devlang: ms.topic: how-to author: danimir
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-private-clouds-clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-private-clouds-clusters.md
@@ -43,18 +43,8 @@ Hosts used to build or scale clusters come from an isolated pool of hosts. Those
## VMware software versions
-The current software versions of the VMware software used in Azure VMware Solution private cloud clusters are:
+[!INCLUDE [vmware-software-versions](includes/vmware-software-versions.md)]
-| Software | Version |
-| :--- | :---: |
-| VCSA / vSphere / ESXi | 6.7 U3 |
-| ESXi | 6.7 U3 |
-| vSAN | 6.7 U3 |
-| NSX-T | 2.5 |
-
-For any new cluster in a private cloud, the software version matches what's currently running. For any new private cloud in a subscription, the software stack's latest version gets installed.
-
-You can find the general upgrade policies and processes for the Azure VMware Solution platform software described in [Private cloud updates and upgrades](concepts-upgrades.md).
## Host maintenance and lifecycle management
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/faq.md
@@ -139,7 +139,8 @@ No. High-end ESXi hosts are reserved for use in production clusters.
#### What versions of VMware software is used in private clouds?
-Private clouds use vSphere 6.7 U3, vSAN 6.7 U3, VMware HCX, and NSX-T 2.5. For more information, see [the VMware software version requirements](https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-54E5293B-8707-4D29-BFE8-EE63539CC49B.html).
+[!INCLUDE [vmware-software-versions](includes/vmware-software-versions.md)]
+ #### Do private clouds use VMware NSX?
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/includes/vmware-software-versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/includes/vmware-software-versions.md new file mode 100644
@@ -0,0 +1,27 @@
+---
+title: VMware software versions
+description: Supported VMware software versions for Azure VMware Solution.
+ms.topic: include
+ms.date: 12/29/2020
+---
+
+<!-- Used in faq.md and concepts-private-clouds-clusters.md -->
++
+The current software versions of the VMware software used in Azure VMware Solution private cloud clusters are:
+
+| Software | Version |
+| :--- | :---: |
+| VCSA / vSphere / ESXi | 6.7 U3 |
+| ESXi | 6.7 U3 |
+| vSAN | 6.7 U3 |
+| NSX-T | 2.5 |
++
+>[!NOTE]
+>NSX-T is the only supported version of NSX.
+
+For any new cluster in a private cloud, the software version matches what's currently running. For any new private cloud in a subscription, the software stack's latest version gets installed. For more information, see the [VMware software version requirements](https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-54E5293B-8707-4D29-BFE8-EE63539CC49B.html).
+
+The private cloud software bundle upgrades keep the software within one version of the most recent software bundle release from VMware. The private cloud software versions may differ from the most recent versions of the individual software components (ESXi, NSX-T, vCenter, vSAN). You can find the general upgrade policies and processes for the Azure VMware Solution platform software described in [Private cloud updates and upgrades](../concepts-upgrades.md).
+
baremetal-infrastructure https://docs.microsoft.com/en-us/azure/baremetal-infrastructure/know-baremetal-terms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/know-baremetal-terms.md new file mode 100644
@@ -0,0 +1,25 @@
+---
+title: Know the terms of Azure BareMetal Infrastructure
+description: Know the terms of Azure BareMetal Infrastructure.
+ms.topic: conceptual
+ms.date: 12/31/2020
+---
+
+# Know the terms for BareMetal Infrastructure
+
+In this article, we'll cover some important BareMetal terms.
+
+- **Revision**: There are two different stamp revisions for BareMetal Instance stamps. Each version differs in architecture and proximity to Azure virtual machine hosts:
+ - **Revision 3** (Rev 3): the original design.
+ - **Revision 4** (Rev 4): a new design that provides closer proximity to the Azure virtual machine (VM) hosts and lowers the latency between Azure VMs and BareMetal Instance units.
+ - **Revision 4.2** (Rev 4.2): the latest rebranded BareMetal Infrastructure that uses the existing Rev 4 architecture. You can access and manage your BareMetal instances through the Azure portal.
+
+- **Stamp**: Defines the Microsoft internal deployment size of BareMetal Instances. Before instance units can get deployed, a BareMetal Instance stamp consisting of compute, network, and storage racks must be deployed in a datacenter location. Such a deployment is called a BareMetal Instance stamp or from Revision 4.2.
+
+- **Tenant**: A customer deployed in a BareMetal Instance stamp gets isolated into a *tenant*. A tenant is isolated in the networking, storage, and compute layer from other tenants. Storage and compute units assigned to the different tenants can't see each other or communicate with each other on the BareMetal Instance stamp level. A customer can choose to have deployments into different tenants. Even then, there's no communication between tenants on the BareMetal Instance stamp level.
+
+## Next steps
+Learn how to identify and interact with BareMetal Instance units through the [Azure portal](workloads/sap/baremetal-infrastructure-portal.md).
++
+
\ No newline at end of file
baremetal-infrastructure https://docs.microsoft.com/en-us/azure/baremetal-infrastructure/workloads/sap/baremetal-infrastructure-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/sap/baremetal-infrastructure-portal.md new file mode 100644
@@ -0,0 +1,168 @@
+---
+title: BareMetal Instance units in Azure
+description: Learn how to identify and interact with BareMetal Instance units through the Azure portal.
+ms.topic: how-to
+ms.date: 12/31/2020
+---
+
+# Manage BareMetal Instances through the Azure portal
+
+This article shows how the [Azure portal](https://portal.azure.com/) displays [BareMetal Instances](baremetal-overview-architecture.md) and the activities you can do in the Azure portal with your deployed BareMetal Instance units.
+
+## Register the resource provider
+An Azure resource provider for BareMetal Instances provides visibility of the instances in the Azure portal, currently in public preview. By default, the Azure subscription you use for BareMetal Instance deployments registers the *BareMetalInfrastructure* resource provider. If you don't see your deployed BareMetal Instance units, you must register the resource provider with your subscription. There are two ways to register the BareMetal Instance resource provider:
+
+* [Azure CLI](#azure-cli)
+
+* [Azure portal](#azure-portal)
+
+### Azure CLI
+
+Sign in to the Azure subscription you use for the BareMetal Instance deployment through the Azure CLI. You can register the BareMetalInfrastructure resource provider with:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.BareMetalInfrastructure
+```
+
+For more information, see the article [Azure resource providers and types](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell).
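To confirm that the registration finished, you can query the provider's registration state; a minimal sketch using the same namespace:

```azurecli-interactive
# Shows "Registered" once the BareMetalInfrastructure resource provider is available in the subscription
az provider show --namespace Microsoft.BareMetalInfrastructure --query registrationState --output tsv
```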
+
+### Azure portal
+
+You can register the BareMetalInfrastructure resource provider through the Azure portal.
+
+You'll need to list your subscription in the Azure portal and then double-click on the subscription used to deploy your BareMetal Instance units.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. On the Azure portal menu, select **All services**.
+
+1. In the **All services** box, enter **subscription**, and then select **Subscriptions**.
+
+1. Select the subscription from the subscription list to view.
+
+1. Select **Resource providers** and enter **BareMetalInfrastructure** into the search. The resource provider should be **Registered**, as the image shows.
+
+>[!NOTE]
+>If the resource provider is not registered, select **Register**.
+
+:::image type="content" source="media/baremetal-infrastructure-portal/register-resource-provider-azure-portal.png" alt-text="Screenshot that shows the BareMetal Instance unit registered":::
+
+## BareMetal Instance units in the Azure portal
+
+When you submit a BareMetal Instance deployment request, you'll specify the Azure subscription that you're connecting to the BareMetal Instances. Use the same subscription you use to deploy the application layer that works against the BareMetal Instance units.
+
+During the deployment of your BareMetal Instances, a new [Azure resource group](../../../azure-resource-manager/management/manage-resources-portal.md) gets created in the Azure subscription you used in the deployment request. This new resource group lists all your BareMetal Instance units you've deployed in the specific subscription.
+
+1. In the BareMetal subscription, in the Azure portal, select **Resource groups**.
+
+ :::image type="content" source="media/baremetal-infrastructure-portal/view-baremetal-instance-units-azure-portal.png" alt-text="Screenshot that shows the list of Resource Groups":::
+
+1. In the list, locate the new resource group.
+
+ :::image type="content" source="media/baremetal-infrastructure-portal/filter-resource-groups.png" alt-text="Screenshot that shows the BareMetal Instance unit in a filtered Resource groups list" lightbox="media/baremetal-infrastructure-portal/filter-resource-groups.png":::
+
+ >[!TIP]
+ >You can filter on the subscription you used to deploy the BareMetal Instance. After you filter to the proper subscription, you might have a long list of resource groups. Look for one with a post-fix of **-Txxx** where xxx is three digits like **-T250**.
+
+1. Select the new resource group to show the details of it. The image shows one BareMetal Instance unit deployed.
+
+ >[!NOTE]
+ >If you deployed several BareMetal Instance tenants under the same Azure subscription, you would see multiple Azure resource groups.
+
+## View the attributes of a single instance
+
+You can view the details of a single unit. In the list of BareMetal Instances, select the single instance you want to view.
+
+:::image type="content" source="media/baremetal-infrastructure-portal/view-attributes-single-baremetal-instance.png" alt-text="Screenshot that shows the BareMetal Instance unit attributes of a single instance" lightbox="media/baremetal-infrastructure-portal/view-attributes-single-baremetal-instance.png":::
+
+The attributes in the image don't look much different than the Azure virtual machine (VM) attributes. On the left, you'll see the Resource group, Azure region, and subscription name and ID. If you assigned tags, then you'll see them here as well. By default, the BareMetal Instance units don't have tags assigned.
+
+On the right, you'll see the unit's name, operating system (OS), IP address, and SKU that shows the number of CPU threads and memory. You'll also see the power state and hardware version (revision of the BareMetal Instance stamp). The power state indicates if the hardware unit is powered on or off. The operating system details, however, don't indicate whether it's up and running.
+
+The possible hardware revisions are:
+
+* Revision 3
+
+* Revision 4
+
+* Revision 4.2
+
+>[!NOTE]
+>Revision 4.2 is the latest rebranded BareMetal Infrastructure using the Revision 4 architecture. It has significant improvements in network latency between Azure VMs and BareMetal instance units deployed in Revision 4 stamps or rows. For more information about the different revisions, see [BareMetal Infrastructure on Azure](baremetal-overview-architecture.md).
+
+Also, on the right side, you'll find the [Azure Proximity Placement Group's](../../../virtual-machines/linux/co-location.md) name, which is created automatically for each deployed BareMetal Instance unit. Reference the Proximity Placement Group when you deploy the Azure VMs that host the application layer. When you use the Proximity Placement Group associated with the BareMetal Instance unit, you ensure that the Azure VMs get deployed close to the BareMetal Instance unit.
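When you deploy the application-layer VMs, you can pass that proximity placement group to the Azure CLI. A minimal sketch with placeholder resource group, VM, and placement group names (all names here are hypothetical):

```azurecli-interactive
# Create an application-layer VM inside the proximity placement group of the BareMetal Instance unit
az vm create \
  --resource-group myResourceGroup \
  --name myAppVM \
  --image UbuntuLTS \
  --ppg myProximityPlacementGroup
```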
+
+>[!TIP]
+>To locate the application layer in the same Azure datacenter as Revision 4.x, see [Azure proximity placement groups for optimal network latency](../../../virtual-machines/workloads/sap/sap-proximity-placement-scenarios.md).
+
+## Check activities of a single instance
+
+You can check the activities of a single unit. One of the main activities recorded is a restart of the unit. The data listed includes the activity's status, the timestamp when the activity was triggered, the subscription ID, and the Azure user who triggered the activity.
+
+:::image type="content" source="media/baremetal-infrastructure-portal/check-activities-single-baremetal-instance.png" alt-text="Screenshot that shows the BareMetal Instance unit activities" lightbox="media/baremetal-infrastructure-portal/check-activities-single-baremetal-instance.png":::
+
+Changes to the unit's metadata in Azure also get recorded in the Activity log. Besides the restart you initiated, you can see the activity **Write BareMetalInstances**. This activity makes no changes on the BareMetal Instance unit itself but documents the changes to the unit's metadata in Azure.
+
+Another activity that gets recorded is when you add or delete a [tag](../../../azure-resource-manager/management/tag-resources.md) to an instance.
+
+## Add and delete an Azure tag to an instance
+
+You can add Azure tags to a BareMetal Instance unit or delete them. The way tags get assigned doesn't differ from assigning tags to VMs. As with VMs, the tags exist in the Azure metadata, and for BareMetal Instances, they have the same restrictions as the tags for VMs.
+
+Deleting tags works the same way as with VMs. Applying and deleting a tag are both listed in the BareMetal Instance unit's Activity log.
+
+## Check properties of an instance
+
+When you acquire the instances, you can go to the Properties section to view the data collected about the instances. The data collected includes the Azure connectivity, storage backend, ExpressRoute circuit ID, unique resource ID, and the subscription ID. You'll use this information in support requests or when setting up storage snapshot configuration.
+
+Another critical piece of information you'll see is the storage NFS IP address. It isolates your storage to your **tenant** in the BareMetal Instance stack. You'll use this IP address when you edit the [configuration file for storage snapshot backups](../../../virtual-machines/workloads/sap/hana-backup-restore.md#set-up-storage-snapshots).
+
+:::image type="content" source="media/baremetal-infrastructure-portal/baremetal-instance-properties.png" alt-text="Screenshot that shows the BareMetal Instance property settings" lightbox="media/baremetal-infrastructure-portal/baremetal-instance-properties.png":::
+
+## Restart a unit through the Azure portal
+
+There are various situations where the OS won't finish a restart, which requires a power restart of the BareMetal Instance unit. You can do a power restart of the unit directly from the Azure portal:
+
+Select **Restart** and then **Yes** to confirm the restart of the unit.
+
+:::image type="content" source="media/baremetal-infrastructure-portal/baremetal-instance-restart.png" alt-text="Screenshot that shows how to restart the BareMetal Instance unit":::
+
+When you restart a BareMetal Instance unit, you'll experience a delay. During this delay, the power state moves from **Starting** to **Started**, which means the OS has started up completely. As a result, after a restart, you can't log into the unit as soon as the state switches to **Started**.
+
+>[!IMPORTANT]
+>Depending on the amount of memory in your BareMetal Instance unit, a restart and a reboot of the hardware and the operating system can take up to one hour.
+
+## Open a support request for BareMetal Instances
+
+You can submit support requests specifically for a BareMetal Instance unit.
+1. In the Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information for the ticket:
+
+ - **Issue type:** Select an issue type
+
+ - **Subscription:** Select your subscription
+
+ - **Service:** BareMetal Infrastructure
+
+ - **Resource:** Provide the name of the instance
+
+ - **Summary:** Provide a summary of your request
+
+ - **Problem type:** Select a problem type
+
+ - **Problem subtype:** Select a subtype for the problem
+
+1. Select the **Solutions** tab to find a solution to your problem. If you can't find a solution, go to the next step.
+
+1. Select the **Details** tab and select whether the issue is with VMs or the BareMetal Instance units. This information helps direct the support request to the correct specialists.
+
+1. Indicate when the problem began and select the instance region.
+
+1. Provide more details about the request and upload a file if needed.
+
+1. Select **Review + Create** to submit the request.
+
+It takes up to five business days for a support representative to confirm your request.
+
+## Next steps
+
+If you want to learn more about the workloads, see [BareMetal workload types](../../../virtual-machines/workloads/sap/get-started.md).
\ No newline at end of file
baremetal-infrastructure https://docs.microsoft.com/en-us/azure/baremetal-infrastructure/workloads/sap/baremetal-overview-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/sap/baremetal-overview-architecture.md new file mode 100644
@@ -0,0 +1,101 @@
+---
+title: Overview of BareMetal Infrastructure Preview in Azure
+description: Overview of how to deploy BareMetal Infrastructure in Azure.
+ms.custom: references_regions
+ms.topic: conceptual
+ms.date: 12/31/2020
+---
+
+# What is BareMetal Infrastructure Preview on Azure?
+
+Azure BareMetal Infrastructure provides a secure solution for migrating enterprise custom workloads. The BareMetal instances are non-shared host/server hardware assigned to you. This unlocks porting your on-premises solutions with specialized workloads that require certified hardware, licensing, and support agreements. Azure handles infrastructure monitoring and maintenance for you, while in-guest operating system (OS) monitoring and application monitoring fall within your ownership.
+
+BareMetal Infrastructure provides a path to modernize your infrastructure landscape while maintaining your existing investments and architecture. With BareMetal Infrastructure, you can bring specialized workloads to Azure, allowing you to access and integrate with Azure services at low latency.
+
+## SKU availability in Azure regions
+BareMetal Infrastructure for specialized and general-purpose workloads is available, starting with four regions based on Revision 4.2 (Rev 4.2) stamps:
+- West Europe
+- North Europe
+- East US 2
+- South Central US
+
+>[!NOTE]
+>**Rev 4.2** is the latest rebranded BareMetal Infrastructure that uses the existing Rev 4 architecture. Rev 4 provides closer proximity to the Azure virtual machine (VM) hosts and lowers the latency between Azure VMs and BareMetal Instance units. You can access and manage your BareMetal instances through the Azure portal.
+
+## Support
+BareMetal Infrastructure is ISO 27001, ISO 27017, SOC 1, and SOC 2 compliant. It also uses a bring-your-own-license (BYOL) model: OS, specialized workload, and third-party applications.
+
+As soon as you receive root access and full control, you assume responsibility for:
+- Designing and implementing backup and recovery solutions, high availability, and disaster recovery
+- Licensing, security, and support for OS and third-party software
+
+Microsoft is responsible for:
+- Providing certified hardware for specialized workloads
+- Provisioning the OS
+
+:::image type="content" source="media/baremetal-support-model.png" alt-text="BareMetal Infrastructure support model" border="false":::
+
+## Compute
+BareMetal Infrastructure offers multiple SKUs certified for specialized workloads. Available SKUs range from smaller two-socket systems to 24-socket systems. Use the workload-specific certified SKUs for your specialized workload.
+
+The BareMetal instance stamp itself combines the following components:
+
+- **Computing:** Servers based on a different generation of Intel Xeon processors that provide the necessary computing capability and are certified for the specialized workload.
+
+- **Network:** A unified high-speed network fabric interconnects computing, storage, and LAN components.
+
+- **Storage:** An infrastructure accessed through a unified network fabric.
+
+Within the multi-tenant infrastructure of the BareMetal stamp, customers are deployed in isolated tenants. When deploying a tenant, you name an Azure subscription within your Azure enrollment. This Azure subscription is the one the BareMetal instances are billed against.
+
+>[!NOTE]
+>A customer deployed in the BareMetal instance gets isolated into a tenant. A tenant is isolated in the networking, storage, and compute layer from other tenants. Storage and compute units assigned to the different tenants cannot see each other or communicate with each other on the BareMetal instances.
+
+## OS
+During the provisioning of the BareMetal instance, you can select the OS you want to install on the machines.
+
+>[!NOTE]
+>Remember, BareMetal Infrastructure is a BYOL model.
+
+The available Linux OS versions are:
+- Red Hat Enterprise Linux (RHEL) 7.6
+- SUSE Linux Enterprise Server (SLES)
+ - SLES 12 SP2
+ - SLES 12 SP3
+ - SLES 12 SP4
+ - SLES 12 SP5
+ - SLES 15 SP1
+
+## Storage
+BareMetal instances come with predefined NFS storage based on the specific SKU and workload type. When you provision BareMetal, you can provision additional storage based on your estimated growth by submitting a support request. All storage comes with an all-flash disk in Revision 4.2 with support for NFSv3 and NFSv4. NVMe SSD storage will be available in the newer Revision 4.5. For more information on storage sizing, see the [BareMetal workload type](../../../virtual-machines/workloads/sap/get-started.md) section.
+
+>[!NOTE]
+>The storage used for BareMetal meets FIPS 140-2 security requirements offering Encryption at Rest by default. The data is stored securely on the disks.
+
+## Networking
+The architecture of Azure network services is a key component for a successful deployment of specialized workloads in BareMetal instances. It is likely that not all IT systems are located in Azure already. Azure offers you network technology to make Azure look like a virtual data center to your on-premises software deployments. The Azure network functionality required for BareMetal instances is:
+
+- Azure virtual networks are connected to the ExpressRoute circuit that connects to your on-premises network assets.
+- An ExpressRoute circuit that connects on-premises to Azure should have a minimum bandwidth of 1 Gbps or higher.
+- Active Directory and DNS extended into Azure, or running completely in Azure.
+
+Using ExpressRoute lets you extend your on-premises network into the Microsoft cloud over a private connection with a connectivity provider's help. You can enable **ExpressRoute Premium** to extend connectivity across geopolitical boundaries, or use **ExpressRoute Local** for cost-effective data transfer from a location near the Azure region you want.
+
+BareMetal instances are provisioned within your Azure VNET server IP address range.
+
+:::image type="content" source="media/baremetal-infrastructure-portal/baremetal-infrastructure-diagram.png" alt-text="Azure BareMetal Infrastructure diagram" lightbox="media/baremetal-infrastructure-portal/baremetal-infrastructure-diagram.png" border="false":::
+
+The architecture shown is divided into three sections:
+- **Left:** Shows the customer on-premises infrastructure that runs different applications, connecting through a partner or local edge router like Equinix. For more information, see [Connectivity providers and locations: Azure ExpressRoute](../../../expressroute/expressroute-locations.md).
+- **Center:** Shows [ExpressRoute](../../../expressroute/expressroute-introduction.md) provisioned using your Azure subscription offering connectivity to Azure edge network.
+- **Right:** Shows Azure IaaS, and in this case use of VMs to host your applications, which are provisioned within your Azure virtual network.
+- **Bottom:** Shows using your ExpressRoute Gateway enabled with [ExpressRoute FastPath](../../../expressroute/about-fastpath.md) for BareMetal connectivity offering low latency.
+ >[!TIP]
+ >To support this, your ExpressRoute Gateway should be UltraPerformance. For more information, see [About ExpressRoute virtual network gateways](../../../expressroute/expressroute-about-virtual-network-gateways.md).
+
+## Next steps
+
+The next step is to learn how to identify and interact with BareMetal Instance units through the Azure portal.
+
+> [!div class="nextstepaction"]
+> [Manage BareMetal Instances through the Azure portal](baremetal-infrastructure-portal.md)
\ No newline at end of file
batch https://docs.microsoft.com/en-us/azure/batch/tutorial-rendering-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/tutorial-rendering-cli.md
@@ -1,29 +1,29 @@
--- title: Tutorial - Render a scene in the cloud
-description: Tutorial - How to render an Autodesk 3ds Max scene with Arnold using the Batch Rendering Service and Azure Command-Line Interface
+description: Learn how to render an Autodesk 3ds Max scene with Arnold using the Batch Rendering Service and Azure Command-Line Interface
ms.topic: tutorial
-ms.date: 03/05/2020
+ms.date: 12/30/2020
ms.custom: mvc, devx-track-azurecli ---
-# Tutorial: Render a scene with Azure Batch
+# Tutorial: Render a scene with Azure Batch
Azure Batch provides cloud-scale rendering capabilities on a pay-per-use basis. Azure Batch supports rendering apps including Autodesk Maya, 3ds Max, Arnold, and V-Ray. This tutorial shows you the steps to render a small scene with Batch using the Azure Command-Line Interface. You learn how to: > [!div class="checklist"]
-> * Upload a scene to Azure storage
-> * Create a Batch pool for rendering
-> * Render a single-frame scene
-> * Scale the pool, and render a multi-frame scene
-> * Download rendered output
+> - Upload a scene to Azure storage
+> - Create a Batch pool for rendering
+> - Render a single-frame scene
+> - Scale the pool, and render a multi-frame scene
+> - Download rendered output
In this tutorial, you render a 3ds Max scene with Batch using the [Arnold](https://www.autodesk.com/products/arnold/overview) ray-tracing renderer. The Batch pool uses an Azure Marketplace image with pre-installed graphics and rendering applications that provide pay-per-use licensing. ## Prerequisites
- - You need a pay-as-you-go subscription or other Azure purchase option to use rendering applications in Batch on a pay-per-use basis. **Pay-per-use licensing isn't supported if you use a free Azure offer that provides a monetary credit.**
+- You need a pay-as-you-go subscription or other Azure purchase option to use rendering applications in Batch on a pay-per-use basis. **Pay-per-use licensing isn't supported if you use a free Azure offer that provides a monetary credit.**
- - The sample 3ds Max scene for this tutorial is on [GitHub](https://github.com/Azure/azure-docs-cli-python-samples/tree/master/batch/render-scene), along with a sample Bash script and JSON configuration files. The 3ds Max scene is from the [Autodesk 3ds Max sample files](https://download.autodesk.com/us/support/files/3dsmax_sample_files/2017/Autodesk_3ds_Max_2017_English_Win_Samples_Files.exe). (Autodesk 3ds Max sample files are available under a Creative Commons Attribution-NonCommercial-Share Alike license. Copyright &copy; Autodesk, Inc.)
+- The sample 3ds Max scene for this tutorial is on [GitHub](https://github.com/Azure/azure-docs-cli-python-samples/tree/master/batch/render-scene), along with a sample Bash script and JSON configuration files. The 3ds Max scene is from the [Autodesk 3ds Max sample files](https://download.autodesk.com/us/support/files/3dsmax_sample_files/2017/Autodesk_3ds_Max_2017_English_Win_Samples_Files.exe). (Autodesk 3ds Max sample files are available under a Creative Commons Attribution-NonCommercial-Share Alike license. Copyright &copy; Autodesk, Inc.)
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
@@ -31,13 +31,14 @@ In this tutorial, you render a 3ds Max scene with Batch using the [Arnold](https
> [!TIP] > You can view [Arnold job templates](https://github.com/Azure/batch-extension-templates/tree/master/templates/arnold/render-windows-frames) in the Azure Batch Extension Templates GitHub repository.+ ## Create a Batch account
-If you haven't already, create a resource group, a Batch account, and a linked storage account in your subscription.
+If you haven't already, create a resource group, a Batch account, and a linked storage account in your subscription.
Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named *myResourceGroup* in the *eastus2* location.
-```azurecli-interactive
+```azurecli-interactive
az group create \ --name myResourceGroup \ --location eastus2
@@ -52,9 +53,10 @@ az storage account create \
--location eastus2 \ --sku Standard_LRS ```+ Create a Batch account with the [az batch account create](/cli/azure/batch/account#az-batch-account-create) command. The following example creates a Batch account named *mybatchaccount* in *myResourceGroup*, and links the storage account you created.
-```azurecli-interactive
+```azurecli-interactive
az batch account create \ --name mybatchaccount \ --storage-account mystorageaccount \
@@ -64,12 +66,13 @@ az batch account create \
To create and manage compute pools and jobs, you need to authenticate with Batch. Log in to the account with the [az batch account login](/cli/azure/batch/account#az-batch-account-login) command. After you log in, your `az batch` commands use this account context. The following example uses shared key authentication, based on the Batch account name and key. Batch also supports authentication through [Azure Active Directory](batch-aad-auth.md), to authenticate individual users or an unattended application.
-```azurecli-interactive
+```azurecli-interactive
az batch account login \ --name mybatchaccount \ --resource-group myResourceGroup \ --shared-key-auth ```+ ## Upload a scene to storage To upload the input scene to storage, you first need to access the storage account and create a destination container for the blobs. To access the Azure storage account, export the `AZURE_STORAGE_KEY` and `AZURE_STORAGE_ACCOUNT` environment variables. The first Bash shell command uses the [az storage account keys list](/cli/azure/storage/account/keys#az-storage-account-keys-list) command to get the first account key. After you set these environment variables, your storage commands use this account context.
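The export commands themselves fall outside this excerpt; a minimal sketch of that step, assuming the storage account and resource group created earlier in the tutorial:

```azurecli-interactive
# Make later az storage commands use this account by exporting its name and first access key
export AZURE_STORAGE_ACCOUNT=mystorageaccount
export AZURE_STORAGE_KEY=$(az storage account keys list \
    --account-name mystorageaccount \
    --resource-group myResourceGroup \
    --query "[0].value" --output tsv)
```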
@@ -130,16 +133,18 @@ Create a Batch pool for rendering using the [az batch pool create](/cli/azure/ba
"enableInterNodeCommunication": false } ```
-Batch supports dedicated nodes and [low-priority](batch-low-pri-vms.md) nodes, and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Low-priority nodes are offered at a reduced price from surplus VM capacity in Azure. Low-priority nodes become unavailable if Azure does not have enough capacity.
+
+Batch supports dedicated nodes and [low-priority](batch-low-pri-vms.md) nodes, and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Low-priority nodes are offered at a reduced price from surplus VM capacity in Azure. Low-priority nodes become unavailable if Azure does not have enough capacity.
The pool specified contains a single low-priority node running a Windows Server image with software for the Batch Rendering service. This pool is licensed to render with 3ds Max and Arnold. In a later step, you scale the pool to a larger number of nodes.
-Create the pool by passing the JSON file to the `az batch pool create` command:
+If you aren't already signed in to your Batch account, use the [az batch account login](/cli/azure/batch/account#az-batch-account-login) command to do so. Then create the pool by passing the JSON file to the `az batch pool create` command:
```azurecli-interactive az batch pool create \ --json-file mypool.json
-```
+```
+ It takes a few minutes to provision the pool. To see the status of the pool, run the [az batch pool show](/cli/azure/batch/pool#az-batch-pool-show) command. The following command gets the allocation state of the pool: ```azurecli-interactive
@@ -152,7 +157,7 @@ Continue the following steps to create a job and tasks while the pool state is c
## Create a blob container for output
-In the examples in this tutorial, every task in the rendering job creates an output file. Before scheduling the job, create a blob container in your storage account as the destination for the output files. The following example uses the [az storage container create](/cli/azure/storage/container#az-storage-container-create) command to create the *job-myrenderjob* container with public read access.
+In the examples in this tutorial, every task in the rendering job creates an output file. Before scheduling the job, create a blob container in your storage account as the destination for the output files. The following example uses the [az storage container create](/cli/azure/storage/container#az-storage-container-create) command to create the *job-myrenderjob* container with public read access.
```azurecli-interactive az storage container create \
@@ -160,27 +165,25 @@ az storage container create \
--name job-myrenderjob ```
-To write output files to the container, Batch needs to use a Shared Access Signature (SAS) token. Create the token with the [az storage account generate-sas](/cli/azure/storage/account#az-storage-account-generate-sas) command. This example creates a token to write to any blob container in the account, and the token expires on November 15, 2020:
+To write output files to the container, Batch needs to use a Shared Access Signature (SAS) token. Create the token with the [az storage account generate-sas](/cli/azure/storage/account#az-storage-account-generate-sas) command. This example creates a token to write to any blob container in the account, and the token expires on November 15, 2021:
```azurecli-interactive az storage account generate-sas \ --permissions w \ --resource-types co \ --services b \
- --expiry 2020-11-15
+ --expiry 2021-11-15
```
-Take note of the token returned by the command, which looks similar to the following. You use this token in a later step.
+Take note of the token returned by the command, which looks similar to the following. You'll use this token in a later step.
-```
-se=2020-11-15&sp=rw&sv=2019-09-24&ss=b&srt=co&sig=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-```
+`se=2021-11-15&sp=rw&sv=2019-09-24&ss=b&srt=co&sig=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
## Render a single-frame scene ### Create a job
-Create a rendering job to run on the pool by using the [az batch job create](/cli/azure/batch/job#az-batch-job-create) command. Initially the job has no tasks.
+Create a rendering job to run on the pool by using the [az batch job create](/cli/azure/batch/job#az-batch-job-create) command. Initially, the job has no tasks.
```azurecli-interactive az batch job create \
@@ -198,10 +201,7 @@ Modify the `blobSource` and `containerURL` elements in the JSON file so that the
> [!TIP] > Your `containerURL` ends with your SAS token and is similar to:
->
-> ```
-> https://mystorageaccount.blob.core.windows.net/job-myrenderjob/$TaskOutput?se=2018-11-15&sp=rw&sv=2017-04-17&ss=b&srt=co&sig=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-> ```
+> `https://mystorageaccount.blob.core.windows.net/job-myrenderjob/$TaskOutput?se=2018-11-15&sp=rw&sv=2017-04-17&ss=b&srt=co&sig=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
```json {
@@ -245,7 +245,6 @@ az batch task create \
Batch schedules the task, and the task runs as soon as a node in the pool is available. - ### View task output The task takes a few minutes to run. Use the [az batch task show](/cli/azure/batch/task#az-batch-task-show) command to view details about the task.
@@ -270,7 +269,6 @@ Open *dragon.jpg* on your computer. The rendered image looks similar to the foll
![Rendered dragon frame 1](./media/tutorial-rendering-cli/dragon-frame.png) - ## Scale the pool Now modify the pool to prepare for a larger rendering job, with multiple frames. Batch provides a number of ways to scale the compute resources, including [autoscaling](batch-automatic-scaling.md) which adds or removes nodes as task demands change. For this basic example, use the [az batch pool resize](/cli/azure/batch/pool#az-batch-pool-resize) command to increase the number of low-priority nodes in the pool to *6*:
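The resize command itself is outside this excerpt; a minimal sketch of the call, with the pool ID treated as a placeholder for the ID defined in *mypool.json*:

```azurecli-interactive
# Scale the rendering pool to six low-priority nodes (replace the pool ID with the one from mypool.json)
az batch pool resize \
    --pool-id myrenderpool \
    --target-dedicated-nodes 0 \
    --target-low-priority-nodes 6
```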
@@ -308,7 +306,7 @@ az batch task show \
--job-id myrenderjob \ --task-id mymultitask1 ```
-
+ The tasks generate output files named *dragon0002.jpg* - *dragon0007.jpg* on the compute nodes and upload them to the *job-myrenderjob* container in your storage account. To view the output, download the files to a folder on your local computer using the [az storage blob download-batch](/cli/azure/storage/blob) command. For example: ```azurecli-interactive
@@ -321,12 +319,11 @@ Open one of the files on your computer. Rendered frame 6 looks similar to the fo
![Rendered dragon frame 6](./media/tutorial-rendering-cli/dragon-frame6.png) - ## Clean up resources When no longer needed, you can use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, Batch account, pools, and all related resources. Delete the resources as follows:
-```azurecli-interactive
+```azurecli-interactive
az group delete --name myResourceGroup ```
@@ -335,11 +332,11 @@ az group delete --name myResourceGroup
In this tutorial, you learned about how to: > [!div class="checklist"]
-> * Upload scenes to Azure storage
-> * Create a Batch pool for rendering
-> * Render a single-frame scene with Arnold
-> * Scale the pool, and render a multi-frame scene
-> * Download rendered output
+> - Upload scenes to Azure storage
+> - Create a Batch pool for rendering
+> - Render a single-frame scene with Arnold
+> - Scale the pool, and render a multi-frame scene
+> - Download rendered output
To learn more about cloud-scale rendering, see the Batch rendering documentation.
cloud-shell https://docs.microsoft.com/en-us/azure/cloud-shell/example-terraform-bash https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/example-terraform-bash.md
@@ -14,7 +14,7 @@ ms.devlang: na
ms.topic: article ms.date: 11/15/2017 ms.author: tarcher
-ms.custom: devx-track-terraform, devx-track-azurecli
+ms.custom: devx-track-terraform
--- # Deploy with Terraform from Bash in Azure Cloud Shell
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/faq-stt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-stt.md
@@ -20,7 +20,7 @@ If you can't find answers to your questions in this FAQ, check out [other suppor
**Q: What is the difference between a baseline model and a custom Speech to Text model?**
-**A**: A baseline model has been trained by using Microsoft-owned data and is already deployed in the cloud. You can use a custom model to adapt a model to better fit a specific environment that has specific ambient noise or language. Factory floors, cars, or noisy streets would require an adapted acoustic model. Topics like biology, physics, radiology, product names, and custom acronyms would require an adapted language model.
+**A**: A baseline model has been trained by using Microsoft-owned data and is already deployed in the cloud. You can use a custom model to adapt a model to better fit a specific environment that has specific ambient noise or language. Factory floors, cars, or noisy streets would require an adapted acoustic model. Topics like biology, physics, radiology, product names, and custom acronyms would require an adapted language model. If you train a custom model, you should start with related text to improve the recognition of special terms and phrases.
**Q: Where do I start if I want to use a baseline model?**
@@ -44,6 +44,12 @@ You can deploy baseline and customized models in the portal and then run accurac
**A**: Currently, you can't roll back an acoustic or language adaptation process. You can delete imported data and models when they're in a terminal state.
+**Q: I get several results for each phrase with the detailed output format. Which one should I use?**
+
+**A**: Always take the first result, even if another result ("N-Best") might have a higher confidence value. The Speech service considers the first result to be the best. It can also be an empty string if no speech was recognized.
+
+The other results are likely worse and might not have full capitalization and punctuation applied. These results are most useful in special scenarios such as giving users the option to pick corrections from a list or handling incorrectly recognized commands.
+ **Q: What's the difference between the Search and Dictation model and the Conversational model?** **A**: You can choose from more than one baseline model in the Speech service. The Conversational model is useful for recognizing speech that is spoken in a conversational style. This model is ideal for transcribing phone calls. The Search and Dictation model is ideal for voice-triggered apps. The Universal model is a new model that aims to address both scenarios. The Universal model is currently at or above the quality level of the Conversational model in most locales.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-develop-custom-commands-application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-develop-custom-commands-application.md
@@ -1,7 +1,7 @@
--- title: 'How-to: Develop Custom Commands applications - Speech service' titleSuffix: Azure Cognitive Services
-description: In this how-to, you learn how to develop and customize Custom Commands applications. Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences, and is best suited for task completion or command-and-control scenarios, particularly well-matched for Internet of Things (IoT) devices, ambient and headless devices.
+description: Learn how to develop and customize Custom Commands applications. These voice-command apps are best suited for task completion or command-and-control scenarios.
services: cognitive-services author: trevorbye
@@ -15,209 +15,217 @@ ms.author: trbye
# Develop Custom Commands applications
-In this how-to, you learn how to develop and configure Custom Commands applications. Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences, and is best suited for task completion or command-and-control scenarios, particularly well-matched for Internet of Things (IoT) devices, ambient and headless devices.
+In this how-to article, you learn how to develop and configure Custom Commands applications. The Custom Commands feature helps you build rich voice-command apps that are optimized for voice-first interaction experiences. The feature is best suited to task completion or command-and-control scenarios. It's particularly well suited for Internet of Things (IoT) devices and for ambient and headless devices.
-In this article, you create an application that can turn a TV on and off, set the temperature, and set an alarm. After you create these basic commands, the following options for customizing commands are covered:
+In this article, you create an application that can turn a TV on and off, set the temperature, and set an alarm. After you create these basic commands, you'll learn about the following options for customizing commands:
* Adding parameters to commands * Adding configurations to command parameters * Building interaction rules
-* Creating language generation templates for speech responses
-* Using Custom Voice
+* Creating language-generation templates for speech responses
+* Using Custom Voice tools
-## Create application with simple commands
+## Create an application by using simple commands
-First, start by creating an empty Custom Commands application. For details, refer to the [quickstart](quickstart-custom-commands-application.md). This time, instead of importing a project, you create a blank project.
+Start by creating an empty Custom Commands application. For details, refer to the [quickstart](quickstart-custom-commands-application.md). In this application, instead of importing a project, you create a blank project.
-1. In the **Name** box, enter project name as `Smart-Room-Lite` (or something else of your choice).
+1. In the **Name** box, enter the project name *Smart-Room-Lite* (or another name of your choice).
1. In the **Language** list, select **English (United States)**.
-1. Select or create a LUIS resource of your choice.
+1. Select or create a LUIS resource.
> [!div class="mx-imgBorder"]
- > ![Create a project](media/custom-commands/create-new-project.png)
+ > ![Screenshot showing the "New project" window.](media/custom-commands/create-new-project.png)
### Update LUIS resources (optional)
-You can update the authoring resource that you selected in the **New project** window, and set a prediction resource. Prediction resource is used for recognition when your Custom Commands application is published. You don't need a prediction resource during the development and testing phases.
+You can update the authoring resource that you selected in the **New project** window. You can also set a prediction resource.
-### Add TurnOn Command
+A prediction resource is used for recognition when your Custom Commands application is published. You don't need a prediction resource during the development and testing phases.
-In the empty **Smart-Room-Lite** Custom Commands application you just created, add a simple command that processes an utterance, `turn on the tv`, and responds with the message `Ok, turning the tv on`.
+### Add a TurnOn command
-1. Create a new Command by selecting **New command** at the top of the left pane. The **New command** window opens.
-1. Provide value for the **Name** field as **TurnOn**.
+In the empty Smart-Room-Lite Custom Commands application you created, add a command. The command will process an utterance, `Turn on the tv`. It will respond with the message `Ok, turning the tv on`.
+
+1. Create a new command by selecting **New command** at the top of the left pane. The **New command** window opens.
+1. For the **Name** field, provide the value `TurnOn`.
1. Select **Create**.
-The middle pane lists the different properties of the command. You configure the following properties of the command. For explanation of all the configuration properties of a command, go to [references](./custom-commands-references.md).
+The middle pane lists the properties of the command.
+
+The following table explains the command's configuration properties. For more information, see [Custom Commands concepts and definitions](./custom-commands-references.md).
| Configuration | Description | | ---------------- | --------------------------------------------------------------------------------------------------------------------------- |
-| **Example sentences** | Example utterances the user can say to trigger this Command |
-| **Parameters** | Information required to complete the Command |
-| **Completion rules** | The actions to be taken to fulfill the Command. For example, to respond to the user or communicate with another web service. |
-| **Interaction rules** | Additional rules to handle more specific or complex situations |
+| Example sentences | Example utterances the user can say to trigger this command. |
+| Parameters | Information required to complete the command. |
+| Completion rules | Actions to be taken to fulfill the command. Examples: responding to the user or communicating with a web service. |
+| Interaction rules | Other rules to handle more specific or complex situations. |
> [!div class="mx-imgBorder"]
-> ![Create a command](media/custom-commands/add-new-command.png)
+> ![Screenshot showing where to create a command.](media/custom-commands/add-new-command.png)
#### Add example sentences
-Let's start with **Example sentences** section, and provide an example of what the user can say.
+In the **Example sentences** section, you provide an example of what the user can say.
-1. Select **Example sentences** section in the middle pane.
-1. In the right most pane, add examples:
+1. In the middle pane, select **Example sentences**.
+1. In the pane on the right, add examples:
```
- turn on the tv
+ Turn on the tv
```
-1. Select **Save** at the top of the pane.
+1. At the top of the pane, select **Save**.
-For now, we don't have parameters, so we can move to the **Completion rules** section.
+You don't have parameters yet, so you can move to the **Completion rules** section.
#### Add a completion rule
-Next, the command needs to have a completion rule. This rule tells the user that a fulfillment action is being taken. To read more about rules and completion rules, go to [references](./custom-commands-references.md).
+Next, the command needs a completion rule. This rule tells the user that a fulfillment action is being taken.
+
+For more information about rules and completion rules, see [Custom Commands concepts and definitions](./custom-commands-references.md).
-1. Select default completion rule **Done** and edit it as follows:
+1. Select the default completion rule **Done**. Then edit it as follows:
| Setting | Suggested value | Description | | ---------- | ---------------------------------------- | -------------------------------------------------- |
- | **Name** | ConfirmationResponse | A name describing the purpose of the rule |
+ | **Name** | `ConfirmationResponse` | A name describing the purpose of the rule |
| **Conditions** | None | Conditions that determine when the rule can run |
- | **Actions** | Send speech response > Simple editor > First variation > `Ok, turning the tv on` | The action to take when the rule condition is true |
+ | **Actions** | **Send speech response** > **Simple editor** > **First variation** > `Ok, turning the tv on` | The action to take when the rule condition is true |
> [!div class="mx-imgBorder"]
- > ![Create a Speech response](media/custom-commands/create-speech-response-action.png)
+ > ![Screenshot showing where to create a speech response.](media/custom-commands/create-speech-response-action.png)
1. Select **Save** to save the action. 1. Back in the **Completion rules** section, select **Save** to save all changes. > [!NOTE]
- > It's not necessary to use the default completion rule that comes with the command. If needed, you can delete the existing default completion rule and add your own rule.
+ > You don't have to use the default completion rule that comes with the command. You can delete the default completion rule and add your own rule.
-### Add SetTemperature command
+### Add a SetTemperature command
-Now, add one more command **SetTemperature** that will take a single utterance, `set the temperature to 40 degrees`, and respond with the message `Ok, setting temperature to 40 degrees`.
+Now add one more command, `SetTemperature`. This command will take a single utterance, `Set the temperature to 40 degrees`, and respond with the message `Ok, setting temperature to 40 degrees`.
-Follow the steps as illustrated for the **TurnOn** command to create a new command using the example sentence, "**set the temperature to 40 degrees**".
+To create the new command, follow the steps you used for the `TurnOn` command, but use the example sentence `Set the temperature to 40 degrees`.
-Then, edit the existing **Done** completion rules as follows:
+Then edit the existing **Done** completion rules as follows:
| Setting | Suggested value | | ---------- | ---------------------------------------- |
-| Name | ConfirmationResponse |
-| Conditions | None |
-| Actions | Send speech response > Simple editor > First variation > `Ok, setting temperature to 40 degrees` |
+| **Name** | `ConfirmationResponse` |
+| **Conditions** | None |
+| **Actions** | **Send speech response** > **Simple editor** > **First variation** > `Ok, setting temperature to 40 degrees` |
Select **Save** to save all changes to the command.
-### Add SetAlarm command
+### Add a SetAlarm command
-Create a new Command **SetAlarm** using the example sentence, "**set an alarm for 9 am tomorrow**". Then, edit the existing **Done** completion rules as follows:
+Create a new `SetAlarm` command. Use the example sentence `Set an alarm for 9 am tomorrow`. Then edit the existing **Done** completion rules as follows:
| Setting | Suggested value | | ---------- | ---------------------------------------- |
-| Rule Name | ConfirmationResponse |
-| Conditions | None |
-| Actions | Send speech response > Simple editor > First variation >`Ok, setting an alarm for 9 am tomorrow` |
+| **Name** | `ConfirmationResponse` |
+| **Conditions** | None |
+| **Actions** | **Send speech response** > **Simple editor** > **First variation** > `Ok, setting an alarm for 9 am tomorrow` |
Select **Save** to save all changes to the command. ### Try it out
-Test the behavior using the Test chat panel. Select **Train** icon present on top of the right pane. Once training completes, select **Test**. Try out the following utterance examples via voice or text:
+Test the application's behavior by using the test pane:
-- You type: set the temperature to 40 degrees
+1. In the upper-right corner of the pane, select the **Train** icon.
+1. When the training finishes, select **Test**.
+
+Try out the following utterance examples by using voice or text:
+
+- You type: *set the temperature to 40 degrees*
- Expected response: Ok, setting temperature to 40 degrees-- You type: turn on the tv
+- You type: *turn on the tv*
- Expected response: Ok, turning the tv on-- You type: set an alarm for 9 am tomorrow
+- You type: *set an alarm for 9 am tomorrow*
- Expected response: Ok, setting an alarm for 9 am tomorrow > [!div class="mx-imgBorder"]
-> ![Test with web chat](media/custom-commands/create-basic-test-chat.png)
+> ![Screenshot showing the test in a web-chat interface.](media/custom-commands/create-basic-test-chat.png)
> [!TIP]
-> In the test panel, you can select **Turn details** for information as to how this voice/text input was processed.
+> In the test pane, you can select **Turn details** for information about how this voice input or text input was processed.
## Add parameters to commands
-In this section, you learn how to add parameters to your commands. Parameters are information required by the commands to complete a task. In complex scenarios, parameters can also be used to define conditions which trigger custom actions.
+In this section, you learn how to add parameters to your commands. Commands require parameters to complete a task. In complex scenarios, parameters can be used to define conditions that trigger custom actions.
-### Configure parameters for TurnOn command
+### Configure parameters for a TurnOn command
-Start by editing the existing **TurnOn** command to turn on and turn off multiple devices.
+Start by editing the existing `TurnOn` command to turn on and turn off multiple devices.
-1. Now that the command will now handle both on and off scenarios, rename the command to **TurnOnOff**.
- 1. In the left pane, select the **TurnOn** command and then select the ellipsis (...) button next to **New command** at the top of the pane.
+1. Now that the command will handle both on and off scenarios, rename the command as *TurnOnOff*.
+ 1. In the pane on the left, select the **TurnOn** command. Then next to **New command** at the top of the pane, select the ellipsis (**...**) button.
- 1. Select **Rename**. In the **Rename command** windows, change **Name** to **TurnOnOff**.
-
-1. Next, you add a new parameter to this command which represents whether the user wants to turn the device on or off.
- 1. Select **Add** present at top of the middle pane. From the drop-down, select **Parameter**.
- 1. In the right pane, in the **Parameters** section, add value in the **Name** box as **OnOff**.
- 1. Select **Required**. In the **Add response for a required parameter** window, select **Simple editor**. In the **First variation**, add
- ```
- On or Off?
- ```
+ 1. Select **Rename**. In the **Rename command** window, change the name to *TurnOnOff*.
+
+1. Add a new parameter to the command. The parameter represents whether the user wants to turn the device on or off.
+   1. At the top of the middle pane, select **Add**. From the drop-down menu, select **Parameter**.
+ 1. In the pane on the right, in the **Parameters** section, in the **Name** box, add `OnOff`.
+ 1. Select **Required**. In the **Add response for a required parameter** window, select **Simple editor**. In the **First variation** field, add *On or Off?*.
1. Select **Update**. > [!div class="mx-imgBorder"]
- > ![Create required parameter response](media/custom-commands/add-required-on-off-parameter-response.png)
+ > ![Screenshot showing where to create a required parameter response.](media/custom-commands/add-required-on-off-parameter-response.png)
- 1. Now we configure the parameters properties. For explanation of all the configuration properties of a command, go to [references](./custom-commands-references.md). Configure the properties of the parameter as follows:
+ 1. Configure the parameter's properties by using the following table. For information about all of the configuration properties of a command, see [Custom Commands concepts and definitions](./custom-commands-references.md).
| Configuration | Suggested value | Description | | ------------------ | ----------------| ---------------------------------------------------------------------|
- | Name | `OnOff` | A descriptive name for parameter |
- | Is Global | unchecked | Checkbox indicating whether a value for this parameter is globally applied to all Commands in the application|
- | Required | checked | Checkbox indicating whether a value for this parameter is required before completing the Command |
- | Response for required parameter |Simple editor > `On or Off?` | A prompt to ask for the value of this parameter when it isn't known |
- | Type | String | The type of parameter, such as Number, String, Date Time or Geography |
- | Configuration | Accept predefined input values from internal catalog | For Strings, this limits inputs to a set of possible values |
- | Predefined input values | `on`, `off` | Set of possible values and their aliases |
+ | **Name** | `OnOff` | A descriptive name for the parameter |
+ | **Is Global** | Unselected | Check box indicating whether a value for this parameter is globally applied to all commands in the application.|
+ | **Required** | Selected | Check box indicating whether a value for this parameter is required before the command finishes. |
+ | **Response for required parameter** |**Simple editor** > `On or Off?` | A prompt asking for the value of this parameter when it isn't known. |
+ | **Type** | **String** | Parameter type, such as Number, String, Date Time, or Geography. |
+ | **Configuration** | **Accept predefined input values from an internal catalog** | For strings, this setting limits inputs to a set of possible values. |
+ | **Predefined input values** | `on`, `off` | Set of possible values and their aliases. |
- 1. For adding predefined input values, select **Add a predefined input** and in **New Item** window, type in **Name** as provided in the table above. In this case, we aren't using aliases, so you can leave it blank.
+   1. To add predefined input values, select **Add a predefined input**. In the **New Item** window, enter the name as shown in the preceding table. In this case, you're not using aliases, so you can leave the alias field blank.
> [!div class="mx-imgBorder"]
- > ![Create parameter](media/custom-commands/create-on-off-parameter.png)
+ > ![Screenshot showing how to create a parameter.](media/custom-commands/create-on-off-parameter.png)
1. Select **Save** to save all configurations of the parameter.
-#### Add SubjectDevice parameter
+#### Add a SubjectDevice parameter
-1. Next, select **Add** again to add a second parameter to represent the name of the devices which can be controlled using this command. Use the following configuration.
+1. To add a second parameter to represent the name of the devices that can be controlled by using this command, select **Add**. Use the following configuration.
| Setting | Suggested value | | ------------------ | --------------------- |
- | Name | `SubjectDevice` |
- | Is Global | unchecked |
- | Required | checked |
- | Response for required parameter | Simple editor > `Which device do you want to control?` |
- | Type | String | |
- | Configuration | Accept predefined input values from internal catalog |
- | Predefined input values | `tv`, `fan` |
- | Aliases (`tv`) | `television`, `telly` |
+ | **Name** | `SubjectDevice` |
+ | **Is Global** | Unselected |
+ | **Required** | Selected |
+ | **Response for required parameter** | **Simple editor** > `Which device do you want to control?` |
+   | **Type** | **String** |
+ | **Configuration** | **Accept predefined input values from an internal catalog** |
+ | **Predefined input values** | `tv`, `fan` |
+ | **Aliases** (`tv`) | `television`, `telly` |
-1. Select **Save**
+1. Select **Save**.
#### Modify example sentences
-For commands with parameters, it's helpful to add example sentences that cover all possible combinations. For example:
+For commands that use parameters, it's helpful to add example sentences that cover all possible combinations. For example:
-* Complete parameter information - `turn {OnOff} the {SubjectDevice}`
-* Partial parameter information - `turn it {OnOff}`
-* No parameter information - `turn something`
+* Complete parameter information: `turn {OnOff} the {SubjectDevice}`
+* Partial parameter information: `turn it {OnOff}`
+* No parameter information: `turn something`
-Example sentences with different degree of information allow the Custom Commands application to resolve both one-shot resolutions and multi-turn resolutions with partial information.
+Example sentences that use varying degrees of information allow the Custom Commands application to resolve both one-shot resolutions and multiple-turn resolutions by using partial information.
-With that in mind, edit the example sentences to use the parameters as suggested below:
+With that information in mind, edit the example sentences to use these suggested parameters:
``` turn {OnOff} the {SubjectDevice}
@@ -230,50 +238,52 @@ turn something
Select **Save**. > [!TIP]
-> In the Example sentences editor use curly braces to refer to your parameters. - `turn {OnOff} the {SubjectDevice}`
-> Use tab for auto-completion backed by previously created parameters.
+> In the example-sentences editor, use curly braces to refer to your parameters. For example, `turn {OnOff} the {SubjectDevice}`.
+> Use a tab for automatic completion backed by previously created parameters.
#### Modify completion rules to include parameters
-Modify the existing Completion rule **ConfirmationResponse**.
+Modify the existing completion rule `ConfirmationResponse`.
1. In the **Conditions** section, select **Add a condition**.
-1. In the **New Condition** window, in the **Type** list, select **Required parameters**. In the check-list below, check both **OnOff** and **SubjectDevice**.
-1. Leave **IsGlobal** as unchecked.
+1. In the **New Condition** window, in the **Type** list, select **Required parameters**. In the list that follows, select both **OnOff** and **SubjectDevice**.
+1. Leave **IsGlobal** unselected.
1. Select **Create**.
-1. In the **Actions** section, edit the existing **Send speech response** action by hovering over the action and selecting the edit button. This time, make use of the newly created **OnOff** and **SubjectDevice** parameters
+1. In the **Actions** section, edit the **Send speech response** action by hovering over it and selecting the edit button. This time, use the newly created `OnOff` and `SubjectDevice` parameters:
``` Ok, turning the {SubjectDevice} {OnOff} ``` 1. Select **Save**.
-Try out the changes by selecting the **Train** icon on top of the right pane. When training completes, select **Test**. A **Test your application** window will appear. Try the following interactions.
+Try out the changes by selecting the **Train** icon at the top of the pane on the right.
+
+When the training finishes, select **Test**. A **Test your application** window appears. Try the following interactions:
-- Input: turn off the tv
+- Input: *turn off the tv*
- Output: Ok, turning off the tv-- Input: turn off the television
+- Input: *turn off the television*
- Output: Ok, turning off the tv-- Input: turn it off
+- Input: *turn it off*
- Output: Which device do you want to control?-- Input: the tv
+- Input: *the tv*
- Output: Ok, turning off the tv
-### Configure parameters for SetTemperature command
+### Configure parameters for a SetTemperature command
-Modify the **SetTemperature** command to enable it to set the temperature as directed by the user.
+Modify the `SetTemperature` command to enable it to set the temperature as the user directs.
-Add new parameter **Temperature** with the following configuration
+Add a `Temperature` parameter. Use the following configuration:
| Configuration | Suggested value | | ------------------ | ----------------|
-| Name | `Temperature` |
-| Required | checked |
-| Response for required parameter | Simple editor > `What temperature would you like?`
-| Type | Number |
+| **Name** | `Temperature` |
+| **Required** | Selected |
+| **Response for required parameter** | **Simple editor** > `What temperature would you like?`
+| **Type** | **Number** |
-Edit the example utterances to the following values.
+Edit the example utterances to use the following values.
``` set the temperature to {Temperature} degrees
@@ -282,32 +292,32 @@ set the temperature
change the temperature ```
-Edit the existing completion rules as per the following configuration.
+Edit the existing completion rules. Use the following configuration.
| Configuration | Suggested value | | ------------------ | ----------------|
-| Conditions | Required parameter > Temperature |
-| Actions | Send speech response > `Ok, setting temperature to {Temperature} degrees` |
+| **Conditions** | **Required parameter** > **Temperature** |
+| **Actions** | **Send speech response** > `Ok, setting temperature to {Temperature} degrees` |
-### Configure parameters to the SetAlarm command
+### Configure parameters for a SetAlarm command
-Add a parameter called **DateTime** with the following configuration.
+Add a parameter called `DateTime`. Use the following configuration.
| Setting | Suggested value | | --------------------------------- | ----------------------------------------|
- | Name | `DateTime` |
- | Required | checked |
- | Response for required parameter | Simple editor > `For what time?` |
- | Type | DateTime |
- | Date Defaults | If date is missing use today |
- | Time Defaults | If time is missing use start of day |
+ | **Name** | `DateTime` |
+ | **Required** | Selected |
+ | **Response for required parameter** | **Simple editor** > `For what time?` |
+ | **Type** | **DateTime** |
+ | **Date Defaults** | If the date is missing, use today. |
+ | **Time Defaults** | If the time is missing, use the start of the day. |
> [!NOTE]
-> In this article, we predominantly made use of string, number and DateTime parameter types. For list of all supported parameter types and their properties, go to [references](./custom-commands-references.md).
+> This article mostly uses String, Number, and DateTime parameter types. For a list of all supported parameter types and their properties, see [Custom Commands concepts and definitions](./custom-commands-references.md).
-Edit example utterances to the following values.
+Edit the example utterances. Use the following values.
``` set an alarm for {DateTime}
@@ -315,46 +325,46 @@ set alarm {DateTime}
alarm for {DateTime} ```
-Edit the existing completion rules as per the following configuration.
+Edit the existing completion rules. Use the following configuration.
| Setting | Suggested value | | ---------- | ------------------------------------------------------- |
- | Actions | Send speech response - `Ok, alarm set for {DateTime}` |
+ | **Actions** | **Send speech response** > `Ok, alarm set for {DateTime}` |
-Test out the all the three commands together using utterances related to different commands. Note that you can switch between the different commands.
+Test the three commands together by using utterances related to different commands. (You can switch between the different commands.)
-- Input: Set an alarm
+- Input: *Set an alarm*
- Output: For what time?-- Input: Turn on the tv
+- Input: *Turn on the tv*
- Output: Ok, turning the tv on-- Input: Set an alarm
+- Input: *Set an alarm*
- Output: For what time?-- Input: 5pm
+- Input: *5 pm*
- Output: Ok, alarm set for 2020-05-01 17:00:00
-## Add configurations to commands parameters
+## Add configurations to command parameters
In this section, you learn more about advanced parameter configuration, including:
- - How parameter values can belong to a set defined externally to custom commands application
- - How to add validation clauses on the value of the parameters
+ - How parameter values can belong to a set that's defined outside of the Custom Commands application.
+ - How to add validation clauses on the parameter values.
-### Configure parameter as external catalog entity
+### Configure a parameter as an external catalog entity
-Custom Commands allows you to configure string-type parameters to refer to external catalogs hosted over a web endpoint. This allows you to update the external catalog independently without making edits to the Custom Commands application. This approach is useful in cases where the catalog entries can be large in number.
+The Custom Commands feature allows you to configure string-type parameters to refer to external catalogs hosted over a web endpoint. So you can update the external catalog independently without editing the Custom Commands application. This approach is useful in cases where the catalog entries are numerous.
-Reuse the **SubjectDevice** parameter from the **TurnOnOff** command. The current configuration for this parameter is **Accept predefined inputs from internal catalog**. This refers to static list of devices as defined in the parameter configuration. We want to move out this content to an external data source which can be updated independently.
+Reuse the `SubjectDevice` parameter from the `TurnOnOff` command. The current configuration for this parameter is **Accept predefined inputs from internal catalog**. This configuration refers to a static list of devices in the parameter configuration. Move out this content to an external data source that can be updated independently.
-To do this, start by adding a new web endpoint. Go to **Web endpoints** section in the left pane and add a new web endpoint with the following configuration.
+To move the content, start by adding a new web endpoint. In the pane on the left, go to the **Web endpoints** section and add a web endpoint that uses the following configuration.
| Setting | Suggested value | |----|----|
-| Name | `getDevices` |
-| URL | `https://aka.ms/speech/cc-sampledevices` |
-| Method | GET |
+| **Name** | `getDevices` |
+| **URL** | `https://aka.ms/speech/cc-sampledevices` |
+| **Method** | **GET** |
-If the suggested value for URL doesn't work for you, you need to configure and host a simple web endpoint which returns a json consisting of list of the devices which can be controlled. The web endpoint should return a json formatted as follows:
+If the suggested value for the URL doesn't work for you, configure and host a web endpoint that returns a JSON file that consists of the list of the devices that can be controlled. The web endpoint should return a JSON file formatted as follows:
```json {
@@ -376,168 +386,172 @@ If the suggested value for URL doesn't work for you, you need to configure and h
```
-Next go the **SubjectDevice** parameter settings page and change the properties to the following.
+Next, go to the **SubjectDevice** parameter settings page. Set up the following properties.
| Setting | Suggested value | | ----| ---- |
-| Configuration | Accept predefined inputs from external catalog |
-| Catalog endpoint | getDevices |
-| Method | GET |
+| **Configuration** | **Accept predefined inputs from external catalog** |
+| **Catalog endpoint** | `getDevices` |
+| **Method** | **GET** |
-Then, select **Save**.
+Then select **Save**.
> [!IMPORTANT]
-> You won't see an option to configure a parameter to accept inputs from an external catalog unless you have the web endpoint set in the **Web endpoint** section in the left pane.
+> You won't see an option to configure a parameter to accept inputs from an external catalog unless you have the web endpoint set in the **Web endpoint** section in the pane on the left.
-Try it out by selecting **Train** and wait for training completion. Once training completes, select **Test** and try a few interactions.
+Try it out by selecting **Train**. After the training finishes, select **Test** and try a few interactions.
-* Input: turn on
+* Input: *turn on*
* Output: Which device do you want to control?
-* Input: lights
+* Input: *lights*
* Output: Ok, turning the lights on > [!NOTE]
-> Notice how you can control all the devices hosted on the web endpoint now. You still need to train the application for testing out the new changes and re-publish the application.
+> You can now control all the devices hosted on the web endpoint. But you still need to train the application to test the new changes and then republish the application.
### Add validation to parameters
-**Validations** are constructs applicable to certain parameter types which allow you to configure constraints on the parameter's value, and prompt for correction if values to do not fall within the constraints. For full list of parameter types extending the validation construct, go to [references](./custom-commands-references.md)
+*Validations* are constructs that apply to certain parameter types. They allow you to configure constraints on the parameter's value, and they prompt the user for a correction if a value doesn't fall within those constraints. For a list of parameter types that extend the validation construct, see [Custom Commands concepts and definitions](./custom-commands-references.md).
-Test out validations using the **SetTemperature** command. Use the following steps to add a validation for the **Temperature** parameter.
+Test out validations by using the `SetTemperature` command. Use the following steps to add a validation for the `Temperature` parameter.
-1. Select **SetTemperature** command in the left pane.
-1. Select **Temperature** in the middle pane.
-1. Select **Add a validation** present in the right pane.
-1. In the **New validation** window, configure validation as follows, and select **Create**.
+1. In the pane on the left, select the **SetTemperature** command.
+1. In the middle pane, select **Temperature**.
+1. In the pane on the right, select **Add a validation**.
+1. In the **New validation** window, configure validation as shown in the following table. Then select **Create**.
- | Parameter Configuration | Suggested value | Description |
+ | Parameter configuration | Suggested value | Description |
| ---- | ---- | ---- |
- | Min Value | `60` | For Number parameters, the minimum value this parameter can assume |
- | Max Value | `80` | For Number parameters, the maximum value this parameter can assume |
- | Failure response | Simple editor > First Variation > `Sorry, I can only set temperature between 60 and 80 degrees. What temperature do you want?` | Prompt to ask for a new value if the validation fails |
+ | **Min Value** | `60` | For Number parameters, the minimum value this parameter can assume |
+ | **Max Value** | `80` | For Number parameters, the maximum value this parameter can assume |
+ | **Failure response** | **Simple editor** > **First variation** > `Sorry, I can only set temperature between 60 and 80 degrees. What temperature do you want?` | A prompt to ask for a new value if the validation fails |
> [!div class="mx-imgBorder"]
- > ![Add a range validation](media/custom-commands/add-validations-temperature.png)
+ > ![Screenshot showing how to add a range validation.](media/custom-commands/add-validations-temperature.png)
-Try it out by selecting the **Train** icon present on top of the right pane. Once training completes, select **Test** and try a few interactions:
+Try it out by selecting the **Train** icon at the top of the pane on the right. After the training finishes, select **Test**. Try a few interactions:
-- Input: Set the temperature to 72 degrees
+- Input: *Set the temperature to 72 degrees*
- Output: Ok, setting temperature to 72 degrees-- Input: Set the temperature to 45 degrees
+- Input: *Set the temperature to 45 degrees*
- Output: Sorry, I can only set temperature between 60 and 80 degrees-- Input: make it 72 degrees instead
+- Input: *make it 72 degrees instead*
- Output: Ok, setting temperature to 72 degrees ## Add interaction rules
-Interaction rules are *additional rules* to handle specific or complex situations. While you're free to author your own custom interaction rules, in this example you make use of interaction rules for the following targeted scenarios:
+Interaction rules are *additional* rules that handle specific or complex situations. Although you're free to author your own interaction rules, in this example you use interaction rules for the following scenarios:
* Confirming commands * Adding a one-step correction to commands
-To learn more about interaction rules, go to the [references](./custom-commands-references.md) section.
+For more information about interaction rules, see [Custom Commands concepts and definitions](./custom-commands-references.md).
### Add confirmations to a command
-To add a confirmation, use the **SetTemperature** command. To achieve confirmation, you create interaction rules by using the following steps.
+To add a confirmation, you use the `SetTemperature` command. To achieve confirmation, create interaction rules by using the following steps:
-1. Select the **SetTemperature** command in the left pane.
-1. Add interaction rules by selecting **Add** in the middle pane. Then select **Interaction rules** > **Confirm command**.
+1. In the pane on the left, select the **SetTemperature** command.
+1. In the middle pane, add interaction rules by selecting **Add**. Then select **Interaction rules** > **Confirm command**.
- This action adds three interaction rules which will ask the user to confirm the date and time of the alarm and expects a confirmation (yes/no) for the next turn.
+   This action adds three interaction rules. The rules ask the user to confirm the command (in this case, the temperature setting). They expect a confirmation (yes or no) for the next turn.
- 1. Modify the **Confirm command** interaction rule as per the following configuration:
- 1. Rename **Name** to **Confirm temperature**.
- 1. Add a new condition as **Required parameters** > **Temperature**.
- 1. Add a new action as **Type** > **Send speech response** > **Are you sure you want to set the temperature as {Temperature} degrees?**
- 1. Leave the default value of **Expecting confirmation from user** in the **Expectations** section.
+ 1. Modify the **Confirm command** interaction rule by using the following configuration:
+ 1. Change the name to **Confirm temperature**.
+ 1. Add a new condition: **Required parameters** > **Temperature**.
+ 1. Add a new action: **Type** > **Send speech response** > **Are you sure you want to set the temperature as {Temperature} degrees?**
+ 1. In the **Expectations** section, leave the default value of **Expecting confirmation from user**.
> [!div class="mx-imgBorder"]
- > ![Create required parameter response](media/custom-speech-commands/add-validation-set-temperature.png)
+ > ![Screenshot showing how to create the required parameter response.](media/custom-speech-commands/add-validation-set-temperature.png)
- 1. Modify the **Confirmation succeeded** interaction rule to handle a successful confirmation (user said yes).
+ 1. Modify the **Confirmation succeeded** interaction rule to handle a successful confirmation (the user said yes).
- 1. Modify **Name** to **Confirmation temperature succeeded**.
- 1. Leave the already existing **Confirmation was successful** condition.
- 1. Add a new condition as **Type** > **Required parameters** > **Temperature**.
- 1. Leave the default value of **Post-execution state** as **Execute completion rules**.
+ 1. Change the name to **Confirmation temperature succeeded**.
+ 1. Leave the existing **Confirmation was successful** condition.
+ 1. Add a new condition: **Type** > **Required parameters** > **Temperature**.
+ 1. Leave the default **Post-execution state** value as **Execute completion rules**.
- 1. Modify the **Confirmation denied** interaction rule to handle scenarios when confirmation is denied (user said no).
+ 1. Modify the **Confirmation denied** interaction rule to handle scenarios when confirmation is denied (the user said no).
- 1. Modify **Name** to **Confirmation temperature denied**.
- 1. Leave the already existing **Confirmation was denied** condition.
- 1. Add a new condition as **Type** > **Required parameters** > **Temperature**.
- 1. Add a new action as **Type** > **Send speech response** > **No problem. What temperature then?**
- 1. Leave the default value of **Post-execution state** as **Wait for user's input**.
+ 1. Change the name to **Confirmation temperature denied**.
+ 1. Leave the existing **Confirmation was denied** condition.
+ 1. Add a new condition: **Type** > **Required parameters** > **Temperature**.
+ 1. Add a new action: **Type** > **Send speech response** > **No problem. What temperature then?**.
+ 1. Leave the default **Post-execution state** value as **Wait for user's input**.
> [!IMPORTANT]
-> In this article, you used the built-in confirmation capability. You can also manually add interaction rules one by one.
+> In this article, you use the built-in confirmation capability. You can also manually add interaction rules one by one.
-Try out the changes by selecting **Train**, wait for the training to finish, and select **Test**.
+Try out the changes by selecting **Train**. When the training finishes, select **Test**.
-- **Input**: Set temperature to 80 degrees
+- **Input**: *Set temperature to 80 degrees*
- **Output**: Are you sure you want to set the temperature as 80 degrees?-- **Input**: No
+- **Input**: *No*
- **Output**: No problem. What temperature then?-- **Input**: 72 degrees
+- **Input**: *72 degrees*
- **Output**: Are you sure you want to set the temperature as 72 degrees?-- **Input**: Yes
+- **Input**: *Yes*
- **Output**: OK, setting temperature to 72 degrees ### Implement corrections in a command
-In this section, you configure a one-step correction, which is used after the fulfillment action has already been executed. You also see an example of how a correction is enabled by default in case the command isn't fulfilled yet. To add a correction when the command isn't completed, add the new parameter **AlarmTone**.
+In this section, you'll configure a one-step correction. This correction is used after the fulfillment action has run. You'll also see an example of how a correction is enabled by default if the command isn't fulfilled yet. To add a correction when the command isn't finished, add the new parameter `AlarmTone`.
-Select the **SetAlarm** command from the left pane, and add the new parameter **AlarmTone**.
+In the left pane, select the **SetAlarm** command. Then add the new parameter **AlarmTone**.
-- **Name** > **AlarmTone**
+- **Name** > `AlarmTone`
- **Type** > **String** - **Default Value** > **Chimes** - **Configuration** > **Accept predefined input values from the internal catalog**-- **Predefined input values** > **Chimes**, **Jingle**, and **Echo** as individual predefined inputs
+- **Predefined input values** > **Chimes**, **Jingle**, and **Echo** (These values are individual predefined inputs.)
Next, update the response for the **DateTime** parameter to **Ready to set alarm with tone as {AlarmTone}. For what time?**. Then modify the completion rule as follows: 1. Select the existing completion rule **ConfirmationResponse**.
-1. In the right pane, hover over the existing action and select **Edit**.
-1. Update the speech response to **OK, alarm set for {DateTime}. The alarm tone is {AlarmTone}.**
+1. In the pane on the right, hover over the existing action and select **Edit**.
+1. Update the speech response to `OK, alarm set for {DateTime}. The alarm tone is {AlarmTone}`.
> [!IMPORTANT]
-> The alarm tone could be changed without any explicit configuration in an ongoing command, for example, when the command wasn't finished yet. *A correction is enabled by default for all the command parameters, regardless of the turn number if the command is yet to be fulfilled.*
+> The alarm tone can change without any explicit configuration in an ongoing command. For example, it can change when the command hasn't finished yet. A correction is enabled *by default* for all of the command parameters, regardless of the turn number, if the command is yet to be fulfilled.
+
+#### Implement a correction when a command is finished
-#### Correction when command is completed
+The Custom Commands platform allows for one-step correction even when the command has finished. This feature isn't enabled by default. It must be explicitly configured.
-The Custom Commands platform also provides the capability for a one-step correction even when the command has been completed. This feature isn't enabled by default. It must be explicitly configured. Use the following steps to configure a one-step correction.
+Use the following steps to configure a one-step correction:
-1. In the **SetAlarm** command, add an interaction rule of the type **Update previous command** to update the previously set alarm. Rename the default **Name** of the interaction rule to **Update previous alarm**.
-1. Leave the default condition **Previous command needs to be updated** as is.
-1. Add a new condition as **Type** > **Required Parameter** > **DateTime**.
-1. Add a new action as **Type** > **Send speech response** > **Simple editor** > **Updating previous alarm time to {DateTime}.**
-1. Leave the default value of **Post-execution state** as **Command completed**.
+1. In the **SetAlarm** command, add an interaction rule of the type **Update previous command** to update the previously set alarm. Rename the interaction rule as **Update previous alarm**.
+1. Leave the default condition: **Previous command needs to be updated**.
+1. Add a new condition: **Type** > **Required Parameter** > **DateTime**.
+1. Add a new action: **Type** > **Send speech response** > **Simple editor** > **Updating previous alarm time to {DateTime}**.
+1. Leave the default **Post-execution state** value as **Command completed**.
-Try out the changes by selecting **Train**, wait for the training to finish, and select **Test**.
+Try out the changes by selecting **Train**. Wait for the training to finish, and then select **Test**.
-- **Input**: Set an alarm.
+- **Input**: *Set an alarm.*
- **Output**: Ready to set alarm with tone as Chimes. For what time?-- **Input**: Set an alarm with the tone as Jingle for 9 am tomorrow.
+- **Input**: *Set an alarm with the tone as Jingle for 9 am tomorrow.*
- **Output**: OK, alarm set for 2020-05-21 09:00:00. The alarm tone is Jingle.-- **Input**: No, 8 am.
+- **Input**: *No, 8 am.*
- **Output**: Updating previous alarm time to 2020-05-21 08:00. > [!NOTE]
-> In a real application, in the **Actions** section of this correction rule, you'll also need to send back an activity to the client or call an HTTP endpoint to update the alarm time in your system. This action should be solely responsible for updating the alarm time and not any other attribute of the command. In this case, that would be the alarm tone.
+> In a real application, in the **Actions** section of this correction rule, you'll also need to send back an activity to the client or call an HTTP endpoint to update the alarm time in your system. This action should be solely responsible for updating the alarm time. It shouldn't be responsible for any other attribute of the command. In this case, that attribute would be the alarm tone.
-## Add language generation templates for speech responses
+## Add language-generation templates for speech responses
-Language generation templates allow you to customize the responses sent to the client, and introduce variance in the responses. Language generation customization can be achieved by:
+Language-generation (LG) templates allow you to customize the responses sent to the client. They introduce variance into the responses. You can customize language generation by using:
-* Use of language generation templates
-* Use of adaptive expressions
+* Language-generation templates.
+* Adaptive expressions.
-Custom Commands templates are based on the BotFramework's [LG templates](/azure/bot-service/file-format/bot-builder-lg-file-format#templates). Since Custom Commands creates a new LG template when required (that is, for speech responses in parameters or actions) you do not have to specify the name of the LG template. So, instead of defining your template as:
+Custom Commands templates are based on the Bot Framework's [LG templates](/azure/bot-service/file-format/bot-builder-lg-file-format#templates). Because the Custom Commands feature creates a new LG template when required (for speech responses in parameters or actions), you don't have to specify the name of the LG template.
+
+So you don't need to define your template like this:
``` # CompletionAction
@@ -546,38 +560,40 @@ Custom Commands templates are based on the BotFramework's [LG templates](/azure/
- Proceeding to turn {OnOff} {SubjectDevice} ```
-You only need to define the body of the template without the name, as follows.
+Instead, you can define the body of the template without the name, like this:
> [!div class="mx-imgBorder"]
-> ![template editor example](./media/custom-commands/template-editor-example.png)
+> ![Screenshot showing a template editor example.](./media/custom-commands/template-editor-example.png)
+
+This change introduces variation into the speech responses that are sent to the client. For a given utterance, the corresponding speech response is randomly picked from the provided options.
-This change introduces variation to the speech responses being sent to the client. So, for the same utterance, the corresponding speech response would be randomly picked out of the options provided.
+By taking advantage of LG templates, you can also define complex speech responses for commands by using adaptive expressions. For more information, see the [LG templates format](/azure/bot-service/file-format/bot-builder-lg-file-format#templates).
-Taking advantage of LG templates also allows you to define complex speech responses for commands using adaptive expressions. You can refer to the [LG templates format](/azure/bot-service/file-format/bot-builder-lg-file-format#templates) for more details. Custom Commands by default supports all the capabilities with the following minor differences:
+By default, the Custom Commands feature supports all capabilities, with the following minor differences:
-* In the LG templates entities are represented as ${entityName}. In Custom Commands we don't use entities but parameters can be used as variables with either one of these representations ${parameterName} or {parameterName}
-* Template composition and expansion are not supported in Custom Commands. This is because you never edit the `.lg` file directly, but only the responses of automatically created templates.
-* Custom functions injected by LG are not supported in Custom Commands. Predefined functions are still supported.
-* Options (strict, replaceNull & lineBreakStyle) are not supported in Custom Commands.
+* In LG templates, entities are represented as `${entityName}`. The Custom Commands feature doesn't use entities, but you can use parameters as variables with either the `${parameterName}` representation or the `{parameterName}` representation, as shown in the sketch after this list.
+* The Custom Commands feature doesn't support template composition and expansion, because you never edit the *.lg* file directly. You edit only the responses of automatically created templates.
+* The Custom Commands feature doesn't support custom functions that LG injects. Predefined functions are supported.
+* The Custom Commands feature doesn't support options, such as `strict`, `replaceNull`, and `lineBreakStyle`.
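As an illustration only, here's a small sketch of a template body that uses the parameters from the `TurnOnOff` command in this article and mixes both parameter representations:

```
- Ok, turning {OnOff} the {SubjectDevice}
- Done, the ${SubjectDevice} is now ${OnOff}
```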
-### Add template responses to TurnOnOff command
+### Add template responses to a TurnOnOff command
-Modify the **TurnOnOff** command to add a new parameter with the following configuration:
+Modify the `TurnOnOff` command to add a new parameter. Use the following configuration.
| Setting | Suggested value | | ------------------ | --------------------- |
-| Name | `SubjectContext` |
-| Is Global | unchecked |
-| Required | unchecked |
-| Type | String |
-| Default value | `all` |
-| Configuration | Accept predefined input values from internal catalog |
-| Predefined input values | `room`, `bathroom`, `all`|
+| **Name** | `SubjectContext` |
+| **Is Global** | Unselected |
+| **Required** | Unselected |
+| **Type** | **String** |
+| **Default value** | `all` |
+| **Configuration** | **Accept predefined input values from internal catalog** |
+| **Predefined input values** | `room`, `bathroom`, `all`|
-#### Modify completion rule
+#### Modify a completion rule
-Edit the **Actions** section of existing completion rule **ConfirmationResponse**. In the **Edit action** pop-up, switch to **Template Editor** and replace the text with the following example.
+Edit the **Actions** section of the existing completion rule **ConfirmationResponse**. In the **Edit action** window, switch to **Template Editor**. Then replace the text with the following example.
``` - IF: @{SubjectContext == "all" && SubjectDevice == "lights"}
@@ -589,37 +605,38 @@ Edit the **Actions** section of existing completion rule **ConfirmationResponse*
- Done, turning {OnOff} the {SubjectDevice} ```
-**Train** and **Test** your application as follows. Notice the variation of responses due to usage of multiple alternatives of the template value, and also use of adaptive expressions.
+Train and test your application by using the following input and output. Notice the variation of responses. The variation is created by multiple alternatives of the template value and also by use of adaptive expressions.
-* Input: turn on the tv
+* Input: *turn on the tv*
* Output: Ok, turning the tv on
-* Input: turn on the tv
+* Input: *turn on the tv*
* Output: Done, turned on the tv
-* Input: turn off the lights
+* Input: *turn off the lights*
* Output: Ok, turning all the lights off
-* Input: turn off room lights
+* Input: *turn off room lights*
* Output: Ok, turning off the room lights
-## Use Custom Voice
+## Use a custom voice
-Another way to customize Custom Commands responses is to select a custom output voice. Use the following steps to switch the default voice to a custom voice.
+Another way to customize Custom Commands responses is to select an output voice. Use the following steps to switch the default voice to a custom voice:
-1. In your custom commands application, select **Settings** from the left pane.
-1. Select **Custom Voice** from the middle pane.
-1. Select the desired custom or public voice from the table.
+1. In your Custom Commands application, in the pane on the left, select **Settings**.
+1. In the middle pane, select **Custom Voice**.
+1. In the table, select a custom voice or public voice.
1. Select **Save**. > [!div class="mx-imgBorder"]
-> ![Sample Sentences with parameters](media/custom-commands/select-custom-voice.png)
+> ![Screenshot showing sample sentences and parameters.](media/custom-commands/select-custom-voice.png)
> [!NOTE]
-> - For **Public voices**, **Neural types** are only available for specific regions. To check availability, see [standard and neural voices by region/endpoint](./regions.md#standard-and-neural-voices).
-> - For **Custom voices**, they can be created from the Custom Voice project page. See [Get Started with Custom Voice](./how-to-custom-voice.md).
+> For public voices, neural types are available only for specific regions. For more information, see [Speech service supported regions](./regions.md#standard-and-neural-voices).
+>
+> You can create custom voices on the **Custom Voice** project page. For more information, see [Get started with Custom Voice](./how-to-custom-voice.md).
Now the application will respond in the selected voice, instead of the default voice.
## Next steps
-* Learn how to [integrate your Custom Commands application](how-to-custom-commands-setup-speech-sdk.md) with a client app using the Speech SDK.
-* [Set up continuous deployment](how-to-custom-commands-deploy-cicd.md) for your Custom Commands application with Azure DevOps.
-
\ No newline at end of file
+* Learn how to [integrate your Custom Commands application](how-to-custom-commands-setup-speech-sdk.md) with a client app by using the Speech SDK.
+* [Set up continuous deployment](how-to-custom-commands-deploy-cicd.md) for your Custom Commands application by using Azure DevOps.
+
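To make the integration step above concrete, here is an illustrative C# sketch only (the application ID, key, and region are placeholders, and this is just one possible client setup) of connecting a client app to a Custom Commands application with the Speech SDK:

```
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Dialog;

class CustomCommandsClient
{
    static async Task Main()
    {
        // Placeholders: your Custom Commands application ID, Speech resource key, and region.
        var config = CustomCommandsConfig.FromSubscription("<app-id>", "<speech-key>", "<region>");
        config.Language = "en-us";

        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
        using var connector = new DialogServiceConnector(config, audioConfig);

        connector.Recognized += (s, e) => Console.WriteLine($"Recognized: {e.Result.Text}");
        connector.ActivityReceived += (s, e) =>
        {
            // The Custom Commands response (including the text to speak) arrives as a JSON activity.
            Console.WriteLine($"Activity: {e.Activity}");
        };

        await connector.ConnectAsync();
        await connector.ListenOnceAsync(); // capture one utterance, for example "turn on the tv"
    }
}
```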
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-use-codec-compressed-audio-input-streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-codec-compressed-audio-input-streams.md
@@ -30,7 +30,7 @@ Android | Java | [1.14.4](https://gstreamer.freedesktop.org/data/pkg/android/1.
[!INCLUDE [supported-audio-formats](includes/supported-audio-formats.md)]
-## Prerequisites
+## GStreamer required to handle compressed audio
::: zone pivot="programming-language-csharp"
[!INCLUDE [prerequisites](includes/how-to/compressed-audio-input/csharp/prerequisites.md)]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/how-to/compressed-audio-input/csharp/prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/how-to/compressed-audio-input/csharp/prerequisites.md
@@ -6,5 +6,5 @@ ms.date: 03/09/2020
ms.author: trbye ---
-Handling compressed audio is implemented using [GStreamer](https://gstreamer.freedesktop.org). For licensing reasons GStreamer binaries are not compiled and linked with the Speech SDK. Developers need to install several dependencies and plugins, see [Installing on Windows](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c) or [Installing on Linux](https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c). GStreamer binaries need to be in the system path, so that the Speech SDK can load the binaries during runtime. If the Speech SDK is able to find `libgstreamer-1.0-0.dll` during runtime, it means the gstreamer binaries are in the system path.
+Handling compressed audio is implemented by using [GStreamer](https://gstreamer.freedesktop.org). For licensing reasons, GStreamer binaries are not compiled and linked with the Speech SDK. Developers need to install several dependencies and plugins; see [Installing on Windows](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c) or [Installing on Linux](https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c). GStreamer binaries need to be in the system path so that the Speech SDK can load them at runtime. For example, on Windows, if the Speech SDK is able to find `libgstreamer-1.0-0.dll` at runtime, the GStreamer binaries are in the system path.
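As an illustrative sketch only (not part of this include; the key, region, and file name are placeholders), compressed MP3 input is typically wired up with the Speech SDK in C# along these lines once GStreamer is on the system path:

```
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class CompressedAudioExample
{
    static async Task Main()
    {
        // Placeholders: substitute your own Speech resource key, region, and MP3 file.
        var speechConfig = SpeechConfig.FromSubscription("<speech-key>", "<region>");

        // Declaring a compressed container format is what makes the SDK route the stream through GStreamer.
        var streamFormat = AudioStreamFormat.GetCompressedFormat(AudioStreamContainerFormat.MP3);
        using var pushStream = AudioInputStream.CreatePushStream(streamFormat);
        using var audioConfig = AudioConfig.FromStreamInput(pushStream);
        using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

        // Feed the raw compressed bytes into the push stream; GStreamer handles the decoding.
        pushStream.Write(File.ReadAllBytes("sample.mp3"));
        pushStream.Close();

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
    }
}
```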
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/how-to/compressed-audio-input/python/prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/how-to/compressed-audio-input/python/prerequisites.md
@@ -6,5 +6,5 @@ ms.date: 03/09/2020
ms.author: amishu ---
-Handling compressed audio is implemented using [GStreamer](https://gstreamer.freedesktop.org). For licensing reasons GStreamer binaries are not compiled and linked with the Speech SDK. Developers need to install several dependencies and plugins, see [Installing on Windows](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c) or [Installing on Linux](https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c). GStreamer binaries need to be in the system path, so that the Speech SDK can load the binaries during runtime. If the Speech SDK is able to find `libgstreamer-1.0-0.dll` during runtime, it means the gstreamer binaries are in the system path.
+Handling compressed audio is implemented by using [GStreamer](https://gstreamer.freedesktop.org). For licensing reasons, GStreamer binaries are not compiled and linked with the Speech SDK. Developers need to install several dependencies and plugins; see [Installing on Windows](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c) or [Installing on Linux](https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c). GStreamer binaries need to be in the system path so that the Speech SDK can load them at runtime. For example, on Windows, if the Speech SDK is able to find `libgstreamer-1.0-0.dll` at runtime, the GStreamer binaries are in the system path.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/supported-audio-formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/supported-audio-formats.md
@@ -6,7 +6,8 @@ ms.date: 03/16/2020
ms.author: trbye ---
-The default audio streaming format is WAV (16kHz or 8kHz, 16-bit, and mono PCM). Outside of WAV / PCM, the compressed input formats listed below are also supported. [Additional configuration](../how-to-use-codec-compressed-audio-input-streams.md) is needed to enable the formats listed below.
+The default audio streaming format is WAV (16 kHz or 8 kHz, 16-bit, and mono PCM). Outside of WAV / PCM, the compressed input formats listed below are also supported
+using GStreamer.
- MP3
- OPUS/OGG
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/speech-container-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-faq.md
@@ -164,7 +164,7 @@ StatusCode: InvalidArgument,
Details: Voice does not match. ```
-**Answer 2:** You need to provide the correct voice name in the request, which is case-sensitive. Refer to the full service name mapping. You have to use `en-US-JessaRUS`, as `en-US-JessaNeural` is not available right now in container version of text-to-speech.
+**Answer 2:** You need to provide the correct voice name in the request, which is case-sensitive. Refer to the full service name mapping.
**Error 3:**
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/speech-container-howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-howto.md
@@ -308,6 +308,11 @@ This command:
* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
* Automatically removes the container after it exits. The container image is still available on the host computer.
+> [!NOTE]
+> Containers support compressed audio input to the Speech SDK by using GStreamer.
+> To install GStreamer in a container,
+> follow the Linux instructions for GStreamer in [Use codec compressed audio input with the Speech SDK](how-to-use-codec-compressed-audio-input-streams.md).
+
#### Analyze sentiment on the speech-to-text output
Starting in v2.6.0 of the speech-to-text container, you should use TextAnalytics 3.0 API endpoint instead of the preview one. For example
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-get-started-docker-cli.md
@@ -3,7 +3,7 @@ title: Push & pull Docker image
description: Push and pull Docker images to a private container registry in Azure using the Docker CLI ms.topic: article ms.date: 01/23/2019
-ms.custom: "seodec18, H1Hack27Feb2017, devx-track-azurecli"
+ms.custom: "seodec18, H1Hack27Feb2017"
--- # Push your first image to a private Docker container registry using the Docker CLI
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-get-started-portal.md
@@ -3,7 +3,7 @@ title: Quickstart - Create registry in portal
description: Quickly learn to create a private Azure container registry using the Azure portal. ms.topic: quickstart ms.date: 08/04/2020
-ms.custom: "seodec18, mvc, devx-track-azurecli"
+ms.custom: "seodec18, mvc"
--- # Quickstart: Create an Azure container registry using the Azure portal
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-tutorial-prepare-registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-tutorial-prepare-registry.md
@@ -3,7 +3,7 @@ title: Tutorial - Create geo-replicated registry
description: Create an Azure container registry, configure geo-replication, prepare a Docker image, and deploy it to the registry. Part one of a three-part series. ms.topic: tutorial ms.date: 06/30/2020
-ms.custom: "seodec18, mvc, devx-track-azurecli"
+ms.custom: "seodec18, mvc"
--- # Tutorial: Prepare a geo-replicated Azure container registry
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/troubleshoot-changefeed-functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-changefeed-functions.md
@@ -4,7 +4,7 @@ description: Common issues, workarounds, and diagnostic steps, when using the Az
author: ealsur ms.service: cosmos-db ms.subservice: cosmosdb-sql
-ms.date: 03/13/2020
+ms.date: 12/29/2020
ms.author: maquaran ms.topic: troubleshooting ms.reviewer: sngun
@@ -80,17 +80,19 @@ The concept of a "change" is an operation on a document. The most common scenari
### Some changes are missing in my Trigger
-If you find that some of the changes that happened in your Azure Cosmos container are not being picked up by the Azure Function, there is an initial investigation step that needs to take place.
+If you find that some of the changes that happened in your Azure Cosmos container are not being picked up by the Azure Function, or that some changes are missing in the destination when you copy them, follow the steps below.
When your Azure Function receives the changes, it often processes them and could, optionally, send the result to another destination. When you are investigating missing changes, make sure you **measure which changes are being received at the ingestion point** (when the Azure Function starts), not on the destination. If some changes are missing on the destination, this could mean that some error is happening during the Azure Function execution after the changes were received.
-In this scenario, the best course of action is to add `try/catch` blocks in your code and inside the loops that might be processing the changes, to detect any failure for a particular subset of items and handle them accordingly (send them to another storage for further analysis or retry).
+In this scenario, the best course of action is to add `try/catch` blocks in your code and inside the loops that might be processing the changes, to detect any failure for a particular subset of items and handle them accordingly (send them to another storage for further analysis or retry).
> [!NOTE]
> The Azure Functions trigger for Cosmos DB, by default, won't retry a batch of changes if there was an unhandled exception during your code execution. This means that the changes did not arrive at the destination because your code failed to process them.
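As an illustrative sketch only (assuming the Microsoft.Azure.WebJobs.Extensions.CosmosDB trigger binding, with placeholder database, container, and connection setting names), a per-item `try/catch` inside the processing loop could look like this:

```
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ChangeFeedProcessor
{
    [FunctionName("ChangeFeedProcessor")]
    public static async Task Run(
        [CosmosDBTrigger(
            databaseName: "SampleDb",              // placeholder
            collectionName: "MonitoredContainer",  // placeholder
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases")] IReadOnlyList<Document> input,
        ILogger log)
    {
        foreach (var document in input)
        {
            try
            {
                // Process or forward the change to the destination here.
                await SendToDestinationAsync(document);
            }
            catch (Exception ex)
            {
                // Handle the failing item individually: log it, retry it, or
                // park it in another store for later analysis.
                log.LogError(ex, "Failed to process document {Id}", document.Id);
            }
        }
    }

    // Placeholder for whatever the function does with each change.
    private static Task SendToDestinationAsync(Document document) => Task.CompletedTask;
}
```

With this pattern, a failing item is handled individually instead of silently failing the whole batch.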
+If the destination is another Cosmos container and you are performing Upsert operations to copy the items, **verify that the Partition Key Definition on both the monitored and destination container are the same**. Upsert operations could be saving multiple source items as one in the destination because of this configuration difference.
+
If you find that some changes were not received at all by your trigger, the most common scenario is that there is **another Azure Function running**. It could be another Azure Function deployed in Azure, or an Azure Function running locally on a developer's machine, that has **exactly the same configuration** (same monitored and lease containers), and this Azure Function is stealing a subset of the changes you would expect your Azure Function to process. Additionally, you can validate this scenario if you know how many Azure Function App instances you have running. If you inspect your leases container and count the lease items within it, the distinct values of the `Owner` property in them should be equal to the number of instances of your Function App. If there are more owners than known Azure Function App instances, the extra owners are the ones "stealing" the changes.
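For example, a rough sketch (using the Azure Cosmos DB .NET SDK, with placeholder connection, database, and container names) of counting the distinct `Owner` values in a leases container could look like this:

```
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

class LeaseOwnerCount
{
    static async Task Main()
    {
        // Placeholders: your Cosmos DB connection string, database, and lease container names.
        using var client = new CosmosClient("<connection-string>");
        var leases = client.GetContainer("SampleDb", "leases");

        var owners = new HashSet<string>();
        var iterator = leases.GetItemQueryIterator<dynamic>(
            "SELECT c.Owner FROM c WHERE IS_DEFINED(c.Owner)");

        while (iterator.HasMoreResults)
        {
            foreach (var lease in await iterator.ReadNextAsync())
            {
                owners.Add((string)lease.Owner);
            }
        }

        // Compare this number with the known count of Function App instances.
        Console.WriteLine($"Distinct lease owners: {owners.Count}");
    }
}
```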
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/quick-create-budget-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/quick-create-budget-template.md
@@ -8,7 +8,7 @@ ms.service: cost-management-billing
ms.subservice: cost-management ms.topic: quickstart ms.date: 07/28/2020
-ms.custom: subject-armqs, devx-track-azurecli
+ms.custom: subject-armqs
--- # Quickstart: Create a budget with an ARM template
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/create-subscription-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/create-subscription-template.md
@@ -8,7 +8,7 @@ ms.topic: how-to
ms.date: 11/17/2020 ms.reviewer: andalmia ms.author: banders
-ms.custom: devx-track-azurepowershell, devx-track-azurecli
+ms.custom: devx-track-azurepowershell
--- # Programmatically create Azure subscriptions with an Azure Resource Manager template
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
@@ -5,7 +5,7 @@ services: data-factory
author: linda33wj ms.service: data-factory ms.topic: troubleshooting
-ms.date: 12/18/2020
+ms.date: 12/30/2020
ms.author: jingwang ms.reviewer: craigg ms.custom: has-adal-ref
@@ -19,7 +19,7 @@ This article explores common troubleshooting methods for connectors in Azure Dat
## Azure Blob Storage
-### Error code: AzureBlobOperationFailed
+### Error code: AzureBlobOperationFailed
- **Message**: `Blob operation Failed. ContainerName: %containerName;, path: %path;.`
@@ -28,24 +28,9 @@ This article explores common troubleshooting methods for connectors in Azure Dat
- **Recommendation**: Check the error in details. Refer to blob help document: https://docs.microsoft.com/rest/api/storageservices/blob-service-error-codes. Contact storage team if need help.
-### Error code: AzureBlobServiceNotReturnExpectedDataLength
--- **Message**: `Error occurred when trying to fetch the blob '%name;'. This could be a transient issue and you may rerun the job. If it fails again continuously, contact customer support.`--
-### Error code: AzureBlobNotSupportMultipleFilesIntoSingleBlob
--- **Message**: `Transferring multiple files into a single Blob is not supported. Currently only single file source is supported.`--
-### Error code: AzureStorageOperationFailedConcurrentWrite
--- **Message**: `Error occurred when trying to upload a file. It's possible because you have multiple concurrent copy activities runs writing to the same file '%name;'. Check your ADF configuration.`-- ### Invalid property during copy activity -- **Message**: `Copy activity <Activity Name> has an invalid "source" property. The source type is not compatible with the dataset <Dataset Name> and its linked service <Linked Service Name>. Please verify your input against.`
+- **Message**: `Copy activity <Activity Name> has an invalid "source" property. The source type is not compatible with the dataset <Dataset Name> and its linked service <Linked Service Name>. Please verify your input against.`
- **Cause**: The type defined in dataset is inconsistent with the source/sink type defined in copy activity.
@@ -74,7 +59,6 @@ This article explores common troubleshooting methods for connectors in Azure Dat
- **Cause**: There are two possible causes: - If you use **Insert** as write behavior, this error means you source data have rows/objects with same ID.- - If you use **Upsert** as write behavior and you set another unique key to the container, this error means you source data have rows/objects with different IDs but same value for the defined unique key. - **Resolution**:
@@ -96,9 +80,8 @@ Cosmos DB calculates RU from [here](../cosmos-db/request-units.md#request-unit-c
- **Resolution**: Here are two solutions:
- 1. **Increase the container RU** to bigger value in Cosmos DB, which will improve the copy activity performance, though incur more cost in Cosmos DB.
-
- 2. Decrease **writeBatchSize** to smaller value (such as 1000) and set **parallelCopies** to smaller value such as 1, which will make copy run performance worse than current but will not incur more cost in Cosmos DB.
+    - **Increase the container RU** to a bigger value in Cosmos DB, which will improve the copy activity performance, though it incurs more cost in Cosmos DB.
+    - Decrease **writeBatchSize** to a smaller value (such as 1000) and set **parallelCopies** to a smaller value such as 1, which will make the copy run slower than it currently does but will not incur more cost in Cosmos DB.
### Column missing in column mapping
@@ -126,70 +109,15 @@ Cosmos DB calculates RU from [here](../cosmos-db/request-units.md#request-unit-c
- **Resolution**: In MongoDB connection string, add option "**uuidRepresentation=standard**". For more information, see [MongoDB connection string](connector-mongodb.md#linked-service-properties).
+## Azure Cosmos DB (SQL API)
-## Azure Data Lake Storage Gen2
-
-### Error code: AdlsGen2OperationFailed
--- **Message**: `ADLS Gen2 operation failed for: %adlsGen2Message;.%exceptionData;.`--- **Cause**: ADLS Gen2 throws the error indicating operation failed.--- **Recommendation**: Check the detailed error message thrown by ADLS Gen2. If it's caused by transient failure, please retry. If you need further help, please contact Azure Storage support and provide the request ID in error message.--- **Cause**: When the error message contains 'Forbidden', the service principal or managed identity you use may not have enough permission to access the ADLS Gen2.--- **Recommendation**: Refer to the help document: https://docs.microsoft.com/azure/data-factory/connector-azure-data-lake-storage#service-principal-authentication.--- **Cause**: When the error message contains 'InternalServerError', the error is returned by ADLS Gen2.--- **Recommendation**: It may be caused by transient failure, please retry. If the issue persists, please contact Azure Storage support and provide the request ID in error message.--
-### Error code: AdlsGen2InvalidUrl
--- **Message**: `Invalid url '%url;' provided, expecting http[s]://<accountname>.dfs.core.windows.net.`--
-### Error code: AdlsGen2InvalidFolderPath
--- **Message**: `The folder path is not specified. Cannot locate the file '%name;' under the ADLS Gen2 account directly. Please specify the folder path instead.`-
+### Error code: CosmosDbSqlApiOperationFailed
-### Error code: AdlsGen2OperationFailedConcurrentWrite
+- **Message**: `CosmosDbSqlApi operation Failed. ErrorMessage: %msg;.`
-- **Message**: `Error occurred when trying to upload a file. It's possible because you have multiple concurrent copy activities runs writing to the same file '%name;'. Check your ADF configuration.`
+- **Cause**: CosmosDbSqlApi operation hit problem.
-
-### Error code: AdlsGen2TimeoutError
--- **Message**: `Request to ADLS Gen2 account '%account;' met timeout error. It is mostly caused by the poor network between the Self-hosted IR machine and the ADLS Gen2 account. Check the network to resolve such error.`--
-### Request to ADLS Gen2 account met timeout error
--- **Message**: Error Code = `UserErrorFailedBlobFSOperation`, Error Message = `BlobFS operation failed for: A task was canceled`.--- **Cause**: The issue is caused by the ADLS Gen2 sink timeout error, which mostly happens on the self-hosted IR machine.--- **Recommendation**: -
- 1. Place your self-hosted IR machine and target ADLS Gen2 account in the same region if possible. This can avoid random timeout error and have better performance.
-
- 1. Check whether there is any special network setting like ExpressRoute and ensure the network has enough bandwidth. It is suggested to lower the self-hosted IR concurrent jobs setting when the overall bandwidth is low, through which can avoid network resource competition across multiple concurrent jobs.
-
- 1. Use smaller block size for non-binary copy to mitigate such timeout error if the file size is moderate or small. Please refer to [Blob Storage Put Block](https://docs.microsoft.com/rest/api/storageservices/put-block).
-
- To specify the custom block size, you can edit the property in .json editor:
- ```
- "sink": {
- "type": "DelimitedTextSink",
- "storeSettings": {
- "type": "AzureBlobFSWriteSettings",
- "blockSizeInMB": 8
- }
- }
- ```
+- **Recommendation**: Check the error details. Refer to the [Cosmos DB help document](https://docs.microsoft.com/azure/cosmos-db/troubleshoot-dot-net-sdk). Contact the Cosmos DB team if you need help.
## Azure Data Lake Storage Gen1
@@ -199,7 +127,7 @@ Cosmos DB calculates RU from [here](../cosmos-db/request-units.md#request-unit-c
- **Symptoms**: Copy activity fails with the following error: ```
- Message: Failure happened on 'Sink' side. ErrorCode=UserErrorFailedFileOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Upload file failed at path STAGING/PLANT/INDIARENEWABLE/LiveData/2020/01/14\\20200114-0701-oem_gibtvl_mannur_data_10min.csv.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.WebException,Message=The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.,Source=System,''Type=System.Security.Authentication.AuthenticationException,Message=The remote certificate is invalid according to the validation procedure.,Source=System,'.
+ Message: ErrorCode = `UserErrorFailedFileOperation`, Error Message = `The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel`.
``` - **Cause**: The certificate validation failed during TLS handshake.
@@ -235,7 +163,62 @@ Cosmos DB calculates RU from [here](../cosmos-db/request-units.md#request-unit-c
busy to handle requests, it returns an HTTP error 503. - **Resolution**: Rerun the copy activity after several minutes.++
+## Azure Data Lake Storage Gen2
+
+### Error code: ADLSGen2OperationFailed
+
+- **Message**: `ADLS Gen2 operation failed for: %adlsGen2Message;.%exceptionData;.`
+
+- **Cause**: ADLS Gen2 throws the error indicating operation failed.
+
+- **Recommendation**: Check the detailed error message thrown by ADLS Gen2. If it's caused by transient failure, please retry. If you need further help, please contact Azure Storage support and provide the request ID in error message.
+
+- **Cause**: When the error message contains 'Forbidden', the service principal or managed identity you use may not have enough permission to access the ADLS Gen2.
+
+- **Recommendation**: Refer to the help document: https://docs.microsoft.com/azure/data-factory/connector-azure-data-lake-storage#service-principal-authentication.
+
+- **Cause**: When the error message contains 'InternalServerError', the error is returned by ADLS Gen2.
+
+- **Recommendation**: It may be caused by transient failure, please retry. If the issue persists, please contact Azure Storage support and provide the request ID in error message.
+
+### Request to ADLS Gen2 account met timeout error
+
+- **Message**: Error Code = `UserErrorFailedBlobFSOperation`, Error Message = `BlobFS operation failed for: A task was canceled`.
+
+- **Cause**: The issue is caused by the ADLS Gen2 sink timeout error, which mostly happens on the self-hosted IR machine.
+
+- **Recommendation**:
+
+ - Place your self-hosted IR machine and target ADLS Gen2 account in the same region if possible. This can avoid random timeout error and have better performance.
+
+ - Check whether there is any special network setting like ExpressRoute and ensure the network has enough bandwidth. It is suggested to lower the self-hosted IR concurrent jobs setting when the overall bandwidth is low, through which can avoid network resource competition across multiple concurrent jobs.
+
+ - Use smaller block size for non-binary copy to mitigate such timeout error if the file size is moderate or small. Refer to [Blob Storage Put Block](https://docs.microsoft.com/rest/api/storageservices/put-block).
+
+ To specify the custom block size, you can edit the property in .json editor:
+ ```
+ "sink": {
+ "type": "DelimitedTextSink",
+ "storeSettings": {
+ "type": "AzureBlobFSWriteSettings",
+ "blockSizeInMB": 8
+ }
+ }
+ ```
+
+## Azure File Storage
+
+### Error code: AzureFileOperationFailed
+
+- **Message**: `Azure File operation Failed. Path: %path;. ErrorMessage: %msg;.`
+
+- **Cause**: Azure File storage operation hit problem.
+
+- **Recommendation**: Check the error details. Refer to the Azure Files help document: https://docs.microsoft.com/rest/api/storageservices/file-service-error-codes. Contact the storage team if you need help.
+ ## Azure Synapse Analytics/Azure SQL Database/SQL Server
@@ -243,13 +226,29 @@ busy to handle requests, it returns an HTTP error 503.
- **Message**: `Cannot connect to SQL Database: '%server;', Database: '%database;', User: '%user;'. Check the linked service configuration is correct, and make sure the SQL Database firewall allows the integration runtime to access.`
+- **Cause**: Azure SQL: If the error message contains "SqlErrorNumber=47073", it means public network access is denied in connectivity setting.
+
+- **Recommendation**: On Azure SQL firewall, set "Deny public network access" option to "No". Learn more from https://docs.microsoft.com/azure/azure-sql/database/connectivity-settings#deny-public-network-access.
+
+- **Cause**: Azure SQL: If the error message contains SQL error code, like "SqlErrorNumber=[errorcode]", please refer to Azure SQL troubleshooting guide.
+
+- **Recommendation**: Learn more from https://docs.microsoft.com/azure/azure-sql/database/troubleshoot-common-errors-issues.
+
+- **Cause**: Check if port 1433 is in the firewall allow list.
+
+- **Recommendation**: Follow with this reference doc: https://docs.microsoft.com/sql/sql-server/install/configure-the-windows-firewall-to-allow-sql-server-access#ports-used-by-.
+ - **Cause**: If the error message contains "SqlException", SQL Database throws the error indicating some specific operation failed. - **Recommendation**: Search by SQL error code in this reference doc for more details: https://docs.microsoft.com/sql/relational-databases/errors-events/database-engine-events-and-errors. If you need further help, contact Azure SQL support.
+- **Cause**: If this is a transient issue (for example, an unstable network connection), add a retry in the activity policy to mitigate it.
+
+- **Recommendation**: Follow this reference doc: https://docs.microsoft.com/azure/data-factory/concepts-pipelines-activities#activity-policy.
+ - **Cause**: If the error message contains "Client with IP address '...' is not allowed to access the server", and you are trying to connect to Azure SQL Database, usually it is caused by Azure SQL Database firewall issue. -- **Recommendation**: In logical SQL server firewall configuration, enable "Allow Azure services and resources to access this server" option. Reference doc: https://docs.microsoft.com/azure/sql-database/sql-database-firewall-configure.
+- **Recommendation**: In Azure SQL Server firewall configuration, enable "Allow Azure services and resources to access this server" option. Reference doc: https://docs.microsoft.com/azure/sql-database/sql-database-firewall-configure.
### Error code: SqlOperationFailed
@@ -271,7 +270,6 @@ busy to handle requests, it returns an HTTP error 503.
- **Recommendation**: To identify which row encounters the problem, please enable fault tolerance feature on copy activity, which can redirect problematic row(s) to the storage for further investigation. Reference doc: https://docs.microsoft.com/azure/data-factory/copy-activity-fault-tolerance. - ### Error code: SqlUnauthorizedAccess - **Message**: `Cannot connect to '%connectorName;'. Detail Message: '%message;'`
@@ -339,11 +337,6 @@ busy to handle requests, it returns an HTTP error 503.
- **Recommendation**: Verify the column in the query, 'structure' in dataset, and 'mappings' in activity.
-### Error code: SqlColumnNameMismatchByCaseSensitive
--- **Message**: `Column '%column;' in DataSet '%dataSetName;' cannot be found in physical SQL Database. Column matching is case-sensitive. Column '%columnInTable;' appears similar. Check the DataSet(s) configuration to proceed further.`-- ### Error code: SqlBatchWriteTimeout - **Message**: `Timeouts in SQL write operation.`
@@ -384,11 +377,6 @@ busy to handle requests, it returns an HTTP error 503.
- **Recommendation**: Remote server closed the SQL connection. Retry. If problem repro, contact Azure SQL support.
-### Error code: SqlCreateTableFailedUnsupportedType
--- **Message**: `Type '%type;' in source side cannot be mapped to a type that supported by sink side(column name:'%name;') in autocreate table.`-- ### Error message: Conversion failed when converting from a character string to uniqueidentifier - **Symptoms**: When you copy data from tabular data source (such as SQL Server) into Azure Synapse Analytics using staged copy and PolyBase, you hit the following error:
@@ -451,9 +439,9 @@ busy to handle requests, it returns an HTTP error 503.
- Time -> 12 bytes - Tinyint -> 1 byte -- **Resolution**: Reduce column width to be less than 1 MB--- Or use bulk insert approach by disabling Polybase
+- **Resolution**:
+ - Reduce column width to be less than 1 MB.
+ - Or use bulk insert approach by disabling Polybase.
### Error message: The condition specified using HTTP conditional header(s) is not met
@@ -475,17 +463,17 @@ busy to handle requests, it returns an HTTP error 503.
- **Cause**: The root cause of the issue is mostly triggered by the bottleneck of Azure SQL side. Following are some possible causes:
- 1. Azure DB tier is not high enough.
+ - Azure DB tier is not high enough.
- 1. Azure DB DTU usage is close to 100%. You can [monitor the performance](https://docs.microsoft.com/azure/azure-sql/database/monitor-tune-overview) and consider to upgrade the DB tier.
+ - Azure DB DTU usage is close to 100%. You can [monitor the performance](https://docs.microsoft.com/azure/azure-sql/database/monitor-tune-overview) and consider to upgrade the DB tier.
- 1. Indexes are not set properly. Please remove all the indexes before data load and recreate them after load complete.
+ - Indexes are not set properly. Remove all the indexes before data load and recreate them after load complete.
- 1. WriteBatchSize is not large enough to fit schema row size. Please try to enlarge the property for the issue.
+ - WriteBatchSize is not large enough to fit schema row size. Try to enlarge the property for the issue.
- 1. Instead of bulk inset, stored procedure is being used, which is expected to have worse performance.
+      - Instead of bulk insert, a stored procedure is being used, which is expected to have worse performance.
-- **Resolution**: Please refer to the TSG for [copy activity performance](https://docs.microsoft.com/azure/data-factory/copy-activity-performance-troubleshooting)
+- **Resolution**: Refer to the TSG for [copy activity performance](https://docs.microsoft.com/azure/data-factory/copy-activity-performance-troubleshooting)
### Performance tier is low and leads to copy failure
@@ -494,7 +482,7 @@ busy to handle requests, it returns an HTTP error 503.
- **Cause**: Azure SQL s1 is being used, which hit IO limits in such case. -- **Resolution**: Please upgrade the Azure SQL performance tier to fix the issue.
+- **Resolution**: Upgrade the Azure SQL performance tier to fix the issue.
### SQL Table cannot be found
@@ -503,22 +491,47 @@ busy to handle requests, it returns an HTTP error 503.
- **Cause**: The current SQL account does not have enough permission to execute requests issued by .NET SqlBulkCopy.WriteToServer. -- **Resolution**: Please switch to a more privileged SQL account.
+- **Resolution**: Switch to a more privileged SQL account.
-### String or binary data would be truncated
+### Error message: String or binary data would be truncated
- **Symptoms**: Error occurred when copying data into On-prem/Azure SQL Server table: - **Cause**: Cx Sql table schema definition has one or more columns with less length than expectation. -- **Resolution**: Please try following steps to fix the issue:
+- **Resolution**: Try the following steps to fix the issue:
+
+ 1. Apply SQL sink [fault tolerance](https://docs.microsoft.com/azure/data-factory/copy-activity-fault-tolerance), especially "redirectIncompatibleRowSettings" to troubleshoot which rows have the issue.
+
+ > [!NOTE]
+   > Note that fault tolerance might introduce additional execution time, which could lead to higher cost.
+
+ 2. Double check the redirected data with SQL table schema column length to see which column(s) need to be updated.
+
+ 3. Update table schema accordingly.
++
+## Azure Table Storage
+
+### Error code: AzureTableDuplicateColumnsFromSource
+
+- **Message**: `Duplicate columns with same name '%name;' are detected from source. This is NOT supported by Azure Table Storage sink`
+
+- **Cause**: This is common for SQL queries that use joins, or for unstructured CSV files.
+
+- **Recommendation**: Double-check the source columns and fix them accordingly.
+
- 1. Apply [fault tolerance](https://docs.microsoft.com/azure/data-factory/copy-activity-fault-tolerance), especially "redirectIncompatibleRowSettings" to troubleshoot which rows have the issue.
+## DB2
- 1. Double check the redirected data with SQL table schema column length to see which column(s) need to be updated.
+### Error code: DB2DriverRunFailed
- 1. Update table schema accordingly.
+- **Message**: `Error thrown from driver. Sql code: '%code;'`
+
+- **Cause**: If the error message contains "SQLSTATE=51002 SQLCODE=-805", please refer to the Tip in this document: https://docs.microsoft.com/azure/data-factory/connector-db2#linked-service-properties
+
+- **Recommendation**: Try to set "NULLID" in property "packageCollection"
## Delimited Text Format
@@ -534,7 +547,7 @@ busy to handle requests, it returns an HTTP error 503.
### Error code: DelimitedTextMoreColumnsThanDefined -- **Message**: `Error found when processing '%function;' source '%name;' with row number %rowCount;: found more columns than expected column count: %columnCount;.`
+- **Message**: `Error found when processing '%function;' source '%name;' with row number %rowCount;: found more columns than expected column count: %expectedColumnCount;.`
- **Cause**: The problematic row's column count is larger than the first row's column count. It may be caused by data issue or incorrect column delimiter/quote char settings.
@@ -549,40 +562,60 @@ busy to handle requests, it returns an HTTP error 503.
- **Recommendation**: Make sure the files under the given folder have identical schema.
-### Error code: DelimitedTextIncorrectRowDelimiter
+## Dynamics 365/Common Data Service/Dynamics CRM
-- **Message**: `The specified row delimiter %rowDelimiter; is incorrect. Cannot detect a row after parse %size; MB data.`
+### Error code: DynamicsCreateServiceClientError
+- **Message**: `This is a transient issue on dynamics server side. Try to rerun the pipeline.`
-### Error code: DelimitedTextTooLargeColumnCount
+- **Cause**: This is a transient issue on dynamics server side.
-- **Message**: `Column count reaches limitation when deserializing csv file. Maximum size is '%size;'. Check the column delimiter and row delimiter provided. (Column delimiter: '%columnDelimiter;', Row delimiter: '%rowDelimiter;')`
+- **Recommendation**: Rerun the pipeline. If it keeps failing, try to reduce the parallelism. If it still fails, contact Dynamics support.
-### Error code: DelimitedTextInvalidSettings
+### Columns are missing when previewing/importing schema
-- **Message**: `%settingIssues;`
+- **Symptoms**: Some of the columns turn out to be missing when importing schema or previewing data. Error message: `The valid structure information (column name and type) are required for Dynamics source.`
+- **Cause**: This issue is by design, because ADF is not able to show columns that have no value in the first 10 records. Make sure the columns you added are in the correct format.
+- **Recommendation**: Manually add the columns in the mapping tab.
-## Dynamics 365/Common Data Service/Dynamics CRM
-### Error code: DynamicsCreateServiceClientError
+### Error code: DynamicsMissingTargetForMultiTargetLookupField
-- **Message**: `This is a transient issue on dynamics server side. Try to rerun the pipeline.`
+- **Message**: `Cannot find the target column for multi-target lookup field: '%fieldName;'.`
-- **Cause**: This is a transient issue on dynamics server side.
+- **Cause**: The target column does not exist in source or in column mapping.
-- **Recommendation**: Rerun the pipeline. If keep failing, try to reduce the parallelism. If still fail, please contact dynamics support.
+- **Recommendation**: 1. Make sure the source contains the target column. 2. Add the target column in the column mapping. Ensure that the sink column follows the pattern "{fieldName}@EntityReference".
-### Columns are missing when previewing/importing schema
+### Error code: DynamicsInvalidTargetForMultiTargetLookupField
-- **Symptoms**: Some of the columns turn out to be missing when importing schema or previewing data. Error message: `The valid structure information (column name and type) are required for Dynamics source.`
+- **Message**: `The provided target: '%targetName;' is not a valid target of field: '%fieldName;'. Valid targets are: '%validTargetNames;"`
-- **Cause**: This issue is basically by-design, as ADF is not able to show columns that have no value in the first 10 records. Please make sure the columns you added is with correct format.
+- **Cause**: A wrong entity name is provided as target entity of a multi-target lookup field.
-- **Recommendation**: Manually add the columns in mapping tab.
+- **Recommendation**: Provide a valid entity name for the multi-target lookup field.
++
+### Error code: DynamicsInvalidTypeForMultiTargetLookupField
+
+- **Message**: `The provided target type is not a valid string. Field: '%fieldName;'.`
+
+- **Cause**: The value in target column is not a string
+
+- **Recommendation**: Provide a valid string in the multi-target lookup target column.
++
+### Error code: DynamicsFailedToRequetServer
+
+- **Message**: `The dynamics server or the network is experiencing issues. Check network connectivity or check dynamics server log for more details.`
+
+- **Cause**: The Dynamics server is unstable or inaccessible, or the network is experiencing issues.
+
+- **Recommendation**: Check network connectivity or check the Dynamics server log for more details. Contact Dynamics support for further help.
## Excel Format
@@ -591,88 +624,95 @@ busy to handle requests, it returns an HTTP error 503.
- **Symptoms**:
- 1. When you create Excel dataset and import schema from connection/store, preview data, list or refresh worksheets, you may hit timeout error if the excel file is large in size.
+ - When you create Excel dataset and import schema from connection/store, preview data, list, or refresh worksheets, you may hit timeout error if the excel file is large in size.
- 1. When you use copy activity to copy data from large Excel file (>= 100MB) into other data store, you may experience slow performance or OOM issue.
+ - When you use copy activity to copy data from large Excel file (>= 100 MB) into other data store, you may experience slow performance or OOM issue.
- **Cause**:
- 1. For operations like importing schema, previewing data and listing worksheets on excel dataset, the timeout is 100s and static. For large Excel file, these operations may not finish within the timeout value.
+ - For operations like importing schema, previewing data, and listing worksheets on excel dataset, the timeout is 100 s and static. For large Excel file, these operations may not finish within the timeout value.
- 2. ADF copy activity reads the whole Excel file into memory then locate the specified worksheet and cells to read data. This behavior is due to the underlying SDK ADF uses.
+    - ADF copy activity reads the whole Excel file into memory, then locates the specified worksheet and cells to read data. This behavior is due to the underlying SDK that ADF uses.
- **Resolution**:
- 1. For importing schema, you can generate a smaller sample file which is a subset of original file, and choose "import schema from sample file" instead of "import schema from connection/store".
+ - For importing schema, you can generate a smaller sample file, which is a subset of original file, and choose "import schema from sample file" instead of "import schema from connection/store".
- 2. For listing worksheet, in the worksheet dropdown, you can click "Edit" and input the sheet name/index instead.
+ - For listing worksheet, in the worksheet dropdown, you can click "Edit" and input the sheet name/index instead.
- 3. To copy large excel file (>100MB) into other store, you can use Data Flow Excel source which sport streaming read and perform better.
+    - To copy a large Excel file (>100 MB) into another store, you can use the Data Flow Excel source, which supports streaming read and performs better.
+
+## FTP
-## HDInsight
+### Error code: FtpFailedToConnectToFtpServer
-### SSL error when ADF linked service using HDInsight ESP cluster
+- **Message**: `Failed to connect to FTP server. Please make sure the provided server informantion is correct, and try again.`
-- **Message**: `Failed to connect to HDInsight cluster: 'ERROR [HY000] [Microsoft][DriverSupport] (1100) SSL certificate verification failed because the certificate is missing or incorrect.`
+- **Cause**: Incorrect linked service type might be used for FTP server, like using SFTP Linked Service to connect to an FTP server.
-- **Cause**: The issue is most likely related with System Trust Store.
+- **Recommendation**: Check the port of the target server. By default FTP uses port 21.
-- **Resolution**: You can navigate to the path **Microsoft Integration Runtime\4.0\Shared\ODBC Drivers\Microsoft Hive ODBC Driver\lib** and open DriverConfiguration64.exe to change the setting.
- ![Uncheck Use System Trust Store](./media/connector-troubleshoot-guide/system-trust-store-setting.png)
+## Http
+### Error code: HttpFileFailedToRead
-## JSON Format
+- **Message**: `Failed to read data from http server. Check the error from http server:%message;`
-### Error code: JsonInvalidArrayPathDefinition
+- **Cause**: This error happens when Azure Data Factory talks to the HTTP server, but the HTTP request operation fails.
-- **Message**: `Error occurred when deserializing source JSON data. Check whether the JsonPath in JsonNodeReference and JsonPathDefintion is valid.`
+- **Recommendation**: Check the HTTP status code and message in the error message, and fix the remote server issue.
-### Error code: JsonEmptyJObjectData
+## Oracle
-- **Message**: `The specified row delimiter %rowDelimiter; is incorrect. Cannot detect a row after parse %size; MB data.`
+### Error code: ArgumentOutOfRangeException
+- **Message**: `Hour, Minute, and Second parameters describe an un-representable DateTime.`
-### Error code: JsonNullValueInPathDefinition
+- **Cause**: In ADF, DateTime values are supported in the range from 0001-01-01 00:00:00 to 9999-12-31 23:59:59. However, Oracle supports wider range of DateTime value (like BC century or min/sec>59), which leads to failure in ADF.
-- **Message**: `Null JSONPath detected in JsonPathDefinition.`
+- **Recommendation**:
+ Run `select dump(<column name>)` to check if the value in Oracle is in ADF's range.
-### Error code: JsonUnsupportedHierarchicalComplexValue
+ If you wish to know the byte sequence in the result, please check https://stackoverflow.com/questions/13568193/how-are-dates-stored-in-oracle.
-- **Message**: `The retrieved type of data %data; with value %value; is not supported yet. Please either remove the targeted column '%name;' or enable skip incompatible row to skip the issue rows.`
+## Orc Format
-### Error code: JsonConflictPartitionDiscoverySchema
+### Error code: OrcJavaInvocationException
-- **Message**: `Conflicting partition column names detected.'%schema;', '%partitionDiscoverySchema;'`
+- **Message**: `An error occurred when invoking java, message: %javaException;.`
+- **Cause**: When the error message contains 'java.lang.OutOfMemory', 'Java heap space' and 'doubleCapacity', usually it's a memory management issue in old version of integration runtime.
-### Error code: JsonInvalidDataFormat
+- **Recommendation**: If you are using Self-hosted Integration Runtime, suggest upgrading to the latest version.
-- **Message**: `Error occurred when deserializing source JSON file '%fileName;'. Check if the data is in valid JSON object format.`
+- **Cause**: When the error message contains 'java.lang.OutOfMemory', the integration runtime doesn't have enough resource to process the file(s).
+- **Recommendation**: Limit the concurrent runs on the integration runtime. For Self-hosted Integration Runtime, scale up to a powerful machine with memory equal to or larger than 8 GB.
-### Error code: JsonInvalidDataMixedArrayAndObject
+- **Cause**: When the error message contains 'NullPointerReference', it is possibly a transient error.
-- **Message**: `Error occurred when deserializing source JSON file '%fileName;'. The JSON format doesn't allow mixed arrays and objects.`
+- **Recommendation**: Retry. If the problem persists, please contact support.
+- **Cause**: When the error message contains 'BufferOverflowException', it is possibly a transient error.
-## Oracle
+- **Recommendation**: Retry. If the problem persists, please contact support.
-### Error code: ArgumentOutOfRangeException
+- **Cause**: When the error message contains "java.lang.ClassCastException:org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast to org.apache.hadoop.io.Text", this is a type conversion issue inside the Java runtime. Usually, it is caused by source data that cannot be handled well in the Java runtime.
-- **Message**: `Hour, Minute, and Second parameters describe an un-representable DateTime.`
+- **Recommendation**: This is data issue. Try to use string instead of char/varchar in orc format data.
-- **Cause**: In ADF, DateTime values are supported in the range from 0001-01-01 00:00:00 to 9999-12-31 23:59:59. However, Oracle supports wider range of DateTime value (like BC century or min/sec>59), which leads to failure in ADF.
+### Error code: OrcDateTimeExceedLimit
-- **Recommendation**:
+- **Message**: `The Ticks value '%ticks;' for the datetime column must be between valid datetime ticks range -621355968000000000 and 2534022144000000000.`
- Please run `select dump(<column name>)` to check if the value in Oracle is in ADF's range.
+- **Cause**: If the datetime value is '0001-01-01 00:00:00', it could be caused by the difference between Julian Calendar and Gregorian Calendar. https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar#Difference_between_Julian_and_proleptic_Gregorian_calendar_dates.
- If you wish to know the byte sequence in the result, please check https://stackoverflow.com/questions/13568193/how-are-dates-stored-in-oracle.
+- **Recommendation**: Check the ticks value and avoid using the datetime value '0001-01-01 00:00:00'.
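For orientation, the following small C# check (an illustrative sketch only; treating the quoted bounds as .NET ticks measured from the Unix epoch is an assumption inferred from the numbers themselves) shows why '0001-01-01 00:00:00' sits exactly at the lower bound:

```
using System;

class OrcTicksRangeCheck
{
    static void Main()
    {
        // Assumption: the limits in the error message are .NET ticks relative to the
        // Unix epoch (1970-01-01), not ticks counted from DateTime.MinValue.
        var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

        // '0001-01-01 00:00:00' has DateTime.Ticks == 0, so relative to the epoch it is
        // exactly -621355968000000000, the lower bound quoted in the message.
        Console.WriteLine(DateTime.MinValue.Ticks - epoch.Ticks); // -621355968000000000

        // Any calendar shift (such as the Julian/Gregorian difference mentioned above)
        // can push such a value outside the range and trigger the error.
        bool InRange(DateTime value) =>
            value.Ticks - epoch.Ticks >= -621355968000000000 &&
            value.Ticks - epoch.Ticks <= 2534022144000000000;

        Console.WriteLine(InRange(new DateTime(2020, 12, 31))); // True
    }
}
```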
## Parquet Format
@@ -817,7 +857,7 @@ busy to handle requests, it returns an HTTP error 503.
- **Cause**:
- The issue could be caused by white spaces or unsupported characters (such as ,;{}()\n\t=) in column name, as Parquet doesn't support such format.
+    The issue could be caused by white spaces or unsupported characters (such as ,;{}()\n\t=) in the column name, because Parquet doesn't support such a format.
For example, column name like *contoso(test)* will parse the type in brackets from [code](https://github.com/apache/parquet-mr/blob/master/parquet-column/src/main/java/org/apache/parquet/schema/MessageTypeParser.java) `Tokenizer st = new Tokenizer(schemaString, " ;{}()\n\t");`. The error will be raised as there is no such "test" type.
@@ -825,15 +865,23 @@ busy to handle requests, it returns an HTTP error 503.
- **Resolution**:
- 1. Double check if there are white spaces in sink column name.
+ - Double check if there are white spaces in sink column name.
- 1. Double check if the first row with white spaces is used as column name.
+ - Double check if the first row with white spaces is used as column name.
- 1. Double check the type OriginalType is supported or not. Try to avoid these special symbols `,;{}()\n\t=`.
+ - Double check the type OriginalType is supported or not. Try to avoid these special symbols `,;{}()\n\t=`.
## REST
+### Error code: RestSinkCallFailed
+
+- **Message**: `Rest Endpoint responded with Failure from server. Check the error from server:%message;`
+
+- **Cause**: This error happens when Azure Data Factory talks to the REST endpoint over the HTTP protocol, and the request operation fails.
+
+- **Recommendation**: Check the HTTP status code and message in the error message, and fix the remote server issue.
+ ### Unexpected network response from REST connector - **Symptoms**: Endpoint sometimes receives unexpected response (400 / 401 / 403 / 500) from REST connector.
@@ -847,7 +895,7 @@ busy to handle requests, it returns an HTTP error 503.
```
If the command returns the same unexpected response, please fix the above parameters with 'curl' until it returns the expected response.
- Also you can use 'curl --help' for more advanced usage of the command.
+   Also, you can use 'curl --help' for more advanced usage of the command.
- If only ADF REST connector returns unexpected response, please contact Microsoft support for further troubleshooting.
@@ -858,75 +906,76 @@ busy to handle requests, it returns an HTTP error 503.
## SFTP
-### Invalid SFTP credential provided for 'SSHPublicKey' authentication type
+#### Error code: SftpOperationFail
-- **Symptoms**: SSH public key authentication is being used while Invalid SFTP credential is provided for 'SshPublicKey' authentication type.
+- **Message**: `Failed to '%operation;'. Check detailed error from SFTP.`
-- **Cause**: This error could be caused by three possible reasons:
+- **Cause**: Sftp operation hit problem.
- 1. Private key content is fetched from AKV/SDK but it is not encoded correctly.
+- **Recommendation**: Check detailed error from SFTP.
- 1. Wrong key content format is chosen.
- 1. Invalid credential or private key content.
+### Error code: SftpRenameOperationFail
-- **Resolution**:
+- **Message**: `Failed to rename the temp file. Your SFTP server doesn't support renaming temp file, please set "useTempFileRename" as false in copy sink to disable uploading to temp file.`
- 1. For **Cause 1**:
+- **Cause**: Your SFTP server doesn't support renaming temp file.
- If private key content is from AKV and original key file can work if customer upload it directly to SFTP linked service
+- **Recommendation**: Set "useTempFileRename" as false in copy sink to disable uploading to temp file.
- Refer to https://docs.microsoft.com/azure/data-factory/connector-sftp#using-ssh-public-key-authentication, the privateKey content is a Base64 encoded SSH private key content.
- Please encode **the whole content of original private key file** with base64 encoding and store the encoded string to AKV. Original private key file is the one that can work on SFTP linked service if you click on Upload from file.
+### Error code: SftpInvalidSftpCredential
- Here's some samples used for generating the string:
+- **Message**: `Invalid Sftp credential provided for '%type;' authentication type.`
- - Use C# code:
- ```
- byte[] keyContentBytes = File.ReadAllBytes(Private Key Path);
- string keyContent = Convert.ToBase64String(keyContentBytes, Base64FormattingOptions.None);
- ```
+- **Cause**: Private key content is fetched from AKV/SDK but it is not encoded correctly.
- - Use Python code:
- ```
- import base64
- rfd = open(r'{Private Key Path}', 'rb')
- keyContent = rfd.read()
- rfd.close()
- print base64.b64encode(Key Content)
- ```
+- **Recommendation**:
- - Use third-party base64 convert tool
+   This can happen when the private key content is fetched from AKV, even though the original key file works if you upload it directly to the SFTP linked service.
- Tools like https://www.base64encode.org/ are recommended.
+ Refer to https://docs.microsoft.com/azure/data-factory/connector-sftp#using-ssh-public-key-authentication, the privateKey content is a Base64 encoded SSH private key content.
- 1. For **Cause 2**:
+ Please encode **the whole content of original private key file** with base64 encoding and store the encoded string to AKV. Original private key file is the one that can work on SFTP linked service if you click on Upload from file.
- If PKCS#8 format SSH private key is being used
+ Here's some samples used for generating the string:
- PKCS#8 format SSH private key (start with "-----BEGIN ENCRYPTED PRIVATE KEY-----") is currently not supported to access SFTP server in ADF.
+ - Use C# code:
+ ```
+ byte[] keyContentBytes = File.ReadAllBytes(Private Key Path);
+ string keyContent = Convert.ToBase64String(keyContentBytes, Base64FormattingOptions.None);
+ ```
- Run below commands to convert the key to traditional SSH key format (start with "-----BEGIN RSA PRIVATE KEY-----"):
+ - Use Python code:
+ ```
+ import base64
+ rfd = open(r'{Private Key Path}', 'rb')
+ keyContent = rfd.read()
+ rfd.close()
+     print(base64.b64encode(keyContent))
+ ```
- ```
- openssl pkcs8 -in pkcs8_format_key_file -out traditional_format_key_file
- chmod 600 traditional_format_key_file
- ssh-keygen -f traditional_format_key_file -p
- ```
- 1. For **Cause 3**:
+ - Use third-party base64 convert tool
- Please double check with tools like WinSCP to see if your key file or password is correct.
+ Tools like https://www.base64encode.org/ are recommended.
+- **Cause**: Wrong key content format is chosen
-### Incorrect linked service type is used
+- **Recommendation**:
-- **Symptoms**: FTP/SFTP server cannot be reached.
+ PKCS#8 format SSH private key (start with "-----BEGIN ENCRYPTED PRIVATE KEY-----") is currently not supported to access SFTP server in ADF.
-- **Cause**: Incorrect linked service type is used for FTP or SFTP server, like using FTP Linked Service to connect to an SFTP server or in reverse.
+ Run below commands to convert the key to traditional SSH key format (start with "-----BEGIN RSA PRIVATE KEY-----"):
-- **Resolution**: Please check the port of the target server. By default FTP uses port 21 and SFTP uses port 22.
+ ```
+ openssl pkcs8 -in pkcs8_format_key_file -out traditional_format_key_file
+ chmod 600 traditional_format_key_file
+ ssh-keygen -f traditional_format_key_file -p
+ ```
+
+- **Cause**: Invalid credential or private key content
+- **Recommendation**: Double check with tools like WinSCP to see if your key file or password is correct.
### SFTP Copy Activity failed
@@ -937,15 +986,19 @@ busy to handle requests, it returns an HTTP error 503.
- **Resolution**: Double check how your dataset configured by mapping the destination dataset column to confirm if there's such "AccMngr" column.
-### SFTP server connection throttling
+### Error code: SftpFailedToConnectToSftpServer
-- **Symptoms**: Server response does not contain SSH protocol identification and failed to copy.
+- **Message**: `Failed to connect to Sftp server '%server;'.`
-- **Cause**: ADF will create multiple connections to download from SFTP server in parallel, and sometimes it will hit SFTP server throttling. Practically, Different server will return different error when hit throttling.
+- **Cause**: If error message contains 'Socket read operation has timed out after 30000 milliseconds', one possible cause is that incorrect linked service type is used for SFTP server, like using FTP Linked Service to connect to an SFTP server.
-- **Resolution**:
+- **Recommendation**: Check the port of the target server. By default, SFTP uses port 22 (a quick connectivity check is sketched after this section).
- Please specify the max concurrent connection of SFTP dataset to 1 and rerun the copy. If it succeeds to pass, we can be sure that throttling is the cause.
+- **Cause**: If the error message contains 'Server response does not contain SSH protocol identification', one possible cause is that the SFTP server throttled the connection. ADF creates multiple connections to download from the SFTP server in parallel, and it can sometimes hit the SFTP server's throttling limit. In practice, different servers return different errors when throttling occurs.
+
+- **Recommendation**:
+
+ Set the max concurrent connections of the SFTP dataset to 1 and rerun the copy. If the copy succeeds, you can be sure that throttling was the cause.
If you want to improve the throughput, contact the SFTP administrator to increase the concurrent connection count limit, or add the following IP addresses to the allow list:
@@ -955,53 +1008,74 @@ busy to handle requests, it returns an HTTP error 503.
- If you're using Self-hosted IR, please add the machine IP that installed SHIR to allow list.
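As a quick sanity check for the port recommendation above, you can test whether the target server accepts a TCP connection on port 22 at all. The following is a minimal sketch (the host name is a placeholder, and the check only proves the port is open, not that the server speaks SFTP); a `SocketException` on connect usually points to the wrong port or the wrong linked service type:

```csharp
using System;
using System.Net.Sockets;

class SftpPortCheck
{
    static void Main()
    {
        // Placeholder host name - replace with your own SFTP server.
        const string host = "sftp.example.com";

        using (var client = new TcpClient())
        {
            // SFTP listens on port 22 by default; plain FTP uses port 21.
            client.Connect(host, 22);
            Console.WriteLine("Port 22 is reachable, so the target looks like an SFTP endpoint.");
        }
    }
}
```

If the connection succeeds on port 21 but not on port 22, that is a strong hint the target is an FTP server and the FTP linked service should be used instead.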
-### Error code: SftpRenameOperationFail
+## SharePoint Online List
-- **Symptoms**: Pipeline failed to copy data from Blob to SFTP with following error: `Operation on target Copy_5xe failed: Failure happened on 'Sink' side. ErrorCode=SftpRenameOperationFail,Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException`.
+### Error code: SharePointOnlineAuthFailed
-- **Cause**: The option useTempFileRename was set as True when copying the data. This allows the process to use temp files. The error will be triggered if one or more temp files were deleted before the entire data is copied.
+- **Message**: `The access token generated failed, status code: %code;, error message: %message;.`
-- **Resolution**: Set the option of useTempFileName to False.
+- **Cause**: The service principal ID and key may not be set correctly.
+- **Recommendation**: Check whether your registered application (service principal ID) and key are set correctly.
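As a quick way to check the credentials outside of Data Factory, you can try to acquire a token with the same service principal ID and key by using the Azure.Identity library. This is a minimal sketch under the assumption that the application is a regular Azure AD app registration; the tenant ID, client ID, secret, and scope are placeholders, and a successful call only proves that the ID and key are valid, not that the app has been granted access to the SharePoint site:

```csharp
using System;
using System.Threading;
using Azure.Core;
using Azure.Identity;

class ServicePrincipalCheck
{
    static void Main()
    {
        // Placeholder values - replace with your own tenant ID, application (client) ID, and key.
        var credential = new ClientSecretCredential(
            "<tenant-id>", "<service-principal-id>", "<service-principal-key>");

        // An invalid ID or key makes GetToken throw an AuthenticationFailedException.
        AccessToken token = credential.GetToken(
            new TokenRequestContext(new[] { "https://graph.microsoft.com/.default" }),
            CancellationToken.None);

        Console.WriteLine($"Token acquired; it expires at {token.ExpiresOn}.");
    }
}
```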
-## General Copy Activity Error
-### Error code: JreNotFound
+## Xml Format
-- **Message**: `Java Runtime Environment cannot be found on the Self-hosted Integration Runtime machine. It is required for parsing or writing to Parquet/ORC files. Make sure Java Runtime Environment has been installed on the Self-hosted Integration Runtime machine.`
+### Error code: XmlSinkNotSupported
-- **Cause**: The self-hosted integration runtime cannot find Java Runtime. The Java Runtime is required for reading particular source.
+- **Message**: `Write data in xml format is not supported yet, please choose a different format!`
-- **Recommendation**: Check your integration runtime environment, the reference doc: https://docs.microsoft.com/azure/data-factory/format-parquet#using-self-hosted-integration-runtime
+- **Cause**: An XML dataset was used as the sink dataset in your copy activity.
+- **Recommendation**: Use a dataset in a different format as the copy sink.
-### Error code: WildcardPathSinkNotSupported
-- **Message**: `Wildcard in path is not supported in sink dataset. Fix the path: '%setting;'.`
+### Error code: XmlAttributeColumnNameConflict
-- **Cause**: Sink dataset doesn't support wildcard.
+- **Message**: `Column names %attrNames;' for attributes of element '%element;' conflict with that for corresponding child elements, and the attribute prefix used is '%prefix;'.`
-- **Recommendation**: Check the sink dataset and fix the path without wildcard value.
+- **Cause**: Used an attribute prefix, which caused the conflict.
+
+- **Recommendation**: Set a different value for the "attributePrefix" property.
++
+### Error code: XmlValueColumnNameConflict
+
+- **Message**: `Column name for the value of element '%element;' is '%columnName;' and it conflicts with the child element having the same name.`
+
+- **Cause**: Used one of the child element names as the column name for the element value.
+
+- **Recommendation**: Set a different value for the "valueColumn" property.
++
+### Error code: XmlInvalid
+- **Message**: `Input XML file '%file;' is invalid with parsing error '%error;'.`
-### Error code: MappingInvalidPropertyWithEmptyValue
+- **Cause**: The input XML file is not well formed.
-- **Message**: `One or more '%sourceOrSink;' in copy activity mapping doesn't point to any data. Choose one of the three properties 'name', 'path' and 'ordinal' to reference columns/fields.`
+- **Recommendation**: Correct the XML file to make it well formed.
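If it isn't obvious where the file stops being well formed, the same parsing error can be reproduced outside of ADF. Here's a minimal sketch (the file path is a placeholder) that uses the .NET XML parser to report the offending position:

```csharp
using System;
using System.Xml;
using System.Xml.Linq;

class XmlWellFormedCheck
{
    static void Main()
    {
        try
        {
            // Placeholder path - point this at the input file that the copy activity reads.
            XDocument.Load(@"C:\data\input.xml");
            Console.WriteLine("The XML file is well formed.");
        }
        catch (XmlException ex)
        {
            // LineNumber and LinePosition show where the document stops being well formed.
            Console.WriteLine($"Not well formed: {ex.Message} (line {ex.LineNumber}, position {ex.LinePosition})");
        }
    }
}
```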
-### Error code: MappingInvalidPropertyWithNamePathAndOrdinal
+## General Copy Activity Error
-- **Message**: `Mixed properties are used to reference '%sourceOrSink;' columns/fields in copy activity mapping. Please only choose one of the three properties 'name', 'path' and 'ordinal'. The problematic mapping setting is 'name': '%name;', 'path': '%path;','ordinal': '%ordinal;'.`
+### Error code: JreNotFound
+- **Message**: `Java Runtime Environment cannot be found on the Self-hosted Integration Runtime machine. It is required for parsing or writing to Parquet/ORC files. Make sure Java Runtime Environment has been installed on the Self-hosted Integration Runtime machine.`
-### Error code: MappingDuplicatedOrdinal
+- **Cause**: The self-hosted integration runtime cannot find the Java Runtime, which is required for parsing or writing to Parquet/ORC files.
-- **Message**: `Copy activity 'mappings' has duplicated ordinal value "%Ordinal;". Fix the setting in 'mappings'.`
+- **Recommendation**: Check your integration runtime environment; see the reference doc: https://docs.microsoft.com/azure/data-factory/format-parquet#using-self-hosted-integration-runtime
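A quick way to confirm that the Self-hosted Integration Runtime machine can find a JRE is to run `java -version` on that machine. As a sketch only (it assumes `java` is expected to be on the PATH), the same check from C#:

```csharp
using System;
using System.Diagnostics;

class JreCheck
{
    static void Main()
    {
        try
        {
            // 'java -version' writes its version banner to stderr.
            var info = new ProcessStartInfo("java", "-version")
            {
                RedirectStandardError = true,
                UseShellExecute = false
            };
            using (var process = Process.Start(info))
            {
                Console.WriteLine(process.StandardError.ReadToEnd());
                process.WaitForExit();
            }
        }
        catch (System.ComponentModel.Win32Exception)
        {
            Console.WriteLine("No 'java' executable was found on the PATH of this machine.");
        }
    }
}
```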
-### Error code: MappingInvalidOrdinalForSinkColumn
+### Error code: WildcardPathSinkNotSupported
+
+- **Message**: `Wildcard in path is not supported in sink dataset. Fix the path: '%setting;'.`
+
+- **Cause**: Sink dataset doesn't support wildcard.
+
+- **Recommendation**: Check the sink dataset and fix the path without wildcard value.
-- **Message**: `Invalid 'ordinal' property for sink column under 'mappings' property. Ordinal: %Ordinal;.` ### FIPS issue
@@ -1021,6 +1095,7 @@ busy to handle requests, it returns an HTTP error 503.
3. Restart the Self-hosted Integration Runtime machine. + ## Next steps For more troubleshooting help, try these resources:
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-factory-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-troubleshoot-guide.md
@@ -5,12 +5,13 @@ services: data-factory
author: nabhishek ms.service: data-factory ms.topic: troubleshooting
-ms.date: 11/16/2020
+ms.date: 12/30/2020
ms.author: abnarain ms.reviewer: craigg --- # Troubleshoot Azure Data Factory+ [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] This article explores common troubleshooting methods for external control activities in Azure Data Factory.
@@ -493,7 +494,7 @@ The following table applies to Azure Batch.
- **Message**: `There are duplicate files in the resource folder.` -- **Cause**: Multiple files of the same name are in different sub-folders of folderPath.
+- **Cause**: Multiple files of the same name are in different subfolders of folderPath.
- **Recommendation**: Custom activities flatten folder structure under folderPath. If you need to preserve the folder structure, zip the files and extract them in Azure Batch by using an unzip command.
@@ -541,7 +542,6 @@ The following table applies to Azure Batch.
- **Recommendation**: Consider providing a service principal, which has permissions to create an HDInsight cluster in the provided subscription and try again. Verify that the [Manage Identities are set up correctly](../hdinsight/hdinsight-managed-identities.md). - ### Error code: 2300 - **Message**: `Failed to submit the job '%jobId;' to the cluster '%cluster;'. Error: %errorMessage;.`
@@ -949,6 +949,16 @@ The following table applies to Azure Batch.
- **Recommendation**: Provide an Azure Blob storage account as an additional storage for HDInsight on-demand linked service.
+### SSL error when ADF linked service using HDInsight ESP cluster
+
+- **Message**: `Failed to connect to HDInsight cluster: 'ERROR [HY000] [Microsoft][DriverSupport] (1100) SSL certificate verification failed because the certificate is missing or incorrect.`
+
+- **Cause**: The issue is most likely related to the System Trust Store.
+
+- **Resolution**: You can navigate to the path **Microsoft Integration Runtime\4.0\Shared\ODBC Drivers\Microsoft Hive ODBC Driver\lib** and open DriverConfiguration64.exe to change the setting.
+
+ ![Uncheck Use System Trust Store](./media/connector-troubleshoot-guide/system-trust-store-setting.png)
+ ## Web Activity ### Error code: 2128
@@ -1012,9 +1022,9 @@ When you observe that the activity is running much longer than your normal runs
**Error message:** `The payload including configurations on activity/dataSet/linked service is too large. Please check if you have settings with very large value and try to reduce its size.`
-**Cause:** The payload for each activity run includes the activity configuration, the associated dataset(s) and linked service(s) configurations if any, and a small portion of system properties generated per activity type. The limit of such payload size is 896KB as mentioned in [Data Factory limits](../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits) section.
+**Cause:** The payload for each activity run includes the activity configuration, the associated dataset and linked service configurations (if any), and a small portion of system properties generated per activity type. The limit on this payload size is 896 KB, as mentioned in the [Data Factory limits](../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits) section.
-**Recommendation:** You hit this limit likely because you pass in one or more large parameter values from either upstream activity output or external, especially if you pass actual data across activities in control flow. Please check if you can reduce the size of large parameter values, or tune your pipeline logic to avoid passing such values across activities and handle it inside the activity instead.
+**Recommendation:** You likely hit this limit because you pass one or more large parameter values, either from an upstream activity output or from an external source, especially if you pass actual data across activities in the control flow. Check whether you can reduce the size of large parameter values, or tune your pipeline logic to avoid passing such values across activities and handle them inside the activity instead.
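Because the limit applies to the serialized payload, a rough way to tell whether a parameter value is the likely culprit is to measure its serialized size. This sketch is only an approximation of what ADF serializes (the sample object stands in for whatever value you pass between activities):

```csharp
using System;
using System.Text;
using System.Text.Json;

class PayloadSizeCheck
{
    static void Main()
    {
        // Stand-in for a large parameter value passed between activities.
        var parameterValue = new { rows = new int[50000] };

        string serialized = JsonSerializer.Serialize(parameterValue);
        double sizeKb = Encoding.UTF8.GetByteCount(serialized) / 1024.0;

        // The whole activity payload, not just this value, must stay under roughly 896 KB.
        Console.WriteLine($"Serialized parameter size: {sizeKb:F1} KB");
    }
}
```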
## Next steps
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/diagnostic-logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/diagnostic-logging.md
@@ -144,6 +144,10 @@ You can use this Azure Resource Manager (ARM) template to deploy an attack analy
![DDoS Protection Workbook](./media/ddos-attack-telemetry/ddos-attack-analytics-workbook.png)
+## Validate and test
+
+To simulate a DDoS attack to validate your logs, see [Validate DDoS detection](test-through-simulations.md).
+ ## Next steps In this tutorial, you learned how to:
@@ -155,4 +159,4 @@ In this tutorial, you learned how to:
To learn how to configure attack alerts, continue to the next tutorial. > [!div class="nextstepaction"]
-> [View and configure DDoS protection alerts](alerts.md)
\ No newline at end of file
+> [View and configure DDoS protection alerts](alerts.md)
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/telemetry.md
@@ -82,8 +82,6 @@ The metric names present different packet types, and bytes vs. packets, with a b
- **Forwarded tag name** (for example **Inbound Packets Forwarded DDoS**): The number of packets forwarded by the DDoS system to the destination VIP – traffic that was not filtered. - **No tag name** (for example **Inbound Packets DDoS**): The total number of packets that came into the scrubbing system – representing the sum of the packets dropped and forwarded.
-This [Azure Monitor alert rule](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Azure%20Monitor%20Alert%20-%20DDoS%20Mitigation%20Started) will run a simple query to detect when an active DDoS mitigation is occurring. To simulate a DDoS attack to validate telemetry, see [Validate DDoS detection](test-through-simulations.md).
- ## View DDoS mitigation policies DDoS Protection Standard applies three auto-tuned mitigation policies (TCP SYN, TCP & UDP) for each public IP address of the protected resource, in the virtual network that has DDoS enabled. You can view the policy thresholds by selecting the **Inbound TCP packets to trigger DDoS mitigation** and **Inbound UDP packets to trigger DDoS mitigation** metrics with **aggregation** type as 'Max', as shown in the following picture:
@@ -108,4 +106,4 @@ In this tutorial, you learned how to:
To learn how to configure attack mitigation reports and flow logs, continue to the next tutorial. > [!div class="nextstepaction"]
-> [View and configure DDoS diagnostic logging](diagnostic-logging.md)
\ No newline at end of file
+> [View and configure DDoS diagnostic logging](diagnostic-logging.md)
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-use-apis-sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-apis-sdks.md
@@ -82,15 +82,9 @@ The Azure Digital Twins .NET (C#) SDK is part of the Azure SDK for .NET. It is o
> [!NOTE] > For more information on SDK design, see the general [design principles for Azure SDKs](https://azure.github.io/azure-sdk/general_introduction.html) and the specific [.NET design guidelines](https://azure.github.io/azure-sdk/dotnet_introduction.html).
-To use the SDK, include the NuGet package **Azure.DigitalTwins.Core** with your project. You will also need the latest version of the **Azure.Identity** package.
-
-* In Visual Studio, you can add packages with the NuGet Package Manager (accessed through *Tools > NuGet Package Manager > Manage NuGet Packages for Solution*).
-* Using the .NET command-line tool, you can run:
-
- ```cmd/sh
- dotnet add package Azure.DigitalTwins.Core --version 1.0.0-preview.3
- dotnet add package Azure.identity
- ```
+To use the SDK, include the NuGet package **Azure.DigitalTwins.Core** with your project. You will also need the latest version of the **Azure.Identity** package. In Visual Studio, you can add these packages using the NuGet Package Manager (accessed through *Tools > NuGet Package Manager > Manage NuGet Packages for Solution*). Alternatively, you can use the .NET command line tool with the commands found in the NuGet package links below to add these to your project:
+* [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
+* [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure.
For a detailed walk-through of using the APIs in practice, see the [*Tutorial: Code a client app*](tutorial-code.md).
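As a brief illustration of how the two packages fit together (a minimal sketch; the instance URL is a placeholder), the client type from **Azure.DigitalTwins.Core** is typically constructed with a credential type from **Azure.Identity**:

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

class CreateClient
{
    static void Main()
    {
        // Placeholder host name - replace with the URL of your Azure Digital Twins instance.
        var instanceUrl = new Uri("https://<your-instance>.api.wcus.digitaltwins.azure.net");

        // DefaultAzureCredential picks up whatever Azure identity is available locally.
        var client = new DigitalTwinsClient(instanceUrl, new DefaultAzureCredential());
        Console.WriteLine($"Created a DigitalTwinsClient for {instanceUrl}.");
    }
}
```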
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/tutorial-end-to-end https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-end-to-end.md
@@ -135,7 +135,7 @@ For a specific target, choose **Azure Function App (Windows)** and hit *Next*.
On the *Functions instance* page, choose your subscription. This should populate a box with the *resource groups* in your subscription.
-Select your instance's resource group and hit *+ Create a new Azure Function...*.
+Select your instance's resource group and hit *+* to create a new Azure Function.
:::image type="content" source="media/tutorial-end-to-end/publish-azure-function-3.png" alt-text="Publish Azure function in Visual Studio: Functions instance (before function app)":::
governance https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/work-with-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/work-with-data.md
@@ -3,7 +3,7 @@ title: Work with large data sets
description: Understand how to get, format, page, and skip records in large data sets while working with Azure Resource Graph. ms.date: 09/30/2020 ms.topic: conceptual
-ms.custom: devx-track-csharp, devx-track-azurecli
+ms.custom: devx-track-csharp
--- # Working with large Azure resource data sets
governance https://docs.microsoft.com/en-us/azure/governance/resource-graph/first-query-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-python.md
@@ -3,7 +3,7 @@ title: "Quickstart: Your first Python query"
description: In this quickstart, you follow the steps to enable the Resource Graph library for Python and run your first query. ms.date: 10/14/2020 ms.topic: quickstart
-ms.custom: devx-track-python, devx-track-azurecli
+ms.custom: devx-track-python
--- # Quickstart: Run your first Resource Graph query using Python
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/apache-kafka-spark-structured-streaming-cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/apache-kafka-spark-structured-streaming-cosmosdb.md
@@ -1,6 +1,6 @@
--- title: Apache Spark & Apache Kafka with Cosmos DB - Azure HDInsight
-description: Learn how to use Apache Spark Structured Streaming to read data from Apache Kafka and then store it into Azure Cosmos DB. In this example, you stream data using a Jupyter notebook from Spark on HDInsight.
+description: Learn how to use Apache Spark Structured Streaming to read data from Apache Kafka and then store it into Azure Cosmos DB. In this example, you stream data using a Jupyter Notebook from Spark on HDInsight.
author: hrasheed-msft ms.author: hrasheed ms.reviewer: jasonh
@@ -92,7 +92,7 @@ resourceGroupName='myresourcegroup'
name='mycosmosaccount' # WARNING: If you change the databaseName or collectionName
-# then you must update the values in the Jupyter notebook
+# then you must update the values in the Jupyter Notebook
databaseName='kafkadata' collectionName='kafkacollection'
@@ -129,7 +129,7 @@ The code for the example described in this document is available at [https://git
Use the following steps to upload the notebooks from the project to your Spark on HDInsight cluster:
-1. In your web browser, connect to the Jupyter notebook on your Spark cluster. In the following URL, replace `CLUSTERNAME` with the name of your __Spark__ cluster:
+1. In your web browser, connect to the Jupyter Notebook on your Spark cluster. In the following URL, replace `CLUSTERNAME` with the name of your __Spark__ cluster:
```http https://CLUSTERNAME.azurehdinsight.net/jupyter
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hadoop/apache-hadoop-deep-dive-advanced-analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-deep-dive-advanced-analytics.md
@@ -86,7 +86,7 @@ There are three key tasks in this advanced analytics scenario:
1. Create an Azure HDInsight Hadoop cluster with an Apache Spark 2.1.0 distribution. 2. Run a custom script to install Microsoft Cognitive Toolkit on all nodes of an Azure HDInsight Spark cluster.
-3. Upload a pre-built Jupyter notebook to your HDInsight Spark cluster to apply a trained Microsoft Cognitive Toolkit deep learning model to files in an Azure Blob Storage Account using the Spark Python API (PySpark).
+3. Upload a pre-built Jupyter Notebook to your HDInsight Spark cluster to apply a trained Microsoft Cognitive Toolkit deep learning model to files in an Azure Blob Storage Account using the Spark Python API (PySpark).
This example uses the CIFAR-10 image set compiled and distributed by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset contains 60,000 32×32 color images belonging to 10 mutually exclusive classes:
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-apache-kafka-spark-structured-streaming https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-apache-kafka-spark-structured-streaming.md
@@ -1,6 +1,6 @@
--- title: 'Tutorial: Apache Spark Streaming & Apache Kafka - Azure HDInsight'
-description: Learn how to use Apache Spark streaming to get data into or out of Apache Kafka. In this tutorial, you stream data using a Jupyter notebook from Spark on HDInsight.
+description: Learn how to use Apache Spark streaming to get data into or out of Apache Kafka. In this tutorial, you stream data using a Jupyter Notebook from Spark on HDInsight.
author: hrasheed-msft ms.author: hrasheed ms.reviewer: jasonh
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-apache-spark-with-kafka https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-apache-spark-with-kafka.md
@@ -1,6 +1,6 @@
--- title: Apache Spark streaming with Apache Kafka - Azure HDInsight
-description: Learn how to use Apache Spark to stream data into or out of Apache Kafka using DStreams. In this example, you stream data using a Jupyter notebook from Spark on HDInsight.
+description: Learn how to use Apache Spark to stream data into or out of Apache Kafka using DStreams. In this example, you stream data using a Jupyter Notebook from Spark on HDInsight.
author: hrasheed-msft ms.author: hrasheed ms.reviewer: jasonh
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-apps-install-custom-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-apps-install-custom-applications.md
@@ -6,7 +6,7 @@ ms.author: hrasheed
ms.reviewer: jasonh ms.service: hdinsight ms.topic: how-to
-ms.custom: hdinsightactive, devx-track-azurecli
+ms.custom: hdinsightactive
ms.date: 11/29/2019 ---
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-port-settings-for-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-hadoop-port-settings-for-services.md
@@ -161,7 +161,7 @@ Examples:
| --- | --- | --- | --- | --- | --- | | Spark Thrift servers |Head nodes |10002 |Thrift | &nbsp; | Service for connecting to Spark SQL (Thrift/JDBC) | | Livy server | Head nodes | 8998 | HTTP | &nbsp; | Service for running statements, jobs, and applications |
-| Jupyter notebook | Head nodes | 8001 | HTTP | &nbsp; | Jupyter notebook website |
+| Jupyter Notebook | Head nodes | 8001 | HTTP | &nbsp; | Jupyter Notebook website |
Examples:
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-windows-tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-hadoop-windows-tools.md
@@ -70,9 +70,9 @@ These articles show how:
## Notebooks on Spark for data scientists
-Apache Spark clusters in HDInsight include Apache Zeppelin notebooks and kernels that can be used with Jupyter notebooks.
+Apache Spark clusters in HDInsight include Apache Zeppelin notebooks and kernels that can be used with Jupyter Notebooks.
-* [Learn how to use kernels on Apache Spark clusters with Jupyter notebooks to test Spark applications](spark/apache-spark-zeppelin-notebook.md)
+* [Learn how to use kernels on Apache Spark clusters with Jupyter Notebooks to test Spark applications](spark/apache-spark-zeppelin-notebook.md)
* [Learn how to use Apache Zeppelin notebooks on Apache Spark clusters to run Spark jobs](spark/apache-spark-jupyter-notebook-kernels.md) ## Run Linux-based tools and technologies on Windows
@@ -80,7 +80,7 @@ Apache Spark clusters in HDInsight include Apache Zeppelin notebooks and kernels
If you come across a situation where you must use a tool or technology that is only available on Linux, consider the following options: * **Bash on Ubuntu on Windows 10** provides a Linux subsystem on Windows. Bash allows you to directly run Linux utilities without having to maintain a dedicated Linux installation. See [Windows Subsystem for Linux Installation Guide for Windows 10](/windows/wsl/install-win10) for installation steps. Other [Unix shells](https://www.gnu.org/software/bash/) will work as well.
-* **Docker for Windows** provides access to many Linux-based tools, and can be run directly from Windows. For example, you can use Docker to run the Beeline client for Hive directly from Windows. You can also use Docker to run a local Jupyter notebook and remotely connect to Spark on HDInsight. [Get started with Docker for Windows](https://docs.docker.com/docker-for-windows/)
+* **Docker for Windows** provides access to many Linux-based tools, and can be run directly from Windows. For example, you can use Docker to run the Beeline client for Hive directly from Windows. You can also use Docker to run a local Jupyter Notebook and remotely connect to Spark on HDInsight. [Get started with Docker for Windows](https://docs.docker.com/docker-for-windows/)
* **[MobaXTerm](https://mobaxterm.mobatek.net/)** allows you to graphically browse the cluster file system over an SSH connection. ## Cross-platform tools
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-plan-virtual-network-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-plan-virtual-network-deployment.md
@@ -6,7 +6,7 @@ ms.author: hrasheed
ms.reviewer: jasonh ms.service: hdinsight ms.topic: conceptual
-ms.custom: hdinsightactive,seoapr2020, devx-track-azurecli
+ms.custom: hdinsightactive,seoapr2020
ms.date: 05/04/2020 ---
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-troubleshoot-guide.md
@@ -16,7 +16,7 @@ ms.date: 08/14/2019
|![hdinsight apache HBase icon icon](./media/hdinsight-troubleshoot-guide/hdinsight-apache-hbase.png)<br>[Troubleshoot Apache HBase]()|<br>[Unassigned regions](hbase/hbase-troubleshoot-unassigned-regions.md#scenario-unassigned-regions)<br><br>[Timeouts with 'hbase hbck' command in Azure HDInsight](hbase/hbase-troubleshoot-timeouts-hbase-hbck.md)<br><br>[Apache Phoenix connectivity issues in Azure HDInsight](hbase/hbase-troubleshoot-phoenix-connectivity.md)<br><br>[What causes a primary server to fail to start?](hbase/hbase-troubleshoot-start-fails.md)<br><br>[BindException - Address already in use](hbase/hbase-troubleshoot-bindexception-address-use.md)| |![hdinsight apache hdfs icon icon](./media/hdinsight-troubleshoot-guide/hdinsight-apache-hdfs.png)<br>[Troubleshoot Apache Hadoop HDFS](hdinsight-troubleshoot-hdfs.md)|<br>[How do I access a local HDFS from inside a cluster?](hdinsight-troubleshoot-hdfs.md#how-do-i-access-local-hdfs-from-inside-a-cluster)<br><br>[Local HDFS stuck in safe mode on Azure HDInsight cluster](hadoop/hdinsight-hdfs-troubleshoot-safe-mode.md)| |![hdinsight apache Hive icon icon](./media/hdinsight-troubleshoot-guide/hdinsight-apache-hive.png)<br>[Troubleshoot Apache Hive](hdinsight-troubleshoot-hive.md)|<br>[How do I export a Hive metastore and import it on another cluster?](hdinsight-troubleshoot-hive.md#how-do-i-export-a-hive-metastore-and-import-it-on-another-cluster)<br><br>[How do I locate Apache Hive logs on a cluster?](hdinsight-troubleshoot-hive.md#how-do-i-locate-hive-logs-on-a-cluster)<br><br>[How do I launch the Apache Hive shell with specific configurations on a cluster?](hdinsight-troubleshoot-hive.md#how-do-i-launch-the-hive-shell-with-specific-configurations-on-a-cluster)<br><br>[How do I analyze Apache Tez DAG data on a cluster-critical path?](hdinsight-troubleshoot-hive.md#how-do-i-analyze-tez-dag-data-on-a-cluster-critical-path)<br><br>[How do I download Apache Tez DAG data from a cluster?](hdinsight-troubleshoot-hive.md#how-do-i-download-tez-dag-data-from-a-cluster)|
-|![hdinsight apache Spark icon icon](./media/hdinsight-troubleshoot-guide/hdinsight-apache-spark.png)<br>[Troubleshoot Apache Spark](./spark/apache-troubleshoot-spark.md)|<br>[How do I configure an Apache Spark application by using Apache Ambari on clusters?](spark/apache-troubleshoot-spark.md#how-do-i-configure-an-apache-spark-application-by-using-apache-ambari-on-clusters)<br><br>[How do I configure an Apache Spark application by using a Jupyter notebook on clusters?](spark/apache-troubleshoot-spark.md#how-do-i-configure-an-apache-spark-application-by-using-a-jupyter-notebook-on-clusters)<br><br>[How do I configure an Apache Spark application by using Apache Livy on clusters?](spark/apache-troubleshoot-spark.md#how-do-i-configure-an-apache-spark-application-by-using-apache-livy-on-clusters)<br><br>[How do I configure an Apache Spark application by using spark-submit on clusters?](spark/apache-troubleshoot-spark.md#how-do-i-configure-an-apache-spark-application-by-using-spark-submit-on-clusters)<br><br>[How do I configure an Apache Spark application by using IntelliJ?](spark/apache-spark-intellij-tool-plugin.md)<br><br>[How do I configure an Apache Spark application by using Eclipse?](spark/apache-spark-eclipse-tool-plugin.md)<br><br>[How do I configure an Apache Spark application by using VSCode?](hdinsight-for-vscode.md)<br><br>[OutOfMemoryError exception for Apache Spark](spark/apache-spark-troubleshoot-outofmemory.md#scenario-outofmemoryerror-exception-for-apache-spark)|
+|![hdinsight apache Spark icon icon](./media/hdinsight-troubleshoot-guide/hdinsight-apache-spark.png)<br>[Troubleshoot Apache Spark](./spark/apache-troubleshoot-spark.md)|<br>[How do I configure an Apache Spark application by using Apache Ambari on clusters?](spark/apache-troubleshoot-spark.md#how-do-i-configure-an-apache-spark-application-by-using-apache-ambari-on-clusters)<br><br>[How do I configure an Apache Spark application by using a Jupyter Notebook on clusters?](spark/apache-troubleshoot-spark.md#how-do-i-configure-an-apache-spark-application-by-using-a-jupyter-notebook-on-clusters)<br><br>[How do I configure an Apache Spark application by using Apache Livy on clusters?](spark/apache-troubleshoot-spark.md#how-do-i-configure-an-apache-spark-application-by-using-apache-livy-on-clusters)<br><br>[How do I configure an Apache Spark application by using spark-submit on clusters?](spark/apache-troubleshoot-spark.md#how-do-i-configure-an-apache-spark-application-by-using-spark-submit-on-clusters)<br><br>[How do I configure an Apache Spark application by using IntelliJ?](spark/apache-spark-intellij-tool-plugin.md)<br><br>[How do I configure an Apache Spark application by using Eclipse?](spark/apache-spark-eclipse-tool-plugin.md)<br><br>[How do I configure an Apache Spark application by using VSCode?](hdinsight-for-vscode.md)<br><br>[OutOfMemoryError exception for Apache Spark](spark/apache-spark-troubleshoot-outofmemory.md#scenario-outofmemoryerror-exception-for-apache-spark)|
|![hdinsight apache Storm icon icon](./media/hdinsight-troubleshoot-guide/hdinsight-apache-storm.png)<br>[Troubleshoot Apache Storm](./storm/apache-troubleshoot-storm.md)|<br>[How do I access the Apache Storm UI on a cluster?](storm/apache-troubleshoot-storm.md#how-do-i-access-the-storm-ui-on-a-cluster)<br><br>[How do I transfer Apache Storm event hub spout checkpoint information from one topology to another?](storm/apache-troubleshoot-storm.md#how-do-i-transfer-storm-event-hub-spout-checkpoint-information-from-one-topology-to-another)<br><br>[How do I locate Storm binaries on a cluster?](storm/apache-troubleshoot-storm.md#how-do-i-locate-storm-binaries-on-a-cluster)<br><br>[How do I determine the deployment topology of a Storm cluster?](storm/apache-troubleshoot-storm.md#how-do-i-determine-the-deployment-topology-of-a-storm-cluster)<br><br>[How do I locate Apache Storm event hub spout binaries for development?](storm/apache-troubleshoot-storm.md#how-do-i-locate-storm-event-hub-spout-binaries-for-development)| |![hdinsight apache YARN icon icon](./media/hdinsight-troubleshoot-guide/hdinsight-apache-yarn.png)<br>[Troubleshoot Apache Hadoop YARN](hdinsight-troubleshoot-YARN.md)|<br>[How do I create a new Apache Hadoop YARN queue on a cluster?](hdinsight-troubleshoot-yarn.md#how-do-i-create-a-new-yarn-queue-on-a-cluster)<br><br>[How do I download Apache Hadoop YARN logs from a cluster?](hdinsight-troubleshoot-yarn.md#how-do-i-download-yarn-logs-from-a-cluster)|
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-version-release https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-version-release.md
@@ -95,7 +95,7 @@ There's no supported upgrade path from previous versions of HDInsight to HDInsig
* Hive View is only available on HDInsight 4.0 clusters with a version number equal to or greater than 4.1. This version number is available in Ambari Admin -> Versions. * Shell interpreter in Apache Zeppelin isn't supported in Spark and Interactive Query clusters. * You can't *disable* LLAP on a Spark-LLAP cluster. You can only turn LLAP off.
-* Azure Data Lake Storage Gen2 can't save Jupyter notebooks in a Spark cluster.
+* Azure Data Lake Storage Gen2 can't save Jupyter Notebooks in a Spark cluster.
* Apache pig runs on Tez by default, However you can change it to Mapreduce * Spark SQL Ranger integration for row and column security is deprecated * Spark 2.4 and Kafka 2.1 are available in HDInsight 4.0, so Spark 2.3 and Kafka 1.1 are no longer supported. We recommend using Spark 2.4 & Kafka 2.1 and above in HDInsight 4.0.
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/kafka/apache-kafka-connector-iot-hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/kafka/apache-kafka-connector-iot-hub.md
@@ -7,7 +7,7 @@ ms.author: hrasheed
ms.reviewer: jasonh ms.service: hdinsight ms.topic: how-to
-ms.custom: hdinsightactive, devx-track-azurecli
+ms.custom: hdinsightactive
ms.date: 11/26/2019 ---
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-connect-to-sql-database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-connect-to-sql-database.md
@@ -31,20 +31,20 @@ Learn how to connect an Apache Spark cluster in Azure HDInsight with Azure SQL D
Start by creating a Jupyter Notebook associated with the Spark cluster. You use this notebook to run the code snippets used in this article. 1. From the [Azure portal](https://portal.azure.com/), open your cluster.
-1. Select **Jupyter notebook** underneath **Cluster dashboards** on the right side. If you don't see **Cluster dashboards**, select **Overview** from the left menu. If prompted, enter the admin credentials for the cluster.
+1. Select **Jupyter Notebook** underneath **Cluster dashboards** on the right side. If you don't see **Cluster dashboards**, select **Overview** from the left menu. If prompted, enter the admin credentials for the cluster.
- ![Jupyter notebook on Apache Spark](./media/apache-spark-connect-to-sql-database/hdinsight-spark-cluster-dashboard-jupyter-notebook.png "Jupyter notebook on Spark")
+ ![Jupyter Notebook on Apache Spark](./media/apache-spark-connect-to-sql-database/hdinsight-spark-cluster-dashboard-jupyter-notebook.png "Jupyter Notebook on Spark")
> [!NOTE]
- > You can also access the Jupyter notebook on Spark cluster by opening the following URL in your browser. Replace **CLUSTERNAME** with the name of your cluster:
+ > You can also access the Jupyter Notebook on Spark cluster by opening the following URL in your browser. Replace **CLUSTERNAME** with the name of your cluster:
> > `https://CLUSTERNAME.azurehdinsight.net/jupyter`
-1. In the Jupyter notebook, from the top-right corner, click **New**, and then click **Spark** to create a Scala notebook. Jupyter notebooks on HDInsight Spark cluster also provide the **PySpark** kernel for Python2 applications, and the **PySpark3** kernel for Python3 applications. For this article, we create a Scala notebook.
+1. In the Jupyter Notebook, from the top-right corner, click **New**, and then click **Spark** to create a Scala notebook. Jupyter Notebooks on HDInsight Spark cluster also provide the **PySpark** kernel for Python2 applications, and the **PySpark3** kernel for Python3 applications. For this article, we create a Scala notebook.
- ![Kernels for Jupyter notebook on Spark](./media/apache-spark-connect-to-sql-database/kernel-jupyter-notebook-on-spark.png "Kernels for Jupyter notebook on Spark")
+ ![Kernels for Jupyter Notebook on Spark](./media/apache-spark-connect-to-sql-database/kernel-jupyter-notebook-on-spark.png "Kernels for Jupyter Notebook on Spark")
- For more information about the kernels, see [Use Jupyter notebook kernels with Apache Spark clusters in HDInsight](apache-spark-jupyter-notebook-kernels.md).
+ For more information about the kernels, see [Use Jupyter Notebook kernels with Apache Spark clusters in HDInsight](apache-spark-jupyter-notebook-kernels.md).
> [!NOTE] > In this article, we use a Spark (Scala) kernel because streaming data from Spark into SQL Database is only supported in Scala and Java currently. Even though reading from and writing into SQL can be done using Python, for consistency in this article, we use Scala for all three operations.
@@ -59,7 +59,7 @@ You can now start creating your application.
In this section, you read data from a table (for example, **SalesLT.Address**) that exists in the AdventureWorks database.
-1. In a new Jupyter notebook, in a code cell, paste the following snippet and replace the placeholder values with the values for your database.
+1. In a new Jupyter Notebook, in a code cell, paste the following snippet and replace the placeholder values with the values for your database.
```scala // Declare the values for your database
@@ -116,7 +116,7 @@ In this section, you read data from a table (for example, **SalesLT.Address**) t
In this section, we use a sample CSV file available on the cluster to create a table in your database and populate it with data. The sample CSV file (**HVAC.csv**) is available on all HDInsight clusters at `HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv`.
-1. In a new Jupyter notebook, in a code cell, paste the following snippet and replace the placeholder values with the values for your database.
+1. In a new Jupyter Notebook, in a code cell, paste the following snippet and replace the placeholder values with the values for your database.
```scala // Declare the values for your database
@@ -187,7 +187,7 @@ In this section, we stream data into the `hvactable` that you created in the pre
TRUNCATE TABLE [dbo].[hvactable] ```
-1. Create a new Jupyter notebook on the HDInsight Spark cluster. In a code cell, paste the following snippet and then press **SHIFT + ENTER**:
+1. Create a new Jupyter Notebook on the HDInsight Spark cluster. In a code cell, paste the following snippet and then press **SHIFT + ENTER**:
```scala import org.apache.spark.sql._
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-custom-library-website-log-analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-custom-library-website-log-analysis.md
@@ -20,7 +20,7 @@ An Apache Spark cluster on HDInsight. For instructions, see [Create Apache Spark
## Save raw data as an RDD
-In this section, we use the [Jupyter](https://jupyter.org) notebook associated with an Apache Spark cluster in HDInsight to run jobs that process your raw sample data and save it as a Hive table. The sample data is a .csv file (hvac.csv) available on all clusters by default.
+In this section, we use the [Jupyter](https://jupyter.org) Notebook associated with an Apache Spark cluster in HDInsight to run jobs that process your raw sample data and save it as a Hive table. The sample data is a .csv file (hvac.csv) available on all clusters by default.
Once your data is saved as an Apache Hive table, in the next section we'll connect to the Hive table using BI tools such as Power BI and Tableau.
@@ -28,7 +28,7 @@ Once your data is saved as an Apache Hive table, in the next section we'll conne
1. Create a new notebook. Select **New**, and then **PySpark**.
- ![Create a new Apache Jupyter notebook](./media/apache-spark-custom-library-website-log-analysis/hdinsight-create-jupyter-notebook.png "Create a new Jupyter notebook")
+ ![Create a new Apache Jupyter Notebook](./media/apache-spark-custom-library-website-log-analysis/hdinsight-create-jupyter-notebook.png "Create a new Jupyter Notebook")
1. A new notebook is created and opened with the name Untitled.pynb. Select the notebook name at the top, and enter a friendly name.
@@ -198,5 +198,5 @@ Once your data is saved as an Apache Hive table, in the next section we'll conne
Explore the following articles: * [Overview: Apache Spark on Azure HDInsight](apache-spark-overview.md)
-* [Use external packages with Jupyter notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
+* [Use external packages with Jupyter Notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
* [Install Jupyter on your computer and connect to an HDInsight Spark cluster](apache-spark-jupyter-notebook-install-locally.md)
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-eclipse-tool-plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-eclipse-tool-plugin.md
@@ -344,8 +344,8 @@ There are two modes to submit the jobs. If storage credential is provided, batch
* [Use Azure Toolkit for IntelliJ to debug Apache Spark applications remotely through VPN](./apache-spark-intellij-tool-plugin-debug-jobs-remotely.md) * [Use Azure Toolkit for IntelliJ to debug Apache Spark applications remotely through SSH](./apache-spark-intellij-tool-debug-remotely-through-ssh.md) * [Use Apache Zeppelin notebooks with an Apache Spark cluster on HDInsight](apache-spark-zeppelin-notebook.md)
-* [Kernels available for Jupyter notebook in Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
-* [Use external packages with Jupyter notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
+* [Kernels available for Jupyter Notebook in Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
+* [Use external packages with Jupyter Notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
* [Install Jupyter on your computer and connect to an HDInsight Spark cluster](apache-spark-jupyter-notebook-install-locally.md) ### Managing resources
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-intellij-tool-debug-remotely-through-ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-intellij-tool-debug-remotely-through-ssh.md
@@ -173,8 +173,8 @@ This article provides step-by-step guidance on how to use HDInsight Tools in [Az
* [Use Azure Toolkit for IntelliJ to debug Apache Spark applications remotely through VPN](apache-spark-intellij-tool-plugin-debug-jobs-remotely.md) * [Use HDInsight Tools in Azure Toolkit for Eclipse to create Apache Spark applications](./apache-spark-eclipse-tool-plugin.md) * [Use Apache Zeppelin notebooks with an Apache Spark cluster on HDInsight](apache-spark-zeppelin-notebook.md)
-* [Kernels available for Jupyter notebook in the Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
-* [Use external packages with Jupyter notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
+* [Kernels available for Jupyter Notebook in the Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
+* [Use external packages with Jupyter Notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
* [Install Jupyter on your computer and connect to an HDInsight Spark cluster](apache-spark-jupyter-notebook-install-locally.md) ### Manage resources
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-intellij-tool-failure-debug https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-intellij-tool-failure-debug.md
@@ -137,8 +137,8 @@ Create a spark ScalaΓÇï/Java application, then run the application on a Spark cl
* [Use HDInsight Tools for IntelliJ with Hortonworks Sandbox](../hadoop/apache-hadoop-visual-studio-tools-get-started.md) * [Use HDInsight Tools in Azure Toolkit for Eclipse to create Apache Spark applications](./apache-spark-eclipse-tool-plugin.md) * [Use Apache Zeppelin notebooks with an Apache Spark cluster on HDInsight](apache-spark-zeppelin-notebook.md)
-* [Kernels available for Jupyter notebook in the Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
-* [Use external packages with Jupyter notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
+* [Kernels available for Jupyter Notebook in the Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
+* [Use external packages with Jupyter Notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
* [Install Jupyter on your computer and connect to an HDInsight Spark cluster](apache-spark-jupyter-notebook-install-locally.md) ### Manage resources
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-intellij-tool-plugin-debug-jobs-remotely https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-intellij-tool-plugin-debug-jobs-remotely.md
@@ -321,8 +321,8 @@ We recommend that you also create an Apache Spark cluster in Azure HDInsight tha
* [Use Azure Toolkit for IntelliJ to debug Apache Spark applications remotely through SSH](apache-spark-intellij-tool-debug-remotely-through-ssh.md) * [Use HDInsight Tools in Azure Toolkit for Eclipse to create Apache Spark applications](./apache-spark-eclipse-tool-plugin.md) * [Use Apache Zeppelin notebooks with an Apache Spark cluster in HDInsight](apache-spark-zeppelin-notebook.md)
-* [Kernels available for Jupyter notebook in an Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
-* [Use external packages with Jupyter notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
+* [Kernels available for Jupyter Notebook in an Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
+* [Use external packages with Jupyter Notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
* [Install Jupyter on your computer and connect to an HDInsight Spark cluster](apache-spark-jupyter-notebook-install-locally.md) ### Manage resources
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-job-debugging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-job-debugging.md
@@ -31,11 +31,11 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
> [!TIP] > Alternatively, you can also launch the YARN UI from the Ambari UI. To launch the Ambari UI, select **Ambari home** under **Cluster dashboards**. From the Ambari UI, navigate to **YARN** > **Quick Links** > the active Resource Manager > **Resource Manager UI**.
-2. Because you started the Spark job using Jupyter notebooks, the application has the name **remotesparkmagics** (the name for all applications started from the notebooks). Select the application ID against the application name to get more information about the job. This action launches the application view.
+2. Because you started the Spark job using Jupyter Notebooks, the application has the name **remotesparkmagics** (the name for all applications started from the notebooks). Select the application ID against the application name to get more information about the job. This action launches the application view.
![Spark history server Find Spark application ID](./media/apache-spark-job-debugging/find-application-id1.png)
- For such applications that are launched from the Jupyter notebooks, the status is always **RUNNING** until you exit the notebook.
+ For such applications that are launched from the Jupyter Notebooks, the status is always **RUNNING** until you exit the notebook.
3. From the application view, you can drill down further to find out the containers associated with the application and the logs (stdout/stderr). You can also launch the Spark UI by clicking the linking corresponding to the **Tracking URL**, as shown below.
@@ -45,7 +45,7 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
In the Spark UI, you can drill down into the Spark jobs that are spawned by the application you started earlier.
-1. To launch the Spark UI, from the application view, select the link against the **Tracking URL**, as shown in the screen capture above. You can see all the Spark jobs that are launched by the application running in the Jupyter notebook.
+1. To launch the Spark UI, from the application view, select the link against the **Tracking URL**, as shown in the screen capture above. You can see all the Spark jobs that are launched by the application running in the Jupyter Notebook.
![Spark history server jobs tab](./media/apache-spark-job-debugging/view-apache-spark-jobs.png)
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-jupyter-notebook-kernels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-jupyter-notebook-kernels.md
@@ -1,6 +1,6 @@
---
-title: Kernels for Jupyter notebook on Spark clusters in Azure HDInsight
-description: Learn about the PySpark, PySpark3, and Spark kernels for Jupyter notebook available with Spark clusters on Azure HDInsight.
+title: Kernels for Jupyter Notebook on Spark clusters in Azure HDInsight
+description: Learn about the PySpark, PySpark3, and Spark kernels for Jupyter Notebook available with Spark clusters on Azure HDInsight.
author: hrasheed-msft ms.author: hrasheed ms.reviewer: jasonh
@@ -10,9 +10,9 @@ ms.custom: hdinsightactive,hdiseo17may2017,seoapr2020
ms.date: 04/24/2020 ---
-# Kernels for Jupyter notebook on Apache Spark clusters in Azure HDInsight
+# Kernels for Jupyter Notebook on Apache Spark clusters in Azure HDInsight
-HDInsight Spark clusters provide kernels that you can use with the Jupyter notebook on [Apache Spark](./apache-spark-overview.md) for testing your applications. A kernel is a program that runs and interprets your code. The three kernels are:
+HDInsight Spark clusters provide kernels that you can use with the Jupyter Notebook on [Apache Spark](./apache-spark-overview.md) for testing your applications. A kernel is a program that runs and interprets your code. The three kernels are:
- **PySpark** - for applications written in Python2. - **PySpark3** - for applications written in Python3.
@@ -24,28 +24,28 @@ In this article, you learn how to use these kernels and the benefits of using th
An Apache Spark cluster in HDInsight. For instructions, see [Create Apache Spark clusters in Azure HDInsight](apache-spark-jupyter-spark-sql.md).
-## Create a Jupyter notebook on Spark HDInsight
+## Create a Jupyter Notebook on Spark HDInsight
1. From the [Azure portal](https://portal.azure.com/), select your Spark cluster. See [List and show clusters](../hdinsight-administer-use-portal-linux.md#showClusters) for the instructions. The **Overview** view opens.
-2. From the **Overview** view, in the **Cluster dashboards** box, select **Jupyter notebook**. If prompted, enter the admin credentials for the cluster.
+2. From the **Overview** view, in the **Cluster dashboards** box, select **Jupyter Notebook**. If prompted, enter the admin credentials for the cluster.
- ![Jupyter notebook on Apache Spark](./media/apache-spark-jupyter-notebook-kernels/hdinsight-spark-open-jupyter-interactive-spark-sql-query.png "Jupyter notebook on Spark")
+ ![Jupyter Notebook on Apache Spark](./media/apache-spark-jupyter-notebook-kernels/hdinsight-spark-open-jupyter-interactive-spark-sql-query.png "Jupyter Notebook on Spark")
> [!NOTE]
- > You may also reach the Jupyter notebook on Spark cluster by opening the following URL in your browser. Replace **CLUSTERNAME** with the name of your cluster:
+ > You may also reach the Jupyter Notebook on Spark cluster by opening the following URL in your browser. Replace **CLUSTERNAME** with the name of your cluster:
> > `https://CLUSTERNAME.azurehdinsight.net/jupyter` 3. Select **New**, and then select either **Pyspark**, **PySpark3**, or **Spark** to create a notebook. Use the Spark kernel for Scala applications, PySpark kernel for Python2 applications, and PySpark3 kernel for Python3 applications.
- ![Kernels for Jupyter notebook on Spark](./media/apache-spark-jupyter-notebook-kernels/kernel-jupyter-notebook-on-spark.png "Kernels for Jupyter notebook on Spark")
+ ![Kernels for Jupyter Notebook on Spark](./media/apache-spark-jupyter-notebook-kernels/kernel-jupyter-notebook-on-spark.png "Kernels for Jupyter Notebook on Spark")
4. A notebook opens with the kernel you selected. ## Benefits of using the kernels
-Here are a few benefits of using the new kernels with Jupyter notebook on Spark HDInsight clusters.
+Here are a few benefits of using the new kernels with Jupyter Notebook on Spark HDInsight clusters.
- **Preset contexts**. With **PySpark**, **PySpark3**, or the **Spark** kernels, you don't need to set the Spark or Hive contexts explicitly before you start working with your applications. These contexts are available by default. These contexts are:
@@ -113,7 +113,7 @@ Whichever kernel you use, leaving the notebooks running consumes the cluster res
## Where are the notebooks stored?
-If your cluster uses Azure Storage as the default storage account, Jupyter notebooks are saved to storage account under the **/HdiNotebooks** folder. Notebooks, text files, and folders that you create from within Jupyter are accessible from the storage account. For example, if you use Jupyter to create a folder **`myfolder`** and a notebook **myfolder/mynotebook.ipynb**, you can access that notebook at `/HdiNotebooks/myfolder/mynotebook.ipynb` within the storage account. The reverse is also true, that is, if you upload a notebook directly to your storage account at `/HdiNotebooks/mynotebook1.ipynb`, the notebook is visible from Jupyter as well. Notebooks remain in the storage account even after the cluster is deleted.
+If your cluster uses Azure Storage as the default storage account, Jupyter Notebooks are saved to storage account under the **/HdiNotebooks** folder. Notebooks, text files, and folders that you create from within Jupyter are accessible from the storage account. For example, if you use Jupyter to create a folder **`myfolder`** and a notebook **myfolder/mynotebook.ipynb**, you can access that notebook at `/HdiNotebooks/myfolder/mynotebook.ipynb` within the storage account. The reverse is also true, that is, if you upload a notebook directly to your storage account at `/HdiNotebooks/mynotebook1.ipynb`, the notebook is visible from Jupyter as well. Notebooks remain in the storage account even after the cluster is deleted.
> [!NOTE] > HDInsight clusters with Azure Data Lake Storage as the default storage do not store notebooks in associated storage.
@@ -130,7 +130,7 @@ Whether the cluster uses Azure Storage or Azure Data Lake Storage as the default
## Supported browser
-Jupyter notebooks on Spark HDInsight clusters are supported only on Google Chrome.
+Jupyter Notebooks on Spark HDInsight clusters are supported only on Google Chrome.
## Suggestions
@@ -140,5 +140,5 @@ The new kernels are in evolving stage and will mature over time. So the APIs cou
- [Overview: Apache Spark on Azure HDInsight](apache-spark-overview.md) - [Use Apache Zeppelin notebooks with an Apache Spark cluster on HDInsight](apache-spark-zeppelin-notebook.md)-- [Use external packages with Jupyter notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
+- [Use external packages with Jupyter Notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
- [Install Jupyter on your computer and connect to an HDInsight Spark cluster](apache-spark-jupyter-notebook-install-locally.md)
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-jupyter-notebook-use-external-packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-jupyter-notebook-use-external-packages.md
@@ -1,6 +1,6 @@
--- title: Use custom Maven packages with Jupyter in Spark - Azure HDInsight
-description: Step-by-step instructions on how to configure Jupyter notebooks available with HDInsight Spark clusters to use custom Maven packages.
+description: Step-by-step instructions on how to configure Jupyter Notebooks available with HDInsight Spark clusters to use custom Maven packages.
author: hrasheed-msft ms.author: hrasheed ms.reviewer: jasonh
@@ -10,13 +10,13 @@ ms.custom: hdinsightactive
ms.date: 11/22/2019 ---
-# Use external packages with Jupyter notebooks in Apache Spark clusters on HDInsight
+# Use external packages with Jupyter Notebooks in Apache Spark clusters on HDInsight
Learn how to configure a [Jupyter Notebook](https://jupyter.org/) in an Apache Spark cluster on HDInsight to use external, community-contributed Apache **Maven** packages that aren't included out-of-the-box in the cluster. You can search the [Maven repository](https://search.maven.org/) for the complete list of packages that are available. You can also get a list of available packages from other sources. For example, a complete list of community-contributed packages is available at [Spark Packages](https://spark-packages.org/).
-In this article, you'll learn how to use the [spark-csv](https://search.maven.org/#artifactdetails%7Ccom.databricks%7Cspark-csv_2.10%7C1.4.0%7Cjar) package with the Jupyter notebook.
+In this article, you'll learn how to use the [spark-csv](https://search.maven.org/#artifactdetails%7Ccom.databricks%7Cspark-csv_2.10%7C1.4.0%7Cjar) package with the Jupyter Notebook.
## Prerequisites
@@ -26,13 +26,13 @@ In this article, you'll learn how to use the [spark-csv](https://search.maven.or
* The [URI scheme](../hdinsight-hadoop-linux-information.md#URI-and-scheme) for your cluster's primary storage. This would be `wasb://` for Azure Storage, `abfs://` for Azure Data Lake Storage Gen2, or `adl://` for Azure Data Lake Storage Gen1. If secure transfer is enabled for Azure Storage or Data Lake Storage Gen2, the URI would be `wasbs://` or `abfss://`, respectively. See also [secure transfer](../../storage/common/storage-require-secure-transfer.md).
-## Use external packages with Jupyter notebooks
+## Use external packages with Jupyter Notebooks
1. Navigate to `https://CLUSTERNAME.azurehdinsight.net/jupyter` where `CLUSTERNAME` is the name of your Spark cluster. 1. Create a new notebook. Select **New**, and then select **Spark**.
- ![Create a new Spark Jupyter notebook](./media/apache-spark-jupyter-notebook-use-external-packages/hdinsight-spark-create-notebook.png "Create a new Jupyter notebook")
+ ![Create a new Spark Jupyter Notebook](./media/apache-spark-jupyter-notebook-use-external-packages/hdinsight-spark-create-notebook.png "Create a new Jupyter Notebook")
1. A new notebook is created and opened with the name Untitled.ipynb. Select the notebook name at the top, and enter a friendly name.
@@ -52,9 +52,9 @@ In this article, you'll learn how to use the [spark-csv](https://search.maven.or
a. Locate the package in the Maven Repository. For this article, we use [spark-csv](https://mvnrepository.com/artifact/com.databricks/spark-csv).
- b. From the repository, gather the values for **GroupId**, **ArtifactId**, and **Version**. Make sure that the values you gather match your cluster. In this case, we're using a Scala 2.11 and Spark 1.5.0 package, but you may need to select different versions for the appropriate Scala or Spark version in your cluster. You can find out the Scala version on your cluster by running `scala.util.Properties.versionString` on the Spark Jupyter kernel or on Spark submit. You can find out the Spark version on your cluster by running `sc.version` on Jupyter notebooks.
+ b. From the repository, gather the values for **GroupId**, **ArtifactId**, and **Version**. Make sure that the values you gather match your cluster. In this case, we're using a Scala 2.11 and Spark 1.5.0 package, but you may need to select different versions for the appropriate Scala or Spark version in your cluster. You can find out the Scala version on your cluster by running `scala.util.Properties.versionString` on the Spark Jupyter kernel or on Spark submit. You can find out the Spark version on your cluster by running `sc.version` on Jupyter Notebooks.
- ![Use external packages with Jupyter notebook](./media/apache-spark-jupyter-notebook-use-external-packages/use-external-packages-with-jupyter.png "Use external packages with Jupyter notebook")
+ ![Use external packages with Jupyter Notebook](./media/apache-spark-jupyter-notebook-use-external-packages/use-external-packages-with-jupyter.png "Use external packages with Jupyter Notebook")
c. Concatenate the three values, separated by a colon (**:**).
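A hedged sketch of how the concatenated coordinate is then typically passed to the session through the `%%configure` magic in the notebook's first cell (the version string below is illustrative; match it to your cluster's Scala and Spark versions):

```
%%configure
{ "conf": { "spark.jars.packages": "com.databricks:spark-csv_2.11:1.5.0" } }
```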
@@ -106,11 +106,11 @@ In this article, you'll learn how to use the [spark-csv](https://search.maven.or
### Tools and extensions
-* [Use external python packages with Jupyter notebooks in Apache Spark clusters on HDInsight Linux](apache-spark-python-package-installation.md)
+* [Use external python packages with Jupyter Notebooks in Apache Spark clusters on HDInsight Linux](apache-spark-python-package-installation.md)
* [Use HDInsight Tools Plugin for IntelliJ IDEA to create and submit Spark Scala applications](apache-spark-intellij-tool-plugin.md) * [Use HDInsight Tools Plugin for IntelliJ IDEA to debug Apache Spark applications remotely](apache-spark-intellij-tool-plugin-debug-jobs-remotely.md) * [Use Apache Zeppelin notebooks with an Apache Spark cluster on HDInsight](apache-spark-zeppelin-notebook.md)
-* [Kernels available for Jupyter notebook in Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
+* [Kernels available for Jupyter Notebook in Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
* [Install Jupyter on your computer and connect to an HDInsight Spark cluster](apache-spark-jupyter-notebook-install-locally.md) ### Manage resources
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-jupyter-spark-sql-use-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-jupyter-spark-sql-use-portal.md
@@ -13,7 +13,7 @@ ms.date: 02/25/2020
# Quickstart: Create Apache Spark cluster in Azure HDInsight using Azure portal
-In this quickstart, you use the Azure portal to create an Apache Spark cluster in Azure HDInsight. You then create a Jupyter notebook, and use it to run Spark SQL queries against Apache Hive tables. Azure HDInsight is a managed, full-spectrum, open-source analytics service for enterprises. The Apache Spark framework for HDInsight enables fast data analytics and cluster computing using in-memory processing. Jupyter notebook lets you interact with your data, combine code with markdown text, and do simple visualizations.
+In this quickstart, you use the Azure portal to create an Apache Spark cluster in Azure HDInsight. You then create a Jupyter Notebook, and use it to run Spark SQL queries against Apache Hive tables. Azure HDInsight is a managed, full-spectrum, open-source analytics service for enterprises. The Apache Spark framework for HDInsight enables fast data analytics and cluster computing using in-memory processing. Jupyter Notebook lets you interact with your data, combine code with markdown text, and do simple visualizations.
For in-depth explanations of available configurations, see [Set up clusters in HDInsight](../hdinsight-hadoop-provision-linux-clusters.md). For more information regarding the use of the portal to create clusters, see [Create clusters in the portal](../hdinsight-hadoop-create-linux-clusters-portal.md).
@@ -48,7 +48,7 @@ You use the Azure portal to create an HDInsight cluster that uses Azure Storage
|Region | From the drop-down list, select a region where the cluster is created. | |Cluster type| Select **Select cluster type** to open a list. From the list, select **Spark**.| |Cluster version|This field will auto-populate with the default version once the cluster type has been selected.|
- |Cluster login username| Enter the cluster login username. The default name is **admin**. You use this account to login in to the Jupyter notebook later in the quickstart. |
+ |Cluster login username| Enter the cluster login username. The default name is **admin**. You use this account to log in to the Jupyter Notebook later in the quickstart. |
|Cluster login password| Enter the cluster login password. | |Secure Shell (SSH) username| Enter the SSH username. The SSH username used for this quickstart is **sshuser**. By default, this account shares the same password as the *Cluster Login username* account. |
@@ -73,7 +73,7 @@ You use the Azure portal to create an HDInsight cluster that uses Azure Storage
If you run into an issue with creating HDInsight clusters, it could be that you don't have the right permissions to do so. For more information, see [Access control requirements](../hdinsight-hadoop-customize-cluster-linux.md#access-control).
-## Create a Jupyter notebook
+## Create a Jupyter Notebook
Jupyter Notebook is an interactive notebook environment that supports various programming languages. The notebook allows you to interact with your data, combine code with markdown text and perform simple visualizations.
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-jupyter-spark-sql-use-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-jupyter-spark-sql-use-powershell.md
@@ -13,7 +13,7 @@ ms.custom: mvc, devx-track-azurepowershell
# Quickstart: Create Apache Spark cluster in Azure HDInsight using PowerShell
-In this quickstart, you use Azure PowerShell to create an Apache Spark cluster in Azure HDInsight. You then create a Jupyter notebook, and use it to run Spark SQL queries against Apache Hive tables. Azure HDInsight is a managed, full-spectrum, open-source analytics service for enterprises. The Apache Spark framework for Azure HDInsight enables fast data analytics and cluster computing using in-memory processing. Jupyter notebook lets you interact with your data, combine code with markdown text, and do simple visualizations.
+In this quickstart, you use Azure PowerShell to create an Apache Spark cluster in Azure HDInsight. You then create a Jupyter Notebook, and use it to run Spark SQL queries against Apache Hive tables. Azure HDInsight is a managed, full-spectrum, open-source analytics service for enterprises. The Apache Spark framework for Azure HDInsight enables fast data analytics and cluster computing using in-memory processing. Jupyter Notebook lets you interact with your data, combine code with markdown text, and do simple visualizations.
[Overview: Apache Spark on Azure HDInsight](apache-spark-overview.md) | [Apache Spark](https://spark.apache.org/) | [Apache Hive](https://hive.apache.org/) | [Jupyter Notebook](https://jupyter.org/)
@@ -132,7 +132,7 @@ When you run the PowerShell script, you are prompted to enter the following valu
If you run into an issue with creating HDInsight clusters, it could be that you don't have the right permissions to do so. For more information, see [Access control requirements](../hdinsight-hadoop-customize-cluster-linux.md#access-control).
-## Create a Jupyter notebook
+## Create a Jupyter Notebook
[Jupyter Notebook](https://jupyter.org/) is an interactive notebook environment that supports various programming languages. The notebook allows you to interact with your data, combine code with markdown text and perform simple visualizations.
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-known-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-known-issues.md
@@ -70,13 +70,13 @@ HDInsight Spark clusters do not support the Spark-Phoenix connector.
You must use the Spark-HBase connector instead. For the instructions, see [How to use Spark-HBase connector](https://web.archive.org/web/20190112153146/https://blogs.msdn.microsoft.com/azuredatalake/2016/07/25/hdinsight-how-to-use-spark-hbase-connector/).
-## Issues related to Jupyter notebooks
+## Issues related to Jupyter Notebooks
-Following are some known issues related to Jupyter notebooks.
+Following are some known issues related to Jupyter Notebooks.
### Notebooks with non-ASCII characters in filenames
-Do not use non-ASCII characters in Jupyter notebook filenames. If you try to upload a file through the Jupyter UI, which has a non-ASCII filename, it fails without any error message. Jupyter does not let you upload the file, but it does not throw a visible error either.
+Do not use non-ASCII characters in Jupyter Notebook filenames. If you try to upload a file that has a non-ASCII filename through the Jupyter UI, the upload fails without any error message. Jupyter does not let you upload the file, but it does not throw a visible error either.
### Error while loading notebooks of larger sizes
@@ -95,15 +95,15 @@ To prevent this error from happening in the future, you must follow some best pr
### Notebook initial startup takes longer than expected
-First code statement in Jupyter notebook using Spark magic could take more than a minute.
+First code statement in Jupyter Notebook using Spark magic could take more than a minute.
**Explanation:** This happens because the session is set up when the first code cell runs. In the background, this initiates the session configuration, and the Spark, SQL, and Hive contexts are set. After these contexts are set, the first statement runs, which gives the impression that the statement took a long time to complete.
-### Jupyter notebook timeout in creating the session
+### Jupyter Notebook timeout in creating the session
-When Spark cluster is out of resources, the Spark and PySpark kernels in the Jupyter notebook will time out trying to create the session.
+When the Spark cluster is out of resources, the Spark and PySpark kernels in the Jupyter Notebook will time out trying to create the session.
**Mitigations:**
@@ -135,8 +135,8 @@ When Spark cluster is out of resources, the Spark and PySpark kernels in the Jup
* [Use HDInsight Tools Plugin for IntelliJ IDEA to create and submit Spark Scala applications](apache-spark-intellij-tool-plugin.md) * [Use HDInsight Tools Plugin for IntelliJ IDEA to debug Apache Spark applications remotely](apache-spark-intellij-tool-plugin-debug-jobs-remotely.md) * [Use Apache Zeppelin notebooks with an Apache Spark cluster on HDInsight](apache-spark-zeppelin-notebook.md)
-* [Kernels available for Jupyter notebook in Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
-* [Use external packages with Jupyter notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
+* [Kernels available for Jupyter Notebook in Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
+* [Use external packages with Jupyter Notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
* [Install Jupyter on your computer and connect to an HDInsight Spark cluster](apache-spark-jupyter-notebook-install-locally.md) ### Manage resources
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-livy-rest-interface https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-livy-rest-interface.md
@@ -83,7 +83,7 @@ curl -k --user "admin:mypassword1!" -v -X DELETE "https://mysparkcluster.azurehd
Livy provides high availability for Spark jobs running on the cluster. Here are a couple of examples. * If the Livy service goes down after you've submitted a job remotely to a Spark cluster, the job continues to run in the background. When Livy is back up, it restores the status of the job and reports it back.
-* Jupyter notebooks for HDInsight are powered by Livy in the backend. If a notebook is running a Spark job and the Livy service gets restarted, the notebook continues to run the code cells.
+* Jupyter Notebooks for HDInsight are powered by Livy in the backend. If a notebook is running a Spark job and the Livy service gets restarted, the notebook continues to run the code cells.
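For reference, here's a rough Python equivalent of the curl calls above for submitting a batch job to Livy; the cluster name, credentials, and file path are illustrative placeholders:

```python
# Hedged sketch: submit a Spark batch job to Livy on an HDInsight cluster.
# All names and credentials below are placeholders.
import requests

livy_url = "https://mysparkcluster.azurehdinsight.net/livy/batches"
payload = {
    "file": "wasbs://mycontainer@mystorageaccount.blob.core.windows.net/jars/myapp.jar",
    "className": "com.example.MySparkApp",
}
response = requests.post(
    livy_url,
    json=payload,
    auth=("admin", "mypassword1!"),
    headers={"X-Requested-By": "admin"},
)
print(response.status_code, response.json())  # the returned id identifies the batch
```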
## Show me an example
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-load-data-run-query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-load-data-run-query.md
@@ -25,7 +25,7 @@ In this tutorial, you learn how to:
An Apache Spark cluster on HDInsight. See [Create an Apache Spark cluster](./apache-spark-jupyter-spark-sql-use-portal.md).
-## Create a Jupyter notebook
+## Create a Jupyter Notebook
Jupyter Notebook is an interactive notebook environment that supports various programming languages. The notebook allows you to interact with your data, combine code with markdown text and perform simple visualizations.
@@ -46,7 +46,7 @@ Applications can create dataframes directly from files or folders on the remote
![Snapshot of data for interactive Spark SQL query](./media/apache-spark-load-data-run-query/hdinsight-spark-sample-data-interactive-spark-sql-query.png "Snapshot of data for interactive Spark SQL query")
-1. Paste the following code in an empty cell of the Jupyter notebook, and then press **SHIFT + ENTER** to run the code. The code imports the types required for this scenario:
+1. Paste the following code in an empty cell of the Jupyter Notebook, and then press **SHIFT + ENTER** to run the code. The code imports the types required for this scenario:
```python from pyspark.sql import *
@@ -92,7 +92,7 @@ Once the table is created, you can run an interactive query on the data.
## Clean up resources
-With HDInsight, your data and Jupyter notebooks are stored in Azure Storage or Azure Data Lake Storage, so you can safely delete a cluster when it isn't in use. You're also charged for an HDInsight cluster, even when it's not in use. Since the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they aren't in use. If you plan to work on the next tutorial immediately, you might want to keep the cluster.
+With HDInsight, your data and Jupyter Notebooks are stored in Azure Storage or Azure Data Lake Storage, so you can safely delete a cluster when it isn't in use. You're also charged for an HDInsight cluster, even when it's not in use. Since the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they aren't in use. If you plan to work on the next tutorial immediately, you might want to keep the cluster.
Open the cluster in the Azure portal, and select **Delete**.
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-machine-learning-mllib-ipython https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-machine-learning-mllib-ipython.md
@@ -39,7 +39,7 @@ In the steps below, you develop a model to see what it takes to pass or fail a f
## Create an Apache Spark MLlib machine learning app
-1. Create a Jupyter notebook using the PySpark kernel. For the instructions, see [Create a Jupyter notebook file](./apache-spark-jupyter-spark-sql.md#create-a-jupyter-notebook-file).
+1. Create a Jupyter Notebook using the PySpark kernel. For the instructions, see [Create a Jupyter Notebook file](./apache-spark-jupyter-spark-sql.md#create-a-jupyter-notebook-file).
2. Import the types required for this application. Copy and paste the following code into an empty cell, and then press **SHIFT + ENTER**.
@@ -169,7 +169,7 @@ Let's start to get a sense of what the dataset contains.
SELECT COUNT(results) AS cnt, results FROM CountResults GROUP BY results ```
- The `%%sql` magic followed by `-o countResultsdf` ensures that the output of the query is persisted locally on the Jupyter server (typically the headnode of the cluster). The output is persisted as a [Pandas](https://pandas.pydata.org/) dataframe with the specified name **countResultsdf**. For more information about the `%%sql` magic, and other magics available with the PySpark kernel, see [Kernels available on Jupyter notebooks with Apache Spark HDInsight clusters](apache-spark-jupyter-notebook-kernels.md#parameters-supported-with-the-sql-magic).
+ The `%%sql` magic followed by `-o countResultsdf` ensures that the output of the query is persisted locally on the Jupyter server (typically the headnode of the cluster). The output is persisted as a [Pandas](https://pandas.pydata.org/) dataframe with the specified name **countResultsdf**. For more information about the `%%sql` magic, and other magics available with the PySpark kernel, see [Kernels available on Jupyter Notebooks with Apache Spark HDInsight clusters](apache-spark-jupyter-notebook-kernels.md#parameters-supported-with-the-sql-magic).
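Once that cell completes, a follow-up cell can work with the local copy. A small sketch, assuming the `%%local` magic that these kernels provide for running code on the Jupyter server:

```python
%%local
# Runs on the Jupyter server, not the cluster. countResultsdf is the Pandas
# dataframe produced by the earlier %%sql -o countResultsdf cell.
countResultsdf.head()
```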
The output is:
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-overview.md
@@ -29,7 +29,7 @@ Spark clusters in HDInsight offer a fully managed Spark service. Benefits of cre
| Feature | Description | | --- | --- | | Ease creation |You can create a new Spark cluster in HDInsight in minutes using the Azure portal, Azure PowerShell, or the HDInsight .NET SDK. See [Get started with Apache Spark cluster in HDInsight](apache-spark-jupyter-spark-sql-use-portal.md). |
-| Ease of use |Spark cluster in HDInsight include Jupyter and Apache Zeppelin notebooks. You can use these notebooks for interactive data processing and visualization. See [Use Apache Zeppelin notebooks with Apache Spark](apache-spark-zeppelin-notebook.md) and [Load data and run queries on an Apache Spark cluster](apache-spark-load-data-run-query.md).|
+| Ease of use |Spark clusters in HDInsight include Jupyter Notebooks and Apache Zeppelin Notebooks. You can use these notebooks for interactive data processing and visualization. See [Use Apache Zeppelin notebooks with Apache Spark](apache-spark-zeppelin-notebook.md) and [Load data and run queries on an Apache Spark cluster](apache-spark-load-data-run-query.md).|
| REST APIs |Spark clusters in HDInsight include [Apache Livy](https://github.com/cloudera/hue/tree/master/apps/spark/java#welcome-to-livy-the-rest-spark-server), a REST API-based Spark job server to remotely submit and monitor jobs. See [Use Apache Spark REST API to submit remote jobs to an HDInsight Spark cluster](apache-spark-livy-rest-interface.md).| | Support for Azure Storage | Spark clusters in HDInsight can use Azure Data Lake Storage Gen1/Gen2 as either the primary storage or additional storage. For more information on Data Lake Storage Gen1, see [Azure Data Lake Storage Gen1](../../data-lake-store/data-lake-store-overview.md). For more information on Data Lake Storage Gen2, see [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md).| | Integration with Azure services |Spark clusters in HDInsight come with a connector to Azure Event Hubs. You can build streaming applications using Event Hubs, in addition to Apache Kafka, which is already available as part of Spark. |
@@ -47,7 +47,7 @@ Apache Spark clusters in HDInsight include the following components that are ava
* [Spark Core](https://spark.apache.org/docs/latest/). Includes Spark Core, Spark SQL, Spark streaming APIs, GraphX, and MLlib. * [Anaconda](https://docs.continuum.io/anaconda/) * [Apache Livy](https://github.com/cloudera/hue/tree/master/apps/spark/java#welcome-to-livy-the-rest-spark-server)
-* [Jupyter notebook](https://jupyter.org)
+* [Jupyter Notebook](https://jupyter.org)
* [Apache Zeppelin notebook](http://zeppelin-project.org/) HDInsight Spark clusters include an [ODBC driver](https://go.microsoft.com/fwlink/?LinkId=616229) for connectivity from BI tools such as Microsoft Power BI.
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-python-package-installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-python-package-installation.md
@@ -1,6 +1,6 @@
--- title: Script action for Python packages with Jupyter on Azure HDInsight
-description: Step-by-step instructions on how to use script action to configure Jupyter notebooks available with HDInsight Spark clusters to use external python packages.
+description: Step-by-step instructions on how to use script action to configure Jupyter Notebooks available with HDInsight Spark clusters to use external python packages.
author: hrasheed-msft ms.author: hrasheed ms.reviewer: jasonh
@@ -162,5 +162,5 @@ To check your Anaconda version, you can SSH to the cluster header node and run `
## Next steps * [Overview: Apache Spark on Azure HDInsight](apache-spark-overview.md)
-* [External packages with Jupyter notebooks in Apache Spark](apache-spark-jupyter-notebook-use-external-packages.md)
+* [External packages with Jupyter Notebooks in Apache Spark](apache-spark-jupyter-notebook-use-external-packages.md)
* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-resource-manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-resource-manager.md
@@ -51,9 +51,9 @@ The three configuration parameters can be configured at the cluster level (for a
![Restart services](./media/apache-spark-resource-manager/apache-ambari-restart-services.png)
-### Change the parameters for an application running in Jupyter notebook
+### Change the parameters for an application running in Jupyter Notebook
-For applications running in the Jupyter notebook, you can use the `%%configure` magic to make the configuration changes. Ideally, you must make such changes at the beginning of the application, before you run your first code cell. Doing this ensures that the configuration is applied to the Livy session, when it gets created. If you want to change the configuration at a later stage in the application, you must use the `-f` parameter. However, by doing so all progress in the application is lost.
+For applications running in the Jupyter Notebook, you can use the `%%configure` magic to make the configuration changes. Ideally, make such changes at the beginning of the application, before you run your first code cell. Doing this ensures that the configuration is applied to the Livy session when it gets created. If you want to change the configuration at a later stage in the application, you must use the `-f` parameter. However, by doing so, all progress in the application is lost.
The following snippet shows how to change the configuration for an application running in Jupyter.
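As a hedged sketch of what such a cell can look like (the property names are standard Livy session settings; the values are illustrative), using `-f` here because the session already exists:

```
%%configure -f
{ "driverMemory": "4G", "executorMemory": "4G", "executorCores": 2, "numExecutors": 4 }
```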
@@ -159,6 +159,6 @@ Launch the Yarn UI as shown in the beginning of the article. In Cluster Metrics
* [Use HDInsight Tools Plugin for IntelliJ IDEA to create and submit Spark Scala applications](apache-spark-intellij-tool-plugin.md) * [Use HDInsight Tools Plugin for IntelliJ IDEA to debug Apache Spark applications remotely](apache-spark-intellij-tool-plugin-debug-jobs-remotely.md) * [Use Apache Zeppelin notebooks with an Apache Spark cluster on HDInsight](apache-spark-zeppelin-notebook.md)
-* [Kernels available for Jupyter notebook in Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
-* [Use external packages with Jupyter notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
+* [Kernels available for Jupyter Notebook in Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
+* [Use external packages with Jupyter Notebooks](apache-spark-jupyter-notebook-use-external-packages.md)
* [Install Jupyter on your computer and connect to an HDInsight Spark cluster](apache-spark-jupyter-notebook-install-locally.md)
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-run-machine-learning-automl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-run-machine-learning-automl.md
@@ -19,7 +19,7 @@ For general tutorials of automated machine learning, see [Tutorial: Use automate
All new HDInsight-Spark clusters come pre-installed with AzureML-AutoML SDK. > [!Note]
-> Azure Machine Learning packages are installed into Python3 conda environment. The installed Jupyter notebook should be run using the PySpark3 kernel.
+> Azure Machine Learning packages are installed into the Python3 conda environment. The installed Jupyter Notebook should be run using the PySpark3 kernel.
You can also use Zeppelin notebooks with AutoML.
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-settings.md
@@ -125,7 +125,7 @@ Spark clusters in HDInsight include a number of components by default. Each of t
|Spark Core|Spark Core, Spark SQL, Spark streaming APIs, GraphX, and Apache Spark MLlib.| |Anaconda|A python package manager.| |Apache Livy|The Apache Spark REST API, used to submit remote jobs to an HDInsight Spark cluster.|
-|Jupyter and Apache Zeppelin notebooks|Interactive browser-based UI for interacting with your Spark cluster.|
+|Jupyter Notebooks and Apache Zeppelin Notebooks|Interactive browser-based UI for interacting with your Spark cluster.|
|ODBC driver|Connects Spark clusters in HDInsight to business intelligence (BI) tools such as Microsoft Power BI and Tableau.| For applications running in the Jupyter Notebook, use the `%%configure` command to make configuration changes from within the notebook itself. These configuration changes will be applied to the Spark jobs run from your notebook instance. Make such changes at the beginning of the application, before you run your first code cell. The changed configuration is applied to the Livy session when it gets created.
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-use-bi-tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-use-bi-tools.md
@@ -34,7 +34,7 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
The [Jupyter Notebook](https://jupyter.org/) that you created in the [previous tutorial](apache-spark-load-data-run-query.md) includes code to create an `hvac` table. This table is based on the CSV file available on all HDInsight Spark clusters at `\HdiSamples\HdiSamples\SensorSampleData\hvac\hvac.csv`. Use the following procedure to verify the data.
-1. From the Jupyter notebook, paste the following code, and then press **SHIFT + ENTER**. The code verifies the existence of the tables.
+1. From the Jupyter Notebook, paste the following code, and then press **SHIFT + ENTER**. The code verifies the existence of the tables.
```PySpark %%sql
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-use-with-data-lake-store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-use-with-data-lake-store.md
@@ -68,7 +68,7 @@ If you created an HDInsight cluster with Data Lake Storage as additional storage
3. Create a new notebook. Click **New**, and then click **PySpark**.
- ![Create a new Jupyter notebook](./media/apache-spark-use-with-data-lake-store/hdinsight-create-jupyter-notebook.png "Create a new Jupyter notebook")
+ ![Create a new Jupyter Notebook](./media/apache-spark-use-with-data-lake-store/hdinsight-create-jupyter-notebook.png "Create a new Jupyter Notebook")
4. Because you created a notebook using the PySpark kernel, you do not need to create any contexts explicitly. The Spark and Hive contexts will be automatically created for you when you run the first code cell. You can start by importing the types required for this scenario. To do so, paste the following code snippet in a cell and press **SHIFT + ENTER**.
@@ -78,7 +78,7 @@ If you created an HDInsight cluster with Data Lake Storage as additional storage
Every time you run a job in Jupyter, your web browser window title will show a **(Busy)** status along with the notebook title. You will also see a solid circle next to the **PySpark** text in the top-right corner. After the job is completed, this will change to a hollow circle.
- ![Status of a Jupyter notebook job](./media/apache-spark-use-with-data-lake-store/hdinsight-jupyter-job-status.png "Status of a Jupyter notebook job")
+ ![Status of a Jupyter Notebook job](./media/apache-spark-use-with-data-lake-store/hdinsight-jupyter-job-status.png "Status of a Jupyter Notebook job")
5. Load sample data into a temporary table using the **HVAC.csv** file you copied to the Data Lake Storage Gen1 account. You can access the data in the Data Lake Storage account using the following URL pattern.
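A hedged sketch of that URL pattern in a notebook cell; the Data Lake Storage Gen1 account name and path below are illustrative:

```python
# Illustrative only: read the sample file over the adl:// scheme.
# Pattern: adl://<data-lake-store-name>.azuredatalakestore.net/<path-to-file>
hvacText = sc.textFile("adl://mydatalakestore.azuredatalakestore.net/hvac/HVAC.csv")
hvacText.take(3)   # peek at the first few CSV lines
```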
@@ -119,7 +119,7 @@ If you created an HDInsight cluster with Data Lake Storage as additional storage
hvacdf.registerTempTable("hvac") ```
-6. Because you are using a PySpark kernel, you can now directly run a SQL query on the temporary table **hvac** that you just created by using the `%%sql` magic. For more information about the `%%sql` magic, as well as other magics available with the PySpark kernel, see [Kernels available on Jupyter notebooks with Apache Spark HDInsight clusters](apache-spark-jupyter-notebook-kernels.md#parameters-supported-with-the-sql-magic).
+6. Because you are using a PySpark kernel, you can now directly run a SQL query on the temporary table **hvac** that you just created by using the `%%sql` magic. For more information about the `%%sql` magic, as well as other magics available with the PySpark kernel, see [Kernels available on Jupyter Notebooks with Apache Spark HDInsight clusters](apache-spark-jupyter-notebook-kernels.md#parameters-supported-with-the-sql-magic).
```sql %%sql
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-zeppelin-notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-zeppelin-notebook.md
@@ -110,7 +110,7 @@ HDInsight Spark clusters include [Apache Zeppelin](https://zeppelin.apache.org/)
Zeppelin notebook in Apache Spark cluster on HDInsight can use external, community-contributed packages that aren't included in the cluster. Search the [Maven repository](https://search.maven.org/) for the complete list of packages that are available. You can also get a list of available packages from other sources. For example, a complete list of community-contributed packages is available at [Spark Packages](https://spark-packages.org/).
-In this article, you'll see how to use the [spark-csv](https://search.maven.org/#artifactdetails%7Ccom.databricks%7Cspark-csv_2.10%7C1.4.0%7Cjar) package with the Jupyter notebook.
+In this article, you'll see how to use the [spark-csv](https://search.maven.org/#artifactdetails%7Ccom.databricks%7Cspark-csv_2.10%7C1.4.0%7Cjar) package with the Jupyter Notebook.
1. Open interpreter settings. From the top-right corner, select the logged in user name, then select **Interpreter**.
@@ -132,7 +132,7 @@ In this article, you'll see how to use the [spark-csv](https://search.maven.org/
b. From the repository, gather the values for **GroupId**, **ArtifactId**, and **Version**.
- ![Use external packages with Jupyter notebook](./media/apache-spark-zeppelin-notebook/use-external-packages-with-jupyter.png "Use external packages with Jupyter notebook")
+ ![Use external packages with Jupyter Notebook](./media/apache-spark-zeppelin-notebook/use-external-packages-with-jupyter.png "Use external packages with Jupyter Notebook")
c. Concatenate the three values, separated by a colon (**:**).
@@ -222,5 +222,5 @@ To validate the service from a command line, SSH to the head node. Switch user t
## Next steps * [Overview: Apache Spark on Azure HDInsight](apache-spark-overview.md)
-* [Kernels available for Jupyter notebook in Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
+* [Kernels available for Jupyter Notebook in Apache Spark cluster for HDInsight](apache-spark-jupyter-notebook-kernels.md)
* [Install Jupyter on your computer and connect to an HDInsight Spark cluster](apache-spark-jupyter-notebook-install-locally.md)
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-troubleshoot-spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-troubleshoot-spark.md
@@ -70,9 +70,9 @@ Spark configuration values can be tuned to help avoid an Apache Spark application `OutOfMemoryError` exception.
These changes are cluster-wide but can be overridden when you submit the Spark job.
-## How do I configure an Apache Spark application by using a Jupyter notebook on clusters?
+## How do I configure an Apache Spark application by using a Jupyter Notebook on clusters?
-In the first cell of the Jupyter notebook, after the **%%configure** directive, specify the Spark configurations in valid JSON format. Change the actual values as necessary:
+In the first cell of the Jupyter Notebook, after the **%%configure** directive, specify the Spark configurations in valid JSON format. Change the actual values as necessary:
![Add a configuration](./media/apache-troubleshoot-spark/add-configuration-cell.png)
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-security.md
@@ -8,7 +8,7 @@ ms.service: iot-hub
services: iot-hub ms.topic: conceptual ms.date: 07/18/2018
-ms.custom: [amqp, mqtt, 'Role: Cloud Development', 'Role: IoT Device', 'Role: Operations', devx-track-js, devx-track-csharp, devx-track-azurecli]
+ms.custom: [amqp, mqtt, 'Role: Cloud Development', 'Role: IoT Device', 'Role: Operations', devx-track-js, devx-track-csharp]
--- # Control access to IoT Hub
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-device-streams-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-streams-overview.md
@@ -7,7 +7,7 @@ ms.service: iot-hub
ms.topic: conceptual ms.date: 01/15/2019 ms.author: robinsh
-ms.custom: ['Role: Cloud Development','Role: IoT Device','Role: Technical Support', devx-track-azurecli]
+ms.custom: ['Role: Cloud Development','Role: IoT Device','Role: Technical Support']
--- # IoT Hub Device Streams (preview)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/certificates/how-to-export-certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/how-to-export-certificate.md
@@ -8,7 +8,7 @@ tags: azure-key-vault
ms.service: key-vault ms.subservice: certificates ms.topic: how-to
-ms.custom: mvc, devx-track-azurecli
+ms.custom: mvc
ms.date: 08/11/2020 ms.author: sebansal #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store certificates in Azure.
key-vault https://docs.microsoft.com/en-us/azure/key-vault/certificates/quick-create-net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-net.md
@@ -7,7 +7,7 @@ ms.date: 09/23/2020
ms.service: key-vault ms.subservice: certificates ms.topic: quickstart
-ms.custom: devx-track-csharp, devx-track-azurecli
+ms.custom: devx-track-csharp
--- # Quickstart: Azure Key Vault certificate client library for .NET (SDK v4)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/certificates/quick-create-node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-node.md
@@ -7,7 +7,7 @@ ms.date: 12/6/2020
ms.service: key-vault ms.subservice: certificates ms.topic: quickstart
-ms.custom: devx-track-js, devx-track-azurecli
+ms.custom: devx-track-js
--- # Quickstart: Azure Key Vault certificate client library for JavaScript (version 4)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/keys/quick-create-net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/quick-create-net.md
@@ -7,7 +7,7 @@ ms.date: 09/23/2020
ms.service: key-vault ms.subservice: keys ms.topic: quickstart
-ms.custom: devx-track-csharp, devx-track-azurecli
+ms.custom: devx-track-csharp
--- # Quickstart: Azure Key Vault key client library for .NET (SDK v4)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/keys/quick-create-node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/quick-create-node.md
@@ -7,7 +7,7 @@ ms.date: 12/6/2020
ms.service: key-vault ms.subservice: keys ms.topic: quickstart
-ms.custom: devx-track-js, devx-track-azurecli
+ms.custom: devx-track-js
--- # Quickstart: Azure Key Vault key client library for JavaScript (version 4)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/secrets/quick-create-net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/quick-create-net.md
@@ -7,7 +7,7 @@ ms.date: 09/23/2020
ms.service: key-vault ms.subservice: secrets ms.topic: quickstart
-ms.custom: devx-track-csharp, devx-track-azurecli
+ms.custom: devx-track-csharp
--- # Quickstart: Azure Key Vault secret client library for .NET (SDK v4)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/secrets/quick-create-node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/quick-create-node.md
@@ -7,7 +7,7 @@ ms.date: 12/6/2020
ms.service: key-vault ms.subservice: secrets ms.topic: quickstart
-ms.custom: devx-track-js, devx-track-azurecli
+ms.custom: devx-track-js
--- # Quickstart: Azure Key Vault secret client library for JavaScript (version 4)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/secrets/tutorial-rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/tutorial-rotation.md
@@ -11,7 +11,7 @@ ms.subservice: secrets
ms.topic: tutorial ms.date: 01/26/2020 ms.author: mbaldwin
-ms.custom: devx-track-csharp, devx-track-azurecli
+ms.custom: devx-track-csharp
--- # Automate the rotation of a secret for resources that use one set of authentication credentials
@@ -20,7 +20,8 @@ The best way to authenticate to Azure services is by using a [managed identity](
This tutorial shows how to automate the periodic rotation of secrets for databases and services that use one set of authentication credentials. Specifically, this tutorial rotates SQL Server passwords stored in Azure Key Vault by using a function triggered by Azure Event Grid notification:
-![Diagram of rotation solution](../media/rotate-1.png)
+
+:::image type="content" source="../media/rotate-1.png" alt-text="Diagram of rotation solution":::
1. Thirty days before the expiration date of a secret, Key Vault publishes the "near expiry" event to Event Grid. 1. Event Grid checks the event subscriptions and uses HTTP POST to call the function app endpoint subscribed to the event.
@@ -39,19 +40,19 @@ This tutorial shows how to automate the periodic rotation of secrets for databas
@@ -39,19 +40,19 @@ The deployment link below can be used if you don't have an existing Key Vault and SQL Server:
-[![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fjlichwa%2FKeyVault-Rotation-SQLPassword-Csharp%2Fmaster%2Farm-templates%2FInitial-Setup%2Fazuredeploy.json)
+[![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-SQLPassword-Csharp%2Fmain%2FARM-Templates%2FInitial-Setup%2Fazuredeploy.json)
1. Under **Resource group**, select **Create new**. Name the group **akvrotation**. 1. Under **Sql Admin Login**, type the SQL administrator login name. 1. Select **Review + create**. 1. Select **Create**.
- ![Create a resource group](../media/rotate-2.png)
+:::image type="content" source="../media/rotate-2.png" alt-text="Create a resource group":::
You'll now have a Key Vault, and a SQL Server instance. You can verify this setup in the Azure CLI by running the following command: ```azurecli
-az resource list -o table
+az resource list -o table -g akvrotation
``` The result will look something like the following output:
@@ -59,9 +60,11 @@ The result will look something the following output:
```console Name ResourceGroup Location Type Status ----------------------- -------------------- ---------- --------------------------------- --------
-akvrotation-kv akvrotation eastus Microsoft.KeyVault/vaults
-akvrotation-sql akvrotation eastus Microsoft.Sql/servers
-akvrotation-sql/master akvrotation eastus Microsoft.Sql/servers/databases
+akvrotation-kv akvrotation eastus Microsoft.KeyVault/vaults
+akvrotation-sql akvrotation eastus Microsoft.Sql/servers
+akvrotation-sql/master akvrotation eastus Microsoft.Sql/servers/databases
+akvrotation-sql2 akvrotation eastus Microsoft.Sql/servers
+akvrotation-sql2/master akvrotation eastus Microsoft.Sql/servers/databases
``` ## Create and deploy sql server password rotation function
@@ -79,23 +82,24 @@ The function app requires these components:
1. Select the Azure template deployment link:
- [![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fjlichwa%2FKeyVault-Rotation-SQLPassword-Csharp%2Fmaster%2Farm-templates%2FFunction%2Fazuredeploy.json)
+ [![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-SQLPassword-Csharp%2Fmain%2FARM-Templates%2FFunction%2Fazuredeploy.json)
1. In the **Resource group** list, select **akvrotation**. 1. In the **Sql Server Name**, type the SQL Server name with the password to rotate. 1. In the **Key Vault Name**, type the key vault name. 1. In the **Function App Name**, type the function app name. 1. In the **Secret Name**, type the secret name where the password will be stored.
-1. In the **Repo Url**, type function code GitHub location (**https://github.com/jlichwa/KeyVault-Rotation-SQLPassword-Csharp.git**)
+1. In the **Repo Url**, type function code GitHub location (**https://github.com/Azure-Samples/KeyVault-Rotation-SQLPassword-Csharp.git**)
1. Select **Review + create**. 1. Select **Create**.
- ![Select Review+create](../media/rotate-3.png)
+:::image type="content" source="../media/rotate-3.png" alt-text="Select Review+create":::
+
After you complete the preceding steps, you'll have a storage account, a server farm, and a function app. You can verify this setup in the Azure CLI by running the following command: ```azurecli
-az resource list -o table
+az resource list -o table -g akvrotation
``` The result will look something like the following output:
@@ -184,7 +188,7 @@ This rotation method reads database information from the secret, creates a new v
} } ```
-You can find the complete code on [GitHub](https://github.com/jlichwa/KeyVault-Rotation-SQLPassword-Csharp).
+You can find the complete code on [GitHub](https://github.com/Azure-Samples/KeyVault-Rotation-SQLPassword-Csharp).
## Add the secret to Key Vault Set your access policy to grant *manage secrets* permissions to users:
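The tutorial itself uses Azure CLI commands for this step; as a rough, non-authoritative illustration, the same kind of secret with a short expiration can also be created with the Key Vault Python SDK (the vault name follows this tutorial's naming, and the value is a placeholder):

```python
# Hedged sketch: create a secret with a short expiration so that Key Vault
# publishes a SecretNearExpiry event, which kicks off the rotation function.
from datetime import datetime, timedelta, timezone
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://akvrotation-kv.vault.azure.net",
    credential=DefaultAzureCredential())

client.set_secret(
    "sqlPassword", "<initial-sql-password>",
    expires_on=datetime.now(timezone.utc) + timedelta(days=1))
```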
@@ -206,11 +210,11 @@ Creating a secret with a short expiration date will publish a `SecretNearExpiry`
To verify that the secret has rotated, go to **Key Vault** > **Secrets**:
-![Go to Secrets](../media/rotate-8.png)
+:::image type="content" source="../media/rotate-8.png" alt-text="Go to Secrets":::
Open the **sqlPassword** secret and view the original and rotated versions:
-![Open the sqluser secret](../media/rotate-9.png)
+:::image type="content" source="../media/rotate-9.png" alt-text="Go to Secrets":::
### Create a web app
@@ -222,13 +226,13 @@ The web app requires these components:
1. Select the Azure template deployment link:
- [![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fjlichwa%2FKeyVault-Rotation-SQLPassword-Csharp-WebApp%2Fmaster%2Farm-templates%2FWeb-App%2Fazuredeploy.json)
+ [![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-SQLPassword-Csharp-WebApp%2Fmain%2FARM-Templates%2FWeb-App%2Fazuredeploy.json)
1. Select the **akvrotation** resource group. 1. In the **Sql Server Name**, type the SQL Server name with the password to rotate. 1. In the **Key Vault Name**, type the key vault name. 1. In the **Secret Name**, type the secret name where the password is stored.
-1. In the **Repo Url**, type web app code GitHub location (**https://github.com/jlichwa/KeyVault-Rotation-SQLPassword-Csharp-WebApp.git**)
+1. In the **Repo Url**, type web app code GitHub location (**https://github.com/Azure-Samples/KeyVault-Rotation-SQLPassword-Csharp-WebApp.git**)
1. Select **Review + create**. 1. Select **Create**.
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/create-integration-service-environment-rest-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-integration-service-environment-rest-api.md
@@ -5,7 +5,7 @@ services: logic-apps
ms.suite: integration ms.reviewer: rarayudu, logicappspm ms.topic: conceptual
-ms.date: 12/05/2020
+ms.date: 12/29/2020
--- # Create an integration service environment (ISE) by using the Logic Apps REST API
@@ -66,9 +66,7 @@ In the request header, include these properties:
In the request body, provide the resource definition to use for creating your ISE, including information for additional capabilities that you want to enable on your ISE, for example:
-* To create an ISE that permits using a self-signed certificate that's installed at the `TrustedRoot` location, include the `certificates` object inside the ISE definition's `properties` section, as this article later describes.
-
- To enable this capability on an existing ISE, you can send a PATCH request for only the `certificates` object. For more information about using self-signed certificates, see [Secure access and data - Access for outbound calls to other services and systems](../logic-apps/logic-apps-securing-a-logic-app.md#secure-outbound-requests).
+* To create an ISE that permits using a self-signed certificate or a certificate issued by an Enterprise Certificate Authority that's installed at the `TrustedRoot` location, include the `certificates` object inside the ISE definition's `properties` section, as this article later describes.
* To create an ISE that uses a system-assigned or user-assigned managed identity, include the `identity` object with the managed identity type and other required information in the ISE definition, as this article later describes.
@@ -120,7 +118,7 @@ Here is the request body syntax, which describes the properties to use when you
} ] },
- // Include `certificates` object to enable self-signed certificate support
+ // Include `certificates` object to enable a self-signed certificate and a certificate issued by an Enterprise Certificate Authority
"certificates": { "testCertificate": { "publicCertificate": "{base64-encoded-certificate}",
@@ -180,6 +178,45 @@ This example request body shows the sample values:
"publicCertificate": "LS0tLS1CRUdJTiBDRV...", "kind": "TrustedRoot" }
+ }
+ }
+}
+```
+## Add custom root certificates
+
+You often use an ISE to connect to custom services on your virtual network or on premises. These custom services are often protected by a certificate that's issued by a custom root certificate authority, such as an Enterprise Certificate Authority, or by a self-signed certificate. For more information about using self-signed certificates, see [Secure access and data - Access for outbound calls to other services and systems](../logic-apps/logic-apps-securing-a-logic-app.md#secure-outbound-requests). For your ISE to successfully connect to these services through Transport Layer Security (TLS), your ISE needs access to these root certificates. To update your ISE with a custom trusted root certificate, make this HTTPS `PATCH` request:
+
+`PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationServiceEnvironments/{integrationServiceEnvironmentName}?api-version=2019-05-01`
+
+Before you perform this operation, review these considerations:
+
+* Make sure that you upload the root certificate *and* all the intermediate certificates. The maximum number of certificates is 20.
+
+* Uploading root certificates is a replacement operation where the latest upload overwrites previous uploads. For example, if you send a request that uploads one certificate, and then send another request to upload another certificate, your ISE uses only the second certificate. If you need to use both certificates, add them together in the same request.
+
+* Uploading root certificates is an asynchronous operation that might take some time. To check the status or result, you can send a `GET` request by using the same URI. The response message has a `provisioningState` field that returns the `InProgress` value when the upload operation is still working. When the `provisioningState` value is `Succeeded`, the upload operation is complete.
+
+#### Request body syntax for adding custom root certificates
+
+Here is the request body syntax, which describes the properties to use when you add root certificates:
+
+```json
+{
+ "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Logic/integrationServiceEnvironments/{ISE-name}",
+ "name": "{ISE-name}",
+ "type": "Microsoft.Logic/integrationServiceEnvironments",
+ "location": "{Azure-region}",
+ "properties": {
+ "certificates": {
+ "testCertificate1": {
+ "publicCertificate": "{base64-encoded-certificate}",
+ "kind": "TrustedRoot"
+ },
+ "testCertificate2": {
+ "publicCertificate": "{base64-encoded-certificate}",
+ "kind": "TrustedRoot"
+ }
+ }
} } ```
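As a hedged sketch (not part of the article), here's one way to send this `PATCH` request with Python, using `azure-identity` for the management token; the subscription, resource group, and ISE names are placeholders, and the body is trimmed to the `certificates` section shown above:

```python
# Illustrative only: PATCH the ISE with custom trusted root certificates and
# then poll the same URI until provisioningState reports Succeeded.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = ("https://management.azure.com/subscriptions/<subscription-id>"
       "/resourceGroups/<resource-group>/providers/Microsoft.Logic"
       "/integrationServiceEnvironments/<ise-name>?api-version=2019-05-01")
headers = {"Authorization": f"Bearer {token}"}

body = {
    "properties": {
        "certificates": {
            "testCertificate1": {"publicCertificate": "<base64-encoded-certificate>",
                                 "kind": "TrustedRoot"},
            "testCertificate2": {"publicCertificate": "<base64-encoded-certificate>",
                                 "kind": "TrustedRoot"},
        }
    }
}

print(requests.patch(url, json=body, headers=headers).status_code)

# The upload is asynchronous: poll with GET until provisioningState is Succeeded.
status = requests.get(url, headers=headers).json()
print(status["properties"]["provisioningState"])
```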
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/workflow-definition-language-functions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/workflow-definition-language-functions-reference.md
@@ -3,7 +3,7 @@ title: Reference guide for functions in expressions
description: Reference guide to functions in expressions for Azure Logic Apps and Power Automate services: logic-apps ms.suite: integration
-ms.reviewer: estfan, logicappspm
+ms.reviewer: estfan, logicappspm, azla
ms.topic: conceptual ms.date: 09/04/2020 ---
@@ -1814,7 +1814,7 @@ This example creates a URI-encoded version for this string:
encodeUriComponent('https://contoso.com') ```
-And returns this result: `"http%3A%2F%2Fcontoso.com"`
+And returns this result: `"https%3A%2F%2Fcontoso.com"`
<a name="empty"></a>
@@ -4273,8 +4273,7 @@ triggerBody()
### triggerFormDataMultiValues
-Return an array with values that match a key name
-in a trigger's *form-data* or *form-encoded* output.
+Return an array with values that match a key name in a trigger's *form-data* or *form-encoded* output.
``` triggerFormDataMultiValues('<key>')
@@ -4292,14 +4291,13 @@ triggerFormDataMultiValues('<key>')
*Example*
-This example creates an array from the "feedUrl" key value in
-an RSS trigger's form-data or form-encoded output:
+This example creates an array from the "feedUrl" key value in an RSS trigger's form-data or form-encoded output:
``` triggerFormDataMultiValues('feedUrl') ```
-And returns this array as an example result: `["http://feeds.reuters.com/reuters/topNews"]`
+And returns this array as an example result: `["https://feeds.a.dj.com/rss/RSSMarketsMain.xml"]`
<a name="triggerFormDataValue"></a>
@@ -4333,7 +4331,7 @@ an RSS trigger's form-data or form-encoded output:
triggerFormDataValue('feedUrl') ```
-And returns this string as an example result: `"http://feeds.reuters.com/reuters/topNews"`
+And returns this string as an example result: `"https://feeds.a.dj.com/rss/RSSMarketsMain.xml"`
<a name="triggerMultipartBody"></a>
@@ -4471,7 +4469,7 @@ This example creates a URI-encoded version for this string:
uriComponent('https://contoso.com') ```
-And returns this result: `"http%3A%2F%2Fcontoso.com"`
+And returns this result: `"https%3A%2F%2Fcontoso.com"`
<a name="uriComponentToBinary"></a>
@@ -4498,7 +4496,7 @@ uriComponentToBinary('<value>')
This example creates the binary version for this URI-encoded string: ```
-uriComponentToBinary('http%3A%2F%2Fcontoso.com')
+uriComponentToBinary('https%3A%2F%2Fcontoso.com')
``` And returns this result:
@@ -4534,7 +4532,7 @@ uriComponentToString('<value>')
This example creates the decoded string version for this URI-encoded string: ```
-uriComponentToString('http%3A%2F%2Fcontoso.com')
+uriComponentToString('https%3A%2F%2Fcontoso.com')
``` And returns this result: `"https://contoso.com"`
@@ -4594,7 +4592,7 @@ uriPath('<uri>')
This example finds the `path` value for this URI: ```
-uriPath('http://www.contoso.com/catalog/shownew.htm?date=today')
+uriPath('https://www.contoso.com/catalog/shownew.htm?date=today')
``` And returns this result: `"/catalog/shownew.htm"`
@@ -4624,7 +4622,7 @@ uriPathAndQuery('<uri>')
This example finds the `path` and `query` values for this URI: ```
-uriPathAndQuery('http://www.contoso.com/catalog/shownew.htm?date=today')
+uriPathAndQuery('https://www.contoso.com/catalog/shownew.htm?date=today')
``` And returns this result: `"/catalog/shownew.htm?date=today"`
@@ -4654,7 +4652,7 @@ uriPort('<uri>')
This example returns the `port` value for this URI: ```
-uriPort('http://www.localhost:8080')
+uriPort('https://www.localhost:8080')
``` And returns this result: `8080`
@@ -4684,7 +4682,7 @@ uriQuery('<uri>')
This example returns the `query` value for this URI: ```
-uriQuery('http://www.contoso.com/catalog/shownew.htm?date=today')
+uriQuery('https://www.contoso.com/catalog/shownew.htm?date=today')
``` And returns this result: `"?date=today"`
@@ -4714,7 +4712,7 @@ uriScheme('<uri>')
This example returns the `scheme` value for this URI: ```
-uriScheme('http://www.contoso.com/catalog/shownew.htm?date=today')
+uriScheme('https://www.contoso.com/catalog/shownew.htm?date=today')
``` And returns this result: `"https"`
@@ -5055,16 +5053,16 @@ Here is the result: `30`
*Example 8*
-In this example, suppose you have this XML string, which includes the XML document namespace, `xmlns="http://contoso.com"`:
+In this example, suppose you have this XML string, which includes the XML document namespace, `xmlns="https://contoso.com"`:
```xml
-<?xml version="1.0"?><file xmlns="http://contoso.com"><location>Paris</location></file>
+<?xml version="1.0"?><file xmlns="https://contoso.com"><location>Paris</location></file>
```
-These expressions use either XPath expression, `/*[name()="file"]/*[name()="location"]` or `/*[local-name()="file" and namespace-uri()="http://contoso.com"]/*[local-name()="location"]`, to find nodes that match the `<location></location>` node. These examples show the syntax that you use in either the Logic App Designer or in the expression editor:
+These expressions use either XPath expression, `/*[name()="file"]/*[name()="location"]` or `/*[local-name()="file" and namespace-uri()="https://contoso.com"]/*[local-name()="location"]`, to find nodes that match the `<location></location>` node. These examples show the syntax that you use in either the Logic App Designer or in the expression editor:
* `xpath(xml(body('Http')), '/*[name()="file"]/*[name()="location"]')`
-* `xpath(xml(body('Http')), '/*[local-name()="file" and namespace-uri()="http://contoso.com"]/*[local-name()="location"]')`
+* `xpath(xml(body('Http')), '/*[local-name()="file" and namespace-uri()="https://contoso.com"]/*[local-name()="location"]')`
Here is the result node that matches the `<location></location>` node:
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-auto-train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
@@ -462,10 +462,20 @@ For general information on how model explanations and feature importance can be
* **`import numpy` fails in Windows**: Some Windows environments see an error loading numpy with the latest Python version 3.6.8. If you see this issue, try with Python version 3.6.7.
-* **`import numpy` fails**: Check the TensorFlow version in the automated ml conda environment. Supported versions are < 1.13. Uninstall TensorFlow from the environment if version is >= 1.13 You may check the version of TensorFlow and uninstall as follows -
+* **`import numpy` fails**: Check the TensorFlow version in the automated ml conda environment. Supported versions are < 1.13. Uninstall TensorFlow from the environment if the version is >= 1.13. You can check the TensorFlow version and uninstall it as follows:
  1. Start a command shell and activate the conda environment where the automated ml packages are installed.
  2. Enter `pip freeze` and look for `tensorflow`. If found, the listed version should be < 1.13.
  3. If the listed version is not a supported version, run `pip uninstall tensorflow` in the command shell and enter `y` for confirmation. A quick programmatic version check is sketched below.
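As a quick programmatic alternative to `pip freeze` (an editorial sketch; it assumes TensorFlow is importable in the activated automated ml environment):

```python
# Print the installed TensorFlow version; automated ML in this SDK release expects < 1.13.
import tensorflow as tf

print(tf.__version__)
```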
+
+ * **Run fails with `jwt.exceptions.DecodeError`**: Exact error message: `jwt.exceptions.DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode()`.
+
+ Consider upgrading to the latest version of AutoML SDK: `pip install -U azureml-sdk[automl]`.
+
+   If that is not viable, check the version of PyJWT. Supported versions are < 2.0.0. Uninstall PyJWT from the environment if the version is >= 2.0.0. You can check the PyJWT version, uninstall it, and install a supported version as follows (a short sketch after these steps shows why PyJWT 2.x triggers this error):
+ 1. Start a command shell, activate conda environment where automated ml packages are installed.
+   2. Enter `pip freeze` and look for `PyJWT`. If found, the listed version should be < 2.0.0.
+ 3. If the listed version is not a supported version, `pip uninstall PyJWT` in the command shell and enter y for confirmation.
+ 4. Install using `pip install 'PyJWT<2.0.0'`.
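For context on why the `jwt.exceptions.DecodeError` appears, here is a small illustrative sketch (not from the article): PyJWT 2.x makes the `algorithms` argument to `decode()` mandatory, while PyJWT 1.x tolerated omitting it.

```python
import jwt  # PyJWT

# Create a token just for demonstration.
token = jwt.encode({"sub": "demo"}, "secret", algorithm="HS256")

# With PyJWT >= 2.0.0, omitting algorithms=[...] raises jwt.exceptions.DecodeError
# ("It is required that you pass in a value for the 'algorithms' argument").
claims = jwt.decode(token, "secret", algorithms=["HS256"])
print(claims)
```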
## Next steps
@@ -473,4 +483,4 @@ For general information on how model explanations and feature importance can be
+ Learn more about [how to train a regression model with Automated machine learning](tutorial-auto-train-models.md) or [how to train using Automated machine learning on a remote resource](how-to-auto-train-remote.md).
-+ Learn how to train multiple models with AutoML in the [Many Models Solution Accelerator](https://aka.ms/many-models).
\ No newline at end of file++ Learn how to train multiple models with AutoML in the [Many Models Solution Accelerator](https://aka.ms/many-models).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-consume-web-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-consume-web-service.md
@@ -10,7 +10,7 @@ author: aashishb
ms.reviewer: larryfr ms.date: 10/12/2020 ms.topic: conceptual
-ms.custom: "how-to, devx-track-python, devx-track-csharp, devx-track-azurecli"
+ms.custom: "how-to, devx-track-python, devx-track-csharp"
#Customer intent: As a developer, I need to understand how to create a client application that consumes the web service of a deployed ML model.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-kubernetes-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-azure-kubernetes-service.md
@@ -6,7 +6,7 @@ services: machine-learning
ms.service: machine-learning ms.subservice: core ms.topic: conceptual
-ms.custom: how-to, contperf-fy21q1, deploy, devx-track-azurecli
+ms.custom: how-to, contperf-fy21q1, deploy
ms.author: jordane author: jpe316 ms.reviewer: larryfr
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-existing-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-existing-model.md
@@ -10,7 +10,7 @@ author: jpe316
ms.reviewer: larryfr ms.date: 07/17/2020 ms.topic: conceptual
-ms.custom: how-to, devx-track-python, deploy, devx-track-azurecli
+ms.custom: how-to, devx-track-python, deploy
--- # Deploy your existing model with Azure Machine Learning
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-update-web-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-update-web-service.md
@@ -10,7 +10,7 @@ ms.reviewer: larryfr
ms.author: gopalv author: gvashishtha ms.date: 07/31/2020
-ms.custom: deploy, devx-track-azurecli
+ms.custom: deploy
--- # Update a deployed web service
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-with-triton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-with-triton.md
@@ -10,7 +10,7 @@ author: gvashishtha
ms.date: 09/23/2020 ms.topic: conceptual ms.reviewer: larryfr
-ms.custom: deploy, devx-track-azurecli
+ms.custom: deploy
--- # High-performance serving with Triton Inference Server (Preview)
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-secure-web-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-web-service.md
@@ -10,7 +10,7 @@ ms.author: aashishb
author: aashishb ms.date: 11/18/2020 ms.topic: conceptual
-ms.custom: how-to, devx-track-azurecli
+ms.custom: how-to
--- # Use TLS to secure a web service through Azure Machine Learning
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-environments.md
@@ -10,7 +10,7 @@ ms.service: machine-learning
ms.subservice: core ms.date: 07/23/2020 ms.topic: conceptual
-ms.custom: how-to, devx-track-python, devx-track-azurecli
+ms.custom: how-to, devx-track-python
## As a developer, I need to configure my experiment context with the necessary software packages so my machine learning models can be trained and deployed on different compute targets.
marketplace https://docs.microsoft.com/en-us/azure/marketplace/azure-vm-get-sas-uri https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-get-sas-uri.md
@@ -57,7 +57,7 @@ There are two common tools used to create a SAS address (URL):
 1. Download and install [Microsoft Azure CLI](/cli/azure/install-azure-cli). Versions are available for Windows, macOS, and various distros of Linux. 2. Create a PowerShell file (.ps1 file extension), copy in the following code, then save it locally.
- ```JSON
+ ```azurecli-interactive
   az storage container generate-sas --connection-string 'DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net' --name <vhd-name> --permissions rl --start '<start-date>' --expiry '<expiry-date>'
   ```
@@ -65,13 +65,14 @@ There are two common tools used to create a SAS address (URL):
 - account-name – Your Azure storage account name. - account-key – Your Azure storage account key.
- - vhd-name – Your VHD name.
 - start-date – Permission start date for VHD access. Provide a date one day before the current date. - expiry-date – Permission expiration date for VHD access. Provide a date at least three weeks after the current date. Here's an example of proper parameter values (at the time of this writing):
- `az storage container generate-sas --connection-string 'DefaultEndpointsProtocol=https;AccountName=st00009;AccountKey=6L7OWFrlabs7Jn23OaR3rvY5RykpLCNHJhxsbn9ON c+bkCq9z/VNUPNYZRKoEV1FXSrvhqq3aMIDI7N3bSSvPg==;EndpointSuffix=core.windows.net' --name vhds -- permissions rl --start '2020-04-01T00:00:00Z' --expiry '2021-04-01T00:00:00Z'`
+ ```azurecli-interactive
+   az storage container generate-sas --connection-string 'DefaultEndpointsProtocol=https;AccountName=st00009;AccountKey=6L7OWFrlabs7Jn23OaR3rvY5RykpLCNHJhxsbn9ONc+bkCq9z/VNUPNYZRKoEV1FXSrvhqq3aMIDI7N3bSSvPg==;EndpointSuffix=core.windows.net' --name vhds --permissions rl --start '2020-04-01T00:00:00Z' --expiry '2021-04-01T00:00:00Z'
+ ```
1. Save the changes. 2. Using one of the following methods, run this script with administrative privileges to create a SAS connection string for container-level access:
@@ -83,7 +84,7 @@ There are two common tools used to create a SAS address (URL):
6. Copy the SAS connection string and save it to a text file in a secure location. Edit this string to add the VHD location information to create the final SAS URI. 7. In the Azure portal, go to the blob storage that includes the VHD associated with the new URI.
-8. Copy the URL of thebBlob service endpoint:
+8. Copy the URL of the blob service endpoint; the sketch after the screenshot shows how to combine it with the container name, VHD name, and SAS token into the final SAS URI:
![Copying the URL of the blob service endpoint.](media/vm/create-sas-uri-blob-endpoint.png)
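As a hedged sketch of that final assembly step, the snippet below shows one way the blob service endpoint, container, VHD name, and SAS token could be combined into the final SAS URI. Every value is a hypothetical placeholder, not a value from the article:

```python
# Combine the pieces into the final SAS URI (hypothetical placeholder values).
blob_endpoint = "https://st00009.blob.core.windows.net"  # blob service endpoint copied from the portal
container_name = "vhds"                                  # container that holds the VHD
vhd_name = "myvhdfilename.vhd"                           # VHD file name
sas_token = "<SAS-token-from-generate-sas>"              # query string returned by the script above

sas_uri = f"{blob_endpoint}/{container_name}/{vhd_name}?{sas_token}"
print(sas_uri)
```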
@@ -104,4 +105,4 @@ Check the SAS URI before publishing it on Partner Center to avoid any issues rel
- If you run into issues, see [VM SAS failure messages](azure-vm-sas-failure-messages.md). - [Sign in to Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership)-- [Create a virtual machine offer on Azure Marketplace](azure-vm-create.md)\ No newline at end of file
+- [Create a virtual machine offer on Azure Marketplace](azure-vm-create.md)
marketplace https://docs.microsoft.com/en-us/azure/marketplace/includes/size-connect-generalize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/includes/size-connect-generalize.md
@@ -48,31 +48,31 @@ The following process generalizes a Linux VM and redeploys it as a separate VM.
Use the following script to export the snapshot into a VHD in your storage account.
-```JSON
+```azurecli-interactive
#Provide the subscription Id where the snapshot is created
-subscriptionId=yourSubscriptionId
+$subscriptionId=yourSubscriptionId
#Provide the name of your resource group where the snapshot is created
-resourceGroupName=myResourceGroupName
+$resourceGroupName=myResourceGroupName
#Provide the snapshot name
-snapshotName=mySnapshot
+$snapshotName=mySnapshot
#Provide Shared Access Signature (SAS) expiry duration in seconds (such as 3600) #Know more about SAS here: https://docs.microsoft.com/en-us/azure/storage/storage-dotnet-shared-access-signature-part-1
-sasExpiryDuration=3600
+$sasExpiryDuration=3600
#Provide storage account name where you want to copy the underlying VHD file.
-storageAccountName=mystorageaccountname
+$storageAccountName=mystorageaccountname
#Name of the storage container where the downloaded VHD will be stored.
-storageContainerName=mystoragecontainername
+$storageContainerName=mystoragecontainername
#Provide the key of the storage account where you want to copy the VHD
-storageAccountKey=mystorageaccountkey
+$storageAccountKey=mystorageaccountkey
#Give a name to the destination VHD file to which the VHD will be copied.
-destinationVHDFileName=myvhdfilename.vhd
+$destinationVHDFileName=myvhdfilename.vhd
az account set --subscription $subscriptionId
marketplace https://docs.microsoft.com/en-us/azure/marketplace/marketplace-commercial-transaction-capabilities-and-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-commercial-transaction-capabilities-and-considerations.md
@@ -85,7 +85,7 @@ This option allows higher or lower pricing than the publicly available offering.
We charge a 20% standard store service fee when customers purchase your transact offer from the commercial marketplace. For details of this fee, see section 5c of the [Microsoft Publisher Agreement](https://go.microsoft.com/fwlink/?LinkID=699560).
-For certain transactable offers that you publish to the commercial marketplace, you may qualify for a reduced store service fee of 10%. For an offer to qualify, it must have been designated by Microsoft as Azure IP Co-sell incentivized. Eligibility must be met at least five business days before the end of each calendar month to receive the Reduced Marketplace Service Fee for the month. For details about IP co-sell eligibility, see [Requirements for co-sell status](https://aka.ms/CertificationPolicies#3000-requirements-for-co-sell-status).
+For certain transactable offers that you publish to the commercial marketplace, you may qualify for a reduced store service fee of 10%. For an offer to qualify, it must have been designated by Microsoft as Azure IP Co-sell incentivized. Eligibility must be met at least five business days before the end of each calendar month to receive the Reduced Marketplace Service Fee. Once eligibility is met, the reduced service fee is awarded to all transactions effective the first day of the following month until Azure IP Co-sell incentivized status is lost. For details about IP co-sell eligibility, see [Requirements for co-sell status](https://aka.ms/CertificationPolicies#3000-requirements-for-co-sell-status).
The Reduced Marketplace Service Fee applies to Azure IP Co-sell incentivized SaaS, VMs, Managed apps, and any other qualified transactable IaaS solutions made available through the commercial marketplace. Paid SaaS offers associated with one Microsoft Teams app or at least two Microsoft 365 add-ins (Excel, PowerPoint, Word, Outlook, and SharePoint) and published to Microsoft AppSource also receive this discount.
marketplace https://docs.microsoft.com/en-us/azure/marketplace/plan-saas-offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-saas-offer.md
@@ -246,7 +246,7 @@ The following example shows a sample breakdown of costs and payouts to demonstra
| Microsoft pays you 80% of your license cost<br>`*` For qualified SaaS apps, Microsoft pays 90% of your license cost| $80.00 per month<br>``*`` $90.00 per month | |||
-**`*` Reduced Marketplace Service Fee** ΓÇô For certain SaaS offers that you have published on the commercial marketplace, Microsoft will reduce its Marketplace Service Fee from 20% (as described in the Microsoft Publisher Agreement) to 10%. For your offer(s) to qualify, your offer(s) must have been designated by Microsoft as Azure IP Co-sell incentivized. Eligibility must be met at least five (5) business days before the end of each calendar month to receive the Reduced Marketplace Service Fee for the month. For details about IP co-sell eligibility, see [Requirements for co-sell status](https://aka.ms/CertificationPolicies#3000-requirements-for-co-sell-status). The Reduced Marketplace Service Fee also applies to Azure IP Co-sell incentivized VMs, Managed Apps, and any other qualified transactable IaaS offers made available through the commercial marketplace.
+**`*` Reduced Marketplace Service Fee** ΓÇô For certain SaaS offers that you have published on the commercial marketplace, Microsoft will reduce its Marketplace Service Fee from 20% (as described in the Microsoft Publisher Agreement) to 10%. For your offer(s) to qualify, your offer(s) must have been designated by Microsoft as Azure IP Co-sell incentivized. Eligibility must be met at least five (5) business days before the end of each calendar month to receive the Reduced Marketplace Service Fee. Once eligibility is met, the reduced service fee is awarded to all transactions effective the first day of the following month and will continue to be applied until Azure IP Co-sell incentivized status is lost. For details about IP co-sell eligibility, see [Requirements for co-sell status](https://aka.ms/CertificationPolicies#3000-requirements-for-co-sell-status). The Reduced Marketplace Service Fee also applies to Azure IP Co-sell incentivized VMs, Managed Apps, and any other qualified transactable IaaS offers made available through the commercial marketplace.
## Additional sales opportunities
media-services https://docs.microsoft.com/en-us/azure/media-services/latest/includes/task-list-set-subscription-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/includes/task-list-set-subscription-cli.md
@@ -4,7 +4,7 @@ ms.service: media-services
ms.topic: include ms.date: 08/17/2020 ms.author: inhenkel
-ms.custom: CLI, devx-track-azurecli
+ms.custom: CLI
--- <!-- List and set subscriptions -->
security-center https://docs.microsoft.com/en-us/azure/security-center/alerts-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/alerts-reference.md
@@ -10,7 +10,7 @@ ms.devlang: na
ms.topic: overview ms.tgt_pltfrm: na ms.workload: na
-ms.date: 12/10/2020
+ms.date: 12/30/2020
ms.author: memildin ---
@@ -216,6 +216,7 @@ At the bottom of this page, there's a table describing the Azure Security Center
| **Attempt to run high privilege command detected**<br>(AppServices_HighPrivilegeCommand) | Analysis of App Service processes detected an attempt to run a command that requires high privileges.<br>The command ran in the web application context. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities. | - | Medium | | **Azure Security Center test alert for App Service (not a threat)**<br>(AppServices_EICAR) | This is a test alert generated by Azure Security Center. No further action is needed. | - | High | | **Connection to web page from anomalous IP address detected**<br>(AppServices_AnomalousPageAccess) | Azure App Service activity log indicates a connection to a sensitive web page from a source IP address that hasn't connected to it before. This might indicate that someone is attempting a brute force attack into your web app administration pages. It might also be the result of a new IP address being used by a legitimate user. | InitialAccess | Medium |
+| **Detected encoded executable in command line data**<br>(AppServices_Base64EncodedExecutableInCommandLineParams) | Analysis of host data on {Compromised host} detected a base-64 encoded executable. This has previously been associated with attackers attempting to construct executables on-the-fly through a sequence of commands, and attempting to evade intrusion detection systems by ensuring that no individual command would trigger an alert. This could be legitimate activity, or an indication of a compromised host. | DefenseEvasion, Execution | High |
| **Digital currency mining related behavior detected**<br>(AppServices_DigitalCurrencyMining) | Analysis of host data on Inn-Flow-WebJobs detected the execution of a process or command normally associated with digital currency mining. | Execution | High | | **Executable decoded using certutil**<br>(AppServices_ExecutableDecodedUsingCertutil) | Analysis of host data on [Compromised entity] detected that certutil.exe, a built-in administrator utility, was being used to decode an executable instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using a tool such as certutil.exe to decode a malicious executable that will then be subsequently executed. | DefenseEvasion, Execution | High | | **Fileless Attack Behavior Detected**<br>(AppServices_FilelessAttackBehaviorDetection) | The memory of the process specified below contains behaviors commonly used by fileless attacks.<br>Specific behaviors include: {list of observed behaviors} | Execution | Medium |
@@ -376,23 +377,23 @@ At the bottom of this page, there's a table describing the Azure Security Center
[Further details and notes](defender-for-storage-introduction.md)
-| Alert | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
-|---------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------:|----------|
-| **PREVIEW ΓÇô Access from a Suspicious IP address** | Indicates that this storage account has been successfully accessed from an IP address that is considered suspicious. This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial Access | Medium |
-| **Access from a Tor exit node to a storage account** | Indicates that this account has been accessed successfully from an IP address that is known as an active exit node of Tor (an anonymizing proxy). The severity of this alert considers the authentication type used (if any), and whether this is the first case of such access. Potential causes can be an attacker who has accessed your storage account by using Tor, or a legitimate user who has accessed your storage account by using Tor.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Probing / Exploitation | High |
-| **Access from an unusual location to a storage account** | Indicates that there was a change in the access pattern to an Azure Storage account. Someone has accessed this account from an IP address considered unfamiliar when compared with recent activity. Either an attacker has gained access to the account, or a legitimate user has connected from a new or unusual geographic location. An example of the latter is remote maintenance from a new application or developer.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exploitation | Low |
-| **Anonymous access to a storage account** | Indicates that there's a change in the access pattern to a storage account. For instance, the account has been accessed anonymously (without any authentication), which is unexpected compared to the recent access pattern on this account. A potential cause is that an attacker has exploited public read access to a container that holds blob storage.<br>Applies to: Azure Blob Storage | Exploitation | High |
-| **PREVIEW ΓÇô Phishing content hosted on a storage account** | A URL used in a phishing attack points to your Azure Storage account. This URL was part of a phishing attack affecting users of Microsoft 365.<br>Typically, content hosted on such pages is designed to trick visitors into entering their corporate credentials or financial information into a web form that looks legitimate.<br>This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files | Collection | High |
-| **Potential malware uploaded to a storage account** | Indicates that a blob containing potential malware has been uploaded to a blob container or a file share in a storage account. This alert is based on hash reputation analysis leveraging the power of Microsoft threat intelligence, which includes hashes for viruses, trojans, spyware and ransomware. Potential causes may include an intentional malware upload by an attacker, or an unintentional upload of a potentially malicious blob by a legitimate user.<br>Applies to: Azure Blob Storage, Azure Files (Only for transactions over REST API)<br>Learn more about [Azure's hash reputation analysis for malware](defender-for-storage-introduction.md#what-is-hash-reputation-analysis-for-malware).<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).| LateralMovement | High |
-| **Unusual access inspection in a storage account** | Indicates that the access permissions of a storage account have been inspected in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has performed reconnaissance for a future attack.<br>Applies to: Azure Blob Storage, Azure Files | Collection | Medium |
-| **Unusual amount of data extracted from a storage account** | Indicates that an unusually large amount of data has been extracted compared to recent activity on this storage container. A potential cause is that an attacker has extracted a large amount of data from a container that holds blob storage.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exfiltration | Medium |
-| **Unusual application accessed a storage account** | Indicates that an unusual application has accessed this storage account. A potential cause is that an attacker has accessed your storage account by using a new application.<br>Applies to: Azure Blob Storage, Azure Files | Exploitation | Medium |
-| **Unusual change of access permissions in a storage account** | Indicates that the access permissions of this storage container have been changed in an unusual way. A potential cause is that an attacker has changed container permissions to weaken its security posture or to gain persistence.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Persistence | Medium |
-| **Unusual data exploration in a storage account** | Indicates that blobs or containers in a storage account have been enumerated in an abnormal way, compared to recent activity on this account. A potential cause is that an attacker has performed reconnaissance for a future attack.<br>Applies to: Azure Blob Storage, Azure Files | Collection | Medium |
-| **Unusual deletion in a storage account** | Indicates that one or more unexpected delete operations has occurred in a storage account, compared to recent activity on this account. A potential cause is that an attacker has deleted data from your storage account.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exfiltration | Medium |
-| **Unusual upload of .cspkg to a storage account** | Indicates that an Azure Cloud Services package (.cspkg file) has been uploaded to a storage account in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has been preparing to deploy malicious code from your storage account to an Azure cloud service.<br>Applies to: Azure Blob Storage, Azure Data Lake Storage Gen2 | LateralMovement / Execution | Medium |
-| **Unusual upload of .exe to a storage account** | Indicates that an .exe file has been uploaded to a storage account in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has uploaded a malicious executable file to your storage account, or that a legitimate user has uploaded an executable file.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | LateralMovement / Execution | Medium |
-| | | | |
+| Alert | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
+|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------:|------------|
+| **PREVIEW ΓÇô Access from a suspicious IP address**<br>(Storage.Blob_AccessInspectionAnomaly<br>Storage.Files_AccessInspectionAnomaly) | Indicates that this storage account has been successfully accessed from an IP address that is considered suspicious. This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial Access | Medium |
+| **Access from a Tor exit node to a storage account**<br>(Storage.Blob_AnonymousAccessAnomaly) | Indicates that this account has been accessed successfully from an IP address that is known as an active exit node of Tor (an anonymizing proxy). The severity of this alert considers the authentication type used (if any), and whether this is the first case of such access. Potential causes can be an attacker who has accessed your storage account by using Tor, or a legitimate user who has accessed your storage account by using Tor.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Probing / Exploitation | High |
+| **Access from an unusual location to a storage account**<br>(Storage.Blob_ApplicationAnomaly<br>Storage.Files_ApplicationAnomaly) | Indicates that there was a change in the access pattern to an Azure Storage account. Someone has accessed this account from an IP address considered unfamiliar when compared with recent activity. Either an attacker has gained access to the account, or a legitimate user has connected from a new or unusual geographic location. An example of the latter is remote maintenance from a new application or developer.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exploitation | Low |
+| **Anonymous access to a storage account**<br>(Storage.Blob_CspkgUploadAnomaly) | Indicates that there's a change in the access pattern to a storage account. For instance, the account has been accessed anonymously (without any authentication), which is unexpected compared to the recent access pattern on this account. A potential cause is that an attacker has exploited public read access to a container that holds blob storage.<br>Applies to: Azure Blob Storage | Exploitation | High |
+| **PREVIEW ΓÇô Phishing content hosted on a storage account**<br>(Storage.Blob_DataExfiltration.AmountOfDataAnomaly<br>Storage.Files_DataExfiltration.AmountOfDataAnomaly) | A URL used in a phishing attack points to your Azure Storage account. This URL was part of a phishing attack affecting users of Microsoft 365.<br>Typically, content hosted on such pages is designed to trick visitors into entering their corporate credentials or financial information into a web form that looks legitimate.<br>This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files | Collection | High |
+| **Potential malware uploaded to a storage account**<br>(Storage.Blob_DataExfiltration.NumberOfBlobsAnomaly<br>Storage.Files_DataExfiltration.NumberOfFilesAnomaly) | Indicates that a blob containing potential malware has been uploaded to a blob container or a file share in a storage account. This alert is based on hash reputation analysis leveraging the power of Microsoft threat intelligence, which includes hashes for viruses, trojans, spyware and ransomware. Potential causes may include an intentional malware upload by an attacker, or an unintentional upload of a potentially malicious blob by a legitimate user.<br>Applies to: Azure Blob Storage, Azure Files (Only for transactions over REST API)<br>Learn more about [Azure's hash reputation analysis for malware](defender-for-storage-introduction.md#what-is-hash-reputation-analysis-for-malware).<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684). | LateralMovement | High |
+| **Unusual access inspection in a storage account**<br>(Storage.Blob_DataExplorationAnomaly<br>Storage.Files_DataExplorationAnomaly) | Indicates that the access permissions of a storage account have been inspected in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has performed reconnaissance for a future attack.<br>Applies to: Azure Blob Storage, Azure Files | Collection | Medium |
+| **Unusual amount of data extracted from a storage account**<br>(Storage.Blob_DeletionAnomaly<br>Storage.Files_DeletionAnomaly) | Indicates that an unusually large amount of data has been extracted compared to recent activity on this storage container. A potential cause is that an attacker has extracted a large amount of data from a container that holds blob storage.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exfiltration | Medium |
+| **Unusual application accessed a storage account**<br>(Storage.Blob_ExeUploadAnomaly<br>Storage.Files_ExeUploadAnomaly) | Indicates that an unusual application has accessed this storage account. A potential cause is that an attacker has accessed your storage account by using a new application.<br>Applies to: Azure Blob Storage, Azure Files | Exploitation | Medium |
+| **Unusual change of access permissions in a storage account**<br>(Storage.Blob_GeoAnomaly<br>Storage.Files_GeoAnomaly) | Indicates that the access permissions of this storage container have been changed in an unusual way. A potential cause is that an attacker has changed container permissions to weaken its security posture or to gain persistence.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Persistence | Medium |
+| **Unusual data exploration in a storage account**<br>(Storage.Blob_MalwareHashReputation<br>Storage.Files_MalwareHashReputation) | Indicates that blobs or containers in a storage account have been enumerated in an abnormal way, compared to recent activity on this account. A potential cause is that an attacker has performed reconnaissance for a future attack.<br>Applies to: Azure Blob Storage, Azure Files | Collection | Medium |
+| **Unusual deletion in a storage account**<br>(Storage.Blob_PermissionsChangeAnomaly<br>Storage.Files_PermissionsChangeAnomaly) | Indicates that one or more unexpected delete operations has occurred in a storage account, compared to recent activity on this account. A potential cause is that an attacker has deleted data from your storage account.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exfiltration | Medium |
+| **Unusual upload of .cspkg to a storage account**<br>(Storage.Blob_SuspiciousIp<br>Storage.Files_SuspiciousIp) | Indicates that an Azure Cloud Services package (.cspkg file) has been uploaded to a storage account in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has been preparing to deploy malicious code from your storage account to an Azure cloud service.<br>Applies to: Azure Blob Storage, Azure Data Lake Storage Gen2 | LateralMovement / Execution | Medium |
+| **Unusual upload of .exe to a storage account**<br>(Storage.Blob_TorAnomaly<br>Storage.Files_TorAnomaly) | Indicates that an .exe file has been uploaded to a storage account in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has uploaded a malicious executable file to your storage account, or that a legitimate user has uploaded an executable file.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | LateralMovement / Execution | Medium |
+| | | | |
## <a name="alerts-azurecosmos"></a>Alerts for Azure Cosmos DB (Preview)
service-fabric-mesh https://docs.microsoft.com/en-us/azure/service-fabric-mesh/service-fabric-mesh-tutorial-deploy-service-fabric-mesh-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric-mesh/service-fabric-mesh-tutorial-deploy-service-fabric-mesh-app.md
@@ -5,7 +5,7 @@ author: georgewallace
ms.topic: tutorial ms.date: 09/18/2018 ms.author: gwallace
-ms.custom: mvc, devcenter , devx-track-azurecli
+ms.custom: mvc, devcenter
#Customer intent: As a developer, I want learn how to publish a Service Fabric Mesh app to Azure. ---
service-fabric-mesh https://docs.microsoft.com/en-us/azure/service-fabric-mesh/service-fabric-mesh-tutorial-template-remove-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric-mesh/service-fabric-mesh-tutorial-template-remove-app.md
@@ -5,7 +5,7 @@ author: georgewallace
ms.topic: tutorial ms.date: 01/11/2019 ms.author: gwallace
-ms.custom: mvc, devcenter, devx-track-azurecli
+ms.custom: mvc, devcenter
#Customer intent: As a developer, I want learn how to create a Service Fabric Mesh app that communicates with another service, and then publish it to Azure. ---
site-recovery https://docs.microsoft.com/en-us/azure/site-recovery/quickstart-create-vault-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/quickstart-create-vault-template.md
@@ -2,7 +2,7 @@
title: Quickstart to create an Azure Recovery Services vault using an Azure Resource Manager template. description: In this quickstart, you learn how to create an Azure Recovery Services vault using an Azure Resource Manager template (ARM template). ms.topic: quickstart
-ms.custom: subject-armqs, devx-track-azurecli
+ms.custom: subject-armqs
ms.date: 04/29/2020 ---
spring-cloud https://docs.microsoft.com/en-us/azure/spring-cloud/spring-cloud-quickstart-setup-config-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/spring-cloud-quickstart-setup-config-server.md
@@ -6,7 +6,7 @@ ms.author: brendm
ms.service: spring-cloud ms.topic: quickstart ms.date: 09/08/2020
-ms.custom: devx-track-java, devx-track-azurecli
+ms.custom: devx-track-java
zone_pivot_groups: programming-languages-spring-cloud ---
sql-database https://docs.microsoft.com/en-us/azure/sql-database/scripts/sql-database-auditing-and-threat-detection-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-auditing-and-threat-detection-cli.md
@@ -4,7 +4,7 @@ description: Azure CLI example script to configure auditing and Advanced Threat
services: sql-database ms.service: sql-database ms.subservice: security
-ms.custom: security, devx-track-azurecli
+ms.custom: security
ms.devlang: azurecli ms.topic: sample author: ronitr
sql-database https://docs.microsoft.com/en-us/azure/sql-database/scripts/sql-database-setup-geodr-and-failover-pool-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-setup-geodr-and-failover-pool-cli.md
@@ -4,7 +4,7 @@ description: Azure CLI example script to set up active geo-replication for a poo
services: sql-database ms.service: sql-database ms.subservice: high-availability
-ms.custom: sqldbrb=1, devx-track-azurecli
+ms.custom: sqldbrb=1
ms.devlang: azurecli ms.topic: sample author: mashamsft
static-web-apps https://docs.microsoft.com/en-us/azure/static-web-apps/application-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/application-settings.md
@@ -7,7 +7,7 @@ ms.service: static-web-apps
ms.topic: how-to ms.date: 05/08/2020 ms.author: buhollan
-ms.custom: devx-track-js, devx-track-azurecli
+ms.custom: devx-track-js
--- # Configure application settings for Azure Static Web Apps Preview
storage https://docs.microsoft.com/en-us/azure/storage/blobs/soft-delete-container-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-container-overview.md
@@ -10,7 +10,7 @@ ms.topic: conceptual
ms.date: 08/25/2020 ms.author: tamram ms.subservice: blobs
-ms.custom: references_regions, devx-track-azurecli
+ms.custom: references_regions
--- # Soft delete for containers (preview)
storage https://docs.microsoft.com/en-us/azure/storage/common/redundancy-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/redundancy-migration.md
@@ -11,7 +11,7 @@ ms.date: 09/24/2020
ms.author: tamram ms.reviewer: artek ms.subservice: common
-ms.custom: devx-track-azurepowershell, devx-track-azurecli
+ms.custom: devx-track-azurepowershell
--- # Change how a storage account is replicated
stream-analytics https://docs.microsoft.com/en-us/azure/stream-analytics/custom-deserializer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/custom-deserializer.md
@@ -6,14 +6,14 @@ ms.author: mamccrea
ms.reviewer: mamccrea ms.service: stream-analytics ms.topic: tutorial
-ms.date: 05/06/2019
+ms.date: 12/17/2020
--- # Tutorial: Custom .NET deserializers for Azure Stream Analytics Azure Stream Analytics has [built-in support for three data formats](stream-analytics-parsing-json.md): JSON, CSV, and Avro. With custom .NET deserializers, you can read data from other formats such as [Protocol Buffer](https://developers.google.com/protocol-buffers/), [Bond](https://github.com/Microsoft/bond) and other user defined formats for both cloud and edge jobs.
-This tutorial demonstrates how to create a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio.
+This tutorial demonstrates how to create a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio. To learn how to create .NET deserializers in Visual Studio Code, see [Create .NET deserializers for Azure Stream Analytics jobs in Visual Studio Code](visual-studio-code-custom-deserializer.md).
In this tutorial, you learn how to:
@@ -21,17 +21,16 @@ In this tutorial, you learn how to:
> * Create a custom deserializer for protocol buffer. > * Create an Azure Stream Analytics job in Visual Studio. > * Configure your Stream Analytics job to use the custom deserializer.
-> * Run your Stream Analytics job locally to test the custom deserializer.
+> * Run your Stream Analytics job locally to test and debug the custom deserializer.
+ ## Prerequisites * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Install [Visual Studio 2017](https://www.visualstudio.com/downloads/) or [Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/). Enterprise (Ultimate/Premium), Professional, and Community editions are supported. Express edition isn't supported.
+* Install [Visual Studio 2019 (recommended)](https://www.visualstudio.com/downloads/) or [Visual Studio 2017](https://www.visualstudio.com/vs/older-downloads/). Enterprise (Ultimate/Premium), Professional, and Community editions are supported. Express edition isn't supported.
-* [Install the Stream Analytics tools for Visual Studio](stream-analytics-tools-for-visual-studio-install.md) or update to the latest version. The following versions of Visual Studio are supported:
- * Visual Studio 2015
- * Visual Studio 2017
+* [Install the Stream Analytics tools for Visual Studio](stream-analytics-tools-for-visual-studio-install.md) or update to the latest version.
* Open **Cloud Explorer** in Visual Studio, and sign in to your Azure subscription.
@@ -111,11 +110,13 @@ You have successfully implemented a custom deserializer for your Stream Analytic
## Debug your deserializer
-You can debug your .NET deserializer locally the same way you debug standard .NET code.
+You can debug your .NET deserializer locally the same way you debug standard .NET code.
+
+1. Right-click the **ProtobufCloudDeserializer** project name and set it as the startup project.
-1. Add breakpoints in your function.
+2. Add breakpoints in your function.
-2. Press **F5** to start debugging. The program will stop at your breakpoints as expected.
+3. Press **F5** to start debugging. The program will stop at your breakpoints as expected.
## Clean up resources
@@ -130,4 +131,4 @@ When no longer needed, delete the resource group, the streaming job, and all rel
In this tutorial, you learned how to implement a custom .NET deserializer for the protocol buffer input serialization. To learn more about creating custom deserializers, continue to the following article: > [!div class="nextstepaction"]
-> [Create different .NET deserializers for Azure Stream Analytics jobs](custom-deserializer-examples.md)
+> [Create different .NET deserializers for Azure Stream Analytics jobs](custom-deserializer-examples.md)
\ No newline at end of file
stream-analytics https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-edge.md
@@ -78,7 +78,18 @@ For both inputs and outputs, CSV and JSON formats are supported.
For each input and output stream you create in your Stream Analytics job, a corresponding endpoint is created on your deployed module. These endpoints can be used in the routes of your deployment.
-The only supported stream input and stream output type is Edge Hub. Reference input supports reference file type. Other outputs can be reached using a cloud job downstream. For example, a Stream Analytics job hosted in Edge sends output to Edge Hub, which can then send output to IoT Hub. You can use a second cloud-hosted Azure Stream Analytics job with input from IoT Hub and output to Power BI or another output type.
+Supported stream input types are:
+* Edge Hub
+* Event Hub
+* IoT Hub
+
+Supported stream output types are:
+* Edge Hub
+* SQL Database
+* Event Hub
+* Blob Storage/ADLS Gen2
+
+Reference input supports reference file type. Other outputs can be reached using a cloud job downstream. For example, a Stream Analytics job hosted in Edge sends output to Edge Hub, which can then send output to IoT Hub. You can use a second cloud-hosted Azure Stream Analytics job with input from IoT Hub and output to Power BI or another output type.
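As a hedged illustration of how those module endpoints are referenced, the following sketch shows a route entry, expressed as a Python dictionary, as it might appear in an IoT Edge deployment manifest. The module name `asaEdgeJob` and output name `alertOutput` are assumptions for the example:

```python
# Hypothetical route: forward the ASA module's "alertOutput" stream to IoT Hub ($upstream).
routes = {
    "asaToUpstream": "FROM /messages/modules/asaEdgeJob/outputs/alertOutput INTO $upstream"
}
print(routes["asaToUpstream"])
```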
## License and third-party notices * [Azure Stream Analytics on IoT Edge license](https://go.microsoft.com/fwlink/?linkid=862827).
stream-analytics https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-real-time-fraud-detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-real-time-fraud-detection.md
@@ -280,17 +280,14 @@ When you use a join with streaming data, the join must provide some limits on ho
1. Paste the following query in the query editor: ```SQL
- SELECT System.Timestamp as Time,
- CS1.CallingIMSI,
- CS1.CallingNum as CallingNum1,
- CS2.CallingNum as CallingNum2,
- CS1.SwitchNum as Switch1,
- CS2.SwitchNum as Switch2
- FROM CallStream CS1 TIMESTAMP BY CallRecTime
- JOIN CallStream CS2 TIMESTAMP BY CallRecTime
- ON CS1.CallingIMSI = CS2.CallingIMSI
- AND DATEDIFF(ss, CS1, CS2) BETWEEN 1 AND 5
- WHERE CS1.SwitchNum != CS2.SwitchNum
+ SELECT System.Timestamp AS WindowEnd, COUNT(*) AS FraudulentCalls
+ INTO "MyPBIoutput"
+ FROM "CallStream" CS1 TIMESTAMP BY CallRecTime
+ JOIN "CallStream" CS2 TIMESTAMP BY CallRecTime
+ ON CS1.CallingIMSI = CS2.CallingIMSI
+ AND DATEDIFF(ss, CS1, CS2) BETWEEN 1 AND 5
+ WHERE CS1.SwitchNum != CS2.SwitchNum
+ GROUP BY TumblingWindow(Duration(second, 1))
``` This query is like any SQL join except for the `DATEDIFF` function in the join. This version of `DATEDIFF` is specific to Streaming Analytics, and it must appear in the `ON...BETWEEN` clause. The parameters are a time unit (seconds in this example) and the aliases of the two sources for the join. This is different from the standard SQL `DATEDIFF` function.
@@ -341,4 +338,4 @@ Once you've got the application running in your browser, follow these steps to e
In this tutorial, you created a simple Stream Analytics job, analyzed the incoming data, and presented results in a Power BI dashboard. To learn more about Stream Analytics jobs, continue to the next tutorial: > [!div class="nextstepaction"]
-> [Run Azure Functions within Stream Analytics jobs](stream-analytics-with-azure-functions.md)
\ No newline at end of file
+> [Run Azure Functions within Stream Analytics jobs](stream-analytics-with-azure-functions.md)
stream-analytics https://docs.microsoft.com/en-us/azure/stream-analytics/visual-studio-code-custom-deserializer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/visual-studio-code-custom-deserializer.md new file mode 100644
@@ -0,0 +1,135 @@
+---
+title: Create custom .NET deserializers for Azure Stream Analytics cloud jobs using Visual Studio Code
+description: This tutorial demonstrates how to create a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio Code.
+author: su-jie
+ms.author: sujie
+ms.reviewer: mamccrea
+ms.service: stream-analytics
+ms.topic: how-to
+ms.date: 12/22/2020
+---
++
+# Create custom .NET deserializers for Azure Stream Analytics in Visual Studio Code
+
+Azure Stream Analytics has [built-in support for three data formats](stream-analytics-parsing-json.md): JSON, CSV, and Avro. With custom .NET deserializers, you can read data from other formats such as [Protocol Buffer](https://developers.google.com/protocol-buffers/), [Bond](https://github.com/Microsoft/bond) and other user defined formats for cloud jobs.
+
+## Custom .NET deserializers in Visual Studio Code
+
+You can create, test and debug a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio Code.
+
+### Prerequisites
+
+* Install [.NET core SDK](https://dotnet.microsoft.com/download) and restart Visual Studio Code.
+
+* Use this [quickstart](quick-create-visual-studio-code.md) to learn how to create a Stream Analytics job using Visual Studio Code.
+
+### Create a custom deserializer
+
+1. Open a terminal and run the following command to create a .NET class library in Visual Studio Code for your custom deserializer called **ProtobufDeserializer**.
+
+ ```dotnetcli
+ dotnet new classlib -o ProtobufDeserializer
+ ```
+
+2. Go to the **ProtobufDeserializer** project directory and install the [Microsoft.Azure.StreamAnalytics](https://www.nuget.org/packages/Microsoft.Azure.StreamAnalytics/) and [Google.Protobuf](https://www.nuget.org/packages/Google.Protobuf/) NuGet packages.
+
+ ```dotnetcli
+ dotnet add package Microsoft.Azure.StreamAnalytics
+ ```
+
+ ```dotnetcli
+ dotnet add package Google.Protobuf
+ ```
+
+3. Add the [MessageBodyProto class](https://github.com/Azure/azure-stream-analytics/blob/master/CustomDeserializers/Protobuf/MessageBodyProto.cs) and the [MessageBodyDeserializer class](https://github.com/Azure/azure-stream-analytics/blob/master/CustomDeserializers/Protobuf/MessageBodyDeserializer.cs) to your project.
+
+4. Build the **ProtobufDeserializer** project.
+
+### Add an Azure Stream Analytics project
+
+1. Open Visual Studio Code and select **Ctrl+Shift+P** to open the command palette. Then enter ASA and select **ASA: Create New Project**. Name it **ProtobufCloudDeserializer**.
+
+### Configure a Stream Analytics job
+
+1. Double-click **JobConfig.json**. Use the default configurations, except for the following settings:
+
+ |Setting|Suggested Value|
+ |-------|---------------|
+ |Global Storage Settings Resource|Choose data source from current account|
+ |Global Storage Settings Subscription| < your subscription >|
+ |Global Storage Settings Storage Account| < your storage account >|
+ |CustomCodeStorage Settings Storage Account|< your storage account >|
+ |CustomCodeStorage Settings Container|< your storage container >|
+
+2. Under **Inputs** folder open **input.json**. Select **Add live input** and add an input from Azure Data Lake Storage Gen2/Blob storage, choose **Select from your Azure subscription**. Use the default configurations, except for the following settings:
+
+ |Setting|Suggested Value|
+ |-------|---------------|
+ |Name|Input|
+ |Subscription|< your subscription >|
+ |Storage Account|< your storage account >|
+ |Container|< your storage container >|
+ |Serialization Type|Choose **Custom**|
+   |SerializationProjectPath|Select **Choose library project path** from CodeLens and select the **ProtobufDeserializer** library project created in the previous section. Select **build project** to build the project.|
+ |SerializationClassName|Select **select deserialization class** from CodeLens to auto populate the class name and DLL path|
+ |Class Name|MessageBodyProto.MessageBodyDeserializer|
+
+ :::image type="content" source="./media/custom-deserializer/create-input-vscode.png" alt-text="Add custom deserializer input.":::
+
+3. Add the following query to the **ProtobufCloudDeserializer.asaql** file.
+
+ ```sql
+ SELECT * FROM Input
+ ```
+
+4. Download the [sample protobuf input file](https://github.com/Azure/azure-stream-analytics/blob/master/CustomDeserializers/Protobuf/SimulatedTemperatureEvents.protobuf). In the **Inputs** folder, right-click **input.json** and select **Add local input**. Then, double-click **local_input1.json** and use the default configurations, except for the following settings.
+
+ |Setting|Suggested Value|
+ |-------|---------------|
+   |Select local file path|Click CodeLens and select the file path of the downloaded sample protobuf input file|
+
+### Execute the Stream Analytics job
+
+1. Open **ProtobufCloudDeserializer.asaql** and select **Run Locally** from CodeLens then choose **Use Local Input** from the dropdown list.
+
+2. Observe the results in the **Results** tab in the job diagram view on the right. You can also click the steps in the job diagram to view intermediate results. For more details, see [Debug locally using job diagram](debug-locally-using-job-diagram-vs-code.md).
+
+ :::image type="content" source="./media/custom-deserializer/check-local-run-result-vscode.png" alt-text="Check local run result.":::
+
+You have successfully implemented a custom deserializer for your Stream Analytics job! In this tutorial, you tested the custom deserializer locally with local input data. You can also test it [using live data input in the cloud](visual-studio-code-local-run-live-input.md). To run the job in the cloud, configure the input and output appropriately, and then submit the job to Azure from Visual Studio Code to run it in the cloud using the custom deserializer you just implemented.
+
+### Debug your deserializer
+
+You can debug your .NET deserializer locally the same way you debug standard .NET code.
+
+1. Add breakpoints in your .NET function.
+
+2. Click **Run** in the Visual Studio Code Activity bar and select **create a launch.json file**.
+ :::image type="content" source="./media/custom-deserializer/create-launch-file-vscode.png" alt-text="Create launch file.":::
+
+ Choose **ProtobufCloudDeserializer** and then **Azure Stream Analytics** from the dropdown list.
+ :::image type="content" source="./media/custom-deserializer/create-launch-file-vscode-2.png" alt-text="Create launch file 2.":::
+
+   Edit the **launch.json** file to replace `<ASAScript>.asaql` with `ProtobufCloudDeserializer.asaql`.
+ :::image type="content" source="./media/custom-deserializer/configure-launch-file-vscode.png" alt-text="Configure launch file.":::
+
+3. Press **F5** to start debugging. The program will stop at your breakpoints as expected. This works for both local input and live input data.
+
+ :::image type="content" source="./media/custom-deserializer/debug-vscode.png" alt-text="Debug custom deserializer.":::
+
+## Clean up resources
+
+When no longer needed, delete the resource group, the streaming job, and all related resources. Deleting the job avoids billing for the streaming units it consumes. If you're planning to use the job in the future, you can stop it and restart it later when you need it. If you are not going to continue to use this job, delete all resources created by this tutorial by using the following steps:
+
+1. From the left-hand menu in the Azure portal, select **Resource groups** and then select the name of the resource group you created.
+
+2. On your resource group page, select **Delete**, type the name of the resource to delete in the text box, and then select **Delete**.
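+
+If you prefer the Azure CLI over the portal, deleting the resource group also removes everything in it. The resource group name below is a placeholder for your own:
+
+```azurecli
+az group delete --name <your-resource-group> --yes --no-wait
+```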
+
+## Next steps
+
+In this tutorial, you learned how to implement a custom .NET deserializer for the protocol buffer input serialization. To learn more about creating custom deserializers, continue to the following article:
+
+> [!div class="nextstepaction"]
+> * [Create different .NET deserializers for Azure Stream Analytics jobs](custom-deserializer-examples.md)
+> * [Test Azure Stream Analytics jobs locally with live input using Visual Studio Code](visual-studio-code-local-run-live-input.md)
\ No newline at end of file
virtual-machine-scale-sets https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-ids https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-ids.md
@@ -8,7 +8,7 @@ ms.service: virtual-machine-scale-sets
ms.subservice: management ms.date: 02/22/2018 ms.reviewer: jushiman
-ms.custom: mimckitt, devx-track-azurecli
+ms.custom: mimckitt
--- # Understand instance IDs for Azure VM scale set VMs
virtual-machine-scale-sets https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-protection.md
@@ -8,7 +8,7 @@ ms.service: virtual-machine-scale-sets
ms.subservice: availability ms.date: 02/26/2020 ms.reviewer: jushiman
-ms.custom: avverma, devx-track-azurecli
+ms.custom: avverma
--- # Instance Protection for Azure virtual machine scale set instances
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/boot-diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/troubleshooting/boot-diagnostics.md
@@ -48,7 +48,7 @@ On the **Management** tab, in **Monitoring** section, make sure that **Boot diag
![Create VM](./media/virtual-machines-common-boot-diagnostics/enable-boot-diagnostics-vm.png) > [!NOTE]
-> The Boot diagnostics feature does not support premium storage account or Zone Redundent Storage Account Types. If you use the premium storage account for Boot diagnostics, you might receive the StorageAccountTypeNotSupported error when you start the VM.
+> The Boot diagnostics feature does not support premium storage account or Zone Redundant Storage Account Types. If you use the premium storage account for Boot diagnostics, you might receive the StorageAccountTypeNotSupported error when you start the VM.
> ### Deploying from an Azure Resource Manager template
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/troubleshoot-rdp-static-ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/troubleshooting/troubleshoot-rdp-static-ip.md
@@ -60,13 +60,13 @@ To resolve this issue, use Serial control to enable DHCP or [reset network inter
3. If the DHCP is disabled, revert the configuration of your network interface to use DHCP: ```console
- netsh interface ip set address name="<NIC Name>" source=dhc
+ netsh interface ip set address name="<NIC Name>" source=dhcp
``` For example, if the network interface is named "Ethernet 2", run the following command: ```console
- netsh interface ip set address name="Ethernet 2" source=dhc
+ netsh interface ip set address name="Ethernet 2" source=dhcp
``` 4. Query the IP configuration again to make sure that the network interface is now correctly set up. The new IP address should match the one that's provided by Azure.
@@ -77,4 +77,4 @@ To resolve this issue, use Serial control to enable DHCP or [reset network inter
You don't have to restart the VM at this point. The VM will be back reachable.
-After that, if you want to configure the static IP for the VM, see [Configure static IP addresses for a VM](../../virtual-network/virtual-networks-static-private-ip-arm-pportal.md).
\ No newline at end of file
+After that, if you want to configure the static IP for the VM, see [Configure static IP addresses for a VM](../../virtual-network/virtual-networks-static-private-ip-arm-pportal.md).
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-li-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-li-portal.md
@@ -13,7 +13,7 @@ ms.subservice: workloads
ms.topic: article ms.tgt_pltfrm: vm-linux ms.workload: infrastructure
-ms.date: 12/18/2020
+ms.date: 12/31/2020
ms.author: juergent ms.custom: H1Hack27Feb2017 ---
@@ -21,7 +21,7 @@ ms.custom: H1Hack27Feb2017
# Azure HANA Large Instances control through Azure portal >[!NOTE]
->For Rev 4.2, follow the instructions in the [Manage BareMetal Instances through the Azure portal](baremetal-infrastructure-portal.md) topic.
+>For Rev 4.2, follow the instructions in the [Manage BareMetal Instances through the Azure portal](../../../baremetal-infrastructure/workloads/sap/baremetal-infrastructure-portal.md) topic.
This document covers how [HANA Large Instances](./hana-overview-architecture.md) are presented in the [Azure portal](https://portal.azure.com) and what activities can be conducted through the Azure portal with HANA Large Instance units that are deployed for you. Visibility of HANA Large Instances in the Azure portal is provided through an Azure resource provider for HANA Large Instances, which currently is in public preview
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-ha-availability-zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-ha-availability-zones.md
@@ -14,29 +14,27 @@ ms.subservice: workloads
ms.topic: article ms.tgt_pltfrm: vm-windows ms.workload: infrastructure-services
-ms.date: 03/05/2020
+ms.date: 12/29/2020
ms.author: juergent ms.custom: H1Hack27Feb2017 --- # SAP workload configurations with Azure Availability Zones
-[Azure Availability Zones](../../../availability-zones/az-overview.md) is one of the high-availability features that Azure provides. Using Availability Zones improves the overall availability of SAP workloads on Azure. This feature is already available in some [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/). In the future, it will be available in more regions.
+In addition to deploying the different SAP architecture layers in Azure availability sets, the more recently introduced [Azure Availability Zones](../../../availability-zones/az-overview.md) can be used for SAP workload deployments as well. An Azure Availability Zone is defined as: "Unique physical locations within a region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking". Azure Availability Zones are not available in all regions. For Azure regions that provide Availability Zones, check the [Azure region map](https://azure.microsoft.com/global-infrastructure/geographies/). This map shows which regions provide, or are announced to provide, Availability Zones.
-This graphic shows the basic architecture of SAP high availability:
+In the typical SAP NetWeaver or S/4HANA architecture, you need to protect three different layers:
-![Standard high availability configuration](./media/sap-ha-availability-zones/standard-ha-config.png)
+- SAP application layer, which can be one to a few dozen VMs. You want to minimize the chance of VMs getting deployed on the same host server. You also want those VMs in acceptable proximity to the DBMS layer to keep network latency in an acceptable window
+- SAP ASCS/SCS layer, which represents a single point of failure in the SAP NetWeaver and S/4HANA architecture. You usually look at two VMs that you want to cover with a failover framework. Therefore, these VMs should be allocated in different infrastructure fault and update domains
+- SAP DBMS layer, which represents a single point of failure as well. In the usual cases, it consists of two VMs that are covered by a failover framework. Therefore, these VMs should be allocated in different infrastructure fault and update domains. Exceptions are SAP HANA scale-out deployments where more than two VMs can be used
-The SAP application layer is deployed across one Azure [availability set](../../manage-availability.md). For high availability of SAP Central Services, you can deploy two VMs in a separate availability set. Use Windows Server Failover Clustering or Pacemaker (Linux) as a high-availability framework with automatic failover in case of an infrastructure or software problem. To learn more about these deployments, see:
+The major differences between deploying your critical VMs through availability sets or Availability Zones are:
-- [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk](./sap-high-availability-guide-wsfc-shared-disk.md)-- [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using file share](./sap-high-availability-guide-wsfc-file-share.md)-- [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications](./high-availability-guide-suse.md)-- [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux](./high-availability-guide-rhel.md)
+- Deploying with an availability set lines up the VMs within the set in a single zone or datacenter (whichever applies for the specific region). As a result, the deployment through the availability set is not protected from power, cooling, or networking issues that affect the datacenter(s) of the zone as a whole. On the plus side, the VMs are aligned with update and fault domains within that zone or datacenter. Specifically for the SAP ASCS or DBMS layer where we protect two VMs per availability set, the alignment with fault and update domains prevents both VMs from ending up on the same host hardware
+- Deploying VMs through Azure Availability Zones and choosing different zones (a maximum of three is possible so far) deploys the VMs across different physical locations and with that adds protection from power, cooling, or networking issues that affect the datacenter(s) of the zone as a whole. However, as you deploy more than one VM of the same VM family into the same Availability Zone, there is no protection from those VMs ending up on the same host. As a result, deploying through Availability Zones is ideal for the SAP ASCS and DBMS layer where we usually look at two VMs each. For the SAP application layer, which can consist of drastically more than two VMs, you might need to fall back to a different deployment model (see later)
-A similar architecture applies for the DBMS layer of SAP NetWeaver, S/4HANA, or Hybris systems. You deploy the DBMS layer in an active/passive mode with a failover cluster solution to protect from infrastructure or software failure. The failover cluster solution could be a DBMS-specific failover framework, Windows Server Failover Clustering, or Pacemaker.
-
-To deploy the same architecture by using Azure Availability Zones, you need to make some changes to the architecture outlined earlier. This article describes these changes.
+Your motivation for a deployment across Azure Availability Zones should be that you, on top of covering the failure of a single critical VM or the ability to reduce downtime for software patching within a critical VM, want to protect from larger infrastructure issues that might affect the availability of one or multiple Azure datacenters.
## Considerations for deploying across Availability Zones
@@ -45,7 +43,7 @@ Consider the following when you use Availability Zones:
- There are no guarantees regarding the distances between various Availability Zones within an Azure region. - Availability Zones are not an ideal DR solution. Natural disasters can cause widespread damage in world regions, including heavy damage to power infrastructures. The distances between various zones might not be large enough to constitute a proper DR solution.-- The network latency across Availability Zones is not the same in all Azure regions. In some cases, you can deploy and run the SAP application layer across different zones because the network latency from one zone to the active DBMS VM is acceptable. But in some Azure regions, the latency between the active DBMS VM and the SAP application instance, when deployed in different zones, might not be acceptable for SAP business processes. In these cases, the deployment architecture needs to be different, with an active/active architecture for the application or an active/passive architecture where cross-zone network latency is too high.
+- The network latency across Availability Zones is not the same in all Azure regions. In some cases, you can deploy and run the SAP application layer across different zones because the network latency from one zone to the active DBMS VM is acceptable. But in some Azure regions, the latency between the active DBMS VM and the SAP application instance, when deployed in different zones, might not be acceptable for SAP business processes. In these cases, the deployment architecture needs to be different, with an active/active architecture for the application, or an active/passive architecture where cross-zone network latency is too high.
- When deciding where to use Availability Zones, base your decision on the network latency between the zones. Network latency plays an important role in two areas: - Latency between the two DBMS instances that need to have synchronous replication. The higher the network latency, the more likely it will affect the scalability of your workload. - The difference in network latency between a VM running an SAP dialog instance in-zone with the active DBMS instance and a similar VM in another zone. As this difference increases, the influence on the running time of business processes and batch jobs also increases, dependent on whether they run in-zone with the DBMS or in a different zone.
@@ -54,15 +52,21 @@ When you deploy Azure VMs across Availability Zones and establish failover solut
- You must use [Azure Managed Disks](https://azure.microsoft.com/services/managed-disks/) when you deploy to Azure Availability Zones. - The mapping of zone enumerations to the physical zones is fixed on an Azure subscription basis. If you're using different subscriptions to deploy your SAP systems, you need to define the ideal zones for each subscription.-- You can't deploy Azure availability sets within an Azure Availability Zone unless you use [Azure Proximity Placement Group](../../linux/co-location.md). The way how you can deploy the SAP DBMS layer and the central services across zones and at the same time deploy the SAP application layer using availability sets and still achieve close proximity of the VMs is documented in the article [Azure Proximity Placement Groups for optimal network latency with SAP applications](sap-proximity-placement-scenarios.md). If you are not leveraging Azure proximity placement groups, you need to choose one or the other as a deployment framework for virtual machines.
+- You can't deploy Azure availability sets within an Azure Availability Zone unless you use [Azure Proximity Placement Group](../../linux/co-location.md). How you can deploy the SAP DBMS layer and the central services across zones and at the same time deploy the SAP application layer using availability sets and still achieve close proximity of the VMs is documented in the article [Azure Proximity Placement Groups for optimal network latency with SAP applications](sap-proximity-placement-scenarios.md). If you are not using Azure proximity placement groups, you need to choose one or the other as a deployment framework for virtual machines.
- You can't use an [Azure Basic Load Balancer](../../../load-balancer/load-balancer-overview.md) to create failover cluster solutions based on Windows Server Failover Clustering or Linux Pacemaker. Instead, you need to use the [Azure Standard Load Balancer SKU](../../../load-balancer/load-balancer-standard-availability-zones.md). ## The ideal Availability Zones combination
-Before you decide how to use Availability Zones, you need to determine:
+If you want to deploy an SAP NetWeaver or S/4HANA system across zones, there are two principle architectures you can deploy:
+
+- Active/active: The pair of VMs running ASCS/SCS and the pair of VMs running the DBMS layer are distributed across two zones. The VMs running the SAP application layer are deployed in even numbers across the same two zones. If a DBMS or ASCS/SCS VM fails over, some of the open and active transactions might be rolled back, but users remain logged in. It does not really matter in which of the zones the active DBMS VM and the application instances run. This architecture is the preferred architecture to deploy across zones.
+- Active/passive: The pair of VMs running ASCS/SCS and the pair of VMs running the DBMS layer are distributed across two zones. The VMs running the SAP application layer are deployed into one of the Availability Zones. You run the application layer in the same zone as the active ASCS/SCS and DBMS instance. You use this deployment architecture if the network latency across the different zones is too high to run the application layer distributed across the zones. Instead, the SAP application layer needs to run in the same zone as the active ASCS/SCS and/or DBMS instance. If an ASCS/SCS or DBMS VM fails over to the secondary zone, you might encounter higher network latency and with that a reduction of throughput. And you are required to fail back the previously failed-over VM as soon as possible to get back to the previous throughput levels. If a zonal outage occurs, the application layer needs to be failed over to the secondary zone, an activity that users experience as a complete system shutdown. In some of the Azure regions, this architecture is the only viable architecture when you want to use Availability Zones. If you can't accept the potential impact of an ASCS/SCS or DBMS VM failing over to the secondary zone, you might be better off staying with availability set deployments
++
+So before you decide how to use Availability Zones, you need to determine:
-- The network latency among the three zones of an Azure region. This will enable you to choose the zones with the least network latency in cross-zone network traffic.
+- The network latency among the three zones of an Azure region. Knowing the network latency between the zones of a region enables you to choose the zones with the least network latency in cross-zone network traffic.
- The difference between VM-to-VM latency within one of the zones, of your choosing, and the network latency across two zones of your choosing. - A determination of whether the VM types that you need to deploy are available in the two zones that you selected. With some VMs, especially M-Series VMs, you might encounter situations in which some SKUs are available in only two of the three zones.
@@ -71,7 +75,7 @@ To determine the latency between the different zones, you need to:
- Deploy the VM SKU you want to use for your DBMS instance in all three zones. Make sure [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is enabled when you take this measurement. - When you find the two zones with the least network latency, deploy another three VMs of the VM SKU that you want to use as the application layer VM across the three Availability Zones. Measure the network latency against the two DBMS VMs in the two DBMS zones that you selected. -- Use **niping** as a measuring tool. This tool, from SAP, is described in SAP support notes [#500235](https://launchpad.support.sap.com/#/notes/500235) and [#1100926](https://launchpad.support.sap.com/#/notes/1100926/E). Focus on the commands documented for latency measurements. Because **ping** doesn't work through the Azure Accelerated Networking code paths, we don't recommend that you use it.
+- Use **`niping`** as a measuring tool. This tool, from SAP, is described in SAP support notes [#500235](https://launchpad.support.sap.com/#/notes/500235) and [#1100926](https://launchpad.support.sap.com/#/notes/1100926/E). Focus on the commands documented for latency measurements. Because **ping** doesn't work through the Azure Accelerated Networking code paths, we don't recommend that you use it.
You don't need to perform these tests manually. You can find a PowerShell procedure [Availability Zone Latency Test](https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/master/AvZone-Latency-Test) that automates the latency tests described.
@@ -88,12 +92,33 @@ In making these decisions, also take into account SAP's network latency recommen
> [!IMPORTANT]
-> It's expected that the measurements described earlier will provide different results in every Azure region that supports [Availability Zones](../../../availability-zones/az-overview.md). Even if your network latency requirements are the same, you might need to adopt different deployment strategies in different Azure regions because the network latency between zones can be different. In some Azure regions, the network latency among the three different zones can be vastly different. In other regions, the network latency among the three different zones might be more uniform. The claim that there is always a network latency between 1 and 2 milliseconds is not correct. The network latency across Availability Zones in Azure regions can't be generalized.
+> It's expected that the measurements described earlier will provide different results in every Azure region that supports [Availability Zones](https://azure.microsoft.com/global-infrastructure/geographies/). Even if your network latency requirements are the same, you might need to adopt different deployment strategies in different Azure regions because the network latency between zones can be different. In some Azure regions, the network latency among the three different zones can be vastly different. In other regions, the network latency among the three different zones might be more uniform. The claim that there is always a network latency between 1 and 2 milliseconds is not correct. The network latency across Availability Zones in Azure regions can't be generalized.
## Active/Active deployment
-This deployment architecture is called active/active because you deploy your active SAP application servers across two or three zones. The SAP Central Services instance that uses enqueue replication will be deployed between two zones. The same is true for the DBMS layer, which will be deployed across the same zones as SAP Central Service.
+This deployment architecture is called active/active because you deploy your active SAP application servers across two or three zones. The SAP Central Services instance that uses enqueue replication will be deployed between two zones. The same is true for the DBMS layer, which will be deployed across the same zones as SAP Central Service. When considering this configuration, you need to find the two Availability Zones in your region that offer cross-zone network latency that's acceptable for your workload and your synchronous DBMS replication. You also want to be sure the delta between network latency within the zones you selected and the cross-zone network latency isn't too large.
+
+The nature of the SAP architecture is that, unless you configure it differently, users and batch jobs can be executed in the different application instances. The side effect of this fact with the active/active deployment is that batch jobs might be executed by any SAP application instance, independent of whether it runs in the same zone as the active DBMS or not. If the difference in network latency between the different zones is small compared to the network latency within a zone, the difference in run times of batch jobs might not be significant. However, the larger the difference between in-zone and cross-zone network latency is, the more the run time of batch jobs can be impacted if the job is executed in a zone where the DBMS instance is not active. It is on you as a customer to decide what differences in run time are acceptable, and with that what the tolerable network latency for cross-zone traffic is.
+
+Azure regions where such an active/active deployment should be possible without large differences in run time and throughput within the application layer deployed across different Availability Zones include:
+
+- West US2 (all three zones)
+- East US2 (all three zones)
+- Central US (all three zones)
+- North Europe (all three zones)
+- West Europe (two of the three zones)
+- East US (two of the three zones)
+- South Central US (two of the three zones)
+- UK South (two of the three zones)
+
+Azure regions where this SAP deployment architecture across zones is not recommended are:
+
+- France Central
+- South Africa North
+- Canada Central
+- Japan East
+
+Depending on what run time differences you are willing to tolerate, other regions not listed could qualify as well.
-When considering this configuration, you need to find the two Availability Zones in your region that offer cross-zone network latency that's acceptable for your workload and your synchronous DBMS replication. You also want to be sure the delta between network latency within the zones you selected and the cross-zone network latency isn't too large. This is because you don't want large variations, depending on whether a job runs in-zone with the DBMS server or across zones, in the running times of your business processes or batch jobs. Some variations are acceptable, but not factors of difference.
A simplified schema of an active/active deployment across two zones could look like this:
@@ -106,15 +131,15 @@ The following considerations apply for this configuration:
- For the load balancers of the failover clusters of SAP Central Services and the DBMS layer, you need to use the [Standard SKU Azure Load Balancer](../../../load-balancer/load-balancer-standard-availability-zones.md). The Basic Load Balancer won't work across zones. - The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched across zones. You don't need separate virtual networks for each zone. - For all virtual machines you deploy, you need to use [Azure Managed Disks](https://azure.microsoft.com/services/managed-disks/). Unmanaged disks aren't supported for zonal deployments.-- Azure Premium Storage and [Ultra SSD storage](../../disks-types.md#ultra-disk) don't support any type of storage replication across zones. The application (DBMS or SAP Central Services) must replicate important data.
+- Azure Premium Storage, [Ultra SSD storage](../../disks-types.md#ultra-disk), or ANF don't support any type of storage replication across zones. The application (DBMS or SAP Central Services) must replicate important data.
- The same is true for the shared sapmnt directory, which is a shared disk (Windows), a CIFS share (Windows), or an NFS share (Linux). You need to use a technology that replicates these shared disks or shares between the zones. These technologies are supported: - For Windows, a cluster solution that uses SIOS DataKeeper, as documented in [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk in Azure](./sap-high-availability-guide-wsfc-shared-disk.md). - For SUSE Linux, an NFS share that's built as documented in [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server](./high-availability-guide-suse-nfs.md). Currently, the solution that uses Microsoft Scale-Out File Server, as documented in [Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances](./sap-high-availability-infrastructure-wsfc-file-share.md), is not supported across zones.-- The third zone is used to host the SBD device in case you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-azure-fence-agent-stonith-device) or additional application instances.
+- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-azure-fence-agent-stonith-device) and use SBD devices instead of the Azure Fencing Agent. Or for additional application instances.
- To achieve run time consistency for critical business processes, you can try to direct certain batch jobs and users to application instances that are in-zone with the active DBMS instance by using SAP batch server groups, SAP logon groups, or RFC groups. However, in the case of a zonal failover, you would need to manually move these groups to instances running on VMs that are in-zone with the active DB VM. -- You might want to deploy dormant dialog instances in each of the zones. This is to enable an immediate return to the former resource capacity if a zone used by part of your application instances is out of service.
+- You might want to deploy dormant dialog instances in each of the zones.
> [!IMPORTANT] > In this active/active scenario additional charges for bandwidth are announced by Microsoft from 04/01/2020 on. Check the document [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/). The data transfer between the SAP application layer and SAP DBMS layer is quite intensive. Therefore the active/active scenario can contribute to costs quite a bit. Keep checking this article to get the exact costs
@@ -123,6 +148,18 @@ The following considerations apply for this configuration:
## Active/Passive deployment If you can't find an acceptable delta between the network latency within one zone and the latency of cross-zone network traffic, you can deploy an architecture that has an active/passive character from the SAP application layer point of view. You define an *active* zone, which is the zone where you deploy the complete application layer and where you attempt to run both the active DBMS and the SAP Central Services instance. With such a configuration, you need to make sure you don't have extreme run time variations, depending on whether a job runs in-zone with the active DBMS instance or not, in business transactions and batch jobs.
+Azure regions where this type of deployment architecture across different zones may be preferable are:
+
+- Southeast Asia
+- Australia East
+- Brazil South
+- Germany West Central
+- South Africa North
+- France Central
+- Canada Central
+- Japan East
++ The basic layout of the architecture looks like this: ![Active/Passive zone deployment](./media/sap-ha-availability-zones/active_passive_zones_deployment.png)
@@ -134,19 +171,19 @@ The following considerations apply for this configuration:
- For the load balancers of the failover clusters of SAP Central Services and the DBMS layer, you need to use the [Standard SKU Azure Load Balancer](../../../load-balancer/load-balancer-standard-availability-zones.md). The Basic Load Balancer won't work across zones. - The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched across zones. You don't need separate virtual networks for each zone. - For all virtual machines you deploy, you need to use [Azure Managed Disks](https://azure.microsoft.com/services/managed-disks/). Unmanaged disks aren't supported for zonal deployments.-- Azure Premium Storage and [Ultra SSD storage](../../disks-types.md#ultra-disk) don't support any type of storage replication across zones. The application (DBMS or SAP Central Services) must replicate important data.
+- Azure Premium Storage, [Ultra SSD storage](../../disks-types.md#ultra-disk), or ANF don't support any type of storage replication across zones. The application (DBMS or SAP Central Services) must replicate important data.
- The same is true for the shared sapmnt directory, which is a shared disk (Windows), a CIFS share (Windows), or an NFS share (Linux). You need to use a technology that replicates these shared disks or shares between the zones. These technologies are supported: - For Windows, a cluster solution that uses SIOS DataKeeper, as documented in [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk in Azure](./sap-high-availability-guide-wsfc-shared-disk.md). - For SUSE Linux, an NFS share that's built as documented in [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server](./high-availability-guide-suse-nfs.md). Currently, the solution that uses Microsoft Scale-Out File Server, as documented in [Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances](./sap-high-availability-infrastructure-wsfc-file-share.md), is not supported across zones.-- The third zone is used to host the SBD device in case you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-azure-fence-agent-stonith-device) or additional application instances.-- You should deploy dormant VMs in the passive zone (from a DBMS point of view) so you can start application resources in case of a zone failure.
+- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-azure-fence-agent-stonith-device) and use SBD devices instead of the Azure Fencing Agent. Or for additional application instances.
+- You should deploy dormant VMs in the passive zone (from a DBMS point of view) so you can start application resources in the case of a zone failure.
- [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) is currently unable to replicate active VMs to dormant VMs between zones. -- You should invest in automation that allows you, in case of a zone failure, to automatically start the SAP application layer in the second zone.
+- You should invest in automation that allows you to automatically start the SAP application layer in the second zone if a zonal outage occurs.
## Combined high availability and disaster recovery configuration
-Microsoft doesn't share any information about geographical distances between the facilities that host different Azure Availability Zones in an Azure region. Still, some customers are using zones for a combined HA and DR configuration that promises a recovery point objective (RPO) of zero. This means that you shouldn't lose any committed database transactions even in the case of disaster recovery.
+Microsoft doesn't share any information about geographical distances between the facilities that host different Azure Availability Zones in an Azure region. Still, some customers are using zones for a combined HA and DR configuration that promises a recovery point objective (RPO) of zero. An RPO of zero means that you shouldn't lose any committed database transactions even in the case of disaster recovery.
> [!NOTE] > We recommend that you use a configuration like this only in certain circumstances. For example, you might use it when data can't leave the Azure region for security or compliance reasons.
@@ -160,7 +197,7 @@ The following considerations apply for this configuration:
- You're either assuming that there's a significant distance between the facilities hosting an Availability Zone or you're forced to stay within a certain Azure region. Availability sets can't be deployed in Azure Availability Zones. To compensate for that, you can use Azure proximity placement groups as documented in the article [Azure Proximity Placement Groups for optimal network latency with SAP applications](sap-proximity-placement-scenarios.md). - When you use this architecture, you need to monitor the status closely and try to keep the active DBMS and SAP Central Services instances in the same zone as your deployed application layer. In case of a failover of SAP Central Service or the DBMS instance, you want to make sure that you can manually fail back into the zone with the SAP application layer deployed as quickly as possible. - You should have production application instances pre-installed in the VMs that run the active QA application instances.-- In case of a zone failure, shut down the QA application instances and start the production instances instead. Note that you need to use virtual names for the application instances to make this work.
+- In a zonal failure case, shut down the QA application instances and start the production instances instead. You need to use virtual names for the application instances to make this work.
- For the load balancers of the failover clusters of SAP Central Services and the DBMS layer, you need to use the [Standard SKU Azure Load Balancer](../../../load-balancer/load-balancer-standard-availability-zones.md). The Basic Load Balancer won't work across zones. - The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched across zones. You don't need separate virtual networks for each zone. - For all virtual machines you deploy, you need to use [Azure Managed Disks](https://azure.microsoft.com/services/managed-disks/). Unmanaged disks aren't supported for zonal deployments.
@@ -170,7 +207,7 @@ The following considerations apply for this configuration:
- For SUSE Linux, an NFS share that's built as documented in [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server](./high-availability-guide-suse-nfs.md). Currently, the solution that uses Microsoft Scale-Out File Server, as documented in [Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances](./sap-high-availability-infrastructure-wsfc-file-share.md), is not supported across zones.-- The third zone is used to host the SBD device in case you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-azure-fence-agent-stonith-device) or additional application instances.
+- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-azure-fence-agent-stonith-device) and use SBD devices instead of the Azure Fencing Agent.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-proximity-placement-scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-proximity-placement-scenarios.md
@@ -13,7 +13,7 @@ ms.subservice: workloads
ms.topic: article ms.tgt_pltfrm: vm-linux ms.workload: infrastructure
-ms.date: 09/29/2020
+ms.date: 12/29/2020
ms.author: juergent ms.custom: H1Hack27Feb2017
@@ -39,6 +39,8 @@ To give you a possibility to optimize network latency, Azure offers [proximity p
> - Only on granularity of a single SAP system and not for a whole system landscape or a complete SAP landscape > - In a way to keep the different VM types and the number of VMs within a proximity placement group to a minimum
+Assume that if you deploy VMs by specifying Availability Zones and select the same Availability Zones, the network latency between these VMs should be sufficient to operate SAP NetWeaver and S/4HANA systems with satisfactory performance and throughput. This assumption is independent of whether a particular zone is built out of one datacenter or multiple datacenters. The only reason for using proximity placement groups in zonal deployments is the case where you want to allocate Azure availability set deployed VMs together with zonally deployed VMs.
+ ## What are proximity placement groups? An Azure proximity placement group is a logical construct. When a proximity placement group is defined, it's bound to an Azure region and an Azure resource group. When VMs are deployed, a proximity placement group is referenced by:
virtual-network https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-public-ip-address-upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-network-public-ip-address-upgrade.md
@@ -16,7 +16,7 @@ ms.tgt_pltfrm: na
ms.workload: infrastructure-services ms.date: 12/08/2020 ms.author: blehr
-ms.custom: references_regions , devx-track-azurecli
+ms.custom: references_regions
--- # Upgrade public IP addresses