Updates from: 09/07/2021 03:04:06
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Customize Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/customize-ui.md
Previously updated : 06/27/2021 Last updated : 09/06/2021
Keep these things in mind when you configure company branding in Azure AD B2C:
* The banner logo appears in the verification emails sent to your users when they initiate a sign-up user flow.

## Enable company branding in user flow pages

Once you've configured company branding, enable it in your user flows.

1. In the left menu of the Azure portal, select **Azure AD B2C**.
If you'd like to brand all pages in the user flow, set the page layout version f
::: zone pivot="b2c-custom-policy"
+## Enable company branding in custom policy pages
+ Once you've configured company branding, enable it in your custom policy. Configure the [page layout version](contentdefinitions.md#migrating-to-page-layout) with page `contract` version for *all* of the content definitions in your custom policy. The format of the value must contain the word `contract`: _urn:com:microsoft:aad:b2c:elements:**contract**:page-name:version_. If your custom policies use an old **DataUri** value, learn how to [migrate to page layout](contentdefinitions.md#migrating-to-page-layout) with page version. The following example shows the content definitions with their corresponding page contract and the *Ocean Blue* page template:
aks Api Server Authorized Ip Ranges https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/api-server-authorized-ip-ranges.md
Add another IP address to the approved ranges with the following command.
```bash
# Retrieve your IP address
-CURRENT_IP=$(dig @resolver1.opendns.com ANY myip.opendns.com +short)
+CURRENT_IP=$(dig +short "myip.opendns.com" "@resolver1.opendns.com")
# Add to AKS approved list
az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges $CURRENT_IP/32
```
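To confirm that the update took effect, you can read the authorized ranges back from the cluster. A minimal sketch, assuming the same `$RG` and `$AKSNAME` variables used above:

```azurecli
# List the API server authorized IP ranges currently configured on the cluster
az aks show -g $RG -n $AKSNAME --query apiServerAccessProfile.authorizedIpRanges -o tsv
```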
azure-monitor Alerts Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-classic-portal.md
description: Learn how to use Azure portal, CLI or PowerShell to create, view an
Previously updated : 02/14/2021 Last updated : 09/06/2021
# Create, view, and manage classic metric alerts using Azure Monitor
> [!WARNING]
-> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users, though still in limited use until **31 May 2021**. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
+> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
> Classic metric alerts in Azure Monitor provide a way to get notified when one of your metrics crosses a threshold. Classic metric alerts are an older feature that allows alerting only on non-dimensional metrics. A newer feature called metric alerts has improved functionality over classic metric alerts. You can learn more about the new metric alerts functionality in [metric alerts overview](./alerts-metric-overview.md). In this article, we describe how to create, view, and manage classic metric alert rules through the Azure portal, Azure CLI, and PowerShell.
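For example, a classic metric alert rule on a virtual machine's CPU could be created from the Azure CLI along the following lines. This is only a sketch: the rule name, resource group, and target resource ID are placeholders, and the classic `az monitor alert` commands are deprecated in favor of `az monitor metrics alert`.

```azurecli
# Create a classic metric alert rule that fires when average CPU exceeds 90% over 5 minutes
az monitor alert create --name cpu-high-classic \
    --resource-group MyResourceGroup \
    --target "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/MyVM" \
    --condition "Percentage CPU > 90 avg 5m"
```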
This section shows how to use PowerShell commands to create, view, and manage class
## Next steps
- [Create a classic metric alert with a Resource Manager template](./alerts-enable-template.md).
-- [Have a classic metric alert notify a non-Azure system using a webhook](./alerts-webhooks.md).
+- [Have a classic metric alert notify a non-Azure system using a webhook](./alerts-webhooks.md).
azure-monitor Alerts Classic.Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-classic.overview.md
Last updated 02/14/2021
# What are classic alerts in Microsoft Azure?
> [!NOTE]
-> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users, though still in limited use until **31 May 2021**. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
+> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
> Alerts allow you to configure conditions over data and be notified when the conditions match the latest monitoring data.
Get information about alert rules and how to configure them by using:
* Configure [Activity Log Alerts via Resource Manager](./alerts-activity-log.md)
* Review the [activity log alert webhook schema](activity-log-alerts-webhook.md)
* Learn more about [Action groups](./action-groups.md)
-* Configure [newer Alerts](alerts-metric.md)
+* Configure [newer Alerts](alerts-metric.md)
azure-monitor Alerts Enable Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-enable-template.md
description: Learn how to use a Resource Manager template to create a classic me
Previously updated : 02/14/2021 Last updated : 09/06/2021
# Create a classic metric alert with a Resource Manager template
> [!WARNING]
-> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users, though still in limited use until **31 May 2021**. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
+> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
> This article shows how you can use an [Azure Resource Manager template](../../azure-resource-manager/templates/syntax.md) to configure Azure classic metric alerts. This enables you to automatically set up alerts on your resources when they are created to ensure that all resources are monitored correctly.
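As an illustration, a template that defines a `Microsoft.Insights/alertrules` resource could be deployed with the Azure CLI roughly as follows; the resource group, template file name, and `alertName` parameter are placeholders for this sketch.

```azurecli
# Deploy a Resource Manager template that contains a classic metric alert rule
az deployment group create \
    --resource-group MyResourceGroup \
    --template-file classic-alert.json \
    --parameters alertName=cpu-high-classic
```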
An alert on a Resource Manager template is most often useful when creating an al
## Next Steps
* [Read more about Alerts](./alerts-overview.md)
* [Add Diagnostic Settings](../essentials/resource-manager-diagnostic-settings.md) to your Resource Manager template
-* For the JSON syntax and properties, see [Microsoft.Insights/alertrules](/azure/templates/microsoft.insights/alertrules) template reference.
+* For the JSON syntax and properties, see [Microsoft.Insights/alertrules](/azure/templates/microsoft.insights/alertrules) template reference.
azure-monitor Alerts Understand Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-understand-migration.md
Title: Understand migration for Azure Monitor alerts
description: Understand how the alerts migration works and troubleshoot problems.
Previously updated : 02/14/2021 Last updated : 09/06/2021
# Understand migration options to newer alerts
-Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users, though still in limited use until **31 May 2021**. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
+Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
This article explains how the manual migration and voluntary migration tool work, which will be used to migrate remaining alert rules. It also describes solutions for some common problems.
As part of the migration, new metric alerts and new action groups will be create
## Next steps
- [How to use the migration tool](alerts-using-migration-tool.md)
-- [Prepare for the migration](alerts-prepare-migration.md)
+- [Prepare for the migration](alerts-prepare-migration.md)
azure-monitor Alerts Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-webhooks.md
description: Learn how to reroute Azure metric alerts to other, non-Azure system
Previously updated : 02/14/2021 Last updated : 09/06/2021
# Call a webhook with a classic metric alert in Azure Monitor
> [!WARNING]
-> This article describes how to use older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users, though still in limited use until **31 May 2021**. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
+> This article describes how to use older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
> You can use webhooks to route an Azure alert notification to other systems for post-processing or custom actions. You can use a webhook on an alert to route it to services that send SMS messages, to log bugs, to notify a team via chat or messaging services, or for various other actions.
azure-monitor Monitoring Classic Retirement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/monitoring-classic-retirement.md
Azure Monitor is a unified monitoring stack that supports 'One Metric' and 'One Alerts' across Azure resources. See more information in this [blog post](https://azure.microsoft.com/blog/new-full-stack-monitoring-capabilities-in-azure-monitor/). The new Azure monitoring and alerting platform has been built to be faster, smarter, and extensible, keeping pace with the growing expanse of cloud computing and in line with the Microsoft Intelligent Cloud philosophy.
-With the new Azure monitoring and alerting platform in place, classic alerts in Azure Monitor are retired for public cloud users, though still in limited use until **31 May 2021**. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
+With the new Azure monitoring and alerting platform in place, classic alerts in Azure Monitor are retired for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**.
![Classic alert in Azure portal](media/monitoring-classic-retirement/monitor-alert-screen2.png)
The following are examples of cases where you'll incur a charge for your alert r
## Next steps
* Learn about the [new unified Azure Monitor](../overview.md).
-* Learn more about the new [Azure Alerts](./alerts-overview.md).
+* Learn more about the new [Azure Alerts](./alerts-overview.md).
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-dedicated-clusters.md
Azure Monitor Logs Dedicated Clusters are a deployment option that enables advan
Dedicated clusters require customers to commit to a capacity of at least 1 TB of data ingestion per day. You can migrate an existing workspace to a dedicated cluster with no data loss or service interruption.
-The capabilities that require dedicated clusters are:
+The capabilities that require dedicated clusters:
- **[Customer-managed Keys](../logs/customer-managed-keys.md)** - Encrypt the cluster data using keys that are provided and controlled by the customer.
- **[Lockbox](../logs/customer-managed-keys.md#customer-lockbox-preview)** - Control Microsoft support engineers' access requests to your data.
The capabilities that require dedicated clusters are:
## Management
-Dedicated clusters are managed with an Azure resource that represents Azure Monitor Log clusters. All operations are done on this resource using PowerShell or the REST API.
+Dedicated clusters are managed with an Azure resource that represents Azure Monitor Log clusters. Operations are performed programmatically using the [CLI](https://docs.microsoft.com/cli/azure/monitor/log-analytics/cluster?view=azure-cli-latest), [PowerShell](https://docs.microsoft.com/powershell/module/az.operationalinsights), or the [REST API](https://docs.microsoft.com/rest/api/loganalytics/clusters).
-Once the cluster is created, it can be configured and workspaces linked to it. When a workspace is linked to a cluster, new data sent to the workspace resides on the cluster. Only workspaces that are in the same region as the cluster can be linked to the cluster. Workspaces can be unlinked from a cluster with some limitations. More detail on these limitations is included in this article.
-
-Data ingested to dedicated clusters is encrypted twice, once at the service level using Microsoft-managed keys or [customer-managed key](../logs/customer-managed-keys.md), and once at the infrastructure level using two different encryption algorithms and two different keys. [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data. Dedicated cluster also allows you to protect your data with [Lockbox](../logs/customer-managed-keys.md#customer-lockbox-preview) control.
+Once a cluster is created, workspaces can be linked to it, and new data ingested to them is stored on the cluster. Workspaces can be unlinked from a cluster at any time, and new data is then stored in shared Log Analytics clusters. Linking and unlinking doesn't affect your queries or your access to data ingested before and after the operation, subject to retention in the workspaces. The cluster and workspaces must be in the same region to allow linking.
All operations on the cluster level require the `Microsoft.OperationalInsights/clusters/write` action permission on the cluster. This permission can be granted via the Owner or Contributor role, which contains the `*/write` action, or via the Log Analytics Contributor role, which contains the `Microsoft.OperationalInsights/*` action. For more information on Log Analytics permissions, see [Manage access to log data and workspaces in Azure Monitor](./manage-access.md).

## Cluster pricing model
-Log Analytics Dedicated Clusters use a Commitment Tier pricing model of at least 500 GB/day. Any usage above the tier level will be billed at effective per-GB rate of that Commitment Tier. Commitment Tier pricing information is available at the [Azure Monitor pricing page]( https://azure.microsoft.com/pricing/details/monitor/).
+Log Analytics Dedicated Clusters use a Commitment Tier (formerly called capacity reservations) pricing model of at least 500 GB/day. Any usage above the tier level is billed at the effective per-GB rate of that Commitment Tier. Commitment Tier pricing information is available at the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
The cluster Commitment Tier level is configured programmatically with Azure Resource Manager using the `Capacity` parameter under `Sku`. The `Capacity` is specified in units of GB and can have values of 500, 1000, 2000 or 5000 GB/day.
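For instance, creating a dedicated cluster with a 1000 GB/day Commitment Tier from the Azure CLI might look like the following sketch; the resource group, cluster name, and region are placeholders.

```azurecli
# Create a dedicated cluster with a 1000 GB/day Commitment Tier (provisioning can take a few hours)
az monitor log-analytics cluster create \
    --resource-group MyResourceGroup \
    --name MyCluster \
    --location eastus2 \
    --sku-capacity 1000
```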
Authorization: Bearer <token>
You must specify the following properties when you create a new dedicated cluster:
-- **ClusterName**: Used for administrative purposes. Users are not exposed to this name.
-- **ResourceGroupName**: Resource group for the dedicated cluster. You should use a central IT resource group because clusters are usually shared by many teams in the organization. For more design considerations, review [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md).
-- **Location**: A cluster is located in a specific Azure region. Only workspaces located in this region can be linked to this cluster.
-- **SkuCapacity**: You must specify the Commitment Tier (sku) when creating a cluster resource. The Commitment Tier can be set to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Manage Costs for Log Analytics clusters](./manage-cost-storage.md#log-analytics-dedicated-clusters).
-
+- **ClusterName**
+- **ResourceGroupName**: You should use a central IT resource group because clusters are usually shared by many teams in the organization. For more design considerations, review [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md).
+- **Location**
+- **SkuCapacity**: The Commitment Tier (formerly called capacity reservations) can be set to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Manage Costs for Log Analytics clusters](./manage-cost-storage.md#log-analytics-dedicated-clusters).
-> [!NOTE]
-> Commitment tiers were formerly called capacity reservations.
+The user account that creates the clusters must have the standard Azure resource creation permission `Microsoft.Resources/deployments/*` and the cluster write permission `Microsoft.OperationalInsights/clusters/write`, granted through a role assignment that includes this specific action, `Microsoft.OperationalInsights/*`, or `*/write`.
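As an illustration, the required permission could be granted by assigning the Log Analytics Contributor role at the subscription scope; the assignee and subscription ID below are placeholders.

```azurecli
# Grant the Log Analytics Contributor role, which includes the Microsoft.OperationalInsights/* action
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Log Analytics Contributor" \
    --scope "/subscriptions/<subscription-id>"
```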
After you create your cluster resource, you can edit additional properties such as *sku*, *keyVaultProperties*, or *billingType*. See more details below. You can have up to 2 active clusters per subscription per region. If the cluster is deleted, it is still reserved for 14 days. You can have up to 4 reserved clusters per subscription per region (active or recently deleted).
-> [!WARNING]
-> Cluster creation triggers resource allocation and provisioning. This operation can take a few hours to complete. It is recommended to run it asynchronously.
-
-The user account that creates the clusters must have the standard Azure resource creation permission: `Microsoft.Resources/deployments/*` and cluster write permission `Microsoft.OperationalInsights/clusters/write` by having in their role assignments this specific action or `Microsoft.OperationalInsights/*` or `*/write`.
+> [!NOTE]
+> Cluster creation triggers resource allocation and provisioning. This operation can take a few hours to complete.
+> A dedicated cluster is billed once provisioned, regardless of data ingestion. It's recommended to prepare the deployment in advance to expedite provisioning and the linking of workspaces to the cluster. Verify the following:
+> - A list of the initial workspaces to be linked to the cluster is identified
+> - You have permissions to the subscription intended for the cluster and to any workspace to be linked
**CLI** ```azurecli
The *principalId* GUID is generated by the managed identity service at cluster c
## Link a workspace to a cluster
-When a Log Analytics workspace is linked to a dedicated cluster, new data that is ingested into the workspace is routed to the new cluster while existing data remains on the existing cluster. If the dedicated cluster is encrypted using customer-managed keys (CMK), only new data is encrypted with the key. The system abstracts this difference, so you can query the workspace as usual while the system performs cross-cluster queries in the background.
+When a Log Analytics workspace is linked to a dedicated cluster, new data ingested to the workspace is routed to the new cluster while existing data remains on the existing cluster. If the dedicated cluster is encrypted using customer-managed keys (CMK), only new data is encrypted with the key. The system abstracts this difference, so you can query the workspace as usual while the system performs cross-cluster queries in the background.
A cluster can be linked to up to 1,000 workspaces. Linked workspaces are located in the same region as the cluster. To protect the system backend and avoid fragmentation of data, a workspace can't be linked to a cluster more than twice a month.
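Linking an existing workspace to a dedicated cluster can also be done from the Azure CLI. A sketch, assuming placeholder resource group, workspace, and cluster names:

```azurecli
# Link a workspace to a dedicated cluster by referencing the cluster's resource ID
az monitor log-analytics workspace linked-service create \
    --resource-group MyResourceGroup \
    --workspace-name MyWorkspace \
    --name cluster \
    --write-access-resource-id "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.OperationalInsights/clusters/MyCluster"
```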
azure-monitor Roles Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/roles-permissions-security.md
A similar pattern can be followed with event hubs, but first you need to create
$role.Name = "Monitoring Event Hub Listener"
$role.Description = "Can get the key to listen to an event hub streaming monitoring data."
$role.Actions.Clear()
- $role.Actions.Add("Microsoft.ServiceBus/namespaces/authorizationrules/listkeys/action")
- $role.Actions.Add("Microsoft.ServiceBus/namespaces/Read")
+ $role.Actions.Add("Microsoft.EventHub/namespaces/authorizationrules/listkeys/action")
+ $role.Actions.Add("Microsoft.EventHub/namespaces/Read")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/mySubscription/resourceGroups/myResourceGroup/providers/Microsoft.ServiceBus/namespaces/mySBNameSpace")
New-AzRoleDefinition -Role $role
cloud-services-extended-support In Place Migration Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-common-errors.md
Common migration errors and mitigation steps.
| XrpVirtualNetworkMigrationError: Virtual network migration failure. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. |
| Deployment {0} in HostedService {1} belongs to Virtual Network {2}. Migrate Virtual Network {2} to migrate this HostedService {1}. | Refer to [Virtual Network migration](in-place-migration-technical-details.md#virtual-network-migration). |
| The current quota for Resource name in Azure Resource Manager is insufficient to complete migration. Current quota is {0}, additional needed is {1}. File a support request to raise the quota and retry migration once the quota has been raised. | Follow appropriate channels to request quota increase: <br>[Quota increase for networking resources](../azure-portal/supportability/networking-quota-requests.md) <br>[Quota increase for compute resources](../azure-portal/supportability/per-vm-quota-requests.md) |
+|XrpPaaSMigrationCscfgCsdefValidationMismatch: Migration could not be completed on deployment deployment-name in hosted service service-name because the deployment's metadata is stale. Please abort the migration and upgrade the deployment before retrying migration. Validation Message: The service name 'service-name' in the service definition file does not match the name 'service-name-in-config-file' in the service configuration file. | Match the service names in both the .csdef and .cscfg files. |
## Next steps
-For more information on the requirements of migration, see [Technical details of migrating to Azure Cloud Services (extended support)](in-place-migration-technical-details.md)
+For more information on the requirements of migration, see [Technical details of migrating to Azure Cloud Services (extended support)](in-place-migration-technical-details.md)
data-factory Transform Data Using Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-using-databricks-notebook.md
Previously updated : 06/07/2021 Last updated : 08/31/2021
# Run a Databricks notebook with the Databricks Notebook Activity in Azure Data Factory
For an eleven-minute introduction and demonstration of this feature, watch the f
## Create a data factory
-1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
+1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
-1. Select **Create a resource** on the left menu, select **Analytics**, and then select **Data Factory**.
+1. Select **Create a resource** on the Azure portal menu, select **Integration**, and then select **Data Factory**.
- ![Create a new data factory](media/transform-data-using-databricks-notebook/new-azure-data-factory-menu.png)
+ :::image type="content" source="./media/doc-common-process/new-azure-data-factory-menu.png" alt-text="Screenshot showing Data Factory selection in the New pane.":::
-1. In the **New data factory** pane, enter **ADFTutorialDataFactory** under **Name**.
+1. On the **Create Data Factory** page, under the **Basics** tab, select the Azure **Subscription** in which you want to create the data factory.
- The name of the Azure data factory must be *globally unique*. If you see the following error, change the name of the data factory. (For example, use **\<yourname\>ADFTutorialDataFactory**). For naming rules for Data Factory artifacts, see the [Data Factory - naming rules](./naming-rules.md) article.
+1. For **Resource Group**, take one of the following steps:
+
+ 1. Select an existing resource group from the drop-down list.
+
+ 1. Select **Create new**, and enter the name of a new resource group.
- ![Provide a name for the new data factory](media/transform-data-using-databricks-notebook/new-azure-data-factory.png)
+ To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
-1. For **Subscription**, select your Azure subscription in which you want to create the data factory.
+1. For **Region**, select the location for the data factory.
-1. For **Resource Group**, take one of the following steps:
-
- - Select **Use existing** and select an existing resource group from the drop-down list.
-
- - Select **Create new** and enter the name of a resource group.
+ The list shows only locations that Data Factory supports, and where your Azure Data Factory metadata will be stored. The associated data stores (like Azure Storage and Azure SQL Database) and computes (like Azure HDInsight) that Data Factory uses can run in other regions.
- Some of the steps in this quickstart assume that you use the name **ADFTutorialResourceGroup** for the resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
+1. For **Name**, enter **ADFTutorialDataFactory**.
+
+ The name of the Azure data factory must be *globally unique*. If you see the following error, change the name of the data factory (For example, use **&lt;yourname&gt;ADFTutorialDataFactory**). For naming rules for Data Factory artifacts, see the [Data Factory - naming rules](./naming-rules.md) article.
-1. For **Version**, select **V2**.
+ :::image type="content" source="./media/doc-common-process/name-not-available-error.png" alt-text="Screenshot showing the Error when a name is not available.":::
-1. For **Location**, select the location for the data factory.
+1. For **Version**, select **V2**.
- For a list of Azure regions in which Data Factory is currently available, select the regions that interest you on the following page, and then expand **Analytics** to locate **Data Factory**: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). The data stores (like Azure Storage and Azure SQL Database) and computes (like HDInsight) that Data Factory uses can be in other regions.
-1. Select **Create**.
+1. Select **Next: Git configuration**, and then select the **Configure Git later** check box.
+1. Select **Review + create**, and select **Create** after validation passes.
-1. After the creation is complete, you see the **Data factory** page. Select the **Author & Monitor** tile to start the Data Factory UI application on a separate tab.
+1. After the creation is complete, select **Go to resource** to navigate to the **Data Factory** page. Select the **Open Azure Data Factory Studio** tile to start the Azure Data Factory user interface (UI) application on a separate browser tab.
- ![Launch the data factory UI application](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image4.png)
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Screenshot showing the home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
## Create linked services
In this section, you author a Databricks linked service. This linked service con
1. On the home page, switch to the **Manage** tab in the left panel.
- ![Edit the new linked service](media/doc-common-process/get-started-page-manage-button.png)
+ ![Screenshot showing the Manage tab.](media/doc-common-process/get-started-page-manage-button.png)
-1. Select **Connections** at the bottom of the window, and then select **+ New**.
+1. Select **Linked services** under **Connections**, and then select **+ New**.
- ![Create a new connection](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image6.png)
+ ![Screenshot showing how to create a new connection.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image-6.png)
-1. In the **New Linked Service** window, select **Compute** \> **Azure Databricks**, and then select **Continue**.
+1. In the **New linked service** window, select **Compute** &gt; **Azure Databricks**, and then select **Continue**.
- ![Specify a Databricks linked service](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image7.png)
+ ![Screenshot showing how to specify a Databricks linked service.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image-7.png)
-1. In the **New Linked Service** window, complete the following steps:
+1. In the **New linked service** window, complete the following steps:
- 1. For **Name**, enter ***AzureDatabricks\_LinkedService***
+ 1. For **Name**, enter ***AzureDatabricks\_LinkedService***.
- 1. Select the appropriate **Databricks workspace** that you will run your notebook in
+ 1. Select the appropriate **Databricks workspace** that you will run your notebook in.
- 1. For **Select cluster**, select **New job cluster**
+ 1. For **Select cluster**, select **New job cluster**.
- 1. For **Domain/ Region**, info should auto-populate
+ 1. For **Databricks Workspace URL**, the information should be auto-populated.
1. For **Access Token**, generate it from Azure Databricks workplace. You can find the steps [here](https://docs.databricks.com/api/latest/authentication.html#generate-token).
- 1. For **Cluster version**, select **4.2** (with Apache Spark 2.3.1, Scala 2.11)
+ 1. For **Cluster version**, select **4.2** (with Apache Spark 2.3.1, Scala 2.11).
1. For **Cluster node type**, select **Standard\_D3\_v2** under **General Purpose (HDD)** category for this tutorial. 1. For **Workers**, enter **2**.
- 1. Select **Finish**
+ 1. Select **Create**.
- ![Finish creating the linked service](media/transform-data-using-databricks-notebook/new-databricks-linkedservice.png)
+ ![Screenshot showing the configuration of the new Azure Databricks linked service.](media/transform-data-using-databricks-notebook/new-databricks-linked-service.png)
## Create a pipeline 1. Select the **+** (plus) button, and then select **Pipeline** on the menu.
- ![Buttons for creating a new pipeline](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image9.png)
+ ![Screenshot showing buttons for creating a new pipeline.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image-9.png)
-1. Create a **parameter** to be used in the **Pipeline**. Later you pass this parameter to the Databricks Notebook Activity. In the empty pipeline, click on the **Parameters** tab, then **New** and name it as '**name**'.
+1. Create a **parameter** to be used in the **Pipeline**. Later you pass this parameter to the Databricks Notebook Activity. In the empty pipeline, select the **Parameters** tab, then select **+ New** and name it as '**name**'.
- ![Create a new parameter](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image10.png)
+ ![Screenshot showing how to create a new parameter.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image-10.png)
- ![Create the name parameter](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image11.png)
+ ![Screenshot showing how to create the name parameter.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image-11.png)
1. In the **Activities** toolbox, expand **Databricks**. Drag the **Notebook** activity from the **Activities** toolbox to the pipeline designer surface.
- ![Drag the notebook to the designer surface](media/transform-data-using-databricks-notebook/new-adf-pipeline.png)
+ ![Screenshot showing how to drag the notebook to the designer surface.](media/transform-data-using-databricks-notebook/new-adf-pipeline.png)
1. In the properties for the **Databricks** **Notebook** activity window at the bottom, complete the following steps:
- a. Switch to the **Azure Databricks** tab.
+ 1. Switch to the **Azure Databricks** tab.
- b. Select **AzureDatabricks\_LinkedService** (which you created in the previous procedure).
+ 1. Select **AzureDatabricks\_LinkedService** (which you created in the previous procedure).
- c. Switch to the **Settings** tab
+ 1. Switch to the **Settings** tab.
- c. Browse to select a Databricks **Notebook path**. Let's create a notebook and specify the path here. You get the Notebook Path by following the next few steps.
+ 1. Browse to select a Databricks **Notebook path**. Let's create a notebook and specify the path here. You get the Notebook Path by following the next few steps.
- 1. Launch your Azure Databricks Workspace
+ 1. Launch your Azure Databricks Workspace.
1. Create a **New Folder** in Workspace and call it **adftutorial**.
- ![Create a new folder](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image13.png)
+ ![Screenshot showing how to create a new folder.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image13.png)
- 1. [Create a new notebook](https://docs.databricks.com/user-guide/notebooks/index.html#creating-a-notebook) (Python), let's call it **mynotebook** under **adftutorial** Folder, click **Create.**
+ 1. [Create a new notebook](https://docs.databricks.com/user-guide/notebooks/index.html#creating-a-notebook) (Python); let's call it **mynotebook** under the **adftutorial** folder, and then click **Create.**
- ![Create a new notebook](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image14.png)
+ ![Screenshot showing how to create a new notebook.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image14.png)
- ![Set the properties of the new notebook](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image15.png)
+ ![Screenshot showing how to set the properties of the new notebook.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image15.png)
1. In the newly created notebook "mynotebook" add the following code:
In this section, you author a Databricks linked service. This linked service con
print (y) ```
- ![Create widgets for parameters](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image16.png)
+ ![Screenshot showing how to create widgets for parameters.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image16.png)
- 1. The **Notebook Path** in this case is **/adftutorial/mynotebook**
+ 1. The **Notebook Path** in this case is **/adftutorial/mynotebook**.
-1. Switch back to the **Data Factory UI authoring tool**. Navigate to **Settings** Tab under the **Notebook1 Activity**.
+1. Switch back to the **Data Factory UI authoring tool**. Navigate to **Settings** Tab under the **Notebook1** activity.
- a. **Add Parameter** to the Notebook activity. You use the same parameter that you added earlier to the **Pipeline**.
+ a. Add a **parameter** to the Notebook activity. You use the same parameter that you added earlier to the **Pipeline**.
- ![Add a parameter](media/transform-data-using-databricks-notebook/new-adf-parameters.png)
+ ![Screenshot showing how to add a parameter.](media/transform-data-using-databricks-notebook/new-adf-parameters.png)
b. Name the parameter as **input** and provide the value as expression **\@pipeline().parameters.name**.
-1. To validate the pipeline, select the **Validate** button on the toolbar. To close the validation window, select the **\>\>** (right arrow) button.
+1. To validate the pipeline, select the **Validate** button on the toolbar. To close the validation window, select the **Close** button.
- ![Validate the pipeline](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image18.png)
+ ![Screenshot showing how to validate the pipeline.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image-18.png)
-1. Select **Publish All**. The Data Factory UI publishes entities (linked services and pipeline) to the Azure Data Factory service.
+1. Select **Publish all**. The Data Factory UI publishes entities (linked services and pipeline) to the Azure Data Factory service.
- ![Publish the new data factory entities](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image19.png)
+ ![Screenshot showing how to publish the new data factory entities.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image-19.png)
## Trigger a pipeline run
-Select **Trigger** on the toolbar, and then select **Trigger Now**.
+Select **Add trigger** on the toolbar, and then select **Trigger now**.
-![Select the Trigger Now command](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image20.png)
+![Screenshot showing how to select the 'Trigger now' command.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image-20.png)
-The **Pipeline Run** dialog box asks for the **name** parameter. Use **/path/filename** as the parameter here. Click **Finish.**
+The **Pipeline run** dialog box asks for the **name** parameter. Use **/path/filename** as the parameter here. Select **OK**.
-![Provide a value for the name parameters](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image21.png)
+![Screenshot showing how to provide a value for the name parameters.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image-21.png)
## Monitor the pipeline run

1. Switch to the **Monitor** tab. Confirm that you see a pipeline run. It takes approximately 5-8 minutes to create a Databricks job cluster, where the notebook is executed.
- ![Monitor the pipeline](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image22.png)
+ ![Screenshot showing how to monitor the pipeline.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image22.png)
1. Select **Refresh** periodically to check the status of the pipeline run.

1. To see activity runs associated with the pipeline run, select **View Activity Runs** in the **Actions** column.
- ![View the activity runs](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image23.png)
+ ![Screenshot showing how to view the activity runs.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image23.png)
You can switch back to the pipeline runs view by selecting the **Pipelines** link at the top.
You can switch back to the pipeline runs view by selecting the **Pipelines** lin
You can log on to the **Azure Databricks workspace**, go to **Clusters** and you can see the **Job** status as *pending execution, running, or terminated*.
-![View the job cluster and the job](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image24.png)
+![Screenshot showing how to view the job cluster and the job.](media/transform-data-using-databricks-notebook/databricks-notebook-activity-image24.png)
You can click on the **Job name** and navigate to see further details. On successful run, you can validate the parameters passed and the output of the Python notebook.
-![View the run details and output](media/transform-data-using-databricks-notebook/databricks-output.png)
+![Screenshot showing how to view the run details and output.](media/transform-data-using-databricks-notebook/databricks-output.png)
## Next steps
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/tutorial-onboarding.md
+
+ Title: Azure Defender for IoT trial setup
+description: In this tutorial, you will learn how to onboard to Azure Defender for IoT with a virtual sensor, on a virtual machine, with a trial subscription of Azure Defender for IoT.
+++ Last updated : 09/06/2021+++
+# Tutorial: Azure Defender for IoT trial setup
+
+This tutorial will help you learn how to onboard to Azure Defender for IoT with a virtual sensor deployed on a virtual machine, using a trial subscription of Azure Defender for IoT. It shows the optimal setup for someone who wants to test Azure Defender for IoT before signing up and incorporating it into their environment.
+
+By using virtual environments, along with the software needed to create a sensor, Defender for IoT allows you to:
+
+- Use passive, agentless network monitoring to gain a complete inventory of all your IoT and OT devices, their details, and how they communicate, with zero effect on the IoT and OT network.
+
+- Identify risks and vulnerabilities in your IoT and OT environment. For example, identify unpatched devices, open ports, unauthorized applications, and unauthorized connections. You can also identify changes to device configurations, PLC code, and firmware.
+
+- Detect anomalous or unauthorized activities with specialized IoT and OT-aware threat intelligence and behavioral analytics. You can even detect advanced threats missed by static IOCs, like zero-day malware, fileless malware, and living-off-the-land tactics.
+
+- Integrate with Azure Sentinel for a bird's-eye view of your entire organization. Implement unified IoT and OT security governance with integration into your existing workflows, including third-party tools like Splunk, IBM QRadar, and ServiceNow.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Onboard with Azure Defender for IoT
+> * Download the ISO for the virtual sensor
+> * Create a virtual machine for the sensor
+> * Install the virtual sensor software
+> * Configure a SPAN port
+> * Onboard, and activate the virtual sensor
+
+## Prerequisites
+
+- Permissions: Azure **Subscription Owners**, or **Subscription Contributors** level.
+
+- At least one device to monitor connected to a SPAN port on the switch.
+
+- Either VMware (ESXi 5.5 or later), or Hyper-V hypervisor (Windows 10 Pro or Enterprise) is installed and operational.
+
+- An Azure account. If you do not already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+
+## Onboard with Azure Defender for IoT
+
+To get started with Azure Defender for IoT, you must have a Microsoft Azure subscription. If you do not have a subscription, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+
+To evaluate Defender for IoT, you can use a trial subscription. The trial is valid for 30 days and supports up to 1000 committed devices. The trial allows you to deploy a virtual sensor on your network. Use the sensors to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more. The trial also allows you to deploy a virtual on-premises management console to view the aggregated information generated by the sensor.
+
+**To onboard a subscription to Azure Defender for IoT**:
+
+1. Navigate to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Search for, and select **Azure Defender for IoT**.
+
+1. Select **Onboard subscription**.
+
+ :::image type="content" source="media/tutorial-onboarding/onboard-subscription.png" alt-text="Screenshot of the selecting the onboard subscription button from the Getting started page.":::
+
+1. On the Pricing page, select **Start with a trial**.
+
+ :::image type="content" source="media/tutorial-onboarding/start-with-trial.png" alt-text="Screenshot of the start with a trial button to open the trial window.":::
+
+1. Select a subscription from the Onboard trial subscription pane and then select **Evaluate**.
+
+1. Confirm your evaluation.
+
+## Download the ISO for the virtual sensor
+
+The virtual appliances have minimum specifications that are required for both the sensor and management console. The following table shows the specifications needed for the sensor depending on your environment.
+
+### Virtual sensor
+
+| Type | Corporate | Enterprise | SMB |
+|--|--|--|--|
+| vCPU | 32 | 8 | 4 |
+| Memory | 32 GB | 32 GB | 8 GB |
+| Storage | 5.6 TB | 1.8 TB | 500 GB |
+
+**To download the ISO file for the virtual sensor**:
+
+1. Navigate to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Search for, and select **Azure Defender for IoT**.
+
+1. On the Getting started page, select the **Sensor** tab.
+
+1. Select **Download**.
+
+ :::image type="content" source="media/tutorial-onboarding/sensor-download.png" alt-text="Screenshot of the sensor tab, select download, to download the ISO file for the virtual sensor.":::
+
+## Create a virtual machine for the sensor
+
+The virtual sensor supports both VMware, and Hyper-V deployment options. Before you begin the installation, make sure you have the following items:
+
+- VMware (ESXi 5.5 or later), or Hyper-V hypervisor (Windows 10 Pro or Enterprise) installed and operational.
+
+- Available hardware resources for the virtual machine.
+
+- ISO installation file for the Azure Defender for IoT sensor.
+
+- Make sure the hypervisor is running.
+
+### Create the virtual machine for the sensor with ESXi
+
+**To create the virtual machine for the sensor (ESXi)**:
+
+1. Sign in to the ESXi, choose the relevant **datastore**, and select **Datastore Browser**.
+
+1. **Upload** the image and select **Close**.
+
+1. Go to **Virtual Machines**, and then select **Create/Register VM**.
+
+1. Select **Create new virtual machine**, and then select **Next**.
+
+1. Add a sensor name and choose:
+
+ - Compatibility: **&lt;latest ESXi version&gt;**
+
+ - Guest OS family: **Linux**
+
+ - Guest OS version: **Ubuntu Linux (64-bit)**
+
+1. Select **Next**.
+
+1. Choose the relevant datastore and select **Next**.
+
+1. Change the virtual hardware parameters according to the required [architecture](#download-the-iso-for-the-virtual-sensor).
+
+1. For **CD/DVD Drive 1**, select **Datastore ISO file** and choose the ISO file that you uploaded earlier.
+
+1. Select **Next** > **Finish**.
+
+1. Power on the VM, and open a console.
+
+### Create a virtual machine for the sensor with Hyper-V
+
+This procedure describes how to create a virtual machine by using Hyper-V.
+
+**To create a virtual machine with Hyper-V**:
+
+1. Create a virtual disk in Hyper-V Manager.
+
+1. Select **format = VHDX**.
+
+1. Select **type = Dynamic Expanding**.
+
+1. Enter the name and location for the VHD.
+
+1. Enter the required size (according to the [architecture](#download-the-iso-for-the-virtual-sensor)).
+
+1. Review the summary and select **Finish**.
+
+1. On the **Actions** menu, create a new virtual machine.
+
+1. Enter a name for the virtual machine.
+
+1. Select **Specify Generation** > **Generation 1**.
+
+1. Specify the memory allocation (according to the [architecture](#download-the-iso-for-the-virtual-sensor)) and select the check box for dynamic memory.
+
+1. Configure the network adaptor according to your server network topology.
+
+1. Connect the VHDX created previously to the virtual machine.
+
+1. Review the summary and select **Finish**.
+
+1. Right-click the new virtual machine and select **Settings**.
+
+1. Select **Add Hardware** and add a new network adapter.
+
+1. Select the virtual switch that will connect to the sensor management network.
+
+1. Allocate CPU resources (according to the [architecture](#download-the-iso-for-the-virtual-sensor)).
+
+1. Connect the management console's ISO image to a virtual DVD drive.
+
+1. Start the virtual machine.
+
+1. On the **Actions** menu, select **Connect** to continue the software installation.
+
+## Install the virtual sensor software with ESXi or Hyper-V
+
+Either ESXi, or Hyper-V can be used to install the software for the virtual sensor.
+
+**To install the software on the virtual sensor**:
+
+1. Open the virtual machine console.
+
+1. The VM will start from the ISO image, and the language selection screen will appear. Select **English**.
+
+1. Select the required [architecture](#download-the-iso-for-the-virtual-sensor).
+
+1. Define the appliance profile and network properties:
+
+ | Parameter | Configuration |
+ | -| - |
+ | **Hardware profile** | Based on the required [architecture](#download-the-iso-for-the-virtual-sensor). |
+ | **Management interface** | **ens192** |
+ | **Network parameters (provided by the customer)** | **management network IP address:** <br/>**subnet mask:** <br>**appliance hostname:** <br/>**DNS:** <br/>**default gateway:** <br/>**input interfaces:**|
+ | **bridge interfaces:** | There's no need to configure the bridge interface. This option is for special use cases only. |
+
+1. Enter **Y** to accept the settings.
+
+1. Sign-in credentials are automatically generated and presented. Copy the username and password to a safe place, because they're required to sign in and manage your device. The username and password will not be presented again.
+
+ - **Support**: The administrative user for user management.
+
+ - **CyberX**: The equivalent of root for accessing the appliance.
+
+1. The appliance restarts.
+
+1. Access the sensor via the IP address previously configured: `https://ip_address`.
+
+### Post-installation validation
+
+To validate the installation of a physical appliance, you need to perform many tests.
+
+The validation is available to both the **Support** and **CyberX** users.
+
+**To access the post validation tool**:
+
+1. Sign in to the sensor.
+
+1. Select **System Settings** from the left side pane.
+
+1. Select the :::image type="icon" source="media/tutorial-onboarding/system-statistics-icon.png" border="false"::: button.
+
+ :::image type="content" source="media/tutorial-onboarding/system-health-check-screen.png" alt-text="Screenshot of the system health check." lightbox="media/tutorial-onboarding/system-health-check-screen-expanded.png":::
+
+For post-installation validation, you must test to ensure the system is running, that you have the right version, and to verify that all of the input interfaces that were configured during the installation process are running.
+
+**To verify that the system is running**:
+
+1. Select **Appliance**, and ensure that each line item shows `Running` and the bottom line states `System is up`.
+
+1. Select **Version**, and ensure that the correct version appears.
+
+1. Select **ifconfig** to display the parameters for the appliance's physical interfaces, and ensure that they are correct.
+
+## Configure a SPAN port
+
+A vSwitch does not have mirroring capabilities, but you can use a workaround to implement a SPAN port. You can implement the workaround with either ESXi, or Hyper-V.
+
+### Configure a SPAN port with ESXi
+
+**To configure a SPAN port with ESXi**:
+
+1. Open vSwitch properties.
+
+1. Select **Add**.
+
+1. Select **Virtual Machine** > **Next**.
+
+1. Insert a network label **SPAN Network**, select **VLAN ID** > **All**, and then select **Next**.
+
+1. Select **Finish**.
+
+1. Select **SPAN Network** > **Edit**.
+
+1. Select **Security**, and verify that the **Promiscuous Mode** policy is set to **Accept** mode.
+
+1. Select **OK**, and then select **Close** to close the vSwitch properties.
+
+1. Open the **XSense VM** properties.
+
+1. For **Network Adapter 2**, select the **SPAN** network.
+
+1. Select **OK**.
+
+1. Connect to the sensor and verify that mirroring works.
+
+### Configure a SPAN port with Hyper-V
+
+Prior to starting you will need to:
+
+- Ensure that there is no instance of ClearPass VA running.
+
+- Ensure that SPAN is enabled on the data port, and not on the management port.
+
+- Ensure that the data port SPAN configuration is not configured with an IP address.
+
+**To configure a SPAN port with Hyper-V**:
+
+1. Open the Virtual Switch Manager.
+
+1. In the Virtual Switches list, select **New virtual network switch** > **External** as the dedicated spanned network adapter type.
+
+ :::image type="content" source="media/tutorial-onboarding/new-virtual-network.png" alt-text="Screenshot of selecting new virtual network and external before creating the virtual switch.":::
+
+1. Select **Create Virtual Switch**.
+
+1. Under connection type, select **External Network**.
+
+1. Ensure the checkbox for **Allow management operating system to share this network adapter** is checked.
+
+ :::image type="content" source="media/tutorial-onboarding/external-network.png" alt-text="Select external network, and allow the management operating system to share the network adapter.":::
+
+1. Select **OK**.
+
+#### Attach a ClearPass SPAN Virtual Interface to the virtual switch
+
+You are able to attach a ClearPass SPAN Virtual Interface to the Virtual Switch through Windows PowerShell, or through Hyper-V Manager.
+
+**To attach a ClearPass SPAN Virtual Interface to the virtual switch with PowerShell**:
+
+1. Select the newly added SPAN virtual switch, and add a new network adapter with the following command:
+
+ ```powershell
+ ADD-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 -Name Monitor -SwitchName vSwitch_Span
+ ```
+
+1. Enable port mirroring for the selected interface as the span destination with the following command:
+
+ ```powershell
+ Get-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 | ? Name -eq Monitor | Set-VMNetworkAdapter -PortMirroring Destination
+ ```
+
+ | Parameter | Description |
+ |--|--|
+ | VK-C1000V-LongRunning-650 | CPPM VA name |
+ |vSwitch_Span |Newly added SPAN virtual switch name |
+ |Monitor |Newly added adapter name |
+
+1. Select **OK**.
+
+These commands set the name of the newly added adapter hardware to be `Monitor`. If you are using Hyper-V Manager, the name of the newly added adapter hardware is set to `Network Adapter`.
+
+**To attach a ClearPass SPAN Virtual Interface to the virtual switch with Hyper-V Manager**:
+
+1. Under the Hardware list, select **Network Adapter**.
+
+1. In the Virtual Switch field, select **vSwitch_Span**.
+
+ :::image type="content" source="media/tutorial-onboarding/vswitch-span.png" alt-text="Screenshot of selecting the following options on the virtual switch screen.":::
+
+1. In the Hardware list, under the Network Adapter drop-down list, select **Advanced Features**.
+
+1. In the Port Mirroring section, select **Destination** as the mirroring mode for the new virtual interface.
+
+ :::image type="content" source="media/tutorial-onboarding/destination.png" alt-text="Screenshot of the selections needed to configure mirroring mode.":::
+
+1. Select **OK**.
+
+#### Enable Microsoft NDIS Capture Extensions for the Virtual Switch
+
+Microsoft NDIS Capture Extensions will need to be enabled for the new virtual switch.
+
+**To enable Microsoft NDIS Capture Extensions for the newly added virtual switch**:
+
+1. Open the Virtual Switch Manager on the Hyper-V host.
+
+1. In the Virtual Switches list, expand the virtual switch name `vSwitch_Span` and select **Extensions**.
+
+1. In the Switch Extensions field, select **Microsoft NDIS Capture**.
+
+ :::image type="content" source="media/tutorial-onboarding/microsoft-ndis.png" alt-text="Screenshot of enabling the Microsoft NDIS by selecting it from the switch extensions menu.":::
+
+1. Select **OK**.
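+
+The same extension can also be enabled and verified from PowerShell, as sketched below. `vSwitch_Span` is the SPAN virtual switch name used throughout this tutorial.
+
+```powershell
+# Enable the Microsoft NDIS Capture extension on the SPAN virtual switch.
+Enable-VMSwitchExtension -VMSwitchName "vSwitch_Span" -Name "Microsoft NDIS Capture"
+
+# Confirm that the extension is now enabled.
+Get-VMSwitchExtension -VMSwitchName "vSwitch_Span" -Name "Microsoft NDIS Capture" | Select-Object Name, Enabled
+```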
+
+#### Set the Mirroring Mode on the external port
+
+Mirroring mode now needs to be set on the external port of the new virtual switch so that it acts as the source.
+
+Configure the Hyper-V virtual switch (`vSwitch_Span`) to forward any traffic that arrives at the external source port to the virtual network adapter that you configured as the destination.
+
+Use the following PowerShell commands to set the external virtual switch port to source mirror mode:
+
+```powershell
+$ExtPortFeature=Get-VMSystemSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings"
+$ExtPortFeature.SettingData.MonitorMode=2
+Add-VMSwitchExtensionPortFeature -ExternalPort -SwitchName vSwitch_Span -VMSwitchExtensionFeature $ExtPortFeature
+```
+
+| Parameter | Description |
+|--|--|
+| vSwitch_Span | Newly added SPAN virtual switch name. |
+| MonitorMode=2 | Source |
+| MonitorMode=1 | Destination |
+| MonitorMode=0 | None |
+
+Use the following PowerShell command to verify the monitoring mode status:
+
+```powershell
+Get-VMSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings" -SwitchName vSwitch_Span -ExternalPort | Select-Object -ExpandProperty SettingData
+```
+
+| Parameter | Description |
+|--|--|
+| vSwitch_Span | Newly added SPAN virtual switch name |
+
+## Onboard and activate the virtual sensor
+
+Before you can start using your Defender for IoT sensor, you need to onboard the virtual sensor you created to your Azure subscription, and then download the virtual sensor's activation file to activate the sensor.
+
+### Onboard the virtual sensor
+
+**To onboard the virtual sensor:**
+
+1. Go to the **Welcome** page in the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
+
+1. Select **Onboard sensor**.
+
+ :::image type="content" source="media/tutorial-onboarding/onboard-a-sensor.png" alt-text="Screenshot of selecting to onboard the sensor to start the onboarding process for your sensor.":::
+
+1. Enter a name for the sensor.
+
+    We recommend that you include the IP address of the sensor as part of the name, or use another easily identifiable name, so that your sensor is easier to track.
+
+1. Select a subscription from the drop-down menu.
+
+ :::image type="content" source="media/tutorial-onboarding/name-subscription.png" alt-text="Screenshot of entering a meaningful name, and connect your sensor to a subscription.":::
+
+1. Choose a sensor connection mode by using the **Cloud connected** toggle. If the toggle is on, the sensor is cloud connected. If the toggle is off, the sensor is locally managed.
+
+    - **Cloud-connected sensors**: Information that the sensor detects is displayed in the sensor console. Alert information is delivered through an IoT hub and can be shared with other Azure services, such as Azure Sentinel. In addition, threat intelligence packages can be pushed from the Azure Defender for IoT portal to sensors. Conversely, when the sensor is not cloud connected, you must download threat intelligence packages and then upload them to your enterprise sensors. To allow Defender for IoT to push packages to sensors, enable the **Automatic Threat Intelligence Updates** toggle. For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).
+
+ For cloud connected sensors, the name defined during onboarding is the name that appears in the sensor console. You can't change this name from the console directly. For locally managed sensors, the name applied during onboarding will be stored in Azure but can be updated in the sensor console.
+
+ - **Locally managed sensors**: Information that sensors detect is displayed in the sensor console. If you're working in an air-gapped network and want a unified view of all information detected by multiple locally managed sensors, work with the on-premises management console.
+
+1. Select a site within an IoT Hub to associate your sensor with. The IoT Hub serves as a gateway between this sensor and Azure Defender for IoT. Define the display name and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [View onboarded sensors](how-to-manage-sensors-on-the-cloud.md#view-onboarded-sensors) page.
+
+1. Select **Register**.
+
+### Download the sensor activation file
+
+After sensor registration is complete, you can download an activation file for the sensor. The activation file contains instructions about the management mode of the sensor and is unique for each sensor that you deploy. The user who signs in to the sensor console for the first time uploads the activation file to the sensor.
+
+**To download an activation file:**
+
+1. On the **Onboard Sensor** page, select **Register**.
+
+1. Select **download activation file**.
+
+1. Make the file accessible to the user who's signing in to the sensor console for the first time.
+
+### Sign in and activate the sensor
+
+**To sign in and activate:**
+
+1. In your browser, go to the sensor console by using the IP address defined during the installation.
+
+ :::image type="content" source="media/tutorial-onboarding/azure-defender-for-iot-sensor-log-in-screen.png" alt-text="Screenshot of the Azure Defender for IoT sensor.":::
+
+1. Enter the credentials defined during the sensor installation.
+
+1. After you sign in, the **Activation** dialog box opens. Select **Upload** and go to the activation file that you downloaded during the sensor onboarding.
+
+ :::image type="content" source="media/tutorial-onboarding/activation-upload-screen-with-upload-button.png" alt-text="Screenshot of selecting to upload and go to the activation file.":::
+
+1. Accept the terms and conditions.
+
+1. Select **Activate**. The SSL/TLS certificate dialog box opens.
+
+1. Define a certificate name.
+
+1. Upload the CRT and key files.
+
+1. Enter a passphrase and upload a PEM file if necessary.
+
+1. Select **Next**. The validation screen opens. By default, validation between the management console and connected sensors is enabled.
+
+1. To disable validation, turn off the **Enable system-wide validation** toggle. However, we recommend that you keep validation enabled.
+
+1. Select **Save**.
+
+You might need to refresh your screen after uploading the CA-signed certificate.
+
+## Next steps
+
+Learn how to set up [additional appliances](how-to-install-software.md#about-defender-for-iot-appliances).
+Read about the [agentless architecture](architecture.md).
iot-develop Overview Iot Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/overview-iot-plug-and-play.md
-+ #Customer intent: As a device builder, I need to know what is IoT Plug and Play, so I can understand how it can help me build and market my IoT devices.
mysql How To Deploy On Azure Free Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-deploy-on-azure-free-account.md
To complete this tutorial, you need:
:::image type="content" source="media/how-to-deploy-on-azure-free-account/review-and-create.png" alt-text="Screenshot that shows the Review + create blade."::: >[!IMPORTANT]
- >As long as you are using your Azure free account, and your free service usage is within monthly limits (to view usage information, refer [**Monitor and track free services usage**](#monitor-and-track-free-services-usage) section below), you won't be charged for the service. We're currently working to improve the **Cost Summary** experience for free services.
+ >While creating the flexible server instance from your Azure free account, you will still see an **Estimated cost per month** in the **Compute + Storage : Cost Summary** blade and on the **Review + Create** tab. But as long as you are using your Azure free account and your free service usage is within the monthly limits (to view usage information, refer to the [**Monitor and track free services usage**](#monitor-and-track-free-services-usage) section below), you won't be charged for the service. We're currently working to improve the **Cost Summary** experience for free services.
1. Select **Create** to provision the server.
synapse-analytics Create Use Views https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/create-use-views.md
The folder name in the `OPENROWSET` function (`yellow` in this example) that is
Delta Lake is in public preview and there are some known issues and limitations. Review the known issues on [Synapse serverless SQL pool self-help page](resources-self-help-sql-on-demand.md#delta-lake).
+## JSON views
+
+Views are a good choice if you need to do additional processing on top of the result set that is fetched from the files. One example is parsing JSON files, where JSON functions are applied to extract values from the JSON documents:
+
+```sql
+CREATE OR ALTER VIEW CovidCases
+AS
+select
+ *
+from openrowset(
+ bulk 'latest/ecdc_cases.jsonl',
+ data_source = 'covid',
+ format = 'csv',
+ fieldterminator ='0x0b',
+ fieldquote = '0x0b'
+ ) with (doc nvarchar(max)) as rows
+ cross apply openjson (doc)
+ with ( date_rep datetime2,
+ cases int,
+ fatal int '$.deaths',
+ country varchar(100) '$.countries_and_territories')
+```
+
+The `OPENROWSET` function reads each line of the JSONL file (one JSON document per line) as a single text value, and the `OPENJSON` function parses that text and extracts the individual properties.
+
+## CosmosDB view
+
+Views can be created on top of Azure CosmosDB containers if CosmosDB analytical storage is enabled on the container. The CosmosDB account name, database name, and container name are specified as part of the view definition, and the read-only access key is placed in the database scoped credential that the view references.
+
+```sql
+CREATE DATABASE SCOPED CREDENTIAL MyCosmosDbAccountCredential
+WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = 's5zarR2pT0JWH9k8roipnWxUYBegOuFGjJpSjGlR36y86cW0GQ6RaaG8kGjsRAQoWMw1QKTkkX8HQtFpJjC8Hg==';
+GO
+CREATE OR ALTER VIEW Ecdc
+AS SELECT *
+FROM OPENROWSET(
+ PROVIDER = 'CosmosDB',
+ CONNECTION = 'Account=synapselink-cosmosdb-sqlsample;Database=covid',
+ OBJECT = 'Ecdc',
+ CREDENTIAL = 'MyCosmosDbAccountCredential'
+ ) with ( date_rep varchar(20), cases bigint, geo_id varchar(6) ) as rows
+```
+
+For more information, see [how to query CosmosDB containers by using Synapse Link](query-cosmos-db-analytical-store.md).
+ ## Use a view You can use views in your queries the same way you use views in SQL Server queries.
virtual-machines Classic Vm Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/classic-vm-deprecation.md
VMs created using the classic deployment model will follow the [Modern Lifecycle
- On March 1, 2023, subscriptions that are not migrated to Azure Resource Manager will be informed regarding timelines for deleting any remaining VMs (classic). This retirement does *not* affect the following Azure services and functionality: -- [Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) - Storage accounts *not* used by VMs (classic) - Virtual networks *not* used by VMs (classic) - Other classic resources
+The retirement of Azure Cloud Services (classic) was [announced in August 2021](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/).
+ ## What resources are available for this migration? - [Microsoft Q&A](/answers/topics/azure-virtual-machines-migration.html): Microsoft and community support for migration.
Start planning your migration to Azure Resource Manager, today.
1. For technical questions, issues, and help with adding subscriptions to the allowlist, [contact support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/{"pesId":"6f16735c-b0ae-b275-ad3a-03479cfa1396","supportTopicId":"8a82f77d-c3ab-7b08-d915-776b4ff64ff4"}).
-1. Complete the migration as soon as possible to prevent business impact and to take advantage of the improved performance, security, and new features of Azure Resource Manager.
+1. Complete the migration as soon as possible to prevent business impact and to take advantage of the improved performance, security, and new features of Azure Resource Manager.