Updates from: 03/15/2021 04:04:32
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Developer Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/developer-support-help-options.md
Title: Support and help options for Azure AD app developers
-description: Know how to obtain help and support for development-related questions and problems when creating application that integrate with Microsoft identities (Azure Active Directory and Microsoft account)
+ Title: Support and help options for Microsoft identity platform developers | Azure
+description: Learn where to get help and find answers to your questions as you build identity and access management (IAM) solutions that integrate with Azure Active Directory (Azure AD) and other components of the Microsoft identity platform.
Previously updated : 05/23/2019 Last updated : 03/09/2021 # Support and help options for developers
-If you're just starting to integrate with Azure Active Directory (Azure AD), Microsoft identities, or Microsoft Graph API, or when you're implementing a new feature to your application, there are times when you need to obtain help from the community or understand the support options that you have as a developer. Here are suggestions for where you can get help when developing your Microsoft identity platform solutions.
+If you need an answer to a question or help in solving a problem not covered in our documentation, it might be time to reach out to experts for help. Here are several suggestions for getting answers to your questions as you develop applications that integrate with the Microsoft identity platform.
## Create an Azure support request
Explore the range of [Azure support options and choose the plan](https://azure.m
- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). -- If you are not an Azure customer, you can also open a support request with Microsoft via [our commercial support](https://support.serviceshub.microsoft.com/supportforbusiness).
+- If you're not an Azure customer, you can open a support request with [Microsoft Support for business](https://support.serviceshub.microsoft.com/supportforbusiness).
## Post a question to Microsoft Q&A+ <div class='icon is-large'> <img alt='Microsoft Q&A' src='./media/common/question-mark-icon.png'> </div>
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/plan-at-scale-deployment.md
Title: How to plan for an at-scale deployment of Azure Arc enabled servers description: Learn how to enable a large number of machines to Azure Arc enabled servers to simplify configuration of essential security, management, and monitoring capabilities in Azure. Previously updated : 02/23/2021 Last updated : 03/12/2021
-# Planing for an at-scale deployment of Azure Arc enabled servers
+# Plan and deploy Arc enabled servers at-scale
Deployment of an IT infrastructure service or business application is a challenge for any company. In order to execute it well and avoid any unwelcome surprises and unplanned costs, you need to thoroughly plan for it to ensure that you're as ready as possible. Your plan for deploying Azure Arc enabled servers at scale should cover the design and deployment criteria that need to be met to successfully complete the tasks that support an at-scale deployment.
In this phase, system engineers or administrators enable the core features in th
## Phase 2: Deploy Arc enabled servers
-Next, we add to the foundation laid in phase 1 by preparing the deployment, and performing the installation of the agent.
+Next, we add to the foundation laid in phase 1 by preparing for and deploying the Arc enabled servers Connected Machine agent.
|Task |Detail |Duration |
|--|--|--|
Next, we add to the foundation laid in phase 1 by preparing the deployment, and
## Phase 3: Manage and operate
-Phase 3 sees administrators or system engineers enabling automation of manual tasks to manage and operate the Connected Machine agent and the machine during their lifecycle.
+Phase 3 sees administrators or system engineers enable automation of manual tasks to manage and operate the Connected Machine agent and the machine during their lifecycle.
|Task |Detail |Duration |
|--|--|--|
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-networking-options.md
To learn more, see [Virtual network service endpoints](../virtual-network/virtua
## Restrict your storage account to a virtual network
-When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoint. This feature currently works for all virtual network supported skus which includes Standard and Premium, except for on flex stamps where virtual networks are available only for Premium sku. To set up a function with a storage account restricted to a private network:
+When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoint. This feature currently works for all Windows virtual network supported skus which includes Standard and Premium, except for on flex stamps where virtual networks are available only for Premium sku. To set up a function with a storage account restricted to a private network:
1. Create a function with a storage account that does not have service endpoints enabled. 1. Configure the function to connect to your virtual network.
azure-monitor Alerts Action Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-action-rules.md
You can optionally define filters so the rule will apply to a specific subset of
The available filters are:
-* **Severity**: this rule will apply only to alerts with the selected severities.
+* **Severity**
+This rule will apply only to alerts with the selected severities.
For example, **Severity = Sev1** means that the rule will apply only to alerts with Sev1 severity.
-* **Monitor Service**: this rule will apply only to alerts coming from the selected monitoring services.
+* **Monitor Service**
+This rule will apply only to alerts coming from the selected monitoring services.
For example, **Monitor Service = "Azure Backup"** means that the rule will apply only to backup alerts (coming from Azure Backup).
-* **Resource Type**: this rule will apply only to alerts on the selected resource types.
+* **Resource Type**
+This rule will apply only to alerts on the selected resource types.
For example, **Resource Type = "Virtual Machines"** means that the rule will apply only to alerts on virtual machines.
-* **Alert Rule ID**: this rule will apply only to alerts coming from a specific alert rule. The value should be the Resource Manager ID of the alert rule.
-For example, **Alert Rule ID = "/subscriptions/SubId1/resourceGroups/ResourceGroup1/providers/microsoft.insights/metricalerts/MyAPI-highLatency"** means this rule will apply only to alerts coming from "MyAPI-highLatency" metric alert rule.
-* **Monitor Condition**: this rule will apply only to alert events with the specified monitor condition - either **Fired** or **Resolved**.
-* **Description**: this rule will apply only to alerts that contains a specific string in the alert description field. That field contains the alert rule description.
+* **Alert Rule ID**
+This rule will apply only to alerts coming from a specific alert rule. The value should be the Resource Manager ID of the alert rule.
+For example, **Alert Rule ID = "/subscriptions/SubId1/resourceGroups/ResourceGroup1/providers/microsoft.insights/metricalerts/API-Latency"** means this rule will apply only to alerts coming from "API-Latency" metric alert rule.
+* **Monitor Condition**
+This rule will apply only to alert events with the specified monitor condition - either **Fired** or **Resolved**.
+* **Description**
+This rule will apply only to alerts that contain a specific string in the alert description field. That field contains the alert rule description.
For example, **Description contains 'prod'** means that the rule will only match alerts that contain the string "prod" in their description.
-* **Alert Context (payload)**: this rule will apply only to alerts that contain any of one or more specific values in the alert context fields.
+* **Alert Context (payload)**
+This rule will apply only to alerts that contain any of the specified values in the alert context fields.
For example, **Alert context (payload) contains 'Computer-01'** means that the rule will only apply to alerts whose payload contains the string "Computer-01". If you set multiple filters in a rule, all of them apply. For example, if you set **Resource type = Virtual Machines** and **Severity = Sev0**, then the rule will apply only for Sev0 alerts on virtual machines.
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/annotations.md
To enable annotations in your workbook go to **Advanced Settings** and select **
Select any annotation marker to open details about the release, including requestor, source control branch, release pipeline, and environment. ## Create custom annotations from PowerShell
-You can use the [CreateReleaseAnnotation](https://github.com/MohanGsk/ApplicationInsights-Home/blob/master/API/CreateReleaseAnnotation.ps1) PowerShell script from GitHub to create annotations from any process you like, without using Azure DevOps.
-
-1. Make a local copy of [CreateReleaseAnnotation.ps1](https://github.com/MohanGsk/ApplicationInsights-Home/blob/master/API/CreateReleaseAnnotation.ps1).
+You can use the CreateReleaseAnnotation PowerShell script from GitHub to create annotations from any process you like, without using Azure DevOps.
+
+1. Make a local copy of CreateReleaseAnnotation.ps1:
+
+ ```powershell
+
+ # Copyright (c) Microsoft Corporation. All rights reserved.
+ # Licensed under the MIT License. See License.txt in the project root for license information.
+
+ # Sample usage .\CreateReleaseAnnotation.ps1 -applicationId "<appId>" -apiKey "<apiKey>" -releaseFilePath "<path to .exe with file version>" -releaseProperties @{"ReleaseDescription"="Release with annotation";"TriggerBy"="John Doe"}
+ param(
+ [parameter(Mandatory = $true)][string]$applicationId,
+ [parameter(Mandatory = $true)][string]$apiKey,
+ [parameter(Mandatory = $true)][string]$releaseFilePath,
+ [parameter(Mandatory = $false)]$releaseProperties
+ )
+
+ $releaseName = (Get-Item $releaseFilePath).VersionInfo.FileVersion
+ Write-Host "Creating release annotation $releaseName in ApplicationInsights" -ForegroundColor Cyan
+
+ # Background info on how fwlink works: After you submit a web request, many sites redirect through a series of intermediate pages before you finally land on the destination page.
+ # So when calling Invoke-WebRequest, the result it returns comes from the final page in any redirect sequence. Setting MaximumRedirection to 0 prevents the call from
+ # being redirected. By doing this, we get a response with status code 302, which carries the redirection link. We grab this redirection link and
+ # construct the URL to make a release annotation.
+ # Here's how this logic works:
+ # 1. The client sends an HTTP request, such as: http://go.microsoft.com/fwlink/?LinkId=625115
+ # 2. FWLink receives the request and finds the destination URL for it, such as: http://www.bing.com
+ # 3. FWLink generates a new HTTP response with status code "302" and destination URL "http://www.bing.com", and sends it back to the client.
+ # 4. The client, such as a PowerShell script, knows that status code "302" means redirection to a new location, and the target location is "http://www.bing.com"
+ function GetRequestUrlFromFwLink($fwLink)
+ {
+ $request = Invoke-WebRequest -Uri $fwLink -MaximumRedirection 0 -UseBasicParsing -ErrorAction Ignore
+ if ($request.StatusCode -eq "302") {
+ return $request.Headers.Location
+ }
+
+ return $null
+ }
+
+ function CreateAnnotation($grpEnv)
+ {
+ $retries = 1
+ $success = $false
+ while (!$success -and $retries -lt 6) {
+ $location = "$grpEnv/applications/$applicationId/Annotations?api-version=2015-11"
+
+ Write-Host "Invoke a web request for $location to create a new release annotation. Attempting $retries"
+ set-variable -Name createResultStatus -Force -Scope Local -Value $null
+ set-variable -Name createResultStatusDescription -Force -Scope Local -Value $null
+ set-variable -Name result -Force -Scope Local
+
+ try {
+ $result = Invoke-WebRequest -Uri $location -Method Put -Body $bodyJson -Headers $headers -ContentType "application/json; charset=utf-8" -UseBasicParsing
+ } catch {
+ if ($_.Exception){
+ if($_.Exception.Response) {
+ $createResultStatus = $_.Exception.Response.StatusCode.value__
+ $createResultStatusDescription = $_.Exception.Response.StatusDescription
+ }
+ else {
+ $createResultStatus = "Exception"
+ $createResultStatusDescription = $_.Exception.Message
+ }
+ }
+ }
+
+ if ($result -eq $null) {
+ if ($createResultStatus -eq $null) {
+ $createResultStatus = "Unknown"
+ }
+ if ($createResultStatusDescription -eq $null) {
+ $createResultStatusDescription = "Unknown"
+ }
+ }
+ else {
+ $success = $true
+ }
+
+ if ($createResultStatus -eq 409 -or $createResultStatus -eq 404 -or $createResultStatus -eq 401) # no retry when conflict or unauthorized or not found
+ {
+ break
+ }
+
+ $retries = $retries + 1
+ sleep 1
+ }
+
+ $createResultStatus
+ $createResultStatusDescription
+ return
+ }
+
+ # Need powershell version 3 or greater for script to run
+ $minimumPowershellMajorVersion = 3
+ if ($PSVersionTable.PSVersion.Major -lt $minimumPowershellMajorVersion) {
+ Write-Host "Need powershell version $minimumPowershellMajorVersion or greater to create release annotation"
+ return
+ }
+
+ $currentTime = (Get-Date).ToUniversalTime()
+ $annotationDate = $currentTime.ToString("MMddyyyy_HHmmss")
+ set-variable -Name requestBody -Force -Scope Script
+ $requestBody = @{}
+ $requestBody.Id = [GUID]::NewGuid()
+ $requestBody.AnnotationName = $releaseName
+ $requestBody.EventTime = $currentTime.GetDateTimeFormats("s")[0] # GetDateTimeFormats returns an array
+ $requestBody.Category = "Deployment"
+
+ if ($releaseProperties -eq $null) {
+ $properties = @{}
+ } else {
+ $properties = $releaseProperties
+ }
+ $properties.Add("ReleaseName", $releaseName)
+
+ $requestBody.Properties = ConvertTo-Json($properties) -Compress
+
+ $bodyJson = [System.Text.Encoding]::UTF8.GetBytes(($requestBody | ConvertTo-Json))
+ $headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
+ $headers.Add("X-AIAPIKEY", $apiKey)
+
+ set-variable -Name createAnnotationResult1 -Force -Scope Local -Value $null
+ set-variable -Name createAnnotationResultDescription -Force -Scope Local -Value ""
+
+ # get redirect link from fwlink
+ $requestUrl = GetRequestUrlFromFwLink("http://go.microsoft.com/fwlink/?prd=11901&pver=1.0&sbp=Application%20Insights&plcid=0x409&clcid=0x409&ar=Annotations&sar=Create%20Annotation")
+ if ($requestUrl -eq $null) {
+ $output = "Failed to find the redirect link to create a release annotation"
+ throw $output
+ }
+
+ $createAnnotationResult1, $createAnnotationResultDescription = CreateAnnotation($requestUrl)
+ if ($createAnnotationResult1)
+ {
+ $output = "Failed to create an annotation with Id: {0}. Error {1}, Description: {2}." -f $requestBody.Id, $createAnnotationResult1, $createAnnotationResultDescription
+ throw $output
+ }
+
+ $str = "Release annotation created. Id: {0}." -f $requestBody.Id
+ Write-Host $str -ForegroundColor Green
+
+ ```
1. Use the steps in the preceding procedure to get your Application Insights ID and create an API key from your Application Insights **API Access** tab.
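For illustration, the annotation script might then be invoked like this sketch; the application ID, API key, file path, and property values are placeholders, not values from the article:

```powershell
# Placeholder values: substitute your Application Insights ID, API key, and build output path
.\CreateReleaseAnnotation.ps1 `
    -applicationId "00000000-0000-0000-0000-000000000000" `
    -apiKey "<your-api-key>" `
    -releaseFilePath "C:\builds\MyApp\MyApp.exe" `
    -releaseProperties @{"ReleaseDescription"="Nightly build";"TriggerBy"="Release pipeline"}
```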
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/api-custom-events-metrics.md
telemetry.trackTrace({
*Client/Browser-side JavaScript* ```javascript
-trackTrace(message: string, properties?: {[string]:string}, severityLevel?: SeverityLevel)
+trackTrace({
+ message: string,
+ properties?: {[string]:string},
+ severityLevel?: SeverityLevel
+})
```
Log a diagnostic event such as entering or leaving a method.
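For instance, a browser-side call following the signature above might look like this sketch; the `appInsights` instance name and the `SeverityLevel` import depend on how your app initializes the Application Insights JavaScript SDK:

```javascript
// Hypothetical instance name: adjust to match your SDK initialization
appInsights.trackTrace({
    message: "Entering checkout flow",
    properties: { cartId: "cart-42" },
    severityLevel: SeverityLevel.Information
});
```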
azure-resource-manager Compare Template Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/compare-template-syntax.md
Last updated 03/12/2021
This article compares Bicep syntax with JSON syntax for Azure Resource Manager templates (ARM templates). In most cases, Bicep provides syntax that is less verbose than the equivalent in JSON.
-## Syntax equivalents
-
-If you're familiar with using JSON to develop ARM templates, use the following table to learn about the equivalent syntax for Bicep.
-
-| Scenario | Bicep | JSON |
-| -- | | -- |
-| Author an expression | `func()` | `"[func()]"` |
-| Get parameter value | `exampleParameter` | `[parameters('exampleParameter'))]` |
-| Get variable value | `exampleVar` | `[variables('exampleVar'))]` |
-| Concatenate strings | `'${namePrefix}-vm'` | `[concat(parameters('namePrefix'), '-vm')]` |
-| Set resource property | `sku: '2016-Datacenter'` | `"sku": "2016-Datacenter",` |
-| Return the logical AND | `isMonday && isNovember` | `[and(parameter('isMonday'), parameter('isNovember'))]` |
-| Get resource ID of resource in the template | `nic1.id` | `[resourceId('Microsoft.Network/networkInterfaces', variables('nic1Name'))]` |
-| Get property from resource in the template | `diagsAccount.properties.primaryEndpoints.blob` | `[reference(resourceId('Microsoft.Storage/storageAccounts', variables('diagStorageAccountName'))).primaryEndpoints.blob]` |
-| Conditionally set a value | `isMonday ? 'valueIfTrue' : 'valueIfFalse'` | `[if(parameters('isMonday'), 'valueIfTrue', 'valueIfFalse')]` |
-| Separate a solution into multiple files | Use modules | Use linked templates |
-| Set the target scope of the deployment | `targetScope = 'subscription'` | `"$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#"` |
-| Set dependency | Either rely on automatic detection of dependencies or manually set dependency with `dependsOn: [ stg ]` | `"dependsOn": ["[resourceId('Microsoft.Storage/storageAccounts', 'parameters('storageAccountName'))]"]` |
-| Resource declaration | `resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = {...}` | `"resources": [ { "type": "Microsoft.Compute/virtualMachines", "apiVersion": "2020-06-01", ... } ]` |
+If you're familiar with using JSON to develop ARM templates, use the following examples to learn about the equivalent syntax for Bicep.
+
+## Expressions
+
+To author an expression:
+
+```bicep
+func()
+```
+
+```json
+"[func()]"
+```
+
+## Parameters
+
+To declare a parameter with a default value:
+
+```bicep
+param demoParam string = 'Contoso'
+```
+
+```json
+"parameters": {
+ "demoParam": {
+ "type": "string",
+ "defaultValue": "Contoso"
+ }
+}
+```
+
+To get a parameter value:
+
+```bicep
+demoParam
+```
+
+```json
+[parameters('demoParam')]
+```
+
+## Variables
+
+To declare a variable:
+
+```bicep
+var demoVar = 'example value'
+```
+
+```json
+"variables": {
+ "demoVar": "example value"
+},
+```
+
+To get a variable value:
+
+```bicep
+demoVar
+```
+
+```json
+[variables('demoVar')]
+```
+
+## Strings
+
+To concatenate strings:
+
+```bicep
+'${namePrefix}-vm'
+```
+
+```json
+[concat(parameters('namePrefix'), '-vm')]
+```
+
+## Logical operators
+
+To return the logical **AND**:
+
+```bicep
+isMonday && isNovember
+```
+
+```json
+[and(parameters('isMonday'), parameters('isNovember'))]
+```
+
+To conditionally set a value:
+
+```bicep
+isMonday ? 'valueIfTrue' : 'valueIfFalse'
+```
+
+```json
+[if(parameters('isMonday'), 'valueIfTrue', 'valueIfFalse')]
+```
+
+## Deployment scope
+
+To set the target scope of the deployment:
+
+```bicep
+targetScope = 'subscription'
+```
+
+```json
+"$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#"
+```
+
+## Resources
+
+To declare a resource:
+
+```bicep
+resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = {
+ ...
+}
+```
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2020-06-01",
+ ...
+ }
+]
+```
+
+To conditionally deploy a resource:
+
+```bicep
+resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = if(deployVM) {
+ ...
+}
+```
+
+```json
+"resources": [
+ {
+ "condition": "[parameters('deployVM')]",
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2020-06-01",
+ ...
+ }
+]
+```
+
+To set a resource property:
+
+```bicep
+sku: '2016-Datacenter'
+```
+
+```json
+"sku": "2016-Datacenter",
+```
+
+To get the resource ID of a resource in the template:
+
+```bicep
+nic1.id
+```
+
+```json
+[resourceId('Microsoft.Network/networkInterfaces', variables('nic1Name'))]
+```
+
+## Loops
+
+To iterate over items in an array or count:
+
+```bicep
+[for storageName in storageAccounts: {
+ ...
+}]
+```
+
+```json
+"copy": {
+ "name": "storagecopy",
+ "count": "[length(parameters('storageAccounts'))]"
+},
+...
+```
+
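For a fuller sketch of the same loop comparison, the SKU, location, and API version below are illustrative, not from the article:

```bicep
param storageAccounts array

resource stgAccounts 'Microsoft.Storage/storageAccounts@2019-06-01' = [for storageName in storageAccounts: {
  name: storageName
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}]
```

```json
"resources": [
  {
    "copy": {
      "name": "storagecopy",
      "count": "[length(parameters('storageAccounts'))]"
    },
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2019-06-01",
    "name": "[parameters('storageAccounts')[copyIndex()]]",
    "location": "[resourceGroup().location]",
    "sku": { "name": "Standard_LRS" },
    "kind": "StorageV2"
  }
]
```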
+## Resource dependencies
+
+To set a dependency between resources:
+
+For Bicep, either rely on automatic detection of dependencies or manually set the dependency.
+
+```bicep
+dependsOn: [ stg ]
+```
+
+```json
+"dependsOn": ["[resourceId('Microsoft.Storage/storageAccounts', 'parameters('storageAccountName'))]"]
+```
+
+## Reference resources
+
+To get a property from a resource in the template:
+
+```bicep
+diagsAccount.properties.primaryEndpoints.blob
+```
+
+```json
+[reference(resourceId('Microsoft.Storage/storageAccounts', variables('diagStorageAccountName'))).primaryEndpoints.blob]
+```
+
+To get a property from an existing resource that isn't deployed in the template:
+
+```bicep
+resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
+ name: storageAccountName
+}
+
+// use later in template as often as needed
+stg.properties.primaryEndpoints.blob
+```
+
+```json
+// required every time the property is needed
+"[reference(resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName')), '2019-06-01').primaryEndpoints.blob]"
+```
+
+## Outputs
+
+To output a property from a resource in the template:
+
+```bicep
+output hostname string = publicIP.properties.dnsSettings.fqdn
+```
+
+```json
+"outputs": {
+ "hostname": {
+ "type": "string",
+ "value": "[reference(resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))).dnsSettings.fqdn]"
+ },
+}
+```
+
+## Code reuse
+
+To separate a solution into multiple files:
+
+* For Bicep, use [modules](bicep-tutorial-add-modules.md).
+* For JSON, use [linked templates](linked-templates.md).
## Recommendations

* When possible, avoid using the [reference](template-functions-resource.md#reference) and [resourceId](template-functions-resource.md#resourceid) functions in your Bicep file. When you reference a resource in the same Bicep deployment, use the resource identifier instead (see the sketch after this list). For example, if you've deployed a resource in your Bicep file with `stg` as the resource identifier, use syntax like `stg.id` or `stg.properties.primaryEndpoints.blob` to get property values. By using the resource identifier, you create an implicit dependency between resources. You don't need to explicitly set the dependency with the `dependsOn` property.
+* If the resource isn't deployed in the Bicep file, you can still get a symbolic reference to the resource using the **existing** keyword.
* Use consistent casing for identifiers. If you're unsure what type of casing to use, try camel casing. For example, `param myCamelCasedParameter string`.
* Add a description to a parameter only when the description provides essential information to users. You can use `//` comments for some information.
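As a minimal sketch of the first recommendation, assuming an illustrative storage account name, SKU, and API version:

```bicep
param location string = resourceGroup().location

resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: 'stg${uniqueString(resourceGroup().id)}'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

// Referencing stg directly creates an implicit dependency; no reference(), resourceId(), or dependsOn needed
output blobEndpoint string = stg.properties.primaryEndpoints.blob
```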
azure-vmware Azure Vmware Solution On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-on-premises.md
Title: Connect Azure VMware Solution to your on-premises environment description: Learn how to connect Azure VMware Solution to your on-premises environment. Previously updated : 12/28/2020 Last updated : 03/13/2021 # Connect Azure VMware Solution to your on-premises environment
In this article, you'll continue using the [information gathered during planning
Before you begin, there are two prerequisites for connecting Azure VMware Solution to your on-premises environment: - An ExpressRoute circuit from your on-premises environment to Azure.-- A /29 non-overlapping network address block for the ExpressRoute Global Reach peering, which you defined as part of the [planning phase](production-ready-deployment-steps.md).
+- A /29 non-overlapping CIDR network address block for the ExpressRoute Global Reach peering, which you defined as part of the [planning phase](production-ready-deployment-steps.md).
>[!NOTE] > You can connect through VPN, but that's out of scope for this quick start document.
Before you begin, there are two prerequisites for connecting Azure VMware Soluti
To establish on-premises connectivity to your Azure VMware Solution private cloud using ExpressRoute Global Reach, follow the [Peer on-premises environments to a private cloud](tutorial-expressroute-global-reach-private-cloud.md) tutorial.
+This tutorial results in a connection as shown in the diagram.
++ ## Verify on-premises network connectivity You should now see in your **on-premises edge router** where the ExpressRoute connects the NSX-T network segments and the Azure VMware Solution management segments.
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-networking.md
Last updated 03/11/2021
[!INCLUDE [avs-networking-description](includes/azure-vmware-solution-networking-description.md)]
-A useful perspective on interconnectivity is to consider the two types of Azure VMware Solution private cloud implementations:
+There are two ways to establish interconnectivity in the Azure VMware Solution private cloud:
1. [**Basic Azure-only interconnectivity**](#azure-virtual-network-interconnectivity) lets you manage and use your private cloud with only a single virtual network in Azure. This implementation is best suited for Azure VMware Solution evaluations or implementations that don't require access from on-premises environments. 1. [**Full on-premises to private cloud interconnectivity**](#on-premises-interconnectivity) extends the basic Azure-only implementation to include interconnectivity between on-premises and Azure VMware Solution private clouds.
-In this article, we'll cover a few key concepts that establish networking and interconnectivity, including requirements and limitations. WeΓÇÖll also cover more information the two types of Azure VMware Solution private cloud interconnectivity implementations. This article provides you with the information you need to know to configure your networking to work with Azure VMware Solution properly.
+In this article, we'll cover the key concepts that establish networking and interconnectivity, including requirements and limitations. This article provides you with the information you need to know to configure your networking to work with Azure VMware Solution.
## Azure VMware Solution private cloud use cases
The use cases for Azure VMware Solution private clouds include:
## Azure virtual network interconnectivity
-In the virtual network to private cloud implementation, you can manage your Azure VMware Solution private cloud, consume workloads in your private cloud, and access Azure services over the ExpressRoute connection.
+You can interconnect your Azure virtual network with the Azure VMware Solution private cloud implementation. You can manage your Azure VMware Solution private cloud, consume workloads in your private cloud, and access other Azure services.
+
+The diagram below shows the basic network interconnectivity established at the time of a private cloud deployment. It shows the logical networking between a virtual network in Azure and a private cloud. This connectivity is established via a backend ExpressRoute that is part of the Azure VMware Solution service. The interconnectivity fulfills the following primary use cases:
+
+- Inbound access to vCenter server and NSX-T manager that is accessible from VMs in your Azure subscription.
+- Outbound access from VMs on the private cloud to Azure services.
+- Inbound access of workloads running in the private cloud.
-The diagram below shows the basic network interconnectivity established at the time of a private cloud deployment. It shows the logical, ExpressRoute-based networking between a virtual network in Azure and a private cloud. The interconnectivity fulfills three of the primary use cases:
-* Inbound access to vCenter server and NSX-T Manager that is accessible from VMs in your Azure subscription and not from your on-premises systems.
-* Outbound access from VMs to Azure services.
-* Inbound access and consumption of workloads running a private cloud.
:::image type="content" source="media/concepts/adjacency-overview-drawing-single.png" alt-text="Basic virtual network to private cloud connectivity" border="false"::: ## On-premises interconnectivity
-In the virtual network and on-premises to full private cloud implementation, you can access your Azure VMware Solution private clouds from on-premises environments. This implementation is an extension of the basic implementation described in the previous section. Like the basic implementation, an ExpressRoute circuit is required, but with this implementation, itΓÇÖs used to connect from on-premises environments to your private cloud in Azure.
+In the fully interconnected scenario, you can access the Azure VMware Solution from your Azure virtual network(s) and on-premises. This implementation is an extension of the basic implementation described in the previous section. An ExpressRoute circuit is required to connect from on-premises to your Azure VMware Solution private cloud in Azure.
The diagram below shows the on-premises to private cloud interconnectivity, which enables the following use cases:
-* Hot/Cold Cross-vCenter vMotion
-* On-Premises to Azure VMware Solution private cloud management access
+
+- Hot/Cold vCenter vMotion between on-premises and Azure VMware Solution.
+- On-Premises to Azure VMware Solution private cloud management access.
:::image type="content" source="media/concepts/adjacency-overview-drawing-double.png" alt-text="Virtual network and on-premises full private cloud connectivity" border="false":::
-For full interconnectivity to your private cloud, enable ExpressRoute Global Reach and then request an authorization key and private peering ID for Global Reach in the Azure portal. The authorization key and peering ID are used to establish Global Reach between an ExpressRoute circuit in your subscription and the ExpressRoute circuit for your new private cloud. Once linked, the two ExpressRoute circuits route network traffic between your on-premises environments to your private cloud. For more information on the procedures to request and use the authorization key and peering ID, see the [tutorial for creating an ExpressRoute Global Reach peering to a private cloud](tutorial-expressroute-global-reach-private-cloud.md).
+For full interconnectivity to your private cloud, you need to enable ExpressRoute Global Reach and then request an authorization key and private peering ID for Global Reach in the Azure portal. The authorization key and peering ID are used to establish Global Reach between an ExpressRoute circuit in your subscription and the ExpressRoute circuit for your private cloud. Once linked, the two ExpressRoute circuits route network traffic between your on-premises environments to your private cloud. For more information on the procedures, see the [tutorial for creating an ExpressRoute Global Reach peering to a private cloud](tutorial-expressroute-global-reach-private-cloud.md).
## Limitations [!INCLUDE [azure-vmware-solutions-limits](includes/azure-vmware-solutions-limits.md)]
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-private-clouds-clusters.md
Title: Concepts - Private clouds and clusters description: Learn about the key capabilities of Azure VMware Solution software-defined data centers and vSphere clusters. Previously updated : 02/02/2021 Last updated : 03/13/2021 # Azure VMware Solution private cloud and cluster concepts
This article describes all of these concepts.
![Image of two private clouds in a customer subscription](./media/hosts-clusters-private-clouds-final.png)
->[!NOTE]
->Because of the lower potential needs of a development environment, use smaller clusters with lower capacity hosts.
## Private clouds
Private clouds contain vSAN clusters built with dedicated, bare-metal Azure host
As with other resources, private clouds are installed and managed from within an Azure subscription. The number of private clouds within a subscription is scalable. Initially, there's a limit of one private cloud per subscription. ## Clusters
-For each private cloud created, there's one vSAN cluster by default. You can add, delete, and scale clusters using the Azure portal or through the API. All clusters have a default size of three hosts and can scale up to 16 hosts. The hosts used in a cluster must be the same host type.
+For each private cloud created, there's one vSAN cluster by default. You can add, delete, and scale clusters using the Azure portal or through the API. All clusters have a default size of three hosts and can scale up to 16 hosts. You can have up to four clusters per private cloud.
Trial clusters are available for evaluation and limited to three hosts. There's a single trial cluster per private cloud. You can scale a trial cluster by a single host during the evaluation period.
You use vSphere and NSX-T Manager to manage most other aspects of cluster config
## Hosts
-Azure VMware Solution private cloud clusters use hyper-converged, bare-metal infrastructure hosts. The following table shows the RAM, CPU, and disk capacities of the host.
+Azure VMware Solution clusters are based on hyper-converged, bare-metal infrastructure. The following table shows the RAM, CPU, and disk capacities of the host.
| Host Type | CPU | RAM (GB) | vSAN NVMe cache Tier (TB, raw) | vSAN SSD capacity tier (TB, raw) | | : | :: | :: | :: | :: |
-| High-End (HE) | dual Intel 18 core 2.3 GHz | 576 | 3.2 | 15.20 |
+| AVS36 | dual Intel 18 core 2.3 GHz | 576 | 3.2 | 15.20 |
Hosts used to build or scale clusters come from an isolated pool of hosts. Those hosts have passed hardware tests and have had all data securely deleted.
Hosts used to build or scale clusters come from an isolated pool of hosts. Those
Host maintenance and lifecycle management have no impact on the private cloud clusters' capacity or performance. Examples of automated host maintenance include firmware upgrades and hardware repair or replacement.
-Microsoft is responsible for the lifecycle management of NSX-T appliances, such as NSX-T Manager and NSX-T Edge. They are also responsible for bootstrapping network configuration, such as creating the Tier-0 gateway and enabling North-South routing. You're responsible for NSX-T SDN configuration. For example, network segments, distributed firewall rules, Tier 1 gateways, and load balancers.
-
-> [!IMPORTANT]
-> Do not modify the configuration of NSX-T Edge or Tier-0 Gateway, as this may result in a loss of service.
+Microsoft is responsible for the lifecycle management of NSX-T appliances, such as NSX-T Manager and NSX-T Edge. Microsoft is responsible for bootstrapping network configuration, such as creating the Tier-0 gateway and enabling North-South routing. You're responsible for NSX-T SDN configuration. For example, network segments, distributed firewall rules, Tier 1 gateways, and load balancers.
## Backup and restoration
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-storage.md
Title: Concepts - Storage
-description: Learn about the key storage capabilities in Azure VMware Solution private clouds.
+description: Learn about the key storage capabilities in Azure VMware Solution private clouds.
Previously updated : 03/11/2021 Last updated : 03/13/2021 # Azure VMware Solution storage concepts
Azure VMware Solution private clouds provide native, cluster-wide storage with V
Local storage in each cluster host is used as part of a vSAN datastore. All diskgroups use an NVMe cache tier of 1.6 TB with the raw, per host, SSD-based capacity of 15.4 TB. The size of the raw capacity tier of a cluster is the per host capacity times the number of hosts. For example, a four host cluster will provide 61.6-TB raw capacity in the vSAN capacity tier. Local storage in cluster hosts is used in the cluster-wide vSAN datastore. All datastores are created as part of a private cloud deployment and are available for use immediately. The cloudadmin user and all users in the CloudAdmin group can manage datastores with these vSAN privileges:
- Datastore.AllocateSpace
- Datastore.Browse
- Datastore.Config
Local storage in cluster hosts is used in cluster-wide vSAN datastore. All datas
## Data-at-rest encryption
-vSAN datastores use data-at-rest encryption by default. The encryption solution is KMS-based and supports vCenter operations for key management. Keys are stored encrypted, wrapped by an Azure Key Vault master key. When a host is removed from a cluster for any reason, data on SSDs is invalidated immediately.
+vSAN datastores use data-at-rest encryption by default. The encryption solution is KMS-based and supports vCenter operations for key management. Keys are stored encrypted, wrapped by an Azure Key Vault master key. When a host is removed from a cluster, data on SSDs is invalidated immediately.
## Scaling
-Native cluster storage capacity is scaled by adding hosts to a cluster. For clusters that use AVS36 hosts, the raw cluster-wide capacity is increased by 15.4 TB with each added host. Hosts take about 10 minutes to be added to a cluster. For instructions on scaling clusters, see the [scale private cloud tutorial][tutorial-scale-private-cloud].
+Native cluster storage capacity is scaled by adding hosts to a cluster. For clusters that use AVS36 hosts, the raw cluster-wide capacity is increased by 15.4 TB with each added host. Hosts take about 10 minutes to be added to a cluster. For instructions on scaling clusters, see the [scale private cloud tutorial][tutorial-scale-private-cloud].
## Azure storage integration
-You can use Azure storage services on workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads.
+You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads.
## Next steps
Now that you've covered Azure VMware Solution storage concepts, you may want to
- [Private cloud identity concepts](concepts-identity.md). - [vSphere role-based access control for Azure VMware Solution](concepts-role-based-access-control.md). - [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).
+- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md)
<!-- LINKS - external-->
azure-vmware Deploy Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-azure-vmware-solution.md
After you deploy Azure VMware Solution, you'll create the virtual network's jump
:::image type="content" source="media/pre-deployment/jump-box-diagram.png" alt-text="Create the Azure VMware Solution jump box" border="false" lightbox="media/pre-deployment/jump-box-diagram.png":::
-To create a virtual machine (VM) in the virtual network that you [identified or created as part of the deployment process](production-ready-deployment-steps.md#attach-virtual-network-to-azure-vmware-solution), follow these instructions:
+To create a virtual machine (VM) in the virtual network that you [identified or created as part of the deployment process](production-ready-deployment-steps.md#attach-azure-virtual-network-to-azure-vmware-solution), follow these instructions:
[!INCLUDE [create-avs-jump-box-steps](includes/create-jump-box-steps.md)]
If you didn't define a virtual network in the deployment step and your intent is
The jump box is in the virtual network where Azure VMware Solution connects through its ExpressRoute circuit. In Azure, go to the jump box's network interface and [view the effective routes](../virtual-network/manage-route-table.md#view-effective-routes).
-In the effective routes list, you should see the networks created as part of the Azure VMware Solution deployment. You'll see multiple networks that were derived from the [`/22` network you defined](production-ready-deployment-steps.md#ip-address-segment) when you [create a private cloud](#create-an-azure-vmware-solution-private-cloud).
+In the effective routes list, you should see the networks created as part of the Azure VMware Solution deployment. You'll see multiple networks that were derived from the [`/22` network you defined](production-ready-deployment-steps.md#ip-address-segment-for-private-cloud-management) when you [create a private cloud](#create-an-azure-vmware-solution-private-cloud).
:::image type="content" source="media/pre-deployment/azure-vmware-solution-effective-routes.png" alt-text="Verify network routes advertised from Azure VMware Solution to Azure Virtual Network" lightbox="media/pre-deployment/azure-vmware-solution-effective-routes.png":::
azure-vmware Production Ready Deployment Steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/production-ready-deployment-steps.md
Title: Planning the Azure VMware Solution deployment description: This article outlines an Azure VMware Solution deployment workflow. The final result is an environment ready for virtual machine (VM) creation and migration. Previously updated : 02/22/2021 Last updated : 03/13/2021 # Planning the Azure VMware Solution deployment
-This article provides you the planning process to identify and collect data used during the deployment. As you plan your deployment, make sure to document the information you gather for easy reference during the deployment.
+This article provides you the planning process to identify and collect the information you'll use during the deployment. As you plan your deployment, make sure to document the information you gather for easy reference during the deployment.
-The processes of this quick start result in a production-ready environment for creating virtual machines (VMs) and migration.
+The steps outlined in this quick start give you a production-ready environment for creating virtual machines (VMs) and migration.
>[!IMPORTANT] >Before you create your Azure VMware Solution resource, follow the [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md) article to submit a support ticket to have your hosts allocated. Once the support team receives your request, it takes up to five business days to confirm your request and allocate your hosts. If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you'll go through the same process.
Define the resource name you'll use during deployment. The resource name is a f
Identify the size hosts that you want to use when deploying Azure VMware Solution. For a complete list, see the [Azure VMware Solution private clouds and clusters](concepts-private-clouds-clusters.md#hosts) documentation.
-## Number of hosts
+## Number of clusters and hosts
-Define the number of hosts that you want to deploy into the Azure VMware Solution private cloud. The minimum number of hosts is three, and the maximum is 16 per cluster. For more information, see the [Azure VMware Solution private cloud and clusters](concepts-private-clouds-clusters.md#clusters) documentation.
+In Azure VMware Solution, you'll deploy a private cloud and create multiple clusters. For your deployment, you'll need to define the number of clusters and the number of hosts that you want to deploy in each cluster. The minimum number of hosts per cluster is three, and the maximum is 16. The maximum number of clusters per private cloud is four. The maximum number of nodes per private cloud is 64.
-You can always extend the cluster later if you need to go beyond the initial deployment number.
+For more information, see the [Azure VMware Solution private cloud and clusters](concepts-private-clouds-clusters.md#clusters) documentation.
-## IP address segment
+>[!TIP]
+>You can always extend the cluster later if you need to go beyond the initial deployment number.
-The first step in planning the deployment is to plan out the IP segmentation. Azure VMware Solution ingests a /22 network that you provide. Then carves it up into smaller segments and then uses those IP segments for vCenter, VMware HCX, NSX-T, and vMotion.
+## vCenter admin password
+Define the vCenter admin password. During the deployment, you'll create a vCenter admin password. The password is assigned to the cloudadmin@vsphere.local admin account during the vCenter build. You'll use these credentials to sign in to vCenter.
-Azure VMware Solution connects to your Microsoft Azure Virtual Network through an internal ExpressRoute circuit. In most cases, it connects to your data center through ExpressRoute Global Reach.
+## NSX-T admin password
+Define the NSX-T admin password. During the deployment, you'll create an NSX-T admin password. The password is assigned to the admin user in the NSX account during the NSX build. You'll use these credentials to sign in to NSX-T Manager.
-Azure VMware Solution, your existing Azure environment, and your on-premises environment all exchange routes (typically). That being the case, the /22 CIDR network address block you define in this step shouldn't overlap anything you already have on-premises or Azure.
+## IP address segment for private cloud management
+
+The first step in planning the deployment is to plan out the IP segmentation. Azure VMware Solution requires a /22 CIDR network. This address space is carved into smaller network segments (subnets) used for vCenter, VMware HCX, NSX-T, and vMotion functionality.
+
+This /22 CIDR network address block shouldn't overlap with any existing network segment you already have on-premises or in Azure.
**Example:** 10.0.0.0/22
+Azure VMware Solution connects to your Microsoft Azure Virtual Network through an internal ExpressRoute Global Reach circuit (D-MSEE in the visualization below). This functionality is part of the Azure VMware Solution service and incurs no charge.
+ For more information, see the [Network planning checklist](tutorial-network-checklist.md#routing-and-subnet-considerations). :::image type="content" source="media/pre-deployment/management-vmotion-vsan-network-ip-diagram.png" alt-text="Identify - IP address segment" border="false"::: ## IP address segment for virtual machine workloads
-Identify an IP segment to create your first network (NSX segment) in your private cloud. In other words, you want to create a network segment on Azure VMware Solution so you can deploy VMs onto Azure VMware Solution.
-
-Even if you only plan on extending L2 networks, create a network segment that will validate the environment.
+Identify an IP segment to create your first network for workloads (NSX segment) in your private cloud. In other words, you'll need to create a network segment on Azure VMware Solution so you can deploy VMs in Azure VMware Solution.
-Remember, any IP segments created must be unique across your Azure and on-premises footprint.
+Even if you plan to extend networks from on-premises into Azure VMware Solution (L2), you still need to create a network segment that validates the environment.
+Remember, any IP segments created must be unique across your Azure and on-premises footprint.
+
**Example:** 10.0.4.0/24 :::image type="content" source="media/pre-deployment/nsx-segment-diagram.png" alt-text="Identify - IP address segment for virtual machine workloads" border="false":::
Keep in mind that:
- If you plan to extend networks from on-premises, those networks must connect to a [vSphere Distributed Switch (vDS)](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-B15C6A13-797E-4BCB-B9D9-5CBC5A60C3A6.html) in your on-premises VMware environment. - If the network(s) you wish to extend live on a [vSphere Standard Switch](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-350344DE-483A-42ED-B0E2-C811EE927D59.html), then they can't be extended.
-## Attach virtual network to Azure VMware Solution
+## Attach Azure Virtual Network to Azure VMware Solution
-In this step, you'll identify an ExpressRoute virtual network gateway and supporting Azure Virtual Network used to connect the Azure VMware Solution ExpressRoute circuit. The ExpressRoute circuit facilitates connectivity to and from the Azure VMware Solution private cloud to other Azure services, Azure resources, and on-premises environments.
+In this step, you'll identify an ExpressRoute virtual network gateway and the supporting Azure Virtual Network used to connect the Azure VMware Solution ExpressRoute circuit. The ExpressRoute circuit facilitates connectivity to and from the Azure VMware Solution private cloud to other Azure services, Azure resources, and on-premises environments.
You can use an *existing* OR *new* ExpressRoute virtual network gateway.
azure-vmware Tutorial Access Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-access-private-cloud.md
Title: Tutorial - Access your private cloud description: Learn how to access an Azure VMware Solution private cloud Previously updated : 02/22/2021 Last updated : 03/13/2021 # Tutorial: Access an Azure VMware Solution private cloud
-Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter. You'll need to do additional setup and connection to a local vCenter instance through a jump box.
+Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter. You'll need to connect to the Azure VMware Solution vCenter instance through a jump box.
-In this tutorial, you'll create a jump box in the resource group you created in the [previous tutorial](tutorial-configure-networking.md) and sign into vCenter. The jump box is a Windows virtual machine (VM) on the same virtual network you created. It provides access to vCenter and NSX Manager.
+In this tutorial, you'll create a jump box in the resource group you created in the [previous tutorial](tutorial-configure-networking.md) and sign into the Azure VMware Solution vCenter. This jump box is a Windows virtual machine (VM) on the same virtual network you created. It provides access to both vCenter and the NSX Manager.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a Windows virtual machine to use to connect to vCenter
-> * Login to vCenter from your virtual machine
+> * Create a Windows virtual machine for access to the Azure VMware Solution vCenter
+> * Sign into vCenter from this virtual machine
## Create a new Windows virtual machine
In this tutorial, you learn how to:
## Connect to the local vCenter of your private cloud
-1. From the jump box, sign in to vSphere Client with VMware vCenter SSO using a cloud admin username and verity that the user interface displays successfully.
+1. From the jump box, sign in to vSphere Client with VMware vCenter SSO using a cloud admin username and verify that the user interface displays successfully.
-1. In the Azure portal, select your private cloud and then **Manage** > **Identity**.
+1. In the Azure portal, select your private cloud, and then **Manage** > **Identity**.
The URLs and user credentials for private cloud vCenter and NSX-T Manager display. >[!TIP] >Select **Generate a new password** to generate new vCenter and NSX-T passwords.
- :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Display private cloud vCenter and NSX Manager URLs and credentials." border="true" lightbox="media/tutorial-access-private-cloud/ss4-display-identity.png":::
+ :::image type="content" source="media/tutorial-access-private-cloud/generate-vcenter-nsxt-passwords.png" alt-text="Display private cloud vCenter and NSX Manager URLs and credentials." border="true" lightbox="media/tutorial-access-private-cloud/generate-vcenter-nsxt-passwords.png":::
1. Navigate to the VM you created in the preceding step and connect to the virtual machine.
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-configure-networking.md
Title: Tutorial - Configure networking for your VMware private cloud in Azure description: Learn to create and configure the networking needed to deploy your private cloud in Azure Previously updated : 02/23/2021 Last updated : 03/13/2021 # Tutorial: Configure networking for your VMware private cloud in Azure
To sign in to vCenter and NSX manager, you'll need the URLs to the vCenter web c
Navigate to your Azure VMware Solution private cloud, under **Manage**, select **Identity**, here you'll find the information needed. ## Next steps
azure-vmware Tutorial Create Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-create-private-cloud.md
Title: Tutorial - Create and deploy an Azure VMware Solution private cloud
+ Title: Tutorial - Deploy an Azure VMware Solution private cloud
description: Learn how to create and deploy an Azure VMware Solution private cloud Last updated 02/22/2021
-# Tutorial: Create an Azure VMware Solution private cloud
+# Tutorial: Deploy an Azure VMware Solution private cloud
-In this tutorial, you'll learn how to create and deploy an Azure VMware Solution private cloud. The minimum initial deployment of hosts is three. Additional hosts can be added one at a time, up to a maximum of 16 hosts per cluster.
+Azure VMware Solution gives you the ability to deploy a vSphere cluster in Azure. The minimum initial deployment is three hosts. Additional hosts can be added one at a time, up to a maximum of 16 hosts per cluster.
Because Azure VMware Solution doesn't allow you to manage your private cloud with your on-premises vCenter at launch, additional configuration is needed. These procedures and related prerequisites are covered in this tutorial.
Select **Try it** from the upper right corner of a code block. You can also laun
#### Create a resource group
-Create a resource group with the `[az group create](/cli/azure/group)` command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
+Create a resource group with the ['az group create'](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
```azurecli-interactive
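# Create a resource group named myResourceGroup in the eastus region
az group create --name myResourceGroup --location eastus
```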
azure-vmware Tutorial Delete Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-delete-private-cloud.md
Title: Tutorial - Delete an Azure VMware Solution private cloud description: Learn how to delete an Azure VMware Solution private cloud that you no longer need. Previously updated : 02/09/2021 Last updated : 03/13/2021 # Tutorial: Delete an Azure VMware Solution private cloud
-If you have an Azure VMware Solution private cloud that you no longer need, you can delete it. The private cloud includes an isolated network domain, one or more provisioned vSphere clusters on dedicated server hosts, and several virtual machines (VMs). When you delete a private cloud, all of the VMs, their data, and clusters are deleted. The dedicated hosts are securely wiped and returned to the free pool. The network domain provisioned for the customer is also deleted.
+If you have an Azure VMware Solution private cloud that you no longer need, you can delete it. The private cloud includes an isolated network domain, one or more provisioned vSphere clusters on dedicated server hosts, and several virtual machines (VMs). When you delete a private cloud, all of the VMs, their data, and clusters are deleted. The dedicated Azure VMware Solution hosts are securely wiped and returned to the free pool. The network address space provisioned is also deleted.
> [!CAUTION] > Deleting the private cloud is an irreversible operation. Once the private cloud is deleted, the data cannot be recovered, as it terminates all running workloads and components and destroys all private cloud data and configuration settings, including public IP addresses.
azure-vmware Tutorial Deploy Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-deploy-vmware-hcx.md
Title: Tutorial - Deploy and configure VMware HCX description: Learn how to deploy and configure a VMware HCX solution for your Azure VMware Solution private cloud. Previously updated : 11/25/2020 Last updated : 03/13/2021 # Deploy and configure VMware HCX
After you deploy the VMware HCX Connector OVA on-premises and start the applianc
> [!NOTE] > Typically, it's the same as your vCenter FQDN or IP address.
-1. Verify that the information entered is correct, and select **Restart**.
+1. Verify that the information entered is correct and select **Restart**.
> [!NOTE] > You'll experience a delay after restarting before being prompted for the next step.
You can connect or pair the VMware HCX Cloud Manager in Azure VMware Solution wi
1. Under **Infrastructure**, select **Site Pairing**, and then select the **Connect To Remote Site** option (in the middle of the screen).
-1. Enter the Azure VMware Solution HCX Cloud Manager URL or IP address that you noted earlier `https://x.x.x.9`, the Azure VMware Solution cloudadmin@vsphere.local username, and the password. Then select **Connect**.
+1. Enter the Azure VMware Solution HCX Cloud Manager URL or IP address that you noted earlier `https://x.x.x.9`, the Azure VMware Solution cloudadmin\@vsphere.local username, and the password. Then select **Connect**.
> [!NOTE] > To successfully establish a site pair:
azure-vmware Tutorial Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-network-checklist.md
Title: Tutorial - Network planning checklist description: Learn about the network requirements for network connectivity and network ports on Azure VMware Solution. Previously updated : 01/27/2021 Last updated : 03/13/2021 # Networking planning checklist for Azure VMware Solution
Applications and workloads running in a private cloud environment require name r
Use the DHCP service built into NSX, or use a local DHCP server in the private cloud, instead of routing broadcast DHCP traffic over the WAN back to on-premises.
+For more information, see the [Provide DHCP services to NSX-T network segment](deploy-azure-vmware-solution.md#optional-provide-dhcp-services-to-nsx-t-network-segment) article.
-## Next steps
-
-In this tutorial, you learned about the considerations and requirements for deploying an Azure VMware Solution private cloud.
+## Next steps
-Once you have the proper networking in place, continue to the next tutorial to create your Azure VMware Solution private cloud.
+In this tutorial, you learned about the considerations and requirements for deploying an Azure VMware Solution private cloud. Once you have the proper networking in place, continue to the next tutorial to create your Azure VMware Solution private cloud.
> [!div class="nextstepaction"] > [Create an Azure VMware Solution private cloud](tutorial-create-private-cloud.md)
azure-vmware Tutorial Nsx T Network Segment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-nsx-t-network-segment.md
Title: Tutorial - Add an NSX-T network segment in Azure VMware Solution description: Learn how to create a NSX-T network segment to use for virtual machines (VMs) in vCenter. Previously updated : 11/09/2020 Last updated : 03/13/2021 # Tutorial: Add a network segment in Azure VMware Solution
azure-vmware Tutorial Scale Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-scale-private-cloud.md
Title: Tutorial - Scale a private cloud description: In this tutorial, you use the Azure portal to scale an Azure VMware Solution private cloud. Previously updated : 09/21/2020 Last updated : 03/13/2021 #Customer intent: As a VMware administrator, I want to learn how to scale an Azure VMware Solution private cloud in the Azure portal.
In this tutorial, you'll use the Azure portal to:
## Prerequisites
-A private cloud to complete this tutorial. If you haven't created a private cloud, use the [create a private cloud tutorial](tutorial-create-private-cloud.md) to create one. Configure networking for your VMware private cloud in Azure to set up the required virtual network.
+You'll need an existing private cloud to complete this tutorial. If you haven't created a private cloud, use the [create a private cloud tutorial](tutorial-create-private-cloud.md) to create one.
## Add a new cluster
A private cloud to complete this tutorial. If you haven't created a private clou
:::image type="content" source="./media/tutorial-scale-private-cloud/ss5-scale-cluster.png" alt-text="In the Edit Cluster page, use the slider to select the number of hosts. Select Save." border="true":::
- The addition of hosts to the cluster will begin.
+ The addition of hosts to the cluster begins.
## Next steps
batch Tutorial Run Python Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/tutorial-run-python-batch-azure-data-factory.md
description: Learn how to run Python scripts as part of a pipeline through Azure
ms.devlang: python Previously updated : 08/12/2020 Last updated : 03/12/2021
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 3/5/2021 Last updated : 3/12/2021 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## March 2021 Guest OS
+
+>[!NOTE]
+>
+>The March Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the March Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 21-03 | [5000822] | Latest Cumulative Update (LCU) | 6.29 | Mar 9, 2021 |
+| Rel 21-03 | [4580325] | Flash update | 3.95, 4.88, 5.53, 6.29 | Oct 13, 2020 |
+| Rel 21-03 | [5000800] | IE Cumulative Updates | 2.108, 3.95, 4.88 | Mar 9, 2021 |
+| Rel 21-03 | [5000803] | Latest Cumulative Update (LCU) | 5.53 | Mar 9, 2021 |
+| Rel 21-03 | [4578952] | .NET Framework 3.5 Security and Quality Rollup | 2.108 | Oct 13, 2020 |
+| Rel 21-03 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup | 2.108 | Oct 13, 2020 |
+| Rel 21-03 | [4578953] | .NET Framework 3.5 Security and Quality Rollup | 4.88 | Oct 13, 2020 |
+| Rel 21-03 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup | 4.88 | Oct 13, 2020 |
+| Rel 21-03 | [4578950] | .NET Framework 3.5 Security and Quality Rollup | 3.95 | Oct 13, 2020 |
+| Rel 21-03 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | 3.95 | Oct 13, 2020 |
+| Rel 21-03 | [4601060] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.29 | Feb 9, 2021 |
+| Rel 21-03 | [5000841] | Monthly Rollup | 2.108 | Mar 9, 2021 |
+| Rel 21-03 | [5000847] | Monthly Rollup | 3.95 | Mar 9, 2021 |
+| Rel 21-03 | [5000848] | Monthly Rollup | 4.88 | Mar 9, 2021 |
+| Rel 21-03 | [4566426] | Servicing Stack update | 3.95 | July 14, 2020 |
+| Rel 21-03 | [4566425] | Servicing Stack update | 4.88 | July 14, 2020 |
+| Rel 21-03 OOB | [4578013] | Standalone Security Update | 4.88 | Aug 19, 2020 |
+| Rel 21-03 | [4592510] | Servicing Stack update | 2.108 | Dec 8, 2020 |
+| Rel 21-03 | [5000859] | Servicing Stack update | 6.29 | Mar 9, 2021 |
+| Rel 21-03 | [4494175] | Microcode | 5.53 | Sep 1, 2020 |
+| Rel 21-03 | [4494174] | Microcode | 6.29 | Sep 1, 2020 |
+
+[5000822]: https://support.microsoft.com/kb/5000822
+[4580325]: https://support.microsoft.com/kb/4580325
+[5000800]: https://support.microsoft.com/kb/5000800
+[5000803]: https://support.microsoft.com/kb/5000803
+[4578952]: https://support.microsoft.com/kb/4578952
+[4578955]: https://support.microsoft.com/kb/4578955
+[4578953]: https://support.microsoft.com/kb/4578953
+[4578956]: https://support.microsoft.com/kb/4578956
+[4578950]: https://support.microsoft.com/kb/4578950
+[4578954]: https://support.microsoft.com/kb/4578954
+[4601060]: https://support.microsoft.com/kb/4601060
+[5000841]: https://support.microsoft.com/kb/5000841
+[5000847]: https://support.microsoft.com/kb/5000847
+[5000848]: https://support.microsoft.com/kb/5000848
+[4566426]: https://support.microsoft.com/kb/4566426
+[4566425]: https://support.microsoft.com/kb/4566425
+[4578013]: https://support.microsoft.com/kb/4578013
+[4592510]: https://support.microsoft.com/kb/4592510
+[5000859]: https://support.microsoft.com/kb/5000859
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
++ ## February 2021 Guest OS
cosmos-db Change Feed Pull Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/change-feed-pull-model.md
ms.devlang: dotnet Previously updated : 03/01/2021 Last updated : 03/10/2021
The `FeedIterator` comes in two flavors. In addition to the examples below that
Here's an example for obtaining a `FeedIterator` that returns entity objects, in this case a `User` object: ```csharp
-FeedIterator<User> InteratorWithPOCOS = container.GetChangeFeedIterator<User>(ChangeFeedMode.Incremental, ChangeFeedStartFrom.Beginning());
+FeedIterator<User> iteratorWithPOCOS = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
``` Here's an example for obtaining a `FeedIterator` that returns a `Stream`: ```csharp
-FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator<User>(ChangeFeedMode.Incremental, ChangeFeedStartFrom.Beginning());
+FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
``` If you don't supply a `FeedRange` to a `FeedIterator`, you can process an entire container's change feed at your own pace. Here's an example which starts reading all changes starting at the current time: ```csharp
-FeedIterator iteratorForTheEntireContainer = container.GetChangeFeedStreamIterator<User>(ChangeFeedMode.Incremental, ChangeFeedStartFrom.Now());
+FeedIterator iteratorForTheEntireContainer = container.GetChangeFeedStreamIterator(ChangeFeedStartFrom.Now(), ChangeFeedMode.Incremental);
while (iteratorForTheEntireContainer.HasMoreResults) {
In some cases, you may only want to process a specific partition key's changes.
```csharp FeedIterator<User> iteratorForPartitionKey = container.GetChangeFeedIterator<User>(
- ChangeFeedMode.Incremental,
- ChangeFeedStartFrom.Beginning(FeedRange.FromPartitionKey(new PartitionKey("PartitionKeyValue"))));
+ ChangeFeedStartFrom.Beginning(FeedRange.FromPartitionKey(new PartitionKey("PartitionKeyValue"))), ChangeFeedMode.Incremental);
while (iteratorForPartitionKey.HasMoreResults) {
Here's a sample that shows how to read from the beginning of the container's cha
Machine 1: ```csharp
-FeedIterator<User> iteratorA = container.GetChangeFeedIterator<User>(ChangeFeedMode.Incremental, ChangeFeedStartFrom.Beginning(ranges[0]));
+FeedIterator<User> iteratorA = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[0]), ChangeFeedMode.Incremental);
while (iteratorA.HasMoreResults) { try {
while (iteratorA.HasMoreResults)
Machine 2: ```csharp
-FeedIterator<User> iteratorB = container.GetChangeFeedIterator<User>(ChangeFeedMode.Incremental, ChangeFeedStartFrom.Beginning(ranges[1]));
+FeedIterator<User> iteratorB = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[1]), ChangeFeedMode.Incremental);
while (iteratorB.HasMoreResults) { try {
while (iteratorB.HasMoreResults)
You can save the position of your `FeedIterator` by creating a continuation token. A continuation token is a string value that keeps track of your FeedIterator's last processed changes, which allows the `FeedIterator` to resume at that point later. The following code reads through the change feed since container creation. After no more changes are available, it persists a continuation token so that change feed consumption can be resumed later. ```csharp
-FeedIterator<User> iterator = container.GetChangeFeedIterator<User>(ChangeFeedMode.Incremental, ChangeFeedStartFrom.Beginning());
+FeedIterator<User> iterator = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
string continuation = null;
while (iterator.HasMoreResults)
} // Some time later
-FeedIterator<User> iteratorThatResumesFromLastPoint = container.GetChangeFeedIterator<User>(ChangeFeedMode.Incremental, ChangeFeedStartFrom.ContinuationToken(continuation));
+FeedIterator<User> iteratorThatResumesFromLastPoint = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.ContinuationToken(continuation), ChangeFeedMode.Incremental);
``` As long as the Cosmos container still exists, a FeedIterator's continuation token never expires.
data-factory Continuous Integration Deployment Improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment-improvements.md
Last updated 02/02/2021
## Overview
-Continuous integration is the practice of testing each change made to your codebase automatically and as early as possible Continuous delivery follows the testing that happens during continuous integration and pushes changes to a staging or production system.
+Continuous integration is the practice of testing each change made to your codebase automatically and as early as possible. Continuous delivery follows the testing that happens during continuous integration and pushes changes to a staging or production system.
-In Azure Data Factory, continuous integration and delivery (CI/CD) means moving Data Factory pipelines from one environment (development, test, production) to another. Azure Data Factory utilizes [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) to store the configuration of your various ADF entities (pipelines, datasets, data flows, and so on). There are two suggested methods to promote a data factory to another environment:
+In Azure Data Factory, continuous integration and continuous delivery (CI/CD) means moving Data Factory pipelines from one environment, such as development, test, and production, to another. Data Factory uses [Azure Resource Manager templates (ARM templates)](../azure-resource-manager/templates/overview.md) to store the configuration of your various Data Factory entities, such as pipelines, datasets, and data flows.
-- Automated deployment using Data Factory's integration with [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines).-- Manually upload a Resource Manager template using Data Factory UX integration with Azure Resource Manager.
+There are two suggested methods to promote a data factory to another environment:
+
+- Automated deployment using the integration of Data Factory with [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines).
+- Manually uploading an ARM template by using Data Factory user experience integration with Azure Resource Manager.
For more information, see [Continuous integration and delivery in Azure Data Factory](continuous-integration-deployment.md).
-In this article, we focus on the continuous deployment improvements and the automated publish feature for CI/CD.
+This article focuses on the continuous deployment improvements and the automated publish feature for CI/CD.
## Continuous deployment improvements
-The "Automated publish" feature takes the *validate all* and export *Azure Resource Manager (ARM) template* features from the ADF UX and makes the logic consumable via a publicly available npm package [@microsoft/azure-data-factory-utilities](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities). This allows you to programmatically trigger these actions instead of having to go to the ADF UI and do a button click. This will give your CI/CD pipelines a truer continuous integration experience.
+The automated publish feature takes the **Validate all** and **Export ARM template** features from the Data Factory user experience and makes the logic consumable via a publicly available npm package [@microsoft/azure-data-factory-utilities](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities). For this reason, you can programmatically trigger these actions instead of having to go to the Data Factory UI and select a button manually. This capability will give your CI/CD pipelines a truer continuous integration experience.
### Current CI/CD flow 1. Each user makes changes in their private branches.
-2. Push to master is forbidden, users must create a PR to master to make changes.
-3. Users must load ADF UI and click publish to deploy changes to Data Factory and generate the ARM templates in the Publish branch.
-4. DevOps Release pipeline is configured to create a new release and deploy the ARM template each time a new change is pushed to the publish branch.
+1. Push to master isn't allowed. Users must create a pull request to make changes.
+1. Users must load the Data Factory UI and select **Publish** to deploy changes to Data Factory and generate the ARM templates in the publish branch.
+1. The DevOps Release pipeline is configured to create a new release and deploy the ARM template each time a new change is pushed to the publish branch.
-![Current CI/CD Flow](media/continuous-integration-deployment-improvements/current-ci-cd-flow.png)
+![Diagram that shows the current CI/CD flow.](media/continuous-integration-deployment-improvements/current-ci-cd-flow.png)
### Manual step
-In current CI/CD flow, the UX is the intermediary to create the ARM template, therefore a user must go to ADF UI and manually click publish to start the ARM template generation and drop it in the publish branch, which is a bit of a hack.
+In the current CI/CD flow, the user experience is the intermediary to create the ARM template. As a result, a user must go to the Data Factory UI and manually select **Publish** to start the ARM template generation and drop it in the publish branch.
### The new CI/CD flow 1. Each user makes changes in their private branches.
-2. Push to master is forbidden, users must create a PR to master to make changes.
-3. **Azure DevOps pipeline build is triggered every time a new commit is made to master, validates the resources and generates an ARM template as an artifact if validation succeeds.**
-4. DevOps Release pipeline is configured to create a new release and deploy the ARM template each time a new build is available.
+1. Push to master isn't allowed. Users must create a pull request to make changes.
+1. The Azure DevOps pipeline build is triggered every time a new commit is made to master. It validates the resources and generates an ARM template as an artifact if validation succeeds.
+1. The DevOps Release pipeline is configured to create a new release and deploy the ARM template each time a new build is available.
-![New CI/CD Flow](media/continuous-integration-deployment-improvements/new-ci-cd-flow.png)
+![Diagram that shows the new CI/CD flow.](media/continuous-integration-deployment-improvements/new-ci-cd-flow.png)
### What changed? -- We now have a build process using a DevOps build pipeline.-- The build pipeline uses ADFUtilities NPM package, which will validate all the resources and generate the ARM templates (single and linked templates).-- The build pipeline will be responsible of validating ADF resources and generating the ARM template instead of ADF UI (Publish button).-- DevOps release definition will now consume this new build pipeline instead of the Git artifact.
+- We now have a build process that uses a DevOps build pipeline.
+- The build pipeline uses the ADFUtilities NPM package, which will validate all the resources and generate the ARM templates. These templates can be single and linked.
+- The build pipeline is responsible for validating Data Factory resources and generating the ARM template instead of the Data Factory UI (**Publish** button).
+- The DevOps release definition will now consume this new build pipeline instead of the Git artifact.
> [!NOTE]
-> You can continue to use existing mechanism (adf_publish branch) or use the new flow. Both are supported.
+> You can continue to use the existing mechanism, which is the `adf_publish` branch, or you can use the new flow. Both are supported.
## Package overview
-There are two commands currently available in the package:
+Two commands are currently available in the package:
+ - Export ARM template - Validate ### Export ARM template
-Run npm run start export <rootFolder> <factoryId> [outputFolder] to export the ARM template using the resources of a given folder. This command runs a validation check as well prior to generating the ARM template. Below is an example:
+Run `npm run start export <rootFolder> <factoryId> [outputFolder]` to export the ARM template by using the resources of a given folder. This command also runs a validation check prior to generating the ARM template. Here's an example:
``` npm run start export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory ArmTemplateOutput ``` -- RootFolder is a mandatory field that represents where the Data Factory resources are located.-- FactoryId is a mandatory field that represents the Data factory resource ID in the format: "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>".-- OutputFolder is an optional parameter that specifies the relative path to save the generated ARM template.
+- `RootFolder` is a mandatory field that represents where the Data Factory resources are located.
+- `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`.
+- `OutputFolder` is an optional parameter that specifies the relative path to save the generated ARM template.
> [!NOTE]
-> The ARM template generated is not published to the `Live` version of the factory. Deployment should be done using a CI/CD pipeline.
+> The ARM template generated isn't published to the live version of the factory. Deployment should be done by using a CI/CD pipeline.
- ### Validate
-Run npm run start validate <rootFolder> <factoryId> to validate all the resources of a given folder. Below is an example:
-
+Run `npm run start validate <rootFolder> <factoryId>` to validate all the resources of a given folder. Here's an example:
+ ``` npm run start validate C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory ``` -- RootFolder is a mandatory field that represents where the Data Factory resources are located.-- FactoryId is a mandatory field that represents the Data factory resource ID in the format: "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>".-
+- `RootFolder` is a mandatory field that represents where the Data Factory resources are located.
+- `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`.
## Create an Azure pipeline
-While npm packages can be consumed in various ways, one of the primary benefits is being consumed via an [Azure Pipeline](https://nam06.safelinks.protection.outlook.com/?url=https:%2F%2Fdocs.microsoft.com%2F%2Fazure%2Fdevops%2Fpipelines%2Fget-started%2Fwhat-is-azure-pipelines%3Fview%3Dazure-devops%23:~:text%3DAzure%2520Pipelines%2520is%2520a%2520cloud%2Cit%2520available%2520to%2520other%2520users.%26text%3DAzure%2520Pipelines%2520combines%2520continuous%2520integration%2Cship%2520it%2520to%2520any%2520target.&data=04%7C01%7Cabnarain%40microsoft.com%7C5f064c3d5b7049db540708d89564b0bc%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C1%7C637423607000268277%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=jo%2BkIvSBiz6f%2B7kmgqDN27TUWc6YoDanOxL9oraAbmA%3D&reserved=0). On each merge into your collaboration branch, a pipeline can be triggered that first validates all of the code and then exports the ARM template into a [build artifact](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2F%2Fazure%2Fdevops%2Fpipelines%2Fartifacts%2Fbuild-artifacts%3Fview%3Dazure-devops%26tabs%3Dyaml%23how-do-i-consume-artifacts&data=04%7C01%7Cabnarain%40microsoft.com%7C5f064c3d5b7049db540708d89564b0bc%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C1%7C637423607000278113%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=dN3t%2BF%2Fzbec4F28hJqigGANvvedQoQ6npzegTAwTp1A%3D&reserved=0) that can be consumed by a release pipeline. **How it differs from the current CI/CD process is that you will point your release pipeline at this artifact instead of the existing `adf_publish` branch.**
+While npm packages can be consumed in various ways, one of the primary benefits is being consumed via [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines). On each merge into your collaboration branch, a pipeline can be triggered that first validates all of the code and then exports the ARM template into a [build artifact](/azure/devops/pipelines/artifacts/build-artifacts?view=azure-devops&tabs=yaml#how-do-i-consume-artifacts) that can be consumed by a release pipeline. The difference from the current CI/CD process is that you *point your release pipeline at this artifact instead of the existing `adf_publish` branch*.
-Follow the below steps to get started:
+Follow these steps to get started:
-1. Open an Azure DevOps project and go to "Pipelines". Select "New Pipeline".
+1. Open an Azure DevOps project, and go to **Pipelines**. Select **New Pipeline**.
- ![New Pipeline](media/continuous-integration-deployment-improvements/new-pipeline.png)
+ ![Screenshot that shows the New pipeline button.](media/continuous-integration-deployment-improvements/new-pipeline.png)
-2. Select the repository where you wish to save your Pipeline YAML script. We recommend saving it in a *build* folder within the same repository of your ADF resources. Ensure there is a **package.json** file in the repository as well that contains the package name (as shown in below example).
+1. Select the repository where you want to save your pipeline YAML script. We recommend saving it in a build folder in the same repository of your Data Factory resources. Ensure there's a *package.json* file in the repository that contains the package name, as shown in the following example:
```json {
Follow the below steps to get started:
} ```
-3. Select *Starter pipeline*. If you have uploaded or merged the YAML file (as shown in below example), you can also point directly at that and edit it.
+1. Select **Starter pipeline**. If you've uploaded or merged the YAML file, as shown in the following example, you can also point directly at that and edit it.
- ![Starter pipeline](media/continuous-integration-deployment-improvements/starter-pipeline.png)
+ ![Screenshot that shows Starter pipeline.](media/continuous-integration-deployment-improvements/starter-pipeline.png)
```yaml
- # Sample YAML file to validate and export an ARM template into a Build Artifact
+ # Sample YAML file to validate and export an ARM template into a build artifact
# Requires a package.json file located in the target repository trigger:
Follow the below steps to get started:
verbose: true displayName: 'Install npm package'
- # Validates all of the ADF resources in the repository. You will get the same validation errors as when "Validate All" is clicked
- # Enter the appropriate subscription and name for the source factory
+ # Validates all of the Data Factory resources in the repository. You'll get the same validation errors as when "Validate All" is selected.
+ # Enter the appropriate subscription and name for the source factory.
- task: Npm@1 inputs:
Follow the below steps to get started:
customCommand: 'run build validate $(Build.Repository.LocalPath) /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName' displayName: 'Validate'
- # Validate and then generate the ARM template into the destination folder. Same as clicking "Publish" from UX
- # The ARM template generated is not published to the 'Live' version of the factory. Deployment should be done using a CI/CD pipeline.
+ # Validate and then generate the ARM template into the destination folder, which is the same as selecting "Publish" from the UX.
+ # The ARM template generated isn't published to the live version of the factory. Deployment should be done by using a CI/CD pipeline.
- task: Npm@1 inputs:
Follow the below steps to get started:
customCommand: 'run build export $(Build.Repository.LocalPath) /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName "ArmTemplate"' displayName: 'Validate and Generate ARM template'
- # Publish the Artifact to be used as a source for a release pipeline
+ # Publish the artifact to be used as a source for a release pipeline.
- task: PublishPipelineArtifact@1 inputs:
Follow the below steps to get started:
publishLocation: 'pipeline' ```
-4. Enter in your YAML code. We recommend taking the YAML file and using it as a starting point.
-5. Save and run. If using the YAML, it will get triggered every time the "main" branch is updated.
+1. Enter your YAML code. We recommend that you use the YAML file as a starting point.
+1. Save and run. If you used the YAML, it gets triggered every time the main branch is updated.
## Next steps
-Learn more information about continuous integration and delivery in Data Factory:
+Learn more information about continuous integration and delivery in Data Factory:
- [Continuous integration and delivery in Azure Data Factory](continuous-integration-deployment.md).
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
Title: Troubleshoot pipeline orchestration and triggers in Azure Data Factory
description: Use different methods to troubleshoot pipeline trigger issues in Azure Data Factory. Previously updated : 12/15/2020 Last updated : 03/13/2021
Pipeline runs are typically instantiated by passing arguments to parameters that
You have Data Factory and an Azure function app running on a private endpoint. You're trying to run a pipeline that interacts with the function app. You've tried three different methods, but one returns error "Bad Request," and the other two methods return "103 Error Forbidden."
-**Cause**: Data Factory currently doesn't support a private endpoint connector for function apps. Azure Functions rejects calls because it's configured to allow only connections from a private link.
+**Cause**
-**Resolution**: Create a **PrivateLinkService** endpoint and provide your function app's DNS.
+Data Factory currently doesn't support a private endpoint connector for function apps. Azure Functions rejects calls because it's configured to allow only connections from a private link.
+
+**Resolution**
+
+Create a **PrivateLinkService** endpoint and provide your function app's DNS.
### A pipeline run is canceled but the monitor still shows progress status
+**Cause**
+ When you cancel a pipeline run, pipeline monitoring often still shows the progress status. This happens because of a browser cache issue. You also might not have the correct monitoring filters.
-**Resolution**: Refresh the browser and apply the correct monitoring filters.
+**Resolution**
+
+Refresh the browser and apply the correct monitoring filters.
### You see a "DelimitedTextMoreColumnsThanDefined" error when copying a pipeline
+ **Cause**
+
If a folder you're copying contains files with different schemas, such as variable number of columns, different delimiters, quote char settings, or some data issue, the Data Factory pipeline might throw this error: `
Message=Error found when processing 'Csv/Tsv Format Text' source '0_2020_11_09_1
Source=Microsoft.DataTransfer.Common,' `
-**Resolution**: Select the **Binary Copy** option while creating the Copy activity. This way, for bulk copies or migrating your data from one data lake to another, Data Factory won't open the files to read the schema. Instead, Data Factory will treat each file as binary and copy it to the other location.
+**Resolution**
+
+Select the **Binary Copy** option while creating the Copy activity. This way, for bulk copies or migrating your data from one data lake to another, Data Factory won't open the files to read the schema. Instead, Data Factory will treat each file as binary and copy it to the other location.
+
+### A pipeline run fails when you reach the capacity limit of the integration runtime for data flow
-### A pipeline run fails when you reach the capacity limit of the integration runtime
+**Issue**
Error message:
Error message:
Type=Microsoft.DataTransfer.Execution.Core.ExecutionException,Message=There are substantial concurrent MappingDataflow executions which is causing failures due to throttling under Integration Runtime 'AutoResolveIntegrationRuntime'. `
-**Cause**: You've reached the integration runtime's capacity limit. You might be running a large amount of data flow by using the same integration runtime at the same time. See [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#version-2) for details.
+**Cause**
-**Resolution**:
+You've reached the integration runtime's capacity limit. You might be running a large amount of data flow by using the same integration runtime at the same time. See [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#version-2) for details.
+
+**Resolution**
- Run your pipelines at different trigger times. - Create a new integration runtime, and split your pipelines across multiple integration runtimes.
-### You have activity-level errors and failures in pipelines
+### How to handle activity-level errors and failures in pipelines
+
+**Cause**
Azure Data Factory orchestration allows conditional logic and enables users to take different paths based upon the outcome of a previous activity. It allows four conditional paths: **Upon Success** (default pass), **Upon Failure**, **Upon Completion**, and **Upon Skip**.
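These paths surface in the pipeline JSON as `dependencyConditions` values (`Succeeded`, `Failed`, `Completed`, `Skipped`) on the downstream activity. As a rough, trimmed sketch (the activity names are illustrative and the activity's `typeProperties` are omitted):

```json
{
  "name": "SendFailureAlert",
  "type": "WebActivity",
  "dependsOn": [
    {
      "activity": "CopySalesData",
      "dependencyConditions": [ "Failed" ]
    }
  ]
}
```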
Azure Data Factory evaluates the outcome of all leaf-level activities. Pipeline
**Resolution**
-1. Implement activity-level checks by following [How to handle pipeline failures and errors](https://techcommunity.microsoft.com/t5/azure-data-factory/understanding-pipeline-failures-and-error-handling/ba-p/1630459).
-1. Use Azure Logic Apps to monitor pipelines in regular intervals following [Query By Factory](/rest/api/datafactory/pipelineruns/querybyfactory).
+* Implement activity-level checks by following [How to handle pipeline failures and errors](https://techcommunity.microsoft.com/t5/azure-data-factory/understanding-pipeline-failures-and-error-handling/ba-p/1630459).
+* Use Azure Logic Apps to monitor pipelines in regular intervals following [Query By Factory](/rest/api/datafactory/pipelineruns/querybyfactory).
+* [Visually Monitor Pipeline](https://docs.microsoft.com/azure/data-factory/monitor-visually)
### How to monitor pipeline failures in regular intervals
+**Cause**
+ You might need to monitor failed Data Factory pipelines in intervals, say 5 minutes. You can query and filter the pipeline runs from a data factory by using the endpoint. **Resolution**
-You can set up an Azure logic app to query all of the failed pipelines every 5 minutes, as described in [Query By Factory](/rest/api/datafactory/pipelineruns/querybyfactory). Then, you can report incidents to your ticketing system.
-
-For more information, go to [Send Notifications from Data Factory, Part 2](https://www.mssqltips.com/sqlservertip/5962/send-notifications-from-an-azure-data-factory-pipeline--part-2/).
+* You can set up an Azure logic app to query all of the failed pipelines every 5 minutes, as described in [Query By Factory](/rest/api/datafactory/pipelineruns/querybyfactory). Then, you can report incidents to your ticketing system.
+* [Visually Monitor Pipeline](https://docs.microsoft.com/azure/data-factory/monitor-visually)
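For illustration, the same Query By Factory operation can also be called directly from a script or scheduled job. Here's a minimal sketch using `az rest`; the subscription, resource group, factory name, and time window are placeholders that you'd replace with your own values:

```azurecli
# Query pipeline runs that failed in a given 5-minute window (requires az login)
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataFactory/factories/<factory-name>/queryPipelineRuns?api-version=2018-06-01" \
  --body '{
    "lastUpdatedAfter": "2021-03-12T00:00:00Z",
    "lastUpdatedBefore": "2021-03-12T00:05:00Z",
    "filters": [
      { "operand": "Status", "operator": "Equals", "values": [ "Failed" ] }
    ]
  }'
```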
### Degree of parallelism increase does not result in higher throughput
Known Facts about *ForEach*
* You should not use the *SetVariable* activity inside a *For Each* loop that runs in parallel. * Taking into consideration the way the queues are constructed, you can improve foreach performance by setting up multiple *foreaches*, where each foreach has items with similar processing times. This ensures that long runs are processed in parallel rather than sequentially.
+ ### Pipeline status is queued or stuck for a long time
+
+ **Cause**
+
+ This can happen for various reasons, such as hitting concurrency limits, service outages, or network failures.
+
+ **Resolution**
+
+* Concurrency limit: If your pipeline has a concurrency policy, verify that there are no old pipeline runs in progress. The maximum pipeline concurrency allowed in Azure Data Factory is 10 pipelines.
+* Monitoring limits: Go to the ADF authoring canvas, select your pipeline, and determine if it has a concurrency property assigned to it. If it does, go to the Monitoring view, and make sure there's nothing in the past 45 days that's in progress. If there is something in progress, you can cancel it and the new pipeline run should start.
+* Transient issues: Your run might have been affected by a transient network issue, credential failures, service outages, and so on. If this happens, Azure Data Factory has an internal recovery process that monitors all the runs and starts them when it notices something went wrong. This process runs every hour, so if your run is stuck for more than an hour, create a support case.
+
+### Longer start up times for activities in ADF Copy and Data Flow
+
+**Cause**
+
+This can happen if you haven't implemented the time to live (TTL) feature for data flows or optimized the self-hosted integration runtime (SHIR) for your workload.
+
+**Resolution**
+
+* If each copy activity is taking up to 2 minutes to start, and the problem occurs primarily on a VNet join (vs. Azure IR), this can be a copy performance issue. To review troubleshooting steps, go to [Copy Performance Improvement.](https://docs.microsoft.com/azure/data-factory/copy-activity-performance-troubleshooting)
+* You can use the time to live (TTL) feature to decrease the cluster start up time for data flow activities. Review [Data Flow Integration Runtime.](https://docs.microsoft.com/azure/data-factory/control-flow-execute-data-flow-activity#data-flow-integration-runtime)
+
+ ### Hitting capacity issues in SHIR(Self Hosted Integration Runtime)
+
+ **Cause**
+
+This can happen if you haven't scaled up the SHIR to match your workload.
+
+**Resolution**
+
+* If you encounter a capacity issue with the SHIR, upgrade the VM or increase the node count to balance the activities. If you receive an error message about a self-hosted IR general failure or error, a self-hosted IR upgrade, or self-hosted IR connectivity issues, which can generate a long queue, go to [Troubleshoot self-hosted integration runtime.](https://docs.microsoft.com/azure/data-factory/self-hosted-integration-runtime-troubleshoot-guide)
+
+### Error messages due to long queues for ADF Copy and Data Flow
+
+**Cause**
+
+Long queue related error messages can appear for various reasons.
+
+**Resolution**
+* If you receive an error message from any source or destination via connectors, which can generate a long queue, go to [Connector Troubleshooting Guide.](https://docs.microsoft.com/azure/data-factory/connector-troubleshoot-guide)
+* If you receive an error message about Mapping Data Flow, which can generate a long queue, go to [Data Flows Troubleshooting Guide.](https://docs.microsoft.com/azure/data-factory/data-flow-troubleshoot-guide)
+* If you receive an error message about other activities, such as Databricks, custom activities, or HDI, which can generate a long queue, go to [Activity Troubleshooting Guide.](https://docs.microsoft.com/azure/data-factory/data-factory-troubleshoot-guide)
+* If you receive an error message about running SSIS packages, which can generate a long queue, go to the [Azure-SSIS Package Execution Troubleshooting Guide](https://docs.microsoft.com/azure/data-factory/ssis-integration-runtime-ssis-activity-faq) and [Integration Runtime Management Troubleshooting Guide.](https://docs.microsoft.com/azure/data-factory/ssis-integration-runtime-management-troubleshoot)
++ ## Next steps For more troubleshooting help, try these resources:
digital-twins How To Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-query-graph.md
Get digital twins by **properties** (including ID and metadata):
:::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryByProperty1":::
-> [!NOTE]
-> The ID of a digital twin is queried using the metadata field `$dtId`.
+As shown in the query above, the ID of a digital twin is queried using the metadata field `$dtId`.
+
+>[!TIP]
+> If you are using Cloud Shell to run a query with metadata fields that begin with `$`, you should escape the `$` with a backtick to let Cloud Shell know it's not a variable and should be consumed as a literal in the query text.
You can also get twins based on **whether a certain property is defined**. Here is a query that gets twins that have a defined *Location* property:
digital-twins Tutorial Command Line App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-command-line-app.md
# Mandatory fields. Title: 'Tutorial: Explore the basics with a sample client app'
+ Title: 'Tutorial: Create a graph in Azure Digital Twins (client app)'
-description: Tutorial to explore the Azure Digital Twins SDKs using a sample command-line application
+description: Tutorial to build an Azure Digital Twins scenario using a sample command-line application
Last updated 5/8/2020
#
-# Tutorial: Explore Azure Digital Twins with a sample client app
+# Tutorial: Create an Azure Digital Twins graph using a sample client app
-This tutorial introduces a sample application that implements a command-line client application, for interacting with an Azure Digital Twins instance. The client app is similar to the one written in [*Tutorial: Code a client app*](tutorial-code.md).
-You can use this sample to perform essential Azure Digital Twins actions such as uploading models, creating and modifying twins, and creating relationships. You can also look at the code of the sample to learn about the Azure Digital Twins APIs, and practice implementing your own commands by modifying the sample project however you would like.
+In this tutorial, you'll build a graph in Azure Digital Twins using models, twins, and relationships. The tool for this tutorial is a **sample command-line client application** for interacting with an Azure Digital Twins instance. The client app is similar to the one written in [*Tutorial: Code a client app*](tutorial-code.md).
+
+You can use this sample to perform essential Azure Digital Twins actions such as uploading models, creating and modifying twins, and creating relationships. You can also look at the [code of the sample](https://github.com/Azure-Samples/digital-twins-samples/tree/master/) to learn about the Azure Digital Twins APIs, and practice implementing your own commands by modifying the sample project however you would like.
In this tutorial, you will... > [!div class="checklist"]
-> * Set up an Azure Digital Twins instance
-> * Configure the sample command-line app to interact with the instance
-> * Use the command-line app to explore Azure Digital Twins, including **models**, **digital twins**, **relationships**, and **queries**
+> * Model an environment
+> * Create digital twins
+> * Add relationships to form a graph
+> * Query the graph to answer questions
[!INCLUDE [Azure Digital Twins tutorial: sample prerequisites](../../includes/digital-twins-tutorial-sample-prereqs.md)] [!INCLUDE [Azure Digital Twins tutorial: configure the sample project](../../includes/digital-twins-tutorial-sample-configure.md)]
-## Explore with the sample solution
-
-Now that the instance and sample app are configured, you will use the sample project and some pre-written example code to build out and explore a basic Azure Digital Twins solution. The major solution components are **models**, **digital twins**, and **relationships**, resulting in a queryable **twin graph** of an environment.
-
-### Model a physical environment with DTDL
-
-The first step in creating an Azure Digital Twins solution is defining twin [**models**](concepts-models.md) for your environment.
-
-Models are similar to classes in object-oriented programming languages; they provide user-defined templates for [digital twins](concepts-twins-graph.md) to follow and instantiate later. They are written in a JSON-like language called **Digital Twins Definition Language (DTDL)**, and can define a twin's *properties*, *telemetry*, *relationships*, and *components*.
+### Run the sample project
-> [!NOTE]
-> DTDL also allows for the definition of *commands* on digital twins. However, commands are not currently supported in the Azure Digital Twins service.
+Now that the app and authentication are set up, run the project with this button in the toolbar:
-In your Visual Studio window where the _**AdtE2ESample**_ project is open, use the *Solution Explorer* pane to navigate to the *AdtSampleApp\SampleClientApp\Models* folder. This folder contains sample models.
-Select *Room.json* to open it in the editing window, and change it in the following ways:
-
-1. **Update the version number**, to indicate that you are providing a more-updated version of this model. Do this by changing the *1* at the end of the `@id` value to a *2*. Any number greater than the current version number will also work.
-1. **Edit a property**. Change the name of the `Humidity` property to *HumidityLevel* (or something different if you'd like. If you use something different than *HumidityLevel*, remember what you used and continue using that instead of *HumidityLevel* throughout the tutorial).
-1. **Add a property**. Underneath the `HumidityLevel` property that ends on line 15, paste the following code to add a `RoomName` property to the room:
-
- :::code language="json" source="~/digital-twins-docs-samples/models/Room.json" range="16-20":::
-
-1. **Add a relationship**. Underneath the `RoomName` property that you just added, paste the following code to add the ability for this type of twin to form *contains* relationships with other twins:
-
- :::code language="json" source="~/digital-twins-docs-samples/models/Room.json" range="21-24":::
-
-When you are finished, the updated model should match this:
+A console window will open, carry out authentication, and wait for a command.
+* Authentication is handled through the browser: your default web browser will open with an authentication prompt. Use this prompt to sign in with your Azure credentials. You can then close the browser tab or window.
+Here is a screenshot of what the project console looks like:
-Make sure to save the file before moving on.
> [!TIP]
-> If you want to try creating your own model, you can paste the *Room* model code into a new file that you save with a *.json* extension in the *AdtSampleApp\SampleClientApp\Models* folder. Then, play around with adding properties and relationships to represent whatever you'd like. You can also look at the other sample models in this folder for ideas.
+> For a list of all the possible commands you can use with this project, enter `help` in the project console and press return.
+> :::image type="content" source="media/tutorial-command-line/app/command-line-app-help.png" alt-text="Output of the help command":::
-> [!TIP]
-> There is a language-agnostic [DTDL Validator sample](/samples/azure-samples/dtdl-validator/dtdl-validator) that you can use to check model documents to make sure the DTDL is valid. It is built on the DTDL parser library, which you can read more about in [*How-to: Parse and validate models*](how-to-parse-models.md).
+Keep the project console running for the rest of the steps in this tutorial.
-### Get started with the command-line app
+## Model a physical environment with DTDL
-Now that you've defined a model, the remaining steps involve using the sample app to interact with your Azure Digital Twins instance. Run the project with this button in the toolbar:
+Now that the Azure Digital Twins instance and sample app are set up, you can begin building a graph of a scenario.
+The first step in creating an Azure Digital Twins solution is defining twin [**models**](concepts-models.md) for your environment.
-A console window will open, carry out authentication, and wait for a command.
-* Authentication is handled through the browser: your default web browser will open with an authentication prompt. Use this prompt to sign in with your Azure credentials. You can then close the browser tab or window.
+Models are similar to classes in object-oriented programming languages; they provide user-defined templates for [digital twins](concepts-twins-graph.md) to follow and instantiate later. They are written in a JSON-like language called **Digital Twins Definition Language (DTDL)**, and can define a twin's *properties*, *telemetry*, *relationships*, and *components*.
-Here is a screenshot of what the project console looks like:
+> [!NOTE]
+> DTDL also allows for the definition of *commands* on digital twins. However, commands are not currently supported in the Azure Digital Twins service.
+In your Visual Studio window where the _**AdtE2ESample**_ project is open, use the *Solution Explorer* pane to navigate to the *AdtSampleApp\SampleClientApp\Models* folder. This folder contains sample models.
-> [!TIP]
-> For a list of all the possible commands you can use with this project, enter `help` in the project console and press return.
-> :::image type="content" source="media/tutorial-command-line-app/command-line-app-help.png" alt-text="Output of the help command":::
+Select *Room.json* to open it in the editing window, and change it in the following ways:
-Keep the project console running for the rest of the steps in this tutorial.
-#### Upload models to Azure Digital Twins
+### Upload models to Azure Digital Twins
After designing models, you need to upload them to your Azure Digital Twins instance. This configures your Azure Digital Twins service instance with your own custom domain vocabulary. Once you have uploaded the models, you can create twin instances that use them.
-In the project console window, run the following command to upload your updated *Room* model, as well as a *Floor* model that you'll also use in the next section to create different types of twins.
+1. In the project console window, run the following command to upload your updated *Room* model, as well as a *Floor* model that you'll also use in the next section to create different types of twins.
-```cmd/sh
-CreateModels Room Floor
-```
-
-The output should indicate the models were created successfully.
-
-> [!TIP]
-> If you designed your own model earlier, you can also upload it here, by adding its file name (you can leave out the extension) to the `Room Floor` list in the command above.
+ ```cmd/sh
+ CreateModels Room Floor
+ ```
+
+ The output should indicate the models were created successfully.
-Verify the models were created by running the command `GetModels true`. This will query the Azure Digital Twins instance for all models that have been uploaded, and print out their full information. Look for the edited *Room* model in the results:
+1. Verify the models were created by running the command `GetModels true`. This will query the Azure Digital Twins instance for all models that have been uploaded, and print out their full information. Look for the edited *Room* model in the results:
+ :::image type="content" source="media/tutorial-command-line/app/output-get-models.png" alt-text="Results of GetModels, showing the updated Room model":::
-#### Errors
+### Errors
The sample application also handles errors from the service.
Content-Length: 223
Content-Type: application/json; charset=utf-8 ```
-### Create digital twins
+## Create digital twins
Now that some models have been uploaded to your Azure Digital Twins instance, you can create [**digital twins**](concepts-twins-graph.md) based on the model definitions. Digital twins represent the entities within your business environment: things like sensors on a farm, rooms in a building, or lights in a car. To create a digital twin, you use the `CreateDigitalTwin` command. You must reference the model that the twin is based on, and can optionally define initial values for any properties in the model. You do not have to pass any relationship information at this stage.
-Run this code in the running project console to create several twins, based on the *Room* model you updated earlier and another model, *Floor*. Recall that *Room* has three properties, so you can provide arguments with the initial values for these.
-
-```cmd/sh
-CreateDigitalTwin dtmi:example:Room;2 room0 RoomName string Room0 Temperature double 70 HumidityLevel double 30
-CreateDigitalTwin dtmi:example:Room;2 room1 RoomName string Room1 Temperature double 80 HumidityLevel double 60
-CreateDigitalTwin dtmi:example:Floor;1 floor0
-CreateDigitalTwin dtmi:example:Floor;1 floor1
-```
+1. Run this code in the running project console to create several twins, based on the *Room* model you updated earlier and another model, *Floor*. Recall that *Room* has three properties, so you can provide arguments with the initial values for these. (Initializing property values is optional in general, but they're needed for this tutorial.)
-> [!TIP]
-> If you uploaded your own model earlier, try making your own `CreateDigitalTwin` command based on the commands above to add a twin of your own model type.
-
-The output from these commands should indicate the twins were created successfully.
-
+ ```cmd/sh
+ CreateDigitalTwin dtmi:example:Room;2 room0 RoomName string Room0 Temperature double 70 HumidityLevel double 30
+ CreateDigitalTwin dtmi:example:Room;2 room1 RoomName string Room1 Temperature double 80 HumidityLevel double 60
+ CreateDigitalTwin dtmi:example:Floor;1 floor0
+ CreateDigitalTwin dtmi:example:Floor;1 floor1
+ ```
-You can also verify that the twins were created by running the `Query` command. This command queries your Azure Digital Twins instance for all the digital twins it contains. Look for the *floor0*, *floor1*, *room0*, and *room1* twins in the results.
+ The output from these commands should indicate the twins were created successfully.
+
+ :::image type="content" source="media/tutorial-command-line/app/output-create-digital-twin.png" alt-text="Excerpt from the results of CreateDigitalTwin commands, showing floor0, floor1, room0, and room1":::
-#### Modify a digital twin
+1. You can verify that the twins were created by running the `Query` command. This command queries your Azure Digital Twins instance for all the digital twins it contains. Look for the *room0*, *room1*, *floor0*, and *floor1* twins in the results.
-You can also modify the properties of a twin you've created. Try running this command to change *room0*'s RoomName from *Room0* to *PresidentialSuite*:
+### Modify a digital twin
-```cmd/sh
-UpdateDigitalTwin room0 add /RoomName string PresidentialSuite
-```
+You can also modify the properties of a twin you've created.
-The output should indicate the twin was updated successfully.
+> [!NOTE]
+> The underlying REST API uses [JSON Patch](http://jsonpatch.com/) format to define updates to a twin. The command-line app also uses this format, to give a truer experience with what the underlying APIs expect.
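For reference, the rename applied in the first step below corresponds to a JSON Patch document along these lines (shown only to illustrate the format that the underlying API expects):

```json
[
  { "op": "add", "path": "/RoomName", "value": "PresidentialSuite" }
]
```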
-You can also verify by running this command to see *room0*'s information:
+1. Run this command to change *room0*'s RoomName from *Room0* to *PresidentialSuite*:
+
+ ```cmd/sh
+ UpdateDigitalTwin room0 add /RoomName string PresidentialSuite
+ ```
+
+ The output should indicate the twin was updated successfully.
-```cmd/sh
-GetDigitalTwin room0
-```
+1. You can verify the update succeeded by running this command to see *room0*'s information:
-The output should reflect the updated name.
+ ```cmd/sh
+ GetDigitalTwin room0
+ ```
+
+ The output should reflect the updated name.
-> [!NOTE]
-> The underlying REST API uses JSON Patch to define updates to a twin. The command-line app reflects this format, so that you can experiment with what the underlying APIs actually expect.
-### Create a graph by adding relationships
+## Create a graph by adding relationships
Next, you can create some **relationships** between these twins, to connect them into a [**twin graph**](concepts-twins-graph.md). Twin graphs are used to represent an entire environment.
-To add a relationship, use the `CreateRelationship` command. Specify the twin that the relationship is coming from, the type of relationship to add, and the twin that the relationship is connecting to. Lastly, provide a name (ID) for the relationship.
-
-Run the following code to add a "contains" relationship from each of the *Floor* twins you created earlier to a corresponding *Room* twin. Note that there must be a *contains* relationship defined on the *Floor* model for this to be possible.
-
-```cmd/sh
-CreateRelationship floor0 contains room0 relationship0
-CreateRelationship floor1 contains room1 relationship1
-```
+The types of relationships that you can create from one twin to another are defined within the [models](#model-a-physical-environment-with-dtdl) that you uploaded earlier. The [model definition for *Floor*](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json) specifies that floors can have a type of relationship called *contains*. This makes it possible to create a *contains*-type relationship from each *Floor* twin to the corresponding room that it contains.
-The output from these commands confirms that the relationships were created successfully:
+To add a relationship, use the `CreateRelationship` command. Specify the twin that the relationship is coming from, the type of relationship, and the twin that the relationship is connecting to. Lastly, give the relationship a unique ID.
+1. Run the following code to add a "contains" relationship from each of the *Floor* twins you created earlier to a corresponding *Room* twin. The relationships are named *relationship0* and *relationship1*.
-You can also verify the relationships with any of the following commands, which query the relationships in your Azure Digital Twins instance.
-* To see all relationships coming off of each floor (viewing the relationships from one side),
- ```cmd/sh
- GetRelationships floor0
- GetRelationships floor1
- ```
-* To see all relationships arriving at each room (viewing the relationship from the "other" side),
- ```cmd/sh
- GetIncomingRelationships room0
- ```
-* To query for these relationships individually,
```cmd/sh
- GetRelationship floor0 relationship0
- GetRelationship floor1 relationship1
+ CreateRelationship floor0 contains room0 relationship0
+ CreateRelationship floor1 contains room1 relationship1
```
+ >[!TIP]
+ >The *contains* relationship in the [*Floor* model](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json) was also defined with two string properties, `ownershipUser` and `ownershipDepartment`, so you can also provide arguments with the initial values for these when you create the relationships.
+ > Here's an alternate version of the command above to create *relationship0* that also specifies initial values for these properties:
+ > ```cmd/sh
+ > CreateRelationship floor0 contains room0 relationship0 ownershipUser string MyUser ownershipDepartment string myDepartment
+ > ```
+
+ The output from these commands confirms that the relationships were created successfully:
+
+ :::image type="content" source="media/tutorial-command-line/app/output-create-relationship.png" alt-text="Excerpt from the results of CreateRelationship commands, showing relationship0 and relationship1":::
+
+1. You can verify the relationships with any of the following commands, which query the relationships in your Azure Digital Twins instance.
+ * To see all relationships coming off of each floor (viewing the relationships from one side):
+ ```cmd/sh
+ GetRelationships floor0
+ GetRelationships floor1
+ ```
+ * To see all relationships arriving at each room (viewing the relationship from the "other" side):
+ ```cmd/sh
+ GetIncomingRelationships room0
+ GetIncomingRelationships room1
+ ```
+ * To look for these relationships individually, by ID:
+ ```cmd/sh
+ GetRelationship floor0 relationship0
+ GetRelationship floor1 relationship1
+ ```
   The twins and relationships you have set up in this tutorial form the following conceptual graph:
+
+## Query the twin graph to answer environment questions
-### Query the twin graph to answer environment questions
+A main feature of Azure Digital Twins is the ability to [query](concepts-query-language.md) your twin graph easily and efficiently to answer questions about your environment.
-A main feature of Azure Digital Twins is the ability to [query](concepts-query-language.md) your twin graph easily and efficiently to answer questions about your environment. Run the following commands in the running project console to get an idea of what this is like.
+Run the following commands in the running project console to answer some questions about the sample environment.
-* **What are all the entities in my environment represented in Azure Digital Twins?** (query all)
+1. **What are all the entities from my environment represented in Azure Digital Twins?** (query all)
   ```cmd/sh
   Query
   ```
This allows you to take stock of your environment at a glance, and make sure everything is represented as you'd like it to be within Azure Digital Twins. The result of this is an output containing each digital twin with its details. Here is an excerpt:
- :::image type="content" source="media/tutorial-command-line-app/output-query-all.png" alt-text="Partial results of twin query, showing room0 and floor1":::
+ :::image type="content" source="media/tutorial-command-line/app/output-query-all.png" alt-text="Partial results of twin query, showing room0 and floor1":::
   >[!NOTE]
   >In the sample project, the command `Query` without any additional arguments is the equivalent of `Query SELECT * FROM DIGITALTWINS`. To query all the twins in your instance using the [Query APIs](/rest/api/digital-twins/dataplane/query) or the [CLI commands](how-to-use-cli.md), use the longer (complete) query.
-* **What are all the rooms in my environment?** (query by model)
+1. **What are all the rooms in my environment?** (query by model)
   ```cmd/sh
   Query SELECT * FROM DIGITALTWINS T WHERE IS_OF_MODEL(T, 'dtmi:example:Room;2')
   ```
You can restrict your query to twins of a certain type, to get more specific information about what's represented. The result of this shows *room0* and *room1*, but does **not** show *floor0* or *floor1* (since they are floors, not rooms).
- :::image type="content" source="media/tutorial-command-line-app/output-query-model.png" alt-text="Results of model query, showing only room0 and room1":::
+ :::image type="content" source="media/tutorial-command-line/app/output-query-model.png" alt-text="Results of model query, showing only room0 and room1":::
-* **What are all the rooms on *floor0*?** (query by relationship)
+1. **What are all the rooms on *floor0*?** (query by relationship)
   ```cmd/sh
   Query SELECT room FROM DIGITALTWINS floor JOIN room RELATED floor.contains where floor.$dtId = 'floor0'
   ```
You can query based on relationships in your graph, to get information about how twins are connected or to restrict your query to a certain area. Only *room0* is on *floor0*, so it's the only room in the result.
- :::image type="content" source="media/tutorial-command-line-app/output-query-relationship.png" alt-text="Results of relationship query, showing room0":::
+ :::image type="content" source="media/tutorial-command-line/app/output-query-relationship.png" alt-text="Results of relationship query, showing room0":::
-* **What are all the twins in my environment with a temperature above 75?** (query by property)
+1. **What are all the twins in my environment with a temperature above 75?** (query by property)
   ```cmd/sh
   Query SELECT * FROM DigitalTwins T WHERE T.Temperature > 75
   ```
   You can query the graph based on properties to answer a variety of questions, including finding outliers in your environment that might need attention. Other comparison operators (*<*, *>*, *=*, or *!=*) are also supported. *room1* shows up in the results here, because it has a temperature of 80.
- :::image type="content" source="media/tutorial-command-line-app/output-query-property.png" alt-text="Results of property query, showing only room1":::
+ :::image type="content" source="media/tutorial-command-line/app/output-query-property.png" alt-text="Results of property query, showing only room1":::
-* **What are all the rooms on *floor0* with a temperature above 75?** (compound query)
+1. **What are all the rooms on *floor0* with a temperature above 75?** (compound query)
   ```cmd/sh
   Query SELECT room FROM DIGITALTWINS floor JOIN room RELATED floor.contains where floor.$dtId = 'floor0' AND IS_OF_MODEL(room, 'dtmi:example:Room;2') AND room.Temperature > 75
   ```
   You can also combine the earlier queries like you would in SQL, using combination operators such as `AND`, `OR`, `NOT`. This query uses `AND` to make the previous query about twin temperatures more specific. The result now only includes rooms with temperatures above 75 that are on *floor0*, which in this case is none of them. The result set is empty.
- :::image type="content" source="media/tutorial-command-line-app/output-query-compound.png" alt-text="Results of compound query, showing no results":::
+ :::image type="content" source="media/tutorial-command-line/app/output-query-compound.png" alt-text="Results of compound query, showing no results":::
## Clean up resources
After completing this tutorial, you can choose which resources you'd like to remove, depending on what you'd like to do next.
* **If you plan to continue to the next tutorial**, you can keep the resources you set up here to continue using this Azure Digital Twins instance and configured sample app for the next tutorial
-* **If you'd like to continue using the Azure Digital Twins instance, but clear out all of its models, twins, and relationships**, you can use the sample app's `DeleteAllTwins` and `DeleteAllModels` commands to clear the twins and models in your instance, respectively. This will give you a clean slate for the next tutorial.
+* **If you'd like to continue using the Azure Digital Twins instance, but clear out all of its models, twins, and relationships**, you can use the sample app's `DeleteAllTwins` and `DeleteAllModels` commands to clear the twins and models in your instance, respectively.
[!INCLUDE [digital-twins-cleanup-basic.md](../../includes/digital-twins-cleanup-basic.md)]
You may also want to delete the project folder from your local machine.
## Next steps
-In this tutorial, you got started with Azure Digital Twins by setting up an instance and a client application to interact with the instance. You used the client app to explore Azure Digital Twins, creating models, digital twins, and relationships. You also ran some queries on the solution, to get an idea of what kinds of questions Azure Digital Twins can answer about an environment.
+In this tutorial, you got started with Azure Digital Twins by building a graph in your instance using a sample client application. You created models, digital twins, and relationships to form a graph. You also ran some queries on the graph, to get an idea of what kinds of questions Azure Digital Twins can answer about an environment.
-Continue to the next tutorial to use the sample command-line app in combination with other Azure services to complete a data-driven, end-to-end scenario:
+Continue to the next tutorial to combine Azure Digital Twins with other Azure services to complete a data-driven, end-to-end scenario:
> [!div class="nextstepaction"] > [*Tutorial: Connect an end-to-end solution*](tutorial-end-to-end.md)
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-command-line-cli.md
+
+# Mandatory fields.
+ Title: 'Tutorial: Create a graph in Azure Digital Twins (CLI)'
+
+description: Tutorial to build an Azure Digital Twins scenario using the Azure CLI
++ Last updated : 2/26/2021+++
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Tutorial: Create an Azure Digital Twins graph using the Azure CLI
++
+In this tutorial, you'll build a graph in Azure Digital Twins using models, twins, and relationships. The tool for this tutorial is the [Azure Digital Twins command set for the **Azure CLI**](how-to-use-cli.md).
+
+You can use the CLI commands to perform essential Azure Digital Twins actions such as uploading models, creating and modifying twins, and creating relationships. You can also look at the [reference documentation for *az dt* command set](/cli/azure/ext/azure-iot/dt?preserve-view=true&view=azure-cli-latest) to see the full set of CLI commands.
+
+In this tutorial, you will...
+> [!div class="checklist"]
+> * Model an environment
+> * Create digital twins
+> * Add relationships to form a graph
+> * Query the graph to answer questions
+
+## Prerequisites
+
+To complete the steps in this tutorial, you'll need to first complete the following prerequisites.
+
+If you don't have an Azure subscription, **create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)** before you begin.
+
+### Download the sample models
+
+The tutorial uses two pre-written models that are part of the C# [end-to-end sample project](/samples/azure-samples/digital-twins-samples/digital-twins-samples/) for Azure Digital Twins. The model files are located here:
+* [*Room.json*](https://github.com/Azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Room.json)
+* [*Floor.json*](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json)
+
+To get the files on your machine, use the navigation links above and copy the file bodies into local files on your machine with the same names (*Room.json* and *Floor.json*).
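If you prefer working from a terminal, one possible way to fetch them (assuming the raw file paths on GitHub mirror the links above) is:

```bash
curl -o Room.json https://raw.githubusercontent.com/Azure-Samples/digital-twins-samples/master/AdtSampleApp/SampleClientApp/Models/Room.json
curl -o Floor.json https://raw.githubusercontent.com/Azure-Samples/digital-twins-samples/master/AdtSampleApp/SampleClientApp/Models/Floor.json
```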
++
+### Set up Cloud Shell session
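The `az dt` commands used throughout this tutorial come from the Azure IoT extension for the Azure CLI. If they aren't available in your Cloud Shell session, one way to add the extension (assuming the extension name `azure-iot`) is:

```azurecli-interactive
az extension add --name azure-iot
```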
+
+### Prepare an Azure Digital Twins instance
+
+To work with Azure Digital Twins in this article, you first need to **set up an Azure Digital Twins instance** and the required permissions for using it. If you already have an Azure Digital Twins instance set up from previous work, you can use that instance.
+
+Otherwise, follow the instructions in [*How-to: Set up an instance and authentication*](how-to-set-up-instance-cli.md). The instructions also contain steps to verify that you've completed each step successfully and are ready to move on to using your new instance.
+
+After you set up your Azure Digital Twins instance, make a note of the following values that you'll need to connect to the instance later:
+* the instance's **_host name_**
+* the **Azure subscription** that you used to create the instance.
+
+You can get both of these values for your instance in the output of the following Azure CLI command:
+
+```azurecli-interactive
+az dt show -n <ADT_instance_name>
+```
++
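If you want to capture just the host name, a query like the following can help (this sketch assumes the property is exposed as `hostName` in the `az dt show` output):

```azurecli-interactive
az dt show -n <ADT_instance_name> --query hostName -o tsv
```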
+## Model a physical environment with DTDL
+
+Now that the CLI and Azure Digital Twins instance are set up, you can begin building a graph of a scenario.
+
+The first step in creating an Azure Digital Twins solution is defining twin [**models**](concepts-models.md) for your environment.
+
+Models are similar to classes in object-oriented programming languages; they provide user-defined templates for [digital twins](concepts-twins-graph.md) to follow and instantiate later. They are written in a JSON-like language called **Digital Twins Definition Language (DTDL)**, and can define a twin's *properties*, *telemetry*, *relationships*, and *components*.
+
+> [!NOTE]
+> DTDL also allows for the definition of *commands* on digital twins. However, commands are not currently supported in the Azure Digital Twins service.
+
+Navigate on your machine to the *Room.json* file that you created in the [Prerequisites](#prerequisites) section. Open it in a code editor, and change it in the following ways:
++
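For orientation, here's a minimal sketch of roughly what the edited *Room* interface looks like in DTDL; the real sample file may contain additional elements, but the three properties below match the ones used later in this tutorial:

```json
{
  "@id": "dtmi:example:Room;2",
  "@type": "Interface",
  "@context": "dtmi:dtdl:context;2",
  "displayName": "Room",
  "contents": [
    { "@type": "Property", "name": "RoomName", "schema": "string" },
    { "@type": "Property", "name": "Temperature", "schema": "double" },
    { "@type": "Property", "name": "HumidityLevel", "schema": "double" }
  ]
}
```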
+### Upload models to Azure Digital Twins
+
+After designing models, you need to upload them to your Azure Digital Twins instance. This configures your Azure Digital Twins service instance with your own custom domain vocabulary. Once you have uploaded the models, you can create twin instances that use them.
+
+1. To add models using Cloud Shell, you'll need to upload your model files to Cloud Shell's storage so the files will be available when you run the Cloud Shell command that uses them. To do this, select the "Upload/Download files" icon and choose "Upload".
+
+ :::image type="content" source="media/how-to-set-up-instance/cloud-shell/cloud-shell-upload.png" alt-text="Cloud Shell window showing selection of the Upload icon":::
+
+ Navigate to the *Room.json* file on your machine and select "Open." Then, repeat this step for *Floor.json*.
+
+1. Next, use the [**az dt model create**](/cli/azure/ext/azure-iot/dt/model?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_model_create) command as shown below to upload your updated *Room* model to your Azure Digital Twins instance. The second command uploads another model, *Floor*, which you'll also use in the next section to create different types of twins.
+
+ ```azurecli-interactive
+ az dt model create -n <ADT_instance_name> --models Room.json
+ az dt model create -n <ADT_instance_name> --models Floor.json
+ ```
+
+ The output from each command will show information about the successfully uploaded model.
+
+ >[!TIP]
+ >You can also upload all models within a directory at the same time, by using the `--from-directory` option for the model create command. For more information, see [Optional parameters for *az dt model create*](/cli/azure/ext/azure-iot/dt/model?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_model_create-optional-parameters).
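   >
   > For example, if *Room.json* and *Floor.json* are both in your current Cloud Shell directory, a sketch of the equivalent single upload would be:
   > ```azurecli-interactive
   > az dt model create -n <ADT_instance_name> --from-directory .
   > ```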
+
+1. Verify the models were created with the [**az dt model list**](/cli/azure/ext/azure-iot/dt/model?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_model_list) command as shown below. This will print a list of all models that have been uploaded to the Azure Digital Twins instance with their full information.
+
+ ```azurecli-interactive
+ az dt model list -n <ADT_instance_name> --definition
+ ```
+
+ Look for the edited *Room* model in the results:
+
+ :::image type="content" source="media/tutorial-command-line/cli/output-get-models.png" alt-text="Results of the model list command, showing the updated Room model" lightbox="media/tutorial-command-line/cli/output-get-models.png":::
+
+### Errors
+
+The CLI also handles errors from the service.
+
+Re-run the `az dt model create` command to upload one of the same models a second time:
+
+```azurecli-interactive
+az dt model create -n <ADT_instance_name> --models Room.json
+```
+
+As models cannot be overwritten, this will now return an error code of `ModelIdAlreadyExists`.
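If you do need to replace a model while experimenting, one option is to upload it again under an incremented version number; another (sketched below) is to delete the existing model first and then re-run the upload:

```azurecli-interactive
az dt model delete -n <ADT_instance_name> --dtmi "dtmi:example:Room;2"
az dt model create -n <ADT_instance_name> --models Room.json
```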
+
+## Create digital twins
+
+Now that some models have been uploaded to your Azure Digital Twins instance, you can create [**digital twins**](concepts-twins-graph.md) based on the model definitions. Digital twins represent the entities within your business environment: things like sensors on a farm, rooms in a building, or lights in a car.
+
+To create a digital twin, you use the [**az dt twin create**](/cli/azure/ext/azure-iot/dt/twin?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_create) command. You must reference the model that the twin is based on, and can optionally define initial values for any properties in the model. You do not have to pass any relationship information at this stage.
+
+1. Run this code in the Cloud Shell to create several twins, based on the *Room* model you updated earlier and another model, *Floor*. Recall that *Room* has three properties, so you can provide arguments with the initial values for these. (Initializing property values is optional in general, but they're needed for this tutorial.)
+
+ ```azurecli-interactive
+ az dt twin create -n <ADT_instance_name> --dtmi "dtmi:example:Room;2" --twin-id room0 --properties '{"RoomName":"Room0", "Temperature":70, "HumidityLevel":30}'
+ az dt twin create -n <ADT_instance_name> --dtmi "dtmi:example:Room;2" --twin-id room1 --properties '{"RoomName":"Room1", "Temperature":"80", "HumidityLevel":"60"}'
+ az dt twin create -n <ADT_instance_name> --dtmi "dtmi:example:Floor;1" --twin-id floor0
+ az dt twin create -n <ADT_instance_name> --dtmi "dtmi:example:Floor;1" --twin-id floor1
+ ```
+
+ >[!NOTE]
+ > If you're using Cloud Shell in the PowerShell environment, you may need to escape the quotation mark characters in order for the `--properties` JSON value to be parsed correctly. With this edit, the commands to create the room twins look like this:
+ >
+ > ```azurecli-interactive
+ > az dt twin create -n <ADT_instance_name> --dtmi "dtmi:example:Room;2" --twin-id room0 --properties '{\"RoomName\":\"Room0\", \"Temperature\":70, \"HumidityLevel\":30}'
+ > az dt twin create -n <ADT_instance_name> --dtmi "dtmi:example:Room;2" --twin-id room1 --properties '{\"RoomName\":\"Room1\", \"Temperature\":80, \"HumidityLevel\":60}'
+ > ```
+ > This is reflected in the screenshot below.
+
+ The output from each command will show information about the successfully created twin (including properties for the room twins that were initialized with them).
+
+1. You can verify that the twins were created with the [**az dt twin query**](/cli/azure/ext/azure-iot/dt/twin?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_query) command as shown below. The query shown finds all the digital twins in your Azure Digital Twins instance.
+
+ ```azurecli-interactive
+ az dt twin query -n <ADT_instance_name> -q "SELECT * FROM DIGITALTWINS"
+ ```
+
+ Look for the *room0*, *room1*, *floor0*, and *floor1* twins in the results. Here is an excerpt showing part of the result of this query.
+
+ :::image type="content" source="media/tutorial-command-line/cli/output-query-all.png" alt-text="Partial results of twin query, showing room0 and room1" lightbox="media/tutorial-command-line/cli/output-query-all.png":::
+
+### Modify a digital twin
+
+You can also modify the properties of a twin you've created.
+
+1. Run this [**az dt twin update**](/cli/azure/ext/azure-iot/dt/twin?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_update) command to change *room0*'s RoomName from *Room0* to *PresidentialSuite*:
+
+ ```azurecli-interactive
+ az dt twin update -n <ADT_instance_name> --twin-id room0 --json-patch '{"op":"add", "path":"/RoomName", "value": "PresidentialSuite"}'
+ ```
+
+ >[!NOTE]
+ > If you're using Cloud Shell in the PowerShell environment, you may need to escape the quotation mark characters in order for the `--json-patch` JSON value to be parsed correctly. With this edit, the command to update the twin looks like this:
+ >
+ > ```azurecli-interactive
+ > az dt twin update -n <ADT_instance_name> --twin-id room0 --json-patch '{\"op\":\"add\", \"path\":\"/RoomName\", \"value\": \"PresidentialSuite\"}'
+ > ```
+ > This is reflected in the screenshot below.
+
+ The output from this command will show the twin's current information, and you should see the new value for the `RoomName` in the result.
+
+ :::image type="content" source="media/tutorial-command-line/cli/output-update-twin.png" alt-text="Results of the update command, showing a RoomName of PresidentialSuite" lightbox="media/tutorial-command-line/cli/output-update-twin.png":::
+
+1. You can verify the update succeeded by running the [**az dt twin show**](/cli/azure/ext/azure-iot/dt/twin?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_show) command to see *room0*'s information:
+
+ ```azurecli-interactive
+ az dt twin show -n <ADT_instance_name> --twin-id room0
+ ```
+
+ The output should reflect the updated name.
+
+## Create a graph by adding relationships
+
+Next, you can create some **relationships** between these twins, to connect them into a [**twin graph**](concepts-twins-graph.md). Twin graphs are used to represent an entire environment.
+
+The types of relationships that you can create from one twin to another are defined within the [models](#model-a-physical-environment-with-dtdl) that you uploaded earlier. The [model definition for *Floor*](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json) specifies that floors can have a type of relationship called *contains*. This makes it possible to create a *contains*-type relationship from each *Floor* twin to the corresponding room that it contains.
+
+To add a relationship, use the [**az dt twin relationship create**](/cli/azure/ext/azure-iot/dt/twin/relationship?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_relationship_create) command. Specify the twin that the relationship is coming from, the type of relationship, and the twin that the relationship is connecting to. Lastly, give the relationship a unique ID. If a relationship was defined to have properties, you can initialize the relationship properties in this command as well.
+
+1. Run the following code to add a *contains*-type relationship from each of the *Floor* twins you created earlier to the corresponding *Room* twin. The relationships are named *relationship0* and *relationship1*.
+
+ ```azurecli-interactive
+ az dt twin relationship create -n <ADT_instance_name> --relationship-id relationship0 --relationship contains --twin-id floor0 --target room0
+ az dt twin relationship create -n <ADT_instance_name> --relationship-id relationship1 --relationship contains --twin-id floor1 --target room1
+ ```
+
+ >[!TIP]
+ >The *contains* relationship in the [*Floor* model](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json) was also defined with two properties, `ownershipUser` and `ownershipDepartment`, so you can also provide arguments with the initial values for these when you create the relationships.
+ > To create a relationship with these properties initialized, add the `--properties` option to either of the above commands, like this:
+ > ```azurecli-interactive
+ > ... --properties '{"ownershipUser":"MyUser", "ownershipDepartment":"MyDepartment"}'
+ > ```
+ >
+ > If you're using Cloud Shell in the PowerShell environment, you may need to escape the quotation mark characters in order for the `--properties` JSON value to be parsed correctly.
+
+ The output from each command will show information about the successfully created relationship.
+
+1. You can verify the relationships with any of the following commands, which query the relationships in your Azure Digital Twins instance.
+ * To see all relationships coming off of each floor (viewing the relationships from one side):
+ ```azurecli-interactive
+ az dt twin relationship list -n <ADT_instance_name> --twin-id floor0
+ az dt twin relationship list -n <ADT_instance_name> --twin-id floor1
+ ```
+ * To see all relationships arriving at each room (viewing the relationship from the "other" side):
+ ```azurecli-interactive
+ az dt twin relationship list -n <ADT_instance_name> --twin-id room0 --incoming
+ az dt twin relationship list -n <ADT_instance_name> --twin-id room1 --incoming
+ ```
+ * To look for these relationships individually, by ID:
+ ```azurecli-interactive
+ az dt twin relationship show -n <ADT_instance_name> --twin-id floor0 --relationship-id relationship0
+ az dt twin relationship show -n <ADT_instance_name> --twin-id floor1 --relationship-id relationship1
+ ```
+
+The twins and relationships you have set up in this tutorial form the following conceptual graph:
++
+## Query the twin graph to answer environment questions
+
+A main feature of Azure Digital Twins is the ability to [query](concepts-query-language.md) your twin graph easily and efficiently to answer questions about your environment. In the Azure CLI, this is done with the [**az dt twin query**](/cli/azure/ext/azure-iot/dt/twin?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_query) command.
+
+Run the following queries in the Cloud Shell to answer some questions about the sample environment.
+
+1. **What are all the entities from my environment represented in Azure Digital Twins?** (query all)
+
+ ```azurecli-interactive
+ az dt twin query -n <ADT_instance_name> -q "SELECT * FROM DIGITALTWINS"
+ ```
+
+ This allows you to take stock of your environment at a glance, and make sure everything is represented as you'd like it to be within Azure Digital Twins. The result of this is an output containing each digital twin with its details. Here is an excerpt:
+
+ :::image type="content" source="media/tutorial-command-line/cli/output-query-all.png" alt-text="Partial results of twin query, showing room0 and room1" lightbox="media/tutorial-command-line/cli/output-query-all.png":::
+
+ >[!TIP]
+ >You may recognize that this is the same command you used in the [*Create digital twins*](#create-digital-twins) section earlier to find all the Azure Digital Twins in the instance.
+
+1. **What are all the rooms in my environment?** (query by model)
+
+ ```azurecli-interactive
+ az dt twin query -n <ADT_instance_name> -q "SELECT * FROM DIGITALTWINS T WHERE IS_OF_MODEL(T, 'dtmi:example:Room;2')"
+ ```
+
+ You can restrict your query to twins of a certain type, to get more specific information about what's represented. The result of this shows *room0* and *room1*, but does **not** show *floor0* or *floor1* (since they are floors, not rooms).
+
+ :::image type="content" source="media/tutorial-command-line/cli/output-query-model.png" alt-text="Results of model query, showing only room0 and room1" lightbox="media/tutorial-command-line/cli/output-query-model.png":::
+
+1. **What are all the rooms on *floor0*?** (query by relationship)
+
+ ```azurecli-interactive
+ az dt twin query -n <ADT_instance_name> -q "SELECT room FROM DIGITALTWINS floor JOIN room RELATED floor.contains where floor.`$dtId = 'floor0'"
+ ```
+
+ You can query based on relationships in your graph, to get information about how twins are connected or to restrict your query to a certain area. Only *room0* is on *floor0*, so it's the only room in the result.
+
+ :::image type="content" source="media/tutorial-command-line/cli/output-query-relationship.png" alt-text="Results of relationship query, showing room0" lightbox="media/tutorial-command-line/cli/output-query-relationship.png":::
+
+ > [!NOTE]
+ > Notice that a twin's ID (like *floor0* in the query above) is queried using the metadata field `$dtId`.
+ >
+ >When using Cloud Shell to run a query with metadata fields like this one that begin with `$`, you should escape the `$` with a backtick to let Cloud Shell know it's not a variable and should be consumed as a literal in the query text. This is reflected in the screenshot above.
+
+1. **What are all the twins in my environment with a temperature above 75?** (query by property)
+
+ ```azurecli-interactive
+ az dt twin query -n <ADT_instance_name> -q "SELECT * FROM DigitalTwins T WHERE T.Temperature > 75"
+ ```
+
+ You can query the graph based on properties to answer a variety of questions, including finding outliers in your environment that might need attention. Other comparison operators (*<*,*>*, *=*, or *!=*) are also supported. *room1* shows up in the results here, because it has a temperature of 80.
+
+ :::image type="content" source="media/tutorial-command-line/cli/output-query-property.png" alt-text="Results of property query, showing only room1" lightbox="media/tutorial-command-line/cli/output-query-property.png":::
+
+1. **What are all the rooms on *floor0* with a temperature above 75?** (compound query)
+
+ ```azurecli-interactive
+ az dt twin query -n <ADT_instance_name> -q "SELECT room FROM DIGITALTWINS floor JOIN room RELATED floor.contains where floor.`$dtId = 'floor0' AND IS_OF_MODEL(room, 'dtmi:example:Room;2') AND room.Temperature > 75"
+ ```
+
+   You can also combine the earlier queries like you would in SQL, using combination operators such as `AND`, `OR`, `NOT`. This query uses `AND` to make the previous query about twin temperatures more specific. The result now only includes rooms with temperatures above 75 that are on *floor0*, which in this case is none of them. The result set is empty.
+
+ :::image type="content" source="media/tutorial-command-line/cli/output-query-compound.png" alt-text="Results of compound query, showing no results" lightbox="media/tutorial-command-line/cli/output-query-compound.png":::
+
+## Clean up resources
+
+After completing this tutorial, you can choose which resources you'd like to remove, depending on what you'd like to do next.
+
+* **If you plan to continue to the next tutorial**, you can keep the resources you set up here and reuse the Azure Digital Twins instance without clearing anything in between.
+
+* **If you'd like to continue using the Azure Digital Twins instance, but clear out all of its models, twins, and relationships**, you can use the [**az dt twin relationship delete**](/cli/azure/ext/azure-iot/dt/twin/relationship?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_relationship_delete), [**az dt twin delete**](/cli/azure/ext/azure-iot/dt/twin?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_twin_delete), and [**az dt model delete**](/cli/azure/ext/azure-iot/dt/model?view=azure-cli-latest&preserve-view=true#ext_azure_iot_az_dt_model_delete) commands to clear the relationships, twins, and models in your instance, respectively.
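For example, a sketch of removing one of the relationships created in this tutorial looks like this (twins and models follow the same pattern with their respective delete commands):

```azurecli-interactive
az dt twin relationship delete -n <ADT_instance_name> --twin-id floor0 --relationship-id relationship0
```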
++
+You may also want to delete the model files you created on your local machine.
+
+## Next steps
+
+In this tutorial, you got started with Azure Digital Twins by building a graph in your instance using the Azure CLI. You created models, digital twins, and relationships to form a graph. You also ran some queries on the graph, to get an idea of what kinds of questions Azure Digital Twins can answer about an environment.
+
+Continue to the next tutorial to combine Azure Digital Twins with other Azure services to complete a data-driven, end-to-end scenario:
+> [!div class="nextstepaction"]
+> [*Tutorial: Connect an end-to-end solution*](tutorial-end-to-end.md)
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/resource-scenario-status.md
The following table shows Azure Database Migration Service support for online migrations.
| Target | Source | Support | Status |
| - | - |:-:|:-:|
-| **Azure SQL DB** | SQL Server | X | GA |
-| | RDS SQL | X | GA |
+| **Azure SQL DB** | SQL Server | X | |
+| | RDS SQL | X | |
| | Oracle | X | |
| **Azure SQL DB MI** | SQL Server | ✔ | GA |
-| | RDS SQL | X | GA |
+| | RDS SQL | X | |
| | Oracle | X | |
| **Azure SQL VM** | SQL Server | X | |
| | Oracle | X | |
| | RDS PostgreSQL | ✔ | GA |

> [!IMPORTANT]
-> "Oracle to Azure Database for PostgreSQL" migration scenario (currently in preview) will no longer be available after May 1, 2021. We will continue to provide support via alternative tooling (such as Ora2pg) and provide the best migration experience for Oracle to PostgreSQL migrations. For migration best practices, see [Oracle to Azure Database for PostgreSQL migration guide] (https://aka.ms/OracletoPGguide).
+> "Oracle to Azure Database for PostgreSQL" migration scenario (currently in preview) will no longer be available after May 1, 2021. We will continue to provide support via alternative tooling (such as Ora2pg) and provide the best migration experience for Oracle to PostgreSQL migrations. For migration best practices, see [Oracle to Azure Database for PostgreSQL migration guide](https://aka.ms/OracletoPGguide).
## Next steps
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/quickstart-linux.md
description: In this quickstart, learn how to create an IoT Edge device on Linux
Previously updated : 12/02/2020 Last updated : 03/12/2021
# Quickstart: Deploy your first IoT Edge module to a virtual Linux device
-Test out Azure IoT Edge in this quickstart by deploying containerized code to a virtual Linux IoT Edge device. IoT Edge allows you to remotely manage code on your devices so that you can send more of your workloads to the edge. For this quickstart, we recommend using an Azure virtual machine for your IoT Edge device, which allows you to quickly create a test machine with the IoT Edge service installed and then delete it when you're finished.
+
+Test out Azure IoT Edge in this quickstart by deploying containerized code to a virtual Linux IoT Edge device. IoT Edge allows you to remotely manage code on your devices so that you can send more of your workloads to the edge. For this quickstart, we recommend using an Azure virtual machine for your IoT Edge device, which allows you to quickly create a test machine and then delete it when you're finished.
In this quickstart you learn how to:

* Create an IoT Hub.
* Register an IoT Edge device to your IoT hub.
-* Install and start the IoT Edge runtime on your virtual device.
+* Install and start the IoT Edge runtime on a virtual device.
* Remotely deploy a module to an IoT Edge device.

![Diagram - Quickstart architecture for device and cloud](./media/quickstart-linux/install-edge-full.png)
Prepare your environment for the Azure CLI.
Cloud resources:

-- A resource group to manage all the resources you use in this quickstart. We use the example resource group name **IoTEdgeResources** throughout this quickstart and the following tutorials.
+* A resource group to manage all the resources you use in this quickstart. We use the example resource group name **IoTEdgeResources** throughout this quickstart and the following tutorials.
```azurecli-interactive
az group create --name IoTEdgeResources --location westus2
```
During the runtime configuration, you provide a device connection string. This i
This section uses an Azure Resource Manager template to create a new virtual machine and install the IoT Edge runtime on it. If you want to use your own Linux device instead, you can follow the installation steps in [Install the Azure IoT Edge runtime](how-to-install-iot-edge.md), then return to this quickstart.
+<!-- 1.1 -->
+ Use the following CLI command to create your IoT Edge device based on the prebuilt [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy) template. * For bash or Cloud Shell users, copy the following command into a text editor, replace the placeholder text with your information, then copy into your bash or Cloud Shell window:
Use the following CLI command to create your IoT Edge device based on the prebui
--template-uri "https://aka.ms/iotedge-vm-deploy" \ --parameters dnsLabelPrefix='<REPLACE_WITH_VM_NAME>' \ --parameters adminUsername='azureUser' \
- --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id myEdgeDevice --hub-name
- <REPLACE_WITH_HUB_NAME> -o tsv) \
+ --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id myEdgeDevice --hub-name <REPLACE_WITH_HUB_NAME> -o tsv) \
--parameters authenticationType='password' \ --parameters adminPasswordOrKey="<REPLACE_WITH_PASSWORD>" ```
Use the following CLI command to create your IoT Edge device based on the prebui
--parameters adminPasswordOrKey="<REPLACE_WITH_PASSWORD>" ```
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+
+Use the following CLI command to create your IoT Edge device based on the prebuilt [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.2.0-rc4) template.
+
+* For bash or Cloud Shell users, copy the following command into a text editor, replace the placeholder text with your information, then copy into your bash or Cloud Shell window:
+
+ ```azurecli-interactive
+ az deployment group create \
+ --resource-group IoTEdgeResources \
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2.0-rc4/edgeDeploy.json" \
+ --parameters dnsLabelPrefix='<REPLACE_WITH_VM_NAME>' \
+ --parameters adminUsername='azureUser' \
+ --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id myEdgeDevice --hub-name <REPLACE_WITH_HUB_NAME> -o tsv) \
+ --parameters authenticationType='password' \
+ --parameters adminPasswordOrKey="<REPLACE_WITH_PASSWORD>"
+ ```
+
+* For PowerShell users, copy the following command into your PowerShell window, then replace the placeholder text with your own information:
+
+ ```azurecli
+ az deployment group create `
+ --resource-group IoTEdgeResources `
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.2.0-rc4/edgeDeploy.json" `
+ --parameters dnsLabelPrefix='<REPLACE_WITH_VM_NAME>' `
+ --parameters adminUsername='azureUser' `
+ --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id myEdgeDevice --hub-name <REPLACE_WITH_HUB_NAME> -o tsv) `
+ --parameters authenticationType='password' `
+ --parameters adminPasswordOrKey="<REPLACE_WITH_PASSWORD>"
+ ```
+<!-- end 1.2 -->
+ This template takes the following parameters: | Parameter | Description |
The rest of the commands in this quickstart take place on your IoT Edge device i
Once connected to your virtual machine, verify that the runtime was successfully installed and configured on your IoT Edge device.
+<!--1.1 -->
+ 1. Check to see that the IoT Edge security daemon is running as a system service. ```bash
Once connected to your virtual machine, verify that the runtime was successfully
``` ![View one module on your device](./media/quickstart-linux/iotedge-list-1.png)
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+
+1. Check to see that IoT Edge is running. The following command should return a status of **Ok** if IoT Edge is running, or provide any service errors.
+
+ ```bash
+ sudo iotedge system status
+ ```
+
+ >[!TIP]
+ >You need elevated privileges to run `iotedge` commands. Once you sign out of your machine and sign back in the first time after installing the IoT Edge runtime, your permissions are automatically updated. Until then, use `sudo` in front of the commands.
+
+2. If you need to troubleshoot the service, retrieve the service logs.
+
+ ```bash
+ sudo iotedge system logs
+ ```
+
+3. View all the modules running on your IoT Edge device. Since the service just started for the first time, you should only see the **edgeAgent** module running. The edgeAgent module runs by default and helps to install and start any additional modules that you deploy to your device.
+
+ ```bash
+ sudo iotedge list
+ ```
+
+<!-- end 1.2 -->
Your IoT Edge device is now configured. It's ready to run cloud-deployed modules.
Manage your Azure IoT Edge device from the cloud to deploy a module that will se
[!INCLUDE [iot-edge-deploy-module](../../includes/iot-edge-deploy-module.md)]
+<!-- 1.2 -->
+
+Since IoT Edge version 1.2 is in public preview, there is an extra step to take to update the runtime modules to their public preview versions as well.
+
+1. From the device details page, select **Set Modules** again.
+
+1. Select **Runtime Settings**.
+
+1. Update the **Image** field for both the IoT Edge hub and IoT Edge agent modules to use the version tag 1.2.0-rc4. For example:
+
+ * `mcr.microsoft.com/azureiotedge-hub:1.2.0-rc4`
+ * `mcr.microsoft.com/azureiotedge-agent:1.2.0-rc4`
+
+1. The simulated temperature sensor module should still be listed in the modules section. You don't need to make any changes to that module for the public preview.
+
+1. Select **Review + create**.
+
+1. Select **Create**.
+
+1. On the device details page, you can select either **$edgeAgent** or **$edgeHub** to see the module details reflect the public preview version of the image.
+
+<!-- end 1.2 -->
## View generated data

In this quickstart, you created a new IoT Edge device and installed the IoT Edge runtime on it. Then, you used the Azure portal to deploy an IoT Edge module to run on the device without having to make changes to the device itself.
Open the command prompt on your IoT Edge device again, or use the SSH connection
   sudo iotedge list
   ```
- ![View three modules on your device](./media/quickstart-linux/iotedge-list-2.png)
+<!-- 1.1 -->
+ ![View three modules on your device](./media/quickstart-linux/iotedge-list-2-version-201806.png)
+
+<!-- 1.2 -->
+ ![View three modules on your device](./media/quickstart-linux/iotedge-list-2-version-202011.png)
View the messages being sent from the temperature sensor module:
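One typical way to stream that output is shown below; this sketch assumes the module kept the default name **SimulatedTemperatureSensor** from the deployment:

```bash
sudo iotedge logs SimulatedTemperatureSensor -f
```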
If you created your virtual machine and IoT hub in a new resource group, you can
Remove the **IoTEdgeResources** group. It might take a few minutes to delete a resource group.

```azurecli-interactive
-az group delete --name IoTEdgeResources
+az group delete --name IoTEdgeResources --yes
```

You can confirm the resource group is removed by viewing the list of resource groups.
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/quickstart.md
monikerRange: "=iotedge-2018-06"
# Quickstart: Deploy your first IoT Edge module to a Windows device (preview)
+
Try out Azure IoT Edge in this quickstart by deploying containerized code to a Linux on Windows IoT Edge device. IoT Edge allows you to remotely manage code on your devices so that you can send more of your workloads to the edge. For this quickstart, we recommend using your own device to see how easy it is to use Azure IoT Edge for Linux on Windows.

In this quickstart, you'll learn how to:
iot-edge Tutorial C Module Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-c-module-windows.md
-# Tutorial: Develop C IoT Edge modules for Windows devices
+# Tutorial: Develop C IoT Edge modules using Windows containers
+ This article shows you how to use Visual Studio to develop C code and deploy it to a Windows device that's running Azure IoT Edge.
-You can use Azure IoT Edge modules to deploy code that implements your business logic directly in your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data.
+>[!NOTE]
+>IoT Edge 1.1 LTS is the last release channel that will support Windows containers. Starting with version 1.2, Windows containers are not supported. Consider using or moving to [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) to run IoT Edge on Windows devices.
+
+You can use Azure IoT Edge modules to deploy code that implements your business logic directly in your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data.
In this tutorial, you learn how to:
The IoT Edge module that you create in this tutorial filters the temperature dat
## Prerequisites
-This tutorial demonstrates how to develop a module in C by using Visual Studio 2019 and then deploy it to a Windows device. If you're developing modules for Linux devices, go to [Develop C IoT Edge modules for Linux devices](tutorial-csharp-module.md) instead.
+This tutorial demonstrates how to develop a module in C by using Visual Studio 2019 and then deploy it to a Windows device. If you're developing modules using Linux containers, go to [Develop C IoT Edge modules using Linux containers](tutorial-csharp-module.md) instead.
-To understand your options for developing and deploying C modules to Windows devices, refer to the following table:
+To understand your options for developing and deploying C modules using Windows containers, refer to the following table:
| C | Visual&nbsp;Studio&nbsp;Code | Visual Studio 2017&nbsp;and&nbsp;2019 |
| -- | :-: | :-: |
| Windows AMD64 | | ![Develop C modules for WinAMD64 in Visual Studio](./media/tutorial-c-module/green-check.png) |
-Before you begin this tutorial, set up your development environment by following the instructions in the [Develop IoT Edge modules for Windows devices](tutorial-develop-for-windows.md) tutorial. After you complete it, your environment will contain the following prerequisites:
+Before you begin this tutorial, set up your development environment by following the instructions in the [Develop IoT Edge modules using Windows containers](tutorial-develop-for-windows.md) tutorial. After you complete it, your environment will contain the following prerequisites:
* A free or standard-tier [IoT hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A [Windows device that's running Azure IoT Edge](quickstart.md).
+* A [Windows device that's running Azure IoT Edge](how-to-install-iot-edge-windows-on-windows.md).
* A container registry, such as [Azure Container Registry](../container-registry/index.yml). * [Visual Studio 2019](/visualstudio/install/install-visual-studio), configured with the [Azure IoT Edge Tools](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) extension. * [Docker Desktop](https://docs.docker.com/docker-for-windows/install/), configured to run Windows containers.
iot-edge Tutorial C Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-c-module.md
-# Tutorial: Develop a C IoT Edge module for Linux devices
+# Tutorial: Develop a C IoT Edge module using Linux containers
-Use Visual Studio Code to develop C code and deploy it to a Linux device running Azure IoT Edge.
+
+Use Visual Studio Code to develop C code and deploy it to a device running Azure IoT Edge.
You can use IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data. In this tutorial, you learn how to:
The IoT Edge module that you create in this tutorial filters the temperature dat
## Prerequisites
-This tutorial demonstrates how to develop a module in **C** using **Visual Studio Code**, and how to deploy it to a **Linux device**. If you're developing modules for Windows devices, go to [Develop a C IoT Edge module for Windows devices](tutorial-c-module-windows.md) instead.
+This tutorial demonstrates how to develop a module in **C** using **Visual Studio Code**, and how to deploy it to an IoT Edge device. If you're developing modules using Windows containers, go to [Develop a C IoT Edge module using Windows containers](tutorial-c-module-windows.md) instead.
-Use the following table to understand your options for developing and deploying C modules to Linux:
+Use the following table to understand your options for developing and deploying C modules using Linux containers:
| C | Visual Studio Code | Visual Studio |
| - | :-: | :-: |
| **Linux AMD64** | ![Use VS Code for C modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | ![Use VS for C modules on Linux AMD64](./media/tutorial-c-module/green-check.png) |
| **Linux ARM32** | ![Use VS Code for C modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | ![Use VS for C modules on Linux ARM32](./media/tutorial-c-module/green-check.png) |
-Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules for Linux devices](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
+Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A [Linux device running Azure IoT Edge](quickstart-linux.md)
+* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml). * [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools). * [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
iot-edge Tutorial Csharp Module Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-csharp-module-windows.md
-# Tutorial: Develop C# IoT Edge modules for Windows devices
+# Tutorial: Develop C# IoT Edge modules using Windows containers
+ This article shows you how to use Visual Studio to develop C# code and deploy it to a Windows device that's running Azure IoT Edge.
-You can use Azure IoT Edge modules to deploy code that implements your business logic directly in your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data.
+>[!NOTE]
+>IoT Edge 1.1 LTS is the last release channel that will support Windows containers. Starting with version 1.2, Windows containers are not supported. Consider using or moving to [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) to run IoT Edge on Windows devices.
+
+You can use Azure IoT Edge modules to deploy code that implements your business logic directly in your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data.
In this tutorial, you learn how to:
The IoT Edge module that you create in this tutorial filters the temperature data.
## Prerequisites
-This tutorial demonstrates how to develop a module in C# by using Visual Studio 2019 and then deploy it to a Windows device. If you're developing modules for Linux devices, go to [Develop C# IoT Edge modules for Linux devices](tutorial-csharp-module.md) instead.
+This tutorial demonstrates how to develop a module in C# by using Visual Studio 2019 and then deploy it to a Windows device. If you're developing modules using Linux containers, go to [Develop C# IoT Edge modules using Linux containers](tutorial-csharp-module.md) instead.
-To understand your options for developing and deploying C# modules to Windows devices, refer to the following table:
+To understand your options for developing and deploying C# modules using Windows containers, refer to the following table:
| C# | Visual&nbsp;Studio&nbsp;Code | Visual Studio 2017&nbsp;and&nbsp;2019 |
| -- | :-: | :-: |
| Windows AMD64 develop | ![Develop C# modules for WinAMD64 in Visual Studio Code](./media/tutorial-c-module/green-check.png) | ![Develop C# modules for WinAMD64 in Visual Studio](./media/tutorial-c-module/green-check.png) |
| Windows AMD64 debug | | ![Debug C# modules for WinAMD64 in Visual Studio](./media/tutorial-c-module/green-check.png) |
-Before you begin this tutorial, set up your development environment by following the instructions in the [Develop IoT Edge modules for Windows devices](tutorial-develop-for-windows.md) tutorial. After you complete it, your environment will contain the following prerequisites:
+Before you begin this tutorial, set up your development environment by following the instructions in the [Develop IoT Edge modules using Windows containers](tutorial-develop-for-windows.md) tutorial. After you complete it, your environment will contain the following prerequisites:
* A free or standard-tier [IoT hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A [Windows device that's running Azure IoT Edge](quickstart.md).
+* A [Windows device that's running Azure IoT Edge](how-to-install-iot-edge-windows-on-windows.md).
* A container registry, such as [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio 2019](/visualstudio/install/install-visual-studio), configured with the [Azure IoT Edge Tools](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) extension.
* [Docker Desktop](https://docs.docker.com/docker-for-windows/install/), configured to run Windows containers.
iot-edge Tutorial Csharp Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-csharp-module.md
-# Tutorial: Develop a C# IoT Edge module for Linux devices
+# Tutorial: Develop a C# IoT Edge module using Linux containers
-Use Visual Studio Code to develop C# code and deploy it to a Linux device running Azure IoT Edge.
-You can use Azure IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data. You'll use the simulated IoT Edge device that you created in the Deploy Azure IoT Edge on a simulated device in [Windows](quickstart.md) or [Linux](quickstart-linux.md) quickstarts. In this tutorial, you learn how to:
+Use Visual Studio Code to develop C# code and deploy it to a device running Azure IoT Edge.
+
+You can use Azure IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data. You'll use the simulated IoT Edge device that you created in the quickstarts. In this tutorial, you learn how to:
> [!div class="checklist"] >
The IoT Edge module that you create in this tutorial filters the temperature data.
## Prerequisites
-This tutorial demonstrates how to develop a module in **C#** using **Visual Studio Code** and deploy it to a **Linux device**. If you're developing modules for Windows devices, go to [Develop a C# IoT Edge module for Windows devices](tutorial-csharp-module-windows.md) instead.
+This tutorial demonstrates how to develop a module in **C#** using **Visual Studio Code** and deploy it to an IoT Edge device. If you're developing modules using Windows containers, go to [Develop a C# IoT Edge module using Windows containers](tutorial-csharp-module-windows.md) instead.
-Use the following table to understand your options for developing and deploying C# modules to Linux:
+Use the following table to understand your options for developing and deploying C# modules using Linux containers:
| C# | Visual Studio Code | Visual Studio |
| -- | ------------------ | ------------- |
Use the following table to understand your options for developing and deploying
>[!NOTE]
>Support for Linux ARM64 devices is available in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For more information, see [Develop and debug ARM64 IoT Edge modules in Visual Studio Code (preview)](https://devblogs.microsoft.com/iotdev/develop-and-debug-arm64-iot-edge-modules-in-visual-studio-code-preview).
-Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment, [Develop an IoT Edge module for a Linux device](tutorial-develop-for-linux.md). After completing that tutorial, you already should have the following prerequisites:
+Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment, [Develop an IoT Edge module using Linux containers](tutorial-develop-for-linux.md). After completing that tutorial, you already should have the following prerequisites:
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A [Linux device running Azure IoT Edge](quickstart-linux.md).
+* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
* [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
iot-edge Tutorial Deploy Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-deploy-custom-vision.md
# Tutorial: Perform image classification at the edge with Custom Vision Service
+
Azure IoT Edge can make your IoT solution more efficient by moving workloads out of the cloud and to the edge. This capability lends itself well to services that process a lot of data, like computer vision models. The [Custom Vision Service](../cognitive-services/custom-vision-service/overview.md) lets you build custom image classifiers and deploy them to devices as containers. Together, these two services enable you to find insights from images or video streams without having to transfer all of the data off site first. Custom Vision provides a classifier that compares an image against a trained model to generate insights. For example, Custom Vision on an IoT Edge device could determine whether a highway is experiencing higher or lower traffic than normal, or whether a parking garage has available parking spots in a row. These insights can be shared with another service to take action.
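To illustrate how a client or another module might use the deployed classifier locally, here is a rough Python sketch. The module name, port, and URL path are placeholder assumptions; the exported Custom Vision container defines the actual scoring endpoint and response format.

```python
import requests

# Hypothetical address of the classifier module on the edge device's local Docker network.
SCORING_URL = "http://classifier:80/image"

def classify(image_path: str) -> dict:
    """POST an image to the locally running classifier and return its JSON predictions."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            SCORING_URL,
            data=image_file.read(),
            headers={"Content-Type": "application/octet-stream"},
        )
    response.raise_for_status()
    return response.json()

# Example: score a local test image and print whatever predictions come back.
print(classify("test-image.jpg"))
```

Because the call never leaves the device, the image itself doesn't need to be uploaded; only the resulting insight would be sent to another service.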
In this tutorial, you learn how to:
>[!TIP]
>This tutorial is a simplified version of the [Custom Vision and Azure IoT Edge on a Raspberry Pi 3](https://github.com/Azure-Samples/custom-vision-service-iot-edge-raspberry-pi) sample project. This tutorial was designed to run on a cloud VM and uses static images to train and test the image classifier, which is useful for someone just starting to evaluate Custom Vision on IoT Edge. The sample project uses physical hardware and sets up a live camera feed to train and test the image classifier, which is useful for someone who wants to try a more detailed, real-life scenario.
-Before beginning this tutorial, you should have gone through the previous tutorial to set up your environment for Linux container development: [Develop IoT Edge modules for Linux devices](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
+Before beginning this tutorial, you should have gone through the previous tutorial to set up your environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A [Linux device running Azure IoT Edge](quickstart-linux.md)
+* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
* [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
iot-edge Tutorial Deploy Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-deploy-function.md
# Tutorial: Deploy Azure Functions as IoT Edge modules
-You can use Azure Functions to deploy code that implements your business logic directly to your Azure IoT Edge devices. This tutorial walks you through creating and deploying an Azure Function that filters sensor data on the simulated IoT Edge device. You use the simulated IoT Edge device that you created in the Deploy Azure IoT Edge on a simulated device on [Windows](quickstart.md) or [Linux](quickstart-linux.md) quickstarts. In this tutorial, you learn how to:
+
+You can use Azure Functions to deploy code that implements your business logic directly to your Azure IoT Edge devices. This tutorial walks you through creating and deploying an Azure Function that filters sensor data on the simulated IoT Edge device. You use the simulated IoT Edge device that you created in the quickstarts. In this tutorial, you learn how to:
> [!div class="checklist"] >
The Azure Function that you create in this tutorial filters the temperature data
## Prerequisites
-Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules for Linux devices](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
+Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A [Linux device running Azure IoT Edge](quickstart-linux.md)
+* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
* [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
iot-edge Tutorial Deploy Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-deploy-machine-learning.md
# Tutorial: Deploy Azure Machine Learning as an IoT Edge module (preview)
+
Use Azure Notebooks to develop a machine learning module and deploy it to a Linux device running Azure IoT Edge. You can use IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through deploying an Azure Machine Learning module that predicts when a device fails based on simulated machine temperature data. For more information about Azure Machine Learning on IoT Edge, see [Azure Machine Learning documentation](../machine-learning/how-to-deploy-and-where.md).
In this tutorial, you learn how to:
An Azure IoT Edge device:
-* You can use an Azure virtual machine as an IoT Edge device by following the steps in the quickstart for [Linux](quickstart-linux.md).
+* You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* The Azure Machine Learning module doesn't support Windows containers.
* The Azure Machine Learning module doesn't support ARM processors.
iot-edge Tutorial Deploy Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-deploy-stream-analytics.md
# Tutorial: Deploy Azure Stream Analytics as an IoT Edge module
+
Many IoT solutions use analytics services to gain insight about data as it arrives in the cloud from IoT devices. With Azure IoT Edge, you can take [Azure Stream Analytics](../stream-analytics/index.yml) logic and move it onto the device itself. By processing telemetry streams at the edge, you can reduce the amount of uploaded data and reduce the time it takes to react to actionable insights. Azure IoT Edge and Azure Stream Analytics are integrated to simplify your workload development. You can create an Azure Stream Analytics job in the Azure portal and then deploy it as an IoT Edge module with no additional code.
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-develop-for-linux.md
-# Tutorial: Develop IoT Edge modules for Linux devices
+# Tutorial: Develop IoT Edge modules with Linux containers
-Use Visual Studio Code to develop and deploy code to Linux devices running IoT Edge.
-In the quickstart, you created an IoT Edge device using a Linux virtual machine and deployed a module from the Azure Marketplace. This tutorial walks through developing and deploying your own code to an IoT Edge device. This article is a useful prerequisite for the other tutorials, which go into more detail about specific programming languages or Azure services.
+Use Visual Studio Code to develop and deploy code to devices running IoT Edge.
+
+In the quickstart, you created an IoT Edge device and deployed a module from the Azure Marketplace. This tutorial walks through developing and deploying your own code to an IoT Edge device. This article is a useful prerequisite for the other tutorials, which go into more detail about specific programming languages or Azure services.
This tutorial uses the example of deploying a **C# module to a Linux device**. This example was chosen because it's the most common developer scenario for IoT Edge solutions. Even if you plan on using a different language or deploying an Azure service, this tutorial is still useful to learn about the development tools and concepts. Complete this introduction to the development process, then choose your preferred language or Azure service to dive into the details.
A development machine:
* [C# for Visual Studio Code (powered by OmniSharp) extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp).
* [.NET Core 2.1 SDK](https://www.microsoft.com/net/download).
-An Azure IoT Edge device on Linux:
+An Azure IoT Edge device:
* We recommend that you don't run IoT Edge on your development machine, but instead use a separate device. This distinction between development machine and IoT Edge device more accurately mirrors a true deployment scenario, and helps to keep the different concepts straight.
* If you don't have a second device available, use the quickstart article to create an IoT Edge device in Azure with a [Linux virtual machine](quickstart-linux.md).
This tutorial walks through the development of an IoT Edge module. An *IoT Edge
When developing IoT Edge modules, it's important to understand the difference between the development machine and the target IoT Edge device where the module will eventually be deployed. The container that you build to hold your module code must match the operating system (OS) of the *target device*. For example, the most common scenario is someone developing a module on a Windows computer intending to target a Linux device running IoT Edge. In that case, the container operating system would be Linux. As you go through this tutorial, keep in mind the difference between the *development machine OS* and the *container OS*.
-This tutorial targets Linux devices running IoT Edge. You can use your preferred operating system as long as your development machine runs Linux containers. We recommend using Visual Studio Code to develop for Linux devices, so that's what this tutorial will use. You can use Visual Studio as well, although there are differences in support between the two tools.
+>[!TIP]
+>If you're using [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md), then the *target device* in your scenario is the Linux virtual machine, not the Windows host.
+
+This tutorial targets devices running IoT Edge with Linux containers. You can use your preferred operating system as long as your development machine runs Linux containers. We recommend using Visual Studio Code to develop with Linux containers, so that's what this tutorial will use. You can use Visual Studio as well, although there are differences in support between the two tools.
The following table lists the supported development scenarios for **Linux containers** in Visual Studio Code and Visual Studio.
iot-edge Tutorial Develop For Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-develop-for-windows.md
-# Tutorial: Develop IoT Edge modules for Windows devices
+# Tutorial: Develop IoT Edge modules using Windows containers
+ Use Visual Studio to develop and deploy code to Windows devices running IoT Edge.
-In the quickstart, you created an IoT Edge device using a Windows virtual machine and deployed a pre-built module from the Azure Marketplace. This tutorial walks through what it takes to develop and deploy your own code to an IoT Edge device. This tutorial is a useful prerequisite for the other tutorials, which go into more detail about specific programming languages or Azure services.
+>[!NOTE]
+>IoT Edge 1.1 LTS is the last release channel that will support Windows containers. Starting with version 1.2, Windows containers are not supported. Consider using or moving to [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) to run IoT Edge on Windows devices.
+
+This tutorial walks through what it takes to develop and deploy your own code to an IoT Edge device. This tutorial is a useful prerequisite for the other tutorials, which go into more detail about specific programming languages or Azure services.
This tutorial uses the example of deploying a **C# module to a Windows device**. This example was chosen because it's the most common development scenario. If you're interested in developing in a different language, or plan on deploying Azure services as modules, this tutorial will still be helpful to learn about the development tools. Once you understand the development concepts, then you can choose your preferred language or Azure service to dive into the details.
A development machine:
An Azure IoT Edge device on Windows:
-* We recommend that you don't run IoT Edge on your development machine, but instead use a separate device. This distinction between development machine and IoT Edge device more accurately mirrors a true deployment scenario, and helps to keep the different concepts straight.
-* If you don't have a second device available, use the quickstart article to create an IoT Edge device in Azure with a [Windows virtual machine](quickstart.md).
+* [Install and manage Azure IoT Edge with Windows containers](how-to-install-iot-edge-windows-on-windows.md).
+* We recommend that you don't run IoT Edge on your development machine, but instead use a separate device if possible. This distinction between development machine and IoT Edge device more accurately mirrors a true deployment scenario, and helps to keep the different concepts straight.
Cloud resources:
iot-edge Tutorial Java Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-java-module.md
-# Tutorial: Develop a Java IoT Edge module for Linux devices
+# Tutorial: Develop a Java IoT Edge module using Linux containers
-You can use Azure IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data. You'll use the simulated IoT Edge device that you created in the Deploy Azure IoT Edge on a simulated device in [Linux](quickstart-linux.md) quickstart. In this tutorial, you learn how to:
+
+You can use Azure IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data. You'll use the simulated IoT Edge device that you created in the Deploy Azure IoT Edge on a simulated device in the quickstart articles. In this tutorial, you learn how to:
> [!div class="checklist"] >
The IoT Edge module that you create in this tutorial filters the temperature data.
## Prerequisites
-This tutorial demonstrates how to develop a module in **Java** using **Visual Studio Code**, and how to deploy it to a **Linux device**. IoT Edge does not support Java modules for Windows devices.
+This tutorial demonstrates how to develop a module in **Java** using **Visual Studio Code**, and how to deploy it to an IoT Edge device. IoT Edge does not support Java modules built as Windows containers.
Use the following table to understand your options for developing and deploying Java modules:
Use the following table to understand your options for developing and deploying
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules for Linux devices](tutorial-develop-for-linux.md). By completing either of those tutorials, you should have the following prerequisites in place:
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A [Linux device running Azure IoT Edge](quickstart-linux.md)
+* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml). * [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools). * [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
-To develop an IoT Edge module in Java, install the following additional prerequisites on your development machine:
+To develop an IoT Edge module in Java, install the following additional prerequisites on your development machine:
* [Java Extension Pack](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-pack) for Visual Studio Code.
* [Java SE Development Kit 11](/azure/developer/java/fundamentals/java-jdk-long-term-support), and [set the `JAVA_HOME` environment variable](https://docs.oracle.com/cd/E19182-01/820-7851/inst_cli_jdk_javahome_t/) to point to your JDK installation.
iot-edge Tutorial Machine Learning Edge 01 Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-01-intro.md
# Tutorial: An end-to-end solution using Azure Machine Learning and IoT Edge
+
Frequently, IoT applications want to take advantage of the intelligent cloud and the intelligent edge. In this tutorial, we walk you through training a machine learning model with data collected from IoT devices in the cloud, deploying that model to IoT Edge, and maintaining and refining the model periodically.
+>[!NOTE]
+>The concepts in this set of tutorials apply to all versions of IoT Edge, but the sample device that you create to try out the scenario runs IoT Edge version 1.1.
+ The primary objective of this tutorial is to introduce the processing of IoT data with machine learning, specifically on the edge. While we touch many aspects of a general machine learning workflow, this tutorial is not intended as an in-depth introduction to machine learning. As a case in point, we do not attempt to create a highly optimized model for the use case – we just do enough to illustrate the process of creating and using a viable model for IoT data processing. This section of the tutorial discusses:
iot-edge Tutorial Machine Learning Edge 02 Prepare Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-02-prepare-environment.md
# Tutorial: Set up an environment for machine learning on IoT Edge
+
This article helps you prepare your environment for development and deployment. First, set up a development machine with all the tools you need. Then, create the necessary cloud resources in Azure. In this section of the tutorial, you learn how to:
iot-edge Tutorial Machine Learning Edge 03 Generate Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-03-generate-data.md
# Tutorial: Generate simulated device data
+
In this article, we use machine learning training data to simulate a device sending telemetry to Azure IoT Hub. As stated in the introduction, this tutorial uses the [Turbofan engine degradation simulation data set](https://c3.nasa.gov/dashlink/resources/139/) to simulate data from a set of airplane engines for training and testing. In our experimental scenario, we know that:
iot-edge Tutorial Machine Learning Edge 04 Train Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-04-train-model.md
# Tutorial: Train and deploy an Azure Machine Learning model
+
In this article, we do the following tasks:

* Use Azure Machine Learning Studio to train a machine learning model.
iot-edge Tutorial Machine Learning Edge 05 Configure Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-05-configure-edge-device.md
# Tutorial: Configure an Azure IoT Edge device
+
In this article, we configure an Azure virtual machine running Linux to be an Azure IoT Edge device that acts as a transparent gateway. A transparent gateway configuration allows devices to connect to Azure IoT Hub through the gateway without knowing that the gateway exists. At the same time, a user interacting with the devices in IoT Hub is unaware of the intermediate gateway device. Ultimately, we'll add edge analytics to our system by adding IoT Edge modules to the transparent gateway.
+>[!NOTE]
+>The concepts in this tutorial apply to all versions of IoT Edge, but the sample device that you create to try out the scenario runs IoT Edge version 1.1.
+ The steps in this article are typically performed by a cloud developer. In this section of the tutorial, you learn how to:
iot-edge Tutorial Machine Learning Edge 06 Custom Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-06-custom-modules.md
# Tutorial: Create and deploy custom IoT Edge modules
+
In this article, we create three IoT Edge modules that receive messages from leaf IoT devices, run the data through your machine learning model, and then forward insights to IoT Hub. IoT Edge hub facilitates module to module communication. Using the IoT Edge hub as a message broker keeps modules independent from each other. Modules only need to specify the inputs on which they accept messages and the outputs to which they write messages.
iot-edge Tutorial Machine Learning Edge 07 Send Data To Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-machine-learning-edge-07-send-data-to-hub.md
# Tutorial: Send data via transparent gateway
+
In this article, we once again use the development VM as a simulated device. However, instead of sending data directly to the IoT Hub, the device sends data to the IoT Edge device configured as a transparent gateway. We monitor the operation of the IoT Edge device while the simulated device is sending data. Once the device is finished running, we look at the data in our storage account to validate everything worked as expected.
iot-edge Tutorial Nested Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-nested-iot-edge.md
monikerRange: ">=iotedge-2020-11"
# Tutorial: Create a hierarchy of IoT Edge devices (Preview)
+
Deploy Azure IoT Edge nodes across networks organized in hierarchical layers. Each layer in a hierarchy is a gateway device that handles messages and requests from devices in the layer beneath it.

>[!NOTE]
iot-edge Tutorial Node Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-node-module.md
-# Tutorial: Develop and deploy a Node.js IoT Edge module for Linux devices
+# Tutorial: Develop and deploy a Node.js IoT Edge module using Linux containers
-Use Visual Studio Code to develop Node.js code and deploy it to a Linux device running Azure IoT Edge.
-You can use IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data. You'll use the simulated IoT Edge device that you created in the quickstarts. In this tutorial, you learn how to:
+Use Visual Studio Code to develop Node.js code and deploy it to a device running Azure IoT Edge.
+
+You can use IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data. You'll use the IoT Edge device that you created in the quickstarts. In this tutorial, you learn how to:
> [!div class="checklist"] >
The IoT Edge module that you create in this tutorial filters the temperature data.
## Prerequisites
-This tutorial demonstrates how to develop a module in **Node.js** using **Visual Studio Code**, and how to deploy it to a **Linux device**. IoT Edge does not support Node.js modules for Windows devices.
+This tutorial demonstrates how to develop a module in **Node.js** using **Visual Studio Code**, and how to deploy it to an IoT Edge device.
+
+IoT Edge does not support Node.js modules using Windows containers.
Use the following table to understand your options for developing and deploying Node.js modules:
Use the following table to understand your options for developing and deploying
| **Linux AMD64** | ![Use VS Code for Node.js modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
| **Linux ARM32** | ![Use VS Code for Node.js modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
-Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules for Linux devices](tutorial-develop-for-linux.md). By completing either of those tutorials, you should have the following prerequisites in place:
+Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A [Linux device running Azure IoT Edge](quickstart-linux.md)
+* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
* [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
iot-edge Tutorial Python Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-python-module.md
-# Tutorial: Develop and deploy a Python IoT Edge module for Linux devices
+# Tutorial: Develop and deploy a Python IoT Edge module using Linux containers
-Use Visual Studio Code to develop Python code and deploy it to a Linux device running Azure IoT Edge.
+
+Use Visual Studio Code to develop Python code and deploy it to a device running Azure IoT Edge.
You can use Azure IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data on the IoT Edge device that you set up in the quickstart. In this tutorial, you learn how to:
The IoT Edge module that you create in this tutorial filters the temperature data.
## Prerequisites
-This tutorial demonstrates how to develop a module in **Python** using **Visual Studio Code**, and how to deploy it to a **Linux device**. IoT Edge does not support Python modules for Windows devices.
+This tutorial demonstrates how to develop a module in **Python** using **Visual Studio Code**, and how to deploy it to an IoT Edge device.
+
+IoT Edge does not support Python modules using Windows containers.
-Use the following table to understand your options for developing and deploying Python modules to Linux:
+Use the following table to understand your options for developing and deploying Python modules using Linux containers:
| Python | Visual Studio Code | Visual Studio 2017/2019 |
| - | - | - |
| **Linux AMD64** | ![Use VS Code for Python modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
| **Linux ARM32** | ![Use VS Code for Python modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
-Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules for Linux devices](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
+Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A [Linux device running Azure IoT Edge](quickstart-linux.md)
+* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
* [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
iot-edge Tutorial Store Data Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-store-data-sql-server.md
# Tutorial: Store data at the edge with SQL Server databases
+
Deploy a SQL Server module to store data on a Linux device running Azure IoT Edge. Use Azure IoT Edge and SQL Server to store and query data at the edge. Azure IoT Edge has basic storage capabilities to cache messages if a device goes offline, and then forward them when the connection is reestablished. However, you may want more advanced storage capabilities, like being able to query data locally. Your IoT Edge devices can use local databases to perform more complex computing without having to maintain a connection to IoT Hub.
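As a rough illustration of querying data stored at the edge, the sketch below uses Python and pyodbc against the local SQL Server module. The connection details, database, and table names are placeholders for illustration; the tutorial itself uses an Azure Functions module to write and read the data.

```python
import pyodbc

# Placeholder connection string: the SQL Server module is typically reachable by its
# module name on the edge device's Docker network, with credentials set at deployment.
CONNECTION_STRING = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql,1433;DATABASE=MeasurementsDB;UID=sa;PWD=<your-password>"
)

def latest_readings(limit: int = 10):
    """Return the most recent temperature rows stored locally (table name is illustrative)."""
    with pyodbc.connect(CONNECTION_STRING) as connection:
        cursor = connection.cursor()
        cursor.execute(
            "SELECT TOP (?) timeCreated, machineTemperature "
            "FROM dbo.TemperatureReadings ORDER BY timeCreated DESC",
            limit,
        )
        return cursor.fetchall()

for row in latest_readings():
    print(row.timeCreated, row.machineTemperature)
```

Queries like this run entirely on the device, so they keep working even while the device is disconnected from IoT Hub.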
In this tutorial, you learn how to:
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules for Linux devices](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* An AMD64 [Linux device running Azure IoT Edge](quickstart-linux.md).
+* An AMD64 device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
  * ARM devices, like Raspberry Pis, cannot run SQL Server. If you want to use SQL on an ARM device, you can sign up to try [Azure SQL Edge](https://azure.microsoft.com/services/sql-edge/) in preview.
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 03/03/2021 Last updated : 03/18/2021 # Limits and configuration information for Azure Logic Apps
Here are the limits for a single logic app run:
| Name | Multi-tenant limit | Integration service environment limit | Notes |
||--||-|
| Run duration | 90 days | 366 days | Run duration is calculated by using a run's start time and the limit that's specified in the workflow setting, [**Run history retention in days**](#change-duration) at that start time. <p><p>To change the default limit, see [Change run duration and history retention in storage](#change-duration). |
-| Run history retention in storage | 90 days | 366 days | If a run's duration exceeds the current run history retention limit, the run is removed from the runs history in storage. Whether the run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <p><p>To change the default limit and for more information, see [Change duration and run history retention in storage](#change-retention). To increase the maximum limit, [contact the Logic Apps team](mailto://logicappsemail@microsoft.com) for help with your requirements. |
+| Run history retention in storage | 90 days | 366 days | If a run's duration exceeds the current run history retention limit, the run is removed from the runs history in storage. Whether the run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <p><p>To change the default limit and for more information, see [Change duration and run history retention in storage](#change-retention). To increase the maximum limit, [contact the Logic Apps team](mailto://logicappspm@microsoft.com) for help with your requirements. |
| Minimum recurrence interval | 1 second | 1 second ||
| Maximum recurrence interval | 500 days | 500 days ||
|||||
For more information about your logic app resource definition, see [Overview: Au
| Name | Limit | Notes |
||-|-|
- | Base unit execution limit | System-throttled when infrastructure capacity reaches 80% | Provides ~4,000 action executions per minute, which is ~160 million action executions per month | |
- | Scale unit execution limit | System-throttled when infrastructure capacity reaches 80% | Each scale unit can provide ~2,000 additional action executions per minute, which is ~80 million more action executions per month | |
+ | Base unit execution limit | System-throttled when infrastructure capacity reaches 80% | Provides ~4,000 action executions per minute, which is ~160 million action executions per month |
+ | Scale unit execution limit | System-throttled when infrastructure capacity reaches 80% | Each scale unit can provide ~2,000 additional action executions per minute, which is ~80 million more action executions per month |
| Maximum scale units that you can add | 10 | |
||||
Some connector operations make asynchronous calls or listen for webhook requests
#### Character limits
-| Name | Notes |
-||-|
+| Name | Limit | Notes |
+||-|-|
| Expression evaluation limit | 131,072 characters | The `@concat()`, `@base64()`, `@string()` expressions can't be longer than this limit. |
-| Request URL character limit | 16,384 characters |
-|||
+| Request URL character limit | 16,384 characters | |
+||||
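These character limits lend themselves to simple client-side validation before you call a workflow. The following Python helper is a hypothetical pre-flight check, not part of Logic Apps itself; it only restates the two limits quoted in the table above.

```python
# Limits quoted in the character limits table above.
EXPRESSION_EVALUATION_LIMIT = 131_072   # max size for @concat(), @base64(), @string() results
REQUEST_URL_LIMIT = 16_384              # max request URL length

def check_request(url: str, expression_inputs: list) -> list:
    """Return a list of limit violations for a planned Logic Apps request."""
    problems = []
    if len(url) > REQUEST_URL_LIMIT:
        problems.append(f"Request URL is {len(url)} characters; the limit is {REQUEST_URL_LIMIT}.")
    for text in expression_inputs:
        if len(text) > EXPRESSION_EVALUATION_LIMIT:
            problems.append(
                f"An expression input is {len(text)} characters; "
                f"expression results are limited to {EXPRESSION_EVALUATION_LIMIT}."
            )
    return problems

# Example: a URL placeholder and an oversized string that an expression could not return.
print(check_request("https://prod-00.westus.logic.azure.com:443/workflows/...", ["x" * 200_000]))
```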
<a name="retry-policy-limits"></a>
media-services Monitor Media Services Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/monitoring/monitor-media-services-data-reference.md
+
+ Title: Monitoring Media Services data reference
+description: Important reference material needed when you monitor Media Services
+++++ Last updated : 03/11/2021++
+# Monitoring Media Services data reference
+
+This article covers the data that is useful for monitoring Media Services. For more information about all platform metrics supported in Azure Monitor, review [Supported metrics with Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/platform/metrics-supported).
+
+## Media Services metrics
+
+Metrics are collected at regular intervals whether or not the value changes. They're useful for alerting because they can be sampled frequently, and an alert can be fired quickly with relatively simple logic.
+
+Media Services supports monitoring metrics for the following resources:
+
+* Account
+* Streaming Endpoint
+
+### Account
+
+You can monitor the following account metrics.
+
+|Metric name|Display name|Description|
+||||
+|AssetCount|Asset count|Assets in your account.|
+|AssetQuota|Asset quota|Asset quota in your account.|
+|AssetQuotaUsedPercentage|Asset quota used percentage|The percentage of the Asset quota already used.|
+|ContentKeyPolicyCount|Content Key Policy count|Content Key Policies in your account.|
+|ContentKeyPolicyQuota|Content Key Policy quota|Content Key Policies quota in your account.|
+|ContentKeyPolicyQuotaUsedPercentage|Content Key Policy quota used percentage|The percentage of the Content Key Policy quota already used.|
+|StreamingPolicyCount|Streaming Policy count|Streaming Policies in your account.|
+|StreamingPolicyQuota|Streaming Policy quota|Streaming Policies quota in your account.|
+|StreamingPolicyQuotaUsedPercentage|Streaming Policy quota used percentage|The percentage of the Streaming Policy quota already used.|
+
+You should also review [account quotas and limits](../limits-quotas-constraints.md).
+
+### Streaming Endpoint
+
+The following Media Services [Streaming Endpoints](/rest/api/media/streamingendpoints) metrics are supported:
+
+|Metric name|Display name|Description|
+||||
+|Requests|Requests|Provides the total number of HTTP requests served by the Streaming Endpoint.|
+|Egress|Egress|Egress bytes total per minute per Streaming Endpoint.|
+|SuccessE2ELatency|Success end to end Latency|Time duration from when the Streaming Endpoint received the request to when the last byte of the response was sent.|
+|CPU usage| | CPU usage for premium streaming endpoints. This data is not available for standard streaming endpoints. |
+|Egress bandwidth | | Egress bandwidth in bits per second.|
+
+## Metric Dimensions
+
+For more information on what metric dimensions are, see [Multi-dimensional metrics](https://docs.microsoft.com/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+
+**PLACEHOLDER** for dimensions table.
+
+## Resource logs
+
+## Media Services diagnostic logs
+
+Diagnostic logs provide rich and frequent data about the operation of an Azure resource. For more information, see [How to collect and consume log data from your Azure resources](https://docs.microsoft.com/azure/azure-monitor/essentials/platform-logs-overview).
+
+Media Services supports the following diagnostic logs:
+
+* Key delivery
+
+### Key delivery
+
+|Name|Description|
+|||
+|Key delivery service request|Logs that show the key delivery service request information. For more information, see [schemas](monitor-media-services-data-reference.md).|
+
+## Schemas
+
+For detailed description of the top-level diagnostic logs schema, see [Supported services, schemas, and categories for Azure Diagnostic Logs](https://docs.microsoft.com/azure/azure-monitor/essentials/resource-logs-schema).
+
+## Key delivery log schema properties
+
+These properties are specific to the key delivery log schema.
+
+|Name|Description|
+|||
+|keyId|The ID of the requested key.|
+|keyType|Could be one of the following values: "Clear" (no encryption), "FairPlay", "PlayReady", or "Widevine".|
+|policyName|The Azure Resource Manager name of the policy.|
+|tokenType|The token type.|
+|statusMessage|The status message.|
+
+### Example
+
+Properties of the key delivery requests schema.
+
+```json
+{
+ "time": "2019-01-11T17:59:10.4908614Z",
+ "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-0000000000/RESOURCEGROUPS/SBKEY/PROVIDERS/MICROSOFT.MEDIA/MEDIASERVICES/SBDNSTEST",
+ "operationName": "MICROSOFT.MEDIA/MEDIASERVICES/CONTENTKEYS/READ",
+ "operationVersion": "1.0",
+ "category": "KeyDeliveryRequests",
+ "resultType": "Succeeded",
+ "resultSignature": "OK",
+ "durationMs": 315,
+ "identity": {
+ "authorization": {
+ "issuer": "http://testacs",
+ "audience": "urn:test"
+ },
+ "claims": {
+ "urn:microsoft:azure:media
+ "iss": "http://testacs",
+ "aud": "urn:test",
+ "exp": "1547233138"
+ }
+ },
+ "level": "Informational",
+ "location": "uswestcentral",
+ "properties": {
+ "requestId": "b0243468-d8e5-4edf-a48b-d408e1661050",
+ "keyType": "Clear",
+ "keyId": "3321e646-78d0-4896-84ec-c7b98eddfca5",
+ "policyName": "56a70229-82d0-4174-82bc-e9d3b14e5dbf",
+ "tokenType": "JWT",
+ "statusMessage": "OK"
+ }
+}
+```
+
+```json
+ {
+ "time": "2019-01-11T17:59:33.4676382Z",
+ "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-0000000000/RESOURCEGROUPS/SBKEY/PROVIDERS/MICROSOFT.MEDIA/MEDIASERVICES/SBDNSTEST",
+ "operationName": "MICROSOFT.MEDIA/MEDIASERVICES/CONTENTKEYS/READ",
+ "operationVersion": "1.0",
+ "category": "KeyDeliveryRequests",
+ "resultType": "Failed",
+ "resultSignature": "Unauthorized",
+ "durationMs": 2,
+ "level": "Error",
+ "location": "uswestcentral",
+ "properties": {
+ "requestId": "875af030-b77c-416b-b7e1-58f23ebec182",
+ "keyType": "Clear",
+ "keyId": "3321e646-78d0-4896-84ec-c7b98eddfca5",
+ "policyName": "56a70229-82d0-4174-82bc-e9d3b14e5dbf",
+ "tokenType": "None",
+ "statusMessage": "No token present in authorization header or URL."
+ }
+}
+```
+
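If you route these key delivery logs to a storage account, each record is a JSON object like the samples above. The following Python sketch is one way you might post-process an exported file; the file layout handling (one JSON record per line) is an assumption about how the export is stored, not an official tool.

```python
import json
from collections import Counter

def summarize_key_delivery(log_path: str) -> Counter:
    """Tally key delivery outcomes by resultType and statusMessage from exported records."""
    outcomes = Counter()
    with open(log_path, encoding="utf-8") as log_file:
        for line in log_file:
            line = line.strip().rstrip(",")
            if not line or line in ("[", "]"):
                continue  # skip blank lines and array brackets, if present
            record = json.loads(line)
            properties = record.get("properties", {})
            outcomes[(record.get("resultType"), properties.get("statusMessage"))] += 1
    return outcomes

# Example output: Counter({('Succeeded', 'OK'): 120, ('Failed', 'No token present in authorization header or URL.'): 3})
print(summarize_key_delivery("KeyDeliveryRequests.json"))
```

A summary like this quickly shows, for example, how many requests failed because no token was presented.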
+>[!NOTE]
+> Widevine is a service provided by Google Inc. and subject to the terms of service and Privacy Policy of Google, Inc.
+
+## Next steps
+
media-services Monitor Media Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/monitoring/monitor-media-services.md
+
+ Title: Monitoring Media Services
+description: Start here to learn how to monitor Media Services
+++++ Last updated : 03/12/2021++
+# Monitor Media Services
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Media Services and how you can use the features of Azure Monitor to analyze and alert on this data.
+
+## Metrics are useful
+
+Here are examples of how monitoring Media Services metrics can help you understand how your apps are performing. Some questions that can be addressed with Media Services metrics are:
+
+- How do I monitor my Standard Streaming Endpoint to know when I have exceeded the limits?
+- How do I know if I have enough Premium Streaming Endpoint scale units?
+- How can I set an alert to know when to scale up my Streaming Endpoints?
+- How do I set an alert to know when the max egress configured on the account was reached?
+- How can I see the breakdown of requests failing and what is causing the failure?
+- How can I see how many HLS or DASH requests are being pulled from the packager?
+- How do I set an alert to know when I have hit the threshold value of failed requests?
+
+<!--THIS DOESN'T BELONG HERE Concurrency becomes a concern for the number of Streaming Endpoints used in a single account over time. You need to keep in mind the relationship between the number of concurrent streams with complex publishing parameters like dynamic packaging to multiple protocols, multiple DRM encryptions etc. Each additional published live stream adds to the CPU and output bandwidth on the Streaming Endpoint. With that in mind, you should use Azure Monitor to closely watch the Streaming Endpoint's utilization (CPU and Egress capacity) to make certain that you are scaling it appropriately (or split traffic out between multiple Streaming Endpoints if you are getting into very high concurrency).-->
+
+<!-- Optional diagram showing monitoring for your service. If you need help creating one, contact
+robb@microsoft.com -->
+
+## What is Azure Monitor?
+
+Media Services creates monitoring data using [Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/overview), which is a full stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises.
+
+Start with reading the article [Monitoring Azure resources with Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/insights/monitor-azure-resource), which describes the following concepts:
+
+- What is Azure Monitor?
+- Costs associated with monitoring
+- Monitoring data collected in Azure
+- Configuring data collection
+- Standard tools in Azure for analyzing and alerting on monitoring data
+
+## Monitoring data
+
+Media Services collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](https://docs.microsoft.com/azure/azure-monitor/insights/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+All data collected by Azure Monitor fits into one of two fundamental types: metrics and logs. With these two types you can:
+
+- Visualize and analyze the metrics data using Metrics explorer.
+- Monitor Media Services diagnostic logs and create alerts and notifications for them.
+- With logs you can:
+ - Send them to Azure Storage
+ - Stream them to Azure Event Hubs
+ - Export them to Log Analytics
+ - Use third-party services
+
+See the article [Monitoring Media Services data reference](monitor-media-services-data-reference.md) for detailed information on the metrics and logs metrics created by Media Services.
+
+## Collection and routing
+
+*Platform metrics* and the *Activity log* are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+*Resource Logs* are **not** collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See the article [Create diagnostic setting to collect platform logs and metrics in Azure](https://docs.microsoft.com/azure/azure-monitor/platform/diagnostic-settings) for the detailed process of creating a diagnostic setting using the Azure portal, CLI, or PowerShell.
+
+When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Media Services are listed in [Media Services monitoring data reference](monitor-media-services-data-reference.md).
+
+## Analyzing metrics
+
+You can analyze metrics for Media Services with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](https://docs.microsoft.com/azure/azure-monitor/platform/metrics-getting-started) for details on using this tool.
+
+For a list of the metrics collected for Media Services, see [Monitoring Media Services Data Reference](monitor-media-services-data-reference.md).
+
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](https://docs.microsoft.com/azure/azure-monitor/platform/diagnostic-logs-schema#top-level-resource-logs-schema).
+
+The schema for Media Services resource logs is found in [Monitoring Media Services Data Reference](monitor-media-services-data-reference.md).
+
+The [Activity log](https://docs.microsoft.com/azure/azure-monitor/platform/activity-log) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+
+For a list of the types of resource logs collected for Media Services, see [Monitoring Media Services data reference](monitor-media-services-data-reference.md).
+
+### Why would I want to use diagnostic logs?
+
+Some things that you can examine with diagnostic logs are:
+
+- The number of licenses delivered by DRM type.
+- The number of licenses delivered by policy.
+- Errors by DRM or policy type.
+- The number of unauthorized license requests from clients.
+
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](https://docs.microsoft.com/azure/azure-monitor/platform/alerts-metric-overview), [logs](https://docs.microsoft.com/azure/azure-monitor/platform/alerts-unified-log), and the [activity log](https://docs.microsoft.com/azure/azure-monitor/platform/activity-log-alerts).
+
+Media Services metrics are collected at regular intervals whether or not the value changes. They're useful for alerting because they can be sampled frequently. An alert can be fired quickly with relatively simple logic.
+
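As a back-of-the-envelope illustration of that "relatively simple logic", the sketch below evaluates a static threshold over a short window of metric samples. The metric, numbers, and rule (3 of 5 samples over the threshold) are made up for illustration; in practice you define this condition in an Azure Monitor alert rule rather than in your own code.

```python
def threshold_breached(samples, threshold, min_breaches=3):
    """Return True when enough recent samples exceed the threshold to warrant an alert."""
    return sum(1 for value in samples if value > threshold) >= min_breaches

# Example: five one-minute samples of a hypothetical Egress metric, in bytes per minute.
egress_samples = [4.1e9, 4.6e9, 5.2e9, 5.0e9, 4.9e9]
print(threshold_breached(egress_samples, threshold=4.5e9))  # True: 4 of 5 samples exceed the threshold
```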
+<!--
+The following table lists common and recommended alert rules for Media Services.
+
+<!-- Fill in the table with metric and log alerts that would be valuable for your service. Change the format as necessary to make it more readable
+**PLACEHOLDER** table
+
+| Alert type | Condition | Description |
+|:|:|:|
+| | | |
+| | | |
+-->
+
+## Next steps
+
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-version-policy.md
Title: Versioning policy - Azure Database for MySQL - Single Server and Flexible Server (Preview)
+ Title: Version support policy - Azure Database for MySQL - Single Server and Flexible Server (Preview)
description: Describes the policy around MySQL major and minor versions in Azure Database for MySQL
Last updated 11/03/2020
-# Azure Database for MySQL versioning policy
+# Azure Database for MySQL version support policy
This page describes the Azure Database for MySQL versioning policy, and is applicable to Azure Database for MySQL - Single Server and Azure Database for MySQL - Flexible Server (Preview) deployment modes.
object-anchors Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/object-anchors/best-practices.md
description: Recommended best practices to get improved results
Previously updated : 02/17/2021 Last updated : 03/12/2021 #
We recommend trying some of these steps to get the best results.
## Ingestion -- Check the dimensions of your physical objects. Object anchors works best for objects whose smallest dimension is in
+- Check the dimensions of your physical objects. Azure Object Anchors works best for objects whose smallest dimension is in
the range of the recommended 1m-10m. - Inspect your 3D model in software like [**MeshLab**](https://www.meshlab.net/) for the following details. - Ensure that the 3D model has a triangle mesh and that the triangles on the exterior surface face outward. That is,
We recommend trying some of these steps to get the best results.
search region could be a bounding box, a sphere, a view frustum, or any combination of them. To avoid a false detection, it is preferable to set a search region large enough to cover the object. When using the provided sample apps, you can stand on one side of the object about 2 meters away from the closest surface and start the app.-- Before starting the Object Anchors app on a HoloLens 2 device, remove the holograms in the vicinity of your workplace
+- Before starting the Azure Object Anchors app on a HoloLens 2 device, remove the holograms in the vicinity of your workplace
via your device's main settings through ***Settings->System->Holograms***. This step ensures that if a new object such as a car is present in the same space as occupied by another previously,
We recommend trying some of these steps to get the best results.
This step ensures that any residual surface estimates created in your space by earlier objects and scans are refreshed with the surfaces of the current target object that you are going to work with. Otherwise, the app may see double ghost surfaces, leading to inaccurate alignment of your 3D model and the associated holograms. Pre-scanning the object will
- also greatly reduce the AOA detection latency, say, from 30 seconds to 5 seconds.
+ also greatly reduce the Azure Object Anchors detection latency, say, from 30 seconds to 5 seconds.
- For dark and highly reflective objects, you may have to scan the object at closer range, moving your head up and down and left and right to let the device see surfaces from multiple angles and multiple distances. - When you see a wrong object detection, such as the orientation being flipped or the pose being incorrect, such as a
object-anchors Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/object-anchors/overview.md
# Azure Object Anchors overview
-Azure Object Anchors enables an application to detect an object in the physical world using a 3D model and estimate its 6DoF pose. The 6DoF (6 degrees of freedom) pose is defined as a rotation and translation between a 3D model and its physical counterpart, the real object.
+Azure Object Anchors enables an application to detect an object in the physical world using a 3D model and estimate its 6DoF pose. The 6DoF (6 degrees of freedom) pose is defined as a rotation and translation between a 3D model and its physical counterpart, the real object.
-Azure Object Anchors is composed of a managed service for model conversion and a runtime client SDK for HoloLens. The service accepts a 3D object model and outputs an Azure Object Anchors model. The Azure Object Anchors model is used along with the runtime SDK to enable a HoloLens application to load an object model, detect, and track instance(s) of that model in the physical world.
+Azure Object Anchors is composed of a service for model conversion and a runtime client SDK for HoloLens. The service accepts a 3D object model and outputs an Azure Object Anchors model. The Azure Object Anchors model is used along with the runtime SDK to enable a HoloLens application to load an object model, detect, and track instance(s) of that model in the physical world.
:::image type="content" source="./media/aoa-overview.jpg" alt-text="Azure Object Anchors in action":::
Azure Object Anchors is composed of a managed service for model conversion and a
Some example use cases enabled by Azure Object Anchors include: -- **Training**. Create mixed reality training experiences for your workers, without the need to place markers or spend time manually adjusting hologram alignment. If you want to augment your mixed reality training experiences with automated detection and tracking, ingest your model into the Object Anchors service and you'll be one step closer to a markerless experience.
+- **Training**. Create Mixed Reality training experiences for your workers, without the need to place markers or spend time manually adjusting hologram alignment. If you want to augment your Mixed Reality training experiences with automated detection and tracking, ingest your model into the Azure Object Anchors service and you'll be one step closer to a markerless experience.
-- **Task Guidance**. Walking employees through a set of tasks can be greatly simplified when using Mixed Reality. Overlaying digital instructions and best practices, on top of the physical object – be it a piece of machinery on a factory floor, or a coffee maker in the team kitchen – can greatly reduce difficulty of completing the set of tasks. Triggering these experiences typically requires some form of marker or manual alignment, but with Object Anchors, you can create an experience that automatically detects the object related to the task at hand. Then, seamlessly flow through Mixed Reality guidance without markers or manual alignment.
+- **Task Guidance**. Walking employees through a set of tasks can be greatly simplified when using Mixed Reality. Overlaying digital instructions and best practices, on top of the physical object – be it a piece of machinery on a factory floor, or a coffee maker in the team kitchen – can greatly reduce difficulty of completing the set of tasks. Triggering these experiences typically requires some form of marker or manual alignment, but with Azure Object Anchors, you can create an experience that automatically detects the object related to the task at hand. Then, seamlessly flow through Mixed Reality guidance without markers or manual alignment.
-- **Asset Finding**. If you already have a 3D model of some object in your physical space, Object Anchors can enable you to locate and track instances of that object in your physical environment.
+- **Asset Finding**. If you already have a 3D model of some object in your physical space, Azure Object Anchors can enable you to locate and track instances of that object in your physical environment.
## Next steps
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-credentials.md
If you are using the Purview managed identity to set up scans, you will not have
Before you can create a Credential, first associate one or more of your existing Azure Key Vault instances with your Azure Purview account.
-1. From the [Azure portal](https://portal.azure.com), select your Azure Purview account. Navigate to the **Management Center** and then navigate to **credentials**.
+1. From the [Azure portal](https://portal.azure.com), select your Azure Purview account and open Azure Purview Studio. In Azure Purview Studio, navigate to the **Management Center** and then to **credentials**.
2. From the **Credentials** page, select **Manage Key Vault connections**.
search Search Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-manage-powershell.md
ResourceId : /subscriptions/<alphanumeric-subscription-ID>/resourceGroups
## Create or delete a service
-[**New-AzSearchService**](/powershell/module/az.search/new-azsearchadminkey) is used to [create a new search service](search-create-service-portal.md).
+[**New-AzSearchService**](/powershell/module/az.search/new-azsearchservice) is used to [create a new search service](search-create-service-portal.md).
```azurepowershell-interactive
New-AzSearchService -ResourceGroupName <resource-group-name> -Name <search-service-name> -Sku "Standard" -Location "West US" -PartitionCount 3 -ReplicaCount 3 -HostingMode Default
```
Build an [index](search-what-is-an-index.md), [query an index](search-query-over
* [Create an Azure Cognitive Search index in the Azure portal](search-get-started-portal.md) * [Set up an indexer to load data from other services](search-indexer-overview.md) * [Query an Azure Cognitive Search index using Search explorer in the Azure portal](search-explorer.md)
-* [How to use Azure Cognitive Search in .NET](search-howto-dotnet-sdk.md)
+* [How to use Azure Cognitive Search in .NET](search-howto-dotnet-sdk.md)
security-center Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-sql-usage.md
To enable this plan:
### Step 1. Provision the Log Analytics agent on your SQL server's host: -- **SQL Server on Azure VM** - If your SQL machine is hosted on an Azure VM, you can [enable auto provisioning of the Log Analytics agent <a name="auto-provision-mma"></a>](security-center-enable-data-collection.md#auto-provision-mma). Alternatively, you can follow the manual procedure for [Onboard your Azure Stack VMs](quickstart-onboard-machines.md#onboard-your-azure-stack-vms).
+- **SQL Server on Azure VM** - If your SQL machine is hosted on an Azure VM, you can [enable auto provisioning of the Log Analytics agent <a name="auto-provision-mma"></a>](security-center-enable-data-collection.md#auto-provision-mma). Alternatively, you can follow the manual procedure for [Onboard your Azure Stack Hub VMs](quickstart-onboard-machines.md?pivots=azure-portal#onboard-your-azure-stack-hub-vms).
- **SQL Server on Azure Arc** - If your SQL Server is managed by [Azure Arc](../azure-arc/index.yml) enabled servers, you can deploy the Log Analytics agent using the Security Center recommendation "Log Analytics agent should be installed on your Windows-based Azure Arc machines (Preview)". Alternatively, you can follow the installation methods described in the [Azure Arc documentation](../azure-arc/servers/manage-vm-extensions.md). - **SQL Server on-prem** - If your SQL Server is hosted on an on-premises Windows machine without Azure Arc, you have two options for connecting it to Azure:
security-center Quickstart Onboard Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/quickstart-onboard-machines.md
Learn more about [Azure Arc enabled servers](../azure-arc/servers/overview.md).
From here, choose the relevant procedure below depending on the type of machines you're onboarding:
- - [Onboard your Azure Stack VMs](#onboard-your-azure-stack-vms)
+ - [Onboard your Azure Stack Hub VMs](#onboard-your-azure-stack-hub-vms)
- [Onboard your Linux machines](#onboard-your-linux-machines) - [Onboard your Windows machines](#onboard-your-windows-machines)
-### Onboard your Azure Stack VMs
+### Onboard your Azure Stack Hub VMs
-To add Azure Stack VMs, you need the information on the **Agents management** page and to configure the **Azure Monitor, Update and Configuration Management** virtual machine extension on the virtual machines running on your Azure Stack.
+To add Azure Stack Hub VMs, you need the information on the **Agents management** page and to configure the **Azure Monitor, Update and Configuration Management** virtual machine extension on the virtual machines running on your Azure Stack Hub instance.
1. From the **Agents management** page, copy the **Workspace ID** and **Primary Key** into Notepad.
-1. Log into your **Azure Stack** portal and open the **Virtual machines** page.
+1. Log into your **Azure Stack Hub** portal and open the **Virtual machines** page.
1. Select the virtual machine that you want to protect with Security Center. >[!TIP]
- > For information on how to create a virtual machine on Azure Stack, see [this quickstart for Windows virtual machines](/azure-stack/user/azure-stack-quick-windows-portal) or [this quickstart for Linux virtual machines](/azure-stack/user/azure-stack-quick-linux-portal).
+ > For information on how to create a virtual machine on Azure Stack Hub, see [this quickstart for Windows virtual machines](/azure-stack/user/azure-stack-quick-windows-portal) or [this quickstart for Linux virtual machines](/azure-stack/user/azure-stack-quick-linux-portal).
1. Select **Extensions**. The list of virtual machine extensions installed on this virtual machine is shown. 1. Select the **Add** tab. The **New Resource** menu shows the list of available virtual machine extensions. 1. Select the **Azure Monitor, Update and Configuration Management** extension and select **Create**. The **Install extension** configuration page opens. >[!NOTE]
- > If you do not see the **Azure Monitor, Update and Configuration Management** extension listed in your marketplace, please reach out to your Azure Stack operator to make it available.
+ > If you do not see the **Azure Monitor, Update and Configuration Management** extension listed in your marketplace, please reach out to your Azure Stack Hub operator to make it available.
1. On the **Install extension** configuration page, paste the **Workspace ID** and **Workspace Key (Primary Key)** that you copied into Notepad in the previous step. 1. When you complete the configuration, select **OK**. The extension's status will show as **Provisioning Succeeded**. It might take up to one hour for the virtual machine to appear in Security Center.
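
If you'd rather script the extension install than use the portal, the sketch below shows a roughly equivalent Azure CLI call. It assumes the CLI is already registered against your Azure Stack Hub's Azure Resource Manager endpoint, and the publisher and extension names are the ones commonly used for the Log Analytics agent, so treat them as assumptions to verify for your deployment.

```azurecli-interactive
# Sketch: install the Log Analytics agent extension on a Windows VM from the CLI.
# Assumes `az cloud register` / `az cloud set` already point the CLI at your
# Azure Stack Hub ARM endpoint; all names and IDs below are placeholders.
az vm extension set \
  --resource-group my-resource-group \
  --vm-name my-stack-hub-vm \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --name MicrosoftMonitoringAgent \
  --settings '{"workspaceId": "<workspace-id>"}' \
  --protected-settings '{"workspaceKey": "<primary-key>"}'
```
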
security-center Security Center Os Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-os-coverage.md
Virtual machines are also created in a customer subscription as part of some Azu
Virtual machines that run in a cloud service are also supported. Only cloud services web and worker roles that run in production slots are monitored. To learn more about cloud services, see [Overview of Azure Cloud Services](../cloud-services/cloud-services-choose-me.md).
-Protection for VMs residing in Azure Stack is also supported. For more information about Security Center's integration with Azure Stack, see [Onboard your Azure Stack virtual machines to Security Center](quickstart-onboard-machines.md).
+Protection for VMs residing in Azure Stack Hub is also supported. For more information about Security Center's integration with Azure Stack Hub, see [Onboard your Azure Stack Hub virtual machines to Security Center](quickstart-onboard-machines.md?pivots=azure-portal#onboard-your-azure-stack-hub-vms).
## Next steps
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/customer-lockbox-overview.md
Customer Lockbox for Microsoft Azure provides an interface for customers to revi
This article covers how to enable Customer Lockbox and how Lockbox requests are initiated, tracked, and stored for later reviews and audits.
-<a name='supported-services-and-scenarios-in-general-availability'><a name='supported-services-and-scenarios-in-preview'>
+<a name='supported-services-and-scenarios-in-general-availability'></a><a name='supported-services-and-scenarios-in-preview'></a>
## Supported services and scenarios (General Availability) The following services are now generally available for Customer Lockbox:
synapse-analytics Oracle To Synapse Analytics Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/migration-guides/oracle-to-synapse-analytics-guide.md
- Title: "Oracle to Azure Synapse Analytics: Migration guide"
-description: The following sections provide an overview of what's involved with migrating an existing Oracle database solution to Azure Synapse Analytics.
----- Previously updated : 08/25/2020--
-# Migration guide: Migrate Oracle data warehouse to a dedicated SQL pool in Azure Synapse Analytics
-The following sections provide an overview of what's involved with migrating an existing Oracle data warehouse solution to Azure Synapse Analytics.
-
-## Overview
-Before migrating, you should verify that Azure Synapse Analytics is the best solution for your workload. Azure Synapse Analytics is a distributed system designed to perform analytics on large data. Migrating to Azure Synapse Analytics requires some design changes that are not difficult to understand but that might take some time to implement. If your business requires an enterprise-class data warehouse, the benefits are worth the effort. However, if you don't need the power of Azure Synapse Analytics, it is more cost-effective to use [SQL Server](https://docs.microsoft.com/sql/sql-server/) or [Azure SQL Database](https://docs.microsoft.com/azure/azure-sql/).
-
-Consider using Azure Synapse Analytics when you:
-- Have one or more Terabytes of data.-- Plan to run analytics on substantial amounts of data.-- Need the ability to scale compute and storage.-- Want to save on costs by pausing compute resources when you don't need them.-
-Rather than Azure Synapse Analytics, consider other options for operational (OLTP) workloads that have:
-- High frequency reads and writes.-- Large numbers of singleton selects.-- High volumes of single row inserts.-- Row-by-row processing needs.-- Incompatible formats (JSON, XML).-
-## Prerequisites
-To migrate your Oracle data warehouse to Azure Synapse Analytics, make sure you have the following prerequisites:
--- A data warehouse or Analytics workload -- SSMA for Oracle to convert Oracle objects to SQL Server. See [Migrating Oracle Databases to SQL Server (OracleToSQL)](https://docs.microsoft.com/sql/ssma/oracle/migrating-oracle-databases-to-sql-server-oracletosql) for more information. -- Latest version of [Azure Synapse Pathway](https://www.microsoft.com/en-us/download/details.aspx?id=102787) tool to migrate SQL Server objects to Azure Synapse objects.-- A [dedicated SQL pool](../get-started-create-workspace.md) in Azure Synapse workspace. --
-## Pre-migration
-After you make the decision to migrate an existing solution to Azure Synapse Analytics, it is important to plan the migration before you get started. A primary goal of planning is to ensure that your data, table schemas, and code are compatible with Azure Synapse Analytics. There are some compatibility differences between your current system and SQL Data Warehouse that you will need to work around. In addition, migrating large amounts of data to Azure takes time. Careful planning will speed up the process of getting your data to Azure. Another key goal of planning is to adjust your design to ensure that your solution takes full advantage of the high query performance that Azure Synapse Analytics is designed to provide. Designing data warehouses for scale introduces unique design patterns, so traditional approaches aren't always the best. While some design adjustments can be made after migration, making changes earlier in the process will save you time later.
-
-## Azure Synapse Pathway
-One of the critical blockers customers face is translating their SQL code when migrating from one system to another. [Azure Synapse Pathway](https://docs.microsoft.com/sql/tools/synapse-pathway/azure-synapse-pathway-overview) helps you upgrade to a modern data warehouse platform by automating the code translation of your existing data warehouse. It's a free, intuitive, and easy to use tool that automates the code translation enabling a quicker migration to Azure Synapse Analytics.
-
-## Migrate
-Performing a successful migration requires you to migrate your table schemas, code, and data. For more detailed guidance on these topics, see:
-- The article [Migrate your schemas](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop).-- The article [Migrate your code](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop).-- The article [Migrate your data](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop).-
-## Additional resources
-- The CAT (Customer Advisory Team) has some great Azure Synapse Analytics (formerly SQL Data Warehouse) guidance published as blog postings. Be sure to take a look at their article, [Migrating data to Azure SQL Data Warehouse in practice](https://docs.microsoft.com/archive/blogs/sqlcat/migrating-data-to-azure-sql-data-warehouse-in-practice), for additional guidance on migration.-- Check out the white paper [Choosing your database migration path to Azure](https://azure.microsoft.com/mediahandler/files/resourcefiles/choosing-your-database-migration-path-to-azure/Choosing_your_database_migration_path_to_Azure.pdf) for additional information and recommendations.-- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](https://docs.microsoft.com/azure/dms/dms-tools-matrix).-
-## Migration assets from real-world engagements
-For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
-
-| Title/link | Description |
-| | |
-| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
-| [Handling Data Encoding Issues While Loading Data to Azure Synapse Analytics](https://azure.microsoft.com/en-us/blog/handling-data-encoding-issues-while-loading-data-to-sql-data-warehouse/) | This blog is intended to provide insight on some of the data encoding issues that you may encounter while using PolyBase to load data to SQL Data Warehouse. This article also provides some options that you can use to overcome such issues and load the data successfully. |
-| [Getting table sizes in Azure Synapse Analytics SQL pool](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Getting%20table%20sizes%20in%20SQL%20DW.pdf) | One of the key tasks that an architect must perform is to get metrics about a new environment post-migration: collecting load times from on-premises to the cloud, collecting PolyBase load times, etc. Of these tasks, one of the most important is to determine the storage size in SQL Data Warehouse compared to the customer's current platform. |
-| [Utility to move On-Premises SQL Server Logins to Azure Synapse Analytics](https://github.com/Microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/MoveLogins) | A PowerShell script that creates a T-SQL command script to re-create logins and select database users from an "on premises" SQL Server to an Azure SQL PaaS service. The tool allows the automatic mapping of Windows AD accounts to Azure AD accounts or it can do UPN lookups for each login against the on premises Windows Active Directory. The tool optionally moves SQL Server native logins as well. Custom server and database roles are scripted, as well as role membership and database role and user permissions. Contained databases are not yet supported and only a subset of possible SQL Server permissions are scripted; i.e. permissions grant with grant are not supported (complex permission trees). More details are available in the support document and the script has comments for ease of understanding. |
-
-> [!NOTE]
-> The above resources were developed as part of the Data Migration Jumpstart Program (DM Jumpstart), which is sponsored by the Azure Data Group engineering team. The core charter of DM Jumpstart is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the DM Jumpstart program, please contact your account team and ask that they submit a nomination.
-
-## Videos
-- For an overview of the Azure Database Migration Guide and the information it contains, see the video [How to Use the Database Migration Guide](https://azure.microsoft.com/resources/videos/how-to-use-the-azure-database-migration-guide/).-- For a walk through of the phases of the migration process and detail about the specific tools and services recommended to perform assessment and migration, see the video [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/).
synapse-analytics Sql Server To Synapse Analytics Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/migration-guides/sql-server-to-synapse-analytics-guide.md
- Title: "SQL Server to Azure Synapse Analytics: Migration guide"
-description: Follow this guide to migrate your SQL databases to Azure Synapse Analytics SQL pool.
------ Previously updated : 03/10/2021-
-# Migration guide: SQL Server to a dedicated SQL pool in Azure Synapse Analytics
-The following sections provide an overview of what's involved with migrating an existing SQL Server data warehouse solution to Azure Synapse Analytics SQL pool.
-
-## Overview
-Before migrating, you should verify that Azure Synapse Analytics is the best solution for your workload. Azure Synapse Analytics is a distributed system designed to perform analytics on large data. Migrating to Azure Synapse Analytics requires some design changes that aren't difficult to understand but that might take some time to implement. If your business requires an enterprise-class data warehouse, the benefits are worth the effort. However, if you don't need the power of Azure Synapse Analytics, it's more cost-effective to use [SQL Server](/sql/sql-server/) or [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
-
-Consider using Azure Synapse Analytics when you:
-- Have one or more Terabytes of data.-- Plan to run analytics on substantial amounts of data.-- Need the ability to scale compute and storage.-- Want to save on costs by pausing compute resources when you don't need them.-
-Rather than Azure Synapse Analytics, consider other options for operational (OLTP) workloads that have:
-- High frequency reads and writes.-- Large numbers of singleton selects.-- High volumes of single row inserts.-- Row-by-row processing needs.-- Incompatible formats (JSON, XML).-
-## Prerequisites
-To migrate your SQL Server to Azure Synapse Analytics, make sure you have the following prerequisites:
--- A data warehouse or Analytics workload -- Latest version of [Azure Synapse Pathway](https://www.microsoft.com/en-us/download/details.aspx?id=102787) tool to migrate SQL Server objects to Azure Synapse objects.-- A [dedicated SQL pool](../get-started-create-workspace.md) in Azure Synapse workspace. -
-## Pre-migration
-After you make the decision to migrate an existing solution to Azure Synapse Analytics, it's important to plan the migration before you get started. A primary goal of planning is to ensure that your data, table schemas, and code are compatible with Azure Synapse Analytics. There are some compatibility differences between your current system and SQL Data Warehouse that you'll need to work around. Also, migrating large amounts of data to Azure takes time. Careful planning will speed up the process of getting your data to Azure. Another key goal of planning is to adjust your design to ensure that your solution takes full advantage of the high query performance that Azure Synapse Analytics is designed to provide. Designing data warehouses for scale introduces unique design patterns, so traditional approaches aren't always the best. While some design adjustments can be made after migration, making changes earlier in the process will save you time later.
-
-## Azure Synapse Pathway
-One of the critical blockers customers face is translating their SQL code when migrating from one system to another. [Azure Synapse Pathway](/sql/tools/synapse-pathway/azure-synapse-pathway-overview) helps you upgrade to a modern data warehouse platform by automating the code translation of your existing data warehouse. It's a free, intuitive, and easy to use tool that automates the code translation enabling a quicker migration to Azure Synapse Analytics.
-
-## Migrate
-Performing a successful migration requires you to migrate your table schemas, code, and data. For more detailed guidance on these topics, see:
-- The article [Migrate your schemas](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop).-- The article [Migrate your code](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop).-- The article [Migrate your data](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop).-
-## Additional resources
-- The CAT (Customer Advisory Team) has some great Azure Synapse Analytics (formerly SQL Data Warehouse) guidance published as blog postings. Be sure to take a look at their article, [Migrating data to Azure SQL Data Warehouse in practice](https://docs.microsoft.com/archive/blogs/sqlcat/migrating-data-to-azure-sql-data-warehouse-in-practice), for additional guidance on migration.-- Check out the white paper [Choosing your database migration path to Azure](https://azure.microsoft.com/mediahandler/files/resourcefiles/choosing-your-database-migration-path-to-azure/Choosing_your_database_migration_path_to_Azure.pdf) for additional information and recommendations.-- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](https://docs.microsoft.com/azure/dms/dms-tools-matrix). -
-## Migration assets from real-world engagements
-For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
-
-| Title/link | Description |
-| | |
-| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
-| [Handling Data Encoding Issues While Loading Data to Azure Synapse Analytics](https://azure.microsoft.com/en-us/blog/handling-data-encoding-issues-while-loading-data-to-sql-data-warehouse/) | This blog is intended to provide insight on some of the data encoding issues that you may encounter while using PolyBase to load data to SQL Data Warehouse. This article also provides some options that you can use to overcome such issues and load the data successfully. |
-| [Getting table sizes in Azure Synapse Analytics SQL pool](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Getting%20table%20sizes%20in%20SQL%20DW.pdf) | One of the key tasks that an architect must perform is to get metrics about a new environment post-migration: collecting load times from on-premises to the cloud, collecting PolyBase load times, etc. Of these tasks, one of the most important is to determine the storage size in SQL Data Warehouse compared to the customer's current platform. |
-| [Utility to move On-Premises SQL Server Logins to Azure Synapse Analytics](https://github.com/Microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/MoveLogins) | A PowerShell script that creates a T-SQL command script to re-create logins and select database users from an "on premises" SQL Server to an Azure SQL PaaS service. The tool allows the automatic mapping of Windows AD accounts to Azure AD accounts or it can do UPN lookups for each login against the on premises Windows Active Directory. The tool optionally moves SQL Server native logins as well. Custom server and database roles are scripted, as well as role membership and database role and user permissions. Contained databases are not yet supported and only a subset of possible SQL Server permissions are scripted; i.e. permissions grant with grant are not supported (complex permission trees). More details are available in the support document and the script has comments for ease of understanding. |
-
-> [!NOTE]
-> The above resources were developed as part of the Data Migration Jumpstart Program (DM Jumpstart), which is sponsored by the Azure Data Group engineering team. The core charter of DM Jumpstart is to unblock and accelerate complex modernization and compete data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the DM Jumpstart program, please contact your account team and ask that they submit a nomination.
-
-## Videos
-- For an overview of the Azure Database Migration Guide and the information it contains, see the video [How to Use the Database Migration Guide](https://azure.microsoft.com/resources/videos/how-to-use-the-azure-database-migration-guide/).-- For a walk through of the phases of the migration process and detail about the specific tools and services recommended to perform assessment and migration, see the video [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/).
time-series-insights Concepts Streaming Ingestion Event Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/concepts-streaming-ingestion-event-sources.md
Events must be sent as UTF-8 encoded JSON.
## Create or edit event sources
-Your event source resource(s) can live in the same Azure subscription as your Azure Time Series Insights Gen2 environment or a different subscription.You can use the [Azure portal](./tutorials-set-up-tsi-environment.md#create-an-azure-time-series-insights-gen2-environment), [Azure CLI](https://github.com/Azure/azure-cli-extensions/tree/master/src/timeseriesinsights), [ARM Templates](time-series-insights-manage-resources-using-azure-resource-manager-template.md), and the [REST API](/rest/api/time-series-insights/management(gen1/gen2)/eventsources) to create, edit, or remove your environment's event sources.
+Your event source resource(s) can live in the same Azure subscription as your Azure Time Series Insights Gen2 environment or a different subscription. You can use the [Azure portal](./tutorial-set-up-environment.md#create-an-azure-time-series-insights-gen2-environment), [Azure CLI](https://github.com/Azure/azure-cli-extensions/tree/master/src/timeseriesinsights), [ARM Templates](time-series-insights-manage-resources-using-azure-resource-manager-template.md), and the [REST API](/rest/api/time-series-insights/management(gen1/gen2)/eventsources) to create, edit, or remove your environment's event sources.
When you connect an event source, your Azure Time Series Insights Gen2 environment will read all of the events currently stored in your IoT hub or event hub, starting with the oldest event.
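
As one example, the `timeseriesinsights` CLI extension can attach an event hub as an event source. Everything below is a sketch with placeholder names, and the exact flag names can vary by extension version, so confirm them with `az tsi event-source eventhub create --help`.

```azurecli-interactive
# Attach an event hub as an event source (placeholder names; verify flags with --help).
az tsi event-source eventhub create \
  --environment-name "my-tsi-env" \
  --resource-group "my-resource-group" \
  --name "my-event-source" \
  --location eastus2 \
  --namespace "my-eventhub-namespace" \
  --event-hub-name "my-event-hub" \
  --consumer-group-name "my-tsi-consumer-group" \
  --key-name RootManageSharedAccessKey \
  --shared-access-key "<access-key>" \
  --event-source-resource-id "<event-hub-resource-id>"
```
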
time-series-insights Concepts Ux Panels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/concepts-ux-panels.md
This article describes the various features and options available within the Azu
To get started with the Azure Time Series Insights Explorer, you must:
-* Have an Azure Time Series Insights Gen2 environment provisioned. Learn more about provisioning an instance by reading the [Azure Time Series Insights Gen2](./tutorials-set-up-tsi-environment.md) tutorial.
+* Have an Azure Time Series Insights Gen2 environment provisioned. Learn more about provisioning an instance by reading the [Azure Time Series Insights Gen2](./tutorial-set-up-environment.md) tutorial.
* [Provide data access](./concepts-access-policies.md) to the Azure Time Series Insights Gen2 environment that you created for the account. You can provide access to others as well as to yourself. * Add an event source to the Azure Time Series Insights Gen2 environment to push data to the environment: * Learn [how to connect to an event hub](./how-to-ingest-data-event-hub.md)
The well displays instance fields and other metadata associated with selected Ti
You may remove specific data elements from your current data well by selecting the red **Delete** (trash can) control on the left side of the element. The well also lets you control how each element is displayed in the chart. You can choose to add min/max shadows and data points, shift the element in time, and visualize the instance in a stepped manner.
-Additionally, The explorations control lets you create time shifts and scatter plots easily.
+Additionally, the explorations control lets you create time shifts and scatter plots easily.
[![Well layout options](media/v2-update-explorer/well-layout-options.png)](media/v2-update-explorer/well-layout-options.png#lightbox)
time-series-insights How To Create Environment Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/how-to-create-environment-using-cli.md
+
+ Title: 'Create an Azure Time Series Insights Gen2 environment using the Azure CLI - Azure Time Series Insights Gen2 | Microsoft Docs'
+description: 'Learn how to set up an environment in Azure Time Series Insights Gen2 using the Azure CLI.'
+++++++ Last updated : 03/15/2021+++
+# Create an Azure Time Series Insights Gen2 environment using the Azure CLI
+
+This document will guide you through creating a new Azure Time Series Insights Gen2 environment.
++
+## Prerequisites
+
+* Create an Azure storage account for your environment's [cold store](./concepts-storage.md#cold-store). This account is designed for long-term retention and analytics for historical data.
+
+> [!NOTE]
+> In your code, replace `mytsicoldstore` with a unique name for your cold storage account.
+
+First, create the storage account:
+
+```azurecli-interactive
+storage=mytsicoldstore
+rg=my-resource-group-name
+az storage account create -g $rg -n $storage --https-only
+key=$(az storage account keys list -g $rg -n $storage --query [0].value --output tsv)
+```
+
+## Creating the environment
+
+Now that the storage account is created and its name and management key are assigned to the variables, run the command below to create the Azure Time Series Insights Environment:
+
+> [!NOTE]
+> In your code, replace the following with unique names for your scenario:
+>
+> * `my-tsi-env` with your Environment name.
+> * `my-ts-id-prop` with the name of your Time Series Id Property.
+
+> [!IMPORTANT]
+> Your environment's Time Series ID is like a database partition key. The Time Series ID also acts as the primary key for your Time Series Model.
+>
+> For more information, see [Best practices for choosing a Time Series ID.](./how-to-select-tsid.md)
+
+```azurecli-interactive
+az tsi environment gen2 create --name "my-tsi-env" --location eastus2 --resource-group $rg --sku name="L1" capacity=1 --time-series-id-properties name=my-ts-id-prop type=String --warm-store-configuration data-retention=P7D --storage-configuration account-name=$storage management-key=$key
+```
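
To confirm that provisioning succeeded, you can read the environment back. This is a small sketch that reuses the variables defined earlier on this page.

```azurecli-interactive
# Confirm the environment exists and inspect its properties.
az tsi environment show --name "my-tsi-env" --resource-group $rg
```
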
+
+## Remove an Azure Time Series Insights environment
+
+You can use the Azure CLI to delete an individual resource, such as a Time Series Insights Environment, or delete a Resource Group and all its resources, including any Time Series Insights Environments.
+
+To [delete a Time Series Insights environment](/cli/azure/ext/timeseriesinsights/tsi/environment?view=azure-cli-latest#ext_timeseriesinsights_az_tsi_environment_delete), run the following command:
+
+```azurecli-interactive
+az tsi environment delete --name "my-tsi-env" --resource-group $rg
+```
+
+To [delete the storage account](/cli/azure/storage/account?view=azure-cli-latest#az_storage_account_delete), run the following command:
+
+```azurecli-interactive
+az storage account delete --name $storage --resource-group $rg
+```
+
+To [delete a resource group](/cli/azure/group#az-group-delete) and all its resources, run the following command:
+
+```azurecli-interactive
+az group delete --name $rg
+```
+
+## Next steps
+
+* Learn about [streaming ingestion event sources](./concepts-streaming-ingestion-event-sources.md) for your Azure Time Series Insights Gen2 environment.
+* Learn how to connect to an [IoT Hub](./how-to-ingest-data-iot-hub.md)
time-series-insights How To Create Environment Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/how-to-create-environment-using-portal.md
+
+ Title: 'Set up a Gen2 environment using the Azure portal - Azure Time Series Insights Gen2 | Microsoft Docs'
+description: 'Learn how to set up an environment in Azure Time Series Insights Gen2 using Azure portal.'
+++++++ Last updated : 03/15/2021+++
+# Create an Azure Time Series Insights Gen2 environment using the Azure portal
+
+This article describes how to create an Azure Time Series Insights Gen2 environment by using the [Azure portal](https://portal.azure.com/).
+
+## Overview
+
+When you provision an Azure Time Series Insights Gen2 environment, you create these Azure resources:
+
+* An Azure Time Series Insights Gen2 environment that follows the pay-as-you-go pricing model
+* An Azure Storage account
+* An optional warm store for faster and unlimited queries
+
+> [!TIP]
+>
+> * Learn [how to plan your environment](./how-to-plan-your-environment.md).
+> * Read about how to [Add an event hub source](./how-to-ingest-data-event-hub.md) or how to [Add an IoT hub source](./how-to-ingest-data-iot-hub.md).
+
+You will learn how to:
+
+1. Associate each Azure Time Series Insights Gen 2 environment with an event source. You will also provide a Timestamp ID property and a unique consumer group to ensure that the environment has access to the appropriate events.
+
+1. After provisioning is complete, you can modify your access policies and other environment attributes to suit your business needs.
+
+ > [!NOTE]
+ > The first step is optional when provisioning an environment. If you skip this step, you must attach an event source to the environment later so data can start flowing into your environment and can be accessed through query.
+
+## Create the environment
+
+To create an Azure Time Series Insights Gen 2 environment:
+
+1. Create an Azure Time Series Insights resource under *Internet of Things* on [Azure portal](https://portal.azure.com/).
+
+1. Select **Gen2(L1)** as the **Tier**. Provide an environment name, and choose the subscription and resource group to use. Then select a supported location to host the environment.
+
+ :::image type="content" source="media/how-to-create-environment-using-portal/environment-configuration.png" alt-text="Create an Azure Time Series Insights instance." lightbox="media/how-to-create-environment-using-portal/environment-configuration.png":::
+
+1. Enter a Time Series ID.
+
+ :::image type="content" source="media/how-to-create-environment-using-portal/environment-configuration-2.png" alt-text="Create an Azure Time Series Insights environment configuration, continued." lightbox="media/how-to-create-environment-using-portal/environment-configuration-2.png":::
+
+ > [!NOTE]
+ >
+ > * The Time Series ID is *case-sensitive* and *immutable*. (It can't be changed after it's set.)
+ > * Time Series IDs can be up to *three* keys. Think of it as a primary key in a database, which uniquely represents each device sensor that would send data to your environment. It could be one property or a combination of up to three properties.
+ > * Read more about [How to choose a Time Series ID](./how-to-select-tsid.md)
+
+1. Create an Azure Storage account by selecting a storage account name and account kind, and by designating a [replication](../storage/common/redundancy-migration.md?tabs=portal) choice. Doing so automatically creates an Azure Storage account. By default, a [general purpose v2](../storage/common/storage-account-overview.md) account will be created. The account is created in the same region as the Azure Time Series Insights Gen2 environment that you previously selected.
+Alternatively, you can also bring your own storage (BYOS) through an [Azure Resource Manager template](./time-series-insights-manage-resources-using-azure-resource-manager-template.md) when you create a new Azure Time Series Gen2 environment.
+
+1. **(Optional)** Enable warm store for your environment if you want faster and unlimited queries over most recent data in your environment. You can also create or delete a warm store through the **Storage Configuration** option in the left navigation pane, after you create an Azure Time Series Insights Gen2 environment.
+
+1. **(Optional)** You can add an event source now. You can also wait until after the instance has been provisioned.
+
+ * Azure Time Series Insights supports [Azure IoT Hub](./how-to-ingest-data-iot-hub.md) and [Azure Event Hubs](./how-to-ingest-data-event-hub.md) as event source options. Although you can add only a single event source when you create the environment, you can add another event source later.
+
+ You can select an existing consumer group or create a new consumer group when you add the event source. Be sure that the event source uses a unique consumer group for your environment to read data into it.
+
+ * Choose the appropriate Timestamp property. By default, Azure Time Series Insights uses the message-enqueued time for each event source.
+
+ > [!TIP]
+ > The message-enqueued time might not be the best configured setting to use in batch event scenarios or historical data uploading scenarios. In such cases, make sure to verify your decision to use or not use a Timestamp property.
+
+ :::image type="content" source="media/how-to-create-environment-using-portal/configure-event-source.png" alt-text="Event Source configuration tab" lightbox="media/how-to-create-environment-using-portal/configure-event-source.png":::
+
+1. Select **Review + Create** to confirm that your environment has been provisioned and configured the way you want.
+
+ :::image type="content" source="media/how-to-create-environment-using-portal/environment-confirmation.png" alt-text="Review + Create tab" lightbox="media/how-to-create-environment-using-portal/environment-confirmation.png":::
+
+## Next steps
+
+* Learn more about Azure Time Series Insights generally available environments and Gen2 environments by reading [Plan your environment](./how-to-plan-your-environment.md).
+* Learn about [streaming ingestion event sources](./concepts-streaming-ingestion-event-sources.md) for your Azure Time Series Insights Gen2 environment.
+* Learn more about [how to manage your environment](./how-to-provision-manage.md).
time-series-insights How To Ingest Data Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/how-to-ingest-data-event-hub.md
This article describes how to use the Azure portal to add an event source that r
## Prerequisites -- Create an Azure Time Series Insights environment as described in [Create an Azure Time Series Insights environment](./tutorials-set-up-tsi-environment.md).
+- Create an Azure Time Series Insights environment as described in [Create an Azure Time Series Insights environment](./tutorial-set-up-environment.md).
- Create an event hub. Read [Create an Event Hubs namespace and an event hub by using the Azure portal](../event-hubs/event-hubs-create.md). - The event hub must have active message events sent to it. Learn how to [Send events to Azure Event Hubs by using the .NET Framework](../event-hubs/event-hubs-dotnet-framework-getstarted-send.md). - Create a dedicated consumer group in the event hub that the Azure Time Series Insights environment can consume from. Each Azure Time Series Insights event source must have its own dedicated consumer group that isn't shared with any other consumer. If multiple readers consume events from the same consumer group, all readers are likely to exhibit failures. There's a limit of 20 consumer groups per event hub. For details, read the [Event Hubs programming guide](../event-hubs/event-hubs-programming-guide.md).
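
If you script your setup, the dedicated consumer group can be created with the Azure CLI as sketched below; the resource group, namespace, event hub, and group names are placeholders.

```azurecli-interactive
# Create a consumer group reserved for the Time Series Insights event source (placeholder names).
az eventhubs eventhub consumer-group create \
  --resource-group my-resource-group \
  --namespace-name my-eventhub-namespace \
  --eventhub-name my-event-hub \
  --name tsi-consumer-group
```
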
time-series-insights How To Ingest Data Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/how-to-ingest-data-iot-hub.md
-+ Last updated 01/21/2021
This article describes how to use the Azure portal to add an event source that r
## Prerequisites
-* Create an [Azure Time Series Insights environment](./tutorials-set-up-tsi-environment.md).
+* Create an [Azure Time Series Insights environment](./tutorial-set-up-environment.md).
* Create an [IoT hub by using the Azure portal](../iot-hub/iot-hub-create-through-portal.md). * The IoT hub must have active message events being sent in. * Create a dedicated consumer group in the IoT hub for the Azure Time Series Insights environment to consume from. Each Azure Time Series Insights event source must have its own dedicated consumer group that isn't shared with any other consumer. If multiple readers consume events from the same consumer group, all readers are likely to exhibit failures. For details, read the [Azure IoT Hub developer guide](../iot-hub/iot-hub-devguide.md).
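
If you script your setup, the dedicated consumer group can be created on the IoT hub with the Azure CLI; the hub and group names below are placeholders.

```azurecli-interactive
# Create a consumer group for Time Series Insights on the IoT hub (placeholder names).
az iot hub consumer-group create \
  --hub-name my-iot-hub \
  --name tsi-consumer-group
```
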
time-series-insights How To Provision Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/how-to-provision-manage.md
Title: Provision and manage a Gen 2 environment - Azure Time Series | Microsoft Docs
-description: Learn how to provision and manage an Azure Time Series Insights Gen 2 environment.
-
+ Title: Manage a Gen 2 environment - Azure Time Series | Microsoft Docs
+description: Learn how to manage an Azure Time Series Insights Gen 2 environment.
+ Previously updated : 10/02/2020 Last updated : 03/15/2021
-# Provision and manage Azure Time Series Insights Gen2
+# Manage Azure Time Series Insights Gen2
-This article describes how to create and manage an Azure Time Series Insights Gen2 environment by using the [Azure portal](https://portal.azure.com/).
-
-## Overview
-
-When you provision an Azure Time Series Insights Gen2 environment, you create these Azure resources:
-
-* An Azure Time Series Insights Gen2 environment that follows pay-as-you-go pricing model
-* An Azure Storage account
-* An optional warm store for faster and unlimited queries
-
-> [!TIP]
->
-> * Learn [how to plan your environment](./how-to-plan-your-environment.md).
-> * Read about how to [Add an event hub source](./how-to-ingest-data-event-hub.md) or how to [Add an IoT hub source](./how-to-ingest-data-iot-hub.md).
-
-You will learn how to:
-
-1. Associate each Azure Time Series Insights Gen 2 environment with an event source. You will also provide a Timestamp ID property and a unique consumer group to ensure that the environment has access to the appropriate events.
-
-1. After provisioning is complete, you can modify your access policies and other environment attributes to suit your business needs.
-
- > [!NOTE]
- > The first step is optional when provisioning an environment. If you skip this step, you must attach an event source to the environment later so data can start flowing into your environment and can be accessed through query.
-
-## Create the environment
-
-To create an Azure Time Series Insights Gen 2 environment:
-
-1. Create an Azure Time Series Insights resource under *Internet of Things* on [Azure portal](https://portal.azure.com/).
-
-1. Select **Gen2(L1)** as the **Tier**. Provide an environment name, and choose the subscription group and resource group to use. Then select a supported location to host the environment.
-
- [![Create an Azure Time Series Insights instance.](media/v2-update-manage/create-and-manage-configuration.png)](media/v2-update-manage/create-and-manage-configuration.png#lightbox)
-
-1. Enter a Time Series ID.
-
- > [!NOTE]
- >
- > * The Time Series ID is *case-sensitive* and *immutable*. (It can't be changed after it's set.)
- > * Time Series IDs can be up to *three* keys. Think of it as a primary key in a database, which uniquely represents each device sensor that would send data to your environment. It could be one property or a combination of upto three properties.
- > * Read more about [How to choose a Time Series ID](./how-to-select-tsid.md)
-
-1. Create an Azure Storage account by selecting a storage account name, account kind and designating a [replication](../storage/common/redundancy-migration.md?tabs=portal) choice. Doing so automatically creates an Azure Storage account. By default, [general purpose v2](../storage/common/storage-account-overview.md) account will be created. The account is created in the same region as the Azure Time Series Insights Gen2 environment that you previously selected.
-Alternatively, you can also bring your own storage (BYOS) through [ARM template](./time-series-insights-manage-resources-using-azure-resource-manager-template.md) when you create a new Azure Time Series Gen2 environment.
-
-1. **(Optional)** Enable warm store for your environment if you want faster and unlimited queries over most recent data in your environment. You can also create or delete a warm store through the **Storage Configuration** option in the left navigation pane, after you create an Azure Time Series Insights Gen2 environment.
-
-1. **(Optional)** You can add an event source now. You can also wait until after the instance has been provisioned.
-
- * Azure Time Series Insights supports [Azure IoT Hub](./how-to-ingest-data-iot-hub.md) and [Azure Event Hubs](./how-to-ingest-data-event-hub.md) as event source options. Although you can add only a single event source when you create the environment, you can add another event source later.
-
- You can select an existing consumer group or create a new consumer group when you add the event source. Please note that the event source requires a unique consumer group for your environment to read data into it.
-
- * Choose the appropriate Timestamp property. By default, Azure Time Series Insights uses the message-enqueued time for each event source.
-
- > [!TIP]
- > The message-enqueued time might not be the best configured setting to use in batch event scenarios or historical data uploading scenarios. In such cases, make sure to verify your decision to use or not use a Timestamp property.
-
- [![Event Source configuration tab](media/v2-update-manage/create-and-manage-event-source.png)](media/v2-update-manage/create-and-manage-event-source.png#lightbox)
-
-1. Confirm that your environment has been provisioned and configured the way you want.
-
- [![Review + Create tab](media/v2-update-manage/create-and-manage-review-and-confirm.png)](media/v2-update-manage/create-and-manage-review-and-confirm.png#lightbox)
+After you've created your Azure Time Series Insights Gen2 environment by using [the Azure CLI](./how-to-create-environment-using-cli.md) or [the Azure portal](./how-to-create-environment-using-portal.md), you can modify your access policies and other environment attributes to suit your business needs.
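
If you manage the environment from the command line, the `timeseriesinsights` CLI extension can grant data access as well. The sketch below uses placeholder values, and the flag names may differ between extension versions, so confirm them with `az tsi access-policy create --help`.

```azurecli-interactive
# Grant Reader access to a user or service principal (placeholder values; verify flags with --help).
az tsi access-policy create \
  --environment-name "my-tsi-env" \
  --resource-group "my-resource-group" \
  --name "reader-policy" \
  --principal-object-id "00000000-0000-0000-0000-000000000000" \
  --roles Reader
```
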
## Manage the environment
-You can manage your Azure Time Series Insights Gen2 environment by using the Azure portal. There a few key differences between a Gen2 environment and Gen1 S1 or Gen1 S2 environments to bear in mind when you manage your environment through the Azure portal:
+You can manage your Azure Time Series Insights Gen2 environment by using the [Azure portal](https://portal.azure.com/). There are a few key differences between a Gen2 environment and Gen1 S1 or Gen1 S2 environments to bear in mind when you manage your environment through the Azure portal:
-* The Azure portal Gen2 **Overview** blade has the following changes:
+* The Azure portal Gen2 **Overview** pane has the following changes:
* Capacity is removed because it doesn't apply to Gen2 environments. * The **Time series ID** property is added. It determines how your data is partitioned.
You can manage your Azure Time Series Insights Gen2 environment by using the Azu
  * The displayed URL directs you to the [Azure Time Series Insights Explorer](./concepts-ux-panels.md).
  * Your Azure Storage account name is listed.
-* The Azure portal's **Configure** blade is removed because scale units don't apply to Azure Time Series Insights Gen2 environments. However, you can use **Storage Configuration** to configure the newly introduced warm store.
+* The Azure portal's **Configure** pane is removed because scale units don't apply to Azure Time Series Insights Gen2 environments. However, you can use **Storage Configuration** to configure the newly introduced warm store.
-* The Azure portal's **Reference data** blade is removed in Azure Time Series Insights Gen2 because reference data concept has been replaced by [Time Series Model (TSM)](./concepts-model-overview.md).
+* The Azure portal's **Reference data** pane is removed in Azure Time Series Insights Gen2 because the reference data concept has been replaced by [Time Series Model (TSM)](./concepts-model-overview.md).
-[![Azure Time Series Insights Gen2 environment in the Azure portal](media/v2-update-manage/create-and-manage-overview-confirm.png)](media/v2-update-manage/create-and-manage-overview-confirm.png#lightbox)
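Warm store can also be toggled from the CLI rather than the **Storage Configuration** pane. This sketch assumes the `timeseriesinsights` extension exposes `--warm-store-configuration` on the update command in your installed version; confirm with `az timeseriesinsights environment gen2 update --help` before relying on it.

```azurecli
# Enable warm store with a 7-day retention window (ISO 8601 duration); names are placeholders.
az timeseriesinsights environment gen2 update \
  --resource-group my-tsi-rg \
  --name my-tsi-env \
  --warm-store-configuration data-retention=P7D
```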
## Next steps
-* Learn more about Azure Time Series Insights generally available environments and Gen2 environments by reading [Plan your environment](./how-to-plan-your-environment.md).
-
-* Learn how to [Add an event hub source](./how-to-ingest-data-event-hub.md).
-
-* Configure an [IoT hub source](./how-to-ingest-data-iot-hub.md).
+* Review the list of [streaming ingestion best practices](./concepts-streaming-ingestion-event-sources.md#streaming-ingestion-best-practices)
+* Understand how to [diagnose and troubleshoot](./how-to-diagnose-troubleshoot.md)
time-series-insights Time Series Insights Send Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-send-events.md
> [!CAUTION]
> This is a Gen1 article.
-This article explains how to create and configure an event hub in Azure Event Hubs. It also describes how to run a sample application to push events to Azure Time Series Insights from Event Hubs. If you have an existing event hub with events in JSON format, skip this tutorial and view your environment in [Azure Time Series Insights](./tutorials-set-up-tsi-environment.md).
+This article explains how to create and configure an event hub in Azure Event Hubs. It also describes how to run a sample application to push events to Azure Time Series Insights from Event Hubs. If you have an existing event hub with events in JSON format, skip this tutorial and view your environment in [Azure Time Series Insights](./tutorial-set-up-environment.md).
## Configure an event hub

1. To learn how to create an event hub, read the [Event Hubs documentation](../event-hubs/index.yml).
1. In the search box, search for **Event Hubs**. In the returned list, select **Event Hubs**.
1. Select your event hub.
-1. When you create an event hub, you're creating an event hub namespace. If you haven't yet created an event hub within the namespace, on the menu, under **Entities**, create an event hub.
+1. When you create an event hub, you're creating an event hub namespace. If you haven't yet created an event hub within the namespace, on the menu, under **Entities**, create an event hub.
[![List of event hubs](media/send-events/tsi-connect-event-hub-namespace.png)](media/send-events/tsi-connect-event-hub-namespace.png#lightbox)
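The same setup can be scripted. The sketch below creates an Event Hubs namespace, an event hub, and a dedicated consumer group for Azure Time Series Insights; all names are placeholders.

```azurecli
# Placeholder names; the namespace name must be globally unique.
az eventhubs namespace create \
  --resource-group my-tsi-rg \
  --name my-tsi-ehns \
  --location westus2

az eventhubs eventhub create \
  --resource-group my-tsi-rg \
  --namespace-name my-tsi-ehns \
  --name tsi-events

# Azure Time Series Insights needs its own consumer group on the event hub.
az eventhubs eventhub consumer-group create \
  --resource-group my-tsi-rg \
  --namespace-name my-tsi-ehns \
  --eventhub-name tsi-events \
  --name tsi-consumer-group
```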
1. Go to <https://tsiclientsample.azurewebsites.net/windFarmGen.html>. The URL creates and runs simulated windmill devices.
1. In the **Event Hub Connection String** box on the webpage, paste the connection string that you copied in the [windmill input field](#push-events-to-windmills-sample).
-
+ [![Paste the primary key connection string in the Event Hub Connection String box](media/send-events/configure-wind-mill-sim.png)](media/send-events/configure-wind-mill-sim.png#lightbox)

1. Select **Click to start**.
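Rather than copying the connection string from the portal, you can print it with the CLI. This sketch assumes the event hub and placeholder names from the previous example and creates a send-only authorization rule for the simulator.

```azurecli
# Create a send-only rule so the simulator can't manage or listen on the hub.
az eventhubs eventhub authorization-rule create \
  --resource-group my-tsi-rg \
  --namespace-name my-tsi-ehns \
  --eventhub-name tsi-events \
  --name send-rule \
  --rights Send

# Print the primary connection string to paste into the Event Hub Connection String box.
az eventhubs eventhub authorization-rule keys list \
  --resource-group my-tsi-rg \
  --namespace-name my-tsi-ehns \
  --eventhub-name tsi-events \
  --name send-rule \
  --query primaryConnectionString \
  --output tsv
```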
time-series-insights Time Series Insights Update Query Data Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/time-series-insights-update-query-data-csharp.md
The sample code below demonstrates the following features:
Complete the following steps before you compile and run the sample code:
-1. [Provision a Gen2 Azure Time Series Insights](./how-to-provision-manage.md#create-the-environment) environment.
+1. [Provision a Gen2 Azure Time Series Insights](./how-to-create-environment-using-portal.md) environment.
1. Configure your Azure Time Series Insights environment for Azure Active Directory as described in [Authentication and authorization](time-series-insights-authentication-and-authorization.md).
1. Run the [GenerateCode.bat](https://github.com/Azure-Samples/Azure-Time-Series-Insights/blob/master/gen2-sample/csharp-tsi-gen2-sample/DataPlaneClient/GenerateCode.bat) as specified in the [Readme.md](https://github.com/Azure-Samples/Azure-Time-Series-Insights/blob/master/gen2-sample/csharp-tsi-gen2-sample/DataPlaneClient/Readme.md) to generate the Azure Time Series Insights Gen2 client dependencies.
1. Open the `TSIPreviewDataPlaneclient.sln` solution and set `DataPlaneClientSampleApp` as the default project in Visual Studio.
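For step 2, the Azure AD pieces can be prepared from the CLI as well. This is a sketch under stated assumptions: the display name, environment name, and IDs are placeholders, and the `access-policy` parameters come from the `timeseriesinsights` extension, so verify them against its help output.

```azurecli
# Create a service principal for the sample app (placeholder display name).
az ad sp create-for-rbac --name tsi-csharp-sample
# Look up its object ID for the access policy (property name may vary by CLI version).
az ad sp show --id <appId> --query objectId --output tsv

# Give that principal Reader access to the environment's data.
az timeseriesinsights access-policy create \
  --resource-group my-tsi-rg \
  --environment-name my-tsi-env \
  --name csharp-sample-reader \
  --principal-object-id <service-principal-object-id> \
  --roles Reader

# A bearer token for the Time Series Insights data plane, handy for quick manual tests.
az account get-access-token \
  --resource https://api.timeseries.azure.com/ \
  --query accessToken --output tsv
```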
time-series-insights Tutorial Set Up Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/tutorial-set-up-environment.md
+
+ Title: 'Tutorial: Set up a Gen2 environment - Azure Time Series Insights Gen2| Microsoft Docs'
+description: 'Tutorial: Learn how to set up an environment in Azure Time Series Insights Gen2.'
+++++++ Last updated : 02/25/2021+
+# Customer intent: As a data analyst or developer, I want to learn how to create an Azure Time Series Insights Gen2 environment so that I can use Azure Time Series Insights Gen2 queries to understand device behavior.
++
+# Tutorial: Set up an Azure Time Series Insights Gen2 environment
+
+This tutorial guides you through the process of creating an Azure Time Series Insights Gen2 *pay-as-you-go* (PAYG) environment.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Create an Azure Time Series Insights Gen2 environment.
+> * Connect the Azure Time Series Insights Gen2 environment to an IoT Hub.
+> * Run a solution accelerator sample to stream data into the Azure Time Series Insights Gen2 environment.
+> * Perform basic analysis on the data.
+> * Define a Time Series Model type and hierarchy, and associate it with your instances.
+
+>[!TIP]
+> [IoT solution accelerators](https://www.azureiotsolutions.com/Accelerators) provide enterprise-grade preconfigured solutions that you can use to accelerate the development of custom IoT solutions.
+
+Sign up for a [free Azure subscription](https://azure.microsoft.com/free/) if you don't already have one.
+
+## Prerequisites
+
+* At minimum, you must have the **Contributor** role for the Azure subscription. For more information, read [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+* Create an environment using either the [Azure portal](#create-an-azure-time-series-insights-gen2-environment) or [CLI](how-to-create-environment-using-cli.md).
+
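If you'd rather check or grant the **Contributor** role from the command line, a sketch like the following works; the object ID and subscription ID are placeholders.

```azurecli
# See which roles a user or service principal already has.
az role assignment list --assignee <user-or-sp-object-id> --output table

# Grant Contributor at subscription scope (you need sufficient rights yourself).
az role assignment create \
  --assignee <user-or-sp-object-id> \
  --role Contributor \
  --scope /subscriptions/<subscription-id>
```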
+## Create a device simulation
+
+In this section, you will create three simulated devices that send data to an Azure IoT Hub instance.
+
+1. Go to the [Azure IoT solution accelerators page](https://www.azureiotsolutions.com/Accelerators). Sign in by using your Azure account, then select **Device Simulation**.
+
+ [![Azure IoT solution accelerators page.](media/tutorial-set-up-environment/iot-solution-accelerators-landing-page.png)](media/tutorial-set-up-environment/iot-solution-accelerators-landing-page.png#lightbox)
+
+1. Scroll down to read the [Overview](https://github.com/Azure/azure-iot-pcs-device-simulation#overview) and [Getting started](https://github.com/Azure/azure-iot-pcs-device-simulation#getting-started) sections.
+
+1. Follow the [deployment instructions](https://github.com/Azure/azure-iot-pcs-device-simulation/blob/master/deployment/README.md) in the Getting Started section.
+
+ It may take up to 20 minutes to complete this process.
+
+1. When deployment has finished, you'll be provided the URL to your simulation. Keep this page open because you'll return to it later.
+
+ >[!IMPORTANT]
+ > Don't enter your solution accelerator yet! Keep this web page open because you'll return to it later.
+
+ [![Device simulation solution provisioning complete.](media/tutorial-set-up-environment/iot-solution-accelerator-ready.png)](media/tutorial-set-up-environment/iot-solution-accelerator-ready.png#lightbox)
+
+1. Now, inspect the newly created resources in the Azure portal. On the **Resource groups** page, notice that a new resource group was created by using the `solutionName` you provided in your ARM template parameters file. Make note of the resources that were created for the device simulation.
+
+ [![Device simulation resources.](media/tutorial-set-up-environment/device-sim-solution-resources.png)](media/tutorial-set-up-environment/device-sim-solution-resources.png#lightbox)
+
+## Create an Azure Time Series Insights Gen2 environment
+
+This section describes how to use the [Azure portal](https://portal.azure.com/) to create an Azure Time Series Insights Gen2 environment and connect it to the IoT hub created by the IoT Solution Accelerator. If you prefer scripting, an Azure CLI sketch that approximates the same setup appears at the end of this section.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) by using your Azure subscription account.
+1. Select **+ Create a resource** in the upper left.
+1. Select the **Internet of Things** category, and then select **Time Series Insights**.
+
+ [![Select the Time Series Insights environment resource.](media/tutorial-set-up-environment/create-new-environment.png)](media/tutorial-set-up-environment/create-new-environment.png#lightbox)
+
+1. In the **Create Time Series Insights environment** pane, on the **Basics** tab, set the following parameters:
+
+ | Parameter | Action |
+ | | |
+ | **Environment name** | Enter a unique name for the Azure Time Series Insights Gen2 environment. |
+ | **Subscription** | Enter the subscription where you want to create the Azure Time Series Insights Gen2 environment. A best practice is to use the same subscription as the rest of the IoT resources that are created by the device simulator. |
+ | **Resource group** | Select an existing resource group or create a new resource group for the Azure Time Series Insights Gen2 environment resource. A resource group is a container for Azure resources. A best practice is to use the same resource group as the other IoT resources that are created by the device simulator. |
+ | **Location** | Select a data center region for your Azure Time Series Insights Gen2 environment. To avoid additional latency, it's best to create your Azure Time Series Insights Gen2 environment in the same region as your IoT hub created by the device simulator. |
+ | **Tier** | Select **Gen2(L1)**. This is the SKU for the Azure Time Series Insights Gen2 product. |
+ | **Time Series ID property name** | Enter a name of a property that contains values that uniquely identify your time series instances. The value you enter in the **Property name** box as Time Series ID cannot be changed later. For this tutorial, enter ***iothub-connection-device-id***. To learn more about Time Series ID including composite Time Series ID, read [Best practices for choosing a Time Series ID](./how-to-select-tsid.md). |
+ | **Storage account name** | Enter a globally unique name for a new storage account.|
+ | **Storage account kind** | Select the kind for a new storage account. We recommend **StorageV2**. |
+ | **Storage account replication** | Select the replication type for a new storage account. Based on your location selection, you can choose from LRS, GRS, and ZRS. For this tutorial, select **LRS**. |
+ | **Hierarchical namespace** | This option becomes selectable after you set the storage account kind to **StorageV2**. It's disabled by default. For this tutorial, leave it in its default *disabled* state. |
+ | **Enable warm store** | Select **Yes** to enable warm store. You can also disable and re-enable this setting after the environment has been created. |
+ | **Data retention (in days)** | Choose the default option of 7 days. |
+
+ [![New Azure Time Series Insights environment configuration.](media/tutorial-set-up-environment/environment-configuration.png)](media/tutorial-set-up-environment/environment-configuration.png#lightbox)
+ [![New Azure Time Series Insights environment configuration, continued.](media/tutorial-set-up-environment/environment-configuration2.png)](media/tutorial-set-up-environment/environment-configuration2.png#lightbox)
+
+1. Select **Next: Event Source**.
+
+ [![Configure Time Series ID for the environment.](media/tutorial-set-up-environment/time-series-id-selection.png)](media/tutorial-set-up-environment/time-series-id-selection.png#lightbox)
+
+1. On the **Event Source** tab, set the following parameters:
+
+ | Parameter | Action |
+ | | |
+ | **Create an event source?** | Select **Yes**.|
+ | **Source type** | Select **IoT Hub**. |
+ | **Name** | Enter a unique value for the event source name. |
+ | **Select a hub** | Choose **Select existing**. |
+ | **Subscription** | Select the subscription that you used for the device simulator. |
+ | **IoT Hub name** | Select the IoT hub name you created for the device simulator. |
+ | **IoT Hub access policy** | Select **iothubowner**. |
+ | **IoT Hub consumer group** | Select **New**, enter a unique name, and then select **+ Add**. The consumer group must be a unique value in Azure Time Series Insights Gen2. |
+ | **Timestamp property** | This value is used to identify the **Timestamp** property in your incoming telemetry data. For this tutorial, leave this box empty. This simulator uses the incoming timestamp from IoT Hub, which Azure Time Series Insights Gen2 defaults to. |
+
+1. Select **Review + Create**.
+
+ [![Configure the created IoT hub as an event source.](media/tutorial-set-up-environment/configure-event-source.png)](media/tutorial-set-up-environment/configure-event-source.png#lightbox)
+
+1. Select **Create**.
+
+ [![Review + Create page, with Create button.](media/tutorial-set-up-environment/environment-confirmation.png)](media/tutorial-set-up-environment/environment-confirmation.png#lightbox)
+
+ You can review the status of your deployment:
+
+ [![Notification that deployment is complete.](media/tutorial-set-up-environment/deployment-notification.png)](media/tutorial-set-up-environment/deployment-notification.png#lightbox)
+
+1. Expand deployment details.
+
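The portal flow above can be approximated end to end with the CLI. Treat the following strictly as a sketch: every name is a placeholder, it assumes the `timeseriesinsights` extension is installed, and the `event-source iothub create` parameters in particular should be double-checked against `--help` for your extension version.

```azurecli
# 1. Storage account for the environment's cold store (StorageV2, LRS).
az storage account create \
  --resource-group my-tsi-rg \
  --name mytsistorage \
  --location westus2 \
  --kind StorageV2 \
  --sku Standard_LRS

STORAGE_KEY=$(az storage account keys list \
  --resource-group my-tsi-rg --account-name mytsistorage \
  --query "[0].value" --output tsv)

# 2. Gen2 (L1) environment with warm store and the tutorial's Time Series ID property.
az timeseriesinsights environment gen2 create \
  --resource-group my-tsi-rg \
  --name my-tsi-env \
  --location westus2 \
  --sku name="L1" capacity=1 \
  --time-series-id-properties name=iothub-connection-device-id type=String \
  --storage-configuration account-name=mytsistorage management-key=$STORAGE_KEY \
  --warm-store-configuration data-retention=P7D

# 3. Dedicated consumer group on the simulator's IoT hub, then the event source.
az iot hub consumer-group create --hub-name my-sim-iothub --name tsi-consumer-group

IOTHUB_KEY=$(az iot hub policy show \
  --hub-name my-sim-iothub --name iothubowner \
  --query primaryKey --output tsv)

az timeseriesinsights event-source iothub create \
  --resource-group my-tsi-rg \
  --environment-name my-tsi-env \
  --name sim-iothub-source \
  --iot-hub-name my-sim-iothub \
  --consumer-group-name tsi-consumer-group \
  --key-name iothubowner \
  --shared-access-key $IOTHUB_KEY \
  --event-source-resource-id $(az iot hub show --name my-sim-iothub --query id --output tsv)
```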
+## Stream data
+
+Now that you've deployed your Azure Time Series Insights Gen2 environment, begin streaming data for analysis.
+
+1. You will be given a URL once the solution accelerator deployment is complete.
+
+1. Click on the URL to launch the device simulation.
+
+1. Select **+ New simulation**.
+
+ 1. After the **Simulation setup** page loads, enter the required parameters.
+
+ | Parameter | Action |
+ | | |
+ | **Name** | Enter a unique name for a simulator. |
+ | **Description** | Enter a description. |
+ | **Simulation duration** | Set to **Run indefinitely**. |
+ | **Device model** | Click + **Add a device type** <br />**Name**: Enter **Elevator**. <br />**Amount**: Enter **3**. <br /> Leave the remaining default values |
+ | **Target IoT Hub** | Set to **Use pre-provisioned IoT Hub**. |
+
+ [![Configure parameters and launch.](media/tutorial-set-up-environment/launch-solution-accelerator.png)](media/tutorial-set-up-environment/launch-solution-accelerator.png#lightbox)
+
+ 1. Select **Start simulation**. In the device simulation dashboard, **Active devices** and **Total messages** are displayed.
+
+ [![Azure IoT simulation dashboard.](media/tutorial-set-up-environment/see-active-devices-and-messages.png)](media/tutorial-set-up-environment/see-active-devices-and-messages.png#lightbox)
+
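Before moving on to analysis, you can confirm that the simulated elevators are emitting telemetry. This sketch uses the `azure-iot` CLI extension's event monitor; the hub name is a placeholder for the hub created by the accelerator.

```azurecli
# Requires the azure-iot extension: az extension add --name azure-iot
az iot hub monitor-events --hub-name my-sim-iothub --timeout 30
```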
+## Analyze data
+
+In this section, you perform basic analytics on your time series data by using the [Azure Time Series Insights Gen2 Explorer](./concepts-ux-panels.md).
+
+1. Go to your Azure Time Series Insights Gen2 Explorer by selecting the URL from the resource page in the [Azure portal](https://portal.azure.com/).
+
+ [![The Azure Time Series Insights Gen2 Explorer URL.](media/tutorial-set-up-environment/select-explorer-url.png)](media/tutorial-set-up-environment/select-explorer-url.png#lightbox)
+
+1. In the Azure Time Series Insights Gen2 Explorer, a bar spanning the top of the screen will appear. This is your availability picker. Ensure that you have at least 2 minutes of data selected, and if needed, expand the time frame by selecting and dragging the picker handles to the left and right.
+
+1. **Time Series Instances** will be displayed on the left-hand side.
+
+ [![List of unparented instances.](media/tutorial-set-up-environment/explorer-unparented-instances.png)](media/tutorial-set-up-environment/explorer-unparented-instances.png#lightbox)
+
+1. Select the first time series instance. Then, select **Show temperature**.
+
+ [![Selected time series instance with menu command to show average temperature.](media/tutorial-set-up-environment/select-instance-and-temperature.png)](media/tutorial-set-up-environment/select-instance-and-temperature.png#lightbox)
+
+ A time series chart appears. Change the **Interval** to **30s**.
+
+1. Repeat the previous step with the other two time series instances so that you're viewing all three, as shown in this chart:
+
+ [![Chart for all time series.](media/tutorial-set-up-environment/explorer-add-three-instances.png)](media/tutorial-set-up-environment/explorer-add-three-instances.png#lightbox)
+
+1. Select the time span picker in the upper right corner. Here you can select specific start and end times down to the millisecond, or choose from pre-configured options such as **Last 30 minutes**. You can also change the default time zone.
+
+ [![Set the time range to the last 30 minutes.](media/tutorial-set-up-environment/explorer-thirty-minute-time-range.png)](media/tutorial-set-up-environment/explorer-thirty-minute-time-range.png#lightbox)
+
+ The solution accelerator's progress over the **Last 30 minutes** is now displayed in the Azure Time Series Insights Gen2 Explorer.
+
+## Define and apply a model
+
+In this section, you apply a model to structure your data. To complete the model, you define types, hierarchies, and instances. To learn more about data modeling, read [Time Series Model](./concepts-model-overview.md).
+
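The type you're about to build in the Explorer can also be expressed as a Time Series Model document and pushed through the environment's data-plane API. The request below is only a rough sketch that shows one of the three variables: the FQDN and type ID are placeholders, and the API version and payload shape should be checked against the Time Series Model reference before use.

```azurecli
# Placeholder FQDN; find yours on the environment's Overview page.
TSI_FQDN="<environment-fqdn>.env.timeseries.azure.com"
TOKEN=$(az account get-access-token --resource https://api.timeseries.azure.com/ \
  --query accessToken --output tsv)

# Upsert an "Elevator" type with the Avg Temperature variable (sketch of the types batch API).
curl -s -X POST "https://$TSI_FQDN/timeseries/types/\$batch?api-version=2020-07-31" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "put": [
      {
        "id": "<type-guid>",
        "name": "Elevator",
        "description": "This is a type definition for Elevator",
        "variables": {
          "Avg Temperature": {
            "kind": "numeric",
            "value": { "tsx": "$event.temperature.Double" },
            "aggregation": { "tsx": "avg($value)" }
          }
        }
      }
    ]
  }'
```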
+1. In the Explorer, select the **Model** tab:
+
+ [![View the Model tab in the Explorer.](media/tutorial-set-up-environment/select-model-view.png)](media/tutorial-set-up-environment/select-model-view.png#lightbox)
+
+ In the **Types** tab, select **+ Add**.
+
+1. Enter the following parameters:
+
+ | Parameter | Action |
+ | | |
+ | **Name** | Enter **Elevator** |
+ | **Description** | Enter **This is a type definition for Elevator** |
+
+1. Next, select the **Variables** tab.
+
+ 1. Select **+ Add Variable** and fill in the following values for the first variable of the Elevator type. You will author three variables in total.
+
+ | Parameter | Action |
+ | | |
+ | **Name** | Enter **Avg Temperature**. |
+ | **Kind** | Select **Numeric** |
+ | **Value** | Select from preset: Select **temperature (Double)**. <br /> Note: It might take a few minutes for **Value** to be automatically populated after Azure Time Series Insights Gen2 starts receiving events.|
+ | **Aggregation Operation** | Expand **Advanced Options**. <br /> Select **AVG**. |
+
+ 1. Select **Apply**. Then, **+ Add Variable** again, and set the following values:
+
+ | Parameter | Action |
+ | | |
+ | **Name** | Enter **Avg Vibration**. |
+ | **Kind** | Select **Numeric** |
+ | **Value** | Select from preset: Select **vibration (Double)**. <br /> Note: It might take a few minutes for **Value** to be automatically populated after Azure Time Series Insights Gen2 starts receiving events.|
+ | **Aggregation Operation** | Expand **Advanced Options**. <br /> Select **AVG**. |
+
+ 1. Select **Apply**. Then, **+ Add Variable** again, and set the following values for the third and final variable:
+
+ | Parameter | Action |
+ | | |
+ | **Name** | Enter **Floor**. |
+ | **Kind** | Select **Categorical** |
+ | **Value** | Select from preset: Select **Floor (Double)**. <br /> Note: It might take a few minutes for **Value** to be automatically populated after Azure Time Series Insights Gen2 starts receiving events.|
+ | **Categories** | <span style="text-decoration: underline">Label </span> - <span style="text-decoration: underline">Values</span> <br /> Lower: 1,2,3,4 <br /> Middle: 5,6,7,8,9 <br /> Upper: 10,11,12,13,14,15 |
+ | **Default Category** | Enter **Unknown** |
+
+ [![Add type variables.](media/tutorial-set-up-environment/add-type-variables.png)](media/tutorial-set-up-environment/add-type-variables.png#lightbox)
+
+ 1. Select **Apply**.
+ 1. Select **Save**. Three variables are created and displayed.
+
+ [![After adding the type, review it in the Model view.](media/tutorial-set-up-environment/add-type-and-view.png)](media/tutorial-set-up-environment/add-type-and-view.png#lightbox)
+
+1. Select the **Hierarchies** tab. Then, select **+ Add**.
+
+ 1. In the **Edit Hierarchy** pane, set the following parameters:
+
+ | Parameter | Action |
+ | | |
+ | **Name** | Enter **Location Hierarchy**. |
+ |**Levels**| Enter **Country** as the name of the first level<br />Select **+ Add Level**<br />Enter **City** for the second level, then select **+ Add Level**<br />Enter **Building** as the name of the third and final level |
+
+ 1. Select **Save**.
+
+ [![Display your new hierarchy in the Model view.](media/tutorial-set-up-environment/add-hierarchy-and-view.png)](media/tutorial-set-up-environment/add-hierarchy-and-view.png#lightbox)
+
+1. Navigate to **Instances**.
+
+ 1. Under **Actions** on the far right, select the pencil icon to edit the first instance with the following values:
+
+ | Parameter | Action |
+ | | |
+ | **Type** | Select **Elevator**. |
+ | **Name** | Enter **Elevator 1**|
+ | **Description** | Enter **Instance for Elevator 1** |
+
+ 1. Navigate to **Instance Fields** and enter the following values:
+
+ | Parameter | Action |
+ | | |
+ | **Hierarchies** | Select **Location Hierarchy** |
+ | **Country** | Enter **USA** |
+ | **City** | Enter **Seattle** |
+ | **Building** | Enter **Space Needle** |
+
+ 1. Select **Save**.
+
+1. Repeat the previous step with the other two instances while using the following values:
+
+ **For Elevator 2:**
+
+ | Parameter | Action |
+ | | |
+ | **Type** | Select **Elevator**. |
+ | **Name** | Enter **Elevator 2**|
+ | **Description** | Enter **Instance for Elevator 2** |
+ | **Hierarchies** | Select **Location Hierarchy** |
+ | **Country** | Enter **USA** |
+ | **City** | Enter **Seattle** |
+ | **Building** | Enter **Pacific Science Center** |
+
+ **For Elevator 3:**
+
+ | Parameter | Action |
+ | | |
+ | **Type** | Select **Elevator**. |
+ | **Name** | Enter **Elevator 3**|
+ | **Description** | Enter **Instance for Elevator 3** |
+ | **Hierarchies** | Select **Location Hierarchy** |
+ | **Country** | Enter **USA** |
+ | **City** | Enter **New York** |
+ | **Building** | Enter **Empire State Building** |
+
+ [![View the updated instances.](media/tutorial-set-up-environment/iot-solution-accelerator-instances.png)](media/tutorial-set-up-environment/iot-solution-accelerator-instances.png#lightbox)
+
+1. Navigate back to the **Analyze** tab to view the charting pane. Under **Location Hierarchy**, expand all hierarchy levels to display the time series instances:
+
+ [![View all hierarchies in chart view.](media/tutorial-set-up-environment/iot-solution-accelerator-view-hierarchies.png)](media/tutorial-set-up-environment/iot-solution-accelerator-view-hierarchies.png#lightbox)
+
+1. Under **Pacific Science Center**, select the Time Series Instance **Elevator 2**, and then select **Show Average Temperature**.
+
+1. For the same instance, **Elevator 2**, select **Show Floor**.
+
+ With your categorical variable, you can determine how much time the elevator spent on the upper, lower, and middle floors.
+
+ [![Visualize Elevator 2 with hierarchy and data.](media/tutorial-set-up-environment/iot-solution-accelerator-elevator-two.png)](media/tutorial-set-up-environment/iot-solution-accelerator-elevator-two.png#lightbox)
+
+## Clean up resources
+
+Now that you've completed the tutorial, clean up the resources you created:
+
+1. From the left menu in the [Azure portal](https://portal.azure.com), select **All resources** and locate your Azure Time Series Insights Gen2 resource group.
+1. Either delete the entire resource group (and all resources contained within it) by selecting **Delete** or remove each resource individually.
+
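If you kept the tutorial resources in their own resource groups, the quickest CLI cleanup is to delete the groups outright; the group names below are placeholders.

```azurecli
# Deletes everything in each group; this is irreversible, so double-check the names first.
az group delete --name my-tsi-rg --yes --no-wait
az group delete --name device-sim-rg --yes --no-wait
```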
+## Next steps
+
+In this tutorial, you learned how to:
+
+* Create and use a device simulation accelerator.
+* Create an Azure Time Series Insights Gen2 PAYG environment.
+* Connect the Azure Time Series Insights Gen2 environment to an IoT hub.
+* Run a solution accelerator sample to stream data to the Azure Time Series Insights Gen2 environment.
+* Perform a basic analysis of the data.
+* Define a Time Series Model type and hierarchy, and associate them with your instances.
+
+Now that you know how to create your own Azure Time Series Insights Gen2 environment, learn more about the key concepts in Azure Time Series Insights Gen2.
+
+Read about Azure Time Series Insights Gen2 ingestion:
+
+> [!div class="nextstepaction"]
+> [Azure Time Series Insights Gen2 data ingestion overview](./concepts-ingestion-overview.md)
+
+Read about Azure Time Series Insights Gen2 storage:
+
+> [!div class="nextstepaction"]
+> [Azure Time Series Insights Gen2 data storage](./concepts-storage.md)
+
+Learn more about Time Series Models:
+
+> [!div class="nextstepaction"]
+> [Azure Time Series Insights Gen2 data modeling](./concepts-model-overview.md)
+
+Learn more about connecting your environment to Power BI:
+
+> [!div class="nextstepaction"]
+> [Visualize data from Azure Time Series Insights Gen2 in Power BI](./how-to-connect-power-bi.md)
virtual-desktop Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/language-packs.md
You need the following things to customize your Windows 10 Enterprise multi-sess
- [Windows 10, version 2004 or 20H2 **10C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2010C.iso) - [Windows 10, version 2004 or 20H2 **11C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2011C.iso) - [Windows 10, version 2004 or 20H2 **1C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2101C.iso)
+ - [Windows 10, version 2004 or 20H2 **2C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2102C.iso)
- An Azure Files Share or a file share on a Windows File Server Virtual Machine