Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
azure-arc | Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md | Title: "Deploy and manage Azure Arc-enabled Kubernetes cluster extensions" Previously updated : 03/08/2023 Last updated : 04/14/2023 description: "Create and manage extension instances on Azure Arc-enabled Kubernetes clusters." Before you begin, read the [conceptual overview of Arc-enabled Kubernetes cluste * [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. > [!NOTE]-> Installing Azure Arc extensions on [AKS hybrid clusters provisioned from Azure](#aks-hybrid-clusters-provisioned-from-azure-preview) is currently in preview, with support for the Azure Arc-enabled Open Service Mesh, Azure Key Vault Secrets Provider, Flux (GitOps) and Microsoft Defender for Cloud extensions. +> Installing Azure Arc extensions on [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) is currently in preview, with support for the Azure Arc-enabled Open Service Mesh, Azure Key Vault Secrets Provider, Flux (GitOps) and Microsoft Defender for Cloud extensions. ## Create extension instance The following parameters are required when using `az k8s-extension create` to cr Use one or more of these optional parameters as needed for your scenarios, along with the required parameters. +> [!NOTE] +> You can choose to automatically upgrade your extension instance to the latest minor and patch versions by setting `auto-upgrade-minor-version` to `true`, or you can instead set the version of the extension instance manually using the `--version` parameter. We recommend enabling automatic upgrades for minor and patch versions so that you always have the latest security patches and capabilities. +> +> Because major version upgrades may include breaking changes, automatic upgrades for new major versions of an extension instance aren't supported. You can choose when to [manually upgrade extension instances](#upgrade-extension-instance) to a new major version. ++ | Parameter name | Description | |--|| | `--auto-upgrade-minor-version` | Boolean property that determines whether the extension minor version is automatically upgraded. The default setting is `true`. If this parameter is set to `true`, you can't set the `version` parameter, as the version will be dynamically updated. If set to `false`, the extension won't be automatically upgraded, even for patch versions. | Use one or more of these optional parameters as needed for your scenarios, along | `--configuration-protected-settings` | Settings that aren't retrievable using `GET` API calls or `az k8s-extension show` commands. Typically used to pass in sensitive settings. These are passed in as space-separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-protected-settings-file` can't be used in the same command. | | `--configuration-protected-settings-file` | Path to a JSON file with `key=value` pairs to be used for passing sensitive settings into the extension. If this parameter is used in the command, then `--configuration-protected-settings` can't be used in the same command. | | `--release-namespace` | This parameter indicates the namespace within which the release will be created. Only relevant if `scope` is set to `cluster`. |-| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. 
If this parameter isn't set explicitly, `Stable` is used as default. This parameter can't be used when `--auto-upgrade-minor-version` is set to `false`. | +| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. If this parameter isn't set explicitly, `Stable` is used as default. | | `--target-namespace` | Indicates the namespace within which the release will be created. Permission of the system account created for this extension instance will be restricted to this namespace. Only relevant if `scope` is set to `namespace`. | ## Show extension details az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGro ] ``` +## Update extension instance ++> [!NOTE] +> Refer to documentation for the specific extension type to understand the specific settings in `--configuration-settings` and `--configuration-protected-settings` that can be updated. For `--configuration-protected-settings`, all settings are expected to be provided, even if only one setting is being updated. If any of these settings are omitted, those settings will be considered obsolete and deleted. ++To update an existing extension instance, use `k8s-extension update`, passing in values for the mandatory and optional parameters. The mandatory and optional parameters are slightly different than those used to create an extension instance. ++This example updates the `auto-upgrade-minor-version` setting for an Azure Machine Learning extension instance to `true`: ++```azurecli +az k8s-extension update --name azureml --extension-type Microsoft.AzureML.Kubernetes --scope cluster --cluster-name <clusterName> --resource-group <resourceGroupName> --auto-upgrade-minor-version true --cluster-type managedClusters +``` ++### Required parameters for update ++| Parameter name | Description | +|-|| +| `--name` | Name of the extension instance | +| `--cluster-name` | Name of the cluster on which the extension instance has to be created | +| `--resource-group` | The resource group containing the cluster | +| `--cluster-type` | The cluster type on which the extension instance has to be created. For Azure Arc-enabled Kubernetes clusters, use `connectedClusters`. For AKS clusters, use `managedClusters`.| ++### Optional parameters for update ++| Parameter name | Description | +|--|| +| `--auto-upgrade-minor-version` | Boolean property that specifies whether the extension minor version is automatically upgraded. The default setting is `true`. If this parameter is set to `true`, you can't set the `version` parameter, as the version will be dynamically updated. If set to `false`, the extension won't be automatically upgraded, even for patch versions. | +| `--version` | Version of the extension to be installed (specific version to pin the extension instance to). Must not be supplied if `auto-upgrade-minor-version` is set to `true`. | +| `--configuration-settings` | Settings that can be passed into the extension to control its functionality. These are passed in as space-separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-settings-file` can't be used in the same command. Only the settings that require an update need to be provided. The provided settings will be replaced with the specified values. | +| `--configuration-settings-file` | Path to the JSON file with `key=value` pairs to be used for passing in configuration settings to the extension. 
If this parameter is used in the command, then `--configuration-settings` can't be used in the same command. | +| `--configuration-protected-settings` | Settings that aren't retrievable using `GET` API calls or `az k8s-extension show` commands. Typically used to pass in sensitive settings. These are passed in as space-separated `key=value` pairs after the parameter name. If this parameter is used in the command, then `--configuration-protected-settings-file` can't be used in the same command. When you update a protected setting, all of the protected settings are expected to be specified. If any of these settings are omitted, those settings will be considered obsolete and deleted. | +| `--configuration-protected-settings-file` | Path to a JSON file with `key=value` pairs to be used for passing in sensitive settings to the extension. If this parameter is used in the command, then `--configuration-protected-settings` can't be used in the same command. | +| `--scope` | Scope of installation for the extension - `cluster` or `namespace`. | +| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. If this parameter isn't set explicitly, `Stable` is used as default. | ++## Upgrade extension instance ++As noted earlier, if you set `auto-upgrade-minor-version` to `true`, the extension will automatically be upgraded when a new minor version is released. For most scenarios, we recommend enabling automatic upgrades. If you set `auto-upgrade-minor-version` to `false`, you'll have to upgrade the extension manually if you want a newer version. ++Manual upgrades are also required to get a new major version of an extension. You can choose when to upgrade in order to avoid any unexpected breaking changes with major version upgrades. ++To manually upgrade an extension instance, use `k8s-extension update` and set the `version` parameter to specify a version. ++This example updates an Azure Machine Learning extension instance to version x.y.z: ++```azurecli +az k8s-extension update --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters --name azureml --version x.y.z +``` + ## Delete extension instance -To delete an extension instance on a cluster, use `k8s-extension delete`, passing in values for the mandatory parameters. +To delete an extension instance on a cluster, use `k8s-extension delete`, passing in values for the mandatory parameters: ```azurecli az k8s-extension delete --name azuremonitor-containers --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters |
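To complement the `az k8s-extension update` and `az k8s-extension delete` examples in the row above, here is a minimal sketch of `az k8s-extension create`, using the Flux (GitOps) extension type named in the row; the cluster and resource group names are placeholders:

```azurecli
# Create a cluster-scoped Flux (GitOps) extension instance with automatic minor/patch upgrades
az k8s-extension create --name flux --extension-type microsoft.flux --scope cluster --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters --auto-upgrade-minor-version true
```

Because `--auto-upgrade-minor-version` defaults to `true`, the `--version` parameter is omitted here; to pin a specific version instead, pass `--auto-upgrade-minor-version false --version <x.y.z>`.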
azure-monitor | Autoscale Predictive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md | Title: Use predictive autoscale to scale out before load demands in virtual machine scale sets (preview) + Title: Use predictive autoscale to scale out before load demands in virtual machine scale sets description: This article provides information on the new predictive autoscale feature in Azure Monitor. |
azure-monitor | Logs Data Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md | If the data export rule includes an unsupported table, the configuration will su | ACSRoomsIncomingOperations | | | ACSSMSIncomingOperations | | | ADAssessmentRecommendation | |+| AddonAzureBackupAlerts | | +| AddonAzureBackupJobs | | +| AddonAzureBackupPolicy | | +| AddonAzureBackupProtectedInstance | | +| AddonAzureBackupStorage | | | ADFActivityRun | |+| ADFAirflowSchedulerLogs | | | ADFAirflowTaskLogs | |+| ADFAirflowWebLogs | | | ADFAirflowWorkerLogs | | | ADFPipelineRun | |+| ADFSandboxActivityRun | | +| ADFSandboxPipelineRun | | | ADFSSignInLogs | |+| ADFSSISIntegrationRuntimeLogs | | +| ADFSSISPackageEventMessageContext | | +| ADFSSISPackageEventMessages | | +| ADFSSISPackageExecutableStatistics | | +| ADFSSISPackageExecutionComponentPhases | | +| ADFSSISPackageExecutionDataStatistics | | | ADFTriggerRun | | | ADPAudit | | | ADPDiagnostics | | If the data export rule includes an unsupported table, the configuration will su | ADTModelsOperation | | | ADTQueryOperation | | | ADXCommand | |+| ADXJournal | | | ADXQuery | |+| ADXTableDetails | | +| ADXTableUsageStatistics | | | AegDataPlaneRequests | | | AegDeliveryFailureLogs | | | AegPublishFailureLogs | | If the data export rule includes an unsupported table, the configuration will su | AgriFoodWeatherLogs | | | AGSGrafanaLoginEvents | | | AHDSMedTechDiagnosticLogs | |+| AirflowDagProcessingLogs | | +| AKSAudit | | +| AKSAuditAdmin | | +| AKSControlPlane | | | Alert | Partial support. Data ingestion for Zabbix alerts isn't supported. | | AlertEvidence | | | AlertInfo | | If the data export rule includes an unsupported table, the configuration will su | AMSMediaAccountHealth | | | AMSStreamingEndpointRequests | | | ANFFileAccess | |+| Anomalies | | | ApiManagementGatewayLogs | | | AppAvailabilityResults | | | AppBrowserTimings | | If the data export rule includes an unsupported table, the configuration will su | CDBPartitionKeyRUConsumption | | | CDBPartitionKeyStatistics | | | CDBQueryRuntimeStatistics | |+| ChaosStudioExperimentEventLogs | | | CIEventsAudit | | | CIEventsOperational | | | CloudAppEvents | | If the data export rule includes an unsupported table, the configuration will su | EmailEvents | | | EmailPostDeliveryEvents | | | EmailUrlInfo | |-| Event | Partial support. Data arriving from the Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. | +| Event | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. | | ExchangeAssessmentRecommendation | | | ExchangeOnlineAssessmentRecommendation | | | FailedIngestion | | If the data export rule includes an unsupported table, the configuration will su | Heartbeat | | | HuntingBookmark | | | IdentityDirectoryEvents | |+| IdentityInfo | | | IdentityLogonEvents | | | IdentityQueryEvents | | | InsightsMetrics | Partial support. Some of the data is ingested through internal services that aren't supported in export. Currently, this portion is missing in export. 
| If the data export rule includes an unsupported table, the configuration will su | KubeMonAgentEvents | | | KubeNodeInventory | | | KubePodInventory | |+| KubePVInventory | | | KubeServices | | | LAQueryLogs | | | LogicAppWorkflowRuntime | | If the data export rule includes an unsupported table, the configuration will su | MicrosoftHealthcareApisAuditLogs | | | MicrosoftPurviewInformationProtection | | | NetworkAccessTraffic | |+| NetworkMonitoring | | | NSPAccessLogs | | | NTAIpDetails | | | NTANetAnalytics | | If the data export rule includes an unsupported table, the configuration will su | OLPSupplyChainEntityOperations | | | OLPSupplyChainEvents | | | Operation | Partial support. Some of the data is ingested through internal services that aren't supported in export. Currently, this portion is missing in export. |-| Perf | Partial support. Only Windows perf data is currently supported. Currently, the Linux perf data is missing in export. | +| Perf | | | PFTitleAuditLogs | | | PowerBIActivity | | | PowerBIAuditTenant | | If the data export rule includes an unsupported table, the configuration will su | SQLSecurityAuditEvents | | | SqlVulnerabilityAssessmentScanStatus | | | StorageAntimalwareScanResults | |+| StorageBlobLogs | | | StorageCacheOperationEvents | | | StorageCacheUpgradeEvents | | | StorageCacheWarningEvents | |+| StorageFileLogs | | | StorageMalwareScanningResults | | | StorageMoverCopyLogsFailed | | | StorageMoverCopyLogsTransferred | | | StorageMoverJobRunLogs | |+| StorageQueueLogs | | +| StorageTableLogs | | | SucceededIngestion | | | SynapseBigDataPoolApplicationsEnded | | | SynapseBuiltinSqlPoolRequestsEnded | | If the data export rule includes an unsupported table, the configuration will su | TSIIngress | | | UCClient | | | UCDOAggregatedStatus | |+| UCClientReadinessStatus | | +| UCClientUpdateStatus | | +| UCDeviceAlert | | | UCDOStatus | |+| UCServiceUpdateStatus | | | Update | Partial support. Some of the data is ingested through internal services that aren't supported in export. Currently, this portion is missing in export. | | UpdateRunProgress | | | UpdateSummary | | If the data export rule includes an unsupported table, the configuration will su | UserPeerAnalytics | | | VIAudit | | | VIIndexing | |+| W3CIISLog | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. | | WaaSDeploymentStatus | | | WaaSInsiderStatus | | | WaaSUpdateStatus | |-| W3CIISLog | Partial support. Data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via Diagnostics extension agent is collected through storage while this path isn't supported in export. | | Watchlist | | | WebPubSubConnectivity | | | WebPubSubHttpRequest | | |
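As a companion to the supported-tables list above, here is a hedged sketch of creating a Log Analytics data export rule that includes two of the newly supported storage tables; the resource group, workspace, rule name, and destination resource ID are placeholders:

```azurecli
# Export StorageBlobLogs and StorageFileLogs from a Log Analytics workspace to a storage account
az monitor log-analytics workspace data-export create --resource-group <resourceGroupName> --workspace-name <workspaceName> --name <exportRuleName> --tables StorageBlobLogs StorageFileLogs --destination <storageAccountResourceId>
```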
confidential-computing | Use Cases Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/use-cases-scenarios.md | Partnered health facilities contribute private health data sets to train an ML m  +### Protecting privacy with IoT and smart-building solutions ++Many countries have strict privacy laws about gathering and using data on people's presence and movements inside buildings. This may include data that is directly personally identifiable, such as CCTV footage or security badge swipes, or indirectly identifiable, where different sets of sensor data could be considered personally identifiable when grouped together. ++Privacy needs to be balanced with cost and environmental needs, where organizations are keen to understand occupancy/movement in order to provide the most efficient use of energy to heat and light a building. ++Determining which areas of corporate real estate are under- or over-occupied by staff from individual departments typically requires processing some personally identifiable data alongside less individual data like temperature and light sensors. ++In this use case, the primary goal is allowing analysis of occupancy data and temperature sensors to be processed alongside CCTV motion-tracking sensors and badge-swipe data to understand usage without exposing the raw aggregate data to anyone. ++Confidential computing is used here by placing the analysis application (in this example running on Confidential Container Instances) inside a trusted execution environment where the in-use data is protected by encryption. ++The aggregate data sets from many types of sensor and data feed are managed in an Azure SQL Always Encrypted with enclaves database, which protects in-use queries by encrypting them in memory. This prevents a server administrator from being able to access the aggregate data set while it is being queried and analyzed. ++ ## Enhanced customer data privacy Although the security level provided by Microsoft Azure is quickly becoming one of the top drivers for cloud computing adoption, customers trust their provider to different extents. Customers ask for: Confidential computing goes in this direction by allowing customers incremental ### Data sovereignty -In Government and public agencies, Azure confidential computing is a solution to raise the degree of trust towards the ability to protect data sovereignty in the public cloud. Moreover, thanks to the increasing adoption of confidential computing capabilities into PaaS services in Azure, a higher degree of trust can be achieved with a reduced impact to the innovation ability provided by public cloud services. This combination of protecting data sovereignity with a reduced impact to the innovation ability makes Azure confidential computing a very effective response to the needs of sovereignty and digital transformation of Government services. +In Government and public agencies, Azure confidential computing is a solution to raise the degree of trust towards the ability to protect data sovereignty in the public cloud. Moreover, thanks to the increasing adoption of confidential computing capabilities into PaaS services in Azure, a higher degree of trust can be achieved with a reduced impact to the innovation ability provided by public cloud services. This combination of protecting data sovereignty with a reduced impact to the innovation ability makes Azure confidential computing a very effective response to the needs of sovereignty and digital transformation of Government services. 
### Reduced chain of trust |
databox | Data Box Deploy Copy Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data.md | If using a Windows Server host computer, follow these steps to connect to the Da If using a Linux client, use the following command to mount the SMB share. The "vers" parameter below is the version of SMB that your Linux host supports. Plug in the appropriate version in the command below. For versions of SMB that the Data Box supports, see [Supported file systems for Linux clients](./data-box-system-requirements.md#supported-file-transfer-protocols-for-clients). ```console-sudo mount -t nfs -o vers=2.1 10.126.76.138:/utsac1_BlockBlob /home/databoxubuntuhost/databox +sudo mount -t cifs -o vers=2.1 //10.126.76.138/utsac1_BlockBlob /home/databoxubuntuhost/databox ``` ## Copy data to Data Box |
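To sanity-check the corrected CIFS mount above before copying data, a short sketch that assumes the same share address and mount point as the row (your Data Box share credentials may also be required, for example via `-o username=...`):

```console
# Create the mount point, mount the Data Box SMB share over CIFS, and confirm it's mounted
sudo mkdir -p /home/databoxubuntuhost/databox
sudo mount -t cifs -o vers=2.1 //10.126.76.138/utsac1_BlockBlob /home/databoxubuntuhost/databox
df -h /home/databoxubuntuhost/databox
```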
defender-for-cloud | Integration Defender For Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md | You'll deploy Defender for Endpoint to your Linux machines in one of these ways, - Enable for multiple subscriptions with a PowerShell script > [!NOTE]-> When you enable automatic deployment, Defender for Endpoint for Linux installation will abort on machines with pre-existing security solutions using [fanotify](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux#system-requirements). +> When you enable automatic deployment, Defender for Endpoint for Linux installation will abort on machines with pre-existing running services using [fanotify](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux#system-requirements) and other services that can also cause MDE to malfunction or may be affected by MDE, such as security services. > After you validate potential compatibility issues, we recommend that you manually install Defender for Endpoint on these servers. ##### Existing users with Defender for Cloud's enhanced security features enabled and Microsoft Defender for Endpoint for Windows |
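To help with the compatibility validation the note above recommends, one possible way to spot existing fanotify consumers on a Linux machine is to scan `/proc`; this is a sketch assuming procfs and root access, not an official Defender for Endpoint check:

```console
# List processes that currently hold fanotify file descriptors (run as root)
for pid in $(grep -l fanotify /proc/*/fdinfo/* 2>/dev/null | cut -d/ -f3 | sort -u); do
  ps -p "$pid" -o pid=,comm=
done
```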
defender-for-cloud | Support Matrix Defender For Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-cloud.md | Defender for Cloud provides recommendations, security alerts, and vulnerability |Azure Database for MySQL*|-|✔|-| |Azure Database for PostgreSQL*|-|✔|-| |Azure Event Hubs namespace|✔|-|-|+|Azure Files|✔|✔|-| |Azure Functions app|✔|-|-| |Azure Key Vault|✔|✔|-| |Azure Kubernetes Service|✔|✔|-| Defender for Cloud provides recommendations, security alerts, and vulnerability |Azure SQL Managed Instance|✔|✔|[Defender for Azure SQL](defender-for-sql-introduction.md)| |Azure Service Bus namespace|✔|-|-| |Azure Service Fabric account|✔|-|-|-|Azure Storage accounts|✔|✔|-| |Azure Stream Analytics|✔|-|-| |Azure Subscription|✔ **|✔|-| |Azure Virtual Network<br> (incl. subnets, NICs, and network security groups)|✔|-|-| To learn more about the specific Defender for Cloud features available on Window This article explained how Microsoft Defender for Cloud is supported in the Azure, Azure Government, and Azure China 21Vianet clouds. Now that you're familiar with the Defender for Cloud capabilities supported in your cloud, learn how to: - [Manage security recommendations in Defender for Cloud](review-security-recommendations.md)-- [Manage and respond to security alerts in Defender for Cloud](managing-and-responding-alerts.md)+- [Manage and respond to security alerts in Defender for Cloud](managing-and-responding-alerts.md) |
defender-for-iot | Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md | Defender for IoT network sensors analyze ingested data using built-in analytics Analytics engines provide machine learning and profile analytics, risk analysis, a device database and set of insights, threat intelligence, and behavioral analytics. -For example, the **policy violation detection** engine models industry control system (ICS) networks and alerts users of any deviation from baseline behavior. Deviations might include unauthorized use of specific function codes, access to specific objects, or changes to device configuration. +As an example, the **policy violation detection engine** models industrial control systems (ICS) networks in order to detect deviations from the expected "baseline" behavior by utilizing Behavioral Anomaly Detection (BAD) as outlined in NISTIR 8219. This baseline is developed by understanding the regular activities that take place on the network, such as normal traffic patterns, user actions, and accesses to the ICS network. The BAD system then monitors the network for any deviation from the expected behavior and flags any policy violations. Examples of baseline deviations include the unauthorized use of function codes, access to specific objects, or changes to the configuration of a device. Since many detection algorithms were built for IT, rather than OT, networks, the extra baseline for ICS networks helps to shorten the system's learning curve for new detections. |
healthcare-apis | Deploy New Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-arm.md | When deployment is completed, the following resources and access roles are creat After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings. + - To learn about the device mapping, see [Overview of the MedTech service device mapping](overview-of-device-mapping.md). + - To learn about the FHIR destination mapping, see [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md). ## Next steps |
healthcare-apis | Deploy New Bicep Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-bicep-powershell-cli.md | When deployment is completed, the following resources and access roles are creat After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings. -- To learn about the device mapping, see [How to configure the device mapping](how-to-configure-device-mappings.md).+- To learn about the device mapping, see [Overview of the device mapping](overview-of-device-mapping.md). -- To learn about the FHIR destination mapping, see [How to configure the FHIR destination mapping](how-to-configure-fhir-mappings.md).+- To learn about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md). ## Clean up Azure PowerShell deployed resources |
healthcare-apis | Deploy New Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-config.md | For Azure docs information about the device mapping, see [How to configure the M ## Configure the Destination tab -In order to configure the destination mapping tab, you can use the IoMT Connector Data Mapper tool to visualize, edit, and test the destination mapping. This open source tool is available from [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper). You need to configure destination mapping so that your instance of MedTech service can send and receive data to and from the FHIR service. +In order to configure the **Destination** tab, you can use the [Mapping debugger](how-to-use-mapping-debugger.md) tool to create, edit, and test the FHIR destination mapping. You need to configure FHIR destination mapping so that your instance of MedTech service can send transformed device data to the FHIR service. -To begin configuring destination mapping, go to the Create MedTech service page and select the **Destination mapping** tab. There are two parts of the tab you must fill out: +To begin configuring FHIR destination mapping, go to the **Create** MedTech service page and select the **Destination mapping** tab. There are two parts of the tab you must fill out: 1. Destination properties 2. JSON template request Under the **Destination** tab, use these values to enter the destination propert For more information regarding destination mapping, see the FHIR service GitHub documentation at [FHIR mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#fhir-mapping). -For Azure docs information about destination mapping, see [How to use FHIR destination mappings](how-to-configure-fhir-mappings.md). +For Azure docs information about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md). ### JSON template request Before you can complete the FHIR destination mapping, you must get a FHIR destination mapping code. Follow these four steps: -1. Go to the [IoMT Connector Data Mapper Tool](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) and get the JSON template for your FHIR destination. +1. Go to the [Mapping debugger](how-to-use-mapping-debugger.md) and get the JSON template for your FHIR destination. 1. Go back to the Destination tab of the Create MedTech service page. 1. Go to the large box below the boxes for FHIR server name, Destination name, and Resolution type. Enter the JSON template request in that box. 1. You'll then receive the FHIR Destination mapping code, which will be saved as part of your configuration. |
healthcare-apis | Deploy New Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-powershell-cli.md | -In this quickstart, you'll learn how use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using an Azure Resource Manager template (ARM template). +In this quickstart, you'll learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using an Azure Resource Manager template (ARM template). > [!TIP] > To learn more about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md) When deployment is completed, the following resources and access roles are creat After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings. + - To learn about the device mapping, see [Overview of the MedTech service device mapping](overview-of-device-mapping.md). + - To learn about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md). ## Clean up Azure PowerShell resources |
healthcare-apis | Frequently Asked Questions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/frequently-asked-questions.md | The MedTech service buffers [FHIR Observations](https://www.hl7.org/fhir/observa |Potential issue|Fix| ||| |Data is still being processed.|Data is egressed to the FHIR service in batches (every ~five minutes). It's possible the data is still being processed and extra time is needed for the data to be persisted in the FHIR service.|-|Device mapping hasn't been configured.|Configure and save a conforming and valid [device mapping](how-to-configure-device-mappings.md).| -|FHIR destination mapping hasn't been configured.|Configure and save a conforming and valid [FHIR destination mapping](how-to-configure-fhir-mappings.md).| -|The device message doesn't contain an expected expression defined in the device mapping.|Verify the [JsonPath](https://goessner.net/articles/JsonPath/) or [JMESPath](https://jmespath.org/specification.html) expressions defined in the [device mapping](how-to-configure-device-mappings.md) match tokens defined in the device message.| +|Device mapping hasn't been configured.|Configure and save a conforming and valid [device mapping](overview-of-device-mapping.md).| +|FHIR destination mapping hasn't been configured.|Configure and save a conforming and valid [FHIR destination mapping](overview-of-fhir-destination-mapping.md).| +|The device message doesn't contain an expected expression defined in the device mapping.|Verify the [JsonPath](https://goessner.net/articles/JsonPath/) or [JMESPath](https://jmespath.org/specification.html) expressions defined in the [device mapping](overview-of-device-mapping.md) match tokens defined in the device message.| |A Device resource hasn't been created in the FHIR service (**Resolution type**: **Lookup** only)*.|Create a valid [Device resource](https://www.hl7.org/fhir/device.html) in the FHIR service. Ensure the Device resource contains an identifier that matches the device identifier provided in the incoming message.| |A Patient resource hasn't been created in the FHIR service (**Resolution type**: **Lookup** only)*.|Create a valid [Patient resource](https://www.hl7.org/fhir/patient.html) in the FHIR service.| |The Device.patient reference isn't set, or the reference is invalid (**Resolution type**: **Lookup** only)*.|Make sure the Device resource contains a valid [reference](https://www.hl7.org/fhir/device-definitions.html#Device.patient) to a Patient resource.| For an overview of the MedTech service, see To learn about the MedTech service device message data transformation, see > [!div class="nextstepaction"]-> [Understand the MedTech service device message processing stages](overview.md) +> [Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md) To learn about methods for deploying the MedTech service, see |
healthcare-apis | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md | For more information on the MedTech service device data transformation, see [Ove ## Step 6: Verify the processed device data -You can verify that the device data was processed correctly by checking to see if there's now a new Observation resource in the FHIR service. If the device data isn't mapped or if the mapping isn't authored properly, the device data will be skipped. If there are any problems, check the [device mapping](overview-of-device-mapping.md) or the [FHIR destination mapping](how-to-configure-fhir-mappings.md). +You can verify that the device data was processed correctly by checking to see if there's now a new Observation resource in the FHIR service. If the device data isn't mapped or if the mapping isn't authored properly, the device data will be skipped. If there are any problems, check the [device mapping](overview-of-device-mapping.md) or the [FHIR destination mapping](overview-of-fhir-destination-mapping.md). ### Metrics |
healthcare-apis | Git Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/git-projects.md | Check out our open-source projects on GitHub that provide source code and instru * [microsoft/iomt-fhir](https://github.com/microsoft/iomt-fhir): Open-source version of the Azure Health Data Services MedTech service managed service. Can be used with any FHIR service that supports [FHIR R4®](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491) -### Device and FHIR destination mappings --* [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper): Tool for editing, testing, and troubleshooting MedTech service device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the open-source version. - ### Wearables integration Fitbit |
healthcare-apis | How To Use Calculatedcontent Mappings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-calculatedcontent-mappings.md | CalculatedContent mappings allow matching on, and extracting values from, an Azu |`PatientIdExpression`|The expression to extract the patient identifier. *Required* when `IdentityResolution` is in `Create` mode, and *optional* when `IdentityResolution` is in `Lookup` mode.|`$.matchedToken.patientId`| |`EncounterIdExpression`|*Optional*: The expression to extract the encounter identifier.|`$.matchedToken.encounterId`| |`CorrelationIdExpression`|*Optional*: The expression to extract the correlation identifier. You can use this output to group values into a single observation in the FHIR destination mappings.|`$.matchedToken.correlationId`|-|`Values[].ValueName`|The name to associate with the value that the next expression extracts. Used to bind the wanted value or component in the FHIR destination-mapping template.|`hr`| +|`Values[].ValueName`|The name to associate with the value that the next expression extracts. Used to bind the wanted value or component in the FHIR destination mapping.|`hr`| |`Values[].ValueExpression`|The expression to extract the wanted value.|`$.matchedToken.heartRate`| |`Values[].Required`|Requires the value to be present in the payload. If the MedTech service doesn't find the value, it won't generate a measurement, and it will create an `InvalidOperationException` instance.|`true`| When you're specifying the language to use for the expression, the following val ## Custom functions -A set of custom functions for the MedTech service is also available. The MedTech service custom functions are outside the functions provided as part of the JMESPath specification. For more information on the MedTech service custom functions, see [How to use custom functions with device mappings](how-to-use-custom-functions.md). +A set of custom functions for the MedTech service is also available. The MedTech service custom functions are outside the functions provided as part of the JMESPath specification. For more information on the MedTech service custom functions, see [How to use custom functions](how-to-use-custom-functions.md). ## Matched token In this article, you learned how to configure MedTech service device mappings by To learn how to configure FHIR destination mappings, see: > [!div class="nextstepaction"]-> [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md) +> [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office, and is used with permission. |
healthcare-apis | How To Use Custom Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md | -Many functions are available when using **JMESPath** as the expression language. Besides the functions available as part of the JMESPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](how-to-configure-device-mappings.md) during the device message [normalization](understand-service.md#normalize) process. +Many functions are available when using **JMESPath** as the expression language. Besides the functions available as part of the JMESPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](overview-of-device-mapping.md) during the device message [normalization](overview-of-device-data-processing-stages.md#normalize) process. > [!TIP] > For more information on JMESPath functions, see the [JMESPath specification](https://jmespath.org/specification.html#built-in-functions). In this article, you learned how to use the MedTech service custom functions wit To learn how to configure the MedTech service device mappings, see > [!div class="nextstepaction"]-> [How to configure device mappings](how-to-configure-device-mappings.md) +> [Overview of the MedTech service device mapping](overview-of-device-mapping.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | How To Use Iotjsonpathcontent Mappings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iotjsonpathcontent-mappings.md | -This article describes how to use IoTJsonPathContent mappings with the MedTech service [device mappings](overview-of-device-mapping.md). +This article describes how to use IoTJsonPathContent mappings with the MedTech service [device mapping](overview-of-device-mapping.md). ## IotJsonPathContent With each of these examples, you're provided with: > [!NOTE] > The IoT Hub enriches the device message before sending it to the MedTech service device event hub with all properties starting with `iothub`. For example: `iothub-creation-time-utc`. >-> `patientIdExpression` is only required for MedTech services in the **Create** mode, however, if **Lookup** is being used, a Device resource with a matching Device Identifier must exist in the FHIR service. These examples assume your MedTech service is in a **Create** mode. For more information on the **Create** and **Lookup** **Destination properties**, see [Configure Destination properties](deploy-05-new-config.md#destination-properties). +> `patientIdExpression` is only required for MedTech services in the **Create** mode, however, if **Lookup** is being used, a Device resource with a matching Device Identifier must exist in the FHIR service. These examples assume your MedTech service is in a **Create** mode. For more information on the **Create** and **Lookup** **Destination properties**, see [Configure Destination properties](deploy-new-config.md#destination-properties). ```json With each of these examples, you're provided with: > [!NOTE] > The IoT Hub enriches the device message before sending it to the MedTech service device event hub with all properties starting with `iothub`. For example: `iothub-creation-time-utc`. >-> `patientIdExpression` is only required for MedTech services in the **Create** mode, however, if **Lookup** is being used, a Device resource with a matching Device Identifier must exist in the FHIR service. These examples assume your MedTech service is in a **Create** mode. For more information on the **Create** and **Lookup** **Destination properties**, see [Configure Destination properties](deploy-05-new-config.md#destination-properties). +> `patientIdExpression` is only required for MedTech services in the **Create** mode, however, if **Lookup** is being used, a Device resource with a matching Device Identifier must exist in the FHIR service. These examples assume your MedTech service is in a **Create** mode. For more information on the **Create** and **Lookup** **Destination properties**, see [Configure Destination properties](deploy-new-config.md#destination-properties). ```json In this article, you learned how to use IotJsonPathContent mappings with the Med To learn how to configure the MedTech service FHIR destination mapping, see > [!div class="nextstepaction"]-> [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md) +> [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md | The MedTech service delivers your device data into FHIR service, ensuring that y ### Configurable -The MedTech service can be customized and configured by using [device](overview-of-device-mapping.md) and [FHIR destination](how-to-configure-fhir-mappings.md) mappings to define the filtering and transformation of your data into FHIR Observations. +The MedTech service can be customized and configured by using [device](overview-of-device-mapping.md) and [FHIR destination](overview-of-fhir-destination-mapping.md) mappings to define the filtering and transformation of your data into FHIR Observations. Useful options could include: |
healthcare-apis | Troubleshoot Errors Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-deployment.md | Here's a list of errors that can be found in the Azure Resource Manager (ARM) AP **Fix**: Set the `location` property of the FHIR destination in your ARM template to the same value as the parent MedTech service's `location` property. > [!NOTE]-> If you're not able to fix your MedTech service issue using this troubleshooting guide, you can open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket attaching copies of your device message, [device and FHIR destination mappings](how-to-use-mapping-debugger.md#overview-of-the-mapping-debugger) to your request to better help with issue determination. +> If you're not able to fix your MedTech service issue using this troubleshooting guide, you can open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket attaching copies of your device message and [device and FHIR destination mappings](how-to-use-mapping-debugger.md#overview-of-the-mapping-debugger) to your request to better help with issue determination. ## Next steps |
healthcare-apis | Troubleshoot Errors Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-logs.md | The expression and line with the error are specified in the error message. **Fix**: On the Azure portal, go to your FHIR service, and assign the **FHIR Data Writer** role to your MedTech service (see [step-by-step instructions](deploy-new-deploy.md#grant-access-to-the-fhir-service)). > [!NOTE]-> If you're not able to fix your MedTech service issue using this troubleshooting guide, you can open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket attaching copies of your device message, [device and FHIR destination mappings](how-to-use-mapping-debugger.md#overview-of-the-mapping-debugger) to your request to better help with issue determination. +> If you're not able to fix your MedTech service issue using this troubleshooting guide, you can open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket attaching copies of your device message and [device and FHIR destination mappings](how-to-use-mapping-debugger.md#overview-of-the-mapping-debugger) to your request to better help with issue determination. ## Next steps |
iot-hub | Iot Hub Bulk Identity Mgmt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-bulk-identity-mgmt.md | -Each IoT hub has an identity registry you can use to create per-device resources in the service. The identity registry also enables you to control access to the device-facing endpoints. This article describes how to import and export device identities in bulk to and from an identity registry. To see a working sample in C# and learn how you can use this capability when cloning a hub to a different region, see [How to Clone an IoT Hub](iot-hub-how-to-clone.md). +Each IoT hub has an identity registry you can use to create per-device resources in the service. The identity registry also enables you to control access to the device-facing endpoints. This article describes how to import and export device identities in bulk to and from an identity registry. To see a working sample in C# and learn how you can use this capability when migrating an IoT hub to a different region, see [How to migrate an IoT Hub using Azure Resource Manager templates](migrate-hub-arm.md). > [!NOTE] > IoT Hub has recently added virtual network support in a limited number of regions. This feature secures import and export operations and eliminates the need to pass keys for authentication. Initially, virtual network support is available only in these regions: *WestUS2*, *EastUS*, and *SouthCentralUS*. To learn more about virtual network support and the API calls to implement it, see [IoT Hub Support for virtual networks](virtual-network-support.md). static string GetContainerSasUri(CloudBlobContainer container) ## Next steps -In this article, you learned how to perform bulk operations against the identity registry in an IoT hub. Many of these operations, including how to move devices from one hub to another, are used in the [Managing devices registered to the IoT hub section of How to Clone an IoT Hub](iot-hub-how-to-clone.md#managing-the-devices-registered-to-the-iot-hub). +In this article, you learned how to perform bulk operations against the identity registry in an IoT hub. Many of these operations, including how to move devices from one hub to another, are used in the **Manage devices registered to the IoT hub** section of [How to migrate an IoT hub using Azure Resource Manager templates](migrate-hub-arm.md#manage-the-devices-registered-to-the-iot-hub). -The cloning article has a working sample associated with it, which is located in the IoT C# samples on this page: [Azure IoT hub service samples for C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/how%20to%20guides), with the project being ImportExportDevicesSample. You can download the sample and try it out; there are instructions in the [How to Clone an IoT Hub](iot-hub-how-to-clone.md) article. +The migration article has a working sample associated with it, which is located in the IoT C# samples on this page: [Azure IoT hub service samples for C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/how%20to%20guides), with the project being ImportExportDevicesSample. |
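Besides the C# ImportExportDevicesSample, the same bulk export/import flow can be driven from the Azure CLI; a sketch assuming the `azure-iot` extension and blob container SAS URIs with appropriate access (all names are placeholders):

```azurecli
# Export device identities (with keys) from the source hub, then import them into the target hub
az extension add --name azure-iot
az iot hub device-identity export --hub-name <sourceHubName> --blob-container-uri "<containerSasUri>" --include-keys
az iot hub device-identity import --hub-name <targetHubName> --input-blob-container-uri "<containerSasUri>" --output-blob-container-uri "<outputContainerSasUri>"
```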
iot-hub | Iot Hub Ha Dr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ha-dr.md | Failover capability won't be available if you disable disaster recovery for an I :::image type="content" source="media/iot-hub-ha-dr/disaster-recovery-disabled.png" alt-text="Screenshot that shows disaster recovery disabled for an IoT hub in Singapore region."::: -You can only disable disaster recovery to avoid data replication outside of the paired region in Brazil South or Southeast Asia when you create an IoT hub. If you want to configure your existing IoT hub to disable disaster recovery, you need to create a new IoT hub with disaster recovery disabled and manually migrate your existing IoT hub. For guidance, see [How to clone an Azure IoT Hub to another region](iot-hub-how-to-clone.md). This article contains advice about migrating routes, custom endpoints, and other IoT Hub artifacts when migrating to a new IoT hub. You can ignore concerns that have to do with migrating across regions. +You can only disable disaster recovery to avoid data replication outside of the paired region in Brazil South or Southeast Asia when you create an IoT hub. If you want to configure your existing IoT hub to disable disaster recovery, you need to create a new IoT hub with disaster recovery disabled and manually migrate your existing IoT hub. For guidance, see [How to migrate an IoT hub](migrate-hub-state-cli.md). ## Achieve cross region HA |
iot-hub | Iot Hub How To Clone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-clone.md | - Title: How to clone an Azure IoT hub -description: How to clone an Azure IoT hub ----- Previously updated : 12/09/2019-# Customer intent: As a customer using IoT Hub, I need to clone my IoT hub to another region. ---# How to clone an Azure IoT hub to another region --This article explores ways to clone an IoT Hub and provides some questions you need to answer before you start. Here are several reasons you might want to clone an IoT hub: - -* You're moving your company from one region to another, such as from Europe to North America (or vice versa), and you want your resources and data to be geographically close to your new location, so you need to move your hub. --* You're setting up a hub for a development versus production environment. --* You want to do a custom implementation of multi-hub high availability. For more information, see the [How to achieve cross region HA section of IoT Hub high availability and disaster recovery](iot-hub-ha-dr.md#achieve-cross-region-ha). --* You want to increase the number of [partitions](iot-hub-scaling.md#partitions) configured for your hub. This number is set when you first create your hub, and can't be changed. You can use the information in this article to clone your hub and when the clone is created, increase the number of partitions. --To clone a hub, you need a subscription with administrative access to the original hub. You can put the new hub in a new resource group and region, in the same subscription as the original hub, or even in a new subscription. You just can't use the same name because the hub name has to be globally unique. --> [!NOTE] -> At this time, there's no feature available for cloning an IoT hub automatically. It's primarily a manual process, and thus is fairly error-prone. The complexity of cloning a hub is directly proportional to the complexity of the hub. For example, cloning an IoT hub with no message routing is fairly simple. If you add message routing as just one complexity, cloning the hub becomes at least an order of magnitude more complicated. If you also move the resources used for routing endpoints, it's another order of magnitude more complicated. --## Things to consider --There are several things to consider before cloning an IoT hub. --* Make sure that all of the features available in the original location are also available in the new location. Some services are in preview, and not all features are available everywhere. --* Don't remove the original resources before creating and verifying the cloned version. Once you remove a hub, it's gone forever, and there's no way to recover it to check the settings or data to make sure the hub is replicated correctly. --* Many resources require globally unique names, so you must use different names for the cloned versions. You also should use a different name for the resource group to which the cloned hub belongs. --* Data for the original IoT hub isn't migrated. This data includes device messages, cloud-to-device (C2D) commands, and job-related information such as schedules and history. Metrics and logging results are also not migrated. --* For data or messages routed to Azure Storage, you can leave the data in the original storage account, transfer that data to a new storage account in the new region, or leave the old data in place and create a new storage account in the new location for the new data. 
For more information on moving data in Blob storage, see [Get started with AzCopy](../storage/common/storage-use-azcopy-v10.md). --* Data for Event Hubs and for Service Bus Topics and Queues can't be migrated. This data is point-in-time data and isn't stored after the messages are processed. --* You need to schedule downtime for the migration. Cloning the devices to the new hub takes time. If you use the Import/Export method, benchmark testing has revealed that it could take around two hours to move 500,000 devices, and four hours to move a million devices. --* You can copy the devices to the new hub without shutting down or changing the devices. -- * If the devices were originally provisioned using DPS, re-provisioning them updates the connection information stored in each device. - - * Otherwise, you have to use the Import/Export method to move the devices, and then the devices have to be modified to use the new hub. For example, you can set up your device to consume the IoT Hub host name from the twin desired properties. The device will take that IoT Hub host name, disconnect the device from the old hub, and reconnect it to the new one. - -* You need to update any certificates so you can use them with the new resources. Also, you probably have the hub defined in a DNS table somewhere and need to update that DNS information. --## Methodology --This is the general method we recommend for moving an IoT hub from one region to another. For message routing, this assumes the resources aren't being moved to the new region. For more information, see the [section on Message Routing](#how-to-handle-message-routing). -- 1. Export the hub and its settings to a Resource Manager template. -- 1. Make the necessary changes to the template, such as updating all occurrences of the name and the location for the cloned hub. For any resources in the template used for message routing endpoints, update the key in the template for that resource. -- 1. Import the template into a new resource group in the new location. This step creates the clone. -- 1. Debug as needed. -- 1. Add anything that wasn't exported to the template. -- For example, consumer groups aren't exported to the template. You need to add the consumer groups to the template manually or use the [Azure portal](https://portal.azure.com) after the hub is created. -- 1. Copy the devices from the original hub to the clone. This process is covered in the section [Managing the devices registered to the IoT hub](#managing-the-devices-registered-to-the-iot-hub). --## How to handle message routing --If your hub uses [message routing](iot-hub-devguide-messages-d2c.md), exporting the template for the hub includes the routing configuration, but it doesn't include the resources themselves. You must choose whether to move the routing resources to the new location or to leave them in place and continue to use them "as is". --For example, say you have a hub in West US that is routing messages to a storage account (also in West US), and you want to move the hub to East US. You can move the hub and have it still route messages to the storage account in West US, or you can move the hub and also move the storage account. There may be a small performance hit from routing messages to endpoint resources in a different region. --You can move a hub that uses message routing easily if you don't also move the resources used for the routing endpoints. --If the hub uses message routing, you have two choices. --1. 
Move the resources used for the routing endpoints to the new location. -- * You must create the new resources yourself either manually in the [Azure portal](https://portal.azure.com) or by using Resource Manager templates. -- * You must rename all of the resources when you create them in the new location, as they have globally unique names. - - * You must update the resource names and the resource keys in the new hub's template, before creating the new hub. The resources should be present when the new hub is created. --1. Don't move the resources used for the routing endpoints. Use them "in place". -- * In the step where you edit the template, you need to retrieve the keys for each routing resource and put them in the template before you create the new hub. -- * The hub still references the original routing resources and routes messages to them as configured. -- * You'll have a small performance hit because the hub and the routing endpoint resources aren't in the same location. --## Prepare to migrate the hub to another region --This section provides specific instructions for migrating the hub. --### Find the original hub and export it to a resource template. --1. Sign into the [Azure portal](https://portal.azure.com). --1. Go to **Resource Groups** and select the resource group that contains the hub you want to move. You can also go to **Resources** and find the hub that way. Select the hub. --1. Select **Export template** from the list of properties and settings for the hub. -- :::image type="content" source="./media/iot-hub-how-to-clone/iot-hub-export-template.png" alt-text="Screenshot showing the command for exporting the template for the IoT Hub." border="true"::: --1. Select **Download** to download the template. Save the file somewhere you can find it again. -- :::image type="content" source="./media/iot-hub-how-to-clone/iot-hub-download-template.png" alt-text="Screenshot showing the command for downloading the template for the IoT Hub." border="true"::: --### View the template --1. Go to the Downloads folder (or to whichever folder you used when you exported the template) and find the zip file. Extract the zip file and find the file called `template.json`. Select and copy it. Go to a different folder and paste the template file (Ctrl+V). Now you can edit it. - - The following example is for a generic hub with no routing configuration. 
It's an S1 tier hub (with 1 unit) called **ContosoHub** in region **westus**: -- ``` json - { - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "IotHubs_ContosoHub_connectionString": { - "type": "SecureString" - }, - "IotHubs_ContosoHub_containerName": { - "type": "SecureString" - }, - "IotHubs_ContosoHub_name": { - "defaultValue": "ContosoHub", - "type": "String" - } - }, - "variables": {}, - "resources": [ - { - "type": "Microsoft.Devices/IotHubs", - "apiVersion": "2021-07-01", - "name": "[parameters('IotHubs_ContosoHub_name')]", - "location": "westus", - "sku": { - "name": "S1", - "tier": "Standard", - "capacity": 1 - }, - "identity": { - "type": "None" - }, - "properties": { - "ipFilterRules": [], - "eventHubEndpoints": { - "events": { - "retentionTimeInDays": 1, - "partitionCount": 4 - } - }, - "routing": { - "endpoints": { - "serviceBusQueues": [], - "serviceBusTopics": [], - "eventHubs": [], - "storageContainers": [] - }, - "routes": [], - "fallbackRoute": { - "name": "$fallback", - "source": "DeviceMessages", - "condition": "true", - "endpointNames": [ - "events" - ], - "isEnabled": true - } - }, - "storageEndpoints": { - "$default": { - "sasTtlAsIso8601": "PT1H", - "connectionString": "[parameters('IotHubs_ContosoHub_connectionString')]", - "containerName": "[parameters('IotHubs_ContosoHub_containerName')]" - } - }, - "messagingEndpoints": { - "fileNotifications": { - "lockDurationAsIso8601": "PT1M", - "ttlAsIso8601": "PT1H", - "maxDeliveryCount": 10 - } - }, - "enableFileUploadNotifications": false, - "cloudToDevice": { - "maxDeliveryCount": 10, - "defaultTtlAsIso8601": "PT1H", - "feedback": { - "lockDurationAsIso8601": "PT1M", - "ttlAsIso8601": "PT1H", - "maxDeliveryCount": 10 - } - }, - "features": "None", - "disableLocalAuth": false, - "allowedFqdnList": [] - } - } - ] - } - ``` --### Edit the template --You have to make some changes before you can use the template to create the new hub in the new region. Use [Visual Studio Code](https://code.visualstudio.com) or a text editor to edit the template. --#### Edit the hub name and location --1. Remove the container name parameter section at the top. **ContosoHub** doesn't have an associated container. -- ``` json - "parameters": { - ... - "IotHubs_ContosoHub_containerName": { - "type": "SecureString" - }, - ... - }, - ``` --1. Remove the **storageEndpoints** property. -- ```json - "properties": { - ... - "storageEndpoints": { - "$default": { - "sasTtlAsIso8601": "PT1H", - "connectionString": "[parameters('IotHubs_ContosoHub_connectionString')]", - "containerName": "[parameters('IotHubs_ContosoHub_containerName')]" - } - }, - ... - - ``` --1. Under **resources**, change the location from westus to eastus. -- Old version: -- ``` json - "location": "westus", - ``` - - New version: -- ``` json - "location": "eastus", - ``` -#### Update the keys for the routing resources that aren't being moved --When you export the Resource Manager template for a hub that has routing configured, you will see that the keys for those resources aren't provided in the exported template. Their placement is denoted by asterisks. You must fill them in by going to those resources in the portal and retrieving the keys **before** you import the new hub's template and create the hub. --1. Retrieve the keys required for any of the routing resources and put them in the template. 
You can retrieve the key(s) from the resource in the [Azure portal](https://portal.azure.com). -- For example, if you are routing messages to a storage container, find the storage account in the portal. Under the Settings section, select **Access keys**, then copy one of the keys. Here's what the key looks like when you first export the template: -- ```json - "connectionString": "DefaultEndpointsProtocol=https; - AccountName=fabrikamstorage1234;AccountKey=****", - "containerName": "fabrikamresults", - ``` --1. After you retrieve the account key for the storage account, put it in the template in the clause `AccountKey=****` in the place of the asterisks. --1. For service bus queues, get the Shared Access Key matching the SharedAccessKeyName. Here's the key and the `SharedAccessKeyName` in the json: -- ```json - "connectionString": "Endpoint=sb://fabrikamsbnamespace1234.servicebus.windows.net:5671/; - SharedAccessKeyName=iothubroutes_FabrikamResources; - SharedAccessKey=****; - EntityPath=fabrikamsbqueue1234", - ``` --1. The same applies for the Service Bus Topics and Event Hubs connections. --#### Create the new routing resources in the new location --This section only applies if you are moving the resources used by the hub for the routing endpoints. --If you want to move the routing resources, you must manually set up the resources in the new location. You can create the routing resources using the [Azure portal](https://portal.azure.com), or by exporting the Resource Manager template for each of the resources used by the message routing, editing them, and importing them. After the resources are set up, you can import the hub's template (which includes the routing configuration). --1. Create each resource used by the routing. You can do this manually using the [Azure portal](https://portal.azure.com), or create the resources using Resource Manager templates. If you want to use templates, these are the steps to follow: -- 1. For each resource used by the routing, export it to a Resource Manager template. - - 1. Update the name and location of the resource. -- 1. Update any cross-references between the resources. For example, if you create a template for a new storage account, you need to update the storage account name in that template and any other template that references it. In most cases, the routing section in the template for the hub is the only other template that references the resource. -- 1. Import each of the templates, which deploys each resource. -- Once the resources used by the routing are set up and running, you can continue. --1. In the template for the IoT hub, change the name of each of the routing resources to its new name, and update the location if needed. --Now you have a template that will create a new hub that looks almost exactly like the old hub, depending on how you decided to handle the routing. --## Create the new hub in the new region by loading the template --Create the new hub in the new location using the template. If you have routing resources that are going to move, the resources should be set up in the new location and the references in the template updated to match. If you are not moving the routing resources, they should be in the template with the updated keys. --1. Sign into the [Azure portal](https://portal.azure.com). --1. Select **Create a resource**. --1. In the search box, type "template deployment" and select Enter. --1. Select **template deployment (deploy using custom templates)**. This takes you to a screen for the Template deployment. 
Select **Create**. You see this screen: -- :::image type="content" source="./media/iot-hub-how-to-clone/iot-hub-custom-deployment.png" alt-text="Screenshot showing the command for building your own template"::: --1. Select **Build your own template in the editor**, which enables you to upload your template from a file. --1. Select **Load file**. -- :::image type="content" source="./media/iot-hub-how-to-clone/iot-hub-upload-file.png" alt-text="Screenshot showing the command for uploading a template file"::: --1. Browse for the new template you edited and select it, then select **Open**. It loads your template in the edit window. Select **Save**. -- :::image type="content" source="./media/iot-hub-how-to-clone/iot-hub-uploaded-file.png" alt-text="Screenshot showing loading the template"::: --1. Fill in the following fields on the custom deployment page. -- **Subscription**: Select the subscription to use. -- **Resource group**: Create a new resource group in a new location. If you already have one set up, you can select it instead of creating a new one. -- **Region**: If you selected an existing resource group, the region is filled in for you to match the location of the resource group. If you created a new resource group, this will be its location. -- **Connection string**: Fill in the connection string for your hub. -- **Hub name**: Give the new hub in the new region a name. -- :::image type="content" source="./media/iot-hub-how-to-clone/iot-hub-custom-deployment-create.png" alt-text="Screenshot showing the custom deployment page"::: --1. Select the **Review + create** button. --1. Select the **Create** button. The portal validates your template and deploys your cloned hub. If you have routing configuration data, it will be included in the new hub, but will point at the resources in the prior location. -- :::image type="content" source="./media/iot-hub-how-to-clone/iot-hub-custom-deployment-final.png" alt-text="Screenshot showing the final custom deployment page"::: --## Managing the devices registered to the IoT hub --Now that you have your clone up and running, you need to copy all of the devices from the original hub to the clone. --There are multiple ways to copy the devices. You either originally used [Device Provisioning Service (DPS)](../iot-dps/about-iot-dps.md) to provision the devices, or you didn't. If you did, this process isn't difficult. If you didn't, this process can be complicated. --If you didn't use DPS to provision your devices, you can skip the next section and start with [Using Import/Export to move the devices to the new hub](#using-import-export-to-move-the-devices-to-the-new-hub). --## Using DPS to re-provision the devices in the new hub --To use DPS to move the devices to the new location, see [How to re-provision devices](../iot-dps/how-to-reprovision.md). When you're finished, you can view the devices in the [Azure portal](https://portal.azure.com) and verify they are in the new location. --Go to the new hub using the [Azure portal](https://portal.azure.com). Select your hub, then select **IoT Devices**. You see the devices that were re-provisioned to the cloned hub. You can also view the properties for the cloned hub. --If you have implemented routing, test and make sure your messages are routed to the resources correctly. --### Committing the changes after using DPS --This change has been committed by the DPS service. --### Rolling back the changes after using DPS. --If you want to roll back the changes, re-provision the devices from the new hub to the old one. 
--You are now finished migrating your hub and its devices. You can skip to [Clean-up](#clean-up). --## Using Import-Export to move the devices to the new hub --The application targets .NET Core, so you can run it on either Windows or Linux. You can download the sample, retrieve your connection strings, set the flags for which bits you want to run, and run it. You can do this without ever opening the code. --### Downloading the sample --1. Use the IoT C# samples here: [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp/archive/main.zip). Download the zip file and unzip it on your computer. --1. The pertinent code is in ./iothub/service/samples/how to guides/ImportExportDevicesSample. You don't need to view or edit the code in order to run the application. --1. To run the application, specify three connection strings and five options. You pass this data in as command-line arguments or use environment variables, or use a combination of the two. We're going to pass the options in as command line arguments, and the connection strings as environment variables. -- The reason for this is because the connection strings are long and ungainly, and unlikely to change, but you might want to change the options and run the application more than once. To change the value of an environment variable, you have to close the command window and Visual Studio or Visual Studio Code, whichever you are using. --### Options --Here are the five options you specify when you run the application. We'll put these on the command line in a minute. --* **addDevices** (argument 1) -- set this to true if you want to add virtual devices that are generated for you. These are added to the source hub. Also, set **numToAdd** (argument 2) to specify how many devices you want to add. The maximum number of devices you can register to a hub is one million. The purpose of this option is for testing. You can generate a specific number of devices, and then copy them to another hub. --* **copyDevices** (argument 3) -- set this to true to copy the devices from one hub to another. --* **deleteSourceDevices** (argument 4) -- set this to true to delete all of the devices registered to the source hub. We recommend waiting until you are certain all of the devices have been transferred before you run this. Once you delete the devices, you can't get them back. --* **deleteDestDevices** (argument 5) -- set this to true to delete all of the devices registered to the destination hub (the clone). You might want to do this if you want to copy the devices more than once. --The basic command is *dotnet run*, which tells .NET to build the local csproj file and then run it. You add your command-line arguments to the end before you run it. --Your command-line will look like these examples: --``` console - // Format: dotnet run add-devices num-to-add copy-devices delete-source-devices delete-destination-devices -- // Add 1000 devices, don't copy them to the other hub, or delete them. - // The first argument is true, numToAdd is 50, and the other arguments are false. - dotnet run true 1000 false false false -- // Copy the devices you just added to the other hub; don't delete anything. - // The first argument is false, numToAdd is 0, copy-devices is true, and the delete arguments are both false - dotnet run false 0 true false false -``` --### Using environment variables for the connection strings --1. To run the sample, you need the connection strings to the old and new IoT hubs, and to a storage account you can use for temporary work files. 
We will store the values for these in environment variables. --1. To get the connection string values, sign in to the [Azure portal](https://portal.azure.com). --1. Put the connection strings somewhere you can retrieve them, such as NotePad. If you copy the following, you can paste the connection strings in directly where they go. Don't add spaces around the equal sign, or it changes the variable name. Also, you don't need double-quotes around the connection strings. If you put quotes around the storage account connection string, it won't work. -- Set the environment variables in Windows: -- ``` console - SET IOTHUB_CONN_STRING=<put connection string to original IoT Hub here> - SET DEST_IOTHUB_CONN_STRING=<put connection string to destination or clone IoT Hub here> - SET STORAGE_ACCT_CONN_STRING=<put connection string to the storage account here> - ``` -- Set the environment variables in Linux: -- ``` console - export IOTHUB_CONN_STRING="<put connection string to original IoT Hub here>" - export DEST_IOTHUB_CONN_STRING="<put connection string to destination or clone IoT Hub here>" - export STORAGE_ACCT_CONN_STRING="<put connection string to the storage account here>" - ``` --1. For the IoT hub connection strings, go to each hub in the portal. You can search in **Resources** for the hub. If you know the Resource Group, you can go to **Resource groups**, select your resource group, and then select the hub from the list of assets in that resource group. --1. Select **Shared access policies** from the Settings for the hub, then select **iothubowner** and copy one of the connection strings. Do the same for the destination hub. Add them to the appropriate SET commands. --1. For the storage account connection string, find the storage account in **Resources** or under its **Resource group** and open it. - -1. Under the Settings section, select **Access keys** and copy one of the connection strings. Put the connection string in your text file for the appropriate SET command. --Now you have the environment variables in a file with the SET commands, and you know what your command-line arguments are. Let's run the sample. --### Running the sample application and using command-line arguments --1. Open a command prompt window. Select Windows and type in `command prompt` to get the command prompt window. --1. Copy the commands that set the environment variables, one at a time, and paste them into the command prompt window and select Enter. When you're finished, type `SET` in the command prompt window to see your environment variables and their values. Once you've copied these into the command prompt window, you don't have to copy them again, unless you open a new command prompt window. --1. In the command prompt window, change directories until you are in ./ImportExportDevicesSample (where the ImportExportDevicesSample.csproj file exists). Then type the following, and include your command-line arguments. -- ``` console - // Format: dotnet run add-devices num-to-add copy-devices delete-source-devices delete-destination-devices - dotnet run arg1 arg2 arg3 arg4 arg5 - ``` -- The dotnet command builds and runs the application. Because you are passing in the options when you run the application, you can change the values of them each time you run the application. For example, you may want to run it once and create new devices, then run it again and copy those devices to a new hub, and so on. 
You can also perform all the steps in the same run, although we recommend not deleting any devices until you are certain you are finished with the cloning. Here is an example that creates 1000 devices and then copies them to the other hub. -- ``` console - // Format: dotnet run add-devices num-to-add copy-devices delete-source-devices delete-destination-devices -- // Add 1000 devices, don't copy them to the other hub or delete them. - dotnet run true 1000 false false false -- // Do not add any devices. Copy the ones you just created to the other hub; don't delete anything. - dotnet run false 0 true false false - ``` -- After you verify that the devices were copied successfully, you can remove the devices from the source hub like this: -- ``` console - // Format: dotnet run add-devices num-to-add copy-devices delete-source-devices delete-destination-devices - // Delete the devices from the source hub. - dotnet run false 0 false true false - ``` --### Running the sample application using Visual Studio --1. If you want to run the application in Visual Studio, change your current directory to the folder where the azureiot.sln file resides. Then run this command in the command prompt window to open the solution in Visual Studio. You must do this in the same command window where you set the environment variables, so those variables are known. -- ``` console - azureiot.sln - ``` - -1. Right-click on the project *ImportExportDevicesSample* and select **Set as startup project**. - -1. Set the variables at the top of Program.cs in the ImportExportDevicesSample folder for the five options. -- ``` csharp - // Add randomly created devices to the source hub. - private static bool addDevices = true; - //If you ask to add devices, this will be the number added. - private static int numToAdd = 0; - // Copy the devices from the source hub to the destination hub. - private static bool copyDevices = false; - // Delete all of the devices from the source hub. (It uses the IoTHubConnectionString). - private static bool deleteSourceDevices = false; - // Delete all of the devices from the destination hub. (Uses the DestIotHubConnectionString). - private static bool deleteDestDevices = false; - ``` --1. Select F5 to run the application. After it finishes running, you can view the results. --### View the results --You can view the devices in the [Azure portal](https://portal.azure.com) and verify they are in the new location. --1. Go to the new hub using the [Azure portal](https://portal.azure.com). Select your hub, then select **IoT Devices**. You see the devices you copied from the old hub to the cloned hub. You can also view the properties for the cloned hub. --1. Check for import/export errors by going to the Azure storage account in the [Azure portal](https://portal.azure.com) and looking in the `devicefiles` container for the `ImportErrors.log`. If this file is empty (the size is 0), there were no errors. If you try to import the same device more than once, it rejects the device the second time and adds an error message to the log file. --### Commit the changes --At this point, you have copied your hub to the new location and migrated the devices to the new clone. Now you need to make changes so the devices work with the cloned hub. --To commit the changes, here are the steps you need to perform: --* Update each device to change the IoT Hub host name to point the IoT Hub host name to the new hub. You should do this using the same method you used when you first provisioned the device. 
--* Change any applications you have that refer to the old hub to point to the new hub. --* After you're finished, the new hub should be up and running. The old hub should have no active devices and be in a disconnected state. --### Rolling back the changes --If you decide to roll back the changes, here are the steps to perform: --* Update each device to change the IoT Hub Hostname to point the IoT Hub Hostname for the old hub. You should do this using the same method you used when you first provisioned the device. --* Change any applications you have that refer to the new hub to point to the old hub. For example, if you are using Azure Analytics, you may need to reconfigure your [Azure Stream Analytics input](../stream-analytics/stream-analytics-define-inputs.md#stream-data-from-iot-hub). --* Delete the new hub. --* If you have routing resources, the configuration on the old hub should still point to the correct routing configuration, and should work with those resources after the hub is restarted. --### Checking the results --To check the results, change your IoT solution to point to your hub in the new location and run it. In other words, perform the same actions with the new hub that you performed with the previous hub and make sure they work correctly. --If you have implemented routing, test and make sure your messages are routed to the resources correctly. --## Clean-up --Don't clean up until you are certain the new hub is up and running and the devices are working correctly. Also be sure to test the routing if you are using that feature. When you're ready, clean up the old resources by performing these steps: --* If you haven't already, delete the old hub. This removes all of the active devices from the hub. --* If you have routing resources that you moved to the new location, you can delete the old routing resources. --## Next steps --You have cloned an IoT hub into a new hub in a new region, complete with the devices. For more information about performing bulk operations against the identity registry in an IoT Hub, see [Import and export IoT Hub device identities in bulk](iot-hub-bulk-identity-mgmt.md). --If you want to deploy the sample application, see [.NET Core application deployment](/dotnet/core/deploying/index). |
iot-hub | Iot Hub Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-upgrade.md | When you have more devices and need more capabilities, there are three ways to a * Upgrade to a higher tier. For example, upgrade a hub from the B1 tier to the S1 tier for access to advanced features with the same messaging capacity. > [!Warning]- > You cannot upgrade from a Free Hub to a Paid Hub through our upgrade function. You must create a Paid hub and migrate the configurations and devices from the Free hub to the Paid hub. This process is documented at [How to clone an IoT Hub](./iot-hub-how-to-clone.md). + > You cannot upgrade from a Free Hub to a Paid Hub through our upgrade function. You must create a Paid hub and migrate the configurations and devices from the Free hub to the Paid hub. This process is documented at [How to migrate an IoT hub](./migrate-hub-state-cli.md). > [!Tip] > When you are upgrading your IoT Hub to a higher tier, some messages may be received out of order for a short period of time. If your business logic relies on the order of messages, we recommend upgrading during non-business hours. |
iot-hub | Migrate Hub Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-hub-arm.md | + + Title: How to manually migrate an IoT hub ++description: Use the Azure portal, ARM templates, and service SDKs to manually migrate an Azure IoT hub to a new region or new SKU +++++ Last updated : 04/14/2023+++# How to manually migrate an Azure IoT hub using an Azure Resource Manager template ++Use the Azure portal, Azure Resource Manager templates, and Azure IoT Hub service SDKs to migrate an IoT hub to a new region, a new tier, or a new configuration. ++The steps in this article are useful if you want to: ++* Upgrade from the free tier to a basic or standard tier IoT hub. +* Move an IoT hub to a new region. +* Export IoT hub state information to have as a backup. +* Increase the number of [partitions](iot-hub-scaling.md#partitions) for an IoT hub. +* Set up a hub for a development, rather than production, environment. +* Enable a custom implementation of multi-hub high availability. For more information, see the [How to achieve cross region HA section of IoT Hub high availability and disaster recovery](iot-hub-ha-dr.md#achieve-cross-region-ha). ++To migrate a hub, you need a subscription with administrative access to the original hub. You can put the new hub in a new resource group and region, in the same subscription as the original hub, or even in a new subscription. You just can't use the same name because the hub name has to be globally unique. ++## Compare automatic and manual migration steps ++The outcome of this article is similar to [How to automatically migrate an IoT hub using the Azure CLI](./migrate-hub-state-cli.md), but with a different process. Before you begin, decide which process is right for your scenario. ++* The manual process (this article): ++ * Migrates your device registry and your routing and endpoint information. You have to manually recreate other configuration details in the new IoT hub. + * Is faster for migrating large numbers of devices (for example, more than 100,000). + * Uses an Azure Storage account to transfer the device registry. + * Scrubs connection strings for routing and file upload endpoints from the ARM template output, and you need to manually add them back in. ++* The Azure CLI process: ++ * Migrates your device registry, your routing and endpoint information, and other configuration details like IoT Edge deployments or automatic device management configurations. + * Is easier for migrating small numbers of devices (for example, up to 10,000). + * Doesn't require an Azure Storage account. + * Collects connection strings for routing and file upload endpoints and includes them in the ARM template output. ++## Things to consider ++There are several things to consider before migrating an IoT hub. ++* Make sure that all of the features available in the original location are also available in the new location. Some services are in preview, and not all features are available everywhere. ++* Don't remove the original resources before creating and verifying the migrated version. Once you remove a hub, it's gone forever, and there's no way to recover it to check the settings or data to make sure the hub is replicated correctly. ++* Data for the original IoT hub isn't migrated. This data includes device messages, cloud-to-device (C2D) commands, and job-related information such as schedules and history. Metrics and logging results are also not migrated. ++* You need to schedule downtime for the migration. 
Copying the devices to the new hub takes time. If you use the Import/Export method, benchmark testing has revealed that it could take around two hours to move 500,000 devices, and four hours to move a million devices. ++* You can copy devices to the new hub without shutting down or changing the devices. ++ * If the devices were originally provisioned using DPS, update their enrollments to point to the new IoT hub. Then, reprovision the devices to update the connection information stored in each device. ++ * Otherwise, you have to use the import/export method to move the devices, and then the devices have to be modified to use the new hub. For example, you can set up your device to consume the IoT Hub host name from the twin desired properties. The device takes that IoT Hub host name, disconnects from the old hub, and reconnects to the new one. ++* You need to update any certificates so you can use them with the new resources. Also, you probably have the hub defined in a DNS table somewhere and need to update that DNS information. ++## Methodology ++This is the general method we recommend for migrating an IoT hub. ++1. Export the hub and its settings to a Resource Manager template. ++1. Make the necessary changes to the template, such as updating all occurrences of the name and the location for the migrated hub. For any resources in the template used for message routing endpoints, update the key in the template for that resource. ++1. Import the template into a new resource group in the new location. This step creates the new IoT hub. ++1. Debug as needed. ++1. Add anything that wasn't exported to the template. ++ For example, consumer groups aren't exported to the template. You need to add the consumer groups to the template manually or use the [Azure portal](https://portal.azure.com) after the hub is created. ++1. Copy the devices from the original hub to the new hub. This process is covered in the section [Manage the devices registered to the IoT hub](#manage-the-devices-registered-to-the-iot-hub). ++## How to handle message routing ++If your hub uses [message routing](iot-hub-devguide-messages-d2c.md), exporting the template for the hub includes the routing configuration, but it doesn't include the resources themselves. If you're migrating the IoT hub to a new region, you must choose whether to move the routing resources to the new location as well or to leave them in place and continue to use them "as is". There may be a small performance hit from routing messages to endpoint resources in a different region. ++If the hub uses message routing, you have two choices. ++* Move the resources used for the routing endpoints to the new location. ++ 1. Create the new resources yourself either manually in the [Azure portal](https://portal.azure.com) or by using Resource Manager templates. ++ 1. Rename all of the resources when you create them in the new location, as they require globally unique names. ++ 1. Update the resource names and the resource keys in the new hub's template before creating the new hub. The resources should be present when the new hub is created. ++* Don't move the resources used for the routing endpoints. Use them "in place". ++ 1. In the step where you edit the template, you need to retrieve the keys for each routing resource and put them in the template before you create the new hub. ++ 1. The hub still references the original routing resources and routes messages to them as configured. 
You'll have a small performance hit because the hub and the routing endpoint resources aren't in the same location. ++## Prepare to migrate the hub to another region ++This section provides specific instructions for migrating the hub. ++### Export the original hub to a resource template ++1. Sign into the [Azure portal](https://portal.azure.com). ++1. Navigate to the IoT hub that you want to move. ++1. Select **Export template** from the list of properties and settings for the hub. ++ :::image type="content" source="./media/migrate-hub-arm/iot-hub-export-template.png" alt-text="Screenshot showing the command for exporting the template for the IoT Hub." border="true"::: ++1. Select **Download** to download the template. Save the file somewhere you can find it again. ++ :::image type="content" source="./media/migrate-hub-arm/iot-hub-download-template.png" alt-text="Screenshot showing the command for downloading the template for the IoT Hub." border="true"::: ++### View the template ++Go to the downloaded template, which is contained in a zip file. Extract the zip file and find the file called `template.json`. ++The following example is for a generic hub with no routing configuration. It's an S1 tier hub (with 1 unit) called **ContosoHub** in region **westus**: ++``` json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "IotHubs_ContosoHub_connectionString": { + "type": "SecureString" + }, + "IotHubs_ContosoHub_containerName": { + "type": "SecureString" + }, + "IotHubs_ContosoHub_name": { + "defaultValue": "ContosoHub", + "type": "String" + } + }, + "variables": {}, + "resources": [ + { + "type": "Microsoft.Devices/IotHubs", + "apiVersion": "2021-07-01", + "name": "[parameters('IotHubs_ContosoHub_name')]", + "location": "westus", + "sku": { + "name": "S1", + "tier": "Standard", + "capacity": 1 + }, + "identity": { + "type": "None" + }, + "properties": { + "ipFilterRules": [], + "eventHubEndpoints": { + "events": { + "retentionTimeInDays": 1, + "partitionCount": 4 + } + }, + "routing": { + "endpoints": { + "serviceBusQueues": [], + "serviceBusTopics": [], + "eventHubs": [], + "storageContainers": [] + }, + "routes": [], + "fallbackRoute": { + "name": "$fallback", + "source": "DeviceMessages", + "condition": "true", + "endpointNames": [ + "events" + ], + "isEnabled": true + } + }, + "storageEndpoints": { + "$default": { + "sasTtlAsIso8601": "PT1H", + "connectionString": "[parameters('IotHubs_ContosoHub_connectionString')]", + "containerName": "[parameters('IotHubs_ContosoHub_containerName')]" + } + }, + "messagingEndpoints": { + "fileNotifications": { + "lockDurationAsIso8601": "PT1M", + "ttlAsIso8601": "PT1H", + "maxDeliveryCount": 10 + } + }, + "enableFileUploadNotifications": false, + "cloudToDevice": { + "maxDeliveryCount": 10, + "defaultTtlAsIso8601": "PT1H", + "feedback": { + "lockDurationAsIso8601": "PT1M", + "ttlAsIso8601": "PT1H", + "maxDeliveryCount": 10 + } + }, + "features": "None", + "disableLocalAuth": false, + "allowedFqdnList": [] + } + } + ] +} +``` ++### Edit the template ++You have to make some changes before you can use the template to create the new hub in the new region. Use [Visual Studio Code](https://code.visualstudio.com) or a text editor to edit the template. ++#### Edit the hub name and location ++1. Remove the container name parameter section at the top. **ContosoHub** doesn't have an associated container. ++ ``` json + "parameters": { + ... 
+ "IotHubs_ContosoHub_containerName": { + "type": "SecureString" + }, + ... + }, + ``` ++1. Remove the **storageEndpoints** property. ++ ```json + "properties": { + ... + "storageEndpoints": { + "$default": { + "sasTtlAsIso8601": "PT1H", + "connectionString": "[parameters('IotHubs_ContosoHub_connectionString')]", + "containerName": "[parameters('IotHubs_ContosoHub_containerName')]" + } + }, + ... + + ``` ++1. If you're moving the hub to a new region, change the **location** property under **resources**. ++ ``` json + "location": "westus", + ``` ++#### Update the routing endpoint resources ++When you export the Resource Manager template for a hub that has routing configured, you see that the keys for those resources aren't provided in the exported template. Their placement is denoted by asterisks. You must fill them in by going to those resources in the portal and retrieving the keys **before** you import the new hub's template and create the hub. ++If you moved the routing resources as well, update the name, ID, and resource group of each endpoint as well. ++1. Retrieve the keys required for any of the routing resources and put them in the template. You can retrieve the key(s) from the resource in the [Azure portal](https://portal.azure.com). ++ * For example, if you're routing messages to a storage container, find the storage account in the portal. Under the Settings section, select **Access keys**, then copy one of the keys. Here's what the key looks like when you first export the template: ++ ```json + "connectionString": "DefaultEndpointsProtocol=https; + AccountName=fabrikamstorage1234;AccountKey=****", + "containerName": "fabrikamresults", + ``` ++ After you retrieve the account key for the storage account, put it in the template in the `AccountKey=****` clause in the place of the asterisks. ++ * For service bus queues, get the Shared Access Key matching the SharedAccessKeyName. Here's the key and the `SharedAccessKeyName` in the json: ++ ```json + "connectionString": "Endpoint=sb://fabrikamsbnamespace1234.servicebus.windows.net:5671/; + SharedAccessKeyName=iothubroutes_FabrikamResources; + SharedAccessKey=****; + EntityPath=fabrikamsbqueue1234", + ``` ++ * The same applies for the Service Bus Topics and Event Hubs connections. ++## Create the new hub by loading the template ++Create the new hub using the edited template. If you have routing resources that are going to move, the resources should be set up in the new location and the references in the template updated to match. If you aren't moving the routing resources, they should be in the template with the updated keys. ++1. Sign into the [Azure portal](https://portal.azure.com). ++1. Select **Create a resource**. ++1. In the search box, search for and select **template deployment (deploy using custom templates)**. On the screen for the template deployment, select **Create**. ++1. On the **Custom deployment** page, select **Build your own template in the editor**, which enables you to upload your template from a file. ++ :::image type="content" source="./media/migrate-hub-arm/iot-hub-custom-deployment.png" alt-text="Screenshot showing the command for building your own template."::: ++1. Select **Load file**. ++ :::image type="content" source="./media/migrate-hub-arm/iot-hub-upload-file.png" alt-text="Screenshot showing the command for uploading a template file."::: ++1. Browse for the new template you edited and select it, then select **Open**. It loads your template in the edit window. Select **Save**. 
++ :::image type="content" source="./media/migrate-hub-arm/iot-hub-uploaded-file.png" alt-text="Screenshot showing loading the template."::: ++1. Fill in the following fields on the custom deployment page. ++ **Subscription**: Select the subscription to use. ++ **Resource group**: Select an existing resource group or create a new one. ++ **Region**: If you selected an existing resource group, the region is filled in for you to match the location of the resource group. If you created a new resource group, this is its location. ++ **Connection string**: Fill in the connection string for your hub. ++ **Hub name**: Give the new hub a name. ++ :::image type="content" source="./media/migrate-hub-arm/iot-hub-custom-deployment-create.png" alt-text="Screenshot showing the custom deployment page"::: ++1. Select the **Review + create** button. ++1. Select the **Create** button. The portal validates your template and deploys your new hub. If you have routing configuration data, it is included in the new hub, but points at the resources in the prior location. ++ :::image type="content" source="./media/migrate-hub-arm/iot-hub-custom-deployment-final.png" alt-text="Screenshot showing the final custom deployment page"::: ++## Manage the devices registered to the IoT hub ++Now that you have your new hub up and running, you need to copy all of the devices from the original hub to the new one. ++There are multiple ways to copy the devices. You either originally used [Device Provisioning Service (DPS)](../iot-dps/about-iot-dps.md) to provision the devices, or you didn't. If you did, this process isn't difficult. If you didn't, this process can be complicated. ++If you didn't use DPS to provision your devices, you can skip the next section and start with [Use Import/Export to move the devices to the new hub](#use-import-export-to-move-the-devices-to-the-new-hub). ++## Use DPS to reprovision the devices in the new hub ++To use DPS to move the devices to the new location, see [How to reprovision devices](../iot-dps/how-to-reprovision.md). When you're finished, you can view the devices in the [Azure portal](https://portal.azure.com) and verify they are in the new location. ++Go to the new hub using the [Azure portal](https://portal.azure.com). Select your hub, then select **IoT Devices**. You see the devices that were reprovisioned to the new hub. You can also view the properties for the new hub. ++If you have implemented routing, test and make sure your messages are routed to the resources correctly. ++### Roll back the changes after using DPS ++If you want to roll back the changes, reprovision the devices from the new hub to the old one. ++You're now finished migrating your hub and its devices. You can skip to [Clean-up](#clean-up). ++## Use import-export to move the devices to the new hub ++The application targets .NET Core, so you can run it on either Windows or Linux. You can download the sample, retrieve your connection strings, set the flags for which bits you want to run, and run it. You can do this without ever opening the code. ++### Download the sample ++1. Use the IoT C# samples here: [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp/archive/main.zip). Download the zip file and unzip it on your computer. ++1. The pertinent code is in ./iothub/service/samples/how to guides/ImportExportDevicesSample. You don't need to view or edit the code in order to run the application. ++1. To run the application, specify three connection strings and five options. 
++## Manage the devices registered to the IoT hub ++Now that you have your new hub up and running, you need to copy all of the devices from the original hub to the new one. ++There are multiple ways to copy the devices. You either originally used [Device Provisioning Service (DPS)](../iot-dps/about-iot-dps.md) to provision the devices, or you didn't. If you did, this process isn't difficult. If you didn't, this process can be complicated. ++If you didn't use DPS to provision your devices, you can skip the next section and start with [Use Import/Export to move the devices to the new hub](#use-import-export-to-move-the-devices-to-the-new-hub). ++## Use DPS to reprovision the devices in the new hub ++To use DPS to move the devices to the new location, see [How to reprovision devices](../iot-dps/how-to-reprovision.md). When you're finished, you can view the devices in the [Azure portal](https://portal.azure.com) and verify they are in the new location. ++Go to the new hub using the [Azure portal](https://portal.azure.com). Select your hub, then select **IoT Devices**. You see the devices that were reprovisioned to the new hub. You can also view the properties for the new hub. ++If you have implemented routing, test and make sure your messages are routed to the resources correctly. ++### Roll back the changes after using DPS ++If you want to roll back the changes, reprovision the devices from the new hub to the old one. ++You're now finished migrating your hub and its devices. You can skip to [Clean-up](#clean-up). ++## Use import-export to move the devices to the new hub ++This section uses a sample application, **ImportExportDevicesSample**, from the Azure IoT SDK for C#. The sample targets .NET Core, so you can run it on either Windows or Linux. You can download the sample, retrieve your connection strings, set the flags for which bits you want to run, and run it. You can do this without ever opening the code. ++### Download the sample ++1. Use the IoT C# samples here: [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp/archive/main.zip). Download the zip file and unzip it on your computer. ++1. The pertinent code is in ./iothub/service/samples/how to guides/ImportExportDevicesSample. You don't need to view or edit the code in order to run the application. ++1. To run the application, specify three connection strings and five options. You can pass this data as command-line arguments, as environment variables, or as a combination of the two. We're going to pass the options as command-line arguments and the connection strings as environment variables. ++ The reason is that the connection strings are long and ungainly, and unlikely to change, but you might want to change the options and run the application more than once. To change the value of an environment variable, you have to close the command window and Visual Studio or Visual Studio Code, whichever you're using. ++### Options ++Here are the five options you specify when you run the application: ++* **addDevices** (argument 1) - set this option to `true` if you want to add virtual devices that are generated for you. These devices are added to the source hub. Also, set **numToAdd** (argument 2) to specify how many devices you want to add. The maximum number of devices you can register to a hub is one million. The purpose of this option is for testing. You can generate a specific number of devices, and then copy them to another hub. ++* **copyDevices** (argument 3) - set this option to `true` to copy the devices from one hub to another. ++* **deleteSourceDevices** (argument 4) - set this option to `true` to delete all of the devices registered to the source hub. We recommend waiting until you are certain all of the devices have been transferred before you run this. Once you delete the devices, you can't get them back. ++* **deleteDestDevices** (argument 5) - set this option to `true` to delete all of the devices registered to the destination hub. You might want to do this if you want to copy the devices more than once. ++The basic command is *dotnet run*, which tells .NET to build the local csproj file and then run it. You add your command-line arguments to the end before you run it. ++Your command line will look like these examples: ++``` console + // Format: dotnet run add-devices num-to-add copy-devices delete-source-devices delete-destination-devices ++ // Add 1000 devices, don't copy them to the other hub, or delete them. + // The first argument is true, numToAdd is 1000, and the other arguments are false. + dotnet run true 1000 false false false ++ // Copy the devices you just added to the other hub; don't delete anything. + // The first argument is false, numToAdd is 0, copy-devices is true, and the delete arguments are both false. + dotnet run false 0 true false false +``` ++### Use environment variables for the connection strings ++1. To run the sample, you need the connection strings to the old and new IoT hubs, and to a storage account you can use for temporary work files. We will store the values for these in environment variables. ++1. To get the connection string values, sign in to the [Azure portal](https://portal.azure.com). ++1. Put the connection strings somewhere you can retrieve them, such as Notepad. If you copy the following, you can paste the connection strings in directly where they go. Don't add spaces around the equal sign, or it changes the variable name. Also, you don't need double-quotes around the connection strings. If you put quotes around the storage account connection string, the script fails.
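++If you'd rather not copy the values from the portal (the portal steps follow), you can fetch them with the Azure CLI. A sketch, not part of the original walkthrough: it assumes the **azure-iot** extension is installed, and the hub and storage account names are placeholders. ++ ``` azurecli + # Hub and storage account names are placeholders. + az iot hub connection-string show --hub-name MyOldHub --policy-name iothubowner + az iot hub connection-string show --hub-name MyNewHub --policy-name iothubowner + az storage account show-connection-string --name mystorageaccount --resource-group MyResourceGroup + ```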
++ Set the environment variables in Windows: ++ ``` console + SET IOTHUB_CONN_STRING=<put connection string to original IoT hub here> + SET DEST_IOTHUB_CONN_STRING=<put connection string to destination IoT hub here> + SET STORAGE_ACCT_CONN_STRING=<put connection string to the storage account here> + ``` ++ Set the environment variables in Linux: ++ ``` console + export IOTHUB_CONN_STRING="<put connection string to original IoT hub here>" + export DEST_IOTHUB_CONN_STRING="<put connection string to destination IoT hub here>" + export STORAGE_ACCT_CONN_STRING="<put connection string to the storage account here>" + ``` ++1. For the IoT hub connection strings, go to each hub in the portal. You can search in **Resources** for the hub. If you know the Resource Group, you can go to **Resource groups**, select your resource group, and then select the hub from the list of assets in that resource group. ++1. Select **Shared access policies** from the Settings for the hub, then select **iothubowner** and copy one of the connection strings. Do the same for the destination hub. Add them to the appropriate SET commands. ++1. For the storage account connection string, find the storage account in **Resources** or under its **Resource group** and open it. ++1. Under the Settings section, select **Access keys** and copy one of the connection strings. Put the connection string in your text file for the appropriate SET command. ++Now you have the environment variables in a file with the SET commands, and you know what your command-line arguments are. Let's run the sample. ++### Run the sample application using command-line arguments ++1. Open a command prompt window. Select the Windows key, type `command prompt`, and open the Command Prompt window. ++1. Copy the commands that set the environment variables, one at a time, and paste them into the command prompt window and select Enter. When you're finished, type `SET` in the command prompt window to see your environment variables and their values. Once you've copied these into the command prompt window, you don't have to copy them again, unless you open a new command prompt window. ++1. In the command prompt window, change directories until you are in ./ImportExportDevicesSample (where the ImportExportDevicesSample.csproj file exists). Then type the following, and include your command-line arguments. ++ ``` console + // Format: dotnet run add-devices num-to-add copy-devices delete-source-devices delete-destination-devices + dotnet run arg1 arg2 arg3 arg4 arg5 + ``` ++ The dotnet command builds and runs the application. Because you're passing in the options when you run the application, you can change their values each time you run it. For example, you may want to run it once and create new devices, then run it again and copy those devices to a new hub, and so on. You can also perform all the steps in the same run, although we recommend not deleting any devices until you're certain you're finished with the migration. Here's an example that creates 1000 devices and then copies them to the other hub. ++ ``` console + // Format: dotnet run add-devices num-to-add copy-devices delete-source-devices delete-destination-devices ++ // Add 1000 devices, don't copy them to the other hub or delete them. + dotnet run true 1000 false false false ++ // Do not add any devices. Copy the ones you just created to the other hub; don't delete anything. + dotnet run false 0 true false false + ``` ++ After you verify that the devices were copied successfully, you can remove the devices from the source hub like this: ++ ``` console + // Format: dotnet run add-devices num-to-add copy-devices delete-source-devices delete-destination-devices + // Delete the devices from the source hub. + dotnet run false 0 false true false + ```
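++ Before you run the delete step, you can sanity-check the copy with the Azure CLI. A sketch, not part of the original sample: it assumes the **azure-iot** extension, and `--top -1` is intended to request all devices rather than the default page size. ++ ``` azurecli + # Compare device counts on the source and destination hubs before deleting anything. + az iot hub device-identity list --hub-name MyOldHub --top -1 --query "length(@)" -o tsv + az iot hub device-identity list --hub-name MyNewHub --top -1 --query "length(@)" -o tsv + ```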
++### Run the sample application using Visual Studio ++1. If you want to run the application in Visual Studio, change your current directory to the folder where the azureiot.sln file resides. Then run this command in the command prompt window to open the solution in Visual Studio. You must do this in the same command window where you set the environment variables, so those variables are known. ++ ``` console + azureiot.sln + ``` ++1. Right-click on the project *ImportExportDevicesSample* and select **Set as startup project**. ++1. Set the variables at the top of Program.cs in the ImportExportDevicesSample folder for the five options. ++ ``` csharp + // Add randomly created devices to the source hub. + private static bool addDevices = true; + // If you ask to add devices, this will be the number added. + private static int numToAdd = 0; + // Copy the devices from the source hub to the destination hub. + private static bool copyDevices = false; + // Delete all of the devices from the source hub. (It uses the IoTHubConnectionString). + private static bool deleteSourceDevices = false; + // Delete all of the devices from the destination hub. (Uses the DestIotHubConnectionString). + private static bool deleteDestDevices = false; + ``` ++1. Select F5 to run the application. After it finishes running, you can view the results. ++### View the results ++You can view the devices in the [Azure portal](https://portal.azure.com) and verify they are in the new location. ++1. Go to the new hub using the [Azure portal](https://portal.azure.com). Select your hub, then select **IoT Devices**. You see the devices you copied from the old hub to the new hub. You can also view the properties for the new hub. ++1. Check for import/export errors by going to the Azure storage account in the [Azure portal](https://portal.azure.com) and looking in the `devicefiles` container for the `ImportErrors.log`. If this file is empty (the size is 0), there were no errors. If you try to import the same device more than once, it rejects the device the second time and adds an error message to the log file. ++### Commit the changes ++At this point, you have copied your hub to the new location and migrated the devices to the new hub. Now you need to make changes so the devices work with the new hub. ++To commit the changes, here are the steps you need to perform: ++* Update each device's IoT Hub host name so that it points to the new hub. You should do this using the same method you used when you first provisioned the device (a twin-based sketch follows this list). ++* Change any applications you have that refer to the old hub to point to the new hub. ++* After you're finished, the new hub should be up and running. The old hub should have no active devices and be in a disconnected state.
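++For example, if your devices read the target hub from their twin desired properties (one of the provisioning patterns mentioned earlier), you can push the new host name from the service side. A sketch, not part of the original article: the desired property name `targetHubHostName` is hypothetical and must match whatever your device code reads. ++ ``` azurecli + # Hypothetical desired property; the device code must watch for it and reconnect. + az iot hub device-twin update --hub-name MyOldHub --device-id MyDevice --desired '{"targetHubHostName": "MyNewHub.azure-devices.net"}' + ```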
++### Roll back the changes ++If you decide to roll back the changes, here are the steps to perform: ++* Update each device's IoT Hub host name so that it points back to the old hub. You should do this using the same method you used when you first provisioned the device. ++* Change any applications you have that refer to the new hub to point to the old hub. For example, if you're using Azure Stream Analytics, you may need to reconfigure your [Azure Stream Analytics input](../stream-analytics/stream-analytics-define-inputs.md#stream-data-from-iot-hub). ++* Delete the new hub. ++* If you have routing resources, the configuration on the old hub should still point to the correct routing configuration, and should work with those resources after the hub is restarted. ++### Check the results ++To check the results, change your IoT solution to point to your hub in the new location and run it. In other words, perform the same actions with the new hub that you performed with the previous hub and make sure they work correctly. ++If you have implemented routing, test and make sure your messages are routed to the resources correctly. ++## Clean up ++Don't clean up until you're certain the new hub is up and running and the devices are working correctly. Also be sure to test the routing if you're using that feature. When you're ready, clean up the old resources by performing these steps: ++* If you haven't already, delete the old hub. This removes all of the active devices from the hub. ++* If you have routing resources that you moved to the new location, you can delete the old routing resources. ++## Next steps ++You have migrated an IoT hub into a new hub in a new region, complete with the devices. For more information about performing bulk operations against the identity registry in an IoT Hub, see [Import and export IoT Hub device identities in bulk](iot-hub-bulk-identity-mgmt.md). |
iot-hub | Migrate Hub State Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-hub-state-cli.md | + + Title: How to migrate an IoT hub ++description: Use the Azure CLI iot hub state command group to migrate an IoT hub to a new region, a new tier, or a new configuration +++++ Last updated : 04/14/2023+++# How to automatically migrate an IoT hub using the Azure CLI ++Use the Azure CLI to migrate an IoT hub to a new region, a new tier, or a new configuration. ++The steps in this article are useful if you want to: ++* Upgrade from the free tier to a basic or standard tier IoT hub. +* Move an IoT hub to a new region. +* Export IoT hub state information to have as a backup. +* Increase the number of [partitions](iot-hub-scaling.md#partitions) for an IoT hub. +* Set up a hub for a development, rather than production, environment. ++## Compare automatic and manual migration steps ++The outcome of this article is similar to [How to manually migrate an Azure IoT hub using an Azure Resource Manager template](migrate-hub-arm.md), but with a different process. Before you begin, decide which process is right for your scenario. ++* The Azure CLI process (this article): ++ * Migrates your device registry, your routing and endpoint information, and other configuration details like IoT Edge deployments or automatic device management configurations. + * Is easier for migrating small numbers of devices (for example, up to 10,000). + * Doesn't require an Azure Storage account. + * Collects connection strings for routing and file upload endpoints and includes them in the ARM template output. ++* The manual process: ++ * Migrates your device registry and your routing and endpoint information. You have to manually recreate other configuration details in the new IoT hub. + * Is faster for migrating large numbers of devices (for example, more than 100,000). + * Uses an Azure Storage account to transfer the device registry. + * Scrubs connection strings for routing and file upload endpoints from the ARM template output, and you need to manually add them back in. ++## Prerequisites ++* Azure CLI ++ The features described in this article require version 0.20.0 or newer of the **azure-iot** extension. To check your extension version, run `az --version`. To update your extension, run `az extension update --name azure-iot`. ++ If you still have the legacy **azure-cli-iot-ext** extension installed, remove that extension before adding the **azure-iot** extension.
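++A sketch of the extension housekeeping described above; run only the commands that apply to your installation: ++```azurecli +# Remove the legacy extension if present, then install or update azure-iot. +az extension remove --name azure-cli-iot-ext +az extension add --name azure-iot +az extension update --name azure-iot +```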
++* **Devices.** This aspect represents the information in your device registry, which includes: ++ * Device identities and twins + * Module identities and twins ++Any IoT Hub property or configuration not listed here may not be exported or imported correctly. ++## Export the state of an IoT hub ++Use the [az iot hub state export](/cli/azure/iot/hub/state#az-iot-hub-state-export) command to export the state of an IoT hub to a JSON file. ++If you want to run both the export and import steps in one command, refer to the [Migrate an IoT hub](#migrate-an-iot-hub) section later in this article. ++When you export the state of an IoT hub, you can choose which aspects to export. ++| Parameter | Details | +| | - | +| `--aspects` | The state aspects to export. Specify one or more of the accepted values: **arm**, **configurations**, or **devices**. If this parameter is left out, then all three aspects are exported. | +| `--state-file -f` | The path to the file where the state information is written. | +| `--replace -r` | If this parameter is included, then the export command overwrites the contents of the state file. | +| `--hub-name -n`<br>**or**<br>`--login -l` | The name of the origin IoT hub (`-n`) or the connection string for the origin IoT hub (`-l`). If both are provided, then the connection string takes priority. | +| `--resource-group -g` | The name of the resource group for the origin IoT hub. | ++The following example exports all aspects of an IoT hub's state to a file named **myHub-state**: ++```azurecli +az iot hub state export --hub-name myHub --state-file ./myHub-state.json +``` ++The following example exports only the devices and Azure Resource Manager aspects of an IoT hub's state, and overwrites the content of the existing file: ++```azurecli +az iot hub state export --hub-name myHub --state-file ./myHub-state.json --aspects arm devices --replace +``` ++### Export endpoints ++If you choose to export the Azure Resource Manager aspect of an IoT hub, the export command retrieves the connection strings for any endpoints that have key-based authentication and includes them in the output ARM template. ++The export command also checks all endpoints to verify that the resources they connect to still exist. If a resource no longer exists, then that endpoint and any routes using that endpoint aren't exported. ++## Import the state of an IoT hub ++Use the [az iot hub state import](/cli/azure/iot/hub/state#az-iot-hub-state-import) command to import state information from an exported file to a new or existing IoT hub. ++If you want to run both the export and import steps in one command, refer to the [Migrate an IoT hub](#migrate-an-iot-hub) section later in this article. ++| Parameter | Details | +| | - | +| `--aspects` | The state aspects to import. Specify one or more of the accepted values: **arm**, **configurations**, or **devices**. If this parameter is left out, then all three aspects are imported. | +| `--state-file -f` | The path to the exported state file. | +| `--replace -r` | If this parameter is included, then the import command deletes the current state of the destination hub. | +| `--hub-name -n`<br>**or**<br>`--login -l` | The name of the destination IoT hub (`-n`) or the connection string for the destination IoT hub (`-l`). If both are provided, then the connection string takes priority. | +| `--resource-group -g` | The name of the resource group for the destination IoT hub. 
| ++The following example imports all aspects to a new IoT hub, which is created if it doesn't already exist: ++```azurecli +az iot hub state import --hub-name myNewHub --state-file ./myHub-state.json +``` ++The following example imports only the devices and configurations aspects to an IoT hub, which must already exist, and overwrites any existing devices and configurations: ++```azurecli +az iot hub state import --hub-name myNewHub --state-file ./myHub-state.json --aspects devices configurations --replace +``` ++### Create a new IoT hub with state import ++You can use the `az iot hub state import` command to create a new IoT hub or to write to an existing IoT hub. ++If you want to create a new IoT Hub, then you must include the `arm` aspect in the import command. If `arm` isn't included in the command, and the destination hub doesn't exist, then the import command fails. ++If the destination hub doesn't exist, then the `--resource-group` parameter is also required for the import command. ++### Update an existing IoT hub with state import ++If the destination IoT hub already exists, then the `arm` aspect isn't required for the `az iot hub state import` command. If you do include the `arm` aspect, all the resource properties will be overwritten except for the following properties that can't be changed after hub creation: ++* Location +* SKU +* Built-in Event Hubs partition count +* Data residency +* Features ++If the `--resource-group` parameter is specified in the import command and is different from the IoT hub's current resource group, then the command fails because it attempts to create a new hub with the same name as the one that already exists. ++If you include the `--replace` flag in the import command, then the following IoT hub aspects are removed from the destination hub before the hub state is uploaded: ++* **ARM**: Any uploaded certificates on the destination hub are deleted. If a certificate is present, it needs an etag to be updated. +* **Devices**: All devices and modules, edge and non-edge, are deleted. +* **Configurations**: All ADM configurations and IoT Edge deployments are deleted. ++## Migrate an IoT hub ++Use the [az iot hub state migrate](/cli/azure/iot/hub/state#az-iot-hub-state-migrate) command to migrate the state of one IoT hub to a new or existing IoT hub. ++This command wraps the export and import steps into a single command, but has no output files. All of the guidance and limitations described in the [Export the state of an IoT hub](#export-the-state-of-an-iot-hub) and [Import the state of an IoT hub](#import-the-state-of-an-iot-hub) sections apply to the `state migrate` command as well. ++If you're migrating a device registry with many devices (for example, a few hundred or a few thousand), you may find it easier and faster to run the export and import commands separately rather than running the migrate command. ++| Parameter | Details | +| | - | +| `--aspects` | The state aspects to migrate. Specify one or more of the accepted values: **arm**, **configurations**, or **devices**. If this parameter is left out, then all three aspects are migrated. | +| `--replace -r` | If this parameter is included, then the migrate command deletes the current state of the destination hub. | +| `--destination-hub --dh`<br>**or**<br>`--destination-hub-login --dl` | The name of the destination IoT hub (`--dh`) or the connection string for the destination IoT hub (`--dl`). If both are provided, then the connection string takes priority. 
| +| `--destination-resource-group --dg` | Name of the resource group for the destination IoT hub. The destination resource group is required if the destination hub doesn't exist. | +| `--origin-hub --oh`<br>**or**<br>`--origin-hub-login --ol` | The name of the origin IoT hub (`--oh`) or the connection string for the origin IoT hub (`--ol`). If both are provided, then the connection string takes priority. Use the connection string to avoid having to sign in to the Azure CLI session. | +| `--origin-resource-group --og` | The name of the resource group for the origin IoT hub. | ++The following example migrates all aspects of the origin hub to the destination hub, which is created if it doesn't exist: ++```azurecli +az iot hub state migrate --origin-hub myHub --origin-resource-group myGroup --destination-hub myNewHub --destination-resource-group myNewGroup +``` ++## Troubleshoot a migration ++If you can't export or import devices or configurations, check that you have access to those properties. One way to verify your access is by running the `az iot hub device-identity list` or `az iot hub configuration list` commands. ++If the `az iot hub state migrate` command fails, try running the export and import commands separately. Together, the two commands provide the same functionality as the migrate command, but by running them separately you can review the state file that the export command creates. ++## Next steps ++For more information about performing bulk operations against the identity registry in an IoT hub, see [Import and export IoT Hub device identities](./iot-hub-bulk-identity-mgmt.md). |
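As a sketch of the access check described in the troubleshooting section, with placeholder hub and resource group names:

```azurecli
# Verify you can read the device registry of the hub
az iot hub device-identity list --hub-name myHub --resource-group myGroup --output table

# Verify you can read automatic device management configurations and IoT Edge deployments
az iot hub configuration list --hub-name myHub --resource-group myGroup --output table
```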
machine-learning | How To Access Data Interactive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md | Typically the beginning of a machine learning project involves exploratory data > pip install -U azureml-fsspec mltable > ``` -## Access data from a datastore URI, like a filesystem (preview) +## Access data from a datastore URI, like a filesystem An Azure Machine Learning datastore is a *reference* to an *existing* storage account on Azure. The benefits of creating and using a datastore include: uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/w ``` These Datastore URIs are a known implementation of [Filesystem spec](https://filesystem-spec.readthedocs.io/en/latest/index.html) (`fsspec`): A unified pythonic interface to local, remote and embedded file systems and bytes storage.+You can pip install the `azureml-fsspec` package and its dependency, the `azureml-dataprep` package. Then you can use the Azure Machine Learning Datastore implementation of `fsspec`. The Azure Machine Learning Datastore implementation of `fsspec` automatically handles the credential/identity passthrough used by the Azure Machine Learning datastore. This means you don't need to expose account keys in your scripts or do additional sign-in procedures on a compute instance. + For example, you can directly use Datastore URIs in Pandas - below is an example of reading a CSV file: ```python from azureml.fsspec import AzureMachineLearningFileSystem fs = AzureMachineLearningFileSystem('azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastore/datastorename') # you can specify recursive as False to upload a file-fs.upload(lpath='data/upload_files/crime-spring.csv', rpath='data/fsspec', recursive=False, **{'overwrite': MERGE_WITH_OVERWRITE}) +fs.upload(lpath='data/upload_files/crime-spring.csv', rpath='data/fsspec', recursive=False, **{'overwrite': 'MERGE_WITH_OVERWRITE'}) # you need to specify recursive as True to upload a folder-fs.upload(lpath='data/upload_folder/', rpath='data/fsspec_folder', recursive=True, **{'overwrite': MERGE_WITH_OVERWRITE}) +fs.upload(lpath='data/upload_folder/', rpath='data/fsspec_folder', recursive=True, **{'overwrite': 'MERGE_WITH_OVERWRITE'}) ``` `lpath` is the local path, and `rpath` is the remote path. If the folders you specify in `rpath` do not exist yet, we will create the folders for you. |
operator-nexus | Howto Baremetal Review Read Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-review-read-output.md | Last updated 03/23/2023 -# How to view the output of an `az networkcloud run-read-command` in the Cluster Manager Storage account +# How to view the output of an `az networkcloud baremetalmachine run-read-command` in the Cluster Manager Storage account This guide walks you through accessing the output file that is created in the Cluster Manager Storage account when an `az networkcloud baremetalmachine run-read-command` is executed on a server. The name of the file is identified in the `az rest` status output. |
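Once the tar.gz file name is known, one way to retrieve it from the Cluster Manager Storage account is with `az storage blob download`. This is a sketch; the storage account name and blob name are placeholders, and the container name follows the `bmm-run-command-output` convention used for run-read-command output:

```azurecli
az storage blob download \
    --account-name "<clusterManagerStorageAccount>" \
    --container-name "bmm-run-command-output" \
    --name "<output-file-name>.tar.gz" \
    --file "./run-read-output.tar.gz" \
    --auth-mode login
```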
operator-nexus | Howto Baremetal Run Read | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-run-read.md | The command syntax is: ```azurecli az networkcloud baremetalmachine run-read-command --name "<machine-name>" --limit-time-seconds <timeout> \- --commands arguments="<arg1>" arguments="<arg2>" command="<command>" --resource-group "<resourceGroupName>" \ - --subscription "<subscription>" \ - --debug + --commands '[{"command":"<command1>"},{"command":"<command2>","arguments":["<arg1>","<arg2>"]}]' \ + --resource-group "<resourceGroupName>" \ + --subscription "<subscription>" ``` -These commands don't require `arguments`: - `fdisk -l` - `hostname` These commands don't require `arguments`: - `ss` - `ulimit -a` -All other inputs are required. Multiple commands are each specified with their own `--commands` option. +All other inputs are required. -Each `--commands` option specifies `command` and `arguments`. For a command with multiple arguments, `arguments` is repeated for each one. +Multiple commands can be provided in JSON format to the `--commands` option. -`--debug` is required to get the operation status that can be queried to get the URL for the output file. +For a command with multiple arguments, provide them as a list to the `arguments` parameter. See [Azure CLI Shorthand](https://github.com/Azure/azure-cli/blob/dev/doc/shorthand_syntax.md) for instructions on constructing the `--commands` structure. ++These commands can be long running, so the recommendation is to set `--limit-time-seconds` to at least 600 seconds (10 minutes). Running multiple extracts might take longer than 10 minutes. ++This command runs synchronously. If you wish to skip waiting for the command to complete, specify the `--no-wait --debug` options. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md). +When an optional argument `--output-directory` is provided, the output result is downloaded and extracted to the local directory. ### This example executes the `hostname` command and a `ping` command. ```azurecli az networkcloud baremetalmachine run-read-command --name "bareMetalMachineName" \ --limit-time-seconds 60 \- --commands command="hostname" \ - --commands arguments="192.168.0.99" arguments="-c" arguments="3" command="ping" \ + --commands '[{"command":"hostname"},{"command":"ping","arguments":["198.51.102.1","-c","3"]}]' \ --resource-group "resourceGroupName" \- --subscription "<subscription>" \ - --debug + --subscription "<subscription>" ``` In the response, an HTTP status code of 202 is returned as the operation is performed asynchronously. ## Checking command status and viewing output -The debug output of the command execution contains the 'Azure-AsyncOperation' response header. Note the URL provided. --```azurecli -cli.azure.cli.core.sdk.policies: 'Azure-AsyncOperation': 'https://management.azure.com/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/providers/Microsoft.NetworkCloud/locations/EASTUS/operationStatuses/0797fdd7-28eb-48ec-8c70-39a3f893421d*A0123456789F331FE47B40E2BFBCE2E133FD3ED2562348BFFD8388A4AAA1271?api-version=2022-09-30-preview' -``` -Check the status of the operation with the `az rest` command: +Sample output looks something like the following. It prints the top 4K characters of the result to the screen for convenience and provides a short-lived link to the storage blob containing the command execution result. 
You can use the link to download the zipped output file (tar.gz). ```azurecli-az rest --method get --url <Azure-AsyncOperation-URL> -``` + ====Action Command Output==== + + hostname + rack1compute01 + + ping 198.51.102.1 -c 3 + PING 198.51.102.1 (198.51.102.1) 56(84) bytes of data. -Repeat until the response to the URL displays the result of the run-read-command. + 198.51.102.1 ping statistics + 3 packets transmitted, 0 received, 100% packet loss, time 2049ms -Sample output looks something like this. The `Succeeded` `status` indicates the command was executed on the BMM. The `resultUrl` provides a link to the zipped output file that contains the output from the command execution. The tar.gz file name can be used to identify the file in the Storage account of the Cluster Manager resource group. -See [How To BareMetal Review Output Run-Read](howto-baremetal-review-read-output.md) for instructions on locating the output file in the Storage Account. You can also use the link to directly access the output zip file. -```azurecli -az rest --method get --url https://management.azure.com/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/providers/Microsoft.NetworkCloud/locations/EASTUS/operationStatuses/932a8fe6-12ef-419c-bdc2-5bb11a2a071d*C0123456789E735D5D572DECFF4EECE2DFDC121CC3FC56CD50069249183110F?api-version=2022-09-30-preview -{ - "endTime": "2023-03-01T12:38:10.8582635Z", - "error": {}, - "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/providers/Microsoft.NetworkCloud/locations/EASTUS/operationStatuses/932a8fe6-12ef-419c-bdc2-5bb11a2a071d*C0123456789E735D5D572DECFF4EECE2DFDC121CC3FC56CD50069249183110F", - "name": "932a8fe6-12ef-419c-bdc2-5bb11a2a071d*C0123456789E735D5D572DECFF4EECE2DFDC121CC3FC56CD50069249183110F", - "properties": { - "exitCode": "15", - "outputHead": "====Action Command Output====", - "resultUrl": "https://cmnvc94zkjhvst.blob.core.windows.net/bmm-run-command-output/af4fea82-294a-429e-9d1e-e93d54f4ea24-action-bmmruncmd.tar.gz?se=2023-03-01T16%3A38%3A07Z&sig=Lj9MS01234567898fn4qb2E1HORGh260EHdRrCJTJg%3D&sp=r&spr=https&sr=b&st=2023-03-01T12%3A38%3A07Z&sv=2019-12-12" - }, - "resourceId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/m01-xx-HostedResources-xx/providers/Microsoft.NetworkCloud/bareMetalMachines/m01r750wkr3", - "startTime": "2023-03-01T12:37:48.2823434Z", - "status": "Succeeded" -} + ================================ + Script execution result can be found in storage account: + https://<storage_account_name>.blob.core.windows.net/bmm-run-command-output/a8e0a5fe-3279-46a8-b995-51f2f98a18dd-action-bmmrunreadcmd.tar.gz?se=2023-04-14T06%3A37%3A00Z&sig=XXX&sp=r&spr=https&sr=b&st=2023-04-14T02%3A37%3A00Z&sv=2019-12-12 ```++See [How To BareMetal Review Output Run-Read](howto-baremetal-review-read-output.md) for instructions on locating the output file in the Storage Account. You can also use the link to directly access the output zip file. |
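As a sketch of the `--output-directory` option mentioned above, the result can be downloaded and extracted locally in the same call; the machine name, resource group, and directory are placeholders:

```azurecli
az networkcloud baremetalmachine run-read-command --name "bareMetalMachineName" \
    --limit-time-seconds 600 \
    --commands '[{"command":"hostname"}]' \
    --output-directory "/tmp/run-read-output" \
    --resource-group "resourceGroupName" \
    --subscription "<subscription>"
```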
operator-nexus | Howto Cluster Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-cluster-manager.md | Some arguments that are available for every Azure CLI command ## Create a Cluster Manager -Use the `az network clustermanager create` command to create a Cluster Manager. This command creates a new Cluster Manager or updates the properties of the Cluster Manager if it exists. If you have multiple Azure subscriptions, select the appropriate subscription ID using the [az account set](/cli/azure/account#az-account-set) command. +Use the `az networkcloud clustermanager create` command to create a Cluster Manager. This command creates a new Cluster Manager or updates the properties of the Cluster Manager if it exists. If you have multiple Azure subscriptions, select the appropriate subscription ID using the [az account set](/cli/azure/account#az-account-set) command. ```azurecli az networkcloud clustermanager create \ |
operator-nexus | Howto Cluster Metrics Configuration Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-cluster-metrics-configuration-management.md | -Users can't control the behavior (enable or disable) for collection of these included standard metrics. Though, users can control the collection of some optional metrics that aren't part of the link to the list. To enable this experience, users will have to create and update a MetricsConfiguration resource for a cluster. By default, creation of this MetricsConfiguration resource doesn't change the collection of metrics. User will have to update the resource to enable or disable these optional metrics collection. +Users can't control the behavior (enable or disable) for collection of these included standard metrics. However, users can control the collection of some optional metrics that aren't part of the linked list. To enable this experience, users have to create and update a MetricsConfiguration resource for a cluster. By default, creation of this MetricsConfiguration resource doesn't change the collection of metrics. Users have to update the resource to enable or disable the collection of these optional metrics. > [!NOTE] > * For a cluster, at max, only one MetricsConfiguration resource can be created. > * Users need to create a MetricsConfiguration resource to check a list of optional metrics that can be controlled. -> * Deletion of the MetricsConfiguration resource will result in the standard set of metrics being restored. +> * Deletion of the MetricsConfiguration resource results in the standard set of metrics being restored. ## How to manage cluster metrics configuration -To support the lifecycle of cluster metrics configurations, the following `az rest` interactions allow for the creation and management of a cluster's metrics configurations. +To support the lifecycle of cluster metrics configurations, the following interactions allow for the creation and management of a cluster's metrics configurations. ### Creating a metrics configuration -Use of the `az rest` command requires that the request input is defined, and then a `PUT` request is made to the `Microsoft.NetworkCloud` resource provider. --Define a file with the desired metrics configuration. +Use the `az networkcloud cluster metricsconfiguration create` command to create a metrics configuration for a cluster. If you have multiple Azure subscriptions, select the appropriate subscription ID using the [az account set](/cli/azure/account#az-account-set) command. ++```azurecli +az networkcloud cluster metricsconfiguration create \ + --cluster-name "<CLUSTER>" \ + --extended-location name="<CLUSTER_EXTENDED_LOCATION_ID>" type="CustomLocation" \ + --location "<LOCATION>" \ + --collection-interval <COLLECTION_INTERVAL (1-1440)> \ + --enabled-metrics "<METRIC_TO_ENABLE_1>" "<METRIC_TO_ENABLE_2>" \ + --tags <TAG_KEY1>="<TAG_VALUE1>" <TAG_KEY2>="<TAG_VALUE2>" \ + --resource-group "<RESOURCE_GROUP>" +``` * Replace values within `<` `>` with your specific information. * Query the cluster resource and find the value of `<CLUSTER_EXTENDED_LOCATION_ID>` in the `properties.clusterExtendedLocation` field. * The `--collection-interval` parameter is required; `--enabled-metrics` is optional and may be omitted. A filled-in sketch of this command follows. 
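For example, a filled-in create command might look like the following sketch; the cluster name, resource group, custom location ID, and metric names are all hypothetical:

```azurecli
az networkcloud cluster metricsconfiguration create \
    --cluster-name "myCluster" \
    --extended-location name="/subscriptions/<subscription>/resourceGroups/myClusterGroup/providers/Microsoft.ExtendedLocation/customLocations/myCluster-cl" type="CustomLocation" \
    --location "eastus" \
    --collection-interval 5 \
    --enabled-metrics "optional_metric_1" "optional_metric_2" \
    --resource-group "myClusterGroup"
```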
-Example filename: create_metrics_configuration.json --```json -{ - "location": "<REGION (example: eastus)>", - "extendedLocation": { - "name": "<CLUSTER-EXTENDED-LOCATION-ID>", - "type": "CustomLocation" - }, - "properties": { - "collectionInterval": <COLLECTION-INTERVAL (1-1440)>, - "enabledMetrics": [ - "<METRIC-TO-ENABLE-1>", - "<METRIC-TO-ENABLE-2>" - ] - } -} -``` - > [!NOTE] > * The default metrics collection interval for standard set of metrics is set to every 5 minutes. Changing the `collectionInterval` will also impact the collection frequency for default standard metrics.+> * There can be only one set of metrics configuration defined per cluster. The resource is created with the name `default`. -The following commands will create the metrics configuration. The only name allowed for the metricsConfiguration is `default`. -```sh -export SUBSCRIPTION=<the subscription id for the cluster> -export RESOURCE_GROUP=<the resource group for the cluster> -export CLUSTER=<the cluter name> +### Metrics configuration elements -az rest -m put -u "https://management.azure.com/subscriptions/${SUBSCRIPTION}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.NetworkCloud/clusters/${CLUSTER}/metricsConfigurations/default?api-version=2022-12-12-preview" -b @create_metrics_configuration.json --debug -``` +| Parameter name | Description | +| --| -- | +| CLUSTER | Resource Name of Cluster | +| LOCATION | The Azure Region where the Cluster is deployed | +| CLUSTER_EXTENDED_LOCATION_ID | The Cluster extended Location from Azure portal | +| COLLECTION_INTERVAL | The collection frequency for default standard metrics | +| RESOURCE_GROUP | The Cluster resource group name | +| TAG_KEY1 | Optional tag1 to pass to Cluster create | +| TAG_VALUE1 | Optional tag1 value to pass to Cluster create | +| TAG_KEY2 | Optional tag2 to pass to Cluster create | +| TAG_VALUE2 | Optional tag2 value to pass to Cluster create | +| METRIC_TO_ENABLE_1 | Optional metric1 that is enabled in addition to the default metrics | +| METRIC_TO_ENABLE_2 | Optional metric2 that is enabled in addition to the default metrics | -Specifying `--debug` in REST API will result in the tracking operation status in the returned command output. This operation status can be queried to monitor the progress of the operation. See: [How-to track asynchronous operations](howto-track-async-operations-cli.md). +Specifying the `--no-wait --debug` options causes this command to run asynchronously. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md). ## Retrieving a metrics configuration After a metrics configuration is created, it can be retrieved using the `az networkcloud cluster metricsconfiguration show` command: -```sh -export SUBSCRIPTION=<the subscription id for the cluster> -export RESOURCE_GROUP=<the resource group for the cluster> -export CLUSTER=<the cluter name> -az rest -m get -u "https://management.azure.com/subscriptions/${SUBSCRIPTION}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.NetworkCloud/clusters/${CLUSTER}/metricsConfigurations/default?api-version=2022-12-12-preview" +```azurecli +az networkcloud cluster metricsconfiguration show \ + --cluster-name "<CLUSTER>" \ + --resource-group "<RESOURCE_GROUP>" ``` -This command will return a JSON representation of the metrics configuration. 
+This command returns a JSON representation of the metrics configuration. ## Updating a metrics configuration -Much like the creation of a metrics configuration, an update can be performed to change the configuration. A file, containing the metrics to be updated, is consumed as an input. --Example filename: update_metrics_configuration.json +Much like the creation of a metrics configuration, an update can be performed to change the configuration or update the tags assigned to the metrics configuration. -```json -{ - "properties": { - "collectionInterval": <COLLECTION-INTERVAL (1-1440)>, - "enabledMetrics": [ - "<METRIC-TO-ENABLE-1>", - "<METRIC-TO-ENABLE-2>" - ] - } -} +```azurecli +az networkcloud cluster metricsconfiguration update \ + --cluster-name "<CLUSTER>" \ + --collection-interval <COLLECTION_INTERVAL (1-1440)> \ + --enabled-metrics "<METRIC_TO_ENABLE_1>" "<METRIC_TO_ENABLE_2>" \ + --tags <TAG_KEY1>="<TAG_VALUE1>" <TAG_KEY2>="<TAG_VALUE2>" \ + --resource-group "<RESOURCE_GROUP>" ``` -This file is used as input to an `az rest` command. The change may include either or both of the updatable fields, `collectionInterval` or `enabledMetrics`. The `collectionInterval` can be updated independently of `enabledMetrics`. Omit fields that aren't being changed. +The `collection-interval` can be updated independently of the `enabled-metrics` list. Omit fields that aren't being changed. -```sh -export SUBSCRIPTION=<the subscription id for the cluster> -export RESOURCE_GROUP=<the resource group for the cluster> -export CLUSTER=<the cluter name> --az rest -m put -u "https://management.azure.com/subscriptions/${SUBSCRIPTION}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.NetworkCloud/clusters/${CLUSTER}/metricsConfigurations/default?api-version=2022-12-12-preview" -b @update_metrics_configuration.json --debug -``` --Specifying `--debug` in REST API will result in the tracking operation status in the returned command output. This operation status can be queried to monitor the progress of the operation. See: [How-to track asynchronous operations](howto-track-async-operations-cli.md). +Specifying the `--no-wait --debug` options causes this command to run asynchronously. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md). ## Deleting a metrics configuration -Deletion of the metrics configuration will return the cluster to an unaltered configuration. To delete a metrics configuration, `az rest` API is used. +Deletion of the metrics configuration returns the cluster to an unaltered configuration. To delete a metrics configuration, use the following command: -```sh -export SUBSCRIPTION=<the subscription id for the cluster> -export RESOURCE_GROUP=<the resource group for the cluster> -export CLUSTER=<the cluter name> --az rest -m delete -u "https://management.azure.com/subscriptions/${SUBSCRIPTION}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.NetworkCloud/clusters/${CLUSTER}/metricsConfigurations/default?api-version=2022-12-12-preview" --debug +```azurecli +az networkcloud cluster metricsconfiguration delete \ + --cluster-name "<CLUSTER>" \ + --resource-group "<RESOURCE_GROUP>" ``` -Specifying `--debug` in REST API will result in the tracking operation status in the returned command output. This operation status can be queried to monitor the progress of the operation. See: [How-to track asynchronous operations](howto-track-async-operations-cli.md). 
+Specifying the `--no-wait --debug` options causes this command to run asynchronously. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md). ++ |
operator-nexus | Howto Configure Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-cluster.md | You can instead create a Cluster with ARM template/parameter files in | Parameter name | Description | | - | | | CLUSTER_NAME | Resource Name of the Cluster |-| LOCATION | The Azure Region where the Cluster will be deployed | +| LOCATION | The Azure Region where the Cluster is deployed | | CL_NAME | The Cluster Manager Custom Location from Azure portal | | CLUSTER_RG | The cluster resource group name | | LAW_ID | Log Analytics Workspace ID for the Cluster | You can instead create a Cluster with ARM template/parameter files in ### Cluster validation -A successful Operator Nexus Cluster creation will result in the creation of an AKS cluster +A successful Operator Nexus Cluster creation results in the creation of an AKS cluster inside your subscription. The cluster ID, cluster provisioning state and deployment state are returned as a result of a successful `cluster create`. az networkcloud cluster show --resource-group "$CLUSTER_RG" \ --resource-name "$CLUSTER_RESOURCE_NAME" ``` -The Cluster deployment is complete when the `provisioningState` of the resource +The Cluster creation is complete when the `provisioningState` of the resource shows: `"provisioningState": "Succeeded"` ### Cluster logging Cluster create Logs can be viewed in the following locations: ## Deploy Cluster -Once a Cluster has been created and the Rack Manifests have been added, the -deploy cluster action can be triggered. The deploy Cluster action creates the -bootstrap image and deploys the Cluster. +Once a Cluster has been created, the deploy cluster action can be triggered. +The deploy Cluster action creates the bootstrap image and deploys the Cluster. -Deploy Cluster will initiate a sequence of events to occur in the Cluster Manager +Deploy Cluster initiates a sequence of events to occur in the Cluster Manager 1. Validation of the cluster/rack properties 2. Generation of a bootable image for the ephemeral bootstrap cluster Deploy the on-premises Cluster: az networkcloud cluster deploy \ --name "$CLUSTER_NAME" \ --resource-group "$CLUSTER_RESOURCE_GROUP" \- --subscription "$SUBSCRIPTION_ID" + --subscription "$SUBSCRIPTION_ID" \ + --no-wait --debug ``` +This command runs synchronously. If you wish to skip waiting for the command to complete, specify the `--no-wait --debug` options. For more information, see [how to track asynchronous operations](howto-track-async-operations-cli.md). + ## Cluster deployment validation View the status of the cluster: ```azurecli-az networkcloud Cluster show --resource-group "$CLUSTER_RG" \ +az networkcloud cluster show --resource-group "$CLUSTER_RG" \ --resource-name "$CLUSTER_RESOURCE_NAME" ``` -The Cluster deployment is complete when the `provisioningState` of the resource -shows: `"provisioningState": "Succeeded"` +The Cluster deployment is complete when `detailedStatus` is set to `Running` and `detailedStatusMessage` shows the message `Cluster is up and running`. ## Cluster deployment Logging |
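To watch just the fields called out above, one option is a JMESPath query on the show command. This is a sketch; the property paths are an assumption and may vary by CLI version (some versions nest these fields under `properties`):

```azurecli
# Show only the provisioning and detailed status fields of the cluster
az networkcloud cluster show --resource-group "$CLUSTER_RG" \
    --resource-name "$CLUSTER_RESOURCE_NAME" \
    --query "{provisioningState:provisioningState, detailedStatus:detailedStatus, detailedStatusMessage:detailedStatusMessage}" \
    --output table
```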
operator-nexus | Template Virtualized Network Function Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/template-virtualized-network-function-deployment.md | export mysub="******" export mynfid='******' export myplatcustloc='******' export myhakscustloc='******'+export myHybridAksPluginType='****' ```+Note: Valid values for `hybrid-aks-plugin-type` are `OSDevice`, `SR-IOV`, and `DPDK`. The default is `SR-IOV`. ## Initialization az networkcloud l3network create --name "$myl3n-mgmt" \ --extended-location name="$myplatcustloc" type="CustomLocation" \ --location "$myloc" \ --hybrid-aks-ipam-enabled "False" \---hybrid-aks-plugin-type "HostDevice" \+--hybrid-aks-plugin-type "$myHybridAksPluginType" \ --ip-allocation-type "$myalloctype" \ --ipv4-connected-prefix "$myipv4sub" \ --l3-isolation-domain-id "$myl3isdarm" \ az networkcloud l3network create --name "$myl3n-trust" \ --extended-location name="$myplatcustloc" type="CustomLocation" \ --location "$myloc" \ --hybrid-aks-ipam-enabled "False" \---hybrid-aks-plugin-type "HostDevice" \+--hybrid-aks-plugin-type "$myHybridAksPluginType" \ --ip-allocation-type "$myalloctype" \ --ipv4-connected-prefix "$myipv4sub" \ --l3-isolation-domain-id "$myl3isdarm" \ az networkcloud l3network create --name "$myl3n-untrust" \ --extended-location name="$myplatcustloc" type="CustomLocation" \ --location "$myloc" \ --hybrid-aks-ipam-enabled "False" \---hybrid-aks-plugin-type "HostDevice" \+--hybrid-aks-plugin-type "$myHybridAksPluginType" \ --ip-allocation-type "$myalloctype" \ --ipv4-connected-prefix "$myipv4sub" \ --l3-isolation-domain-id "$myl3isdarm" \ az networkcloud l2network create --name "$myl2n" \ --subscription "$mysub" \ --extended-location name="$myplatcustloc" type="CustomLocation" \ --location "$myloc" \---hybrid-aks-plugin-type "HostDevice" \+--hybrid-aks-plugin-type "$myHybridAksPluginType" \ --l2-isolation-domain-id "$myl2isdarm" \ --debug ``` |
sap | Plan Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/plan-deployment.md | The following table shows the required permissions for the service principals: > | Azure CLI | Installing [Azure CLI](/cli/azure/install-azure-cli-linux) | Setup of Deployer and during deployments | The firewall requirements for Azure CLI installation are defined here: [Installing Azure CLI](/cli/azure/azure-cli-endpoints) | > | PIP | 'bootstrap.pypa.io' | Setup of Deployer | See [Installing Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) | > | Ansible | 'pypi.org', 'pythonhosted.org', 'galaxy.ansible.com' | Setup of Deployer | |-> | PowerShell Gallery | 'onegetcdn.azureedge.net', 'psg-prod-centralus.azureedge.net', 'psg-prod-eastus.azureedge.net' | Setup of Windows based systems | See [PowerShell Gallery](/powershell/scripting/gallery/getting-started?#network-access-to-the-powershell-gallery) | +> | PowerShell Gallery | 'onegetcdn.azureedge.net', 'psg-prod-centralus.azureedge.net', 'psg-prod-eastus.azureedge.net' | Setup of Windows based systems | See [PowerShell Gallery](/powershell/gallery/gallery/getting-started#network-access-to-the-powershell-gallery) | > | Windows components | 'download.visualstudio.microsoft.com', 'download.visualstudio.microsoft.com', 'download.visualstudio.com' | Setup of Windows based systems | See [Visual Studio components](/visualstudio/install/install-and-use-visual-studio-behind-a-firewall-or-proxy-server#install-visual-studio) | > | SAP Downloads | 'softwaredownloads.sap.com' | SAP Software download | See [SAP Downloads](https://launchpad.support.sap.com/#/softwarecenter) | > | Azure DevOps Agent | 'https://vstsagentpackage.azureedge.net' | Setup Azure DevOps | | The automation framework also supports having the deployment environment and SAP The deployment environment provides the following -- A deployment VM, which does Terraform deployments and Ansible configuration.+- One or more deployment virtual machines, which perform the infrastructure deployments using Terraform and perform the system configuration and SAP installation using Ansible playbooks. - A key vault, which contains service principal identity information for use by Terraform deployments. - An Azure Firewall component, which provides outbound internet connectivity. The deployment configuration file defines the region, environment name, and virtual network information. 
For example: -```json -{ - "infrastructure": { - "environment": "MGMT", - "region": "westeurope", - "vnets": { - "management": { - "address_space": "0.0.0.0/25", - "subnet_mgmt": { - "prefix": "0.0.0.0/28" - }, - "subnet_fw": { - "prefix": "0.0.0.0/26" - } - } - } - }, - "options": { - "enable_deployer_public_ip": true - }, - "firewall_deployment": true -} +```terraform +# The environment value is a mandatory field, it is used for partitioning the environments, for example (PROD and NP) +environment = "MGMT" ++# The location/region value is a mandatory field, it is used to control where the resources are deployed +location = "westeurope" ++# management_network_address_space is the address space for management virtual network +management_network_address_space = "10.10.20.0/25" ++# management_subnet_address_prefix is the address prefix for the management subnet +management_subnet_address_prefix = "10.10.20.64/28" ++# management_firewall_subnet_address_prefix is the address prefix for the firewall subnet +management_firewall_subnet_address_prefix = "10.10.20.0/26" ++# management_bastion_subnet_address_prefix is a mandatory parameter if bastion is deployed and if the subnets are not defined in the workload or if existing subnets are not used +management_bastion_subnet_address_prefix = "10.10.20.128/26" ++deployer_enable_public_ip = false ++firewall_deployment = true ++bastion_deployment = true ``` For more information, see the [in-depth explanation of how to configure the deployer](configure-control-plane.md). |
sentinel | Normalization Manage Parsers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-manage-parsers.md | Some parsers require you to update the list of sources that are relevant to the - Set the `SourceType` field to the parser-specific value specified in the parser documentation. - Set the `Source` field to the identifier of the source used in the events. You may need to query the original table, such as Syslog, to determine the correct value. +If your system doesn't have the `Sources_by_SourceType` watchlist deployed, deploy the watchlist to your Microsoft Sentinel workspace from the Microsoft Sentinel [GitHub](https://aka.ms/DeployASimWatchlists) repository. + ## <a name="next-steps"></a>Next steps This article discusses managing the Advanced Security Information Model (ASIM) parsers. |
storage | File Sync Troubleshoot Installation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-installation.md | After deploying the Storage Sync Service, the next steps in deploying Azure File ## Agent installation <a id="agent-installation-failures"></a>**Troubleshoot agent installation failures** -If the Azure File Sync agent installation fails, at an elevated command prompt, run the following command to turn on logging during agent installation: +If the Azure File Sync agent installation fails, check the installation log file in the agent installation directory. If the Azure File Sync agent is installed on the C: volume, the installation log file is located under C:\Program Files\Azure\StorageSyncAgent\InstallerLog. +> [!Note] +> If the Azure File Sync agent is installed from the command line and the /l\*v switch is used, the log file will be located in the path where the agent installation was executed. ++The log file name for agent installations using the MSI package is AfsAgentInstall. The log file name for agent installations using the MSP package (update package) is AfsUpdater. ++Once you have located the agent installation log file, open the file and search for the failure code at the end of the log. If you search for **error code 1603** or **sandbox**, you should be able to locate the error code. ++Here is a snippet from an agent installation that failed: ```-StorageSyncAgent.msi /l*v AFSInstaller.log +CAQuietExec64: + CategoryInfo : SecurityError: (:) , PSSecurityException +CAQuietExec64: + FullyQualifiedErrorId : UnauthorizedAccess +CAQuietExec64: Error 0x80070001: Command line returned an error. +CAQuietExec64: Error 0x80070001: QuietExec64 Failed +CAQuietExec64: Error 0x80070001: Failed in ExecCommon64 method +CustomAction SetRegPIIAclSettings returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox) +Action ended 12:23:40: InstallExecute. Return value 3. +MSI (s) (0C:C8) [12:23:40:994]: Note: 1: 2265 2: 3: -2147287035 ``` -Review installer.log to determine the cause of the installation failure. +For this example, the agent installation failed with error code -2147287035 (ERROR_ACCESS_DENIED). <a id="agent-installation-gpo"></a>**Agent installation fails with error: Storage Sync Agent Setup Wizard ended prematurely because of an error** In the agent installation log, the following error is logged: ```-CAQuietExec64: + CategoryInfo : SecurityError: (:) , PSSecurityException -CAQuietExec64: + FullyQualifiedErrorId : UnauthorizedAccess -CAQuietExec64: Error 0x80070001: Command line returned an error. +CAQuietExec64: + CategoryInfo : SecurityError: (:) , PSSecurityException +CAQuietExec64: + FullyQualifiedErrorId : UnauthorizedAccess +CAQuietExec64: Error 0x80070001: Command line returned an error. +CAQuietExec64: Error 0x80070001: QuietExec64 Failed +CAQuietExec64: Error 0x80070001: Failed in ExecCommon64 method +CustomAction SetRegPIIAclSettings returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox) +Action ended 12:23:40: InstallExecute. Return value 3. +MSI (s) (0C:C8) [12:23:40:994]: Note: 1: 2265 2: 3: -2147287035 ``` This issue occurs if the [PowerShell execution policy](/powershell/module/microsoft.powershell.core/about/about_execution_policies#use-group-policy-to-manage-execution-policy) is configured using group policy and the policy setting is "Allow only signed scripts." 
All scripts included with the Azure File Sync agent are signed. The Azure File Sync agent installation fails because the installer is performing the script execution using the Bypass execution policy setting. This issue occurs if the [PowerShell execution policy](/powershell/module/micros To resolve this issue, temporarily disable the [Turn on Script Execution](/powershell/module/microsoft.powershell.core/about/about_execution_policies#use-group-policy-to-manage-execution-policy) group policy setting on the server. Once the agent installation completes, the group policy setting can be re-enabled. <a id="agent-installation-on-DC"></a>**Agent installation fails on Active Directory Domain Controller** -If you try to install the sync agent on an Active Directory domain controller where the PDC role owner is on a Windows Server 2008 R2 or below OS version, you may hit the issue where the sync agent will fail to install. ++In the agent installation log, the following error is logged: ++``` +CAQuietExec64: Error 0x80070001: Command line returned an error. +CAQuietExec64: Error 0x80070001: CAQuietExec64 Failed +CustomAction InstallHFSRequiredWindowsFeatures returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox) +Action ended 8:51:12: InstallExecute. Return value 3. +MSI (s) (EC:B4) [08:51:12:439]: Note: 1: 2265 2: 3: -2147287035 +``` ++This issue occurs if you try to install the sync agent on an Active Directory domain controller where the PDC role owner is on Windows Server 2008 R2 or an earlier OS version. To resolve this issue, transfer the PDC role to another domain controller running Windows Server 2012 R2 or newer, then install the sync agent. |
storage | File Sync Troubleshoot Sync Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-errors.md | This article is designed to help you troubleshoot and resolve common sync issues [!INCLUDE [storage-sync-files-change-detection](../../../includes/storage-sync-files-change-detection.md)] <a id="serverendpoint-pending"></a>**Server endpoint health is in a pending state for several hours** -This issue is expected if you create a cloud endpoint and use an Azure file share that contains data. The change enumeration job that scans for changes in the Azure file share must complete before files can sync between the cloud and server endpoints. The time to complete the job is dependent on the size of the namespace in the Azure file share. The server endpoint health should update once the change enumeration job completes. +This issue is expected if you create a cloud endpoint and use an Azure file share that contains data. The cloud change enumeration job that scans for changes in the Azure file share must complete before files can sync between the cloud and server endpoints. The time to complete the job is dependent on the size of the namespace in the Azure file share. The server endpoint health should update once the change enumeration job completes. ++To check the status of the cloud change enumeration job, go to the Cloud Endpoint properties in the portal; the status is provided in the Change Enumeration section. ### <a id="broken-sync"></a>How do I monitor sync health? # [Portal](#tab/portal1) Server endpoint provisioning fails with this error code if these conditions are * This server endpoint was provisioned with the initial sync mode: [server authoritative](file-sync-server-endpoint-create.md#initial-sync-section) * Local server path is empty or contains no items recognized as able to sync. -This provisioning error protects you from deleting all content that might be available in an Azure file share. Server authoritative upload is a special mode to catch up a cloud location that was already seeded, with the updates from the server location. Review this [migration guide](../files/storage-files-migration-server-hybrid-databox.md) to understand the scenario for which this mode has been built for. +This provisioning error protects you from deleting all content that might be available in an Azure file share. Server authoritative upload is a special mode to catch up a cloud location that was already seeded, with the updates from the server location. Review this [migration guide](../files/storage-files-migration-server-hybrid-databox.md) to understand the scenario for which this mode has been built. 1. Remove the server endpoint in the sync group by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md). 1. Create a new server endpoint in the sync group by following the steps documented in [Add a server endpoint](file-sync-server-endpoint-create.md). |
storage | Storage Files Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md | + File conflicts are created when the file in the Azure file share doesn't match the file in the server endpoint location (size and/or last modified time is different). + + The following scenarios can cause file conflicts: + - A file is created or modified in an endpoint (for example, Server A). If the same file is modified on a different endpoint before the change on Server A is synced to that endpoint, a conflict file is created. + - The file existed in the Azure file share and server endpoint location prior to the server endpoint creation. If the file size and/or last modified time is different between the file on the server and Azure file share when the server endpoint is created, a conflict file is created. + - Sync database was recreated due to corruption or knowledge limit reached. Once the database is recreated, sync enters a mode called reconciliation. If the file size and/or last modified time is different between the file on the server and Azure file share when reconciliation occurs, a conflict file is created. + Azure File Sync uses a simple conflict-resolution strategy: we keep both changes to files that are changed in two endpoints at the same time. The most recently written change keeps the original file name. The older file (determined by LastWriteTime) has the endpoint name and the conflict number appended to the file name. For server endpoints, the endpoint name is the name of the server. For cloud endpoints, the endpoint name is **Cloud**. The name follows this taxonomy: \<FileNameWithoutExtension\>-\<endpointName\>\[-#\].\<ext\> For example, the first conflict file for `CompanyReport.docx` would be `CompanyReport-CentralServer.docx` if the older write occurred on the server named CentralServer. |
storage | Storage Files Scale Targets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md | File scale targets apply to individual files stored in Azure file shares. <sup>3 Azure Files supports 2,000 open handles per share, and in practice can go higher. However, if an application keeps an open handle on the root of the share, the share root limit will be reached before the per-file or per-directory limit is reached.</sup> ## Azure File Sync scale targets-The following table indicates which target are soft, representing the Microsoft tested boundary, and hard, indicating an enforced maximum: +The following table indicates which targets are soft, representing the Microsoft tested boundary, and hard, indicating an enforced maximum: | Resource | Target | Hard limit | |-|--|| The following table indicates which target are soft, representing the Microsoft > [!Note] > An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync will not be able to operate. -### Azure File Sync performance metrics +## Azure File Sync performance metrics Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, the effective sync performance depends upon a number of factors in your infrastructure: Windows Server and the underlying disk configuration, network bandwidth between the server and the Azure storage, file size, total dataset size, and the activity on the dataset. Since Azure File Sync works on the file level, the performance characteristics of an Azure File Sync-based solution should be measured by the number of objects (files and directories) processed per second. For Azure File Sync, performance is critical in two stages: For Azure File Sync, performance is critical in two stages: > [!Note] > When many server endpoints in the same sync group are syncing at the same time, they are contending for cloud service resources. As a result, upload performance will be impacted. In extreme cases, some sync sessions will fail to access the resources, and will fail. However, those sync sessions will resume shortly and eventually succeed once the congestion is reduced. -To help you plan your deployment for each of the stages, below are the results observed during the internal testing on a system with a config +## Internal test results +To help you plan your deployment for each of the stages (initial one-time provisioning and ongoing sync), below are the results observed during the internal testing on a system with the following configuration: | System configuration | Details | |-|-| To help you plan your deployment for each of the stages, below are the results o | Network | 1 Gbps Network | | Workload | General Purpose File Server| +### Initial one-time provisioning + | Initial one-time provisioning | Details | |-|-| | Number of objects | 25 million objects | To help you plan your deployment for each of the stages, below are the results o | Upload Throughput | 20 objects per second per sync group | | Namespace Download Throughput | 400 objects per second | -### Initial one-time provisioning - **Initial cloud change enumeration**: When a new sync group is created, initial cloud change enumeration is the first step that will execute. In this process, the system will enumerate all the items in the Azure File Share. During this process, there will be no sync activity i.e. 
no items will be downloaded from cloud endpoint to server endpoint and no items will be uploaded from server endpoint to cloud endpoint. Sync activity will resume once initial cloud change enumeration completes. The rate of performance is 80 objects per second. Customers can estimate the time it will take to complete initial cloud change enumeration by determining the number of items in the cloud share and using the following formula to get the time in days: Time (in days) for initial cloud change enumeration = (number of objects in the cloud share) / (80 * 60 * 60 * 24). Splitting your data into multiple server endpoints and sync groups can speed up **Namespace download throughput** When a new server endpoint is added to an existing sync group, the Azure File Sync agent does not download any of the file content from the cloud endpoint. It first syncs the full namespace and then triggers background recall to download the files, either in their entirety or, if cloud tiering is enabled, to the cloud tiering policy set on the server endpoint. +### Ongoing sync + | Ongoing sync | Details | +|-|--| +| Number of objects synced | 125,000 objects (~1% churn) | |
virtual-machine-scale-sets | Virtual Machine Scale Sets Automatic Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md | The following platform SKUs are currently supported (and more are added periodic - Ensure that external resources specified in the scale set model are available and updated. Examples include SAS URI for bootstrapping payload in VM extension properties, payload in storage account, reference to secrets in the model, and more. - For scale sets using Windows virtual machines, starting with Compute API version 2019-03-01, the *virtualMachineProfile.osProfile.windowsConfiguration.enableAutomaticUpdates* property must be set to *false* in the scale set model definition. The *enableAutomaticUpdates* property enables in-VM patching where "Windows Update" applies operating system patches without replacing the OS disk. With automatic OS image upgrades enabled on your scale set, an extra patching process through Windows Update is not required. +> [!NOTE] +> After an OS disk is replaced through reimage or upgrade, the attached data disks may have their drive letters reassigned. To retain the same drive letters for attached disks, it is suggested to use a custom boot script. ++ ### Service Fabric requirements If you are using Service Fabric, ensure the following conditions are met: |
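A sketch of both settings mentioned above, using the generic `--set` syntax of `az vmss update`; the scale set and resource group names are placeholders:

```azurecli
# Turn off in-VM patching through Windows Update, as required for automatic OS image upgrade
az vmss update --name myScaleSet --resource-group myResourceGroup \
    --set virtualMachineProfile.osProfile.windowsConfiguration.enableAutomaticUpdates=false

# Enable automatic OS image upgrade on the scale set
az vmss update --name myScaleSet --resource-group myResourceGroup \
    --set upgradePolicy.automaticOSUpgradePolicy.enableAutomaticOSUpgrade=true
```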
virtual-machine-scale-sets | Virtual Machine Scale Sets Terminate Notification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md | -Once enrolled into the feature, scale set instances don't need to wait for specified timeout to expire before the instance is deleted. After receiving a Terminate notification, the instance can choose to be deleted at any time before the terminate timeout expires. +Once enrolled in the feature, scale set instances don't need to wait for the specified timeout to expire before the instance is deleted. After receiving a Terminate notification, the instance can choose to be deleted at any time before the terminate timeout expires. Terminate notifications cannot be enabled on Spot instances. For more information on Spot instances, see [Azure Spot Virtual Machines for Virtual Machine Scale Sets](use-spot.md). ## Enable terminate notifications There are multiple ways of enabling termination notifications on your scale set instances, as detailed in the examples below. |
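One way to enable the feature on an existing scale set is with `az vmss update`. This is a sketch with placeholder names; it assumes the `--enable-terminate-notification` and `--terminate-notification-time` parameters available in recent versions of the Azure CLI:

```azurecli
# Enable terminate notifications with a 10-minute timeout
az vmss update \
    --name myScaleSet \
    --resource-group myResourceGroup \
    --enable-terminate-notification true \
    --terminate-notification-time 10
```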