Updates from: 01/19/2021 04:05:21
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-sync-endpoint-api-v2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-endpoint-api-v2.md
@@ -23,7 +23,7 @@ Microsoft has deployed a new endpoint (API) for Azure AD Connect that improves t
> [!NOTE] > Currently, the new endpoint does not have a configured group size limit for Microsoft 365 groups that are written back. This may have an effect on your Active Directory and sync cycle latencies. It is recommended to increase your group sizes incrementally.
-## Pre-requisites
+## Prerequisites
In order to use the new V2 endpoint, you will need to use [Azure AD Connect version 1.5.30.0](https://www.microsoft.com/download/details.aspx?id=47594) or later and follow the deployment steps provided below to enable the V2 endpoint for your Azure AD Connect server.

## Deployment guidance
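The full article's deployment steps switch the server by loading a PowerShell module that ships with Azure AD Connect. A hedged sketch, assuming the module path and cmdlet names documented in the linked article; verify against your installation:

```powershell
# Sketch: enable the V2 endpoint on the Azure AD Connect server.
Import-Module 'C:\Program Files\Microsoft Azure AD Sync\Extensions\AADConnector.psm1'
Set-ADSyncScheduler -SyncCycleEnabled $false   # pause the sync scheduler first
Switch-ADSyncToV2ApiEndpoint                   # Switch-ADSyncToV1ApiEndpoint reverts to V1
Set-ADSyncScheduler -SyncCycleEnabled $true    # resume the scheduler
```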
@@ -175,7 +175,7 @@ If you have enabled the v2 endpoint and need to rollback, follow these steps:
## Frequently asked questions **When will the new end point become the default for upgrades and new installations?**
-</br>We are planning a new release of AADConnect to be published for download in January 2021. This release will use the V2 end point by default and will enable syncing groups larger than 50K withuot any additional configuration. THis release will subsequently be published for auto upgrade to eligible servers.
+</br>We are planning a new release of AADConnect to be published for download in January 2021. This release will use the V2 end point by default and will enable syncing groups larger than 50K without any additional configuration. This release will subsequently be published for auto upgrade to eligible servers.
## Next steps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/custom-assign-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-assign-powershell.md
@@ -1,6 +1,6 @@
---
-title: Assign custom roles using Azure PowerShell - Azure AD | Microsoft Docs
-description: Manage members of an Azure AD administrator custom role with Azure PowerShell.
+title: Assign custom roles using Azure AD PowerShell - Azure AD | Microsoft Docs
+description: Manage members of an Azure AD administrator custom role with Azure AD PowerShell.
services: active-directory author: curtand manager: daveba
@@ -16,7 +16,7 @@ ms.collection: M365-identity-device-management
--- # Assign custom roles with resource scope using PowerShell in Azure Active Directory
-This article describes how to create a role assignment at organization-wide scope in Azure Active Directory (Azure AD). Assigning a role at organization-wide scope grants access across the Azure AD organization. To create a role assignment with a scope of a single Azure AD resource, see [How to create a custom role and assign it at resource scope](custom-create.md).This article uses the [Azure Active Directory PowerShell Version 2](/powershell/module/azuread/#directory_roles) module.
+This article describes how to create a role assignment at organization-wide scope in Azure Active Directory (Azure AD). Assigning a role at organization-wide scope grants access across the Azure AD organization. To create a role assignment with a scope of a single Azure AD resource, see [How to create a custom role and assign it at resource scope](custom-create.md). This article uses the [Azure Active Directory PowerShell Version 2](/powershell/module/azuread/#directory_roles) module.
For more information about Azure AD admin roles, see [Assigning administrator roles in Azure Active Directory](permissions-reference.md).
@@ -26,26 +26,26 @@ Connect to your Azure AD organization using a global administrator account to as
## Prepare PowerShell
-Install the Azure AD PowerShell module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureADPreview/2.0.0.17). Then import the Azure AD PowerShell preview module, using the following command:
+Install the Azure AD PowerShell module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureADPreview). Then import the Azure AD PowerShell preview module, using the following command:
``` PowerShell
-import-module azureadpreview
+Import-Module AzureADPreview
``` To verify that the module is ready to use, match the version returned by the following command to the one listed here: ``` PowerShell
-get-module azureadpreview
+Get-Module AzureADPreview
ModuleType Version Name ExportedCommands ---------- --------- ---- ---------------- Binary 2.0.0.115 azureadpreview {Add-AzureADMSAdministrati...} ```
-Now you can start using the cmdlets in the module. For a full description of the cmdlets in the Azure AD module, see the online reference documentation for [Azure AD preview module](https://www.powershellgallery.com/packages/AzureADPreview/2.0.0.17).
+Now you can start using the cmdlets in the module. For a full description of the cmdlets in the Azure AD module, see the online reference documentation for [Azure AD preview module](https://www.powershellgallery.com/packages/AzureADPreview).
-## Assign a role to a user or service principal with resource scope
+## Assign a directory role to a user or service principal with resource scope
-1. Open the Azure AD preview PowerShell module.
+1. Load the Azure AD PowerShell (Preview) module.
1. Sign in by executing the command `Connect-AzureAD`. 1. Create a new role using the following PowerShell script.
@@ -63,13 +63,13 @@ $resourceScope = '/' + $appRegistration.objectId
$roleAssignment = New-AzureADMSRoleAssignment -ResourceScope $resourceScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId ```
-To assign the role to a service principal instead of a user, use the [Get-AzureADMSServicePrincipal cmdlet](/powershell/module/azuread/get-azureadserviceprincipal).
+To assign the role to a service principal instead of a user, use the [Get-AzureADMSServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) cmdlet.
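A minimal sketch of that variant, reusing `$roleDefinition` and `$resourceScope` from the script above ('My App' is a hypothetical display name):

```powershell
# Sketch: assign the custom role to a service principal rather than a user.
$sp = Get-AzureADMSServicePrincipal -Filter "displayName eq 'My App'"
$roleAssignment = New-AzureADMSRoleAssignment -ResourceScope $resourceScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $sp.Id
```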
-## Operations on RoleDefinition
+## Role definitions
-Role definition objects contain the definition of the built-in or custom role, along with the permissions that are granted by that role assignment. This resource displays both custom role definitions and built-in directoryRoles (which are displayed in roleDefinition equivalent form). Today, an Azure AD organization can have a maximum of 30 unique custom RoleDefinitions defined.
+Role definition objects contain the definition of the built-in or custom role, along with the permissions that are granted by that role assignment. This resource displays both custom role definitions and built-in directory roles (which are displayed in roleDefinition equivalent form). Today, an Azure AD organization can have a maximum of 30 unique custom role definitions defined.
-### Create Operations on RoleDefinition
+### Create a role definition
``` PowerShell # Basic information
@@ -77,32 +77,32 @@ $description = "Can manage credentials of application registrations"
$displayName = "Application Registration Credential Administrator" $templateId = (New-Guid).Guid
-# Set of actions to grant
-$allowedResourceAction =
-@(
- "microsoft.directory/applications/standard/read",
- "microsoft.directory/applications/credentials/update"
-)
-$rolePermissions = @{'allowedResourceActions'= $allowedResourceAction}
+# Set of actions to include
+$rolePermissions = @{
+ "allowedResourceActions" = @(
+ "microsoft.directory/applications/standard/read",
+ "microsoft.directory/applications/credentials/update"
+ )
+}
-# Create new custom admin role
+# Create new custom directory role
$customAdmin = New-AzureADMSRoleDefinition -RolePermissions $rolePermissions -DisplayName $displayName -Description $description -TemplateId $templateId -IsEnabled $true ```
-### Read Operations on RoleDefinition
+### Read and list role definitions
``` PowerShell # Get all role definitions Get-AzureADMSRoleDefinition
-# Get single role definition by objectId
+# Get single role definition by ID
Get-AzureADMSRoleDefinition -Id 86593cfc-114b-4a15-9954-97c3494ef49b # Get single role definition by templateId Get-AzureADMSRoleDefinition -Filter "templateId eq 'c4e39bd9-1100-46d3-8c65-fb160da0071f'" ```
-### Update Operations on RoleDefinition
+### Update a role definition
``` PowerShell # Update role definition
@@ -111,18 +111,18 @@ Get-AzureADMSRoleDefinition -Filter "templateId eq 'c4e39bd9-1100-46d3-8c65-fb16
Set-AzureADMSRoleDefinition -Id c4e39bd9-1100-46d3-8c65-fb160da0071f -DisplayName "Updated DisplayName" ```
-### Delete operations on RoleDefinition
+### Delete a role definition
``` PowerShell # Delete role definition Remove-AzureADMSRoleDefinition -Id c4e39bd9-1100-46d3-8c65-fb160da0071f ```
-## Operations on RoleAssignment
+## Role assignments
-Role assignments contain information linking a given security principal (a user or application service principal) to a role definition. If required, you can add a scope of a single Azure AD resource for the assigned permissions. Restricting the scope of permissions is supported for built-in and custom roles.
+Role assignments contain information linking a given security principal (a user or application service principal) to a role definition. If required, you can add a scope of a single Azure AD resource for the assigned permissions. Restricting the scope of a role assignment is supported for built-in and custom roles.
-### Create Operations on RoleAssignment
+### Create a role assignment
``` PowerShell # Get the user and role definition you want to link
@@ -137,7 +137,7 @@ $resourceScope = '/' + $appRegistration.objectId
$roleAssignment = New-AzureADMSRoleAssignment -ResourceScope $resourceScope -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId ```
-### Read Operations on RoleAssignment
+### Read and list role assignments
``` PowerShell # Get role assignments for a given principal
@@ -147,7 +147,7 @@ Get-AzureADMSRoleAssignment -Filter "principalId eq '27c8ca78-ab1c-40ae-bd1b-eae
Get-AzureADMSRoleAssignment -Filter "roleDefinitionId eq '355aed8a-864b-4e2b-b225-ea95482e7570'" ```
-### Delete Operations on RoleAssignment
+### Delete a role assignment
``` PowerShell # Delete role assignment
@@ -157,5 +157,5 @@ Remove-AzureADMSRoleAssignment -Id 'qiho4WOb9UKKgng_LbPV7tvKaKRCD61PkJeKMh7Y458-
## Next steps - Share with us on the [Azure AD administrative roles forum](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=166032)
-- For more about roles and azure AD administrator role assignments, see [Assign administrator roles](permissions-reference.md)
+- For more about roles and Azure AD administrator role assignments, see [Assign administrator roles](permissions-reference.md)
- For default user permissions, see a [comparison of default guest and member user permissions](../fundamentals/users-default-permissions.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/permissions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
@@ -51,7 +51,7 @@ The following administrator roles are available:
Users in this role can create and manage all aspects of enterprise applications, application registrations, and application proxy settings. Note that users assigned to this role are not added as owners when creating new application registrations or enterprise applications.
-This role also grants the ability to _consent_ to delegated permissions and application permissions, with the exception of permissions on the Microsoft Graph API.
+This role also grants the ability to _consent_ to delegated permissions and application permissions, with the exception of application permissions on the Microsoft Graph API.
> [!IMPORTANT] > This exception means that you can still consent to permissions for _other_ apps (for example, non-Microsoft apps or apps that you have registered), but not to permissions on Azure AD itself. You can still _request_ these permissions as part of the app registration, but _granting_ (that is, consenting to) these permissions requires an Azure AD admin. This means that a malicious user cannot easily elevate their permissions, for example by creating and consenting to an app that can write to the entire directory and through that app's permissions elevate themselves to become a global admin.
aks https://docs.microsoft.com/en-us/azure/aks/upgrade-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/upgrade-cluster.md
@@ -104,7 +104,7 @@ To confirm that the upgrade was successful, use the [az aks show][az-aks-show] c
az aks show --resource-group myResourceGroup --name myAKSCluster --output table ```
-The following example output shows that the cluster now runs *1.13.10*:
+The following example output shows that the cluster now runs *1.18.10*:
```json Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
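For context, the upgrade that produces this output can be driven with the Azure CLI commands from the full article; a sketch using the article's example resource names:

```powershell
# Sketch: check available upgrades, upgrade the cluster, then confirm the new version.
# Azure CLI commands; runnable from PowerShell or any shell with the CLI installed.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.18.10
az aks show --resource-group myResourceGroup --name myAKSCluster --output table
```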
@@ -184,4 +184,4 @@ This article showed you how to upgrade an existing AKS cluster. To learn more ab
[az-feature-register]: /cli/azure/feature#az-feature-register [az-provider-register]: /cli/azure/provider?view=azure-cli-latest#az-provider-register&preserve-view=true [nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
-[upgrade-cluster]: #upgrade-an-aks-cluster
\ No newline at end of file
+[upgrade-cluster]: #upgrade-an-aks-cluster
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-other https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-other.md
@@ -145,6 +145,7 @@ The *function.json* file in the *HttpExample* folder declares an HTTP trigger fu
```rust use std::collections::HashMap; use std::env;
+ use std::net::Ipv4Addr;
use warp::{http::Response, Filter}; #[tokio::main]
@@ -164,7 +165,7 @@ The *function.json* file in the *HttpExample* folder declares an HTTP trigger fu
Err(_) => 3000, };
- warp::serve(example1).run(([127, 0, 0, 1], port)).await
+ warp::serve(example1).run((Ipv4Addr::UNSPECIFIED, port)).await
} ```
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-instance-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-instance-management.md
@@ -988,8 +988,8 @@ from datetime import datetime, timedelta
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.HttpResponse: client = df.DurableOrchestrationClient(starter)
- created_time_from = datetime.datetime()
- created_time_to = datetime.datetime.today + timedelta(days = -30)
+ created_time_from = datetime.min
+ created_time_to = datetime.today() + timedelta(days = -30)
runtime_statuses = [OrchestrationRuntimeStatus.Completed] return client.purge_instance_history_by(created_time_from, created_time_to, runtime_statuses)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/network-performance-monitor-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/network-performance-monitor-faq.md
@@ -30,7 +30,7 @@ Listed below are the platform requirements for NPM's various capabilities:
- NPM's ExpressRoute Monitor capability supports only Windows server (2008 SP1 or later) operating system. ### Can I use Linux machines as monitoring nodes in NPM?
-The capability to monitor networks using Linux-based nodes is currently in preview. Acccess the agent [here](../../virtual-machines/extensions/oms-linux.md). Reach out to your Account Manager to know more. Linux agents provide monitoring capability only for NPM's Performance Monitor capability, and are not available for the Service Connectivity Monitor and ExpressRoute Monitor capabilities
+The capability to monitor networks using Linux-based nodes is now generally available. Access the agent [here](../../virtual-machines/extensions/oms-linux.md). Linux agents provide monitoring capability only for NPM's Performance Monitor capability, and are not available for the Service Connectivity Monitor and ExpressRoute Monitor capabilities
### What are the size requirements of the nodes to be used for monitoring by NPM? For running the NPM solution on node VMs to monitor networks, the nodes should have at least 500-MB memory and one core. You don't need to use separate nodes for running NPM. The solution can run on nodes that have other workloads running on it. The solution has the capability to stop the monitoring process if it uses more than 5% CPU.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/logs-dedicated-clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/log-query/logs-dedicated-clusters.md
@@ -53,7 +53,7 @@ More details are billing for Log Analytics dedicated clusters are available [her
## Asynchronous operations and status check
-Some of the configuration steps run asynchronously because they can't be completed quickly. The status in response contains can be one of the followings: 'InProgress', 'Updating', 'Deleting', 'Succeeded or 'Failed' including the error code. When using REST, the response initially returns an HTTP status code 200 (OK) and header with Azure-AsyncOperation property when accepted:
+Some of the configuration steps run asynchronously because they can't be completed quickly. The status in the response can be one of the following: 'InProgress', 'Updating', 'Deleting', 'Succeeded', or 'Failed' (including the error code). When using REST, the response initially returns an HTTP status code 202 (Accepted) and a header with the Azure-AsyncOperation property:
```JSON "Azure-AsyncOperation": "https://management.azure.com/subscriptions/subscription-id/providers/Microsoft.OperationalInsights/locations/region-name/operationStatuses/operation-id?api-version=2020-08-01"
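A hedged sketch of consuming that header, polling until the operation leaves its transient states (`$armToken` is an assumed, pre-acquired Azure Resource Manager bearer token; the placeholder URI segments stand in for your own values):

```powershell
# Sketch: poll the Azure-AsyncOperation URL until the operation finishes.
$statusUri = "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.OperationalInsights/locations/<region>/operationStatuses/<operation-id>?api-version=2020-08-01"
do {
    Start-Sleep -Seconds 30
    $operation = Invoke-RestMethod -Uri $statusUri -Headers @{ Authorization = "Bearer $armToken" }
} while ($operation.status -in @('InProgress', 'Updating', 'Deleting'))
Write-Output "Final status: $($operation.status)"   # 'Succeeded' or 'Failed'
```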
@@ -120,7 +120,7 @@ Content-type: application/json
*Response*
-Should be 200 OK and a header.
+Should be 202 (Accepted) and a header.
### Check cluster provisioning status
@@ -224,7 +224,7 @@ Content-type: application/json
*Response*
-200 OK and header.
+202 (Accepted) and header.
### Check workspace link status
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/customer-managed-keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/customer-managed-keys.md
@@ -91,7 +91,7 @@ N/A
# [REST](#tab/rest)
-When using REST, the response initially returns an HTTP status code 200 (OK) and header with *Azure-AsyncOperation* property when accepted:
+When using REST, the response initially returns an HTTP status code 202 (Accepted) and header with *Azure-AsyncOperation* property:
```json "Azure-AsyncOperation": "https://management.azure.com/subscriptions/subscription-id/providers/Microsoft.OperationalInsights/locations/region-name/operationStatuses/operation-id?api-version=2020-08-01" ```
@@ -197,7 +197,7 @@ It takes the propagation of the key a few minutes to complete. You can check the
2. Send a GET request on the cluster and look at the *KeyVaultProperties* properties. Your recently updated key should return in the response. A response to GET request should look like this when the key update is complete:
-200 OK and header
+202 (Accepted) and header
```json { "identity": {
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-dashboard-errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-dashboard-errors.md new file mode 100644
@@ -0,0 +1,67 @@
+---
+title: Common errors
+description: This document contains information about common errors that exist in the dashboard
+ms.subservice: alerts
+ms.topic: conceptual
+author: nolavime
+ms.author: nolavime
+ms.date: 01/18/2021
+
+---
+
+# Errors in the connector status
+
+In the connector status list, you can find errors that can help you fix your ITSM connector.
+
+## Common status errors
+
+In this section, you can find the common errors that appear in the status list and how to resolve them:
+
+* **Error**: "Unexpected response from ServiceNow along with success status code. Response: { "import_set": "{import_set_id}", "staging_table": "x_mioms_microsoft_oms_incident", "result": [ { "transform_map": "OMS Incident", "table": "incident", "status": "error", "error_message": "{Target record not found|Invalid table|Invalid staging table" }"
+
+   **Cause**: This error is returned from ServiceNow when:
+   * A custom script deployed in the ServiceNow instance causes incidents to be ignored.
+   * The "OMS Integrator App" code itself was modified on the ServiceNow side (for example, the onBefore script).
+
+ **Resolution**: Disable all custom scripts or code modifications of the data import path.
+
+* **Error**: "{"error":{"message":"Operation Failed","detail":"ACL Exception Update Failed due to security constraints"}"
+
+   **Cause**: ServiceNow permissions are misconfigured.
+
+ **Resolution**: Check that all the roles have been properly assigned as [specified](itsmc-connections-servicenow.md#install-the-user-app-and-create-the-user-role).
+
+* **Error**: "An error occurred while sending the request."
+
+   **Cause**: The ServiceNow instance is unavailable.
+
+   **Resolution**: Check your instance in ServiceNow; it might have been deleted or be unavailable.
+
+* **Error**: "ServiceDeskHttpBadRequestException: StatusCode=429"
+
+ **Cause**: ServiceNow rate limits are too low.
+
+   **Resolution**: Increase or cancel the rate limits in the ServiceNow instance as explained [here](https://docs.servicenow.com/bundle/london-application-development/page/integrate/inbound-rest/task/investigate-rate-limit-violations.html).
+
+* **Error**: "AccessToken and RefreshToken invalid. User needs to authenticate again."
+
+ **Cause**: Refresh token is expired.
+
+ **Resolution**: Sync the ITSM Connector to generate a new refresh token as explained [here](./itsmc-resync-servicenow.md).
+
+* **Error**: "Could not create/update work item for alert {alertName}. ITSM Connector {connectionIdentifier} does not exist or was deleted."
+
+ **Cause**: ITSM Connector was deleted.
+
+   **Resolution**: The ITSM Connector was deleted, but there are still ITSM actions defined to use it. To solve this issue, do one of the following:
+   * Find and disable or delete the action.
+ * [Reconfigure the action group](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts) to use an existing ITSM Connector.
+ * [Create a new ITSM connector](./itsmc-definition.md#create-an-itsm-connection) and [reconfigure the action group to use it](itsmc-definition.md#create-itsm-work-items-from-azure-alerts).
+
+## Common UI errors
+
+* **Error**: "Something went wrong. Could not get connection details."
+
+   **Cause**: A newly created ITSM Connector has yet to finish its initial sync.
+
+   **Resolution**: When a new ITSM connector is created, it starts syncing information from the ITSM system, such as work item templates and work items. Sync the ITSM Connector to generate a new refresh token as explained [here](./itsmc-resync-servicenow.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-resync-servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-resync-servicenow.md
@@ -5,67 +5,16 @@ ms.subservice: alerts
ms.topic: conceptual author: nolavime ms.author: nolavime
-ms.date: 04/12/2020
+ms.date: 01/17/2021
---
-# Troubleshooting problems in ITSM Connector
-
-This article discusses common problems in ITSM Connector and how to troubleshoot them.
-
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues before the users of your system notice them. For more information on alerting, see Overview of alerts in Microsoft Azure.
-The customer can select how they want to be notified on the alert whether it is by mail, SMS, Webhook or even to automate a solution. Another option to be notified is using ITSM.
-ITSM gives you the option to send the alerts to external ticketing system such as ServiceNow.
-
-## Visualize and analyze the incident and change request data
-
-Depending on your configuration when you set up a connection, ITSMC can sync up to 120 days of incident and change request data. The log record schema for this data is provided in the [Additional information Section](./itsmc-synced-data.md) of this article.
-
-You can visualize the incident and change request data by using the ITSMC dashboard:
-
-![Screenshot that shows the ITSMC dashboard.](media/itsmc-overview/itsmc-overview-sample-log-analytics.png)
-
-The dashboard also provides information about connector status, which you can use as a starting point to analyze problems with the connections.
-
-In order to get more information about the dashboard investigation, see [Error Investigation using the dashboard](./itsmc-dashboard.md).
-
-### Service map
-
-You can also visualize the incidents synced against the affected computers in Service Map.
-
-Service Map automatically discovers the application components on Windows and Linux systems and maps the communication between services. It allows you to view your servers as you think of them: as interconnected systems that deliver critical services. Service Map shows connections between servers, processes, and ports across any TCP-connected architecture. Other than the installation of an agent, no configuration is required. For more information, see [Using Service Map](../insights/service-map.md).
-
-If you're using Service Map, you can view the service desk items created in ITSM solutions, as shown here:
-
-![Screenshot that shows the Log Analytics screen.](media/itsmc-overview/itsmc-overview-integrated-solutions.png)
-
-## Troubleshoot ITSM connections
-
-- If a connection fails to connect to the ITSM system and you get an **Error in saving connection** message, take the following steps:
- - For ServiceNow, Cherwell, and Provance connections:
- - Ensure that you correctly entered the user name, password, client ID, and client secret for each of the connections.
- - Ensure that you have sufficient privileges in the corresponding ITSM product to make the connection.
- - For Service Manager connections:
- - Ensure that the web app is successfully deployed and that the hybrid connection is created. To verify the connection is successfully established with the on-premises Service Manager computer, go to the web app URL as described in the documentation for making the [hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection).
-
-- If data from ServiceNow isn't getting synced to Log Analytics, ensure that the ServiceNow instance isn't sleeping. ServiceNow dev instances sometimes go to sleep when they're idle for a long time. If that isn't what's happening, report the problem.
-- If Log Analytics alerts fire but work items aren't created in the ITSM product, if configuration items aren't created/linked to work items, or for other information, see these resources:
- - ITSMC: The solution shows a summary of connections, work items, computers, and more. Select the tile that has the **Connector Status** label. Doing so takes you to **Log Search** with the relevant query. Look at log records with a `LogType_S` of `ERROR` for more information.
- - **Log Search** page: View the errors and related information directly by using the query `*ServiceDeskLog_CL*`.
-
-### Troubleshoot Service Manager web app deployment
-
-- If you have problems with web app deployment, ensure that you have permissions to create/deploy resources in the subscription.
-- If you get an **Object reference not set to instance of an object** error when you run the [script](itsmc-service-manager-script.md), ensure that you entered valid values in the **User Configuration** section.
-- If you fail to create the service bus relay namespace, ensure that the required resource provider is registered in the subscription. If it's not registered, manually create the service bus relay namespace from the Azure portal. You can also create it when you [create the hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection) in the Azure portal.
-
-### How to manually fix sync problems
+# How to manually fix sync problems
Azure Monitor can connect to third-party IT Service Management (ITSM) providers. ServiceNow is one of those providers. For security reasons, you may need to refresh the authentication token used for your connection with ServiceNow. Use the following synchronization process to reactivate the connection and refresh the token: - 1. Search for the solution in the top search banner, then select the relevant solutions ![Screenshot that shows the top search banner and where to select the relevant solutions.](media/itsmc-resync-servicenow/solution-search-8bit.png)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-troubleshoot-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-troubleshoot-overview.md new file mode 100644
@@ -0,0 +1,58 @@
+---
+title: Troubleshooting problems in ITSM Connector
+description: Troubleshooting problems in IT Service Management Connector
+ms.subservice: alerts
+ms.topic: conceptual
+author: nolavime
+ms.author: nolavime
+ms.date: 04/12/2020
+
+---
+# Troubleshooting problems in ITSM Connector
+
+This article discusses common problems in ITSM Connector and how to troubleshoot them.
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues before the users of your system notice them. For more information on alerting, see Overview of alerts in Microsoft Azure.
+Customers can select how they want to be notified of an alert, whether by mail, SMS, webhook, or even by automating a solution. Another notification option is ITSM.
+ITSM gives you the option to send alerts to an external ticketing system such as ServiceNow.
+
+## Visualize and analyze the incident and change request data
+
+Depending on your configuration when you set up a connection, ITSMC can sync up to 120 days of incident and change request data. The log record schema for this data is provided in the [Additional information Section](./itsmc-synced-data.md) of this article.
+
+You can visualize the incident and change request data by using the ITSMC dashboard:
+
+![Screenshot that shows the ITSMC dashboard.](media/itsmc-overview/itsmc-overview-sample-log-analytics.png)
+
+The dashboard also provides information about connector status, which you can use as a starting point to analyze problems with the connections.
+
+In order to get more information about the dashboard investigation, see [Error Investigation using the dashboard](./itsmc-dashboard.md).
+
+### Service map
+
+You can also visualize the incidents synced against the affected computers in Service Map.
+
+Service Map automatically discovers the application components on Windows and Linux systems and maps the communication between services. It allows you to view your servers as you think of them: as interconnected systems that deliver critical services. Service Map shows connections between servers, processes, and ports across any TCP-connected architecture. Other than the installation of an agent, no configuration is required. For more information, see [Using Service Map](../insights/service-map.md).
+
+If you're using Service Map, you can view the service desk items created in ITSM solutions, as shown here:
+
+![Screenshot that shows the Log Analytics screen.](media/itsmc-overview/itsmc-overview-integrated-solutions.png)
+
+## Troubleshoot ITSM connections
+
+- If a connection fails to connect to the ITSM system and you get an **Error in saving connection** message, take the following steps:
+ - For ServiceNow, Cherwell, and Provance connections:
+ - Ensure that you correctly entered the user name, password, client ID, and client secret for each of the connections.
+ - Ensure that you have sufficient privileges in the corresponding ITSM product to make the connection.
+ - For Service Manager connections:
+ - Ensure that the web app is successfully deployed and that the hybrid connection is created. To verify the connection is successfully established with the on-premises Service Manager computer, go to the web app URL as described in the documentation for making the [hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection).
+
+- If Log Analytics alerts fire but work items aren't created in the ITSM product, if configuration items aren't created/linked to work items, or for other information, see these resources:
+ - ITSMC: The solution shows a summary of connections, work items, computers, and more. Select the tile that has the **Connector Status** label. Doing so takes you to **Log Search** with the relevant query. Look at log records with a `LogType_S` of `ERROR` for more information.
+ - **Log Search** page: View the errors and related information directly by using the query `*ServiceDeskLog_CL*`.
+
+### Troubleshoot Service Manager web app deployment
+
+- If you have problems with web app deployment, ensure that you have permissions to create/deploy resources in the subscription.
+- If you get an **Object reference not set to instance of an object** error when you run the [script](itsmc-service-manager-script.md), ensure that you entered valid values in the **User Configuration** section.
+- If you fail to create the service bus relay namespace, ensure that the required resource provider is registered in the subscription. If it's not registered, manually create the service bus relay namespace from the Azure portal. You can also create it when you [create the hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection) in the Azure portal.
\ No newline at end of file
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-restore-files-from-vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-restore-files-from-vm.md
@@ -182,7 +182,7 @@ In Linux, the volumes of the recovery point are mounted to the folder where the
If the file recovery process hangs after you run the file-restore script (for example, if the disks are never mounted, or they're mounted but the volumes don't appear), perform the following steps: 1. In the file /etc/iscsi/iscsid.conf, change the setting from:
- - `node.conn[0].timeo.noop_out_timeout = 5` to `node.conn[0].timeo.noop_out_timeout = 30`
+ - `node.conn[0].timeo.noop_out_timeout = 5` to `node.conn[0].timeo.noop_out_timeout = 120`
2. After making the above changes, rerun the script. If there are transient failures, ensure there is a gap of 20 to 30 minutes between reruns to avoid successive bursts of requests impacting the target preparation. This interval between re-runs will ensure the target is ready for connection from the script. 3. After file recovery, make sure you go back to the portal and select **Unmount disks** for recovery points where you weren't able to mount volumes. Essentially, this step will clean any existing processes/sessions and increase the chance of recovery.
backup https://docs.microsoft.com/en-us/azure/backup/backup-managed-disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-managed-disks.md new file mode 100644
@@ -0,0 +1,217 @@
+---
+title: Back up Azure Managed Disks
+description: Learn how to back up Azure Managed Disks from the Azure portal.
+ms.topic: conceptual
+ms.date: 01/07/2021
+---
+
+# Back up Azure Managed Disks (in preview)
+
+>[!IMPORTANT]
+>Azure Disk Backup is in preview without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For region availability, see the [support matrix](disk-backup-support-matrix.md).
+>
+>[Fill out this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR1vE8L51DIpDmziRt_893LVUNFlEWFJBN09PTDhEMjVHS05UWFkxUlUzUS4u) to sign up for the preview.
+
+This article explains how to back up [Azure Managed Disk](https://docs.microsoft.com/azure/virtual-machines/managed-disks-overview) from the Azure portal.
+
+In this article, you'll learn how to:
+
+- Create a Backup vault
+
+- Create a backup policy
+
+- Configure a backup of an Azure Disk
+
+- Run an on-demand backup job
+
+For information on the Azure Disk backup region availability, supported scenarios and limitations, see the [support matrix](disk-backup-support-matrix.md).
+
+## Create a Backup vault
+
+A Backup vault is a storage entity in Azure that holds backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers and Azure Disks. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data.
+
+1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com/).
+1. Type **Backup center** in the search box.
+1. Under **Services**, select **Backup center**.
+1. In the **Backup center** page, select **Vault**.
+
+ ![Select Vault in Backup center](./media/backup-managed-disks/backup-center.png)
+
+1. In the **Initiate: Create Vault** screen, select **Backup vault**, and **Proceed**.
+
+ ![Initiate: Create vault](./media/backup-managed-disks/initiate-create-vault.png)
+
+1. In the **Basics** tab, provide subscription, resource group, backup vault name, region, and backup storage redundancy. Continue by selecting **Review + create**. Learn more about [creating a Backup vault](https://docs.microsoft.com/azure/backup/backup-vault-overview#create-a-backup-vault).
+
+ ![Review and create vault](./media/backup-managed-disks/review-and-create.png)
+
+## Create Backup policy
+
+1. In the *DemoVault* **Backup vault** created in the previous step, go to **Backup policies** and select **Add**.
+
+ ![Add backup policy](./media/backup-managed-disks/backup-policies.png)
+
+1. In the **Basics** tab, provide a policy name and select **Azure Disk** as the **Datasource type**. The vault is already prepopulated, and the selected vault's properties are presented.
+
+ >[!NOTE]
+ > Although the selected vault may have the global-redundancy setting, currently Azure Disk Backup supports snapshot datastore only. All backups are stored in a resource group in your subscription and aren't copied to backup vault storage.
+
+ ![Select datasource type](./media/backup-managed-disks/datasource-type.png)
+
+1. In the **Backup policy** tab, select the backup schedule frequency.
+
+ ![Select backup schedule frequency](./media/backup-managed-disks/backup-schedule-frequency.png)
+
+   Azure Disk Backup offers multiple backups per day. If you require more frequent backups, choose the **Hourly** backup frequency with the ability to take backups at intervals of every 4, 6, 8 or 12 hours. The backups are scheduled based on the **Time** interval selected. For example, if you select **Every 4 hours**, then the backups are taken at approximately 4-hour intervals so that the backups are distributed equally across the day. If a once-a-day backup is sufficient, then choose the **Daily** backup frequency. In the daily backup frequency, you can specify the time of the day when your backups are taken. It's important to note that the time of the day indicates the backup start time and not the time when the backup completes. The time required for completing the backup operation is dependent on various factors including the size of the disk and the churn rate between consecutive backups. However, Azure Disk backup is an agentless backup that uses [incremental snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots-portal), which doesn't impact the production application performance.
+
+1. In the **Backup policy** tab, select retention settings that meet the recovery point objective (RPO) requirement.
+
+ The default retention rule applies if no other retention rule is specified. The default retention rule can be modified to change the retention duration, but it cannot be deleted. You can add a new retention rule by selecting **Add retention rule**.
+
+ ![Add a retention rule](./media/backup-managed-disks/add-retention-rule.png)
+
+   You can pick the **first successful backup** taken daily or weekly, and provide the duration for which those specific backups are to be retained before they're deleted. This option is useful to retain specific backups of the day or week for a longer duration of time. All other frequent backups can be retained for a shorter duration.
+
+ ![Retention settings](./media/backup-managed-disks/retention-settings.png)
+
+ >[!NOTE]
+ >Azure Backup for Managed Disks uses incremental snapshots which are limited to 200 snapshots per disk. To allow you to take on-demand backups aside from scheduled backups, backup policy limits the total backups to 180. Learn more about [incremental snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots-portal#restrictions) for managed disk.
+
+1. Complete the backup policy creation by selecting **Review + create**.
+
+## Configure backup
+
+Backup Vault uses Managed Identity to access other Azure resources. To configure backup of managed disks, the Backup vault's managed identity requires a set of permissions on the source disks and resource groups where snapshots are created and managed.
+
+A system assigned managed identity is restricted to one per resource and is tied to the lifecycle of this resource. You can grant permissions to the managed identity by using Azure role-based access control (Azure RBAC). Managed identity is a service principal of a special type that may only be used with Azure resources. Learn more about [Managed Identities](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview).
+
+The following prerequisites are required to configure backup of managed disks:
+
+1. Assign the **Disk Backup Reader** role to the Backup vault's managed identity on the source disk that is to be backed up.
+
+ 1. Go to the disk that needs to be backed up.
+
+ 1. Go to **Access control (IAM)** and select **Add role assignments**
+
+   1. On the right context pane, select **Disk Backup Reader** in the **Role** dropdown list. Select the backup vault's managed identity and **Save**.
+
+ >[!TIP]
+   >Type the backup vault name to select the vault's managed identity.
+
+ ![Add disk backup reader role](./media/backup-managed-disks/disk-backup-reader-role.png)
+
+1. Assign the **Disk Snapshot Contributor** role to the Backup vault's managed identity on the resource group where backups are created and managed by the Azure Backup service. The disk snapshots are stored in a resource group within your subscription. To allow the Azure Backup service to create, store, and manage snapshots, you need to provide permissions to the backup vault (a scripted alternative is sketched at the end of this section).
+
+ **Choosing resource group for storing and managing snapshots:**
+
+ - Don't select the same resource group as that of the source disk.
+
+ - As a guideline, it's recommended to create a dedicated resource group as a snapshot datastore to be used by the Azure Backup service. Having a dedicated resource group allows restricting access permissions on the resource group, providing safety and ease of management of the backup data.
+
+ - You can use this resource group for storing snapshots across multiple disks that are being (or planned to be) backed up.
+
+ - You can't create an incremental snapshot for a particular disk outside of that disk's subscription. So choose the resource group within the same subscription as that of the disk to be backed up. Learn more about [incremental snapshot](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots-portal#restrictions) for managed disks.
+
+ To assign the role, follow these steps:
+
+ 1. Go to the Resource group. For example, the resource group is *SnapshotRG*, which is in the same subscription as that of the disk to be backed up.
+
+ 1. Go to **Access control (IAM)** and select **Add role assignments**.
+
+   1. On the right context pane, select **Disk Snapshot Contributor** in the **Role** dropdown list. Select the backup vault's managed identity and **Save**.
+
+ >[!TIP]
+   >Type the backup vault name to select the vault's managed identity.
+
+ ![Add disk snapshot contributor role](./media/backup-managed-disks/disk-snapshot-contributor-role.png)
+
+1. Verify that the backup vault's managed identity has the right set of role assignments on the source disk and resource group that serves as the snapshot datastore.
+
+   1. Go to **Backup vault -> Identity** and select **Azure role assignments**.
+
+ ![Select Azure Role Assignments](./media/backup-managed-disks/azure-role-assignments.png)
+
+ 1. Verify that the role, resource name, and resource type are correctly reflected.
+
+ ![Verify the role, resource name and resource type](./media/backup-managed-disks/verify-role.png)
+
+ >[!NOTE]
+   >While the role assignments are reflected correctly on the portal, it may take approximately 15 minutes for the permission to be applied on the backup vault's managed identity.
+
+1. Once the prerequisites are met, go to **Backup vault -> Overview** and select **Backup** to start configuring backup of the disks.
+
+ ![Select backup](./media/backup-managed-disks/select-backup.png)
+
+1. In the **Basics** tab, select **Azure Disk** as the datasource type.
+
+ ![Select Azure Disk](./media/backup-managed-disks/select-azure-disk.png)
+
+ >[!NOTE]
+ >Azure Backup uses [incremental snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots-portal#restrictions) of managed disks, which store only the delta changes to the disk since the last snapshot on Standard HDD storage, regardless of the storage type of the parent disk. For additional reliability, incremental snapshots are stored on Zone Redundant Storage (ZRS) by default in regions that support ZRS. Currently, Azure Disk Backup supports operational backup of managed disks that doesn't copy the backups to Backup vault storage. So the backup storage redundancy setting of Backup vault does not apply to the recovery points.
+
+1. In the **Backup policy** tab, choose a backup policy.
+
+ ![Choose backup policy](./media/backup-managed-disks/choose-backup-policy.png)
+
+1. In the **Resources** tab, select **Select Azure Disk**. On the right context pane, select the disks to be backed up.
+
+ ![Select disks to back up](./media/backup-managed-disks/select-disks-to-backup.png)
+
+ >[!NOTE]
+   >While the portal allows you to select multiple disks and configure backup, each disk is an individual backup instance. Currently, Azure Disk Backup only supports backup of individual disks. Point-in-time backup of multiple disks attached to a virtual machine isn't supported.
+ >
+   >When using the portal, you're limited to selecting disks within the same subscription. If you have several disks to be backed up or if the disks are spread across different subscriptions, you can use scripts to automate.
+ >
+ >For more information on the Azure Disk backup region availability, supported scenarios and limitations, see the [support matrix](disk-backup-support-matrix.md).
+
+1. Select a **Snapshot Resource Group** and select **validate**. This is the resource group where the Azure Backup service creates and manages the incremental snapshots for the backup vault. The managed identity is assigned the required role permissions as part of the prerequisites.
+
+ ![Select snapshot resource group](./media/backup-managed-disks/select-snapshot-resource-group.png)
+
+ >[!NOTE]
+   >Validation might take a few minutes to complete before you can finish configuring the backup workflow. Validation may fail if:
+ >
+ > - a disk is unsupported. Refer to the [Support Matrix](disk-backup-support-matrix.md) for unsupported scenarios.
+ > - the Backup vault managed identity does not have valid role assignments on the **disk** to be backed up or on the **Snapshot resource group** where incremental snapshots are stored.
+
+1. After a successful validation, select **review and configure** to configure the backup of the selected disks.
+
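The role assignments from the prerequisite steps above can also be scripted with the Az PowerShell module; a minimal sketch, assuming hypothetical resource names and that `$vaultPrincipalId` already holds the Backup vault's system-assigned identity object ID:

```powershell
# Sketch: grant the vault's managed identity the two roles the prerequisites describe.
# Disk Backup Reader on the source disk:
$disk = Get-AzDisk -ResourceGroupName "SourceRG" -DiskName "myDisk"
New-AzRoleAssignment -ObjectId $vaultPrincipalId -RoleDefinitionName "Disk Backup Reader" -Scope $disk.Id

# Disk Snapshot Contributor on the snapshot resource group:
$snapshotRg = Get-AzResourceGroup -Name "SnapshotRG"
New-AzRoleAssignment -ObjectId $vaultPrincipalId -RoleDefinitionName "Disk Snapshot Contributor" -Scope $snapshotRg.ResourceId
```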
+## Run an on-demand backup
+
+1. In the *DemoVault* **Backup vault** created in the previous step, go to **Backup instances** and select a backup instance.
+
+ ![Select backup instance](./media/backup-managed-disks/select-backup-instance.png)
+
+1. In the **Backup instances** screen, you'll find:
+
+ - **essential** information including source disk name, the snapshot resource group where incremental snapshots are stored, backup vault, and backup policy.
+ - **Job status** showing summary of backup and restore operations and their status in the last seven days.
+ - A list of **restore points** for the selected time period.
+
+1. Select **Backup** to initiate an on-demand backup.
+
+ ![Select Backup Now](./media/backup-managed-disks/backup-now.png)
+
+1. Select one of the retention rules associated with the backup policy. This retention rule will determine the retention duration of this on-demand backup. Select **Backup now** to start the backup.
+
+ ![Initiate backup](./media/backup-managed-disks/initiate-backup.png)
+
+## Track a backup operation
+
+The Azure Backup service creates a job for tracking each scheduled backup and each on-demand backup operation that you trigger. To view the backup job status:
+
+1. Go to the **Backup instance** screen. It shows the jobs dashboard with operation and status for the past seven days.
+
+ ![Jobs dashboard](./media/backup-managed-disks/jobs-dashboard.png)
+
+1. To view the status of the backup operation, select **View all** to show ongoing and past jobs of this backup instance.
+
+ ![Select view all](./media/backup-managed-disks/view-all.png)
+
+1. Review the list of backup and restore jobs and their status. Select a job from the list of jobs to view job details.
+
+ ![Select job to see details](./media/backup-managed-disks/select-job.png)
+
+## Next steps
+
+- [Restore Azure Managed Disks](restore-managed-disks.md)
backup https://docs.microsoft.com/en-us/azure/backup/backup-support-matrix-iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
@@ -153,7 +153,7 @@ Data disk size | Individual disk size can be up to 32 TB and a maximum of 256 TB
Storage type | Standard HDD, Standard SSD, Premium SSD. Managed disks | Supported. Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup.
-Disks with Write Accelerator enabled | As of November 23, 2020, supported in the Korea Central (KRC) and South Africa North (SAN) regions.<br/><br/> Azure Backup will backup the virtual machines having disks which are Write Accelarted (WA) enabled during backup.
+Disks with Write Accelerator enabled | As of November 23, 2020, supported only in the Korea Central (KRC) and South Africa North (SAN) regions for a limited number of subscriptions. For those supported subscriptions, Azure Backup will back up the virtual machines having disks which are Write Accelerated (WA) enabled during backup.<br><br>For the unsupported regions, internet connectivity is required on the VM to take snapshots of Virtual Machines with WA enabled.<br><br> **Important note**: In those unsupported regions, virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
Back up & Restore deduplicated VMs/disks | Azure Backup doesn't support deduplication. For more information, see this [article](./backup-support-matrix.md#disk-deduplication-support) <br/> <br/> - Azure Backup doesn't deduplicate across VMs in the Recovery Services vault <br/> <br/> - If there are VMs in deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore. Add disk to protected VM | Supported. Resize disk on protected VM | Supported.
backup https://docs.microsoft.com/en-us/azure/backup/disk-backup-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-faq.md new file mode 100644
@@ -0,0 +1,130 @@
+---
+title: Frequently asked questions about Azure Disk Backup
+description: Get answers to frequently asked questions about Azure Disk Backup
+ms.topic: conceptual
+ms.date: 01/07/2021
+---
+
+# Frequently asked questions about Azure Disk Backup (in preview)
+
+>[!IMPORTANT]
+>Azure Disk Backup is in preview without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For region availability, see the [support matrix](disk-backup-support-matrix.md).
+>
+>[Fill out this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR1vE8L51DIpDmziRt_893LVUNFlEWFJBN09PTDhEMjVHS05UWFkxUlUzUS4u) to sign up for the preview.
+
+This article answers frequently asked questions about Azure Disk Backup. For more information on the [Azure Disk backup](disk-backup-overview.md) region availability, supported scenarios and limitations, see the [support matrix](disk-backup-support-matrix.md).
+
+## Frequently asked questions
+
+### Can I back up the disk using the Azure Disk Backup solution if the same disk is backed up using Azure virtual machine backup?
+
+Azure Backup offers side-by-side support for backup of managed disks using Disk backup and the [Azure VM backup](backup-azure-vms-introduction.md) solutions. This is useful when you need once-a-day application-consistent backups of virtual machines and also more frequent backups of the OS disk or a specific data disk, which are crash consistent and don't impact the production application performance.
+
+### How do I find the snapshot resource group that I used to configure backup for a disk?
+
+In the **Backup Instance** screen, you can find the snapshot resource group field in the **Essentials** section. You can search and select your backup instance of the corresponding disk from Backup center or the Backup vault.
+
+![Snapshot resource group field](./media/disk-backup-faq/snapshot-resource-group.png)
+
+### What is a snapshot resource group?
+
+Azure Disk Backup offers operational tier backup for managed disk. That is, the snapshots that are created during the scheduled and on-demand backup operations are stored in a resource group within your subscription. Azure Backup offers instant restore because the incremental snapshots are stored within your subscription. This resource group is known as the snapshot resource group. For more information, see [Configure backup](backup-managed-disks.md#configure-backup).
+
+### Why must the snapshot resource group be in same subscription as that of the disk being backed up?
+
+You can't create an incremental snapshot for a particular disk outside of that disk's subscription. So choose the resource group within the same subscription as that of the disk to be backed up. Learn more about [incremental snapshot](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots-portal#restrictions) for managed disks.
+
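For illustration, creating an incremental snapshot with the Az PowerShell module shows the pattern: the snapshot's resource group must sit in the disk's own subscription. A sketch with hypothetical names:

```powershell
# Sketch: create an incremental snapshot of a managed disk in the same subscription.
$disk = Get-AzDisk -ResourceGroupName "SourceRG" -DiskName "myDisk"
$config = New-AzSnapshotConfig -SourceUri $disk.Id -Location $disk.Location -CreateOption Copy -Incremental
New-AzSnapshot -ResourceGroupName "SnapshotRG" -SnapshotName "myDisk-snap-001" -Snapshot $config
```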
+### Why do I need to provide role assignments to be able to configure backups, perform scheduled and on-demand backups, and restore operations?
+
+Azure Disk Backup uses the least privilege approach to discover, protect, and restore the managed disks in your subscriptions. To achieve this, Azure Backup uses the managed identity of the [Backup vault](backup-vault-overview.md) to access other Azure resources. A system assigned managed identity is restricted to one per resource and is tied to the lifecycle of this resource. You can grant permissions to the managed identity by using Azure role-based access control (Azure RBAC). Managed identity is a service principal of a special type that may only be used with Azure resources. Learn more about [managed identities](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview). By default, the Backup vault won't have permission to access the disk to be backed up, create periodic snapshots, delete snapshots after retention period, and to restore a disk from backup. By explicitly granting role assignments to the Backup vault's managed identity, you're in control of managing permissions to the resources on the subscriptions.
+
+### Why does backup policy limit the retention duration?
+
+Azure Disk Backup uses incremental snapshots, which are limited to 200 snapshots per disk. To allow you to take on-demand backups aside from scheduled backups, the backup policy limits the total backups to 180. For example, at an **Every 4 hours** frequency (six backups a day), the 180-backup cap corresponds to roughly 30 days of retention. Learn more about [incremental snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots-portal#restrictions) for managed disks.
+
+### How does the hourly and daily backup frequency work in the backup policy?
+
+Azure Disk Backup offers multiple backups per day. If you require more frequent backups, choose the **Hourly** backup frequency. The backups are scheduled based on the **Time** interval selected. For example, if you select **Every 4 hours**, backups are taken approximately every 4 hours so that they're distributed equally across the day. If one backup a day is sufficient, choose the **Daily** backup frequency. In the daily backup frequency, you can specify the time of day when your backups are taken. Note that the time of day indicates the backup start time, not the time when the backup completes. The time required to complete the backup operation depends on various factors, including the churn rate between consecutive backups. However, Azure Disk Backup is an agentless backup that uses [incremental snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots-portal), which don't impact production application performance.
+
+### Why does the Backup vault's redundancy setting not apply to the backups stored in the operational tier (the snapshot resource group)?
+
+Azure Backup uses [incremental snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots-portal#restrictions) of managed disks, which store only the delta changes to the disk since the last snapshot, and which are kept on Standard HDD storage regardless of the storage type of the parent disk. For more reliability, incremental snapshots are stored on Zone Redundant Storage (ZRS) by default in regions that support ZRS. Currently, Azure Disk Backup supports operational backups of managed disks that don't copy the backups to Backup vault storage. So the backup storage redundancy setting of the Backup vault doesn't apply to the recovery points.
+
+### Can I use Backup Center to configure backups and manage backup instances for Azure Disks?
+
+Yes, Azure Disk Backup is integrated into [Backup Center](backup-center-overview.md), which provides a **single unified management experience** in Azure for enterprises to govern, monitor, operate, and analyze backups at scale. You can also use Backup vault to back up, restore, and manage the backup instances that are protected within the vault.
+
+### Why do I need to create a Backup vault and not use a Recovery Services vault?
+
+A Backup vault is a storage entity in Azure that houses backup data for certain newer workloads that Azure Backup supports. You can use Backup vaults to hold backup data for various Azure services, such as Azure Database for PostgreSQL servers, Azure Disks, and newer workloads that Azure Backup will support. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Refer to [Backup vaults](https://docs.microsoft.com/azure/backup/backup-vault-overview) to learn more.
+
+### Can the disk to be backed up and the Backup vault be in different subscriptions?
+
+Yes, the source-managed disk to be backed up and the Backup vault can be in different subscriptions.
+
+### Can the disk to be backed up and the Backup vault be in different regions?
+
+No, currently the source-managed disk to be backed up and the Backup vault must be in the same region.
+
+### Can I restore a disk into a different subscription?
+
+Yes, you can restore the disk onto a different subscription than that of the source-managed disk from which the backup is taken.
+
+### Can I back up multiple disks together?
+
+No, point-in-time snapshots of multiple disks attached to a virtual machine aren't supported. For more information, see [Configure backup](backup-managed-disks.md#configure-backup); to learn more about limitations, refer to the [support matrix](disk-backup-support-matrix.md).
+
+### What are my options to back up disks across multiple subscriptions?
+
+Currently, using the Azure portal to configure backup of disks is limited to a maximum of 20 disks from the same subscription per operation. Because the Backup vault and the disks can be in different subscriptions (see above), you can repeat the configure-backup operation for each subscription whose disks you want to protect.
+
+### What is a target resource group?
+
+During a restore operation, you choose the subscription and a resource group where you want to restore the disk to. Azure Backup creates new disks from the recovery point in the selected resource group. This is referred to as the target resource group. Note that the Backup vault's managed identity requires a role assignment on the target resource group to perform the restore operation successfully. For more information, see the [restore documentation](restore-managed-disks.md).
+
+### What are the permissions used by Azure Backup during backup and restore operation?
+
+Following are the actions used in the **Disk Backup Reader** role assigned on the **disk** to be backed up:
+
+- "Microsoft.Compute/disks/read"
+- "Microsoft.Compute/disks/beginGetAccess/action"
+
+Following are the actions used in the **Disk Snapshot Contributor** role assigned on the **Snapshot resource group**:
+
+- "Microsoft.Compute/snapshots/delete"
+- "Microsoft.Compute/snapshots/write"
+- "Microsoft.Compute/snapshots/read"
+- "Microsoft.Storage/storageAccounts/write"
+- "Microsoft.Storage/storageAccounts/read"
+- "Microsoft.Storage/storageAccounts/delete"
+- "Microsoft.Resources/subscriptions/resourceGroups/read"
+- "Microsoft.Storage/storageAccounts/listkeys/action"
+- "Microsoft.Compute/snapshots/beginGetAccess/action"
+- "Microsoft.Compute/snapshots/endGetAccess/action"
+- "Microsoft.Compute/disks/beginGetAccess/action"
+
+Following are the actions used in the **Disk Restore Operator** role assigned on the **Target resource group**:
+
+- "Microsoft.Compute/disks/write"
+- "Microsoft.Compute/disks/read"
+- "Microsoft.Resources/subscriptions/resourceGroups/read"
+
+>[!NOTE]
+>The permissions on these roles may change in the future, based on the features being added by the Azure Backup service.
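+
+To review what the vault's identity currently holds, a small sketch (the vault name *testBackupVault* is a placeholder; a system-assigned identity shares its display name with the vault):
+
+```azurepowershell
+# List every role assignment held by the Backup vault's system-assigned identity.
+$sp = Get-AzADServicePrincipal -DisplayName "testBackupVault"
+Get-AzRoleAssignment -ObjectId $sp.Id | Format-Table RoleDefinitionName, Scope
+```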
+
+## Next steps
+
+- [Azure Disk Backup support matrix](disk-backup-support-matrix.md)
backup https://docs.microsoft.com/en-us/azure/backup/disk-backup-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-overview.md new file mode 100644
@@ -0,0 +1,73 @@
+---
+title: Overview of Azure Disk Backup
+description: Learn about the Azure Disk Backup solution.
+ms.topic: conceptual
+ms.date: 01/07/2021
+---
+
+# Overview of Azure Disk Backup (in preview)
+
+>[!IMPORTANT]
+>Azure Disk Backup is in preview without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For region availability, see the [support matrix](disk-backup-support-matrix.md).
+>
+>[Fill out this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR1vE8L51DIpDmziRt_893LVUNFlEWFJBN09PTDhEMjVHS05UWFkxUlUzUS4u) to sign up for the preview.
+
+Azure Disk Backup is a native, cloud-based backup solution that protects your data in managed disks. It's a simple, secure, and cost-effective solution that enables you to configure protection for managed disks in a few steps. It ensures that you can recover your data in a disaster scenario.
+
+Azure Disk Backup offers a turnkey solution that provides snapshot lifecycle management for managed disks by automating periodic creation of snapshots and retaining them for a configured duration using a backup policy. You can manage the disk snapshots with zero infrastructure cost and without the need for custom scripting or any management overhead. This is a crash-consistent backup solution that takes point-in-time backups of a managed disk using [incremental snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots) with support for multiple backups per day. It's also an agentless solution and doesn't impact production application performance. It supports backup and restore of both OS and data disks (including shared disks), whether or not they're currently attached to a running Azure virtual machine.
+
+If you require application-consistent backup of a virtual machine, including its data disks, or an option to restore an entire virtual machine from backup, restore a file or folder, or restore to a secondary region, then use the [Azure VM backup](backup-azure-vms-introduction.md) solution. Azure Backup offers side-by-side support for backup of managed disks using Disk Backup in addition to the [Azure VM backup](https://docs.microsoft.com/azure/backup/backup-azure-vms-introduction) solution. This is useful when you need once-a-day application-consistent backups of virtual machines and also more frequent crash-consistent backups of OS disks or a specific data disk that don't impact production application performance.
+
+Azure Disk Backup is integrated into Backup Center, which provides a **single unified management experience** in Azure for enterprises to govern, monitor, operate, and analyze backups at scale.
+
+## Key benefits of Disk Backup
+
+Azure Disk Backup is an agentless and crash-consistent solution that uses [incremental snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots) and offers the following advantages:
+
+- More frequent and quick backups without interrupting the virtual machine.
+- Doesn't affect the performance of the production application.
+- No security concerns since it doesn't require running custom scripts or installing agents.
+- A cost-effective solution to back up specific disks when compared to backing up the entire virtual machine.
+
+The Azure Disk Backup solution is useful in the following scenarios:
+
+- Need for frequent backups per day without quiescing the application
+- Apps running in a cluster scenario, where both Windows Server Failover Cluster and Linux clusters write to shared disks
+- Specific need for agentless backup because of security or performance concerns on the application
+- Application-consistent backup of the VM isn't feasible since line-of-business apps don't support Volume Shadow Copy Service (VSS)
+
+Consider Azure Disk Backup in scenarios where:
+
+- A mission-critical application is running on an Azure virtual machine that demands multiple backups per day to meet the recovery point objective, but without impacting the production environment or application performance
+- Your organization or industry regulation restricts installing agents because of security concerns
+- Executing custom pre or post scripts and invoking freeze and thaw on Linux virtual machines to get application-consistent backup puts undue overhead on production workload availability
+- Containerized applications running on Azure Kubernetes Service (AKS clusters) are using managed disks as persistent storage; today you have to back up the managed disk via automation scripts that are hard to manage
+- A managed disk is holding critical business data, used as a file share, or contains database backup files, and you want to optimize backup cost by not investing in Azure VM backup
+- You have many Linux and Windows single-disk virtual machines (that is, a virtual machine with just an OS disk and no data disks attached) that host web servers or stateless machines, or serve as a staging environment with application configuration settings, and you need a cost-efficient backup solution to protect the OS disk (for example, to trigger a quick on-demand backup before upgrading or patching the virtual machine)
+- A virtual machine is running an OS configuration that is unsupported by the Azure VM backup solution (for example, Windows 2008 32-bit Server)
+
+## How the backup and restore process works
+
+- The first step in configuring backup for a managed disk is creating a [Backup vault](backup-vault-overview.md). The vault gives you a consolidated view of the backups configured across different workloads.
+
+- Then create a Backup policy that allows you to configure backup frequency and retention duration.
+
+- To configure backup, go to the Backup vault, assign a backup policy, select the managed disk that needs to be backed up and provide a resource group where the snapshots are to be stored and managed. Azure Backup automatically triggers scheduled backup jobs that create an incremental snapshot of the disk according to the backup frequency. Older snapshots are deleted according to the retention duration specified by the backup policy.
+
+- Azure Backup uses [incremental snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots-portal#restrictions) of the managed disk. Incremental snapshots are a cost-effective, point-in-time backup of managed disks that are billed for the delta changes to the disk since the last snapshot. They're always stored on the most cost-effective storage, standard HDD storage, regardless of the storage type of the parent disk. The first snapshot of the disk occupies the used size of the disk, and consecutive incremental snapshots store delta changes to the disk since the last snapshot (see the sketch after this list).
+
+- Once you configure the backup of a managed disk, a backup instance is created within the Backup vault. Using the backup instance, you can find the health of backup operations, trigger on-demand backups, and perform restore operations. You can also view the health of backups across multiple vaults and backup instances using Backup center, which provides a single-pane-of-glass view.
+
+- To restore, just select the recovery point from which you want to restore the disk. Provide the resource group where the restored disk is to be created from the snapshot. Azure Backup provides an instant restore experience since the snapshots are stored locally in your subscription.
+
+- The Backup vault uses a managed identity to access other Azure resources. To configure backup of a managed disk and to restore from past backups, the Backup vault's managed identity requires a set of permissions on the source disk, the snapshot resource group where snapshots are created and managed, and the target resource group where you want to restore the backup. You can grant permissions to the managed identity by using Azure role-based access control (Azure RBAC). Managed identity is a service principal of a special type that may only be used with Azure resources. Learn more about [managed identities](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview).
+
+- Currently, Azure Disk Backup supports operational backup of managed disks and doesn't copy the backups to Backup vault storage. Refer to the [support matrix](disk-backup-support-matrix.md) for a detailed list of supported and unsupported scenarios, and region availability.
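+
+As a quick way to inspect these snapshots yourself, a minimal sketch (the snapshot resource group name *testSnapshotRG* is a placeholder):
+
+```azurepowershell
+# List the snapshots Azure Backup created in the snapshot resource group,
+# confirming each one is incremental.
+Get-AzSnapshot -ResourceGroupName "testSnapshotRG" |
+    Select-Object Name, Incremental, TimeCreated
+```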
+
+## Pricing
+
+Azure Backup offers a snapshot lifecycle management solution for protecting Azure Disks. The disk snapshots created by Azure Backup are stored in the resource group within your Azure subscription and incur **Snapshot Storage** charges. You can visit [Managed Disk Pricing](https://azure.microsoft.com/pricing/details/managed-disks/) for more details about the snapshot pricing. Because the snapshots aren't copied to the Backup vault, Azure Backup doesn't charge a **Protected Instance** fee, and the **Backup Storage** cost doesn't apply. Additionally, incremental snapshots occupy only the delta changes since the last snapshot, are always stored on standard storage regardless of the storage type of the parent managed disks, and are charged according to the pricing of standard storage. This makes Azure Disk Backup a cost-effective solution.
+
+## Next steps
+
+- [Azure Disk Backup support matrix](disk-backup-support-matrix.md)
backup https://docs.microsoft.com/en-us/azure/backup/disk-backup-support-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-support-matrix.md new file mode 100644
@@ -0,0 +1,64 @@
+---
+title: Azure Disk Backup support matrix
+description: Provides a summary of support settings and limitations of Azure Disk Backup.
+ms.topic: conceptual
+ms.date: 01/07/2021
+ms.custom: references_regions
+---
+
+# Azure Disk Backup support matrix (in preview)
+
+>[!IMPORTANT]
+>Azure Disk Backup is in preview without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+>[Fill out this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR1vE8L51DIpDmziRt_893LVUNFlEWFJBN09PTDhEMjVHS05UWFkxUlUzUS4u) to sign up for the preview.
+
+You can use [Azure Backup](https://docs.microsoft.com/azure/backup/backup-overview) to protect Azure Disks. This article summarizes region availability, supported scenarios, and limitations.
+
+## Supported regions
+
+Azure Disk Backup is available in preview in the following region: West Central US.
+
+More regions will be announced when they become available.
+
+## Limitations
+
+- Azure Disk Backup is supported for Azure Managed Disks, including shared disks (shared premium SSDs). Unmanaged disks aren't supported. Currently, this solution doesn't support Ultra disks, including shared ultra disks, because of the lack of snapshot capability.
+
+- Azure Disk Backup supports backup of Write Accelerator disks. However, during restore the disk is restored as a normal disk. The Write Accelerator cache can be enabled on the disk after attaching it to a VM.
+
+- Azure Backup provides operational (snapshot) tier backup of Azure managed disks with support for multiple backups per day. The backups aren't copied to the backup vault.
+
+- Currently, the Original-Location Recovery (OLR) option to restore by replacing existing source disks from where the backups were taken isn't supported. You can restore from recovery point to create a new disk either in the same resource group as that of the source disk from where the backups were taken or in any other resource group. This is known as Alternate-Location Recovery (ALR).
+
+- Azure Backup for Managed Disks uses incremental snapshots, which are limited to 200 snapshots per disk. To allow you to take on-demand backups aside from scheduled backups, the backup policy limits the total backups to 180. Learn more about [incremental snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots-portal#restrictions) for managed disks.
+
+- Azure [subscription and service limits](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#virtual-machine-disk-limits) apply to the total number of disk snapshots per region per subscription.
+
+- Point-in-time snapshots of multiple disks attached to a virtual machine aren't supported.
+
+- The Backup vault and the disks to be backed up can be in the same or different subscriptions. However, both the Backup vault and the disk to be backed up must be in the same region.
+
+- Disks to be backed up and the snapshot resource group where the snapshots are stored locally must be in the same subscription.
+
+- Restoring a disk from backup to the same or a different subscription is supported. However, the restored disk will be created in the same region as that of the snapshot.
+
+- Disks encrypted with Azure Disk Encryption (ADE) can't be protected.
+
+- Disks encrypted with platform-managed keys or customer-managed keys are supported. However, the restore will fail for the restore points of a disk that is encrypted using customer-managed keys (CMK) if the disk encryption set's Key Vault key is disabled or deleted.
+
+- Currently, the Backup policy can't be modified, and the Snapshot Resource group that is assigned to a backup instance when you configure the backup of a disk can't be changed.
+
+- Currently, the Azure portal experience to configure the backup of disks is limited to a maximum of 20 disks from the same subscription.
+
+- When configuring backup, the disk selected to be backed up and the snapshot resource group where the snapshots are to be stored must be part of the same subscription. You can't create an incremental snapshot for a particular disk outside of that disk's subscription. Learn more about [incremental snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots-portal#restrictions) for managed disk. For more information on how to choose a snapshot resource group, see [Configure backup](backup-managed-disks.md#configure-backup).
+
+- For successful backup and restore operations, role assignments are required by the Backup vault's managed identity. Use only the role definitions provided in the documentation. Use of other roles, like Owner or Contributor, isn't supported. You may face permission issues if you start configuring backup or restore operations soon after assigning roles, because the role assignments take a few minutes to take effect.
+
+- Managed disks allow changing the performance tier at deployment or afterwards without changing the size of the disk. The Azure Disk Backup solution supports performance tier changes to the source disk that is being backed up. During restore, the performance tier of the restored disk will be the same as that of the source disk at the time of backup. Follow the documentation [here](https://docs.microsoft.com/azure/virtual-machines/disks-performance-tiers-portal) to change your disk's performance tier after the restore operation (see the sketch after this list).
+
+- [Private Links](https://docs.microsoft.com/azure/virtual-machines/disks-enable-private-links-for-import-export-portal) support for managed disks allows you to restrict the export and import of managed disks so that it only occurs within your Azure virtual network. Azure Disk Backup supports backup of disks that have private endpoints enabled. However, this doesn't make the backup data or snapshots accessible through the private endpoint.
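+
+A minimal sketch of changing the restored disk's performance tier (the tier value, resource group, and disk name are placeholders, and it assumes an Az.Compute version that supports the `-Tier` parameter of `New-AzDiskUpdateConfig`):
+
+```azurepowershell
+# Move the restored disk back to the source disk's performance tier (P30 as an example).
+$diskUpdate = New-AzDiskUpdateConfig -Tier "P30"
+Update-AzDisk -ResourceGroupName "testRG" -DiskName "restoredDisk" -DiskUpdate $diskUpdate
+```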
+
+## Next steps
+
+- [Back up Azure Managed Disks](backup-managed-disks.md)
backup https://docs.microsoft.com/en-us/azure/backup/disk-backup-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-troubleshoot.md new file mode 100644
@@ -0,0 +1,159 @@
+---
+title: Troubleshooting backup failures in Azure Disk Backup
+description: Learn how to troubleshoot backup failures in Azure Disk Backup
+ms.topic: conceptual
+ms.date: 01/07/2021
+---
+
+# Troubleshooting backup failures in Azure Disk Backup (in preview)
+
+>[!IMPORTANT]
+>Azure Disk Backup is in preview without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For region availability, see the [support matrix](disk-backup-support-matrix.md).
+>
+>[Fill out this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR1vE8L51DIpDmziRt_893LVUNFlEWFJBN09PTDhEMjVHS05UWFkxUlUzUS4u) to sign up for the preview.
+
+This article provides troubleshooting information on backup and restore issues faced with Azure Disk Backup. For more information about [Azure Disk Backup](disk-backup-overview.md) region availability, supported scenarios, and limitations, see the [support matrix](disk-backup-support-matrix.md).
+
+## Common issues faced with Azure Disk Backup
+
+### Error Code: UserErrorSnapshotRGSubscriptionMismatch
+
+Error Message: Invalid subscription selected for Snapshot Data store
+
+Recommended Action: Disk snapshots must be stored in the same subscription as the disk. You can choose any resource group within that subscription to store the disk snapshots. Select the same subscription as that of the source disk. For more information, see the [support matrix](disk-backup-support-matrix.md).
+
+### Error Code: UserErrorSnapshotRGNotFound
+
+Error Message: Could not perform the operation as Snapshot Data store Resource Group does not exist.
+
+Recommended Action: Create the resource group and provide the required permissions on it. For more information, see [configure backup](backup-managed-disks.md#configure-backup).
+
+### Error Code: UserErrorManagedDiskNotFound
+
+Error Message: Could not perform the operation as Managed Disk no longer exists.
+
+Recommended Action: The backups will continue to fail as the source disk may be deleted or moved to a different location. Use the existing restore point to restore the disk if it's deleted by mistake. If the disk is moved to a different location, configure backup for the disk.
+
+### Error Code: UserErrorNotEnoughPermissionOnDisk
+
+Error Message: Azure Backup Service requires additional permissions on the Disk to do this operation.
+
+Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the disk. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity and how to provide them.
+
+### Error Code: UserErrorNotEnoughPermissionOnSnapshotRG
+
+Error Message: Azure Backup Service requires additional permissions on the Snapshot Data store Resource Group to do this operation.
+
+Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the disk snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand which resource group this is, what permissions are required by the Backup vault's managed identity, and how to provide them.
+
+### Error Code: UserErrorDiskBackupDiskOrMSIPermissionsNotPresent
+
+Error Message: Invalid disk or Azure Backup Service requires additional permissions on the Disk to do this operation
+
+Recommended Action: The backups will continue to fail as the source disk may be deleted or moved to a different location. Use the existing restore point to restore the disk if it's deleted by mistake. If the disk is moved to a different location, configure backup for the disk. If the disk isn't deleted or moved, grant the Backup vault's managed identity the appropriate permissions on the disk. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity and how to provide them.
+
+### Error Code: UserErrorDiskBackupSnapshotRGOrMSIPermissionsNotPresent
+
+Error Message: Could not perform the operation as Snapshot Data store Resource Group no longer exists. Or Azure Backup Service requires additional permissions on the Snapshot Data store Resource Group to do this operation.
+
+Recommended Action: Create a resource group and grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the disk snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand which resource group this is, what permissions are required by the Backup vault's managed identity, and how to provide them.
+
+### Error Code: UserErrorDiskBackupAuthorizationFailed
+
+Error Message: Backup Vault managed identity is missing the necessary permissions to do this operation.
+
+Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the disk to be backed up and on the snapshot data store resource group where the snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity and how to provide them.
+
+### Error Code: UserErrorSnapshotRGOrMSIPermissionsNotPresent
+
+Error Message: Could not perform the operation as Snapshot Data store Resource Group no longer exists. Or, Azure Backup Service requires additional permissions on the Snapshot Data store Resource Group to do this operation.
+
+Recommended Action: Create the resource group and grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand which resource group this is, what permissions are required by the Backup vault's managed identity, and how to provide them.
+
+### Error Code: UserErrorOperationalStoreParametersNotProvided
+
+Error Message: Could not perform the operation as Snapshot Data store Resource Group parameter is not provided
+
+Recommended Action: Provide the snapshot data store resource group parameter. The snapshot data store resource group is the location where the disk snapshots are stored. For more information, see [the documentation](backup-managed-disks.md).
+
+### Error Code: UserErrorInvalidOperationalStoreResourceGroup
+
+Error Message: Snapshot Data store Resource Group provided is invalid
+
+Recommended Action: Provide a valid resource group for the snapshot data store. The snapshot data store resource group is the location where the disk snapshots are stored. For more information, see [the documentation](backup-managed-disks.md).
+
+### Error Code: UserErrorDiskBackupDiskTypeNotSupported
+
+Error Message: Unsupported disk type
+
+Recommended Action: Refer to [the support matrix](disk-backup-support-matrix.md) on unsupported scenarios and limitations.
+
+### Error Code: UserErrorSameNameDiskAlreadyExists
+
+Error Message: Could not restore as a Disk with same name already exists in the selected target resource group
+
+Recommended Action: Provide a different disk name for restore. For more information, see [Restore Azure Managed Disks](restore-managed-disks.md).
+
+### Error Code: UserErrorRestoreTargetRGNotFound
+
+Error Message: Operation failed as the Target Resource Group does not exist.
+
+Recommended Action: Provide a valid resource group to restore. For more information, see [Restore Azure Managed Disks](restore-managed-disks.md).
+
+### Error Code: UserErrorNotEnoughPermissionOnTargetRG
+
+Error Message: Azure Backup Service requires additional permissions on the Target Resource Group to do this operation.
+
+Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the target resource group. The target resource group is the selected location where the disk is to be restored. Refer to the [restore documentation](restore-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity, and how to provide them.
+
+### Error Code: UserErrorSubscriptionDiskQuotaLimitReached
+
+Error Message: Operation has failed as the Disk quota maximum limit has been reached on the subscription.
+
+Recommended Action: Refer to the [Azure subscription and service limits and quota documentation](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits) or contact Microsoft Support for further guidance.
+
+### Error Code: UserErrorDiskBackupRestoreRGOrMSIPermissionsNotPresent
+
+Error Message: Operation failed as the Target Resource Group does not exist. Or Azure Backup Service requires additional permissions on the Target Resource Group to do this operation.
+
+Recommended Action: Provide a valid resource group to restore to, and grant the Backup vault's managed identity the appropriate permissions on the target resource group. The target resource group is the selected location where the disk is to be restored. Refer to the [restore documentation](restore-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity, and how to provide them.
+
+### Error Code: UserErrorDESKeyVaultKeyDisabled
+
+Error Message: The key vault key used for disk encryption set is not in enabled state.
+
+Recommended Action: Enable the key vault key used for disk encryption set. Refer to the limitations in the [support matrix](disk-backup-support-matrix.md).
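+
+A minimal sketch of re-enabling the key (the names *testkeyvault* and *testkey* are placeholders):
+
+```azurepowershell
+# Re-enable the Key Vault key backing the disk encryption set.
+Update-AzKeyVaultKey -VaultName "testkeyvault" -Name "testkey" -Enable $true
+```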
+
+### Error Code: UserErrorMSIPermissionsNotPresentOnDES
+
+Error Message: Azure Backup Service needs permission to access the disk encryption set used with the disk.
+
+Recommended Action: Provide Reader access to the Backup vault's managed identity to the disk encryption set (DES).
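+
+For example, a sketch of granting that access (all names are placeholders; a system-assigned identity shares its display name with the vault):
+
+```azurepowershell
+# Grant the vault's identity Reader on the disk encryption set.
+$des = Get-AzDiskEncryptionSet -ResourceGroupName "testRG" -Name "testDES"
+$sp = Get-AzADServicePrincipal -DisplayName "testBackupVault"
+New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName "Reader" -Scope $des.Id
+```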
+
+### Error Code: UserErrorDESKeyVaultKeyNotAvailable
+
+Error Message: The key vault key used for disk encryption set is not available.
+
+Recommended Action: Ensure that the key vault key used for disk encryption set isn't disabled or deleted.
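+
+A quick check from PowerShell (names are placeholders):
+
+```azurepowershell
+# Confirm the key used by the disk encryption set exists and is enabled.
+Get-AzKeyVaultKey -VaultName "testkeyvault" -Name "testkey" |
+    Select-Object Name, Enabled, Expires
+```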
+
+### Error Code: UserErrorDiskSnapshotNotFound
+
+Error Message: The disk snapshot for this Restore point has been deleted.
+
+Recommended Action: Snapshots are stored in the snapshot data store resource group within your subscription. It's possible that the snapshot related to the selected restore point was deleted or moved from this resource group. Consider using another recovery point to restore. Also, follow the recommended guidelines for choosing a snapshot resource group mentioned in the [restore documentation](restore-managed-disks.md).
+
+### Error Code: UserErrorSnapshotMetadataNotFound
+
+Error Message: The disk snapshot metadata for this Restore point has been deleted
+
+Recommended Action: Consider using another recovery point to restore. For more information, see the [restore documentation](restore-managed-disks.md).
+
+### Error Code: UserErrorMaxConcurrentOperationLimitReached
+
+Error Message: Unable to start the operation as maximum number of allowed concurrent operations has reached for this operation type.
+
+Recommended Action: Wait until the previous operations complete.
+
+## Next steps
+
+- [Azure Disk Backup support matrix](disk-backup-support-matrix.md)
backup https://docs.microsoft.com/en-us/azure/backup/encryption-at-rest-with-cmk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/encryption-at-rest-with-cmk.md
@@ -32,7 +32,10 @@ This article discusses the following:
- Moving CMK encrypted Recovery Services vault across Resource Groups and Subscriptions isn't currently supported. -- This feature is currently configurable from the Azure portal only.
+- This feature can be configured through the Azure portal and PowerShell.
+
+ >[!NOTE]
+ >Use Az module 5.3.0 or greater to use customer managed keys for backups in the Recovery Services vault.
If you haven't created and configured your Recovery Services vault, you can [read how to do so here](backup-create-rs-vault.md).
@@ -57,6 +60,8 @@ Azure Backup uses system assigned managed identity to authenticate the Recovery
>[!NOTE] >Once enabled, the managed identity must **not** be disabled (even temporarily). Disabling the managed identity may lead to inconsistent behavior.
+**In the portal:**
+ 1. Go to your Recovery Services vault -> **Identity** ![Identity settings](./media/encryption-at-rest-with-cmk/managed-identity.png)
@@ -65,10 +70,34 @@ Azure Backup uses system assigned managed identity to authenticate the Recovery
1. An Object ID is generated, which is the system-assigned managed identity of the vault.
+**With PowerShell:**
+
+Use the [Update-AzRecoveryServicesVault](https://docs.microsoft.com/powershell/module/az.recoveryservices/update-azrecoveryservicesvault) command to enable system-assigned managed identity for the Recovery Services vault.
+
+Example:
+
+```azurepowershell
+$vault = Get-AzRecoveryServicesVault -ResourceGroupName "testrg" -Name "testvault"
+
+Update-AzRecoveryServicesVault -IdentityType SystemAssigned -VaultId $vault.ID
+
+# Fetch the vault again so the newly assigned identity is visible on the object
+$vault = Get-AzRecoveryServicesVault -ResourceGroupName "testrg" -Name "testvault"
+
+$vault.Identity | fl
+```
+
+Output:
+
+```output
+PrincipalId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+TenantId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+Type : SystemAssigned
+```
+ ### Assign permissions to the Recovery Services vault to access the encryption key in the Azure Key Vault You now need to permit the Recovery Services vault to access the Azure Key Vault that contains the encryption key. This is done by allowing the Recovery Services vaultΓÇÖs managed identity to access the Key Vault.
+**In the portal**:
+ 1. Go to your Azure Key Vault -> **Access Policies**. Continue to **+Add Access Policies**. ![Add Access Policies](./media/encryption-at-rest-with-cmk/access-policies.png)
@@ -85,6 +114,32 @@ You now need to permit the Recovery Services vault to access the Azure Key Vault
1. Select **Save** to save changes made to the access policy of the Azure Key Vault.
+**With PowerShell**:
+
+Use the [Set-AzRecoveryServicesVaultProperty](https://docs.microsoft.com/powershell/module/az.recoveryservices/set-azrecoveryservicesvaultproperty) command to enable encryption using customer-managed keys, and to assign or update the encryption key to be used.
+
+Example:
+
+```azurepowershell
+$keyVault = Get-AzKeyVault -VaultName "testkeyvault" -ResourceGroupName "testrg"
+$key = Get-AzKeyVaultKey -VaultName $keyVault.VaultName -Name "testkey"
+Set-AzRecoveryServicesVaultProperty -EncryptionKeyId $key.ID -KeyVaultSubscriptionId "xxxx-yyyy-zzzz" -VaultId $vault.ID
+
+$enc = Get-AzRecoveryServicesVaultProperty -VaultId $vault.ID
+$enc.encryptionProperties | fl
+```
+
+Output:
+
+```output
+EncryptionAtRestType : CustomerManaged
+KeyUri : testkey
+SubscriptionId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+LastUpdateStatus : Succeeded
+InfrastructureEncryptionState : Disabled
+```
+ ### Enable soft-delete and purge protection on the Azure Key Vault You need to **enable soft delete and purge protection** on your Azure Key Vault that stores your encryption key. You can do this from the Azure Key Vault UI as shown below. (Alternatively, these properties can be set while creating the Key Vault). Read more about these Key Vault properties [here](../key-vault/general/soft-delete-overview.md).
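+
+If you prefer PowerShell, a minimal sketch of turning on purge protection (using the same placeholder vault names as earlier in this article; soft delete is enabled by default on newly created key vaults):
+
+```azurepowershell
+# Turn on purge protection for the key vault holding the encryption key.
+Update-AzKeyVault -VaultName "testkeyvault" -ResourceGroupName "testrg" -EnablePurgeProtection
+```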
@@ -215,6 +270,8 @@ You can encrypt the restored disk / VM after the restore is complete, regardless
#### Select a Disk Encryption Set while restoring from Vault Recovery Point
+**In the portal**:
+ The Disk Encryption Set is specified under Encryption Settings in the restore pane, as shown below: 1. In the **Encrypt disk(s) using your key**, select **Yes**.
@@ -226,6 +283,21 @@ The Disk Encryption Set is specified under Encryption Settings in the restore pa
![Encrypt disk using your key](./media/encryption-at-rest-with-cmk/encrypt-disk-using-your-key.png)
+**With PowerShell**:
+
+Use the [Restore-AzRecoveryServicesBackupItem](https://docs.microsoft.com/powershell/module/az.recoveryservices/restore-azrecoveryservicesbackupitem) command with the parameter [`-DiskEncryptionSetId <string>`] to [specify the DES](https://docs.microsoft.com/powershell/module/az.compute/get-azdiskencryptionset) to be used for encrypting the restored disk. For more information about restoring disks from VM backup, see [this article](https://docs.microsoft.com/azure/backup/backup-azure-vms-automation#restore-an-azure-vm).
+
+Example:
+
+```azurepowershell
+$namedContainer = Get-AzRecoveryServicesBackupContainer -ContainerType "AzureVM" -Status "Registered" -FriendlyName "V2VM" -VaultId $vault.ID
+$backupitem = Get-AzRecoveryServicesBackupItem -Container $namedContainer -WorkloadType "AzureVM" -VaultId $vault.ID
+$startDate = (Get-Date).AddDays(-7)
+$endDate = Get-Date
+$rp = Get-AzRecoveryServicesBackupRecoveryPoint -Item $backupitem -StartDate $startDate.ToUniversalTime() -EndDate $endDate.ToUniversalTime() -VaultId $vault.ID
+$restorejob = Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -StorageAccountName "DestAccount" -StorageAccountResourceGroupName "DestRG" -TargetResourceGroupName "DestRGforManagedDisks" -DiskEncryptionSetId "testdes1" -VaultId $vault.ID
+```
+ #### Restoring files When performing a file restore, the restored data will be encrypted with the key used for encrypting the target location.
backup https://docs.microsoft.com/en-us/azure/backup/restore-managed-disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-managed-disks.md new file mode 100644
@@ -0,0 +1,129 @@
+---
+title: Restore Azure Managed Disks
+description: Learn how to restore Azure Managed Disks from the Azure portal.
+ms.topic: conceptual
+ms.date: 01/07/2021
+---
+
+# Restore Azure Managed Disks (in preview)
+
+>[!IMPORTANT]
+>Azure Disk Backup is in preview without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For region availability, see the [support matrix](disk-backup-support-matrix.md).
+>
+>[Fill out this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR1vE8L51DIpDmziRt_893LVUNFlEWFJBN09PTDhEMjVHS05UWFkxUlUzUS4u) to sign up for the preview.
+
+This article explains how to restore [Azure Managed Disks](https://docs.microsoft.com/azure/virtual-machines/managed-disks-overview) from a restore point created by Azure Backup.
+
+Currently, the Original-Location Recovery (OLR) option of restoring by replacing the existing source disk from where the backups were taken isn't supported. You can restore from a recovery point to create a new disk either in the same resource group as the source disk from where the backups were taken, or in any other resource group. This is known as Alternate-Location Recovery (ALR), and it helps to keep both the source disk and the restored (new) disk.
+
+In this article, you'll learn how to:
+
+- Restore to create a new disk
+
+- Track the restore operation status
+
+## Restore to create a new disk
+
+The Backup vault uses a managed identity to access other Azure resources. To restore from backup, the Backup vault's managed identity requires a set of permissions on the resource group where the disk is to be restored.
+
+The Backup vault uses a system-assigned managed identity, which is restricted to one per resource and is tied to the lifecycle of this resource. You can grant permissions to the managed identity by using Azure role-based access control (Azure RBAC). Managed identity is a service principal of a special type that may only be used with Azure resources. Learn more about [managed identities](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview).
+
+The following prerequisites are required to perform a restore operation:
+
+1. Assign the **Disk Restore Operator** role to the Backup vault's managed identity on the resource group where the disk will be restored by the Azure Backup service.
+
+ >[!NOTE]
+ > You can choose the same resource group as that of the source disk from where backups are taken, or any other resource group within the same or a different subscription.
+
+ 1. Go to the resource group where the disk is to be restored to. For example, the resource group is *TargetRG*.
+
+ 1. Go to **Access control (IAM)** and select **Add role assignments**
+
+ 1. On the right context pane, select **Disk Restore Operator** in the **Role** dropdown list. Select the backup vault's managed identity and **Save**.
+
+ >[!TIP]
+ >Type the backup vault's name to select the vault's managed identity.
+
+ ![Select disk restore operator role](./media/restore-managed-disks/disk-restore-operator-role.png)
+
+1. Verify that the backup vault's managed identity has the right set of role assignments on the resource group where the disk will be restored.
+
+ 1. Go to **Backup vault -> Identity** and select **Azure role assignments**
+
+ ![Select Azure role assignments](./media/restore-managed-disks/azure-role-assignments.png)
+
+ 1. Verify that the role, resource name, and resource type appear correctly.
+
+ ![Verify role, resource name and resource type](./media/restore-managed-disks/verify-role.png)
+
+ >[!NOTE]
+ >While the role assignments are reflected correctly on the portal, it may take approximately 15 minutes for the permission to be applied on the backup vault's managed identity.
+ >
+ >During scheduled backups or an on-demand backup operation, Azure Backup stores the disk incremental snapshots in the snapshot resource group provided when configuring backup of the disk. Azure Backup uses these incremental snapshots during the restore operation. If the snapshots are deleted or moved from the snapshot resource group, or if the Backup vault role assignments are revoked on the snapshot resource group, the restore operation will fail.
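+
+If you script your environment, a minimal sketch of the same role assignment (the vault name *testBackupVault* and target resource group *TargetRG* are placeholders; a system-assigned identity shares its display name with the vault):
+
+```azurepowershell
+# Assign "Disk Restore Operator" to the Backup vault's identity on the target resource group.
+$sp = Get-AzADServicePrincipal -DisplayName "testBackupVault"
+New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName "Disk Restore Operator" -ResourceGroupName "TargetRG"
+```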
+
+Once the prerequisites are met, follow these steps to perform the restore operation.
+
+1. In the [Azure portal](https://portal.azure.com/), go to **Backup center**. Select **Backup instances** under the **Manage** section. From the list of backup instances, select the disk backup instance for which you want to perform the restore operation.
+
+ ![List of backup instances](./media/restore-managed-disks/backup-instances.png)
+
+ Alternately, you can perform this operation from the Backup vault you used to configure backup for the disk.
+
+1. In the **Backup instance** screen, select the restore point that you want to use to perform the restore operation and select **Restore**.
+
+ ![Select restore point](./media/restore-managed-disks/select-restore-point.png)
+
+1. In the **Restore** workflow, review the **Basics** and **Select recovery point** tab information, and select **Next: Restore parameters**.
+
+ ![Review Basics and Select recovery point information](./media/restore-managed-disks/review-information.png)
+
+1. In the **Restore parameters** tab, select the **Target subscription** and **Target resource group** where you want to restore the backup to. Provide the name of the disk to be restored. Select **Next: Review + restore**.
+
+ ![Restore parameters](./media/restore-managed-disks/restore-parameters.png)
+
+ >[!TIP]
+ >Disks being backed up by Azure Backup using the Disk Backup solution can also be backed up by Azure Backup using the Azure VM backup solution with the Recovery Services vault. If you have configured protection of the Azure VM to which this disk is attached, you can also use the Azure VM restore operation. You can choose to restore the VM, or disks and files or folders from the recovery point of the corresponding Azure VM backup instance. For more information, see [Azure VM backup](https://docs.microsoft.com/azure/backup/about-azure-vm-restore).
+
+1. Once the validation is successful, select **Restore** to start the restore operation.
+
+ ![Initiate restore operation](./media/restore-managed-disks/initiate-restore.png)
+
+ >[!NOTE]
+ > Validation might take a few minutes to complete before you can trigger the restore operation. Validation may fail if:
+ >
+ > - a disk with the same name provided in **Restored disk name** already exists in the **Target resource group**
+ > - the Backup vault's managed identity doesn't have valid role assignments on the **Target resource group**
+ > - the Backup vault's managed identity role assignments are revoked on the **Snapshot resource group** where incremental snapshots are stored
+ > - incremental snapshots are deleted or moved from the snapshot resource group
+
+Restore will create a new disk from the selected recovery point in the target resource group that was provided during the restore operation. To use the restored disk on an existing virtual machine, you'll need to perform more steps:
+
+- If the restored disk is a data disk, you can attach an existing disk to a virtual machine. If the restored disk is an OS disk, you can swap the OS disk of a virtual machine from the Azure portal under the **Virtual machine** pane -> **Disks** menu in the **Settings** section.
+
+ ![Swap OS disks](./media/restore-managed-disks/swap-os-disks.png)
+
+- For Windows virtual machines, if the restored disk is a data disk, follow the instructions to [detach the original data disk](https://docs.microsoft.com/azure/virtual-machines/windows/detach-disk#detach-a-data-disk-using-the-portal) from the virtual machine. Then [attach the restored disk](https://docs.microsoft.com/azure/virtual-machines/windows/attach-managed-disk-portal) to the virtual machine. Follow the instructions to [swap the OS disk](https://docs.microsoft.com/azure/virtual-machines/windows/os-disk-swap) of the virtual machine with the restored disk.
+
+- For Linux virtual machines, if the restored disk is a data disk, follow the instructions to [detach the original data disk](https://docs.microsoft.com/azure/virtual-machines/linux/detach-disk#detach-a-data-disk-using-the-portal) from the virtual machine. Then [attach the restored disk](https://docs.microsoft.com/azure/virtual-machines/linux/attach-disk-portal#attach-an-existing-disk) to the virtual machine. Follow the instructions to [swap the OS disk](https://docs.microsoft.com/azure/virtual-machines/linux/os-disk-swap) of the virtual machine with the restored disk.
+
+It's recommended that you revoke the **Disk Restore Operator** role assignment from the Backup vault's managed identity on the **Target resource group** after the successful completion of the restore operation.
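+
+A matching cleanup sketch (same placeholder names as above):
+
+```azurepowershell
+# Remove the temporary "Disk Restore Operator" assignment once the restore succeeds.
+$sp = Get-AzADServicePrincipal -DisplayName "testBackupVault"
+Remove-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName "Disk Restore Operator" -ResourceGroupName "TargetRG"
+```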
+
+## Track a restore operation
+
+After you trigger the restore operation, the backup service creates a job for tracking. Azure Backup displays notifications about the job in the portal. To view the restore job progress:
+
+1. Go to the **Backup instance** screen. It shows the jobs dashboard with operation and status for the past seven days.
+
+ ![Jobs dashboard](./media/restore-managed-disks/jobs-dashboard.png)
+
+1. To view the status of the restore operation, select **View all** to show ongoing and past jobs of this backup instance.
+
+ ![Select View all](./media/restore-managed-disks/view-all.png)
+
+1. Review the list of backup and restore jobs and their status. Select a job from the list of jobs to view job details.
+
+ ![List of jobs](./media/restore-managed-disks/list-of-jobs.png)
+
+## Next steps
+
+- [Azure Disk Backup FAQ](disk-backup-faq.md)
backup https://docs.microsoft.com/en-us/azure/backup/sap-hana-backup-support-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sap-hana-backup-support-matrix.md
@@ -20,7 +20,7 @@ Azure Backup supports the backup of SAP HANA databases to Azure. This article su
| **Topology** | SAP HANA running in Azure Linux VMs only | HANA Large Instances (HLI) | | **Regions** | **GA:**<br> **Americas** - Central US, East US 2, East US, North Central US, South Central US, West US 2, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** - Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China North, China East2, China North 2 <br> **Europe** - West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA | | **OS versions** | SLES 12 with SP2, SP3,SP4 and SP5; SLES 15 with SP0, SP1, SP2 <br><br> As of August 1st, 2020, SAP HANA backup for RHEL (7.4, 7.6, 7.7 & 8.1) is generally available. | |
-| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x <= SPS04 Rev 48, SPS05 (yet to be validated for encryption enabled scenarios) | |
+| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x <= SPS04 Rev 53, SPS05 (yet to be validated for encryption enabled scenarios) | |
| **HANA deployments** | SAP HANA on a single Azure VM - Scale up only. <br><br> For high availability deployments, both the nodes on the two different machines are treated as individual nodes with separate data chains. | Scale-out <br><br> In high availability deployments, backup doesn't failover to the secondary node automatically. Configuring backup should be done separately for each node. | | **HANA Instances** | A single SAP HANA instance on a single Azure VM - scale up only | Multiple SAP HANA instances on a single VM. You can protect only one of these multiple instances at a time. | | **HANA database types** | Single Database Container (SDC) ON 1.x, Multi-Database Container (MDC) on 2.x | MDC in HANA 1.x |
cloud-services https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-guestos-msrc-releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
@@ -10,13 +10,66 @@ ms.service: cloud-services
ms.topic: article ms.tgt_pltfrm: na ms.workload: tbd
-ms.date: 1/15/2021
+ms.date: 1/18/2021
ms.author: yohaddad --- # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## January 2021 Guest OS
+">[!NOTE]
+>The January Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the April Guest OS. This list is subject to change."
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| --- | --- | --- | --- | --- |
+| Rel 21-01 | [4598230] | Latest Cumulative Update (LCU) | 6.27 | Jan 12, 2021 |
+| Rel 21-01 | [4580325] | Flash update | 3.93, 4.86, 5.51, 6.27 | Oct 13, 2020 |
+| Rel 21-01 | [4586768] | IE Cumulative Updates | 2.106, 3.93, 4.86 | Nov 10, 2020 |
+| Rel 21-01 | [4598243] | Latest Cumulative Update (LCU) | 5.51 | Jan 12, 2021 |
+| Rel 21-01 | [4578952] | .NET Framework 3.5 Security and Quality Rollup | 2.106 | Jan 12, 2021 |
+| Rel 21-01 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup | 2.106 | Jan 12, 2021 |
+| Rel 21-01 | [4578953] | .NET Framework 3.5 Security and Quality Rollup | 4.86 | Jan 12, 2021 |
+| Rel 21-01 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup | 4.86 | Jan 12, 2021 |
+| Rel 21-01 | [4578950] | .NET Framework 3.5 Security and Quality Rollup | 3.93 | Jan 12, 2021 |
+| Rel 21-01 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | 3.93 | Jan 12, 2021 |
+| Rel 21-01 | [4578966] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.27 | Oct 13, 2020 |
+| Rel 21-01 | [4598279] | Monthly Rollup | 2.106 | Jan 12, 2021 |
+| Rel 21-01 | [4598278] | Monthly Rollup | 3.93 | Jan 12, 2021 |
+| Rel 21-01 | [4598285] | Monthly Rollup | 4.86 | Jan 12, 2021 |
+| Rel 21-01 | [4566426] | Servicing Stack update | 3.93 | Jul 14, 2020 |
+| Rel 21-01 | [4566425] | Servicing Stack update | 4.86 | Jul 14, 2020 |
+| Rel 21-01 OOB | [4578013] | Standalone Security Update | 4.86 | Aug 19, 2020 |
+| Rel 21-01 | [4576750] | Servicing Stack update | 5.51 | Sep 8, 2020 |
+| Rel 21-01 | [4592510] | Servicing Stack update | 2.106 | Dec 8, 2020 |
+| Rel 21-01 | [4598480] | Servicing Stack update | 6.27 | Jan 12, 2021 |
+| Rel 21-01 | [4494175] | Microcode | 5.51 | Sep 1, 2020 |
+| Rel 21-01 | [4494174] | Microcode | 6.27 | Sep 3, 2020 |
+
+[4598230]: https://support.microsoft.com/kb/4598230
+[4580325]: https://support.microsoft.com/kb/4580325
+[4586768]: https://support.microsoft.com/kb/4586768
+[4598243]: https://support.microsoft.com/kb/4598243
+[4578952]: https://support.microsoft.com/kb/4578952
+[4578955]: https://support.microsoft.com/kb/4578955
+[4578953]: https://support.microsoft.com/kb/4578953
+[4578956]: https://support.microsoft.com/kb/4578956
+[4578950]: https://support.microsoft.com/kb/4578950
+[4578954]: https://support.microsoft.com/kb/4578954
+[4578966]: https://support.microsoft.com/kb/4578966
+[4598279]: https://support.microsoft.com/kb/4598279
+[4598278]: https://support.microsoft.com/kb/4598278
+[4598285]: https://support.microsoft.com/kb/4598285
+[4566426]: https://support.microsoft.com/kb/4566426
+[4566425]: https://support.microsoft.com/kb/4566425
+[4578013]: https://support.microsoft.com/kb/4578013
+[4576750]: https://support.microsoft.com/kb/4576750
+[4592510]: https://support.microsoft.com/kb/4592510
+[4598480]: https://support.microsoft.com/kb/4598480
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
## December 2020 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
data-factory https://docs.microsoft.com/en-us/azure/data-factory/azure-ssis-integration-runtime-package-store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/azure-ssis-integration-runtime-package-store.md
@@ -23,7 +23,7 @@ To lift & shift your on-premises SQL Server Integration Services (SSIS) workload
- Running packages deployed into SSIS catalog (SSISDB) hosted by Azure SQL Database server/Managed Instance (Project Deployment Model) - Running packages deployed into file system, Azure Files, or SQL Server database (MSDB) hosted by Azure SQL Managed Instance (Package Deployment Model)
-When you use Package Deployment Model, you can choose whether you want to provision your Azure-SSIS IR with package stores. They provide a package management layer on top of file system, Azure Files, or MSDB hosted by Azure SQL Managed Instance. Azure-SSIS IR package store allows you to import/export/delete/run packages and monitor/stop running packages via SQL Server Management Studio (SSMS) similar to the [legacy SSIS package store](/sql/integration-services/service/package-management-ssis-service?view=sql-server-2017).
+When you use Package Deployment Model, you can choose whether you want to provision your Azure-SSIS IR with package stores. They provide a package management layer on top of file system, Azure Files, or MSDB hosted by Azure SQL Managed Instance. Azure-SSIS IR package store allows you to import/export/delete/run packages and monitor/stop running packages via SQL Server Management Studio (SSMS) similar to the [legacy SSIS package store](/sql/integration-services/service/package-management-ssis-service).
## Connect to Azure-SSIS IR
@@ -56,7 +56,7 @@ After you connect to your Azure-SSIS IR on SSMS, you can right-click on any pack
> > Additionally, since legacy SSIS package stores are bound to a specific SQL Server version and accessible only in SSMS for that version, lower-version packages in legacy SSIS package stores need to be exported into the file system first, using the designated SSMS version, before they can be imported into Azure-SSIS IR package stores using SSMS 2019 or later versions. >
- > Alternatively, to import multiple SSIS packages into Azure-SSIS IR package stores while switching their protection level, you can use [dtutil](/sql/integration-services/dtutil-utility?view=sql-server-2017) command line utility, see [Deploying multiple packages with dtutil](#deploying-multiple-packages-with-dtutil).
+ > Alternatively, to import multiple SSIS packages into Azure-SSIS IR package stores while switching their protection level, you can use the [dtutil](/sql/integration-services/dtutil-utility) command-line utility; see [Deploying multiple packages with dtutil](#deploying-multiple-packages-with-dtutil).
* Select **Export Package** to export packages from your package store into **File System**, **SQL Server** (MSDB), or the legacy **SSIS Package Store**.
@@ -69,7 +69,7 @@ After you connect to your Azure-SSIS IR on SSMS, you can right-click on any pack
> > Since Azure-SSIS IR is currently based on **SQL Server 2017**, executing lower-version packages on it will upgrade them into SSIS 2017 packages at run-time. Executing higher-version packages is unsupported. >
- > Alternatively, to export multiple SSIS packages from Azure-SSIS IR package stores while switching their protection level, you can use [dtutil](/sql/integration-services/dtutil-utility?view=sql-server-2017) command line utility, see [Deploying multiple packages with dtutil](#deploying-multiple-packages-with-dtutil).
+ > Alternatively, to export multiple SSIS packages from Azure-SSIS IR package stores while switching their protection level, you can use the [dtutil](/sql/integration-services/dtutil-utility) command-line utility; see [Deploying multiple packages with dtutil](#deploying-multiple-packages-with-dtutil).
* Select **Delete** to delete existing folders/packages from your package store.
@@ -117,7 +117,7 @@ After you connect to your Azure-SSIS IR on SSMS, you can right-click on it to po
To lift & shift your on-premises SSIS workloads onto SSIS in ADF while maintaining the legacy Package Deployment Model, you need to deploy your packages from file system, MSDB hosted by SQL Server, or legacy SSIS package stores into Azure Files, MSDB hosted by Azure SQL Managed Instance, or Azure-SSIS IR package stores. At the same time, you should also switch their protection level from encryption by user key to unencrypted or encryption by password if you haven't done so already.
-You can use [dtutil](/sql/integration-services/dtutil-utility?view=sql-server-2017) command line utility that comes with SQL Server/SSIS installation to deploy multiple packages in batches. It's bound to specific SSIS version, so if you use it to deploy lower-version packages without switching their protection level, it will simply copy them while preserving their SSIS version. If you use it to deploy them and switch their protection level at the same time, it will upgrade them into its SSIS version.
+You can use the [dtutil](/sql/integration-services/dtutil-utility) command-line utility that comes with the SQL Server/SSIS installation to deploy multiple packages in batches. It's bound to a specific SSIS version, so if you use it to deploy lower-version packages without switching their protection level, it simply copies them while preserving their SSIS version. If you use it to deploy them and switch their protection level at the same time, it upgrades them to its SSIS version.
Since Azure-SSIS IR is currently based on **SQL Server 2017**, executing lower-version packages on it will upgrade them into SSIS 2017 packages at run-time. Executing higher-version packages is unsupported.
@@ -143,7 +143,7 @@ for %f in (*.dtsx) do dtutil.exe /FILE %f /ENCRYPT FILE;Z:\%f;2;YourEncryptionPa
To run the above commands in a batch file, replace `%f` with `%%f`.
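For anyone scripting this in PowerShell instead of a batch file, here is a minimal sketch under the same assumptions as the commands above: `Z:` is an Azure Files share already mounted on the machine, `YourEncryptionPassword` is a placeholder, and `dtutil.exe` is on the PATH.

```powershell
# Deploy every .dtsx package in the current folder to a mounted Azure Files
# share (Z:) while switching the protection level to encryption by password.
# Z:\ and YourEncryptionPassword are placeholders; dtutil.exe must be on the PATH.
Get-ChildItem -Filter *.dtsx | ForEach-Object {
    & dtutil.exe /FILE $_.Name /ENCRYPT "FILE;Z:\$($_.Name);2;YourEncryptionPassword"
}
```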
-To deploy multiple packages from legacy SSIS package stores on top of file system into Azure Files and switch their protection level at the same time, you can use the same commands, but replace `YourLocalDrive:\...\YourPackageFolder` with a local folder used by legacy SSIS package stores: `YourLocalDrive:\Program Files\Microsoft SQL Server\YourSQLServerDefaultCompatibilityLevel\DTS\Packages\YourPackageFolder`. For example, if your legacy SSIS package store is bound to SQL Server 2016, go to `YourLocalDrive:\Program Files\Microsoft SQL Server\130\DTS\Packages\YourPackageFolder`. You can find the value for `YourSQLServerDefaultCompatibilityLevel` from a [list of SQL Server default compatibility levels](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level?view=sql-server-ver15#arguments).
+To deploy multiple packages from legacy SSIS package stores on top of file system into Azure Files and switch their protection level at the same time, you can use the same commands, but replace `YourLocalDrive:\...\YourPackageFolder` with a local folder used by legacy SSIS package stores: `YourLocalDrive:\Program Files\Microsoft SQL Server\YourSQLServerDefaultCompatibilityLevel\DTS\Packages\YourPackageFolder`. For example, if your legacy SSIS package store is bound to SQL Server 2016, go to `YourLocalDrive:\Program Files\Microsoft SQL Server\130\DTS\Packages\YourPackageFolder`. You can find the value for `YourSQLServerDefaultCompatibilityLevel` from a [list of SQL Server default compatibility levels](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level#arguments).
If you've configured Azure-SSIS IR package stores on top of Azure Files, your deployed packages will appear in them when you connect to your Azure-SSIS IR on SSMS 2019 or later versions.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/built-in-preinstalled-components-ssis-integration-runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/built-in-preinstalled-components-ssis-integration-runtime.md
@@ -30,45 +30,45 @@ This article lists all built-in and preinstalled components, such as clients, dr
| Type | Name | |------|------|
-| **Built-in connection managers** | [ADO Connection Manager](/sql/integration-services/connection-manager/ado-connection-manager?view=sql-server-2017)<br/><br/>[ADO.NET Connection Manager](/sql/integration-services/connection-manager/ado-net-connection-manager?view=sql-server-2017)<br/><br/>[Analysis Services Connection Manager](/sql/integration-services/connection-manager/analysis-services-connection-manager?view=sql-server-2017)<br/><br/>[Excel Connection Manager](/sql/integration-services/connection-manager/excel-connection-manager?view=sql-server-2017)<br/><br/>[File Connection Manager](/sql/integration-services/connection-manager/file-connection-manager?view=sql-server-2017)<br/><br/>[Flat File Connection Manager](/sql/integration-services/connection-manager/flat-file-connection-manager?view=sql-server-2017)<br/><br/>[FTP Connection Manager](/sql/integration-services/connection-manager/ftp-connection-manager?view=sql-server-2017)<br/><br/>[Hadoop Connection Manager](/sql/integration-services/connection-manager/hadoop-connection-manager?view=sql-server-2017)<br/><br/>[HTTP Connection Manager](/sql/integration-services/connection-manager/http-connection-manager?view=sql-server-2017)<br/><br/>[MSMQ Connection Manager](/sql/integration-services/connection-manager/msmq-connection-manager?view=sql-server-2017)<br/><br/>[Multiple Files Connection Manager](/sql/integration-services/connection-manager/multiple-files-connection-manager?view=sql-server-2017)<br/><br/>[Multiple Flat Files Connection Manager](/sql/integration-services/connection-manager/multiple-flat-files-connection-manager?view=sql-server-2017)<br/><br/>[ODBC Connection Manager](/sql/integration-services/connection-manager/odbc-connection-manager?view=sql-server-2017)<br/><br/>[OLEDB Connection Manager](/sql/integration-services/connection-manager/ole-db-connection-manager?view=sql-server-2017)<br/><br/>[SAP BW Connection Manager](/sql/integration-services/connection-manager/sap-bw-connection-manager?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[SMO Connection Manager](/sql/integration-services/connection-manager/smo-connection-manager?view=sql-server-2017)<br/><br/>[SMTP Connection Manager](/sql/integration-services/connection-manager/smtp-connection-manager?view=sql-server-2017)<br/><br/>[SQL Server Compact Edition Connection Manager](/sql/integration-services/connection-manager/sql-server-compact-edition-connection-manager?view=sql-server-2017)<br/><br/>[WMI Connection Manager](/sql/integration-services/connection-manager/wmi-connection-manager?view=sql-server-2017) |
-| **Preinstalled connection managers ([Azure Feature Pack](/sql/integration-services/azure-feature-pack-for-integration-services-ssis?view=sql-server-ver15))** | [Azure Data Lake Analytics Connection Manager](/sql/integration-services/connection-manager/azure-data-lake-analytics-connection-manager?view=sql-server-2017)<br/><br/>[Azure Data Lake Store Connection Manager](/sql/integration-services/connection-manager/azure-data-lake-store-connection-manager?view=sql-server-2017)<br/><br/>[Azure HDInsight Connection Manager](/sql/integration-services/connection-manager/azure-hdinsight-connection-manager?view=sql-server-2017)<br/><br/>[Azure Resource Manager Connection Manager](/sql/integration-services/connection-manager/azure-resource-manager-connection-manager?view=sql-server-2017)<br/><br/>[Azure Storage Connection Manager](/sql/integration-services/connection-manager/azure-storage-connection-manager?view=sql-server-2017)<br/><br/>[Azure Subscription Connection Manager](/sql/integration-services/connection-manager/azure-subscription-connection-manager?view=sql-server-2017) |
+| **Built-in connection managers** | [ADO Connection Manager](/sql/integration-services/connection-manager/ado-connection-manager)<br/><br/>[ADO.NET Connection Manager](/sql/integration-services/connection-manager/ado-net-connection-manager)<br/><br/>[Analysis Services Connection Manager](/sql/integration-services/connection-manager/analysis-services-connection-manager)<br/><br/>[Excel Connection Manager](/sql/integration-services/connection-manager/excel-connection-manager)<br/><br/>[File Connection Manager](/sql/integration-services/connection-manager/file-connection-manager)<br/><br/>[Flat File Connection Manager](/sql/integration-services/connection-manager/flat-file-connection-manager)<br/><br/>[FTP Connection Manager](/sql/integration-services/connection-manager/ftp-connection-manager)<br/><br/>[Hadoop Connection Manager](/sql/integration-services/connection-manager/hadoop-connection-manager)<br/><br/>[HTTP Connection Manager](/sql/integration-services/connection-manager/http-connection-manager)<br/><br/>[MSMQ Connection Manager](/sql/integration-services/connection-manager/msmq-connection-manager)<br/><br/>[Multiple Files Connection Manager](/sql/integration-services/connection-manager/multiple-files-connection-manager)<br/><br/>[Multiple Flat Files Connection Manager](/sql/integration-services/connection-manager/multiple-flat-files-connection-manager)<br/><br/>[ODBC Connection Manager](/sql/integration-services/connection-manager/odbc-connection-manager)<br/><br/>[OLEDB Connection Manager](/sql/integration-services/connection-manager/ole-db-connection-manager)<br/><br/>[SAP BW Connection Manager](/sql/integration-services/connection-manager/sap-bw-connection-manager) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[SMO Connection Manager](/sql/integration-services/connection-manager/smo-connection-manager)<br/><br/>[SMTP Connection Manager](/sql/integration-services/connection-manager/smtp-connection-manager)<br/><br/>[SQL Server Compact Edition Connection Manager](/sql/integration-services/connection-manager/sql-server-compact-edition-connection-manager)<br/><br/>[WMI Connection Manager](/sql/integration-services/connection-manager/wmi-connection-manager) |
+| **Preinstalled connection managers ([Azure Feature Pack](/sql/integration-services/azure-feature-pack-for-integration-services-ssis))** | [Azure Data Lake Analytics Connection Manager](/sql/integration-services/connection-manager/azure-data-lake-analytics-connection-manager)<br/><br/>[Azure Data Lake Store Connection Manager](/sql/integration-services/connection-manager/azure-data-lake-store-connection-manager)<br/><br/>[Azure HDInsight Connection Manager](/sql/integration-services/connection-manager/azure-hdinsight-connection-manager)<br/><br/>[Azure Resource Manager Connection Manager](/sql/integration-services/connection-manager/azure-resource-manager-connection-manager)<br/><br/>[Azure Storage Connection Manager](/sql/integration-services/connection-manager/azure-storage-connection-manager)<br/><br/>[Azure Subscription Connection Manager](/sql/integration-services/connection-manager/azure-subscription-connection-manager) |
## Built-in and preinstalled data sources on Azure-SSIS IR | Type | Name | |------|------|
-| **Built-in data sources** | [ADO.NET Source](/sql/integration-services/data-flow/ado-net-source?view=sql-server-2017)<br/><br/>[CDC Source](/sql/integration-services/data-flow/cdc-source?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Excel Source](/sql/integration-services/data-flow/excel-source?view=sql-server-2017)<br/><br/>[Flat File Source](/sql/integration-services/data-flow/flat-file-source?view=sql-server-2017)<br/><br/>[HDFS File Source](/sql/integration-services/data-flow/hdfs-file-source?view=sql-server-2017)<br/><br/>[OData Source](/sql/integration-services/data-flow/odata-source?view=sql-server-2017)<br/><br/>[ODBC Source](/sql/integration-services/data-flow/odbc-source?view=sql-server-2017)<br/><br/>[OLEDB Source](/sql/integration-services/data-flow/ole-db-source?view=sql-server-2017)<br/><br/>[Raw File Source](/sql/integration-services/data-flow/raw-file-source?view=sql-server-2017)<br/><br/>[SAP BW Source](/sql/integration-services/data-flow/sap-bw-source?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[XML Source](/sql/integration-services/data-flow/xml-source?view=sql-server-2017) |
-| **Preinstalled data sources ([Azure Feature Pack](/sql/integration-services/azure-feature-pack-for-integration-services-ssis?view=sql-server-ver15) + [Power Query Source](/sql/integration-services/data-flow/power-query-source?view=sql-server-ver15))** | [Azure Blob Source](/sql/integration-services/data-flow/azure-blob-source?view=sql-server-2017)<br/><br/>[Azure Data Lake Store Source](/sql/integration-services/data-flow/azure-data-lake-store-source?view=sql-server-2017)<br/><br/>[Flexible File Source](/sql/integration-services/data-flow/flexible-file-source?view=sql-server-ver15)<br/><br/>[Power Query Source](/sql/integration-services/data-flow/power-query-source?view=sql-server-ver15) |
+| **Built-in data sources** | [ADO.NET Source](/sql/integration-services/data-flow/ado-net-source)<br/><br/>[CDC Source](/sql/integration-services/data-flow/cdc-source) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Excel Source](/sql/integration-services/data-flow/excel-source)<br/><br/>[Flat File Source](/sql/integration-services/data-flow/flat-file-source)<br/><br/>[HDFS File Source](/sql/integration-services/data-flow/hdfs-file-source)<br/><br/>[OData Source](/sql/integration-services/data-flow/odata-source)<br/><br/>[ODBC Source](/sql/integration-services/data-flow/odbc-source)<br/><br/>[OLEDB Source](/sql/integration-services/data-flow/ole-db-source)<br/><br/>[Raw File Source](/sql/integration-services/data-flow/raw-file-source)<br/><br/>[SAP BW Source](/sql/integration-services/data-flow/sap-bw-source) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[XML Source](/sql/integration-services/data-flow/xml-source) |
+| **Preinstalled data sources ([Azure Feature Pack](/sql/integration-services/azure-feature-pack-for-integration-services-ssis) + [Power Query Source](/sql/integration-services/data-flow/power-query-source))** | [Azure Blob Source](/sql/integration-services/data-flow/azure-blob-source)<br/><br/>[Azure Data Lake Store Source](/sql/integration-services/data-flow/azure-data-lake-store-source)<br/><br/>[Flexible File Source](/sql/integration-services/data-flow/flexible-file-source)<br/><br/>[Power Query Source](/sql/integration-services/data-flow/power-query-source) |
## Built-in and preinstalled data destinations on Azure-SSIS IR | Type | Name | |------|------|
-| **Built-in data destinations** | [ADO.NET Destination](/sql/integration-services/data-flow/ado-net-destination?view=sql-server-2017)<br/><br/>[Data Mining Model Training Destination](/sql/integration-services/data-flow/data-mining-model-training-destination?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[DataReader Destination](/sql/integration-services/data-flow/datareader-destination?view=sql-server-2017)<br/><br/>[Data Streaming Destination](/sql/integration-services/data-flow/data-streaming-destination?view=sql-server-2017)<br/><br/>[Dimension Processing Destination](/sql/integration-services/data-flow/dimension-processing-destination?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Excel Destination](/sql/integration-services/data-flow/excel-destination?view=sql-server-2017)<br/><br/>[Flat File Destination](/sql/integration-services/data-flow/flat-file-destination?view=sql-server-2017)<br/><br/>[HDFS File Destination](/sql/integration-services/data-flow/hdfs-file-destination?view=sql-server-2017)<br/><br/>[ODBC Destination](/sql/integration-services/data-flow/odbc-destination?view=sql-server-2017)<br/><br/>[OLEDB Destination](/sql/integration-services/data-flow/ole-db-destination?view=sql-server-2017)<br/><br/>[Partition Processing Destination](/sql/integration-services/data-flow/partition-processing-destination?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Raw File Destination](/sql/integration-services/data-flow/raw-file-destination?view=sql-server-2017)<br/><br/>[Recordset Destination](/sql/integration-services/data-flow/recordset-destination?view=sql-server-2017)<br/><br/>[SAP BW Destination](/sql/integration-services/data-flow/sap-bw-destination?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[SQL Server Compact Edition Destination](/sql/integration-services/data-flow/sql-server-compact-edition-destination?view=sql-server-2017)<br/><br/>[SQL Server Destination](/sql/integration-services/data-flow/sql-server-destination?view=sql-server-2017) |
-| **Preinstalled data destinations ([Azure Feature Pack](/sql/integration-services/azure-feature-pack-for-integration-services-ssis?view=sql-server-ver15))** | [Azure Blob Destination](/sql/integration-services/data-flow/azure-blob-destination?view=sql-server-2017)<br/><br/>[Azure Data Lake Store Destination](/sql/integration-services/data-flow/azure-data-lake-store-destination?view=sql-server-2017)<br/><br/>[Flexible File Destination](/sql/integration-services/data-flow/flexible-file-destination?view=sql-server-ver15) |
+| **Built-in data destinations** | [ADO.NET Destination](/sql/integration-services/data-flow/ado-net-destination)<br/><br/>[Data Mining Model Training Destination](/sql/integration-services/data-flow/data-mining-model-training-destination) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[DataReader Destination](/sql/integration-services/data-flow/datareader-destination)<br/><br/>[Data Streaming Destination](/sql/integration-services/data-flow/data-streaming-destination)<br/><br/>[Dimension Processing Destination](/sql/integration-services/data-flow/dimension-processing-destination) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Excel Destination](/sql/integration-services/data-flow/excel-destination)<br/><br/>[Flat File Destination](/sql/integration-services/data-flow/flat-file-destination)<br/><br/>[HDFS File Destination](/sql/integration-services/data-flow/hdfs-file-destination)<br/><br/>[ODBC Destination](/sql/integration-services/data-flow/odbc-destination)<br/><br/>[OLEDB Destination](/sql/integration-services/data-flow/ole-db-destination)<br/><br/>[Partition Processing Destination](/sql/integration-services/data-flow/partition-processing-destination) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Raw File Destination](/sql/integration-services/data-flow/raw-file-destination)<br/><br/>[Recordset Destination](/sql/integration-services/data-flow/recordset-destination)<br/><br/>[SAP BW Destination](/sql/integration-services/data-flow/sap-bw-destination) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[SQL Server Compact Edition Destination](/sql/integration-services/data-flow/sql-server-compact-edition-destination)<br/><br/>[SQL Server Destination](/sql/integration-services/data-flow/sql-server-destination) |
+| **Preinstalled data destinations ([Azure Feature Pack](/sql/integration-services/azure-feature-pack-for-integration-services-ssis))** | [Azure Blob Destination](/sql/integration-services/data-flow/azure-blob-destination)<br/><br/>[Azure Data Lake Store Destination](/sql/integration-services/data-flow/azure-data-lake-store-destination)<br/><br/>[Flexible File Destination](/sql/integration-services/data-flow/flexible-file-destination) |
## Built-in and preinstalled data transformations on Azure-SSIS IR | Type | Name | |------|------|
-| **Built-in auditing transformations** | [Audit Transformation](/sql/integration-services/data-flow/transformations/audit-transformation?view=sql-server-2017)<br/><br/>[Row Count Transformation](/sql/integration-services/data-flow/transformations/row-count-transformation?view=sql-server-2017) |
-| **Built-in BI transformations** | [Data Mining Query Transformation](/sql/integration-services/data-flow/transformations/data-mining-query-transformation?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[DQS Cleansing Transformation](/sql/integration-services/data-flow/transformations/dqs-cleansing-transformation?view=sql-server-2017)<br/><br/>[Fuzzy Grouping Transformation](/sql/integration-services/data-flow/transformations/fuzzy-grouping-transformation?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Fuzzy Lookup Transformation](/sql/integration-services/data-flow/transformations/fuzzy-lookup-transformation?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Term Extraction Transformation](/sql/integration-services/data-flow/transformations/term-extraction-transformation?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Term Lookup Transformation](/sql/integration-services/data-flow/transformations/term-lookup-transformation?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md)) |
-| **Built-in row transformations** | [Character Map Transformation](/sql/integration-services/data-flow/transformations/character-map-transformation?view=sql-server-2017)<br/><br/>[Copy Column Transformation](/sql/integration-services/data-flow/transformations/copy-column-transformation?view=sql-server-2017)<br/><br/>[Data Conversion Transformation](/sql/integration-services/data-flow/transformations/data-conversion-transformation?view=sql-server-2017)<br/><br/>[Derived Column Transformation](/sql/integration-services/data-flow/transformations/derived-column-transformation?view=sql-server-2017)<br/><br/>[Export Column Transformation](/sql/integration-services/data-flow/transformations/export-column-transformation?view=sql-server-2017)<br/><br/>[Import Column Transformation](/sql/integration-services/data-flow/transformations/import-column-transformation?view=sql-server-2017)<br/><br/>[OLE DB Command Transformation](/sql/integration-services/data-flow/transformations/ole-db-command-transformation?view=sql-server-2017)<br/><br/>[Script Component](/sql/integration-services/data-flow/transformations/script-component?view=sql-server-2017) |
-| **Built-in rowset transformations** | [Aggregate Transformation](/sql/integration-services/data-flow/transformations/aggregate-transformation?view=sql-server-2017)<br/><br/>[Percentage Sampling Transformation](/sql/integration-services/data-flow/transformations/percentage-sampling-transformation?view=sql-server-2017)<br/><br/>[Pivot Transformation](/sql/integration-services/data-flow/transformations/pivot-transformation?view=sql-server-2017)<br/><br/>[Row Sampling Transformation](/sql/integration-services/data-flow/transformations/row-sampling-transformation?view=sql-server-2017)<br/><br/>[Sort Transformation](/sql/integration-services/data-flow/transformations/sort-transformation?view=sql-server-2017)<br/><br/>[Unpivot Transformation](/sql/integration-services/data-flow/transformations/unpivot-transformation?view=sql-server-2017) |
-| **Built-in split and join transformations** | [Balanced Data Distributor Transformation](/sql/integration-services/data-flow/transformations/balanced-data-distributor-transformation?view=sql-server-2017)<br/><br/>[Cache Transform](/sql/integration-services/data-flow/transformations/cache-transform?view=sql-server-2017)<br/><br/>[CDC Splitter](/sql/integration-services/data-flow/cdc-splitter?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Conditional Split Transformation](/sql/integration-services/data-flow/transformations/conditional-split-transformation?view=sql-server-2017)<br/><br/>[Lookup Transformation](/sql/integration-services/data-flow/transformations/lookup-transformation?view=sql-server-2017)<br/><br/>[Merge Join Transformation](/sql/integration-services/data-flow/transformations/merge-join-transformation?view=sql-server-2017)<br/><br/>[Merge Transformation](/sql/integration-services/data-flow/transformations/merge-transformation?view=sql-server-2017)<br/><br/>[Multicast Transformation](/sql/integration-services/data-flow/transformations/multicast-transformation?view=sql-server-2017)<br/><br/>[Slowly Changing Dimension Transformation](/sql/integration-services/data-flow/transformations/slowly-changing-dimension-transformation?view=sql-server-2017)<br/><br/>[Union All Transformation](/sql/integration-services/data-flow/transformations/union-all-transformation?view=sql-server-2017) |
+| **Built-in auditing transformations** | [Audit Transformation](/sql/integration-services/data-flow/transformations/audit-transformation)<br/><br/>[Row Count Transformation](/sql/integration-services/data-flow/transformations/row-count-transformation) |
+| **Built-in BI transformations** | [Data Mining Query Transformation](/sql/integration-services/data-flow/transformations/data-mining-query-transformation) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[DQS Cleansing Transformation](/sql/integration-services/data-flow/transformations/dqs-cleansing-transformation)<br/><br/>[Fuzzy Grouping Transformation](/sql/integration-services/data-flow/transformations/fuzzy-grouping-transformation) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Fuzzy Lookup Transformation](/sql/integration-services/data-flow/transformations/fuzzy-lookup-transformation) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Term Extraction Transformation](/sql/integration-services/data-flow/transformations/term-extraction-transformation) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Term Lookup Transformation](/sql/integration-services/data-flow/transformations/term-lookup-transformation) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md)) |
+| **Built-in row transformations** | [Character Map Transformation](/sql/integration-services/data-flow/transformations/character-map-transformation)<br/><br/>[Copy Column Transformation](/sql/integration-services/data-flow/transformations/copy-column-transformation)<br/><br/>[Data Conversion Transformation](/sql/integration-services/data-flow/transformations/data-conversion-transformation)<br/><br/>[Derived Column Transformation](/sql/integration-services/data-flow/transformations/derived-column-transformation)<br/><br/>[Export Column Transformation](/sql/integration-services/data-flow/transformations/export-column-transformation)<br/><br/>[Import Column Transformation](/sql/integration-services/data-flow/transformations/import-column-transformation)<br/><br/>[OLE DB Command Transformation](/sql/integration-services/data-flow/transformations/ole-db-command-transformation)<br/><br/>[Script Component](/sql/integration-services/data-flow/transformations/script-component) |
+| **Built-in rowset transformations** | [Aggregate Transformation](/sql/integration-services/data-flow/transformations/aggregate-transformation)<br/><br/>[Percentage Sampling Transformation](/sql/integration-services/data-flow/transformations/percentage-sampling-transformation)<br/><br/>[Pivot Transformation](/sql/integration-services/data-flow/transformations/pivot-transformation)<br/><br/>[Row Sampling Transformation](/sql/integration-services/data-flow/transformations/row-sampling-transformation)<br/><br/>[Sort Transformation](/sql/integration-services/data-flow/transformations/sort-transformation)<br/><br/>[Unpivot Transformation](/sql/integration-services/data-flow/transformations/unpivot-transformation) |
+| **Built-in split and join transformations** | [Balanced Data Distributor Transformation](/sql/integration-services/data-flow/transformations/balanced-data-distributor-transformation)<br/><br/>[Cache Transform](/sql/integration-services/data-flow/transformations/cache-transform)<br/><br/>[CDC Splitter](/sql/integration-services/data-flow/cdc-splitter) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Conditional Split Transformation](/sql/integration-services/data-flow/transformations/conditional-split-transformation)<br/><br/>[Lookup Transformation](/sql/integration-services/data-flow/transformations/lookup-transformation)<br/><br/>[Merge Join Transformation](/sql/integration-services/data-flow/transformations/merge-join-transformation)<br/><br/>[Merge Transformation](/sql/integration-services/data-flow/transformations/merge-transformation)<br/><br/>[Multicast Transformation](/sql/integration-services/data-flow/transformations/multicast-transformation)<br/><br/>[Slowly Changing Dimension Transformation](/sql/integration-services/data-flow/transformations/slowly-changing-dimension-transformation)<br/><br/>[Union All Transformation](/sql/integration-services/data-flow/transformations/union-all-transformation) |
## Built-in and preinstalled tasks on Azure-SSIS IR | Type | Name | |------|------|
-| **Built-in Analysis Services tasks** | [Analysis Services Execute DDL Task](/sql/integration-services/control-flow/analysis-services-execute-ddl-task?view=sql-server-2017)<br/><br/>[Analysis Services Processing Task](/sql/integration-services/control-flow/analysis-services-processing-task?view=sql-server-2017)<br/><br/>[Data Mining Query Task](/sql/integration-services/control-flow/data-mining-query-task?view=sql-server-2017) |
-| **Built-in data flow tasks** | [Data Flow Task](/sql/integration-services/control-flow/data-flow-task?view=sql-server-2017) |
-| **Built-in data preparation tasks** | [CDC Control Task](/sql/integration-services/control-flow/cdc-control-task?view=sql-server-2017) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Check Database Integrity Task](/sql/integration-services/control-flow/check-database-integrity-task?view=sql-server-2017)<br/><br/>[Data Profiling Task](/sql/integration-services/control-flow/data-profiling-task-and-viewer?view=sql-server-2017)<br/><br/>[File System Task](/sql/integration-services/control-flow/file-system-task?view=sql-server-2017)<br/><br/>[FTP Task](/sql/integration-services/control-flow/ftp-task?view=sql-server-2017)<br/><br/>[Hadoop File System Task](/sql/integration-services/control-flow/hadoop-file-system-task?view=sql-server-2017)<br/><br/>[Hadoop Hive Task](/sql/integration-services/control-flow/hadoop-hive-task?view=sql-server-2017)<br/><br/>[Hadoop Pig Task](/sql/integration-services/control-flow/hadoop-pig-task?view=sql-server-2017)<br/><br/>[Web Service Task](/sql/integration-services/control-flow/web-service-task?view=sql-server-2017)<br/><br/>[XML Task](/sql/integration-services/control-flow/xml-task?view=sql-server-2017) |
-| **Built-in maintenance tasks** | [Back Up Database Task](/sql/integration-services/control-flow/back-up-database-task?view=sql-server-2017)<br/><br/>[Execute T-SQL Statement Task](/sql/integration-services/control-flow/execute-t-sql-statement-task?view=sql-server-2017)<br/><br/>[History Cleanup Task](/sql/integration-services/control-flow/history-cleanup-task?view=sql-server-2017)<br/><br/>[Maintenance Cleanup Task](/sql/integration-services/control-flow/maintenance-cleanup-task?view=sql-server-2017)<br/><br/>[Notify Operator Task](/sql/integration-services/control-flow/notify-operator-task?view=sql-server-2017)<br/><br/>[Rebuild Index Task](/sql/integration-services/control-flow/rebuild-index-task?view=sql-server-2017)<br/><br/>[Reorganize Index Task](/sql/integration-services/control-flow/reorganize-index-task?view=sql-server-2017)<br/><br/>[Select Objects to Transfer](/sql/integration-services/control-flow/select-objects-to-transfer?view=sql-server-2017)<br/><br/>[Shrink Database Task](/sql/integration-services/control-flow/shrink-database-task?view=sql-server-2017)<br/><br/>[Transfer Database Task](/sql/integration-services/control-flow/transfer-database-task?view=sql-server-2017)<br/><br/>[Transfer Error Messages Task](/sql/integration-services/control-flow/transfer-error-messages-task?view=sql-server-2017)<br/><br/>[Transfer Jobs Task](/sql/integration-services/control-flow/transfer-jobs-task?view=sql-server-2017)<br/><br/>[Transfer Logins Task](/sql/integration-services/control-flow/transfer-logins-task?view=sql-server-2017)<br/><br/>[Transfer Master Stored Procedures Task](/sql/integration-services/control-flow/transfer-master-stored-procedures-task?view=sql-server-2017)<br/><br/>[Transfer SQL Server Objects Task](/sql/integration-services/control-flow/transfer-sql-server-objects-task?view=sql-server-2017)<br/><br/>[Update Statistics Task](/sql/integration-services/control-flow/update-statistics-task?view=sql-server-2017) |
-| **Built-in scripting tasks** | [Script Task](/sql/integration-services/control-flow/script-task?view=sql-server-2017) |
-| **Built-in SQL Server tasks** | [Bulk Insert Task](/sql/integration-services/control-flow/bulk-insert-task?view=sql-server-2017)<br/><br/>[Execute SQL Task](/sql/integration-services/control-flow/execute-sql-task?view=sql-server-2017) |
-| **Built-in workflow tasks** | [Execute Package Task](/sql/integration-services/control-flow/execute-package-task?view=sql-server-2017)<br/><br/>[Execute Process Task](/sql/integration-services/control-flow/execute-process-task?view=sql-server-2017)<br/><br/>[Execute SQL Server Agent Job Task](/sql/integration-services/control-flow/execute-sql-server-agent-job-task?view=sql-server-2017)<br/><br/>[Expression Task](/sql/integration-services/control-flow/expression-task?view=sql-server-2017)<br/><br/>[Message Queue Task](/sql/integration-services/control-flow/message-queue-task?view=sql-server-2017)<br/><br/>[Send Mail Task](/sql/integration-services/control-flow/send-mail-task?view=sql-server-2017)<br/><br/>[WMI Data Reader Task](/sql/integration-services/control-flow/wmi-data-reader-task?view=sql-server-2017)<br/><br/>[WMI Event Watcher Task](/sql/integration-services/control-flow/wmi-event-watcher-task?view=sql-server-2017) |
-| **Preinstalled tasks ([Azure Feature Pack](/sql/integration-services/azure-feature-pack-for-integration-services-ssis?view=sql-server-ver15))** | [Azure Blob Download Task](/sql/integration-services/control-flow/azure-blob-download-task?view=sql-server-2017)<br/><br/>[Azure Blob Upload Task](/sql/integration-services/control-flow/azure-blob-upload-task?view=sql-server-2017)<br/><br/>[Azure Data Lake Analytics Task](/sql/integration-services/control-flow/azure-data-lake-analytics-task?view=sql-server-2017)<br/><br/>[Azure Data Lake Store File System Task](/sql/integration-services/control-flow/azure-data-lake-store-file-system-task?view=sql-server-2017)<br/><br/>[Azure HDInsight Create Cluster Task](/sql/integration-services/control-flow/azure-hdinsight-create-cluster-task?view=sql-server-2017)<br/><br/>[Azure HDInsight Delete Cluster Task](/sql/integration-services/control-flow/azure-hdinsight-delete-cluster-task?view=sql-server-2017)<br/><br/>[Azure HDInsight Hive Task](/sql/integration-services/control-flow/azure-hdinsight-hive-task?view=sql-server-2017)<br/><br/>[Azure HDInsight Pig Task](/sql/integration-services/control-flow/azure-hdinsight-pig-task?view=sql-server-2017)<br/><br/>[Azure SQL Azure Synapse Analytics Upload Task](/sql/integration-services/control-flow/azure-sql-dw-upload-task?view=sql-server-2017)<br/><br/>[Flexible File Task](/sql/integration-services/control-flow/flexible-file-task?view=sql-server-ver15) |
+| **Built-in Analysis Services tasks** | [Analysis Services Execute DDL Task](/sql/integration-services/control-flow/analysis-services-execute-ddl-task)<br/><br/>[Analysis Services Processing Task](/sql/integration-services/control-flow/analysis-services-processing-task)<br/><br/>[Data Mining Query Task](/sql/integration-services/control-flow/data-mining-query-task) |
+| **Built-in data flow tasks** | [Data Flow Task](/sql/integration-services/control-flow/data-flow-task) |
+| **Built-in data preparation tasks** | [CDC Control Task](/sql/integration-services/control-flow/cdc-control-task) ([Enterprise Edition](./how-to-configure-azure-ssis-ir-enterprise-edition.md))<br/><br/>[Check Database Integrity Task](/sql/integration-services/control-flow/check-database-integrity-task)<br/><br/>[Data Profiling Task](/sql/integration-services/control-flow/data-profiling-task-and-viewer)<br/><br/>[File System Task](/sql/integration-services/control-flow/file-system-task)<br/><br/>[FTP Task](/sql/integration-services/control-flow/ftp-task)<br/><br/>[Hadoop File System Task](/sql/integration-services/control-flow/hadoop-file-system-task)<br/><br/>[Hadoop Hive Task](/sql/integration-services/control-flow/hadoop-hive-task)<br/><br/>[Hadoop Pig Task](/sql/integration-services/control-flow/hadoop-pig-task)<br/><br/>[Web Service Task](/sql/integration-services/control-flow/web-service-task)<br/><br/>[XML Task](/sql/integration-services/control-flow/xml-task) |
+| **Built-in maintenance tasks** | [Back Up Database Task](/sql/integration-services/control-flow/back-up-database-task)<br/><br/>[Execute T-SQL Statement Task](/sql/integration-services/control-flow/execute-t-sql-statement-task)<br/><br/>[History Cleanup Task](/sql/integration-services/control-flow/history-cleanup-task)<br/><br/>[Maintenance Cleanup Task](/sql/integration-services/control-flow/maintenance-cleanup-task)<br/><br/>[Notify Operator Task](/sql/integration-services/control-flow/notify-operator-task)<br/><br/>[Rebuild Index Task](/sql/integration-services/control-flow/rebuild-index-task)<br/><br/>[Reorganize Index Task](/sql/integration-services/control-flow/reorganize-index-task)<br/><br/>[Select Objects to Transfer](/sql/integration-services/control-flow/select-objects-to-transfer)<br/><br/>[Shrink Database Task](/sql/integration-services/control-flow/shrink-database-task)<br/><br/>[Transfer Database Task](/sql/integration-services/control-flow/transfer-database-task)<br/><br/>[Transfer Error Messages Task](/sql/integration-services/control-flow/transfer-error-messages-task)<br/><br/>[Transfer Jobs Task](/sql/integration-services/control-flow/transfer-jobs-task)<br/><br/>[Transfer Logins Task](/sql/integration-services/control-flow/transfer-logins-task)<br/><br/>[Transfer Master Stored Procedures Task](/sql/integration-services/control-flow/transfer-master-stored-procedures-task)<br/><br/>[Transfer SQL Server Objects Task](/sql/integration-services/control-flow/transfer-sql-server-objects-task)<br/><br/>[Update Statistics Task](/sql/integration-services/control-flow/update-statistics-task) |
+| **Built-in scripting tasks** | [Script Task](/sql/integration-services/control-flow/script-task) |
+| **Built-in SQL Server tasks** | [Bulk Insert Task](/sql/integration-services/control-flow/bulk-insert-task)<br/><br/>[Execute SQL Task](/sql/integration-services/control-flow/execute-sql-task) |
+| **Built-in workflow tasks** | [Execute Package Task](/sql/integration-services/control-flow/execute-package-task)<br/><br/>[Execute Process Task](/sql/integration-services/control-flow/execute-process-task)<br/><br/>[Execute SQL Server Agent Job Task](/sql/integration-services/control-flow/execute-sql-server-agent-job-task)<br/><br/>[Expression Task](/sql/integration-services/control-flow/expression-task)<br/><br/>[Message Queue Task](/sql/integration-services/control-flow/message-queue-task)<br/><br/>[Send Mail Task](/sql/integration-services/control-flow/send-mail-task)<br/><br/>[WMI Data Reader Task](/sql/integration-services/control-flow/wmi-data-reader-task)<br/><br/>[WMI Event Watcher Task](/sql/integration-services/control-flow/wmi-event-watcher-task) |
+| **Preinstalled tasks ([Azure Feature Pack](/sql/integration-services/azure-feature-pack-for-integration-services-ssis))** | [Azure Blob Download Task](/sql/integration-services/control-flow/azure-blob-download-task)<br/><br/>[Azure Blob Upload Task](/sql/integration-services/control-flow/azure-blob-upload-task)<br/><br/>[Azure Data Lake Analytics Task](/sql/integration-services/control-flow/azure-data-lake-analytics-task)<br/><br/>[Azure Data Lake Store File System Task](/sql/integration-services/control-flow/azure-data-lake-store-file-system-task)<br/><br/>[Azure HDInsight Create Cluster Task](/sql/integration-services/control-flow/azure-hdinsight-create-cluster-task)<br/><br/>[Azure HDInsight Delete Cluster Task](/sql/integration-services/control-flow/azure-hdinsight-delete-cluster-task)<br/><br/>[Azure HDInsight Hive Task](/sql/integration-services/control-flow/azure-hdinsight-hive-task)<br/><br/>[Azure HDInsight Pig Task](/sql/integration-services/control-flow/azure-hdinsight-pig-task)<br/><br/>[Azure SQL Azure Synapse Analytics Upload Task](/sql/integration-services/control-flow/azure-sql-dw-upload-task)<br/><br/>[Flexible File Task](/sql/integration-services/control-flow/flexible-file-task) |
## Next steps
data-factory https://docs.microsoft.com/en-us/azure/data-factory/ci-cd-github-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
@@ -67,7 +67,7 @@ CI/CD release pipeline failing with the following error:
2020-07-06T09:50:50.8771655Z ##[error]Details: 2020-07-06T09:50:50.8772837Z ##[error]DataFactoryPropertyUpdateNotSupported: Updating property type is not supported. 2020-07-06T09:50:50.8774148Z ##[error]DataFactoryPropertyUpdateNotSupported: Updating property type is not supported.
-2020-07-06T09:50:50.8775530Z ##[error]Check out the troubleshooting guide to see if your issue is addressed: https://docs.microsoft.com/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment?view=azure-devops#troubleshooting
+2020-07-06T09:50:50.8775530Z ##[error]Check out the troubleshooting guide to see if your issue is addressed: https://docs.microsoft.com/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment#troubleshooting
2020-07-06T09:50:50.8776801Z ##[error]Task failed while creating or updating the template deployment. `
@@ -114,15 +114,15 @@ You are unable to move Data Factory from one Resource Group to another, failing
` {
- "code": "ResourceMoveProviderValidationFailed",
- "message": "Resource move validation failed. Please see details. Diagnostic information: timestamp 'xxxxxxxxxxxxZ', subscription id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx', tracking id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx', request correlation id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'.",
- "details": [
- {
- "code": "BadRequest",
- "target": "Microsoft.DataFactory/factories",
- "message": "One of the resources contain integration runtimes that are either SSIS-IRs in starting/started/stopping state, or Self-Hosted IRs which are shared with other resources. Resource move is not supported for those resources."
- }
- ]
+ "code": "ResourceMoveProviderValidationFailed",
+ "message": "Resource move validation failed. Please see details. Diagnostic information: timestamp 'xxxxxxxxxxxxZ', subscription id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx', tracking id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx', request correlation id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'.",
+ "details": [
+ {
+ "code": "BadRequest",
+ "target": "Microsoft.DataFactory/factories",
+ "message": "One of the resources contain integration runtimes that are either SSIS-IRs in starting/started/stopping state, or Self-Hosted IRs which are shared with other resources. Resource move is not supported for those resources."
+ }
+ ]
} `
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-file-system https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-file-system.md
@@ -146,7 +146,7 @@ The following properties are supported for file system under `storeSettings` set
| type | The type property under `storeSettings` must be set to **FileServerReadSettings**. | Yes | | ***Locate the files to copy:*** | | | | OPTION 1: static path<br> | Copy from the given folder/file path specified in the dataset. If you want to copy all files from a folder, additionally specify `wildcardFileName` as `*`. | |
-| OPTION 2: server side filter<br>- fileFilter | File server side native filter, which provides better performance than OPTION 3 wildcard filter. Use `*` to match zero or more characters and `?` to match zero or single character. Learn more about the syntax and notes from the **Remarks** under [this section](/dotnet/api/system.io.directory.getfiles?view=netframework-4.7.2#System_IO_Directory_GetFiles_System_String_System_String_System_IO_SearchOption_). | No |
+| OPTION 2: server side filter<br>- fileFilter | File server side native filter, which provides better performance than the OPTION 3 wildcard filter. Use `*` to match zero or more characters and `?` to match zero or one character. Learn more about the syntax and notes from the **Remarks** under [this section](/dotnet/api/system.io.directory.getfiles#System_IO_Directory_GetFiles_System_String_System_String_System_IO_SearchOption_), and see the sketch after this table for a way to preview a pattern locally. | No |
| OPTION 3: client side filter<br>- wildcardFolderPath | The folder path with wildcard characters to filter source folders. This filtering happens on the ADF side: ADF enumerates the folders/files under the given path, then applies the wildcard filter.<br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or one character); use `^` to escape if your actual folder name contains a wildcard or this escape character. <br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | No | | OPTION 3: client side filter<br>- wildcardFileName | The file name with wildcard characters under the given folderPath/wildcardFolderPath to filter source files. This filtering happens on the ADF side: ADF enumerates the files under the given path, then applies the wildcard filter.<br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or one character); use `^` to escape if your actual file name contains a wildcard or this escape character.<br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes | | OPTION 3: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, as relative paths to the path configured in the dataset.<br/>When using this option, do not specify a file name in the dataset. See more examples in [File list examples](#file-list-examples). |No |
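Because `fileFilter` follows the matching rules documented for .NET's `Directory.GetFiles`, you can preview a pattern locally before using it in ADF. A minimal sketch; the folder path and pattern below are placeholders:

```powershell
# Preview which files a server-side fileFilter pattern would match, using the
# same .NET API whose matching rules the fileFilter option follows.
# 'C:\logs' and 'log_2021-01-??.csv' are placeholder values.
[System.IO.Directory]::GetFiles('C:\logs', 'log_2021-01-??.csv')
```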
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-data-consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-data-consistency.md
@@ -64,7 +64,7 @@ The following example provides a JSON definition to enable data consistency veri
"referenceName": "ADLSGen2", "type": "LinkedServiceReference" },
- "path": "sessionlog/"
+ "path": "sessionlog/"
} } }
@@ -79,7 +79,7 @@ linkedServiceName | The linked service of [Azure Blob Storage](connector-azure-b
path | The path of the log files. | Specify the path where you want to store the log files. If you do not provide a path, the service creates a container for you. | No >[!NOTE]
->- When copying binary files from, or to Azure Blob or Azure Data Lake Storage Gen2, ADF does block level MD5 checksum verification leveraging [Azure Blob API](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions?view=azure-dotnet-legacy) and [Azure Data Lake Storage Gen2 API](/rest/api/storageservices/datalakestoragegen2/path/update#request-headers). If ContentMD5 on files exist on Azure Blob or Azure Data Lake Storage Gen2 as data sources, ADF does file level MD5 checksum verification after reading the files as well. After copying files to Azure Blob or Azure Data Lake Storage Gen2 as data destination, ADF writes ContentMD5 to Azure Blob or Azure Data Lake Storage Gen2 which can be further consumed by downstream applications for data consistency verification.
+>- When copying binary files from or to Azure Blob or Azure Data Lake Storage Gen2, ADF does block-level MD5 checksum verification leveraging [Azure Blob API](/dotnet/api/microsoft.azure.storage.blob.blobrequestoptions?view=azure-dotnet-legacy&preserve-view=true) and [Azure Data Lake Storage Gen2 API](/rest/api/storageservices/datalakestoragegen2/path/update#request-headers). If ContentMD5 exists on files in Azure Blob or Azure Data Lake Storage Gen2 as data sources, ADF also does file-level MD5 checksum verification after reading the files. After copying files to Azure Blob or Azure Data Lake Storage Gen2 as the data destination, ADF writes ContentMD5 to Azure Blob or Azure Data Lake Storage Gen2, which can be further consumed by downstream applications for data consistency verification.
>- ADF does file size verification when copying binary files between any storage stores. ## Monitoring
@@ -96,7 +96,7 @@ After the copy activity runs completely, you can see the result of data consiste
"filesSkipped": 2, "throughput": 297, "logFilePath": "myfolder/a84bf8d4-233f-4216-8cb5-45962831cd1b/",
- "dataConsistencyVerification":
+ "dataConsistencyVerification":
{ "VerificationResult": "Verified", "InconsistentData": "Skipped"
data-factory https://docs.microsoft.com/en-us/azure/data-factory/create-azure-ssis-integration-runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-ssis-integration-runtime.md
@@ -22,7 +22,7 @@ This article provides steps for provisioning an Azure-SQL Server Integration Ser
- Running packages deployed into SSIS catalog (SSISDB) hosted by Azure SQL Database server/Managed Instance (Project Deployment Model) - Running packages deployed into file system, Azure Files, or SQL Server database (MSDB) hosted by Azure SQL Managed Instance (Package Deployment Model)
-After an Azure-SSIS IR is provisioned, you can use familiar tools to deploy and run your packages in Azure. These tools are already Azure-enabled and include SQL Server Data Tools (SSDT), SQL Server Management Studio (SSMS), and command-line utilities like [dtutil](/sql/integration-services/dtutil-utility?view=sql-server-2017) and [AzureDTExec](./how-to-invoke-ssis-package-azure-enabled-dtexec.md).
+After an Azure-SSIS IR is provisioned, you can use familiar tools to deploy and run your packages in Azure. These tools are already Azure-enabled and include SQL Server Data Tools (SSDT), SQL Server Management Studio (SSMS), and command-line utilities like [dtutil](/sql/integration-services/dtutil-utility) and [AzureDTExec](./how-to-invoke-ssis-package-azure-enabled-dtexec.md).
The [Provisioning Azure-SSIS IR](./tutorial-deploy-ssis-packages-azure.md) tutorial shows how to create an Azure-SSIS IR via the Azure portal or the Data Factory app. The tutorial also shows how to optionally use an Azure SQL Database server or managed instance to host SSISDB. This article expands on the tutorial and describes how to do these optional tasks:
@@ -76,7 +76,7 @@ The following table compares certain features of an Azure SQL Database server an
| Feature | SQL Database| SQL Managed instance | |---------|--------------|------------------|
-| **Scheduling** | The SQL Server Agent is not available.<br/><br/>See [Schedule a package execution in a Data Factory pipeline](/sql/integration-services/lift-shift/ssis-azure-schedule-packages?view=sql-server-2017#activity).| The Managed Instance Agent is available. |
+| **Scheduling** | The SQL Server Agent is not available.<br/><br/>See [Schedule a package execution in a Data Factory pipeline](/sql/integration-services/lift-shift/ssis-azure-schedule-packages#activity).| The Managed Instance Agent is available. |
| **Authentication** | You can create an SSISDB instance with a contained database user who represents any Azure AD group with the managed identity of your data factory as a member in the **db_owner** role.<br/><br/>See [Enable Azure AD authentication to create an SSISDB in Azure SQL Database server](enable-aad-authentication-azure-ssis-ir.md#enable-azure-ad-on-azure-sql-database). | You can create an SSISDB instance with a contained database user who represents the managed identity of your data factory. <br/><br/>See [Enable Azure AD authentication to create an SSISDB in Azure SQL Managed Instance](enable-aad-authentication-azure-ssis-ir.md#enable-azure-ad-on-sql-managed-instance). | | **Service tier** | When you create an Azure-SSIS IR with your Azure SQL Database server, you can select the service tier for SSISDB. There are multiple service tiers. | When you create an Azure-SSIS IR with your managed instance, you can't select the service tier for SSISDB. All databases in your managed instance share the same resource allocated to that instance. | | **Virtual network** | Your Azure-SSIS IR can join an Azure Resource Manager virtual network if you use an Azure SQL Database server with IP firewall rules/virtual network service endpoints. | Your Azure-SSIS IR can join an Azure Resource Manager virtual network if you use a managed instance with private endpoint. The virtual network is required when you don't enable a public endpoint for your managed instance.<br/><br/>If you join your Azure-SSIS IR to the same virtual network as your managed instance, make sure that your Azure-SSIS IR is in a different subnet from your managed instance. If you join your Azure-SSIS IR to a different virtual network from your managed instance, we recommend either a virtual network peering or a network-to-network connection. See [Connect your application to an Azure SQL Database Managed Instance](../azure-sql/managed-instance/connect-application-instance.md). |
@@ -168,7 +168,7 @@ Select **Test connection** when applicable and if it's successful, select **Next
On the **Deployment settings** page of **Integration runtime setup** pane, if you want to manage your packages that are deployed into MSDB, file system, or Azure Files (Package Deployment Model) with Azure-SSIS IR package stores, select the **Create package stores to manage your packages that are deployed into file system/Azure Files/SQL Server database (MSDB) hosted by Azure SQL Managed Instance** check box.
-Azure-SSIS IR package store allows you to import/export/delete/run packages and monitor/stop running packages via SSMS similar to the [legacy SSIS package store](/sql/integration-services/service/package-management-ssis-service?view=sql-server-2017). For more information, see [Manage SSIS packages with Azure-SSIS IR package stores](./azure-ssis-integration-runtime-package-store.md).
+Azure-SSIS IR package store allows you to import/export/delete/run packages and monitor/stop running packages via SSMS similar to the [legacy SSIS package store](/sql/integration-services/service/package-management-ssis-service). For more information, see [Manage SSIS packages with Azure-SSIS IR package stores](./azure-ssis-integration-runtime-package-store.md).
If you select this check box, you can add multiple package stores to your Azure-SSIS IR by selecting **New**. Conversely, one package store can be shared by multiple Azure-SSIS IRs.
@@ -999,9 +999,9 @@ If you use SSISDB, you can deploy your packages into it and run them on your Azu
- For a managed instance with private endpoint, the server endpoint format is `<server name>.<dns prefix>.database.windows.net`.
- For a managed instance with public endpoint, the server endpoint format is `<server name>.public.<dns prefix>.database.windows.net,3342`.
-If you don't use SSISDB, you can deploy your packages into file system, Azure Files, or MSDB hosted by your Azure SQL Managed Instance and run them on your Azure-SSIS IR by using [dtutil](/sql/integration-services/dtutil-utility?view=sql-server-2017) and [AzureDTExec](./how-to-invoke-ssis-package-azure-enabled-dtexec.md) command-line utilities.
+If you don't use SSISDB, you can deploy your packages into file system, Azure Files, or MSDB hosted by your Azure SQL Managed Instance and run them on your Azure-SSIS IR by using [dtutil](/sql/integration-services/dtutil-utility) and [AzureDTExec](./how-to-invoke-ssis-package-azure-enabled-dtexec.md) command-line utilities.
-For more information, see [Deploy SSIS projects/packages](/sql/integration-services/packages/deploy-integration-services-ssis-projects-and-packages?view=sql-server-ver15).
+For more information, see [Deploy SSIS projects/packages](/sql/integration-services/packages/deploy-integration-services-ssis-projects-and-packages).
In both cases, you can also run your deployed packages on Azure-SSIS IR by using the Execute SSIS Package activity in Data Factory pipelines. For more information, see [Invoke SSIS package execution as a first-class Data Factory activity](./how-to-invoke-ssis-package-ssis-activity.md).
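To make the dtutil path above concrete, here's a minimal PowerShell sketch of copying a local package into MSDB on a managed instance. The server endpoint, credentials, and package names are hypothetical placeholders, not values from this article:

```powershell
# Minimal sketch; server, user, password, and package names are placeholders.
# /COPY "SQL;<name>" targets MSDB on the destination server; the semicolon is
# quoted so PowerShell doesn't treat it as a statement separator.
dtutil /FILE "C:\packages\MyPackage.dtsx" `
       /DestServer "myserver.public.abc123.database.windows.net,3342" `
       /DestUser "myuser" `
       /DestPassword "mypassword" `
       /COPY "SQL;MyPackage"
```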
data-factory https://docs.microsoft.com/en-us/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
@@ -340,7 +340,7 @@ To view and reuse some samples of standard custom setups, complete the following
* A *TLS 1.2* folder, which contains a custom setup script (*main.cmd*) to use strong cryptography and more secure network protocol (TLS 1.2) on each node of your Azure-SSIS IR. The script also disables older SSL/TLS versions.
- * A *ZULU OPENJDK* folder, which contains a custom setup script (*main.cmd*) and PowerShell file (*install_openjdk.ps1*) to install the Zulu OpenJDK on each node of your Azure-SSIS IR. This setup lets you use Azure Data Lake Store and Flexible File connectors to process ORC and Parquet files. For more information, see [Azure Feature Pack for Integration Services](/sql/integration-services/azure-feature-pack-for-integration-services-ssis?view=sql-server-ver15#dependency-on-java).
+ * A *ZULU OPENJDK* folder, which contains a custom setup script (*main.cmd*) and PowerShell file (*install_openjdk.ps1*) to install the Zulu OpenJDK on each node of your Azure-SSIS IR. This setup lets you use Azure Data Lake Store and Flexible File connectors to process ORC and Parquet files. For more information, see [Azure Feature Pack for Integration Services](/sql/integration-services/azure-feature-pack-for-integration-services-ssis#dependency-on-java).
First, [download the latest Zulu OpenJDK](https://www.azul.com/downloads/zulu/zulu-windows/) (for example, *zulu8.33.0.1-jdk8.0.192-win_x64.zip*), and then upload it together with *main.cmd* and *install_openjdk.ps1* to your container.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/how-to-invoke-ssis-package-azure-enabled-dtexec https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-invoke-ssis-package-azure-enabled-dtexec.md
@@ -19,7 +19,7 @@ ms.reviewer: douglasl
This article describes the Azure-enabled dtexec (AzureDTExec) command prompt utility. It's used to run SQL Server Integration Services (SSIS) packages on the Azure-SSIS Integration Runtime (IR) in Azure Data Factory.
-The traditional dtexec utility comes with SQL Server. For more information, see [dtexec utility](/sql/integration-services/packages/dtexec-utility?view=sql-server-2017). It's often invoked by third-party orchestrators or schedulers, such as ActiveBatch and Control-M, to run SSIS packages on-premises.
+The traditional dtexec utility comes with SQL Server. For more information, see [dtexec utility](/sql/integration-services/packages/dtexec-utility). It's often invoked by third-party orchestrators or schedulers, such as ActiveBatch and Control-M, to run SSIS packages on-premises.
The modern AzureDTExec utility comes with SQL Server Management Studio (SSMS). It can also be invoked by third-party orchestrators or schedulers to run SSIS packages in Azure. It facilitates lifting and shifting, or migrating, your SSIS packages to the cloud. After migration, if you want to keep using third-party orchestrators or schedulers in your day-to-day operations, they can now invoke AzureDTExec instead of dtexec.
@@ -28,7 +28,7 @@ AzureDTExec runs your packages as Execute SSIS Package activities in Data Factor
AzureDTExec can be configured via SSMS to use an Azure Active Directory (Azure AD) application that generates pipelines in your data factory. It can also be configured to access file systems, file shares, or Azure Files where you store your packages. Based on the values you give for its invocation options, AzureDTExec generates and runs a unique Data Factory pipeline with an Execute SSIS Package activity in it. Invoking AzureDTExec with the same values for its options reruns the existing pipeline.

## Prerequisites
-To use AzureDTExec, download and install the latest version of SSMS, which is version 18.3 or later. Download it from [this website](/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-2017).
+To use AzureDTExec, download and install the latest version of SSMS, which is version 18.3 or later. Download it from [this website](/sql/ssms/download-sql-server-management-studio-ssms).
## Configure the AzureDTExec utility

Installing SSMS on your local machine also installs AzureDTExec. To configure its settings, start SSMS with the **Run as administrator** option. Then select **Tools** > **Migrate to Azure** > **Configure Azure-enabled DTExec**.
@@ -77,10 +77,10 @@ The utility is installed at `{SSMS Folder}\Common7\IDE\CommonExtensions\Microsof
/De MyEncryptionPassword
```
-Invoking AzureDTExec offers similar options as invoking dtexec. For more information, see [dtexec Utility](/sql/integration-services/packages/dtexec-utility?view=sql-server-2017). Here are the options that are currently supported:
+Invoking AzureDTExec offers similar options as invoking dtexec. For more information, see [dtexec Utility](/sql/integration-services/packages/dtexec-utility). Here are the options that are currently supported:
- **/F[ile]**: Loads a package that's stored in file system, file share, or Azure Files. As the value for this option, you can specify the UNC path for your package file in file system, file share, or Azure Files with its .dtsx extension. If the UNC path specified contains any space, put quotation marks around the whole path.
-- **/Conf[igFile]**: Specifies a configuration file to extract values from. Using this option, you can set a run-time configuration for your package that differs from the one specified at design time. You can store different settings in an XML configuration file and then load them before your package execution. For more information, see [SSIS package configurations](/sql/integration-services/packages/package-configurations?view=sql-server-2017). To specify the value for this option, use the UNC path for your configuration file in file system, file share, or Azure Files with its dtsConfig extension. If the UNC path specified contains any space, put quotation marks around the whole path.
+- **/Conf[igFile]**: Specifies a configuration file to extract values from. Using this option, you can set a run-time configuration for your package that differs from the one specified at design time. You can store different settings in an XML configuration file and then load them before your package execution. For more information, see [SSIS package configurations](/sql/integration-services/packages/package-configurations). To specify the value for this option, use the UNC path for your configuration file in file system, file share, or Azure Files with its dtsConfig extension. If the UNC path specified contains any space, put quotation marks around the whole path.
- **/Conn[ection]**: Specifies connection strings for existing connection managers in your package. Using this option, you can set run-time connection strings for existing connection managers in your package that differ from the ones specified at design time. Specify the value for this option as follows: `connection_manager_name_or_id;connection_string [[;connection_manager_name_or_id;connection_string]...]`.
- **/Set**: Overrides the configuration of a parameter, variable, property, container, log provider, Foreach enumerator, or connection in your package. This option can be specified multiple times. Specify the value for this option as follows: `property_path;value`. For example, `\package.variables[counter].Value;1` overrides the value of the `counter` variable to 1. You can use the **Package Configuration** wizard to find, copy, and paste the value of `property_path` for items in your package whose value you want to override. For more information, see [Package Configuration wizard](/sql/integration-services/packages/legacy-package-deployment-ssis).
- **/De[crypt]**: Sets the decryption password for your package that's configured with the **EncryptAllWithPassword**/**EncryptSensitiveWithPassword** protection level.
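To tie these options together, here's a hedged invocation sketch in PowerShell. It assumes AzureDTExec.exe's install folder is on your PATH, and every path, connection string, and password shown is a hypothetical placeholder:

```powershell
# Hedged sketch; all values are placeholders, and AzureDTExec.exe is assumed
# to be reachable on PATH. Values containing semicolons are quoted so
# PowerShell passes them as single arguments.
AzureDTExec.exe `
    /F "\\myfiles.file.core.windows.net\packages\MyPackage.dtsx" `
    /Conf "\\myfiles.file.core.windows.net\packages\MyPackage.dtsConfig" `
    /Conn "MyOleDbConn;Data Source=myserver.database.windows.net;Initial Catalog=MyDb" `
    /Set "\package.variables[counter].Value;1" `
    /De "MyEncryptionPassword"
```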
data-factory https://docs.microsoft.com/en-us/azure/data-factory/how-to-invoke-ssis-package-managed-instance-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-invoke-ssis-package-managed-instance-agent.md
@@ -19,7 +19,7 @@ With this feature, you can run SSIS packages that are stored in SSISDB in a SQL
## Prerequisites
-To use this feature, [download](/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-2017) and install latest SQL Server Management Studio (SSMS). Version support details as below:
+To use this feature, [download](/sql/ssms/download-sql-server-management-studio-ssms) and install the latest SQL Server Management Studio (SSMS). Version support details are as follows:
- To run packages in SSISDB or file system, install SSMS version 18.5 or above.
- To run packages in package store, install SSMS version 18.6 or above.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/how-to-invoke-ssis-package-ssdt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-invoke-ssis-package-ssdt.md
@@ -24,7 +24,7 @@ With this feature, you can attach a newly created/existing Azure-SSIS IR to SSIS
## Prerequisites
-To use this feature, please download and install the latest SSDT with SSIS Projects extension for Visual Studio (VS) from [here](https://marketplace.visualstudio.com/items?itemName=SSIS.SqlServerIntegrationServicesProjects). Alternatively, you can also download and install the latest SSDT as a standalone installer from [here](/sql/ssdt/download-sql-server-data-tools-ssdt?view=sql-server-2017#ssdt-for-vs-2017-standalone-installer).
+To use this feature, please download and install the latest SSDT with SSIS Projects extension for Visual Studio (VS) from [here](https://marketplace.visualstudio.com/items?itemName=SSIS.SqlServerIntegrationServicesProjects). Alternatively, you can also download and install the latest SSDT as a standalone installer from [here](/sql/ssdt/download-sql-server-data-tools-ssdt#ssdt-for-vs-2017-standalone-installer).
## Azure-enable SSIS projects
@@ -48,7 +48,7 @@ For existing SSIS projects, you can Azure-enable them by following these steps:
![Azure-enable existing SSIS project](media/how-to-invoke-ssis-package-ssdt/ssdt-azure-enabled-for-existing-project.png)
-2. On the **Select Visual Studio Configuration** page, select your existing VS configuration to apply package execution settings in Azure. You can also create a new one if you haven't done so already, see [Creating a new VS configuration](/visualstudio/ide/how-to-create-and-edit-configurations?view=vs-2019). We recommend that you have at least two different VS configurations for package executions in the local and cloud environments, so you can Azure-enable your project against the cloud configuration. In this way, if you've parameterized your project or packages, you can assign different values to your project or package parameters at run-time based on the different execution environments (either on your local machine or in Azure). For example, see [Switching package execution environments](#switchenvironment).
+2. On the **Select Visual Studio Configuration** page, select your existing VS configuration to apply package execution settings in Azure. You can also create a new one if you haven't done so already; see [Creating a new VS configuration](/visualstudio/ide/how-to-create-and-edit-configurations). We recommend that you have at least two different VS configurations for package executions in the local and cloud environments, so you can Azure-enable your project against the cloud configuration. In this way, if you've parameterized your project or packages, you can assign different values to your project or package parameters at run-time based on the different execution environments (either on your local machine or in Azure). For example, see [Switching package execution environments](#switchenvironment).
![Select Visual Studio configuration](media/how-to-invoke-ssis-package-ssdt/ssdt-azure-enabled-select-visual-studio-configurations.png)
@@ -175,7 +175,7 @@ If you parameterize your project/packages in Project Deployment Model, you can c
![Parameterize source connection](media/how-to-invoke-ssis-package-ssdt/ssdt-azure-enabled-example-update-task-with-parameters.png)
-3. By default, you have an existing VS configuration for package executions in the local environment named **Development**. Create a new VS configuration for package executions in the cloud environment named **Azure**, see [Creating a new VS configuration](/visualstudio/ide/how-to-create-and-edit-configurations?view=vs-2019), if you haven't done so already.
+3. By default, you have an existing VS configuration for package executions in the local environment named **Development**. If you haven't done so already, create a new VS configuration for package executions in the cloud environment named **Azure**; see [Creating a new VS configuration](/visualstudio/ide/how-to-create-and-edit-configurations).
4. When viewing the parameters of your package, select the **Add Parameters to Configurations** button to open the **Manage Parameter Values** window for your package. Next, assign different values of target file path to the **FilePath** package parameter under the **Development** and **Azure** configurations.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/how-to-invoke-ssis-package-ssis-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-invoke-ssis-package-ssis-activity.md
@@ -194,7 +194,7 @@ If you select **Package store** as your package location, complete the following
1. For **Package store name**, select an existing package store that's attached to your Azure-SSIS IR.
- 1. Specify your package to run by providing its path (without `.dtsx`) from the selected package store in the **Package path** box. If the selected package store is on top of file system/Azure Files, you can browse and select your package by selecting **Browse file storage**, otherwise you can enter its path in the format of `<folder name>\<package name>`. You can also import new packages into the selected package store via SQL Server Management Studio (SSMS) similar to the [legacy SSIS package store](/sql/integration-services/service/package-management-ssis-service?view=sql-server-2017). For more information, see [Manage SSIS packages with Azure-SSIS IR package stores](./azure-ssis-integration-runtime-package-store.md).
+ 1. Specify your package to run by providing its path (without `.dtsx`) from the selected package store in the **Package path** box. If the selected package store is on top of file system/Azure Files, you can browse and select your package by selecting **Browse file storage**; otherwise, you can enter its path in the format of `<folder name>\<package name>`. You can also import new packages into the selected package store via SQL Server Management Studio (SSMS) similar to the [legacy SSIS package store](/sql/integration-services/service/package-management-ssis-service). For more information, see [Manage SSIS packages with Azure-SSIS IR package stores](./azure-ssis-integration-runtime-package-store.md).
1. If you configure your package in a separate file, you need to provide a UNC path to your configuration file (with `.dtsConfig`) in the **Configuration path** box. You can browse and select your configuration by selecting **Browse file storage** or enter its path manually. For example, if you store your configuration in Azure Files, its path is `\\<storage account name>.file.core.windows.net\<file share name>\<configuration name>.dtsConfig`.
@@ -246,7 +246,7 @@ On the **Connection Managers** tab of Execute SSIS Package activity, complete th
For example, without modifying your original package on SSDT, you can convert its on-premises-to-on-premises data flows running on SQL Server into on-premises-to-cloud data flows running on SSIS IR in ADF by overriding the values of **ConnectByProxy**, **ConnectionString**, and **ConnectUsingManagedIdentity** properties in existing connection managers at run-time.
- These run-time overrides can enable Self-Hosted IR (SHIR) as a proxy for SSIS IR when accessing data on premises, see [Configuring SHIR as a proxy for SSIS IR](./self-hosted-integration-runtime-proxy-ssis.md), and Azure SQL Database/Managed Instance connections using the latest MSOLEDBSQL driver that in turn enables Azure Active Directory (AAD) authentication with ADF managed identity, see [Configuring AAD authentication with ADF managed identity for OLEDB connections](/sql/integration-services/connection-manager/ole-db-connection-manager?view=sql-server-ver15#managed-identities-for-azure-resources-authentication).
+ These run-time overrides can enable Self-Hosted IR (SHIR) as a proxy for SSIS IR when accessing data on premises (see [Configuring SHIR as a proxy for SSIS IR](./self-hosted-integration-runtime-proxy-ssis.md)). They can also enable Azure SQL Database/Managed Instance connections that use the latest MSOLEDBSQL driver, which in turn supports Azure Active Directory (AAD) authentication with the ADF managed identity (see [Configuring AAD authentication with ADF managed identity for OLEDB connections](/sql/integration-services/connection-manager/ole-db-connection-manager#managed-identities-for-azure-resources-authentication)).
![Set properties from SSDT on the Connection Managers tab](media/how-to-invoke-ssis-package-ssis-activity/ssis-activity-connection-managers2.png)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/how-to-migrate-ssis-job-ssms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-migrate-ssis-job-ssms.md
@@ -36,7 +36,7 @@ In general, for selected SQL agent jobs with applicable job step types, **SSIS J
## Prerequisites
-The feature described in this article requires SQL Server Management Studio version 18.5 or higher. To get the latest version of SSMS, see [Download SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-ver15).
+The feature described in this article requires SQL Server Management Studio version 18.5 or higher. To get the latest version of SSMS, see [Download SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms).
## Migrate SSIS jobs to ADF
data-factory https://docs.microsoft.com/en-us/azure/data-factory/how-to-use-sql-managed-instance-with-ir https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-use-sql-managed-instance-with-ir.md
@@ -158,7 +158,7 @@ For more info about how to create an Azure-SSIS IR, see [Create an Azure-SSIS in
## Clean up SSISDB logs
-SSISDB logs retention policy are defined by below properties in [catalog.catalog_properties](/sql/integration-services/system-views/catalog-catalog-properties-ssisdb-database?view=sql-server-ver15):
+The SSISDB log retention policy is defined by the following properties in [catalog.catalog_properties](/sql/integration-services/system-views/catalog-catalog-properties-ssisdb-database):
- OPERATION_CLEANUP_ENABLED
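As an illustration, here's a minimal PowerShell sketch that adjusts two of these properties through the documented `catalog.set_catalog_property` stored procedure; the server name and credentials are assumed placeholders:

```powershell
# Hedged sketch; requires the SqlServer PowerShell module, and the server,
# user, and password values are placeholders. Lowering RETENTION_WINDOW
# shortens how long operation logs are kept before cleanup removes them.
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" `
    -Database "SSISDB" -Username "myadmin" -Password "mypassword" -Query @"
EXEC catalog.set_catalog_property @property_name = N'RETENTION_WINDOW',          @property_value = N'30';
EXEC catalog.set_catalog_property @property_name = N'OPERATION_CLEANUP_ENABLED', @property_value = N'TRUE';
"@
```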
data-factory https://docs.microsoft.com/en-us/azure/data-factory/monitor-programmatically https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-programmatically.md
@@ -63,7 +63,7 @@ For a complete walkthrough of creating and monitoring a pipeline using .NET SDK,
Console.ReadKey();
```
-For complete documentation on .NET SDK, see [Data Factory .NET SDK reference](/dotnet/api/microsoft.azure.management.datafactory?view=azure-dotnet).
+For complete documentation on .NET SDK, see [Data Factory .NET SDK reference](/dotnet/api/microsoft.azure.management.datafactory).
## Python

For a complete walkthrough of creating and monitoring a pipeline using Python SDK, see [Create a data factory and pipeline using Python](quickstart-create-data-factory-python.md).
@@ -81,7 +81,7 @@ activity_runs_paged = list(adf_client.activity_runs.list_by_pipeline_run(
print_activity_run_details(activity_runs_paged[0])
```
-For complete documentation on Python SDK, see [Data Factory Python SDK reference](/python/api/overview/azure/datafactory?view=azure-python).
+For complete documentation on Python SDK, see [Data Factory Python SDK reference](/python/api/overview/azure/datafactory).
## REST API

For a complete walkthrough of creating and monitoring a pipeline using REST API, see [Create a data factory and pipeline using REST API](quickstart-create-data-factory-rest-api.md).
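As a hedged illustration, the same run details can be fetched from PowerShell with `Invoke-RestMethod`; the subscription, resource group, factory, and run IDs are placeholders you'd supply:

```powershell
# Hedged sketch; all identifiers are placeholders. Requires Az.Accounts
# (Connect-AzAccount) for the ARM access token.
$sub = "<subscription id>"; $rg = "<resource group>"; $df = "<factory name>"; $runId = "<pipeline run id>"
$token = (Get-AzAccessToken).Token
$uri = "https://management.azure.com/subscriptions/$sub/resourceGroups/$rg" +
       "/providers/Microsoft.DataFactory/factories/$df/pipelineruns/$runId" +
       "?api-version=2018-06-01"
Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Authorization = "Bearer $token" }
```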
data-factory https://docs.microsoft.com/en-us/azure/data-factory/monitor-using-azure-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-using-azure-monitor.md
@@ -578,7 +578,7 @@ Here are the log attributes of SSIS IR start/stop/maintenance operations.
#### SSIS event message context log attributes
-Here are the log attributes of conditions related to event messages that are generated by SSIS package executions on your SSIS IR. They convey similar information as [SSIS catalog (SSISDB) event message context table or view](/sql/integration-services/system-views/catalog-event-message-context?view=sql-server-ver15) that shows run-time values of many SSIS package properties. They're generated when you select `Basic/Verbose` logging level and useful for debugging/compliance checking.
+Here are the log attributes of conditions related to event messages that are generated by SSIS package executions on your SSIS IR. They convey similar information to the [SSIS catalog (SSISDB) event message context table or view](/sql/integration-services/system-views/catalog-event-message-context), which shows run-time values of many SSIS package properties. They're generated when you select the `Basic/Verbose` logging level and are useful for debugging/compliance checking.
```json
{
@@ -615,7 +615,7 @@ Here are the log attributes of conditions related to event messages that are gen
| **operationId** | String | The unique ID for tracking a particular operation in SSISDB | `1` (1 signifies operations related to packages **not** stored in SSISDB/invoked via T-SQL) |
| **contextDepth** | String | The depth of your event message context | `0` (0 signifies the context before package execution starts, 1 signifies the context when an error occurs, and it increases as the context is further from the error) |
| **packagePath** | String | The path of package object as your event message context source | `\Package` |
-| **contextType** | String | The type of package object as your event message context source | `60`(see [more context types](/sql/integration-services/system-views/catalog-event-message-context?view=sql-server-ver15#remarks)) |
+| **contextType** | String | The type of package object as your event message context source | `60` (see [more context types](/sql/integration-services/system-views/catalog-event-message-context#remarks)) |
| **contextSourceName** | String | The name of package object as your event message context source | `MyPackage` |
| **contextSourceId** | String | The unique ID of package object as your event message context source | `{E2CF27FB-EA48-41E9-AF6F-3FE938B4ADE1}` |
| **propertyName** | String | The name of package property for your event message context source | `DelayValidation` |
@@ -624,7 +624,7 @@ Here are the log attributes of conditions related to event messages that are gen
#### SSIS event messages log attributes
-Here are the log attributes of event messages that are generated by SSIS package executions on your SSIS IR. They convey similar information as [SSISDB event messages table or view](/sql/integration-services/system-views/catalog-event-messages?view=sql-server-ver15) that shows the detailed text/metadata of event messages. They're generated at any logging level except `None`.
+Here are the log attributes of event messages that are generated by SSIS package executions on your SSIS IR. They convey similar information to the [SSISDB event messages table or view](/sql/integration-services/system-views/catalog-event-messages), which shows the detailed text/metadata of event messages. They're generated at any logging level except `None`.
```json
{
@@ -664,21 +664,21 @@ Here are the log attributes of event messages that are generated by SSIS package
| **level** | String | The level of diagnostic logs | `Informational` |
| **operationId** | String | The unique ID for tracking a particular operation in SSISDB | `1` (1 signifies operations related to packages **not** stored in SSISDB/invoked via T-SQL) |
| **messageTime** | String | The time when your event message is created in UTC format | `2017-06-28T21:00:27.3534352Z` |
-| **messageType** | String | The type of your event message | `70`(see [more message types](/sql/integration-services/system-views/catalog-operation-messages-ssisdb-database?view=sql-server-ver15#remarks)) |
-| **messageSourceType** | String | The type of your event message source | `20`(see [more message source types](/sql/integration-services/system-views/catalog-operation-messages-ssisdb-database?view=sql-server-ver15#remarks)) |
+| **messageType** | String | The type of your event message | `70` (see [more message types](/sql/integration-services/system-views/catalog-operation-messages-ssisdb-database#remarks)) |
+| **messageSourceType** | String | The type of your event message source | `20` (see [more message source types](/sql/integration-services/system-views/catalog-operation-messages-ssisdb-database#remarks)) |
| **message** | String | The text of your event message | `MyPackage:Validation has started.` |
| **packageName** | String | The name of your executed package file | `MyPackage.dtsx` |
| **eventName** | String | The name of related run-time event | `OnPreValidate` |
| **messageSourceName** | String | The name of package component as your event message source | `Data Flow Task` |
-| **messageSourceId** | String | The unique ID of package component as your event message source | `{1a45a5a4-3df9-4f02-b818-ebf583829ad2} ` |
+| **messageSourceId** | String | The unique ID of package component as your event message source | `{1a45a5a4-3df9-4f02-b818-ebf583829ad2} ` |
| **subcomponentName** | String | The name of data flow component as your event message source | `SSIS.Pipeline` |
| **packagePath** | String | The path of package object as your event message source | `\Package\Data Flow Task` |
| **executionPath** | String | The full path from parent package to executed component | `\Transformation\Data Flow Task` (This path also captures component iterations) |
-| **threadId** | String | The unique ID of thread executed when your event message is logged | `{1a45a5a4-3df9-4f02-b818-ebf583829ad2} ` |
+| **threadId** | String | The unique ID of thread executed when your event message is logged | `{1a45a5a4-3df9-4f02-b818-ebf583829ad2} ` |
#### SSIS executable statistics log attributes
-Here are the log attributes of executable statistics that are generated by SSIS package executions on your SSIS IR, where executables are containers or tasks in the control flow of packages. They convey similar information as [SSISDB executable statistics table or view](/sql/integration-services/system-views/catalog-executable-statistics?view=sql-server-ver15) that shows a row for each running executable, including its iterations. They're generated at any logging level except `None` and useful for identifying task-level bottlenecks/failures.
+Here are the log attributes of executable statistics that are generated by SSIS package executions on your SSIS IR, where executables are containers or tasks in the control flow of packages. They convey similar information to the [SSISDB executable statistics table or view](/sql/integration-services/system-views/catalog-executable-statistics), which shows a row for each running executable, including its iterations. They're generated at any logging level except `None` and are useful for identifying task-level bottlenecks/failures.
```json
{
@@ -722,7 +722,7 @@ Here are the log attributes of executable statistics that are generated by SSIS
#### SSIS execution component phases log attributes
-Here are the log attributes of run-time statistics for data flow components that are generated by SSIS package executions on your SSIS IR. They convey similar information as [SSISDB execution component phases table or view](/sql/integration-services/system-views/catalog-execution-component-phases?view=sql-server-ver15) that shows the time spent by data flow components in all their execution phases. They're generated when you select `Performance/Verbose` logging level and useful for capturing data flow execution statistics.
+Here are the log attributes of run-time statistics for data flow components that are generated by SSIS package executions on your SSIS IR. They convey similar information to the [SSISDB execution component phases table or view](/sql/integration-services/system-views/catalog-execution-component-phases), which shows the time spent by data flow components in all their execution phases. They're generated when you select the `Performance/Verbose` logging level and are useful for capturing data flow execution statistics.
```json
{
@@ -768,7 +768,7 @@ Here are the log attributes of run-time statistics for data flow components that
#### SSIS execution data statistics log attributes
-Here are the log attributes of data movements through each leg of data flow pipelines, from upstream to downstream components, that are generated by SSIS package executions on your SSIS IR. They convey similar information as [SSISDB execution data statistics table or view](/sql/integration-services/system-views/catalog-execution-data-statistics?view=sql-server-ver15) that shows row counts of data moved through data flow tasks. They're generated when you select `Verbose` logging level and useful for computing data flow throughput.
+Here are the log attributes of data movements through each leg of data flow pipelines, from upstream to downstream components, that are generated by SSIS package executions on your SSIS IR. They convey similar information to the [SSISDB execution data statistics table or view](/sql/integration-services/system-views/catalog-execution-data-statistics), which shows row counts of data moved through data flow tasks. They're generated when you select the `Verbose` logging level and are useful for computing data flow throughput.
```json
{
data-factory https://docs.microsoft.com/en-us/azure/data-factory/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/security-baseline.md
@@ -309,7 +309,7 @@ Additionally, ensure that you enable diagnostic settings for services related to
### 2.10: Enable command-line audit logging
-**Guidance**: If you are running your Integration Runtime in an Azure Virtual Machine (VM), you can enable command-line audit logging. The Azure Security Center provides Security Event log monitoring for Azure VMs. Security Center provisions the Microsoft Monitoring Agent on all supported Azure VMs and any new ones that are created if automatic provisioning is enabled or you can install the agent manually. The agent enables the process creation event 4688 and the CommandLine field inside event 4688. New processes created on the VM are recorded by EventLog and monitored by Security CenterΓÇÖs detection services.
+**Guidance**: If you are running your Integration Runtime in an Azure Virtual Machine (VM), you can enable command-line audit logging. The Azure Security Center provides Security Event log monitoring for Azure VMs. Security Center provisions the Microsoft Monitoring Agent on all supported Azure VMs and any new ones that are created if automatic provisioning is enabled or you can install the agent manually. The agent enables the process creation event 4688 and the CommandLine field inside event 4688. New processes created on the VM are recorded by EventLog and monitored by Security Center's detection services.
* [Data collection in Azure Security Center](../security-center/security-center-enable-data-collection.md#data-collection-tier)
@@ -333,9 +333,9 @@ While Azure AD is the recommended method to administrate user access, keep in mi
* [Information on Privileged Identity Manager](../active-directory/privileged-identity-management/pim-deployment-plan.md)
-* [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0)
+* [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole)
-* [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0)
+* [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember)
* [Information for Local Accounts](../active-directory/devices/assign-local-admin.md#manage-the-device-administrator-role)
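As a hedged sketch of the two Azure AD PowerShell cmdlets linked above, the following enumerates each activated directory role and its members; role display names and member types vary by tenant:

```powershell
# Hedged sketch using the AzureAD module; requires Connect-AzureAD first.
Connect-AzureAD
Get-AzureADDirectoryRole | ForEach-Object {
    [pscustomobject]@{
        Role    = $_.DisplayName
        # Members may include service principals without a UPN.
        Members = (Get-AzureADDirectoryRoleMember -ObjectId $_.ObjectId).UserPrincipalName -join ', '
    }
}
```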
@@ -742,7 +742,7 @@ Although classic Azure resources may be discovered via Resource Graph, it is hig
* [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md)
-* [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?view=azps-3.0.0)
+* [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription)
* [Understand Azure RBAC](../role-based-access-control/overview.md)
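For example, a minimal sketch using the subscription cmdlet linked above:

```powershell
# List the subscriptions visible to the signed-in account (Az module).
Connect-AzAccount
Get-AzSubscription | Select-Object Name, Id, State
```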
@@ -878,7 +878,7 @@ Note that this only applies if your Integration Runtime is running in an Azure V
**Guidance**: If you are running your Integration Runtime in an Azure Virtual Machine, depending on the type of scripts, you may use operating system-specific configurations or third-party resources to limit users' ability to execute scripts within Azure compute resources. You can also leverage Azure Security Center Adaptive Application Controls to ensure that only authorized software executes and all unauthorized software is blocked from executing on Azure Virtual Machines.
-* [How to control PowerShell script execution in Windows Environments](/powershell/module/microsoft.powershell.security/set-executionpolicy?view=powershell-6)
+* [How to control PowerShell script execution in Windows Environments](/powershell/module/microsoft.powershell.security/set-executionpolicy)
* [How to use Azure Security Center Adaptive Application Controls](../security-center/security-center-adaptive-application.md)
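A minimal sketch of the execution-policy control linked above, assuming a machine-wide `AllSigned` policy is appropriate for your IR VM:

```powershell
# Allow only signed scripts machine-wide; run from an elevated session.
Set-ExecutionPolicy -ExecutionPolicy AllSigned -Scope LocalMachine
Get-ExecutionPolicy -List   # verify the effective policy per scope
```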
@@ -916,7 +916,7 @@ Note that this only applies if your Integration Runtime is running in an Azure V
**Guidance**: Define and implement standard security configurations for Azure Data Factory with Azure Policy. Use Azure Policy aliases in the "Microsoft.DataFactory" namespace to create custom policies to audit or enforce the configuration of your Azure Data Factory instances.
-* [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias?view=azps-3.3.0)
+* [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias)
* [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
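A minimal sketch of the alias lookup linked above, scoped to the "Microsoft.DataFactory" namespace named in the guidance:

```powershell
# Discover policy aliases for custom Azure Policy definitions (Az.Resources).
Get-AzPolicyAlias -NamespaceMatch 'Microsoft.DataFactory' |
    Select-Object -ExpandProperty Aliases |
    Select-Object Name
```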
@@ -962,7 +962,7 @@ For most scenarios, the Microsoft base VM templates combined with the Azure Auto
* [Information on creating Azure Resource Manager templates](../virtual-machines/windows/ps-template.md)
-* [How to upload a custom VM VHD to Azure](/azure-stack/operator/azure-stack-add-vm-image?view=azs-1910)
+* [How to upload a custom VM VHD to Azure](/azure-stack/operator/azure-stack-add-vm-image)
**Azure Security Center monitoring**: Yes
@@ -972,9 +972,9 @@ For most scenarios, the Microsoft base VM templates combined with the Azure Auto
**Guidance**: If using custom Azure Policy definitions, use Azure DevOps or Azure Repos to securely store and manage your code.
-* [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?view=azure-devops)
+* [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow)
-* [Azure Repos Documentation](/azure/devops/repos/index?view=azure-devops)
+* [Azure Repos Documentation](/azure/devops/repos/index)
**Azure Security Center monitoring**: Not applicable
@@ -1148,7 +1148,7 @@ For any of your data stores, refer to that service's security baseline for recom
* [An overview of Azure VM backup](../backup/backup-azure-vms-introduction.md)
-* [How to backup key vault keys in Azure](/powershell/module/azurerm.keyvault/backup-azurekeyvaultkey?view=azurermps-6.13.0)
+* [How to backup key vault keys in Azure](/powershell/module/azurerm.keyvault/backup-azurekeyvaultkey)
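A minimal sketch of the backup cmdlet linked above; the vault, key, and output path are assumed placeholders:

```powershell
# Back up a Key Vault key to a local blob file (AzureRM module).
Backup-AzureKeyVaultKey -VaultName 'MyKeyVault' -Name 'MyKey' -OutputFile 'C:\backup\MyKey.blob'
```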
**Azure Security Center monitoring**: Yes
@@ -1162,7 +1162,7 @@ For any of your data stores, refer to that service's security baseline for guida
* [How to recover files from Azure Virtual Machine backup](../backup/backup-azure-restore-files-from-vm.md)
-* [How to restore key vault keys in Azure](/powershell/module/azurerm.keyvault/restore-azurekeyvaultkey?view=azurermps-6.13.0)
+* [How to restore key vault keys in Azure](/powershell/module/azurerm.keyvault/restore-azurekeyvaultkey)
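A minimal sketch of the restore cmdlet linked above; the vault name and input path are assumed placeholders:

```powershell
# Restore a previously backed-up key into a vault (AzureRM module).
Restore-AzureKeyVaultKey -VaultName 'MyKeyVault' -InputFile 'C:\backup\MyKey.blob'
```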
**Azure Security Center monitoring**: Not applicable
@@ -1212,7 +1212,7 @@ Additionally, clearly mark subscriptions (for ex. production, non-prod) and crea
### 10.3: Test security response procedures
-**Guidance**: Conduct exercises to test your systemsΓÇÖ incident response capabilities on a regular cadence. Identify weak points and gaps and revise plan as needed.
+**Guidance**: Conduct exercises to test your systems' incident response capabilities on a regular cadence. Identify weak points and gaps and revise your plan as needed.
* [Refer to NIST's publication: Guide to Test, Training, and Exercise Programs for IT Plans and Capabilities](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-84.pdf)
@@ -1262,7 +1262,7 @@ Additionally, clearly mark subscriptions (for ex. production, non-prod) and crea
* [Follow the Microsoft Rules of Engagement to ensure your Penetration Tests are not in violation of Microsoft policies](https://www.microsoft.com/msrc/pentest-rules-of-engagement?rtc=1)
-* [You can find more information on MicrosoftΓÇÖs strategy and execution of Red Teaming and live site penetration testing against Microsoft-managed cloud infrastructure, services, and applications, here](https://gallery.technet.microsoft.com/Cloud-Red-Teaming-b837392e)
+* [You can find more information on Microsoft's strategy and execution of Red Teaming and live site penetration testing against Microsoft-managed cloud infrastructure, services, and applications, here](https://gallery.technet.microsoft.com/Cloud-Red-Teaming-b837392e)
**Azure Security Center monitoring**: Not applicable
data-factory https://docs.microsoft.com/en-us/azure/data-factory/self-hosted-integration-runtime-proxy-ssis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md
@@ -49,7 +49,7 @@ Finally, you download and install the latest version of the self-hosted IR, as w
### Enable Windows authentication for on-premises staging tasks
-If on-premises staging tasks on your self-hosted IR require Windows authentication, [configure your SSIS packages to use the same Windows authentication](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth?view=sql-server-ver15).
+If on-premises staging tasks on your self-hosted IR require Windows authentication, [configure your SSIS packages to use the same Windows authentication](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth).
Your on-premises staging tasks will be invoked with the self-hosted IR service account (*NT SERVICE\DIAHostService*, by default), and your data stores will be accessed with the Windows authentication account. Both accounts require certain security policies to be assigned to them. On the self-hosted IR machine, go to **Local Security Policy** > **Local Policies** > **User Rights Assignment**, and then do the following:
@@ -65,7 +65,7 @@ If you haven't already done so, create an Azure Blob Storage linked service in t
- For **Authentication method**, select **Account key**, **SAS URI**, **Service Principal**, or **Managed Identity**.

>[!TIP]
->If you select the **Service Principal** method, grant your service principal at least a *Storage Blob Data Contributor* role. For more information, see [Azure Blob Storage connector](connector-azure-blob-storage.md#linked-service-properties). If you select the **Managed Identity** method, grant your ADF managed identity proper roles to access Azure Blob Storage. For more information, see [Access Azure Blob Storage using Azure Active Directory authentication with ADF managed identity](/sql/integration-services/connection-manager/azure-storage-connection-manager?view=sql-server-ver15#managed-identities-for-azure-resources-authentication).
+>If you select the **Service Principal** method, grant your service principal at least a *Storage Blob Data Contributor* role. For more information, see [Azure Blob Storage connector](connector-azure-blob-storage.md#linked-service-properties). If you select the **Managed Identity** method, grant your ADF managed identity proper roles to access Azure Blob Storage. For more information, see [Access Azure Blob Storage using Azure Active Directory authentication with ADF managed identity](/sql/integration-services/connection-manager/azure-storage-connection-manager#managed-identities-for-azure-resources-authentication).
![Prepare the Azure Blob storage-linked service for staging](media/self-hosted-integration-runtime-proxy-ssis/shir-azure-blob-storage-linked-service.png)
@@ -127,7 +127,7 @@ Start-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
By using the latest SSDT as either the SSIS Projects extension for Visual Studio or a standalone installer, you can find a new `ConnectByProxy` property that has been added in the connection managers for supported data flow components.

* [Download the SSIS Projects extension for Visual Studio](https://marketplace.visualstudio.com/items?itemName=SSIS.SqlServerIntegrationServicesProjects)
-* [Download the standalone installer](/sql/ssdt/download-sql-server-data-tools-ssdt?view=sql-server-2017#ssdt-for-vs-2017-standalone-installer)
+* [Download the standalone installer](/sql/ssdt/download-sql-server-data-tools-ssdt#ssdt-for-vs-2017-standalone-installer)
When you design new packages containing data flow tasks with components that access data on premises, you can enable this property by setting it to *True* in the **Properties** pane of the relevant connection managers.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/ssis-azure-connect-with-windows-auth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ssis-azure-connect-with-windows-auth.md
@@ -165,4 +165,4 @@ To access a file share in Azure Files from packages running in Azure, do the fol
- Deploy your packages. For more info, see [Deploy an SSIS project to Azure with SSMS](/sql/integration-services/ssis-quickstart-deploy-ssms).
- Run your packages. For more info, see [Run SSIS packages in Azure with SSMS](/sql/integration-services/ssis-quickstart-run-ssms).
-- Schedule your packages. For more info, see [Schedule SSIS packages in Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms?view=sql-server-ver15).
+- Schedule your packages. For more info, see [Schedule SSIS packages in Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/ssis-azure-files-file-shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ssis-azure-files-file-shares.md
@@ -48,4 +48,4 @@ To use **Azure Files** when you lift and shift packages that use local file syst
- Deploy your packages. For more info, see [Deploy an SSIS project to Azure with SSMS](/sql/integration-services/ssis-quickstart-deploy-ssms).
- Run your packages. For more info, see [Run SSIS packages in Azure with SSMS](/sql/integration-services/ssis-quickstart-run-ssms).
-- Schedule your packages. For more info, see [Schedule SSIS packages in Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms?view=sql-server-ver15).
\ No newline at end of file
+- Schedule your packages. For more info, see [Schedule SSIS packages in Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).
\ No newline at end of file
data-factory https://docs.microsoft.com/en-us/azure/data-factory/ssis-integration-runtime-diagnose-connectivity-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ssis-integration-runtime-diagnose-connectivity-faq.md
@@ -101,4 +101,4 @@ Use the following sections to learn about the most common errors that occur when
- [Deploy an SSIS project to Azure with SSMS](/sql/integration-services/ssis-quickstart-deploy-ssms)
- [Run SSIS packages in Azure with SSMS](/sql/integration-services/ssis-quickstart-run-ssms)
-- [Schedule SSIS packages in Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms?view=sql-server-ver15)
\ No newline at end of file
+- [Schedule SSIS packages in Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms)
\ No newline at end of file
data-factory https://docs.microsoft.com/en-us/azure/data-factory/ssis-integration-runtime-ssis-activity-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ssis-integration-runtime-ssis-activity-faq.md
@@ -23,7 +23,7 @@ This article includes the most common errors that you might find when you're exe
Use the Azure Data Factory portal to check the output of the SSIS package execution activity. The output includes the execution result, error messages, and operation ID. For details, see [Monitor the pipeline](how-to-invoke-ssis-package-ssis-activity.md#monitor-the-pipeline).
-Use the SSIS catalog (SSISDB) to check the detail logs for the execution. For details, see [Monitor Running Packages and Other Operations](/sql/integration-services/performance/monitor-running-packages-and-other-operations?view=sql-server-2017).
+Use the SSIS catalog (SSISDB) to check the detailed logs for the execution. For details, see [Monitor Running Packages and Other Operations](/sql/integration-services/performance/monitor-running-packages-and-other-operations).
## Common errors, causes, and solutions
@@ -87,7 +87,7 @@ This error means the local disk is used up in the SSIS integration runtime node.
This error occurs when package execution can't find a file in the local disk in the SSIS integration runtime. Try these actions:
* Don't use the absolute path in the package that's being executed in the SSIS integration runtime. Use the current execution working directory (.) or the temp folder (%TEMP%) instead.
* If you need to persist some files on SSIS integration runtime nodes, prepare the files as described in [Customize setup](how-to-configure-azure-ssis-ir-custom-setup.md). All the files in the working directory will be cleaned up after the execution is finished.
-* Use Azure Files instead of storing the file in the SSIS integration runtime node. For details, see [Use Azure file shares](/sql/integration-services/lift-shift/ssis-azure-files-file-shares?view=sql-server-2017#use-azure-file-shares).
+* Use Azure Files instead of storing the file in the SSIS integration runtime node. For details, see [Use Azure file shares](/sql/integration-services/lift-shift/ssis-azure-files-file-shares#use-azure-file-shares).
### Error message: "The database 'SSISDB' has reached its size quota"
@@ -151,7 +151,7 @@ One potential cause is your Self-Hosted integration runtime is not installed or
* Potential cause & recommended action:
  * If there is also a warning message "The component does not support using connection manager with ConnectByProxy value setting true" in the execution log, this means a connection manager is used by a component that doesn't yet support "ConnectByProxy". The supported components can be found at [Configure Self-Hosted IR as a proxy for Azure-SSIS IR in ADF](self-hosted-integration-runtime-proxy-ssis.md#enable-ssis-packages-to-connect-by-proxy)
- * Execution log can be found in [SSMS report](/sql/integration-services/performance/monitor-running-packages-and-other-operations?view=sql-server-2017#reports) or in the log folder you specified in SSIS package execution activity.
+ * Execution log can be found in [SSMS report](/sql/integration-services/performance/monitor-running-packages-and-other-operations#reports) or in the log folder you specified in SSIS package execution activity.
* vNet can also be used to access on-premises data as an alternative. More detail can be found at [Join an Azure-SSIS integration runtime to a virtual network](join-azure-ssis-integration-runtime-virtual-network.md)

### Error message: "Staging task status: Failed. Staging task error: ErrorCode: 2906, ErrorMessage: Package execution failed., Output: {"OperationErrorMessages": "SSIS Executor exit code: -1.\n", "LogLocation": "...\\SSISTelemetry\\ExecutionLog\\...", "effectiveIntegrationRuntime": "...", "executionDuration": ..., "durationInQueue": { "integrationRuntimeQueue": ... }}"
data-factory https://docs.microsoft.com/en-us/azure/data-factory/tutorial-deploy-ssis-packages-azure-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-deploy-ssis-packages-azure-powershell.md
@@ -25,7 +25,7 @@ This tutorial provides steps for using PowerShell to provision an Azure-SQL Serv
- Running packages deployed into SSIS catalog (SSISDB) hosted by Azure SQL Database server/Managed Instance (Project Deployment Model)
- Running packages deployed into file system, Azure Files, or SQL Server database (MSDB) hosted by Azure SQL Managed Instance (Package Deployment Model)
-After an Azure-SSIS IR is provisioned, you can use familiar tools to deploy and run your packages in Azure. These tools are already Azure-enabled and include SQL Server Data Tools (SSDT), SQL Server Management Studio (SSMS), and command-line utilities like [dtutil](/sql/integration-services/dtutil-utility?view=sql-server-2017) and [AzureDTExec](./how-to-invoke-ssis-package-azure-enabled-dtexec.md).
+After an Azure-SSIS IR is provisioned, you can use familiar tools to deploy and run your packages in Azure. These tools are already Azure-enabled and include SQL Server Data Tools (SSDT), SQL Server Management Studio (SSMS), and command-line utilities like [dtutil](/sql/integration-services/dtutil-utility) and [AzureDTExec](./how-to-invoke-ssis-package-azure-enabled-dtexec.md).
For conceptual information on Azure-SSIS IRs, see [Azure-SSIS integration runtime overview](concepts-integration-runtime.md#azure-ssis-integration-runtime).
@@ -590,9 +590,9 @@ If you use SSISDB, you can deploy your packages into it and run them on your Azu
- For a managed instance with private endpoint, the server endpoint format is `<server name>.<dns prefix>.database.windows.net`.
- For a managed instance with public endpoint, the server endpoint format is `<server name>.public.<dns prefix>.database.windows.net,3342`.
-If you don't use SSISDB, you can deploy your packages into file system, Azure Files, or MSDB hosted by your Azure SQL Managed Instance and run them on your Azure-SSIS IR by using [dtutil](/sql/integration-services/dtutil-utility?view=sql-server-2017) and [AzureDTExec](./how-to-invoke-ssis-package-azure-enabled-dtexec.md) command-line utilities.
+If you don't use SSISDB, you can deploy your packages into file system, Azure Files, or MSDB hosted by your Azure SQL Managed Instance and run them on your Azure-SSIS IR by using [dtutil](/sql/integration-services/dtutil-utility) and [AzureDTExec](./how-to-invoke-ssis-package-azure-enabled-dtexec.md) command-line utilities.
-For more information, see [Deploy SSIS projects/packages](/sql/integration-services/packages/deploy-integration-services-ssis-projects-and-packages?view=sql-server-ver15).
+For more information, see [Deploy SSIS projects/packages](/sql/integration-services/packages/deploy-integration-services-ssis-projects-and-packages).
In both cases, you can also run your deployed packages on Azure-SSIS IR by using the Execute SSIS Package activity in Data Factory pipelines. For more information, see [Invoke SSIS package execution as a first-class Data Factory activity](./how-to-invoke-ssis-package-ssis-activity.md).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/tutorial-deploy-ssis-packages-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-deploy-ssis-packages-azure.md
@@ -25,7 +25,7 @@ This tutorial provides steps for using the Azure portal to provision an Azure-SQ
- Running packages deployed into SSIS catalog (SSISDB) hosted by Azure SQL Database server/Managed Instance (Project Deployment Model)
- Running packages deployed into file system, Azure Files, or SQL Server database (MSDB) hosted by Azure SQL Managed Instance (Package Deployment Model)
-After an Azure-SSIS IR is provisioned, you can use familiar tools to deploy and run your packages in Azure. These tools are already Azure-enabled and include SQL Server Data Tools (SSDT), SQL Server Management Studio (SSMS), and command-line utilities like [dtutil](/sql/integration-services/dtutil-utility?view=sql-server-2017) and [AzureDTExec](./how-to-invoke-ssis-package-azure-enabled-dtexec.md).
+After an Azure-SSIS IR is provisioned, you can use familiar tools to deploy and run your packages in Azure. These tools are already Azure-enabled and include SQL Server Data Tools (SSDT), SQL Server Management Studio (SSMS), and command-line utilities like [dtutil](/sql/integration-services/dtutil-utility) and [AzureDTExec](./how-to-invoke-ssis-package-azure-enabled-dtexec.md).
For conceptual information on Azure-SSIS IRs, see [Azure-SSIS integration runtime overview](concepts-integration-runtime.md#azure-ssis-integration-runtime).
@@ -160,7 +160,7 @@ Select **Test connection** when applicable and if it's successful, select **Next
On the **Deployment settings** page of **Integration runtime setup** pane, if you want to manage your packages that are deployed into MSDB, file system, or Azure Files (Package Deployment Model) with Azure-SSIS IR package stores, select the **Create package stores to manage your packages that are deployed into file system/Azure Files/SQL Server database (MSDB) hosted by Azure SQL Managed Instance** check box.
-Azure-SSIS IR package store allows you to import/export/delete/run packages and monitor/stop running packages via SSMS similar to the [legacy SSIS package store](/sql/integration-services/service/package-management-ssis-service?view=sql-server-2017). For more information, see [Manage SSIS packages with Azure-SSIS IR package stores](./azure-ssis-integration-runtime-package-store.md).
+Azure-SSIS IR package store allows you to import/export/delete/run packages and monitor/stop running packages via SSMS similar to the [legacy SSIS package store](/sql/integration-services/service/package-management-ssis-service). For more information, see [Manage SSIS packages with Azure-SSIS IR package stores](./azure-ssis-integration-runtime-package-store.md).
If you select this check box, you can add multiple package stores to your Azure-SSIS IR by selecting **New**. Conversely, one package store can be shared by multiple Azure-SSIS IRs.
@@ -262,9 +262,9 @@ If you use SSISDB, you can deploy your packages into it and run them on your Azu
- For a managed instance with private endpoint, the server endpoint format is `<server name>.<dns prefix>.database.windows.net`.
- For a managed instance with public endpoint, the server endpoint format is `<server name>.public.<dns prefix>.database.windows.net,3342`.
-If you don't use SSISDB, you can deploy your packages into file system, Azure Files, or MSDB hosted by your Azure SQL Managed Instance and run them on your Azure-SSIS IR by using [dtutil](/sql/integration-services/dtutil-utility?view=sql-server-2017) and [AzureDTExec](./how-to-invoke-ssis-package-azure-enabled-dtexec.md) command-line utilities.
+If you don't use SSISDB, you can deploy your packages into file system, Azure Files, or MSDB hosted by your Azure SQL Managed Instance and run them on your Azure-SSIS IR by using [dtutil](/sql/integration-services/dtutil-utility) and [AzureDTExec](./how-to-invoke-ssis-package-azure-enabled-dtexec.md) command-line utilities.
-For more information, see [Deploy SSIS projects/packages](/sql/integration-services/packages/deploy-integration-services-ssis-projects-and-packages?view=sql-server-ver15).
+For more information, see [Deploy SSIS projects/packages](/sql/integration-services/packages/deploy-integration-services-ssis-projects-and-packages).
In both cases, you can also run your deployed packages on Azure-SSIS IR by using the Execute SSIS Package activity in Data Factory pipelines. For more information, see [Invoke SSIS package execution as a first-class Data Factory activity](./how-to-invoke-ssis-package-ssis-activity.md).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/tutorial-incremental-copy-change-data-capture-feature-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-change-data-capture-feature-portal.md
@@ -70,7 +70,7 @@ If you don't have an Azure subscription, create a [free](https://azure.microsoft
> [!NOTE]
> - Replace &lt;your source schema name&gt; with the schema of your Azure SQL MI that has the customers table.
- > - Change data capture doesn't do anything as part of the transactions that change the table being tracked. Instead, the insert, update, and delete operations are written to the transaction log. Data that is deposited in change tables will grow unmanageably if you do not periodically and systematically prune the data. For more information, see [Enable Change Data Capture for a database](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server?enable-change-data-capture-for-a-database=&view=sql-server-ver15)
+ > - Change data capture doesn't do anything as part of the transactions that change the table being tracked. Instead, the insert, update, and delete operations are written to the transaction log. Data that is deposited in change tables will grow unmanageably if you do not periodically and systematically prune the data. For more information, see [Enable Change Data Capture for a database](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server#enable-change-data-capture-for-a-database)
```sql
EXEC sys.sp_cdc_enable_db
@@ -188,7 +188,7 @@ In this step, you create a dataset to represent the source data.
3. In the **Set properties** tab, set the dataset name and connection information:

   1. Select **AzureSqlMI1** for **Linked service**.
- 2. Select **[dbo].[dbo_customers_CT]** for **Table name**. Note: this table was automatically created when CDC was enabled on the customers table. Changed data is never queried from this table directly but is instead extracted through the [CDC functions](/sql/relational-databases/system-functions/change-data-capture-functions-transact-sql?view=sql-server-ver15).
+ 2. Select **[dbo].[dbo_customers_CT]** for **Table name**. Note: this table was automatically created when CDC was enabled on the customers table. Changed data is never queried from this table directly but is instead extracted through the [CDC functions](/sql/relational-databases/system-functions/change-data-capture-functions-transact-sql).
![Source connection](./media/tutorial-incremental-copy-change-data-capture-feature-portal/source-dataset-configuration.png)
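Because changed data is extracted through the CDC functions rather than by reading the change table directly, it can help to see the query pattern those functions follow. The following is a minimal sketch, assuming a capture instance named `dbo_customers`, placeholder server and database names, and that the SqlServer PowerShell module is installed.

```powershell
# Sketch only: read all changes for the dbo_customers capture instance via the CDC functions.
$query = @"
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_customers');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_customers(@from_lsn, @to_lsn, N'all');
"@
Invoke-Sqlcmd -ServerInstance '<your server>.public.<dns prefix>.database.windows.net,3342' `
    -Database '<your database>' -Query $query
```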
@@ -408,4 +408,4 @@ You see the second file in the `customers/incremental/YYYY/MM/DD` folder of the
Advance to the following tutorial to learn about copying new and changed files only based on their LastModifiedDate:

> [!div class="nextstepaction"]
->[Copy new files by lastmodifieddate](tutorial-incremental-copy-lastmodified-copy-data-tool.md)
\ No newline at end of file
+>[Copy new files by lastmodifieddate](tutorial-incremental-copy-lastmodified-copy-data-tool.md)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-build-your-first-pipeline-using-editor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-editor.md
@@ -26,7 +26,7 @@ ms.date: 01/22/2018
> This article applies to version 1 of Azure Data Factory, which is generally available. If you use the current version of the Data Factory service, see [Quickstart: Create a data factory by using Data Factory](../quickstart-create-data-factory-dot-net.md).

> [!WARNING]
-> The JSON editor in Azure Portal for authoring & deploying ADF v1 pipelines will be turned OFF on 31st July 2019. After 31st July 2019, you can continue to use [ADF v1 Powershell cmdlets](/powershell/module/az.datafactory/?view=azps-2.4.0&viewFallbackFrom=azps-2.3.2), [ADF v1 .Net SDK](/dotnet/api/microsoft.azure.management.datafactories.models?view=azure-dotnet), [ADF v1 REST APIs](/rest/api/datafactory/) to author & deploy your ADF v1 pipelines.
+> The JSON editor in Azure Portal for authoring & deploying ADF v1 pipelines will be turned OFF on 31st July 2019. After 31st July 2019, you can continue to use [ADF v1 Powershell cmdlets](/powershell/module/az.datafactory/), [ADF v1 .Net SDK](/dotnet/api/microsoft.azure.management.datafactories.models), [ADF v1 REST APIs](/rest/api/datafactory/) to author & deploy your ADF v1 pipelines.
In this article, you learn how to use the [Azure portal](https://portal.azure.com/) to create your first data factory. To do the tutorial by using other tools/SDKs, select one of the options from the drop-down list.
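As a rough sketch of the PowerShell path after the editor retires, the Az.DataFactory v1 cmdlets can deploy the same JSON definitions you author in this article; the factory name and file paths below are placeholders, and this is illustrative rather than a complete walkthrough.

```powershell
# Sketch only: deploy ADF v1 artifacts from local JSON definition files.
Connect-AzAccount
New-AzDataFactoryLinkedService -ResourceGroupName 'ADFTutorialResourceGroup' `
    -DataFactoryName 'GetStartedDF' -File '.\AzureStorageLinkedService.json'
New-AzDataFactoryPipeline -ResourceGroupName 'ADFTutorialResourceGroup' `
    -DataFactoryName 'GetStartedDF' -File '.\MyFirstPipeline.json'
```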
@@ -124,21 +124,21 @@ In this step, you link an on-demand HDInsight cluster to your data factory. The
1. Copy and paste the following snippet to the Draft-1 window. The JSON snippet describes the properties that are used to create the HDInsight cluster on demand.
- ```JSON
+ ```JSON
{ "name": "HDInsightOnDemandLinkedService", "properties": { "type": "HDInsightOnDemand", "typeProperties": {
- "version": "3.5",
+ "version": "3.5",
"clusterSize": 1,
- "timeToLive": "00:05:00",
+ "timeToLive": "00:05:00",
"osType": "Linux",
- "linkedServiceName": "AzureStorageLinkedService"
+ "linkedServiceName": "AzureStorageLinkedService"
} } }
- ```
+ ```
The following table provides descriptions for the JSON properties used in the snippet.
@@ -178,7 +178,7 @@ In this step, you create datasets to represent the input and output data for Hiv
1. Copy and paste the following snippet to the Draft-1 window. In the JSON snippet, you create a dataset called **AzureBlobInput** that represents input data for an activity in the pipeline. In addition, you specify that the input data is in the blob container called **adfgetstarted** and the folder called **inputdata**.
- ```JSON
+ ```JSON
{ "name": "AzureBlobInput", "properties": {
@@ -200,7 +200,7 @@ In this step, you create datasets to represent the input and output data for Hiv
"policy": {} } }
- ```
+ ```
The following table provides descriptions for the JSON properties used in the snippet. | Property | Nested under | Description |
@@ -225,7 +225,7 @@ Now, you create the output dataset to represent the output data stored in the bl
1. Copy and paste the following snippet to the Draft-1 window. In the JSON snippet, you create a dataset called **AzureBlobOutput** to specify the structure of the data that is produced by the Hive script. You also specify that the results are stored in the blob container called **adfgetstarted** and the folder called **partitioneddata**. The **availability** section specifies that the output dataset is produced monthly.
- ```JSON
+ ```JSON
{ "name": "AzureBlobOutput", "properties": {
@@ -244,7 +244,7 @@ Now, you create the output dataset to represent the output data stored in the bl
} } }
- ```
+ ```
For descriptions of these properties, see the section "Create the input dataset." You do not set the external property on an output dataset because the dataset is produced by the Data Factory service. 1. Select **Deploy** on the command bar to deploy the newly created dataset.
@@ -267,50 +267,50 @@ In this step, you create your first pipeline with an HDInsight Hive activity. Th
> >
- ```JSON
- {
- "name": "MyFirstPipeline",
- "properties": {
- "description": "My first Azure Data Factory pipeline",
- "activities": [
- {
- "type": "HDInsightHive",
- "typeProperties": {
- "scriptPath": "adfgetstarted/script/partitionweblogs.hql",
- "scriptLinkedService": "AzureStorageLinkedService",
- "defines": {
- "inputtable": "wasb://adfgetstarted@<storageaccountname>.blob.core.windows.net/inputdata",
- "partitionedtable": "wasb://adfgetstarted@<storageaccountname>.blob.core.windows.net/partitioneddata"
- }
- },
- "inputs": [
- {
- "name": "AzureBlobInput"
- }
- ],
- "outputs": [
- {
- "name": "AzureBlobOutput"
- }
- ],
- "policy": {
- "concurrency": 1,
- "retry": 3
- },
- "scheduler": {
- "frequency": "Month",
- "interval": 1
- },
- "name": "RunSampleHiveActivity",
- "linkedServiceName": "HDInsightOnDemandLinkedService"
- }
- ],
- "start": "2017-07-01T00:00:00Z",
- "end": "2017-07-02T00:00:00Z",
- "isPaused": false
- }
- }
- ```
+ ```JSON
+ {
+ "name": "MyFirstPipeline",
+ "properties": {
+ "description": "My first Azure Data Factory pipeline",
+ "activities": [
+ {
+ "type": "HDInsightHive",
+ "typeProperties": {
+ "scriptPath": "adfgetstarted/script/partitionweblogs.hql",
+ "scriptLinkedService": "AzureStorageLinkedService",
+ "defines": {
+ "inputtable": "wasb://adfgetstarted@<storageaccountname>.blob.core.windows.net/inputdata",
+ "partitionedtable": "wasb://adfgetstarted@<storageaccountname>.blob.core.windows.net/partitioneddata"
+ }
+ },
+ "inputs": [
+ {
+ "name": "AzureBlobInput"
+ }
+ ],
+ "outputs": [
+ {
+ "name": "AzureBlobOutput"
+ }
+ ],
+ "policy": {
+ "concurrency": 1,
+ "retry": 3
+ },
+ "scheduler": {
+ "frequency": "Month",
+ "interval": 1
+ },
+ "name": "RunSampleHiveActivity",
+ "linkedServiceName": "HDInsightOnDemandLinkedService"
+ }
+ ],
+ "start": "2017-07-01T00:00:00Z",
+ "end": "2017-07-02T00:00:00Z",
+ "isPaused": false
+ }
+ }
+ ```
In the JSON snippet, you create a pipeline that consists of a single activity that uses Hive to process data on an HDInsight cluster.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-copy-activity-tutorial-using-dotnet-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-dotnet-api.md
@@ -36,7 +36,7 @@ In this tutorial, you create a pipeline with one activity in it: Copy Activity.
A pipeline can have more than one activity. And, you can chain two activities (run one activity after another) by setting the output dataset of one activity as the input dataset of the other activity. For more information, see [multiple activities in a pipeline](data-factory-scheduling-and-execution.md#multiple-activities-in-a-pipeline). > [!NOTE]
-> For complete documentation on .NET API for Data Factory, see [Data Factory .NET API Reference](/dotnet/api/index?view=azuremgmtdatafactories-4.12.1).
+> For complete documentation on .NET API for Data Factory, see [Data Factory .NET API Reference](/dotnet/api/overview/azure/data-factory).
> > The data pipeline in this tutorial copies data from a source data store to a destination data store. For a tutorial on how to transform data using Azure Data Factory, see [Tutorial: Build a pipeline to transform data using Hadoop cluster](data-factory-build-your-first-pipeline.md).
@@ -516,7 +516,7 @@ You should have following four values from these steps:
20. Verify that the two employee records are created in the **emp** table in the specified database.

## Next steps
-For complete documentation on .NET API for Data Factory, see [Data Factory .NET API Reference](/dotnet/api/index?view=azuremgmtdatafactories-4.12.1).
+For complete documentation on .NET API for Data Factory, see [Data Factory .NET API Reference](/dotnet/api/overview/azure/data-factory).
In this tutorial, you used Azure blob storage as a source data store and Azure SQL Database as a destination data store in a copy operation. The following table provides a list of data stores supported as sources and destinations by the copy activity:
data-factory https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-create-data-factories-programmatically https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-create-data-factories-programmatically.md
@@ -23,7 +23,7 @@ ms.custom: devx-track-csharp
You can create, monitor, and manage Azure data factories programmatically using Data Factory .NET SDK. This article contains a walkthrough that you can follow to create a sample .NET console application that creates and monitors a data factory. > [!NOTE]
-> This article does not cover all the Data Factory .NET API. See [Data Factory .NET API Reference](/dotnet/api/index?view=azuremgmtdatafactories-4.12.1) for comprehensive documentation on .NET API for Data Factory.
+> This article does not cover all the Data Factory .NET API. See [Data Factory .NET API Reference](/dotnet/api/overview/azure/data-factory) for comprehensive documentation on .NET API for Data Factory.
## Prerequisites
@@ -39,58 +39,58 @@ Create an Azure Active Directory application, create a service principal for the
1. Launch **PowerShell**.
2. Run the following command and enter the user name and password that you use to sign in to the Azure portal.
- ```powershell
- Connect-AzAccount
- ```
+ ```powershell
+ Connect-AzAccount
+ ```
3. Run the following command to view all the subscriptions for this account.
- ```powershell
- Get-AzSubscription
- ```
+ ```powershell
+ Get-AzSubscription
+ ```
4. Run the following command to select the subscription that you want to work with. Replace **&lt;NameOfAzureSubscription&gt;** with the name of your Azure subscription.
- ```powershell
- Get-AzSubscription -SubscriptionName <NameOfAzureSubscription> | Set-AzContext
- ```
+ ```powershell
+ Get-AzSubscription -SubscriptionName <NameOfAzureSubscription> | Set-AzContext
+ ```
> [!IMPORTANT]
> Note down **SubscriptionId** and **TenantId** from the output of this command.

5. Create an Azure resource group named **ADFTutorialResourceGroup** by running the following command in PowerShell.
- ```powershell
- New-AzResourceGroup -Name ADFTutorialResourceGroup -Location "West US"
- ```
+ ```powershell
+ New-AzResourceGroup -Name ADFTutorialResourceGroup -Location "West US"
+ ```
If the resource group already exists, you specify whether to update it (Y) or keep it as is (N). If you use a different resource group, you need to use the name of your resource group in place of ADFTutorialResourceGroup in this tutorial.

6. Create an Azure Active Directory application.
- ```powershell
- $azureAdApplication = New-AzADApplication -DisplayName "ADFDotNetWalkthroughApp" -HomePage "https://www.contoso.org" -IdentifierUris "https://www.adfdotnetwalkthroughapp.org/example" -Password "Pass@word1"
- ```
+ ```powershell
+ $azureAdApplication = New-AzADApplication -DisplayName "ADFDotNetWalkthroughApp" -HomePage "https://www.contoso.org" -IdentifierUris "https://www.adfdotnetwalkthroughapp.org/example" -Password "Pass@word1"
+ ```
If you get the following error, specify a different URL and run the command again.
-
- ```powershell
- Another object with the same value for property identifierUris already exists.
- ```
+
+ ```powershell
+ Another object with the same value for property identifierUris already exists.
+ ```
7. Create the AD service principal.
- ```powershell
+ ```powershell
New-AzADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId
- ```
+ ```
8. Add the service principal to the **Data Factory Contributor** role.
- ```powershell
+ ```powershell
New-AzRoleAssignment -RoleDefinitionName "Data Factory Contributor" -ServicePrincipalName $azureAdApplication.ApplicationId.Guid
- ```
+ ```
9. Get the application ID.
- ```powershell
- $azureAdApplication
- ```
+ ```powershell
+ $azureAdApplication
+ ```
Note down the application ID (applicationID) from the output. You should have the following four values from these steps:
@@ -137,7 +137,7 @@ The Copy Activity performs the data movement in Azure Data Factory. The activity
5. In the App.Config file, update values for **&lt;Application ID&gt;**, **&lt;Password&gt;**, **&lt;Subscription ID&gt;**, and **&lt;tenant ID&gt;** with your own values.
6. Add the following **using** statements to the **Program.cs** file in the project.
- ```csharp
+ ```csharp
using System.Configuration; using System.Collections.ObjectModel; using System.Threading;
@@ -150,17 +150,17 @@ The Copy Activity performs the data movement in Azure Data Factory. The activity
using Microsoft.IdentityModel.Clients.ActiveDirectory;
- ```
+ ```
6. Add the following code that creates an instance of **DataPipelineManagementClient** class to the **Main** method. You use this object to create a data factory, a linked service, input and output datasets, and a pipeline. You also use this object to monitor slices of a dataset at runtime.

```csharp
// create data factory management client
- //IMPORTANT: specify the name of Azure resource group here
+ //IMPORTANT: specify the name of Azure resource group here
string resourceGroupName = "ADFTutorialResourceGroup";
- //IMPORTANT: the name of the data factory must be globally unique.
- // Therefore, update this value. For example:APITutorialFactory05122017
+ //IMPORTANT: the name of the data factory must be globally unique.
+ // Therefore, update this value. For example:APITutorialFactory05122017
string dataFactoryName = "APITutorialFactory"; TokenCloudCredentials aadTokenCredentials = new TokenCloudCredentials(
@@ -221,207 +221,207 @@ The Copy Activity performs the data movement in Azure Data Factory. The activity
The FolderPath for the output blob is set to: **adftutorial/apifactoryoutput/{Slice}** where **Slice** is dynamically calculated based on the value of **SliceStart** (start date-time of each slice).
- ```csharp
- // create input and output datasets
- Console.WriteLine("Creating input and output datasets");
- string Dataset_Source = "DatasetBlobSource";
- string Dataset_Destination = "DatasetBlobDestination";
-
- client.Datasets.CreateOrUpdate(resourceGroupName, dataFactoryName,
- new DatasetCreateOrUpdateParameters()
- {
- Dataset = new Dataset()
- {
- Name = Dataset_Source,
- Properties = new DatasetProperties()
- {
- LinkedServiceName = "AzureStorageLinkedService",
- TypeProperties = new AzureBlobDataset()
- {
- FolderPath = "adftutorial/",
- FileName = "emp.txt"
- },
- External = true,
- Availability = new Availability()
- {
- Frequency = SchedulePeriod.Hour,
- Interval = 1,
- },
-
- Policy = new Policy()
- {
- Validation = new ValidationPolicy()
- {
- MinimumRows = 1
- }
- }
- }
- }
- });
-
- client.Datasets.CreateOrUpdate(resourceGroupName, dataFactoryName,
- new DatasetCreateOrUpdateParameters()
- {
- Dataset = new Dataset()
- {
- Name = Dataset_Destination,
- Properties = new DatasetProperties()
- {
-
- LinkedServiceName = "AzureStorageLinkedService",
- TypeProperties = new AzureBlobDataset()
- {
- FolderPath = "adftutorial/apifactoryoutput/{Slice}",
- PartitionedBy = new Collection<Partition>()
- {
- new Partition()
- {
- Name = "Slice",
- Value = new DateTimePartitionValue()
- {
- Date = "SliceStart",
- Format = "yyyyMMdd-HH"
- }
- }
- }
- },
-
- Availability = new Availability()
- {
- Frequency = SchedulePeriod.Hour,
- Interval = 1,
- },
- }
- }
- });
- ```
+ ```csharp
+ // create input and output datasets
+ Console.WriteLine("Creating input and output datasets");
+ string Dataset_Source = "DatasetBlobSource";
+ string Dataset_Destination = "DatasetBlobDestination";
+
+ client.Datasets.CreateOrUpdate(resourceGroupName, dataFactoryName,
+ new DatasetCreateOrUpdateParameters()
+ {
+ Dataset = new Dataset()
+ {
+ Name = Dataset_Source,
+ Properties = new DatasetProperties()
+ {
+ LinkedServiceName = "AzureStorageLinkedService",
+ TypeProperties = new AzureBlobDataset()
+ {
+ FolderPath = "adftutorial/",
+ FileName = "emp.txt"
+ },
+ External = true,
+ Availability = new Availability()
+ {
+ Frequency = SchedulePeriod.Hour,
+ Interval = 1,
+ },
+
+ Policy = new Policy()
+ {
+ Validation = new ValidationPolicy()
+ {
+ MinimumRows = 1
+ }
+ }
+ }
+ }
+ });
+
+ client.Datasets.CreateOrUpdate(resourceGroupName, dataFactoryName,
+ new DatasetCreateOrUpdateParameters()
+ {
+ Dataset = new Dataset()
+ {
+ Name = Dataset_Destination,
+ Properties = new DatasetProperties()
+ {
+
+ LinkedServiceName = "AzureStorageLinkedService",
+ TypeProperties = new AzureBlobDataset()
+ {
+ FolderPath = "adftutorial/apifactoryoutput/{Slice}",
+ PartitionedBy = new Collection<Partition>()
+ {
+ new Partition()
+ {
+ Name = "Slice",
+ Value = new DateTimePartitionValue()
+ {
+ Date = "SliceStart",
+ Format = "yyyyMMdd-HH"
+ }
+ }
+ }
+ },
+
+ Availability = new Availability()
+ {
+ Frequency = SchedulePeriod.Hour,
+ Interval = 1,
+ },
+ }
+ }
+ });
+ ```
10. Add the following code that **creates and activates a pipeline** to the **Main** method. This pipeline has a **CopyActivity** that takes **BlobSource** as a source and **BlobSink** as a sink. The Copy Activity performs the data movement in Azure Data Factory. The activity is powered by a globally available service that can copy data between various data stores in a secure, reliable, and scalable way. See [Data Movement Activities](data-factory-data-movement-activities.md) article for details about the Copy Activity.
- ```csharp
- // create a pipeline
- Console.WriteLine("Creating a pipeline");
- DateTime PipelineActivePeriodStartTime = new DateTime(2014, 8, 9, 0, 0, 0, 0, DateTimeKind.Utc);
- DateTime PipelineActivePeriodEndTime = PipelineActivePeriodStartTime.AddMinutes(60);
- string PipelineName = "PipelineBlobSample";
-
- client.Pipelines.CreateOrUpdate(resourceGroupName, dataFactoryName,
- new PipelineCreateOrUpdateParameters()
- {
- Pipeline = new Pipeline()
- {
- Name = PipelineName,
- Properties = new PipelineProperties()
- {
- Description = "Demo Pipeline for data transfer between blobs",
-
- // Initial value for pipeline's active period. With this, you won't need to set slice status
- Start = PipelineActivePeriodStartTime,
- End = PipelineActivePeriodEndTime,
-
- Activities = new List<Activity>()
- {
- new Activity()
- {
- Name = "BlobToBlob",
- Inputs = new List<ActivityInput>()
- {
- new ActivityInput()
- {
- Name = Dataset_Source
- }
- },
- Outputs = new List<ActivityOutput>()
- {
- new ActivityOutput()
- {
- Name = Dataset_Destination
- }
- },
- TypeProperties = new CopyActivity()
- {
- Source = new BlobSource(),
- Sink = new BlobSink()
- {
- WriteBatchSize = 10000,
- WriteBatchTimeout = TimeSpan.FromMinutes(10)
- }
- }
- }
-
- },
- }
- }
- });
- ```
+ ```csharp
+ // create a pipeline
+ Console.WriteLine("Creating a pipeline");
+ DateTime PipelineActivePeriodStartTime = new DateTime(2014, 8, 9, 0, 0, 0, 0, DateTimeKind.Utc);
+ DateTime PipelineActivePeriodEndTime = PipelineActivePeriodStartTime.AddMinutes(60);
+ string PipelineName = "PipelineBlobSample";
+
+ client.Pipelines.CreateOrUpdate(resourceGroupName, dataFactoryName,
+ new PipelineCreateOrUpdateParameters()
+ {
+ Pipeline = new Pipeline()
+ {
+ Name = PipelineName,
+ Properties = new PipelineProperties()
+ {
+ Description = "Demo Pipeline for data transfer between blobs",
+
+ // Initial value for pipeline's active period. With this, you won't need to set slice status
+ Start = PipelineActivePeriodStartTime,
+ End = PipelineActivePeriodEndTime,
+
+ Activities = new List<Activity>()
+ {
+ new Activity()
+ {
+ Name = "BlobToBlob",
+ Inputs = new List<ActivityInput>()
+ {
+ new ActivityInput()
+ {
+ Name = Dataset_Source
+ }
+ },
+ Outputs = new List<ActivityOutput>()
+ {
+ new ActivityOutput()
+ {
+ Name = Dataset_Destination
+ }
+ },
+ TypeProperties = new CopyActivity()
+ {
+ Source = new BlobSource(),
+ Sink = new BlobSink()
+ {
+ WriteBatchSize = 10000,
+ WriteBatchTimeout = TimeSpan.FromMinutes(10)
+ }
+ }
+ }
+
+ },
+ }
+ }
+ });
+ ```
12. Add the following code to the **Main** method to get the status of a data slice of the output dataset. There is only one slice expected in this sample.
- ```csharp
- // Pulling status within a timeout threshold
- DateTime start = DateTime.Now;
- bool done = false;
-
- while (DateTime.Now - start < TimeSpan.FromMinutes(5) && !done)
- {
- Console.WriteLine("Pulling the slice status");
- // wait before the next status check
- Thread.Sleep(1000 * 12);
-
- var datalistResponse = client.DataSlices.List(resourceGroupName, dataFactoryName, Dataset_Destination,
- new DataSliceListParameters()
- {
- DataSliceRangeStartTime = PipelineActivePeriodStartTime.ConvertToISO8601DateTimeString(),
- DataSliceRangeEndTime = PipelineActivePeriodEndTime.ConvertToISO8601DateTimeString()
- });
-
- foreach (DataSlice slice in datalistResponse.DataSlices)
- {
- if (slice.State == DataSliceState.Failed || slice.State == DataSliceState.Ready)
- {
- Console.WriteLine("Slice execution is done with status: {0}", slice.State);
- done = true;
- break;
- }
- else
- {
- Console.WriteLine("Slice status is: {0}", slice.State);
- }
- }
- }
- ```
+ ```csharp
+ // Pulling status within a timeout threshold
+ DateTime start = DateTime.Now;
+ bool done = false;
+
+ while (DateTime.Now - start < TimeSpan.FromMinutes(5) && !done)
+ {
+ Console.WriteLine("Pulling the slice status");
+ // wait before the next status check
+ Thread.Sleep(1000 * 12);
+
+ var datalistResponse = client.DataSlices.List(resourceGroupName, dataFactoryName, Dataset_Destination,
+ new DataSliceListParameters()
+ {
+ DataSliceRangeStartTime = PipelineActivePeriodStartTime.ConvertToISO8601DateTimeString(),
+ DataSliceRangeEndTime = PipelineActivePeriodEndTime.ConvertToISO8601DateTimeString()
+ });
+
+ foreach (DataSlice slice in datalistResponse.DataSlices)
+ {
+ if (slice.State == DataSliceState.Failed || slice.State == DataSliceState.Ready)
+ {
+ Console.WriteLine("Slice execution is done with status: {0}", slice.State);
+ done = true;
+ break;
+ }
+ else
+ {
+ Console.WriteLine("Slice status is: {0}", slice.State);
+ }
+ }
+ }
+ ```
13. **(optional)** Add the following code to get run details for a data slice to the **Main** method.
- ```csharp
- Console.WriteLine("Getting run details of a data slice");
-
- // give it a few minutes for the output slice to be ready
- Console.WriteLine("\nGive it a few minutes for the output slice to be ready and press any key.");
- Console.ReadKey();
-
- var datasliceRunListResponse = client.DataSliceRuns.List(
- resourceGroupName,
- dataFactoryName,
- Dataset_Destination,
- new DataSliceRunListParameters()
- {
- DataSliceStartTime = PipelineActivePeriodStartTime.ConvertToISO8601DateTimeString()
- });
-
- foreach (DataSliceRun run in datasliceRunListResponse.DataSliceRuns)
- {
- Console.WriteLine("Status: \t\t{0}", run.Status);
- Console.WriteLine("DataSliceStart: \t{0}", run.DataSliceStart);
- Console.WriteLine("DataSliceEnd: \t\t{0}", run.DataSliceEnd);
- Console.WriteLine("ActivityId: \t\t{0}", run.ActivityName);
- Console.WriteLine("ProcessingStartTime: \t{0}", run.ProcessingStartTime);
- Console.WriteLine("ProcessingEndTime: \t{0}", run.ProcessingEndTime);
- Console.WriteLine("ErrorMessage: \t{0}", run.ErrorMessage);
- }
-
- Console.WriteLine("\nPress any key to exit.");
- Console.ReadKey();
- ```
+ ```csharp
+ Console.WriteLine("Getting run details of a data slice");
+
+ // give it a few minutes for the output slice to be ready
+ Console.WriteLine("\nGive it a few minutes for the output slice to be ready and press any key.");
+ Console.ReadKey();
+
+ var datasliceRunListResponse = client.DataSliceRuns.List(
+ resourceGroupName,
+ dataFactoryName,
+ Dataset_Destination,
+ new DataSliceRunListParameters()
+ {
+ DataSliceStartTime = PipelineActivePeriodStartTime.ConvertToISO8601DateTimeString()
+ });
+
+ foreach (DataSliceRun run in datasliceRunListResponse.DataSliceRuns)
+ {
+ Console.WriteLine("Status: \t\t{0}", run.Status);
+ Console.WriteLine("DataSliceStart: \t{0}", run.DataSliceStart);
+ Console.WriteLine("DataSliceEnd: \t\t{0}", run.DataSliceEnd);
+ Console.WriteLine("ActivityId: \t\t{0}", run.ActivityName);
+ Console.WriteLine("ProcessingStartTime: \t{0}", run.ProcessingStartTime);
+ Console.WriteLine("ProcessingEndTime: \t{0}", run.ProcessingEndTime);
+ Console.WriteLine("ErrorMessage: \t{0}", run.ErrorMessage);
+ }
+
+ Console.WriteLine("\nPress any key to exit.");
+ Console.ReadKey();
+ ```
14. Add the following helper method used by the **Main** method to the **Program** class. This method pops up a dialog box that lets you enter the **user name** and **password** that you use to sign in to the Azure portal.

```csharp
@@ -429,11 +429,11 @@ The Copy Activity performs the data movement in Azure Data Factory. The activity
{ AuthenticationContext context = new AuthenticationContext(ConfigurationManager.AppSettings["ActiveDirectoryEndpoint"] + ConfigurationManager.AppSettings["ActiveDirectoryTenantId"]); ClientCredential credential = new ClientCredential(
- ConfigurationManager.AppSettings["ApplicationId"],
- ConfigurationManager.AppSettings["Password"]);
+ ConfigurationManager.AppSettings["ApplicationId"],
+ ConfigurationManager.AppSettings["Password"]);
AuthenticationResult result = await context.AcquireTokenAsync(
- resource: ConfigurationManager.AppSettings["WindowsManagementUri"],
- clientCredential: credential);
+ resource: ConfigurationManager.AppSettings["WindowsManagementUri"],
+ clientCredential: credential);
if (result != null) return result.AccessToken;
@@ -446,10 +446,10 @@ The Copy Activity performs the data movement in Azure Data Factory. The activity
15. Build the console application. Click **Build** on the menu and click **Build Solution**.
16. Confirm that there is at least one file in the adftutorial container in your Azure blob storage. If not, create an Emp.txt file in Notepad with the following content and upload it to the adftutorial container.
- ```
+ ```
John, Doe Jane, Doe
- ```
+ ```
17. Run the sample by clicking **Debug** -> **Start Debugging** on the menu. When you see the **Getting run details of a data slice** message, wait for a few minutes, and press **ENTER**.
18. Use the Azure portal to verify that the data factory **APITutorialFactory** is created with the following artifacts:
    * Linked service: **AzureStorageLinkedService**
@@ -469,29 +469,29 @@ parameters.WindowState = "Failed";
var response = dataFactoryManagementClient.ActivityWindows.List(parameters); do {
- foreach (var activityWindow in response.ActivityWindowListResponseValue.ActivityWindows)
- {
- var row = string.Join(
- "\t",
- activityWindow.WindowStart.ToString(),
- activityWindow.WindowEnd.ToString(),
- activityWindow.RunStart.ToString(),
- activityWindow.RunEnd.ToString(),
- activityWindow.DataFactoryName,
- activityWindow.PipelineName,
- activityWindow.ActivityName,
- string.Join(",", activityWindow.OutputDatasets));
- Console.WriteLine(row);
- }
-
- if (response.NextLink != null)
- {
- response = dataFactoryManagementClient.ActivityWindows.ListNext(response.NextLink, parameters);
- }
- else
- {
- response = null;
- }
+ foreach (var activityWindow in response.ActivityWindowListResponseValue.ActivityWindows)
+ {
+ var row = string.Join(
+ "\t",
+ activityWindow.WindowStart.ToString(),
+ activityWindow.WindowEnd.ToString(),
+ activityWindow.RunStart.ToString(),
+ activityWindow.RunEnd.ToString(),
+ activityWindow.DataFactoryName,
+ activityWindow.PipelineName,
+ activityWindow.ActivityName,
+ string.Join(",", activityWindow.OutputDatasets));
+ Console.WriteLine(row);
+ }
+
+ if (response.NextLink != null)
+ {
+ response = dataFactoryManagementClient.ActivityWindows.ListNext(response.NextLink, parameters);
+ }
+ else
+ {
+ response = null;
+ }
} while (response != null);
```
dedicated-hsm https://docs.microsoft.com/en-us/azure/dedicated-hsm/high-availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dedicated-hsm/high-availability.md
@@ -1,4 +1,4 @@
----
+---
title: High availability - Azure Dedicated HSM | Microsoft Docs description: Learn about basic considerations for Azure Dedicated HSM high availability. This article includes an example. services: dedicated-hsm
@@ -10,17 +10,17 @@ ms.workload: identity
ms.tgt_pltfrm: na ms.devlang: na ms.topic: conceptual
-ms.date: 03/27/2019
+ms.date: 01/15/2021
ms.author: mbaldwin --- # Azure Dedicated HSM high availability
-Azure Dedicated HSM is underpinned by MicrosoftΓÇÖs highly available datacenters. However, any highly available datacenter is vulnerable to localized failures and in extreme circumstances, regional level failures. Microsoft deploys HSM devices in different datacenters within a region to ensure provisioning multiple devices does not lead to those devices sharing a single rack. A further level of high availability can be achieved by pairing these HSMs across the datacenters in a region using the Gemalto HA Group feature. It is also possible to pair devices across regions to address regional failover in a disaster recovery situation. With this multi-layered high availability configuration, any device failure will be automatically addressed to keep applications working. All datacenters also have spare devices and components on-site so any failed device can be replaced in a timely fashion.
+Azure Dedicated HSM is underpinned by Microsoft's highly available datacenters. However, any highly available datacenter is vulnerable to localized failures and, in extreme circumstances, regional-level failures. Microsoft deploys HSM devices in different datacenters within a region to ensure that provisioning multiple devices does not lead to those devices sharing a single rack. A further level of high availability can be achieved by pairing these HSMs across the datacenters in a region using the Thales HA Group feature. It is also possible to pair devices across regions to address regional failover in a disaster recovery situation. With this multi-layered high availability configuration, any device failure will be automatically addressed to keep applications working. All datacenters also have spare devices and components on-site so any failed device can be replaced in a timely fashion.
## High availability example
-Information on how to configure HSM devices for high availability at the software level is in the 'Gemalto Luna Network HSM Administration Guide'. This document is available at the [Gemalto HSM Page](https://safenet.gemalto.com/data-encryption/hardware-security-modules-hsms/safenet-network-hsm/).
+Information on how to configure HSM devices for high availability at the software level is in the 'Thales Luna 7 HSM Administration Guide'. This document is available at the [Thales HSM Page](https://thalesdocs.com/gphsm/Content/luna/network/luna_network_releases.htm).
The following diagram shows a highly available architecture. It uses multiple devices in region and multiple devices paired in a separate region. This architecture uses a minimum of four HSM devices and virtual networking components.
@@ -37,4 +37,4 @@ Further concept level topics:
* [Supportability](supportability.md) * [Monitoring](monitoring.md)
-For specific details on configuring HSM devices for high availability, please refer to the Gemalto Customer Support Portal for the Administrator Guides and see section 6.
+For specific details on configuring HSM devices for high availability, please refer to the Thales Customer Support Portal for the Administrator Guides and see section 6.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-manage-sensors-on-the-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-sensors-on-the-cloud.md
@@ -11,7 +11,7 @@ ms.service: azure
# Onboard and manage sensors in the Defender for IoT portal
-This article describes how to onboard, view, and manage sensors in the Defender for IoT portal.
+This article describes how to onboard, view, and manage sensors in the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
## Onboard sensors
@@ -21,9 +21,9 @@ You onboard a sensor by registering it with Azure Defender for IoT and downloadi
To register:
-1. Go to the **Welcome** page in the Defender for IoT portal.
+1. Go to the **Welcome** page in the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
1. Select **Onboard sensor**.
-1. Create a sensor name. We recommend that you include the IP address of the sensor you installed as part of the name, or use an easily identifiable name. This will ensure easier tracking and consistent naming between the registration name in the Azure Defender for IoT portal and the IP of the deployed sensor displayed in the sensor console.
+1. Create a sensor name. We recommend that you include the IP address of the sensor you installed as part of the name, or use an easily identifiable name. This will ensure easier tracking and consistent naming between the registration name in the Azure [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) and the IP of the deployed sensor displayed in the sensor console.
1. Associate the sensor with an Azure subscription. 1. Choose a sensor management mode by using the **Cloud connected** toggle. If the toggle is on, the sensor is cloud connected. If the toggle is off, the sensor is locally managed.
@@ -47,7 +47,7 @@ To download an activation file:
## View onboarded sensors
-On the Defender for IoT portal, you can view basic information about onboarded sensors.
+On the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started), you can view basic information about onboarded sensors.
1. Select **Sites and Sensors**. 1. On the **Sites and Sensors** page, use filter and search tools to find sensor information that you need.
@@ -61,7 +61,7 @@ The available information includes:
## Manage onboarded sensors
-You use the Defender for IoT portal for management tasks related to sensors.
+You use the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) for management tasks related to sensors.
### Export
@@ -84,7 +84,7 @@ To delete a sensor:
You might want to update the mode that your sensor is managed in. For example: -- **Work in cloud-connected mode instead of locally managed mode**: To do this, update the activation file for your locally connected sensor with an activation file for a cloud-connected sensor. After reactivation, sensor detections are displayed in both the sensor and the Defender for IoT portal. After the reactivation file is successfully uploaded, newly detected alert information is sent to Azure.
+- **Work in cloud-connected mode instead of locally managed mode**: To do this, update the activation file for your locally connected sensor with an activation file for a cloud-connected sensor. After reactivation, sensor detections are displayed in both the sensor and the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). After the reactivation file is successfully uploaded, newly detected alert information is sent to Azure.
- **Work in locally connected mode instead of cloud-connected mode**: To do this, update the activation file for a cloud-connected sensor with an activation file for a locally managed sensor. After reactivation, sensor detection information is displayed only in the sensor.
@@ -92,7 +92,7 @@ You might want to update the mode that your sensor is managed in. For example:
To reactivate a sensor:
-1. Go to **Sites and Sensors** page on the Defender for IoT portal.
+1. Go to **Sites and Sensors** page on the [Defender for IoT portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
2. Select the sensor for which you want to upload a new activation file.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/integration-cisco-ise-pxgrid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/integration-cisco-ise-pxgrid.md new file mode 100644
@@ -0,0 +1,222 @@
+---
+title: About the Cisco ISE pxGrid integration
+titleSuffix: Azure Defender for IoT
+description: Bridging the capabilities of Defender for IoT with Cisco ISE pxGrid provides security teams with unprecedented device visibility into the enterprise ecosystem.
+author: shhazam-ms
+manager: rkarlin
+ms.author: shhazam
+ms.date: 12/28/2020
+ms.topic: how-to
+ms.service: azure
+---
+
+# About the Cisco ISE pxGrid integration
+
+Defender for IoT delivers the only ICS and IoT cybersecurity platform built by blue-team experts with a track record of defending critical national infrastructure, and the only platform with patented ICS-aware threat analytics and machine learning. Defender for IoT provides:
+
+- Immediate insights about ICS devices, vulnerabilities, and known and zero-day threats.
+
+- ICS-aware deep embedded knowledge of OT protocols, devices, applications, and their behaviors.
+
+- An automated ICS threat modeling technology to predict the most likely paths of targeted ICS attacks via proprietary analytics.
+
+## Powerful device visibility and control
+
+The Defender for IoT integration with Cisco ISE pxGrid provides a new level of centralized visibility, monitoring, and control for the OT landscape.
+
+These bridged platforms enable automated device visibility and protection to previously unreachable ICS and IIoT devices.
+
+### ICS and IIoT device visibility: comprehensive and deep
+
+Patented Defender for IoT technologies ensure comprehensive and deep ICS and IIoT device discovery and inventory management for enterprise data.
+
+Device types, manufacturers, open ports, serial numbers, firmware revisions, IP addresses, MAC addresses, and more are updated in real time. Defender for IoT can further enhance visibility, discovery, and analysis from this baseline by integrating with critical enterprise data sources, such as CMDBs, DNS, firewalls, and web APIs.
+
+In addition, the Defender for IoT platform combines passive monitoring and optional selective probing techniques to provide the most accurate and detailed inventory of devices in industrial and critical infrastructure organizations.
+
+### Bridged capabilities
+
+Bridging these capabilities with Cisco ISE pxGrid provides security teams with unprecedented device visibility into the enterprise ecosystem.
+
+Seamless, robust integration with Cisco ISE pxGrid ensures no OT device goes undiscovered and no device information is missed.
+
+:::image type="content" source="media/integration-cisco-isepxgrid-integration/endpoint-categories.png" alt-text="Sample of the endpoint categories OUI.":::
+
+:::image type="content" source="media/integration-cisco-isepxgrid-integration/endpoints.png" alt-text="Sample endpoints in the system":::
+
+:::image type="content" source="media/integration-cisco-isepxgrid-integration/attributes.png" alt-text="Screenshot of the attributes located in the system.":::
+
+### Use case coverage: ISE policies based on Defender for IoT attributes
+
+Use powerful ISE policies based on Defender for IoT deep learning to handle ICS and IoT use case requirements.
+
+Working with policies lets you close the security cycle and bring your network to compliance in real time.
+
+For example, customers can use Defender for IoT ICS and IoT attributes to create policies that handle use cases such as the following:
+
+- Create an authorization policy to allow known and authorized devices into a sensitive zone based on allowed attributes - for example, specific firmware version for Rockwell Automation PLCs.
+
+- Notify security teams when an ICS device is detected in a non-OT zone.
+
+- Remediate machines with outdated or noncompliant vendors.
+
+- Quarantine and block devices as required.
+
+- Generate reports on PLCs or RTUs running firmware with known vulnerabilities (CVEs).
+
+### About this article
+
+This article describes how to set up pxGrid and the Defender for IoT platform to ensure that Defender for IoT injects OT attributes into Cisco ISE.
+
+### Getting more information
+
+For more information about Cisco ISE pxGrid integration requirements, see <https://www.cisco.com/c/en/us/products/security/pxgrid.html>
+
+## Integration system requirements
+
+This section describes the integration system requirements:
+
+Defender for IoT requirements
+
+- Defender for IoT version 2.5
+
+Cisco requirements
+
+- pxGrid version 2.0
+
+- Cisco ISE version 2.4
+
+Network requirements
+
+- Verify that the Defender for IoT appliance has access to the Cisco ISE management interface.
+
+- Verify that you have CLI access to all Defender for IoT appliances in your enterprise.
+
+> [!NOTE]
+> Only devices with MAC addresses are synced with Cisco ISE pxGrid.
+
+## Cisco ISE pxGrid setup
+
+This section describes how to:
+
+- Set up communication with pxGrid
+
+- Subscribe to the endpoint device topic
+
+- Generate certificates
+
+- Define pxGrid settings
+
+## Set up communication with pxGrid
+
+This section describes how to set up communication with pxGrid.
+
+To set up communication:
+
+1. Sign in to Cisco ISE.
+
+1. Select **Administration** > **System** > **Deployment**.
+
+1. Select the required node. In the General Settings tab, select the **pxGrid checkbox**.
+
+ :::image type="content" source="media/integration-cisco-isepxgrid-integration/pxgrid.png" alt-text="Ensure the pxgrid checkbox is selected.":::
+
+1. Select the **Profiling Configuration** tab.
+
+1. Select the **pxGrid checkbox**. Add a description.
+
+ :::image type="content" source="media/integration-cisco-isepxgrid-integration/profile-configuration.png" alt-text="Screenshot of the add description":::
+
+## Subscribe to the endpoint device topic
+
+Verify that the ISE pxGrid node has subscribed to the endpoint device topic. Navigate to **Administration**>**pxGrid Services**>**Web Clients**. There, you can verify that ISE has subscribed to the endpoint device topic.
+
+## Generate certificates
+
+This section describes how to generate certificates.
+
+To generate:
+
+1. Select **Administration** > **pxGrid Services**, and then select **Certificates**.
+
+ :::image type="content" source="media/integration-cisco-isepxgrid-integration/certificates.png" alt-text="Select the certificates tab in order to generate a certificate.":::
+
+1. In the **I Want To** field, select **Generate a single certificate (without a certificate signing request)**.
+
+1. Fill in the remaining fields and select **Create**.
+
+1. Create the client certificate key and the server certificate, and then convert them to Java keystore format, as shown in the sketch after these steps. You'll need to copy these to each Defender for IoT sensor in your network.
+
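+If you need a starting point for the keystore conversion, the following is a minimal sketch using the standard `openssl` and `keytool` tools; every file name and alias is a placeholder rather than a value defined by this integration.
+
+```powershell
+# Sketch only: bundle the client certificate and private key into PKCS#12, then convert to JKS.
+openssl pkcs12 -export -in sensor-cert.pem -inkey sensor-key.pem -out sensor.p12 -name sensor
+keytool -importkeystore -srckeystore sensor.p12 -srcstoretype PKCS12 -destkeystore keystore.jks
+# Import the ISE CA certificate into a separate trust store (trust.jks).
+keytool -importcert -file ise-ca.pem -alias ise-ca -keystore trust.jks
+```
+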
+## Define pxGrid settings
+
+To define settings:
+
+1. Select **Administration** > **pxGrid Services** and then select **Settings**.
+
+1. Enable the **Automatically approve new certificate-based accounts** and **Allow password-based account creation.**
+
+ :::image type="content" source="media/integration-cisco-isepxgrid-integration/allow-these.png" alt-text="Ensure both checkboxes are selected.":::
+
+## Set up the Defender for IoT appliances
+
+This section describes how to set up Defender for IoT appliances to communicate with pxGrid. The configuration should be carried out on all Defender for IoT appliances that will inject data to Cisco ISE.
+
+To set up appliances:
+
+1. Sign in to the sensor CLI.
+
+1. Copy the `trust.jks` and keystore files that you created earlier to `/var/cyberx/properties/`.
+
+1. Edit `/var/cyberx/properties/pxgrid.properties` (see the sketch after these steps):
+
+    1. Add the key store and trust store file names and passwords.
+
+ 2. Add the hostname of the pxGrid instance.
+
+    3. Set `Enabled=true`.
+
+1. Restart the appliance.
+
+1. Repeat these steps for each appliance in your enterprise that will inject data.
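+
+The following is an illustrative `pxgrid.properties` sketch. The key names are assumptions made for illustration only, not a documented schema; keep the property names that already exist in your appliance's file and update just their values.
+
+```
+key-store=/var/cyberx/properties/keystore.jks
+key-store-password=<keystore password>
+trust-store=/var/cyberx/properties/trust.jks
+trust-store-password=<trust store password>
+pxgrid-hostname=<pxGrid node hostname>
+enabled=true
+```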
+
+## View and manage detections in Cisco ISE
+
+1. Endpoint detections are displayed in the ISE **Context Visibility** > **Endpoints** tab.
+
+1. Select **Policy** > **Profiling** > **Add** > **Rules** > **+ Condition** > **Create New Condition**.
+
+1. Select **Attribute** and use the IoT device dictionaries to build a profiling rule based on the attributes injected. The following attributes are injected:
+
+ - Device type
+
+ - Hardware revision
+
+ - IP address
+
+ - MAC address
+
+ - Name
+
+ - Product ID
+
+ - Protocol
+
+ - Serial number
+
+ - Software revision
+
+ - Vendor
+
+Only devices with MAC addresses are synced.
+
+## Troubleshooting and logs
+
+Logs can be found in:
+
+- `/var/cyberx/logs/pxgrid.log`
+
+- `/var/cyberx/logs/core.log`
+
+## Next steps
+
+Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/integration-forescout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/integration-forescout.md new file mode 100644
@@ -0,0 +1,181 @@
+---
+title: About the Forescout integration
+titleSuffix: Azure Defender for IoT
+description: The Azure Defender for IoT integration with the Forescout platform provides a new level of centralized visibility, monitoring, and control for the IoT and OT landscape.
+author: shhazam-ms
+manager: rkarlin
+ms.author: shhazam
+ms.date: 1/17/2021
+ms.topic: article
+ms.service: azure
+---
+
+# About the Forescout integration
+
+Azure Defender for IoT delivers an ICS and IoT cybersecurity platform built by blue team experts with a track record of defending critical national infrastructure. Defender for IoT is the only platform with patented ICS-aware threat analytics and machine learning. Defender for IoT provides:
+
+- Immediate insights into the ICS device landscape, with an extensive range of details about device attributes.
+- ICS-aware deep embedded knowledge of OT protocols, devices, applications, and their behaviors.
+- Immediate insights into vulnerabilities, and known and zero-day threats.
+- An automated ICS threat modeling technology to predict the most likely paths of targeted ICS attacks via proprietary analytics.
+
+The Defender for IoT integration with the Forescout platform provides a new level of centralized visibility, monitoring, and control for the IoT and OT landscape.
+
+These bridged platforms enable automated device visibility and management to previously unreachable ICS devices and siloed workflows.
+
+The integration provides SOC analysts with multilevel visibility into OT protocols deployed in industrial environments. Details are available for items such as firmware, device types, operating systems, and risk analysis scores based on proprietary Azure Defender for IoT technologies.
+
+> [!Note]
+> References to CyberX refer to Azure Defender for IoT.
+## Devices
+
+### Device visibility and management
+
+The device inventory is enriched with critical attributes detected by the Defender for IoT platform. This ensures that you:
+
+- Gain comprehensive and continuous visibility into the OT device landscape from a single pane of glass.
+- Obtain real-time intelligence about OT risks.
+
+:::image type="content" source="media/integration-forescout/forescout-device-inventory.png" alt-text="Device inventory":::
+
+:::image type="content" source="media/integration-forescout/forescout-device-details.png" alt-text="Device details":::
+
+### Device control
+
+The Forescout integration helps reduce the time required for industrial and critical infrastructure organizations to detect, investigate, and act on cyber threats.
+
+- Use Azure Defender for IoT OT device intelligence to close the security cycle by triggering Forescout policy actions. For example, you can automatically send alert email to SOC administrators when specific protocols are detected, or when firmware details change.
+
+- Correlate Defender for IoT information with other *Forescout eyeExtend* modules that oversee monitoring, incident management, and device control.
+
+## System requirements
+
+- Azure Defender for IoT version 2.4 or above
+- Forescout version 8.0 or above
+- A license for the *Forescout eyeExtend* module for the Azure Defender for IoT Platform.
+
+### Getting more Forescout information
+
+For more information about the Forescout platform, see the [Forescout Resource Center](https://www.forescout.com/company/resources/#resource_filter_group).
+
+## System setup
+
+This article describes how to set up communication between the Defender for IoT platform and the Forescout platform.
+
+### Set up the Defender for IoT platform
+
+To ensure communication from Defender for IoT to Forescout, generate an access token in Defender for IoT.
+
+Access tokens allow external systems to access data discovered by Defender for IoT and perform actions with that data using the external REST API, over SSL connections. You can generate access tokens in order to access the Azure Defender for IoT REST API.
+
+To generate a token:
+
+1. Sign in to the Defender for IoT Sensor that will be queried by Forescout.
+
+1. Select **System Settings** and then select **Access Tokens** from the **General** section. The **Access Tokens** dialog box opens.
+ :::image type="content" source="media/integration-forescout/generate-access-tokens-screen.png" alt-text="Access tokens":::
+1. Select **Generate new token** from the **Access Tokens** dialog box.
+1. Enter a token description in the **New access token** dialog box.
+ :::image type="content" source="media/integration-forescout/new-forescout-token.png" alt-text="New access token":::
+1. Select **Next**. The token is displayed in the dialog box.
+
+   :::image type="content" source="media/integration-forescout/forescout-access-token-display-screen.png" alt-text="View token":::
+ > [!NOTE]
+ > *Record the token in a safe place. You will need it when configuring the Forescout Platform*.
+1. Select **Finish**.
+
+ :::image type="content" source="media/integration-forescout/forescout-access-token-added-successfully.png" alt-text="Finish adding token":::
+
+### Set up the Forescout platform
+
+You can set up the Forescout platform to communicate with a Defender for IoT sensor.
+
+To set up:
+
+1. Install the *Forescout eyeExtend module for CyberX* on the Forescout platform.
+
+1. Sign in to the CounterACT console and select **Options** from the **Tools** menu. The **Options** dialog box opens.
+
+1. Navigate to and select the **Modules** folder.
+
+1. In the **Modules** pane, select **CyberX Platform**. The Defender for IoT platform pane opens.
+
+ :::image type="content" source="media/integration-forescout/settings-for-module.png" alt-text="Azure Defender for IoT module settings":::
+
+ Enter the following information:
+
+ - **Server Address** - Enter the IP address of the Defender for IoT sensor that will be queried by the Forescout appliance.
+ - **Access Token** - Enter the access token generated for the Defender for IoT sensor that will connect to the Forescout appliance. To generate a token, see [Set up the Defender for IoT platform](#set-up-the-defender-for-iot-platform).
+
+1. Select **Apply**.
+
+If you want the Forescout platform to communicate with another sensor:
+
+1. Create a new access token in the relevant Defender for IoT sensor.
+
+1. In the **Forescout Modules** > **CyberX Platform** dialog box:
+
+ - Delete the information displayed.
+
+ - Enter the new sensor IP address and the new access token information.
+
+### Verify communication
+
+After configuring Defender for IoT and Forescout, open the sensor's **Access Tokens** dialog box in Defender for IoT. The **Used** field indicates the last time an external call with this token was received.
+
+If **N/A** is displayed in the **Used** field for this token, the connection between the sensor and the Forescout appliance is not working.
+
+:::image type="content" source="media/integration-forescout/forescout-access-token-added-successfully.png" alt-text="Verifies the token was received":::
+
+## View Defender for IoT detections in Forescout
+
+To view a device's attributes:
+
+1. Sign in to the Forescout platform and then navigate to the **Asset Inventory**.
+
+1. Navigate to the **CyberX Platform**. The following device attributes are displayed for OT devices detected by Defender for IoT.
+
+ | Item | Description |
+ |--|--|
+ | Authorized by Azure Defender for IoT | A device detected on your network by Defender for IoT during the network learning period. |
+ | Firmware | The firmware details of the device. For example, model and version details. |
+ | Name | The name of the device. |
+ | Operating System | The operating system of the device. |
+ | Type | The type of device. For example, a PLC, Historian or Engineering Station. |
+ | Vendor | The vendor of the device. For example, Rockwell Automation. |
+ | Risk level | The risk level calculated by Defender for IoT. |
+ | Protocols | The protocols detected in the traffic generated by the device. |
+
+:::image type="content" source="media/integration-forescout/device-firmware-attributes-in-forescout.png" alt-text="View the firmware attributes.":::
+
+:::image type="content" source="media/integration-forescout/vendor-attributes-in-forescout.png" alt-text="View the vendor attributes.":::
+
+### Viewing more details
+
+View extra device information for devices detected by Defender for IoT. For example, Forescout compliance and policy information.
+
+To accomplish this, right-click on a device from the **Device Inventory Hosts** section. The host details dialog box opens with additional information.
+
+:::image type="content" source="media/integration-forescout/details-dialog-box-in-forescout.png" alt-text="Host Details":::
+
+## Create Azure Defender for IoT policies in Forescout
+
+Forescout policies can be used to automate control and management of devices detected by Defender for IoT. For example,
+
+- Automatically email the SOC administrators when specific firmware versions are detected.
+
+- Add specific Defender for IoT detected devices to a Forescout group for further handling in incident and security workflows, for example with other SIEM integrations.
+
+Create a Forescout custom policy that uses Defender for IoT condition properties.
+
+To access Defender for IoT properties:
+
+1. Navigate to the **Properties Tree** from the **Policy Conditions** dialog box.
+
+1. Expand the **CyberX Platform** folder in the **Properties Tree**. The following Defender for IoT properties are available.
+
+:::image type="content" source="media/integration-forescout/forescout-property-tree.png" alt-text="Properties":::
+
+## Next steps
+
+Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/integration-fortinet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/integration-fortinet.md new file mode 100644
@@ -0,0 +1,242 @@
+---
+title: About the Fortinet integration
+titleSuffix: Azure Defender for IoT
+description: Defender for IoT and Fortinet have established a technology partnership in order to detect and stop attacks on IoT and ICS networks.
+author: shhazam-ms
+manager: rkarlin
+ms.author: shhazam
+ms.date: 1/17/2021
+ms.topic: article
+ms.service: azure
+---
+
+# Defender for IoT and Fortinet IIoT and ICS threat detection & prevention
+
+Defender for IoT mitigates IIoT, ICS, and SCADA risk with patented, ICS-aware self-learning engines that deliver immediate insights about ICS devices, vulnerabilities, and threats in less than an hour and without relying on agents, rules or signatures, specialized skills, or prior knowledge of the environment.
+
+Defender for IoT and Fortinet have established a technology partnership in order to detect and stop attacks on IoT and ICS networks.
+
+Together, Fortinet and Defender for IoT prevent:
+
+- Unauthorized changes to programmable logic controllers.
+
+- Malware that manipulates ICS and IoT devices via their native protocols.
+- Reconnaissance tools from collecting data.
+- Protocol violations caused by misconfigurations or malicious attackers.
+
+## Defender for IoT and FortiGate joint solution
+
+Defender for IoT detects anomalous behavior in IoT and ICS networks and delivers that information to FortiGate and FortiSIEM, as follows:
+
+- **Visibility:** The information provided by Defender for IoT gives FortiSIEM administrators visibility into previously invisible IoT and ICS networks.
+
+- **Blocking malicious attacks:** FortiGate administrators can use the information discovered by Defender for IoT to create rules that stop the anomalous behavior, regardless of whether that behavior is caused by chaotic actors or misconfigured devices, before it causes damage to production, profits, or people.
+
+## The Defender for IoT platform
+
+The Defender for IoT platform autodiscovers and fingerprints any non-managed IoT and ICS devices, while continuously monitoring for targeted attacks and malware. Risk and vulnerability management capabilities include automated threat modeling as well as comprehensive reporting about both endpoint and network vulnerabilities, with risk-prioritized mitigation recommendations.
+
+The Defender for IoT platform is agentless, non-intrusive, and easy to deploy, delivering actionable insights less than an hour after being connected to the network.
+
+## Fortinet FortiSIEM
+
+Effective security requires real-time visibility into all devices and all infrastructure, with context: which devices represent a threat, and what their capabilities are, so that you can manage the real threats your business faces rather than the noise that multiple security tools create.
+
+Endpoints, IoT, infrastructure, security tools, applications, VMs, and cloud are some of the things administrators need to secure and monitor constantly.
+
+FortiSIEM, Fortinet's multivendor security incident and event management solution, brings it all together: visibility, correlation, automated response, and remediation in a single, scalable solution.
+
+Using a Business Services view, FortiSIEM reduces the complexity of managing network and security operations, freeing resources and improving breach detection. FortiSIEM provides cross-correlation while applying machine learning and UEBA to improve response, in order to stop breaches before they occur.
+
+## Fortinet FortiGate next-generation firewalls
+
+FortiGate's next-generation firewalls (NGFWs) utilize purpose-built security processors and threat intelligence security services from AI-powered FortiGuard Labs to deliver top-rated protection and high-performance inspection of clear-text and encrypted traffic.
+
+Next-generation firewalls reduce cost and complexity with full visibility into applications, users, and networks, and provide best-of-breed security. As an integral part of the Fortinet Security Fabric, next-generation firewalls can communicate within Fortinet's comprehensive security portfolio and with partner security solutions in a multivendor environment to share threat intelligence and improve security posture.
+
+## Use cases
+
+### Prevent unauthorized changes to programmable logic controllers
+
+Organizations use programmable logic controllers (PLCs) to manage physical processes such as robotic arms in factories, spinning turbines in wind farms, and centrifuges in nuclear power plants.
+
+An update to the ladder logic or firmware of a PLC can represent a legitimate activity or an attempt to compromise the device by inserting malicious code. Defender for IoT can detect unauthorized changes to PLCs, and then deliver information about that change to both FortiSIEM and FortiGate. Armed with that information, FortiSIEM administrators can decide how to best mitigate the threat. One mitigation option would be to create a rule in FortiGate that stops further communication to the affected device.
+
+### Stop ransomware before it damages IoT and ICS networks
+
+Defender for IoT continuously monitors IoT and ICS networks for behaviors that are caused by ransomware such as LockerGoga, WannaCry, and NotPetya. When integrated with FortiSIEM and FortiGate, Defender for IoT can deliver information about the presence of these types of ransomware so that FortiSIEM operators can see where the malware is, and FortiGate administrators can stop the ransomware from spreading and wreaking more havoc.
+
+## Send Defender for IoT alerts to FortiSIEM
+
+Defender for IoT alerts provide information about an extensive range of security events, including:
+
+- Deviations from learned baseline network activity
+- Malware detections
+- Detections based on suspicious operational changes
+- Network anomalies
+- Protocol deviations from protocol specifications
+
+:::image type="content" source="media/integration-fortinet/address-scan.png" alt-text="Screenshot of the Address Scan Detected window.":::
+
+You can configure Defender for IoT to send alerts to the FortiSIEM server, where alert information is displayed in the Analytics window:
+
+:::image type="content" source="media/integration-fortinet/analytics.png" alt-text="Screenshot of the Analytics window.":::
+
+Each Defender for IoT alert is parsed without any additional configuration on the FortiSIEM side, and presented in FortiSIEM as a security event. The following event details appear by default:
+
+:::image type="content" source="media/integration-fortinet/event-detail.png" alt-text="View your event details in the Event Details window.":::
+
+## Define alert forwarding rules
+
+Use Defender for IoT's Forwarding Rules to send alert information to FortiSIEM.
+
+Options are available to customize the alert rules based on the:
+
+- Specific protocols detected.
+
+- Severity of the event.
+
+- Defender for IoT engine that detects events.
+
+To create a forwarding rule:
+
+1. From the sensor or on-premises management console left pane, select **Forwarding**.
+
+ [:::image type="content" source="media/integration-fortinet/forwarding-view.png" alt-text="View your forwarding rules in the Forwarding window.":::](media/integration-fortinet/forwarding-view.png#lightbox)
+
+2. Select **Create Forwarding Rules**. In the **Create Forwarding Rule** window, define your rule's parameters.
+
+ :::image type="content" source="media/integration-fortinet/new-rule.png" alt-text="Create a new Forwarding Rule window.":::
+
+ | Parameter | Description |
+ |--|--|
+ | **Name** | The forwarding rule name. |
+ | **Select Severity** | The minimum security level incident to forward. For example, if **Minor** is selected, minor alerts and any alert above this severity level will be forwarded. |
+ | **Protocols** | By default, all the protocols are selected.<br><br>To select a specific protocol, select **Specific** and select the protocol for which this rule is applied. |
+ | **Engines** | By default, all the security engines are involved.<br><br>To select a specific security engine for which this rule is applied, select **Specific** and select the engine. |
+ | **System Notifications** | Forward sensor online/offline status. This option is only available if you have logged into the on-premises management console. |
+
+3. To instruct Defender for IoT to send alert information to FortiSIEM, select **Action** and then select **Send to FortiSIEM**.
+
+ :::image type="content" source="media/integration-fortinet/forward-rule.png" alt-text="Create a Forwarding Rule and select send to Fortinet.":::
+
+4. Enter the FortiSIEM server details.
+
+ :::image type="content" source="media/integration-fortinet/details.png" alt-text="Add the FortiSIEM details to the forwarding rule.":::
+
+ | Parameter | Description |
+ | --------- | ----------- |
+ | **Host** | FortiSIEM server address. |
+ | **Port** | FortiSIEM server port. |
+ | **Timezone** | The time zone used for the alert detection time stamp. |
+
+5. Select **Submit**.
+
+## Set up blocking of suspected traffic using the FortiGate firewall
+
+You can set blocking policies automatically in the FortiGate firewall based on Defender for IoT alerts.
+
+:::image type="content" source="media/integration-fortinet/firewall.png" alt-text="View of the FortiGate Firewall window view.":::
+
+For example, the following alert identifies a malicious source that can be blocked:
+
+:::image type="content" source="media/integration-fortinet/suspicion.png" alt-text="The NotPetya Malware suspicion window":::
+
+To set a FortiGate firewall rule that blocks this malicious source:
+
+1. In FortiGate, create an API key required for the Defender for IoT forwarding rule. For more information, see [Create an API key in FortiGate](#create-an-api-key-in-fortigate).
+
+1. In Defender for IoT **Forwarding**, set a forwarding rule that blocks malware-related alerts. For more information, see [Block suspected traffic using the FortiGate firewall](#block-suspected-traffic-using-the-fortigate-firewall).
+
+1. In Defender for IoT, **Alerts** block a malicious source. For more information, see [Block the suspicious source](#block-the-suspicious-source).
+
+ The malicious source address appears in the FortiGate **Administrator** window.
+
+ :::image type="content" source="media/integration-fortinet/administrator.png" alt-text="The FortiGate Administrator window view.":::
+
+ The blocking policy is created automatically, and it appears in the FortiGate **IPv4 Policy** window.
+
+ :::image type="content" source="media/integration-fortinet/policy.png" alt-text="The FortiGate IPv4 Policy window view.":::
+
+ :::image type="content" source="media/integration-fortinet/edit.png" alt-text="The FortiGate IPv4 Policy Edit view.":::
+
+## Create an API key in FortiGate
+
+1. In FortiGate, select **System** > **Admin Profiles**.
+
+1. Create a profile with the following permissions:
+
+ :::image type="content" source="media/integration-fortinet/admin-profile.png" alt-text="The required admin profile permissions.":::
+
+1. Select **System** > **Administrators** and create a new **REST API Admin**.
+
+ :::image type="content" source="media/integration-fortinet/cellphone.png" alt-text="Create a new REST API Admin.":::
+
+ | Parameter | Description |
+ | --------- | ----------- |
+ | **Username** | The name of the REST API administrator. |
+ | **Comments** | Optional comments describing the administrator account. |
+ | **Administrator Profile** | From the dropdown list, select the profile name that you defined in the previous step. |
+ | **PKI Group** | Disable. |
+ | **CORS Allow Origin** | Enable. |
+ | **Restrict login to trusted hosts** | Add the IP addresses of the sensors and on-premises management consoles that will connect to the FortiGate. |
+
+1. Save the API key.
+
+ :::image type="content" source="media/integration-fortinet/api-key.png" alt-text="The new API key.":::
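+
+To confirm that the new REST API administrator and key work before entering them in Defender for IoT, you can issue a read request against the FortiGate REST API. The following is a minimal sketch, assuming FortiOS accepts the key as a bearer token; the address, endpoint, and key below are placeholders:
+
+```bash
+# List IPv4 firewall policies using the REST API key created above.
+# <fortigate-ip> and <api-key> are placeholders for your own values.
+# --insecure skips certificate validation for self-signed certificates.
+curl --insecure \
+  -H "Authorization: Bearer <api-key>" \
+  "https://<fortigate-ip>/api/v2/cmdb/firewall/policy"
+```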
+
+## Block suspected traffic using the FortiGate firewall
+
+1. In the left pane, select **Forwarding**.
+
+ [:::image type="content" source="media/integration-fortinet/forwarding-view.png" alt-text="The Forwarding window option in a sensor.":::](media/integration-fortinet/forwarding-view.png#lightbox)
+
+1. Select **Create Forwarding Rules** and define rule parameters.
+
+ :::image type="content" source="media/integration-fortinet/new-rule.png" alt-text="Screenshot of Create Forwarding Rule window":::
+
+ | Parameter | Description |
+ | --------- | ----------- |
+ | **Name** | The forwarding rule name. |
+ | **Select Severity** | The minimal security level incident to forward. For example, if **Minor** is selected, minor alerts and any alert above this severity level will be forwarded. |
+ | **Protocols** | By default, all the protocols are selected.<br><br>To select a specific protocol, select **Specific** and select the protocol for which this rule is applied. |
+ | **Engines** | By default, all the security engines are involved.<br><br>To select a specific security engine for which this rule is applied, select **Specific** and select the engine. |
+ | **System Notifications** | Forward sensor *online and offline* status. This option is only available if you have logged into the on-premises management console. |
+
+1. To instruct Defender for IoT to send firewall rules to FortiGate, select **Action** and then select **Send to FortiGate**.
+
+ :::image type="content" source="media/integration-fortinet/fortigate.png" alt-text="Create Forwarding Rule window and select send to FortiGate":::
+
+1. To configure the FortiGate forwarding rule:
+
+ :::image type="content" source="media/integration-fortinet/configure.png" alt-text="Configure the Create Forwarding Rule window":::
+
+1. In the **Actions** pane, set the following parameters:
+
+ | Parameter | Description |
+ |--|--|
+ | Host | The FortiGate server IP address. |
+ | Port | The FortiGate server port. |
+ | Username | The FortiGate server username. |
+ | API Key | Enter the API key that you created in FortiGate. |
+ | Incoming Interface | Enter the incoming firewall interface port. |
+ | Outgoing Interface | Enter the outgoing firewall interface port. |
+ |Configure| Set up the following options to allow blocking of the suspicious sources by the FortiGate firewall: <br /><br />**Block illegal function codes**: Protocol violations - Illegal field value violating ICS protocol specification (potential exploit)<br /><br />**Block unauthorized PLC programming / firmware updates**: Unauthorized PLC changes<br /><br />**Block unauthorized PLC stop**: PLC stop (downtime)<br /><br />**Block malware-related alerts**: Blocking of the industrial malware attempts (TRITON, NotPetya, etc.). You can select the option of **Automatic blocking**. In that case, the blocking is executed automatically and immediately.<br /><br />**Block unauthorized scanning**: Unauthorized scanning (potential reconnaissance) |
+
+1. Select **Submit**.
+
+## Block the suspicious source
+
+1. In the **Alerts** pane, select the alert related to the Fortinet integration.
+
+ :::image type="content" source="media/integration-fortinet/unauthorized.png" alt-text="The Unauthorized PLC programming window":::
+
+1. To automatically block the suspicious source, select **Block Source**.
+
+ :::image type="content" source="media/integration-fortinet/confirm.png" alt-text="The confirmation window.":::
+
+1. In the **Please Confirm** dialog box, select **OK**.
+
+## Next steps
+
+Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/integration-palo-alto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/integration-palo-alto.md new file mode 100644
@@ -0,0 +1,185 @@
+---
+title: Palo Alto integration
+titleSuffix: Azure Defender for IoT
+description: Defender for IoT has integrated its continuous ICS threat monitoring platform with Palo Alto's next-generation firewalls to enable blocking of critical threats, faster and more efficiently.
+author: shhazam-ms
+manager: rkarlin
+ms.author: shhazam
+ms.date: 1/17/2021
+ms.topic: article
+ms.service: azure
+---
+
+# About the Palo Alto integration
+
+Defender for IoT has integrated its continuous ICS threat monitoring platform with Palo Alto's next-generation firewalls to enable blocking of critical threats, faster and more efficiently.
+
+The following integration types are available:
+
+- Automatic blocking option: Direct Defender for IoT-Palo Alto integration
+
+- Sending recommendations for blocking to the central management system: Defender for IoT-Panorama integration
+
+## Configure immediate blocking by specified Palo Alto firewall
+
+In critical cases, such as malware-related alerts, you can enable automatic blocking. This is done by configuring a forwarding rule in Defender for IoT that sends a blocking command directly to a specific Palo Alto firewall.
+
+When Defender for IoT identifies a critical threat, it sends an alert that includes an option of blocking the infected source. Selecting **Block Source** in the alert's details activates the forwarding rule, which sends the blocking command to the specified Palo Alto firewall.
+
+To configure:
+
+1. In the left pane, select **Forwarding**.
+
+ :::image type="content" source="media/integration-paloalto/forwarding.png" alt-text="The forwarding alert screen.":::
+
+1. Select **Create Forwarding Rule**.
+
+ :::image type="content" source="media/integration-paloalto/forward-rule.png" alt-text="Create Forwarding Rule":::
+
+1. To configure the Palo Alto NGFW Forwarding Rule:
+
+ - Define the standard rule parameters and from the **Actions** drop-down box, select **Send to Palo Alto NGFW**.
+
+ :::image type="content" source="media/integration-paloalto/edit.png" alt-text="Edit your Forwarding Rule.":::
+
+1. In the Actions pane, set the following parameters:
+
+ - **Host**: Enter the NGFW server IP address.
+ - **Port**: Enter the NGFW server port.
+ - **Username**: Enter the NGFW server username.
+ - **Password**: Enter the NGFW server password.
+ - **Configure**: Set up the following options to allow blocking of the suspicious sources by the Palo Alto firewall:
+ - **Block illegal function codes**: Protocol violations - Illegal field value violating ICS protocol specification (potential exploit).
+ - **Block unauthorized PLC programming/firmware updates**: Unauthorized PLC changes.
+ - **Block unauthorized PLC stop**: PLC stop (downtime).
+ - **Block malware-related alerts**: Blocking of industrial malware attempts (TRITON, NotPetya, etc.). You can select the option of **Automatic blocking**. In that case, the blocking is executed automatically and immediately.
+ - **Block unauthorized scanning**: Unauthorized scanning (potential reconnaissance).
+
+1. Select **Submit**.
+
+To block the suspicious source:
+
+1. In the **Alerts** pane, select the alert related to the Palo Alto integration. The **Alert Details** dialog box appears.
+
+ :::image type="content" source="media/integration-paloalto/unauthorized.png" alt-text="Alert details":::
+
+1. To automatically block the suspicious source, select **Block Source**. The **Please Confirm** dialog box appears.
+
+ :::image type="content" source="media/integration-paloalto/please.png" alt-text="Confirm blocking on the Please Confirm screen.":::
+
+1. In the **Please Confirm** dialog box, select **OK**. The suspicious source is blocked by the Palo Alto firewall.
+
+## Sending blocking policies to Palo Alto Panorama
+
+Defender for IoT and Palo Alto Networks have an off-the-shelf integration that automatically creates new policies in Palo Alto Networks NMS, Panorama. This integration requires confirmation by the Panorama Administrator and does not allow automatic blocking.
+
+The integration is intended for the following incidents:
+
+- **Unauthorized PLC changes:** An update to the ladder logic or firmware of a device. This can represent a legitimate activity or an attempt to compromise the device. The compromise could happen by inserting malicious code, such as a Remote Access Trojan (RAT), or by inserting parameters that cause the physical process, such as a spinning turbine, to operate in an unsafe manner.
+
+- **Protocol Violation:** A packet structure or field value that violates the protocol specification. This can represent a misconfigured application or a malicious attempt to compromise the device. For example, causing a buffer overflow condition in the target device.
+
+- **PLC Stop:** A command that causes the device to stop functioning, thereby risking the physical process that is being controlled by the PLC.
+
+- **Industrial malware found in the ICS network:** Malware that manipulates ICS devices using their native protocols, such as TRITON and Industroyer. Defender for IoT also detects IT malware that has moved laterally into the ICS and SCADA environment, such as Conficker, WannaCry, and NotPetya.
+
+- **Scanning malware:** Reconnaissance tools that collect data about system configuration in a pre-attack phase. For example, the Havex Trojan scans industrial networks for devices using OPC, which is a standard protocol used by Windows-based SCADA systems to communicate with ICS devices.
+
+## The process
+
+When Defender for IoT detects a pre-configured use case, the **Block Source** button is added to the alert. Then, when the **CyberX** user selects the **Block Source** button, Defender for IoT creates policies on Panorama by sending the predefined forwarding rule.
+
+The policy is applied only when the Panorama administrator pushes it to the relevant NGFW in the network.
+
+In IT networks, there may be dynamic IP addresses. Therefore, for those subnets, the policy must be based on FQDN (DNS name) and not the IP address. Defender for IoT performs a reverse lookup and matches devices with dynamic IP addresses to their FQDNs (DNS names) at the configured interval.
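+
+The reverse lookup itself is standard PTR resolution. As an illustration of what Defender for IoT performs on your behalf, the same match can be reproduced manually with `dig`; the DNS server and device IP below are placeholder values:
+
+```bash
+# Reverse-resolve a dynamic IP address to its FQDN (PTR record),
+# querying the same DNS server configured in Defender for IoT.
+dig @10.0.0.53 -x 10.1.2.34 +short
+# Expected output: an FQDN such as workstation42.corp.example.com.
+```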
+
+In addition, Defender for IoT sends an email to the relevant Panorama user to notify them that a new policy created by Defender for IoT is waiting for approval. The figure below presents the Defender for IoT-Panorama integration architecture.
+
+:::image type="content" source="media/integration-paloalto/structure.png" alt-text="CyberX-Panorama Integration Architecture":::
+
+## Create Panorama blocking policies in Defender for IoT
+
+### To configure DNS Lookup
+
+1. In the left pane, select **System Settings**.
+
+1. In the **System Settings** pane, select **DNS Settings** :::image type="icon" source="media/integration-paloalto/settings.png":::.
+
+ :::image type="content" source="media/integration-paloalto/configuration.png" alt-text="Configure the DNS settings.":::
+
+1. In the **Edit DNS Settings** dialog box, set the following parameters:
+
+ - **Status**: The status of the DNS resolver.
+
+ - **DNS Server Address**: Enter the IP address, or the FQDN of the network DNS Server.
+ - **DNS Server Port**: Enter the port used to query the DNS server.
+ - **Subnets**: Set the dynamic IP address subnet range. This is the range of addresses for which Defender for IoT performs a reverse lookup in the DNS server to match each device's current FQDN.
+ - **Schedule Reverse Lookup**: Define the scheduling options as follows:
+ - By specific times: Specify when to perform the reverse lookup daily.
+ - By fixed intervals (in hours): Set the frequency for performing the reverse lookup.
+ - **Number of Labels**: Instruct Defender for IoT to automatically resolve network IP addresses to device FQDNs. <br />To configure DNS FQDN resolution, add the number of domain labels to display. Up to 30 characters are displayed from left to right.
+1. Select **SAVE**.
+1. To ensure your DNS settings are correct, select **Lookup Test**. The test ensures that the DNS server IP address and DNS server port are set correctly.
+
+### To configure a Forwarding Rule to block suspected traffic with the Palo Alto firewall
+
+1. In the left pane, select **Forwarding**. The Forwarding pane appears.
+
+ :::image type="content" source="media/integration-paloalto/forward.png" alt-text="The forwarding screen.":::
+
+1. In the **Forwarding** pane, select **Create Forwarding Rule**.
+
+ :::image type="content" source="media/integration-paloalto/forward-rule.png" alt-text="Create Forwarding Rule":::
+
+1. To configure the Palo Alto Panorama Forwarding Rule:
+
+ Define the standard rule parameters and from the **Actions** drop-down box, select **Send to Palo Alto Panorama**. The action details pane appears.
+
+ :::image type="content" source="media/integration-paloalto/details.png" alt-text="Select action":::
+
+1. In the Actions pane, set the following parameters:
+
+ - **Host**: Enter the Panorama server IP address.
+
+ - **Port**: Enter the Panorama server port.
+ - **Username**: Enter the Panorama server username.
+ - **Password**: Enter the Panorama server password.
+ - **Report Address**: Define how the blocking is executed, as follows:
+
+ - **By IP Address**: Always creates blocking policies on Panorama based on the IP address.
+
+ - **By FQDN or IP Address**: Creates blocking policies on Panorama based on FQDN if it exists, otherwise by the IP Address.
+
+ - **Email**: Set the email address for the policy notification email
+ > [!NOTE]
+ > Make sure you have configured a mail server in Defender for IoT. If no email address is entered, Defender for IoT does not send a notification email.
+ - **Execute a DNS lookup upon alert detection (Checkbox)**: When the **By FQDN or IP Address** option is set in the Report Address, this checkbox is selected by default. If only the IP address is set, this option is disabled.
+ - **Configure**: Set up the following options to allow blocking of the suspicious sources by the Palo Alto Panorama:
+
+ - **Block illegal function codes**: Protocol violations - Illegal field value violating ICS protocol specification (potential exploit).
+
+ - **Block unauthorized PLC programming/firmware updates**: Unauthorized PLC changes.
+
+ - **Block unauthorized PLC stop**: PLC stop (downtime).
+
+ - **Block malware-related alerts**: Blocking of industrial malware attempts (TRITON, NotPetya, etc.). You can select the option of **Automatic blocking**. In that case, the blocking is executed automatically and immediately.
+
+ - **Block unauthorized scanning**: Unauthorized scanning (potential reconnaissance).
+
+1. Select **Submit**.
+
+### To block the suspicious source
+
+1. In the **Alerts** pane, select the alert related to the Palo Alto integration. The **Alert Details** dialog box appears.
+
+ :::image type="content" source="media/integration-paloalto/unauthorized.png" alt-text="Alert details":::
+
+1. To automatically block the suspicious source, select **Block Source**.
+
+1. In the **Please Confirm** dialog box, select **OK**.
+
+ :::image type="content" source="media/integration-paloalto/please.png" alt-text="Confirm":::
+
+## Next steps
+
+Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/integration-servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/integration-servicenow.md new file mode 100644
@@ -0,0 +1,357 @@
+---
+title: About the ServiceNow integration
+titleSuffix: Azure Defender for IoT
+description: The Defender for IoT ICS Management application for ServiceNow provides SOC analysts with multidimensional visibility into the specialized OT protocols and IoT devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior.
+author: shhazam-ms
+manager: rkarlin
+ms.author: shhazam
+ms.date: 1/17/2021
+ms.topic: article
+ms.service: azure
+---
+
+# The Defender for IoT ICS management application for ServiceNow
+
+The Defender for IoT ICS Management application for ServiceNow provides SOC analysts with multidimensional visibility into the specialized OT protocols and IoT devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior. This is an important evolution given the ongoing convergence of IT and OT to support new IoT initiatives, such as smart machines and real-time intelligence.
+
+The application also enables both IT and OT incident response from within one corporate SOC.
+
+## About Defender for IoT
+
+Defender for IoT delivers the only ICS and IoT cybersecurity platform built by blue-team experts with a track record defending critical national infrastructure, and the only platform with patented ICS-aware threat analytics and machine learning. Defender for IoT provides:
+
+- Immediate insights about the ICS device landscape with an extensive range of details about device attributes.
+
+- ICS-aware deep embedded knowledge of OT protocols, devices, applications, and their behaviors.
+
+- Immediate insights into vulnerabilities, and known zero-day threats.
+
+- An automated ICS threat modeling technology to predict the most likely paths of targeted ICS attacks via proprietary analytics.
+
+> [!Note]
+> References to CyberX refer to Azure Defender for IoT.
+
+## About the integration
+
+The Defender for IoT integration with ServiceNow provides a new level of centralized visibility, monitoring, and control for the IoT and OT landscape. These bridged platforms enable automated device visibility and threat management for previously unreachable ICS and IoT devices.
+
+### Threat management
+
+The Defender for IoT ICS Management application helps:
+
+- Reduce the time required for industrial and critical infrastructure organizations to detect, investigate, and act on cyber threats.
+
+- Obtain real-time intelligence about OT risks.
+
+- Correlate Defender for IoT alerts with ServiceNow threat monitoring and incident management workflows.
+
+- Trigger ServiceNow tickets and workflows with other services and applications on the ServiceNow platform.
+
+ICS and SCADA security threats are identified by Defender for IoT security engines, which provide immediate alert response to malware attacks, network and security policy deviations, as well as operational and protocol anomalies. For details about the alert information sent to ServiceNow, see [Alert reporting](#alert-reporting).
+
+### Device visibility and management
+
+The ServiceNow Configuration Management Database (CMDB) is enriched and supplemented with a rich set of device attributes pushed by the Defender for IoT platform. This ensures comprehensive and continuous visibility into the device landscape and lets you monitor and respond from a single-pane-of-glass. For details about the device attributes sent to ServiceNow, see [View Defender for IoT detections in ServiceNow](#view-defender-for-iot-detections-in-servicenow).
+
+## System requirements and architecture
+
+This article describes:
+
+- **Software Requirements**
+- **Architecture**
+
+## Software requirements
+
+- ServiceNow Service Management version 3.0.2
+
+- Defender for IoT patch 2.8.11.1 or above
+
+> [!Note]
+> If you are already working with a Defender for IoT and ServiceNow integration, and you upgrade using the on-premises management console, previous data received from Defender for IoT sensors should be cleared from ServiceNow.
+
+## Architecture
+
+### On-premises management console architecture
+
+The on-premises management console provides a unified source for all the device and alert information sent to ServiceNow.
+
+You can set up an on-premises management console to communicate with one instance of ServiceNow. The on-premises management console pushes sensor data to the Defender for IoT application using REST API.
+
+If you are setting up your system to work with an on-premises management console, disable the ServiceNow Sync, Forwarding Rules and proxy configurations in sensors, if they were set up.
+
+These configurations should be set up for the on-premises management console. Configuration instructions are described in this article.
+
+### Sensor architecture
+
+If you want to set up your environment to include direct communication between sensors and ServiceNow, for each sensor define the ServiceNow Sync, Forwarding rules, and proxy configuration (if a proxy is needed).
+
+It is recommended to set up your integration using the on-premises management console to communicate with ServiceNow.
+
+## Create access tokens in ServiceNow
+
+This article describes how to create an access token in ServiceNow. The token is needed to communicate with Defender for IoT.
+
+You will need the **Client ID** and **Client Secret** when creating Defender for IoT Forwarding rules, which forward alert information to ServiceNow, and when configuring Defender for IoT to push device attributes to ServiceNow tables.
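+
+As an illustration of what happens when Defender for IoT authenticates with these credentials, ServiceNow's standard OAuth token endpoint can be exercised directly. The following is a minimal sketch; the instance name and account values are placeholders:
+
+```bash
+# Request an OAuth access token from a ServiceNow instance using the
+# Client ID and Client Secret from the Application Registries page.
+curl "https://<instance>.service-now.com/oauth_token.do" \
+  -d "grant_type=password" \
+  -d "client_id=<client-id>" \
+  -d "client_secret=<client-secret>" \
+  -d "username=<servicenow-username>" \
+  -d "password=<servicenow-password>"
+# The JSON response contains the access_token and refresh_token fields.
+```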
+
+## Set up Defender for IoT to communicate with ServiceNow
+
+This article describes how to set up Defender for IoT to communicate with ServiceNow.
+
+### Send Defender for IoT alert information
+
+This article describes how to configure Defender for IoT to push alert information to ServiceNow tables. For information about alert data sent, see [Alert reporting](#alert-reporting).
+
+Defender for IoT alerts appear in ServiceNow as security incidents.
+
+Define a Defender for IoT *Forwarding* rule to send alert information to ServiceNow.
+
+To define the rule:
+
+1. In the Defender for IoT left pane, select **Forwarding**.
+
+1. Select the :::image type="content" source="media/integration-servicenow/plus.png" alt-text="The plus icon button."::: icon. The Create Forwarding Rule dialog box opens.
+
+ :::image type="content" source="media/integration-servicenow/forwarding-rule.png" alt-text="Create Forwarding Rule":::
+
+1. Add a rule name.
+
+1. Define criteria under which Defender for IoT will trigger the forwarding rule. Working with Forwarding rule criteria helps pinpoint and manage the volume of information sent from Defender for IoT to ServiceNow. The following options are available:
+
+ - **Severity levels:** This is the minimum security level incident to forward. For example, if **Minor** is selected, minor alerts and any alert above this severity level will be forwarded. Levels are pre-defined by Defender for IoT.
+
+ - **Protocols:** Only trigger the forwarding rule if the traffic detected was running over specific protocols. Select the required protocols from the drop-down list or choose them all.
+
+ - **Engines:** Select the required engines or choose them all. Alerts from selected engines will be sent.
+
+1. Verify that **Report Alert Notifications** is selected.
+
+1. In the Actions section, select **Add** and then select **ServiceNow**.
+
+ :::image type="content" source="media/integration-servicenow/select-servicenow.png" alt-text="Select ServiceNow from the dropdown options.":::
+
+1. Enter the ServiceNow action parameters:
+
+ :::image type="content" source="media/integration-servicenow/parameters.png" alt-text="Fill in the ServiceNow action parameters":::
+
+1. In the **Actions** pane, set the following parameters:
+
+ | Parameter | Description |
+ |--|--|
+ | Domain | Enter the ServiceNow server IP address. |
+ | Username | Enter the ServiceNow server username. |
+ | Password | Enter the ServiceNow server password. |
+ | Client ID | Enter the Client ID you received for Defender for IoT in the **Application Registries** page in ServiceNow. |
+ | Client Secret | Enter the client secret string you created for Defender for IoT in the **Application Registries** page in ServiceNow. |
+ | Report Type | **Incidents**: Forward a list of alerts that are presented in ServiceNow with an incident ID and short description of each alert.<br /><br />**Defender for IoT Application**: Forward full alert information, including the sensor details, the engine, and the source and destination addresses. The information is forwarded to the Defender for IoT application on ServiceNow. |
+
+1. Select **SAVE**. Defender for IoT alerts appear as incidents in ServiceNow.
+
+### Send Defender for IoT device attributes
+
+This article describes how to configure Defender for IoT to push an extensive range of device attributes to ServiceNow tables. See [Inventory information](#inventory-information) for details about the kind of information pushed to ServiceNow.
+
+To send attributes to ServiceNow, you must map your on-premises management console to a ServiceNow instance. This ensures that the Defender for IoT platform can communicate and authenticate with the instance.
+
+To add a ServiceNow instance:
+
+1. Sign in to your Defender for IoT on-premises management console.
+
+1. Select **System Settings** and then **ServiceNow** from the on-premises management console Integration section.
+
+ :::image type="content" source="media/integration-servicenow/servicenow.png" alt-text="Select the ServiceNow button.":::
+
+1. Enter the following sync parameters in the ServiceNow Sync dialog box.
+
+ :::image type="content" source="media/integration-servicenow/sync.png" alt-text="The ServiceNow sync dialog box.":::
+
+ | Parameter | Description |
+ |--|--|
+ | Enable Sync | Enable and disable the sync after defining parameters. |
+ | Sync Frequency (minutes) | By default, information is pushed to ServiceNow every 60 minutes. The minimum is 5 minutes. |
+ | ServiceNow Instance | Enter the ServiceNow instance URL. |
+ | Client ID | Enter the Client ID you received for Defender for IoT in the **Application Registries** page in ServiceNow. |
+ | Client Secret | Enter the Client Secret string you created for Defender for IoT in the **Application Registries** page in ServiceNow. |
+ | Username | Enter the username for this instance. |
+ | Password | Enter the password for this instance. |
+
+1. Select **SAVE**.
+
+## Verify communication
+
+Verify that the on-premises management console is connected to the ServiceNow instance by reviewing the *Last Sync* date.
+
+:::image type="content" source="media/integration-servicenow/sync-confirmation.png" alt-text="Verify the communication occurred by looking at the last sync.":::
+
+## Set up the integrations using the HTTPS proxy
+
+When setting up the Defender for IoT and ServiceNow integration, the on-premises management console and the ServiceNow server communicate using port 443. If the ServiceNow server is behind a proxy, the default port cannot be used.
+
+Defender for IoT supports an HTTPS proxy in the ServiceNow integration by letting you change the default port used for the integration.
+
+To configure the proxy:
+
+1. Edit the global properties file on the on-premises management console:
+ `sudo vim /var/cyberx/properties/global.properties`
+
+2. Add the following parameters:
+
+ - `servicenow.http_proxy.enabled=1`
+
+ - `servicenow.http_proxy.ip=1.179.148.9`
+
+ - `servicenow.http_proxy.port=59125`
+
+3. Save and exit.
+
+4. Run the following command: `sudo monit restart all`
+
+After configuration, all the ServiceNow data is forwarded using the configured proxy.
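+
+The same four steps can be scripted. The following is a minimal sketch, reusing the example proxy address and port from the steps above:
+
+```bash
+# Append the HTTPS proxy settings to the global properties file.
+# The proxy IP and port are the example values from the steps above.
+sudo tee -a /var/cyberx/properties/global.properties <<'EOF'
+servicenow.http_proxy.enabled=1
+servicenow.http_proxy.ip=1.179.148.9
+servicenow.http_proxy.port=59125
+EOF
+
+# Restart the services so that the new proxy settings take effect.
+sudo monit restart all
+```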
+
+## Download the Defender for IoT application
+
+This article describes how to download the application.
+
+To access the Defender for IoT application:
+
+1. Navigate to <https://store.servicenow.com/>
+
+2. Search for `Defender for IoT` or `CyberX IoT/ICS Management`.
+
+ :::image type="content" source="media/integration-servicenow/search-results.png" alt-text="Search for CyberX in the search bar.":::
+
+3. Select the application.
+
+ :::image type="content" source="media/integration-servicenow/cyberx-app.png" alt-text="Select the application from the list.":::
+
+4. Select **Request App.**
+
+ :::image type="content" source="media/integration-servicenow/sign-in.png" alt-text="Sign in to the application with your credentials.":::
+
+5. Sign in and download the application.
+
+## View Defender for IoT detections in ServiceNow
+
+This article describes the device attributes and alert information presented in ServiceNow.
+
+To view device attributes:
+
+1. Sign in to ServiceNow.
+
+2. Navigate to **CyberX Platform**.
+
+3. Navigate to **Inventory** or **Alert**.
+
+ [:::image type="content" source="media/integration-servicenow/alert-list.png" alt-text="Inventory or Alert":::](media/integration-servicenow/alert-list.png#lightbox)
+
+## Inventory information
+
+The Configuration Management Database (CMDB) is enriched and supplemented by data sent by Defender for IoT to ServiceNow. By adding or updating device attributes on ServiceNow's CMDB configuration item tables, Defender for IoT can trigger ServiceNow workflows and business rules.
+
+The following information is available:
+
+- Device attributes, for example the device MAC, OS, vendor, or protocol detected.
+
+- Firmware information, for example the firmware version and serial number.
+
+- Connected device information, for example the direction of the traffic between the source and destination.
+
+### Device attributes
+
+This article describes the device attributes pushed to ServiceNow.
+
+| Item | Description |
+|--|--|
+| Appliance | The name of the sensor that detected the traffic. |
+| ID | The device ID assigned by Defender for IoT. |
+| Name | The device name. |
+| IP Address | The device IP address or addresses. |
+| Type | The device type, for example a switch, PLC, historian, or Domain Controller. |
+| MAC Address | The device MAC address or addresses. |
+| Operating System | The device operating system. |
+| Vendor | The device vendor. |
+| Protocols | The protocols detected in the traffic generated by the device. |
+| Owner | Enter the name of the device owner. |
+| Location | Enter the physical location of the device. |
+
+You can also view the devices connected to a selected device.
+
+To view connected devices:
+
+1. Select a device and then select the **Appliance** listed for that device.
+
+ :::image type="content" source="media/integration-servicenow/appliance.png" alt-text="Select the desired appliance from the list.":::
+
+1. In the **Device Details** dialog box, select **Connected Devices**.
+
+### Firmware details
+
+This article describes the device firmware information pushed to ServiceNow.
+
+| Item | Description |
+|--|--|
+| Appliance | The name of the sensor that detected the traffic. |
+| Device | The device name. |
+| Address | The device IP address. |
+| Module Address | The device model and slot number or ID. |
+| Serial | The device serial number. |
+| Model | The device model number. |
+| Version | The firmware version number. |
+| Additional Data | More data about the firmware as defined by the vendor, for example the device type. |
+
+### Connection details
+
+This article describes the device connection information pushed to ServiceNow.
+
+:::image type="content" source="media/integration-servicenow/connections.png" alt-text="The device's connection information":::
+
+| Item | Description |
+|--|--|
+| Appliance | The name of the sensor that detected the traffic. |
+| Direction | The direction of the traffic. <br /> <br /> - **One Way** indicates that the Destination is the server and Source is the client. <br /> <br /> - **Two Way** indicates that both the source and the destination are servers, or that the client is unknown. |
+| Source device ID | The IP address of the device that communicated with the connected device. |
+| Source device name | The name of the device that communicated with the connected device. |
+| Destination device ID | The IP address of the connected device. |
+| Destination device name | The name of the connected device. |
+
+## Alert reporting
+
+Alerts are triggered when Defender for IoT engines detect changes in network traffic and behavior that require your attention. For details on the kinds of alerts each engine generates, see [About alert engines](#about-alert-engines).
+
+This article describes the device alert information pushed to ServiceNow.
+
+| Item | Description |
+|--|--|
+| Created | The time and date the alert was generated. |
+| Engine | The engine that detected the event. |
+| Title | The alert title. |
+| Description | The alert description. |
+| Protocol | The protocol detected in the traffic. |
+| Severity | The alert severity defined by Defender for IoT. |
+| Appliance | The name of the sensor that detected the traffic. |
+| Source name | The source name. |
+| Source IP address| The source IP address. |
+| Destination name | The destination name. |
+| Destination IP address | The destination IP address. |
+| Assignee | Enter the name of the individual assigned to the ticket. |
+
+### Updating alert information
+
+Select an entry in the **Created** column to view alert information in a form. You can update alert details and assign the alert to an individual to review and handle it.
+
+[:::image type="content" source="media/integration-servicenow/alert.png" alt-text="View the alert's information.":::](media/integration-servicenow/alert.png#lightbox)
+
+### About alert engines
+
+This article describes the kind of alerts each engine triggers.
+
+| Alert type | Description |
+|--|--|
+| Policy violation alerts | Triggered when the Policy Violation engine detects a deviation from traffic previously learned. For example: <br /><br />- A new device is detected. <br /><br />- A new configuration is detected on a device. <br /><br />- A device not defined as a programming device carries out a programming change. <br /><br />- A firmware version changed. |
+| Protocol violation alerts | Triggered when the Protocol Violation engine detects a packet structure or field value that doesn't comply with the protocol specification. |
+| Operational alerts | Triggered when the Operational engine detects network operational incidents or device malfunctioning. For example, a network device was stopped using a Stop PLC command, or an interface on a sensor stopped monitoring traffic. |
+| Malware alerts | Triggered when the Malware engine detects malicious network activity, for example, known attacks such as Conficker. |
+| Anomaly alerts | Triggered when the Anomaly engine detects a deviation. For example, a device is performing network scanning but is not defined as a scanning device. |
+
+## Next steps
+
+Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/integration-splunk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/integration-splunk.md new file mode 100644
@@ -0,0 +1,175 @@
+---
+title: About the Splunk integration
+titleSuffix: Azure Defender for IoT
+description: To address a lack of visibility into the security and resiliency of OT networks, Defender for IoT developed the Defender for IoT, IIoT, and ICS threat monitoring application for Splunk, a native integration between Defender for IoT and Splunk that enables a unified approach to IT and OT security.
+author: shhazam-ms
+manager: rkarlin
+ms.author: shhazam
+ms.date: 1/4/2021
+ms.topic: article
+ms.service: azure
+---
+
+# Defender for IoT and ICS threat monitoring application for Splunk
+
+Defender for IoT mitigates IIoT, ICS, and SCADA risk with patented, ICS-aware self-learning engines that deliver immediate insights about ICS devices, vulnerabilities, and threats in less than an hour and without relying on agents, rules or signatures, specialized skills, or prior knowledge of the environment.
+
+To address a lack of visibility into the security and resiliency of OT networks, Defender for IoT developed the Defender for IoT, IIoT, and ICS threat monitoring application for Splunk, a native integration between Defender for IoT and Splunk that enables a unified approach to IT and OT security.
+
+> [!Note]
+> References to CyberX refer to Azure Defender for IoT.
+
+## About the Splunk application
+
+The application provides SOC analysts with multidimensional visibility into the specialized OT protocols and IIoT devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior. The application also enables both IT and OT incident response from within one corporate SOC. This is an important evolution given the ongoing convergence of IT and OT to support new IIoT initiatives, such as smart machines and real-time intelligence.
+
+The Splunk application can be installed locally or run in the cloud. The integration with Defender for IoT supports both deployments.
+
+## About the integration
+
+The integration of Defender for IoT and Splunk via the native application lets users:
+
+- Reduce the time required for industrial and critical infrastructure organizations to detect, investigate, and act on cyber threats.
+
+- Obtain real-time intelligence about OT risks.
+
+- Correlate Defender for IoT alerts with Splunk Enterprise Security Threat Intelligence repositories.
+
+- Monitor and respond from a single-pane-of-glass.
+
+[:::image type="content" source="media/integration-splunk/splunk-mainpage-v2.png" alt-text="Main page of the splunk tool.":::](media/integration-splunk/splunk-mainpage-v2.png#lightbox)
+
+:::image type="content" source="media/integration-splunk/alerts.png" alt-text="The alerts page in splunk.":::
+
+The application allows Splunk administrators to analyze OT alerts that Defender for IoT sends, and monitor the entire OT security deployment, including details such as:
+
+- Which of the five analytics engines detected the alert.
+
+- Which protocol generated the alert.
+
+- Which Defender for IoT sensor generated the alert.
+
+- The severity level of the alert.
+
+- The source and destination of the communication.
+
+## Requirements
+
+### Version requirements
+
+The following versions are required:
+
+- Defender for IoT version 2.4 and above.
+
+- Splunkbase version 11 and above.
+
+- Splunk Enterprise version 7.2 and above.
+
+## Download the application
+
+Download the *CyberX ICS Threat Monitoring for Splunk Application* from the [Splunkbase](https://splunkbase.splunk.com/app/4313/).
+
+## Splunk permission requirements
+
+The following Splunk permission is required:
+
+- Any user with *admin* user role permissions.
+
+## Send Defender for IoT alerts to Splunk
+
+Defender for IoT alerts provide information about an extensive range of security events, including:
+
+- Deviations from learned baseline network activity.
+
+- Malware detections.
+
+- Detections based on suspicious operational changes.
+
+- Network anomalies.
+
+- Protocol deviations from protocol specifications.
+
+:::image type="content" source="media/integration-splunk/address-scan.png" alt-text="The detections screen.":::
+
+You can configure Defender for IoT to send alerts to the Splunk server, where alert information is displayed in the Splunk Enterprise dashboard.
+
+:::image type="content" source="media/integration-splunk/alerts-and-details.png" alt-text="View all of the alerts and their details.":::
+
+The following alert information is sent to the Splunk server.
+
+- The date and time of the alert.
+
+- The Defender for IoT engine that detected the event: Protocol Violation, Policy Violation, Malware, Anomaly, or Operational engine.
+
+- The alert title.
+
+- The alert message.
+
+- The severity of the alert: Warning, Minor, Major or Critical.
+
+- The source device name.
+
+- The source device IP address.
+
+- The destination device name.
+
+- The destination device IP address.
+
+- The Defender for IoT platform IP address (Host).
+
+- The name of the Defender for IoT platform appliance (source type).
+
+Sample output is shown below:
+
+| Time | Event |
+|--|--|
+| 7/23/15<br />9:28:31.000 PM | **Defender for IoT platform Alert**: A device was stopped by a PLC Command<br /><br />**Type**: Operational Violation <br /><br />**Severity**: Major <br /><br />**Source name**: my_device1 <br /><br />**Source IP**: 192.168.110.131 <br /><br />**Destination name**: my_device2<br /><br /> **Destination IP**: 10.140.33.238 <br /><br />**Message**: A network device was stopped using a Stop PLC command. This device will not operate until a Start command is sent. 192.168.110.131 was stopped by 10.140.33.238 (a Siemens S7 device), using a PLC Stop command.<br /><br />**Host**: 192.168.90.43 <br /><br />**Sourcetype**: Sensor_Agent |
+
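+To verify that forwarded alerts are reaching Splunk, one option is a quick query through Splunk's REST search API. The following is a minimal sketch, not part of the integration itself: the host name is a placeholder, and the `Sensor_Agent` sourcetype is taken from the sample output above.
+
+```powershell
+# Sketch: export the last 24 hours of Defender for IoT alerts from Splunk.
+# Requires PowerShell 7+ for -SkipCertificateCheck; drop it if the Splunk
+# certificate is trusted.
+$cred = Get-Credential   # a Splunk account with search permissions
+$body = @{
+    search      = 'search sourcetype=Sensor_Agent earliest=-24h'
+    output_mode = 'json'
+}
+Invoke-RestMethod -Uri 'https://splunk.contoso.com:8089/services/search/jobs/export' `
+    -Method Post -Authentication Basic -Credential $cred -Body $body -SkipCertificateCheck
+```
+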
+## Define alert forwarding rules
+
+Use Defender for IoT *Forwarding Rules* to send alert information to Splunk servers.
+
+Options are available to customize the alert rules based on the:
+
+- Specific protocols detected.
+
+- Severity of the event.
+
+- Defender for IoT engine that detects events.
+
+To create a forwarding rule:
+
+1. From the sensor or on-premises management console left pane, select **Forwarding**.
+
+ :::image type="content" source="media/integration-splunk/forwarding.png" alt-text="Select the blue button Create Forwarding Alert.":::
+
+1. Select **Create Forwarding Rules**. In the **Create Forwarding Rule** window, define the rule parameters.
+
+ :::image type="content" source="media/integration-splunk/forwarding-rule.png" alt-text="Create the rules for your forwarding rule.":::
+
+ | Parameter | Description |
+ |--|--|
+ | **Name** | The forwarding rule name. |
+ | **Select Severity** | The minimum severity level to forward. For example, if **Minor** is selected, minor alerts and any alert above this severity level are forwarded. |
+ | **Protocols** | By default, all the protocols are selected. To select a specific protocol, select **Specific** and select the protocol for which this rule is applied. |
+ | **Engines** | By default, all the security engines are involved. To select a specific security engine for which this rule is applied, select **Specific** and select the engine. |
+ | **System Notifications** | Forward sensor online/offline status. This option is only available if you have logged in to the Central Manager. |
+
+1. To instruct Defender for IoT to send alert information to Splunk, select **Action**, and then select **Send to Splunk Server**.
+
+1. Enter the following Splunk parameters.
+
+ :::image type="content" source="media/integration-splunk/parameters.png" alt-text="The Splunk parameters you should enter on this screen.":::
+
+ | Parameter | Description |
+ |--|--|
+ | **Host** | Splunk server address |
+ | **Port** | 8089 |
+ | **Username** | Splunk server username |
+ | **Password** | Splunk server password |
+
+1. Select **Submit**.
+
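+Before selecting **Submit**, you can optionally confirm that the Splunk management port entered above is reachable from your network. A minimal check from a Windows machine (the host name is a placeholder):
+
+```powershell
+# Sketch: verify TCP reachability of the Splunk management port (8089) used above.
+Test-NetConnection -ComputerName 'splunk.contoso.com' -Port 8089
+```
+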
+## Next steps
+
+Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/domain-joined/identity-broker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/domain-joined/identity-broker.md
@@ -121,9 +121,9 @@ To troubleshoot authentication issues, see [this guide](./domain-joined-authenti
In the HDInsight ID Broker setup, custom apps and clients that connect to the gateway can be updated to acquire the required OAuth token first. Follow the steps in [this document](../../storage/common/storage-auth-aad-app.md) to acquire the token with the following information:
-* OAuth resource uri: `https://hib.azurehdinsight.net`
+* OAuth resource uri: `https://hib.azurehdinsight.net`
* AppId: 7865c1d2-f040-46cc-875f-831a1ef6a28a
-* Permission: (name: Cluster.ReadWrite, id: 8f89faa0-ffef-4007-974d-4989b39ad77d)
+* Permission: (name: Cluster.ReadWrite, id: 8f89faa0-ffef-4007-974d-4989b39ad77d)
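+For a PowerShell-based client, a hedged sketch of acquiring and using the token with the Az module (the cluster name is a placeholder; the resource URI comes from the list above):
+
+```powershell
+# Sketch: get an OAuth token for the HIB resource and call the cluster gateway.
+Connect-AzAccount
+$token = (Get-AzAccessToken -ResourceUrl 'https://hib.azurehdinsight.net').Token
+$headers = @{ Authorization = "Bearer $token" }
+Invoke-RestMethod -Uri 'https://mycluster-int.azurehdinsight.net/livy/batches' -Headers $headers
+```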
After you acquire the OAuth token, use it in the authorization header of the HTTP request to the cluster gateway (for example, https://<clustername>-int.azurehdinsight.net). A sample curl command to Apache Livy API might look like this example:
@@ -141,7 +141,7 @@ For each cluster, a third party application will be registered in AAD with the c
In AAD, consent is required for all third party applications before they can authenticate users or access data. ### Can the consent be approved programmatically?
-Microsoft Graph api allows you to automate the consent, see the [API documentation](/graph/api/resources/oauth2permissiongrant?view=graph-rest-1.0)
+The Microsoft Graph API allows you to automate the consent. See the [API documentation](/graph/api/resources/oauth2permissiongrant).
The sequence to automate the consent is: * Register an app and grant Application.ReadWrite.All permissions to the app, to access Microsoft Graph
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hadoop/troubleshoot-invalidnetworkconfigurationerrorcode-cluster-creation-fails https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/troubleshoot-invalidnetworkconfigurationerrorcode-cluster-creation-fails.md
@@ -33,7 +33,7 @@ This error points to a problem with custom DNS configuration. DNS servers within
1. If the above command doesn't return an IP address, then run `nslookup <host_fqdn> 168.63.129.16` (for example, `nslookup hn1-hditest.5h6lujo4xvoe1kprq3azvzmwsd.hx.internal.cloudapp.net 168.63.129.16`). If this command is able to resolve the IP, it means that either your DNS server isn't forwarding the query to Azure's DNS, or it isn't a VM that is part of the same virtual network as the cluster.
-1. If you don't have an Azure VM that can act as a custom DNS server in the clusterΓÇÖs virtual network, then you need to add this first. Create a VM in the virtual network, which will be configured as DNS forwarder.
+1. If you don't have an Azure VM that can act as a custom DNS server in the cluster's virtual network, then you need to add one first. Create a VM in the virtual network, which will be configured as a DNS forwarder.
1. Once you have a VM deployed in your virtual network, configure the DNS forwarding rules on this VM. Forward all iDNS name resolution requests to 168.63.129.16, and the rest to your DNS server. [Here](../hdinsight-plan-virtual-network-deployment.md) is an example of this setup for a custom DNS server.
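For a quick check from a Windows VM in the same virtual network, a PowerShell equivalent of the nslookup steps above (the FQDN is the example host used earlier):

```powershell
# Sketch: resolve the cluster host directly against Azure's recursive resolver.
Resolve-DnsName -Name 'hn1-hditest.5h6lujo4xvoe1kprq3azvzmwsd.hx.internal.cloudapp.net' -Server 168.63.129.16
```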
@@ -41,11 +41,11 @@ This error points to a problem with custom DNS configuration. DNS servers within
---
-## "Failed to connect to Azure Storage AccountΓÇ¥
+## "Failed to connect to Azure Storage Account"
### Issue
-Error description contains "Failed to connect to Azure Storage AccountΓÇ¥ or ΓÇ£Failed to connect to Azure SQL".
+Error description contains "Failed to connect to Azure Storage Account" or "Failed to connect to Azure SQL".
### Cause
@@ -67,7 +67,7 @@ Azure Storage and SQL don't have fixed IP Addresses, so we need to allow outboun
### Issue
-Error description contains "Failed to establish an outbound connection from the cluster for the communication with the HDInsight resource provider. Please ensure that outbound connectivity is allowed.ΓÇ¥
+Error description contains "Failed to establish an outbound connection from the cluster for the communication with the HDInsight resource provider. Please ensure that outbound connectivity is allowed."
### Cause
@@ -148,7 +148,7 @@ Another cause for this `InvalidNetworkConfigurationErrorCode` error code could b
### Resolution
-Use the valid parameters for `Get-AzVirtualNetwork` as documented in the [Az PowerShell SDK](https://docs.microsoft.com/powershell/module/az.network/get-azvirtualnetwork?view=azps-5.3.0&viewFallbackFrom=azps-4.2.0)
+Use the valid parameters for `Get-AzVirtualNetwork` as documented in the [Az PowerShell SDK](/powershell/module/az.network/get-azvirtualnetwork)
---
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-private-link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-private-link.md
@@ -55,7 +55,7 @@ Standard load balancers do not automatically provide the [public outbound NAT](.
### Prepare your environment
-For successgfull creation of private link services, you must explicitly [disable network policies for private link service](../private-link/disable-private-link-service-network-policy.md).
+For successful creation of private link services, you must explicitly [disable network policies for private link service](../private-link/disable-private-link-service-network-policy.md).
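+As an illustration, a minimal PowerShell sketch of disabling the policy on the cluster subnet (the resource group, virtual network, and subnet names are placeholders):
+
+```powershell
+# Sketch: disable private link service network policies on the cluster subnet,
+# then persist the change.
+$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myRG' -Name 'myVNet'
+($vnet.Subnets | Where-Object { $_.Name -eq 'default' }).PrivateLinkServiceNetworkPolicies = 'Disabled'
+$vnet | Set-AzVirtualNetwork
+```
+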
The following diagram shows an example of the networking configuration required before you create a cluster. In this example, all outbound traffic is [forced](../firewall/forced-tunneling.md) to Azure Firewall using UDR and the required outbound dependencies should be "allowed" on the firewall before creating a cluster. For Enterprise Security Package clusters, the network connectivity to Azure Active Directory Domain Services can be provided by VNet peering.
@@ -95,12 +95,12 @@ networkProperties: {
For a complete template with many of the HDInsight enterprise security features, including Private Link, see [HDInsight enterprise security template](https://github.com/Azure-Samples/hdinsight-enterprise-security/tree/main/ESP-HIB-PL-Template).
-### Use Azure Powershell
+### Use Azure PowerShell
-To use powershell see the example [here](/powershell/module/az.hdinsight/new-azhdinsightcluster?view=azps-5.1.0#example-4--create-an-azure-hdinsight-cluster-with-relay-outbound-and-private-link-feature).
+To use PowerShell, see the example [here](/powershell/module/az.hdinsight/new-azhdinsightcluster#example-4--create-an-azure-hdinsight-cluster-with-relay-outbound-and-private-link-feature).
### Use Azure CLI
-To use Azure CLI, see the example [here](/cli/azure/hdinsight?view=azure-cli-latest#az_hdinsight_create-examples).
+To use Azure CLI, see the example [here](/cli/azure/hdinsight#az_hdinsight_create-examples).
## Next steps
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/quickstart-load-balancer-standard-internal-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
@@ -48,6 +48,12 @@ Create a resource group with [az group create](/cli/azure/group#az_group_create)
>[!NOTE] >Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](skus.md)**.
+In this section, you create a load balancer that load balances virtual machines.
+
+When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
+
+The following diagram shows the resources created in this quickstart:
+ :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal.png" alt-text="Standard load balancer resources created for quickstart." border="false"::: ## Configure virtual network - Standard
@@ -310,12 +316,9 @@ Create a load balancer rule with [az network lb rule create](/cli/azure/network/
--frontend-ip-name myFrontEnd \ --backend-pool-name myBackEndPool \ --probe-name myHealthProbe \
- --disable-outbound-snat true \
--idle-timeout 15 \ --enable-tcp-reset true ```
->[!NOTE]
->The virtual machines in the backend pool will not have outbound internet connectivity with this configuration. </br> For more information on providing outbound connectivity, see: </br> **[Outbound connections in Azure](load-balancer-outbound-connections.md)**</br> Options for providing connectivity: </br> **[Outbound-only load balancer configuration](egress-only.md)** </br> **[What is Virtual Network NAT?](../virtual-network/nat-overview.md)**
### Add virtual machines to load balancer backend pool
@@ -345,6 +348,12 @@ Add the virtual machines to the backend pool with [az network nic ip-config addr
>[!NOTE] >Standard SKU load balancer is recommended for production workloads. For more information about SKUS, see **[Azure Load Balancer SKUs](skus.md)**.
+In this section, you create a load balancer that load balances virtual machines.
+
+When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
+
+The following diagram shows the resources created in this quickstart:
+ :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal-basic.png" alt-text="Basic load balancer resources created in quickstart." border="false"::: ## Configure virtual network - Basic
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/quickstart-load-balancer-standard-internal-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md
@@ -36,12 +36,14 @@ Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
>[!NOTE] >Standard SKU load balancer is recommended for production workloads. For more information about skus, see **[Azure Load Balancer SKUs](skus.md)**.
-:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal.png" alt-text="Standard load balancer resources created for quickstart." border="false":::
- In this section, you create a load balancer that load balances virtual machines. When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
+The following diagram shows the resources created in this quickstart:
+
+:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal.png" alt-text="Standard load balancer resources created for quickstart." border="false":::
+ A private IP address in the virtual network is configured as the frontend (named as **LoadBalancerFrontend** by default) for the load balancer. The frontend IP address can be **Static** or **Dynamic**.
@@ -194,13 +196,9 @@ In this section, you'll create a load balancer rule:
| Health probe | Select **myHealthProbe**. | | Idle timeout (minutes) | Move the slider to **15** minutes. | | TCP reset | Select **Enabled**. |
- | Outbound source network address translation (SNAT) | Select **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
-
+
4. Leave the rest of the defaults and then select **OK**.
->[!NOTE]
->The virtual machines in the backend pool will not have outbound internet connectivity with this configuration. </br> For more information on providing outbound connectivity, see: </br> **[Outbound connections in Azure](load-balancer-outbound-connections.md)**</br> Options for providing connectivity: </br> **[Outbound-only load balancer configuration](egress-only.md)** </br> **[What is Virtual Network NAT?](../virtual-network/nat-overview.md)**
- ## Create backend servers In this section, you:
@@ -273,12 +271,14 @@ These VMs are added to the backend pool of the load balancer that was created ea
>[!NOTE] >Standard SKU load balancer is recommended for production workloads. For more information about skus, see **[Azure Load Balancer SKUs](skus.md)**.
-:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal-basic.png" alt-text="Basic load balancer resources created in quickstart." border="false":::
- In this section, you create a load balancer that load balances virtual machines. When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
+The following diagram shows the resources created in this quickstart:
+
+:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal-basic.png" alt-text="Basic load balancer resources created in quickstart." border="false":::
+ A private IP address in the virtual network is configured as the frontend (named as **LoadBalancerFrontend** by default) for the load balancer. The frontend IP address can be **Static** or **Dynamic**.
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/quickstart-load-balancer-standard-internal-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
@@ -45,6 +45,12 @@ New-AzResourceGroup -Name 'CreateIntLBQS-rg' -Location 'eastus'
>[!NOTE] >Standard SKU load balancer is recommended for production workloads. For more information about skus, see **[Azure Load Balancer SKUs](skus.md)**.
+In this section, you create a load balancer that load balances virtual machines.
+
+When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
+
+The following diagram shows the resources created in this quickstart:
+ :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal.png" alt-text="Standard load balancer resources created for quickstart." border="false"::: ## Configure virtual network - Standard
@@ -185,7 +191,7 @@ $lbrule = @{
FrontendIpConfiguration = $feip BackendAddressPool = $bePool }
-$rule = New-AzLoadBalancerRuleConfig @lbrule -EnableTcpReset -DisableOutboundSNAT
+$rule = New-AzLoadBalancerRuleConfig @lbrule -EnableTcpReset
## Create the load balancer resource. ## $loadbalancer = @{
@@ -201,8 +207,6 @@ $loadbalancer = @{
New-AzLoadBalancer @loadbalancer ```
->[!NOTE]
->The virtual machines in the backend pool will not have outbound internet connectivity with this configuration. </br> For more information on providing outbound connectivity, see: </br> **[Outbound connections in Azure](load-balancer-outbound-connections.md)**</br> Options for providing connectivity: </br> **[Outbound-only load balancer configuration](egress-only.md)** </br> **[What is Virtual Network NAT?](../virtual-network/nat-overview.md)**
## Create virtual machines - Standard
@@ -300,6 +304,12 @@ Id Name PSJobTypeName State HasMoreData Location
>[!NOTE] >Standard SKU load balancer is recommended for production workloads. For more information about skus, see **[Azure Load Balancer SKUs](skus.md)**.
+In this section, you create a load balancer that load balances virtual machines.
+
+When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
+
+The following diagram shows the resources created in this quickstart:
+ :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal-basic.png" alt-text="Basic load balancer resources created in quickstart." border="false"::: ## Configure virtual network - Basic
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/samples-designer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/samples-designer.md
@@ -67,7 +67,7 @@ Explore these built-in regression samples.
## Classification
-Explore these built-in classification samples. You can learn more about the samples without documentation links by opening the samples and viewing the module comments instead.
+Explore these built-in classification samples. You can learn more about the samples by opening the samples and viewing the module comments in the designer.
| Sample title | Description | | --- | --- |
@@ -79,13 +79,15 @@ Explore these built-in classification samples. You can learn more about the samp
## Computer vision
-Explore these built-in computer vision samples. You can learn more about the samples without documentation links by opening the samples and viewing the module comments instead.
+Explore these built-in computer vision samples. You can learn more about the samples by opening the samples and viewing the module comments in the designer.
+| Sample title | Description |
+| --- | --- |
| Image Classification using DenseNet | Use computer vision modules to build image classification model based on PyTorch DenseNet.| ## Recommender
-Explore these built-in recommender samples. You can learn more about the samples without documentation links by opening the samples and viewing the module comments instead.
+Explore these built-in recommender samples. You can learn more about the samples by opening the samples and viewing the module comments in the designer.
| Sample title | Description | | --- | --- |
@@ -94,7 +96,7 @@ Explore these built-in recommender samples. You can learn more about the samples
## Utility
-Learn more about the samples that demonstrate machine learning utilities and features. You can learn more about the samples without documentation links by opening the samples and viewing the module comments instead.
+Learn more about the samples that demonstrate machine learning utilities and features. You can learn more about the samples by opening the samples and viewing the module comments in the designer.
| Sample title | Description | | --- | --- |
marketplace https://docs.microsoft.com/en-us/azure/marketplace/azure-vm-create-certification-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-certification-faq.md
@@ -6,7 +6,7 @@ ms.subservice: partnercenter-marketplace-publisher
ms.topic: troubleshooting author: iqshahmicrosoft ms.author: iqshah
-ms.date: 01/15/2021
+ms.date: 01/18/2021
--- # Troubleshoot virtual machine certification
@@ -65,7 +65,7 @@ Provisioning issues can include the following failure scenarios:
### Conectix cookie and other VHD specifications
-The 'conectix' string is part of the VHD specification. It's defined as the 8-byte cookie in the VHD footer that identifies the file creator. All VHD files created by Microsoft have this cookie.
+The 'conectix' string is part of the VHD specification. It's defined as the 8-byte cookie in the VHD footer that identifies the file creator. All VHD files created by Microsoft have this cookie.
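+If you want to check the cookie on a VHD before publishing, a minimal PowerShell sketch (the file path is a placeholder; fixed-format VHDs keep the footer in the last 512 bytes, and dynamic VHDs also keep a copy in the first 512 bytes):
+
+```powershell
+# Sketch: read the 512-byte footer of a fixed VHD and print the 8-byte cookie.
+$path = 'C:\vhds\myimage.vhd'
+$stream = [System.IO.File]::OpenRead($path)
+$footer = New-Object byte[] 512
+$null = $stream.Seek(-512, [System.IO.SeekOrigin]::End)
+$null = $stream.Read($footer, 0, 512)
+$stream.Close()
+[System.Text.Encoding]::ASCII.GetString($footer, 0, 8)   # expect 'conectix'
+```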
A VHD formatted blob should have a 512-byte footer in this format:
@@ -306,14 +306,14 @@ To submit your request with SSH disabled image for certification process:
Refer to the following table for any issues that arise when you download the VM image with a shared access signature (SAS) URL.
-|Scenario|Error|Reason|Solution|
-|---|---|---|---|
-|1|Blob not found|The VHD might either be deleted or moved from the specified location.||
-|2|Blob in use|The VHD is used by another internal process.|The VHD should be in a used state when you download it with an SAS URL.|
-|3|Invalid SAS URL|The associated SAS URL for the VHD is incorrect.|Get the correct SAS URL.|
-|4|Invalid signature|The associated SAS URL for the VHD is incorrect.|Get the correct SAS URL.|
-|6|HTTP conditional header|The SAS URL is invalid.|Get the correct SAS URL.|
-|7|Invalid VHD name|Check to see whether any special characters, such as a percent sign `%` or quotation marks `"`, exist in the VHD name.|Rename the VHD file by removing the special characters.|
+|Error|Reason|Solution|
+|---|---|---|
+|Blob not found|The VHD might either be deleted or moved from the specified location.||
+|Blob in use|The VHD is used by another internal process.|The VHD should be in a used state when you download it with an SAS URL.|
+|Invalid SAS URL|The associated SAS URL for the VHD is incorrect.|Get the correct SAS URL.|
+|Invalid signature|The associated SAS URL for the VHD is incorrect.|Get the correct SAS URL.|
+|HTTP conditional header|The SAS URL is invalid.|Get the correct SAS URL.|
+|Invalid VHD name|Check to see whether any special characters, such as a percent sign `%` or quotation marks `"`, exist in the VHD name.|Rename the VHD file by removing the special characters.|
| ## First 1 MB (2048 sectors, each sector of 512 bytes) partition
@@ -371,7 +371,7 @@ These steps apply to Linux only.
1. Enter 2048 as _first sector_ value. You can leave _last sector_ as the default value. >[!IMPORTANT]
- >Any existing data will be erased till 2048 sectors(each sector of 512 bytes). Backup of the VHD before you create a new partition.
+ >Any existing data in the first 2048 sectors (each sector of 512 bytes) will be erased. Back up the VHD before you create a new partition.
![Putty client command line screenshot showing the commands and output for erased data.](./media/create-vm/vm-certification-issues-solutions-22.png)
@@ -553,7 +553,7 @@ To provide a fixed VM image to replace a VM image that has a vulnerability or ex
#### Provide a new VM image to address the security vulnerability or exploit
-To complete these steps, prepare the technical assets for the VM image you want to add. For more information, see [Create a virtual machine using an approved base](azure-vm-create-using-approved-base.md)or [Create a virtual machine using your own image](azure-vm-create-using-own-image.md) and [Generate a SAS URI for your VM image](azure-vm-get-sas-uri.md).
+To complete these steps, prepare the technical assets for the VM image you want to add. For more information, see [Create a virtual machine using an approved base](azure-vm-create-using-approved-base.md) or [Create a virtual machine using your own image](azure-vm-create-using-own-image.md) and [Generate a SAS URI for your VM image](azure-vm-get-sas-uri.md).
1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. In the left pane, select **Commercial Marketplace** > **Overview**.
migrate https://docs.microsoft.com/en-us/azure/migrate/common-questions-discovery-assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/common-questions-discovery-assessment.md
@@ -42,7 +42,8 @@ You can discover up to 10,000 VMware VMs, up to 5,000 Hyper-V VMs, and up to 100
For "Performance-based" assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance cannot collect performance data for the on-premises VMs. Please check: - If the VMs are powered on for the duration for which you are creating the assessment-- If only memory counters are missing and you are trying to assess Hyper-V VMs, check if you have dynamic memory enabled on these VMs. There is a known issue currently due to which Azure Migrate appliance cannot collect memory utilization for such VMs.
+- If only memory counters are missing and you are trying to assess Hyper-V VMs, enable dynamic memory on those VMs and then 'Recalculate' the assessment to reflect the latest changes. The appliance can collect memory utilization values for Hyper-V VMs only when dynamic memory is enabled (a sketch of enabling it follows this list).
+ - If all of the performance counters are missing, ensure that outbound connections on port 443 (HTTPS) are allowed. Note: if any of the performance counters are missing, Azure Migrate: Server Assessment falls back to the allocated cores/memory on-premises and recommends a VM size accordingly.
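+If dynamic memory is disabled on a Hyper-V VM, a minimal sketch of enabling it from the Hyper-V host (the VM name and memory sizes are placeholders; the VM generally must be powered off for this change):
+
+```powershell
+# Sketch: enable dynamic memory so the appliance can collect memory counters.
+Stop-VM -Name 'myVM'
+Set-VMMemory -VMName 'myVM' -DynamicMemoryEnabled $true -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB
+Start-VM -Name 'myVM'
+```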
@@ -53,7 +54,12 @@ The confidence rating is calculated for "Performance-based" assessments based on
- You did not profile your environment for the duration for which you are creating the assessment. For example, if you are creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you cannot wait for the duration, please change the performance duration to a smaller period and 'Recalculate' the assessment. -- Server Assessment is not able to collect the performance data for some or all the VMs in the assessment period. Please check if the VMs were powered on for the duration of the assessment, outbound connections on ports 443 are allowed. For Hyper-V VMs, if dynamic memory is enabled, memory counters will be missing leading to a low confidence rating. Please 'Recalculate' the assessment to reflect the latest changes in confidence rating.
+- Server Assessment is not able to collect the performance data for some or all the VMs in the assessment period. For a high confidence rating, please ensure that:
+ - VMs are powered on for the duration of the assessment
+ - Outbound connections on port 443 are allowed
+ - For Hyper-V VMs, dynamic memory is enabled
+
+ Please 'Recalculate' the assessment to reflect the latest changes in confidence rating.
- Few VMs were created after discovery in Server Assessment had started. For example, if you are creating an assessment for the performance history of last one month, but few VMs were created in the environment only a week ago. In this case, the performance data for the new VMs will not be available for the entire duration and the confidence rating would be low.
migrate https://docs.microsoft.com/en-us/azure/migrate/concepts-assessment-calculation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/concepts-assessment-calculation.md
@@ -263,8 +263,14 @@ This table shows the assessment confidence ratings, which depend on the percenta
Here are a few reasons why an assessment could get a low confidence rating: - You didn't profile your environment for the duration for which you're creating the assessment. For example, if you create the assessment with performance duration set to one day, you must wait at least a day after you start discovery for all the data points to get collected.-- Some VMs were shut down during the time for which the assessment was calculated. If any VMs are turned off for some duration, Server Assessment can't collect the performance data for that period.-- Some VMs were created during the time for which the assessment was calculated. For example, assume you created an assessment for the performance history of the last month, but some VMs were created only a week ago. The performance history of the new VMs won't exist for the complete duration.
+- Assessment is not able to collect the performance data for some or all the VMs in the assessment period. For a high confidence rating, please ensure that:
+ - VMs are powered on for the duration of the assessment
+ - Outbound connections on port 443 are allowed
+ - For Hyper-V VMs, dynamic memory is enabled
+
+ Please 'Recalculate' the assessment to reflect the latest changes in confidence rating.
+
+- Some VMs were created during the time for which the assessment was calculated. For example, assume you created an assessment for the performance history of the last month, but some VMs were created only a week ago. In this case, the performance data for the new VMs will not be available for the entire duration and the confidence rating would be low.
> [!NOTE] > If the confidence rating of any assessment is less than five stars, we recommend that you wait at least a day for the appliance to profile the environment and then recalculate the assessment. Otherwise, performance-based sizing might be unreliable. In that case, we recommend that you switch the assessment to on-premises sizing.
migrate https://docs.microsoft.com/en-us/azure/migrate/concepts-azure-vmware-solution-assessment-calculation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/concepts-azure-vmware-solution-assessment-calculation.md
@@ -202,6 +202,8 @@ After the effective utilization value is determined, the storage, network, and c
If you use *as on-premises sizing*, Server Assessment doesn't consider the performance history of the VMs and disks. Instead, it allocates AVS nodes based on the size allocated on-premises. The default storage type is vSAN in AVS.
+[Learn more](tutorial-assess-vmware-azure-vmware-solution.md#review-an-assessment) about how to review an Azure VMware Solution assessment.
+ ## Confidence ratings Each performance-based assessment in Azure Migrate is associated with a confidence rating that ranges from one (lowest) to five stars (highest).
@@ -230,9 +232,15 @@ Depending on the percentage of data points available, the confidence rating for
Here are a few reasons why an assessment could get a low confidence rating: -- You didn't profile your environment for the duration for which you are creating the assessment. For example, if you create the assessment with performance duration set to one day, you must wait for at least a day after you start discovery for all the data points to get collected.-- Some VMs were shut down during the period for which the assessment was calculated. If any VMs are turned off for some duration, Server Assessment can't collect the performance data for that period.-- Some VMs were created during the period for which the assessment was calculated. For example, if you created an assessment for the performance history of the last month, but some VMs were created in the environment only a week ago, the performance history of the new VMs won't exist for the complete duration.
+- You didn't profile your environment for the duration for which you're creating the assessment. For example, if you create the assessment with performance duration set to one day, you must wait at least a day after you start discovery for all the data points to get collected.
+- Assessment is not able to collect the performance data for some or all the VMs in the assessment period. For a high confidence rating, please ensure that:
+ - VMs are powered on for the duration of the assessment
+ - Outbound connections on port 443 are allowed
+ - For Hyper-V VMs, dynamic memory is enabled
+
+ Please 'Recalculate' the assessment to reflect the latest changes in confidence rating.
+
+- Some VMs were created during the time for which the assessment was calculated. For example, assume you created an assessment for the performance history of the last month, but some VMs were created only a week ago. In this case, the performance data for the new VMs will not be available for the entire duration and the confidence rating would be low.
> [!NOTE] > If the confidence rating of any assessment is less than five stars, we recommend that you wait at least a day for the appliance to profile the environment, and then recalculate the assessment. If you don't, performance-based sizing might not be reliable. In that case, we recommend that you switch the assessment to on-premises sizing.
migrate https://docs.microsoft.com/en-us/azure/migrate/create-manage-projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/create-manage-projects.md
@@ -1,8 +1,8 @@
--- title: Create and manage Azure Migrate projects description: Find, create, manage, and delete projects in Azure Migrate.
-author: ms-psharma
-ms.author: panshar
+author: vineetvikram
+ms.author: vivikram
ms.manager: abhemraj ms.topic: how-to ms.date: 11/23/2020
@@ -10,7 +10,7 @@ ms.date: 11/23/2020
# Create and manage Azure Migrate projects
-This article describes how to create, manage, and delete [Azure Migrate](migrate-services-overview.md) projects.
+This article describes how to create, manage, and delete [Azure Migrate](migrate-services-overview.md) projects. If you're using Classic Azure Migrate projects, delete those projects and follow the steps to create a new Azure Migrate project. You can't upgrade Classic Azure Migrate projects or components to the new Azure Migrate.
An Azure Migrate project is used to store discovery, assessment, and migration metadata collected from the environment you're assessing or migrating. In a project you can track discovered assets, create assessments, and orchestrate migrations to Azure.
migrate https://docs.microsoft.com/en-us/azure/migrate/how-to-create-assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-create-assessment.md
@@ -38,29 +38,81 @@ There are two types of sizing criteria you can use to create an Azure VM assessm
Run an assessment as follows:
-1. Review the [best practices](best-practices-assessment.md) for creating assessments.
-2. In the **Servers** tab, in **Azure Migrate: Server Assessment** tile, click **Assess**.
+1. On the **Servers** page > **Windows and Linux servers**, click **Assess and migrate servers**.
- ![Screenshot shows Azure Migrate Servers with Assess selected under Assessment tools.](./media/how-to-create-assessment/assess.png)
+ ![Location of Assess and migrate servers button](./media/tutorial-assess-vmware-azure-vm/assess.png)
-3. In **Assess servers**, select the assessment type as "Azure VM", select the discovery source and specify the assessment name.
+2. In **Azure Migrate: Server Assessment**, click **Assess**.
- ![Assessment Basics](./media/how-to-create-assessment/assess-servers-azurevm.png)
+ ![Location of the Assess button](./media/tutorial-assess-vmware-azure-vm/assess-servers.png)
-4. Click **View all** to review the assessment properties.
+3. In **Assess servers** > **Assessment type**, select **Azure VM**.
+4. In **Discovery source**:
- ![Assessment properties](./media/how-to-create-assessment//view-all.png)
+ - If you discovered machines using the appliance, select **Machines discovered from Azure Migrate appliance**.
+ - If you discovered machines using an imported CSV file, select **Imported machines**.
+
+1. Click **Edit** to review the assessment properties.
-5. Click **next** to **Select machines to assess**. In **Select or create a group**, select **Create New**, and specify a group name. A group gathers one or more VMs together for assessment.
-6. In **Add machines to the group**, select VMs to add to the group.
-7. Click **next** to **Review + create assessment** to review the assessment details.
-8. Click **Create Assessment** to create the group, and run the assessment.
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assessment-name.png" alt-text="Location of the Edit button to review assessment properties":::
- ![Create an assessment](./media/how-to-create-assessment//assessment-create.png)
+1. In **Assessment properties** > **Target Properties**:
+ - In **Target location**, specify the Azure region to which you want to migrate.
+ - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
+ - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#supported-geographies-azure-government)
+ - In **Storage type**,
+ - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
+ - Alternatively, select the storage type you want to use for VM when you migrate it.
+ - In **Reserved Instances**, specify whether you want to use reserved instances for the VM when you migrate it.
+ - If you select to use a reserved instance, you can't specify **Discount (%)** or **VM uptime**.
+ - [Learn more](https://aka.ms/azurereservedinstances).
+ 1. In **VM Size**:
+ - In **Sizing criterion**, select if you want to base the assessment on machine configuration data/metadata, or on performance-based data. If you use performance data:
+ - In **Performance history**, indicate the data duration on which you want to base the assessment.
+ - In **Percentile utilization**, specify the percentile value you want to use for the performance sample.
+ - In **VM Series**, specify the Azure VM series you want to consider.
+ - If you're using performance-based assessment, Azure Migrate suggests a value for you.
+ - Tweak settings as needed. For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series.
+ - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two:
+
+ **Component** | **Effective utilization** | **Add comfort factor (2.0)**
+ --- | --- | ---
+ Cores | 2 | 4
+ Memory | 8 GB | 16 GB
+
+1. In **Pricing**:
+ - In **Offer**, specify the [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) if you're enrolled. Server Assessment estimates the cost for that offer.
+ - In **Currency**, select the billing currency for your account.
+ - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
+ - In **VM Uptime**, specify the duration (days per month/hours per day) that VMs will run.
+ - This is useful for Azure VMs that won't run continuously.
+ - Cost estimates are based on the duration specified.
+ - Default is 31 days per month/24 hours per day.
+ - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation.
+ - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license. If you do and they're covered with active Software Assurance of Windows Server Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
-9. After the assessment is created, view it in **Servers** > **Azure Migrate: Server Assessment** > **Assessments**.
-10. Click **Export assessment**, to download it as an Excel file.
+1. Click **Save** if you make changes.
+ ![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
+
+1. In **Assess Servers** > click **Next**.
+
+1. In **Select machines to assess** > **Assessment name** > specify a name for the assessment.
+
+1. In **Select or create a group** > select **Create New** and specify a group name.
+
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assess-group.png" alt-text="Add VMs to a group":::
+
+1. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
+
+1. In **Review + create assessment**, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+
+1. After the assessment is created, view it in **Servers** > **Azure Migrate: Server Assessment** > **Assessments**.
+
+1. Click **Export assessment** to download it as an Excel file.
+ > [!NOTE]
+ > For performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. Ideally, after you start discovery, wait for the performance duration you specify (day/week/month) for a high-confidence rating.
## Review an Azure VM assessment
migrate https://docs.microsoft.com/en-us/azure/migrate/how-to-create-azure-vmware-solution-assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-create-azure-vmware-solution-assessment.md
@@ -55,27 +55,29 @@ Run an Azure VMware Solution (AVS) assessment as follows:
![Screenshot shows Azure Migrate Servers with Assess selected under Assessment tools.](./media/how-to-create-assessment/assess.png)
-3. In **Assess servers**, select the assessment type as "Azure VMware Solution (AVS)", select the discovery source and specify the assessment name.
+3. In **Assess servers**, select the assessment type as "Azure VMware Solution (AVS)", and select the discovery source.
- ![Assessment Basics](./media/how-to-create-avs-assessment/assess-servers-avs.png)
+ :::image type="content" source="./media/how-to-create-avs-assessment/assess-servers-avs.png" alt-text="Add assessment basics":::
-4. Click **View all** to review the assessment properties.
+4. Click **Edit** to review the assessment properties.
- ![AVS Assessment properties](./media/how-to-create-avs-assessment/avs-view-all.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/assess-servers.png" alt-text="Location of the Edit button to review assessment properties":::
-5. Click **next** to **Select machines to assess**. In **Select or create a group**, select **Create New**, and specify a group name. A group gathers one or more VMs together for assessment.
-
-6. In **Add machines to the group**, select VMs to add to the group.
+1. In **Select machines to assess** > **Assessment name** > specify a name for the assessment.
+
+1. In **Select or create a group** > select **Create New** and specify a group name. A group gathers one or more VMs together for assessment.
+
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/assess-group.png" alt-text="Add VMs to a group":::
-7. Click **next** to **Review + create assessment** to review the assessment details.
+1. In **Add machines to the group**, select VMs to add to the group.
-8. Click **Create Assessment** to create the group, and run the assessment.
+1. Click **Next** to **Review + create assessment** to review the assessment details.
- ![Create an AVS assessment](./media/how-to-create-avs-assessment/avs-assessment-create.png)
+1. Click **Create Assessment** to create the group, and run the assessment.
-9. After the assessment is created, view it in **Servers** > **Azure Migrate: Server Assessment** > **Assessments**.
+1. After the assessment is created, view it in **Servers** > **Azure Migrate: Server Assessment** > **Assessments**.
-10. Click **Export assessment**, to download it as an Excel file.
+1. Click **Export assessment** to download it as an Excel file.
## Review an Azure VMware Solution (AVS) assessment
@@ -85,6 +87,11 @@ An Azure VMware Solution (AVS) assessment describes:
- **Azure VMware Solution (AVS) readiness**: Whether the on-premises VMs are suitable for migration to Azure VMware Solution (AVS). - **Number of AVS nodes**: Estimated number of AVS nodes required to run the VMs. - **Utilization across AVS nodes**: Projected CPU, memory, and storage utilization across all nodes.
+ - Utilization factors in the following cluster management overhead up front: the vCenter Server, NSX Manager (large), NSX Edge, and, if HCX is deployed, the HCX Manager and IX appliance. Together these consume approximately 44 vCPUs (11 CPUs), 75 GB of RAM, and 722 GB of storage before compression and deduplication.
+ - Memory utilization is currently set to 100%, and deduplication and compression to a ratio of 1.5. These will become user-defined inputs in coming releases, allowing you to fine-tune the required sizing.
- **Monthly cost estimation**: The estimated monthly costs for all Azure VMware Solution (AVS) nodes running the on-premises VMs.
@@ -94,7 +101,7 @@ An Azure VMware Solution (AVS) assessment describes:
2. In **Assessments**, click on an assessment to open it.
- ![AVS Assessment summary](./media/how-to-create-avs-assessment/avs-assessment-summary.png)
+ :::image type="content" source="./media/how-to-create-avs-assessment/avs-assessment-summary.png" alt-text="AVS Assessment summary":::
### Review Azure VMware Solution (AVS) readiness
migrate https://docs.microsoft.com/en-us/azure/migrate/migrate-support-matrix-physical-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-physical-migration.md
@@ -10,7 +10,7 @@ ms.date: 06/14/2020
# Support matrix for physical server migration
-This article summarizes support settings and limitations for migrating physical servers with [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) . If you're looking for information about assessing physical servers for migration to Azure, review the [assessment support matrix](migrate-support-matrix-physical.md).
+This article summarizes support settings and limitations for migrating physical servers to Azure with [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool). If you're looking for information about assessing physical servers for migration to Azure, review the [assessment support matrix](migrate-support-matrix-physical.md).
## Migrating machines as physical
@@ -20,7 +20,7 @@ You can migrate on-premises machines as physical servers, using agent-based repl
- VMs virtualized by platforms such as Xen, KVM. - Hyper-V VMs or VMware VMs if for some reason you don't want to use the standard [Hyper-V](tutorial-migrate-hyper-v.md) or [VMware](server-migrate-overview.md) flows. - VMs running in private clouds.-- VMs running in public clouds such as Amazon Web Services (AWS) or Google Cloud Platform (GCP).
+- VMs running in public clouds, including Amazon Web Services (AWS) or Google Cloud Platform (GCP).
## Migration limitations
@@ -52,7 +52,6 @@ The table summarizes support for physical servers you want to migrate using agen
**NFS** | NFS volumes mounted as volumes on the machines won't be replicated. **iSCSI targets** | Machines with iSCSI targets aren't supported for agentless migration. **Multipath IO** | Not supported.
-**Storage vMotion** | Supported
**Teamed NICs** | Not supported. **IPv6** | Not supported.
migrate https://docs.microsoft.com/en-us/azure/migrate/migrate-support-matrix-vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-vmware.md
@@ -18,7 +18,7 @@ If you want to migrate VMware VMs to Azure, review the [migration support matrix
## Limitations
-**Support** | **Details**
+**Requirement** | **Details**
--- | --- **Project limits** | You can create multiple projects in an Azure subscription.<br/><br/> You can discover and assess up to 35,000 VMware VMs in a single [project](migrate-support-matrix.md#azure-migrate-projects). A project can also include physical servers, and Hyper-V VMs, up to the assessment limits for each. **Discovery** | The Azure Migrate appliance can discover up to 10,000 VMware VMs on a vCenter Server.
@@ -76,9 +76,9 @@ In addition to discovering machines, Server Assessment can discover apps, roles,
## Dependency analysis requirements (agentless)
-[Dependency analysis](concepts-dependency-visualization.md) helps you to identify dependencies between on-premises machines that you want to assess and migrate to Azure. The table summarizes the requirements for setting up agentless dependency analysis.
+[Dependency analysis](concepts-dependency-visualization.md) helps you to identify dependencies between on-premises machines that you want to assess and migrate to Azure. The table summarizes the requirements for setting up agentless dependency analysis.
-**Requirement** | **Details**
+**Support** | **Details**
--- | --- **Supported machines** | Currently supported for VMware VMs only. **Windows VMs** | Windows Server 2016<br/> Windows Server 2012 R2<br/> Windows Server 2012<br/> Windows Server 2008 R2 (64-bit).<br/>Microsoft Windows Server 2008 (32-bit).
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-assess-aws https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-assess-aws.md
@@ -49,25 +49,25 @@ Run an assessment as follows:
1. On the **Servers** page > **Windows and Linux servers**, click **Assess and migrate servers**.
- ![Location of Assess and migrate servers button](./media/tutorial-assess-aws/assess.png)
+ ![Location of Assess and migrate servers button](./media/tutorial-assess-vmware-azure-vm/assess.png)
-2. In **Azure Migrate: Server Assessment, click **Assess**.
+2. In **Azure Migrate: Server Assessment**, click **Assess**.
- ![Location of the Assess button](./media/tutorial-assess-aws/assess-servers.png)
+ ![Location of the Assess button](./media/tutorial-assess-vmware-azure-vm/assess-servers.png)
3. In **Assess servers** > **Assessment type**, select **Azure VM**. 4. In **Discovery source**: - If you discovered machines using the appliance, select **Machines discovered from Azure Migrate appliance**. - If you discovered machines using an imported CSV file, select **Imported machines**.
-5. Specify a name for the assessment.
-6. Click **View all** to review the assessment properties.
+
+1. Click **Edit** to review the assessment properties.
- ![Location of the View all button to review assessment properties](./media/tutorial-assess-aws/assessment-name.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assessment-name.png" alt-text="Location of the edit button to review assessment properties":::
-7. In **Assessment properties** > **Target Properties**:
+1. In **Assessment properties** > **Target Properties**:
- In **Target location**, specify the Azure region to which you want to migrate.
- - Size and cost recommendations are based on the location that you specify.
+ - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
- In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#supported-geographies-azure-government) - In **Storage type**, - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
@@ -75,22 +75,21 @@ Run an assessment as follows:
- In **Reserved Instances**, specify whether you want to use reserved instances for the VM when you migrate it. - If you select to use a reserved instance, you can't specify **Discount (%)** or **VM uptime**. - [Learn more](https://aka.ms/azurereservedinstances).
-8. In **VM Size**:
-
- - In **Sizing criterion**, select if you want to base the assessment on machine configuration data/metadata, or on performance-based data. If you use performance data:
+ 1. In **VM Size**:
+ - In **Sizing criterion**, select if you want to base the assessment on machine configuration data/metadata, or on performance-based data. If you use performance data:
- In **Performance history**, indicate the data duration on which you want to base the assessment - In **Percentile utilization**, specify the percentile value you want to use for the performance sample. - In **VM Series**, specify the Azure VM series you want to consider. - If you're using performance-based assessment, Azure Migrate suggests a value for you. - Tweak settings as needed. For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series. - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two:
- **Details** | **Utilization** | **Add comfort factor (2.0)**
- Read IOPS | 100 | 200
- Write IOPS | 100 | 200
- Read throughput | 100 Mbps | 200 Mbps
- Write throughput | 100 Mbps | 200 Mbps
+
+ **Component** | **Effective utilization** | **Add comfort factor (2.0)**
+ --- | --- | ---
+ Cores | 2 | 4
+ Memory | 8 GB | 16 GB
-9. In **Pricing**:
+1. In **Pricing**:
- In **Offer**, specify the [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) if you're enrolled. Server Assessment estimates the cost for that offer. - In **Currency**, select the billing currency for your account. - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
@@ -98,20 +97,28 @@ Run an assessment as follows:
- This is useful for Azure VMs that won't run continuously. - Cost estimates are based on the duration specified. - Default is 31 days per month/24 hours per day.- - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation. - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license. If you do and they're covered with active Software Assurance of Windows Server Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
-10. Click **Save** if you make changes.
+1. Click **Save** if you make changes.
+
+ ![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
+
+1. In **Assess Servers** > click **Next**.
+
+1. In **Select machines to assess** > **Assessment name** > specify a name for the assessment.
+
+1. In **Select or create a group** > select **Create New** and specify a group name.
+
+ :::image type="content" source="./media/tutorial-assess-physical/assess-group.png" alt-text="Add VMs to a group":::
- ![Assessment properties](./media/tutorial-assess-aws/assessment-properties.png)
+1. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
-11. In **Assess Servers**, click **Next**.
-12. In **Select machines to assess**, select **Create New**, and specify a group name.
-13. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
-14. In **Review + create assessment, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+1. In **Review + create assessment**, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+1. After the assessment is created, view it in **Servers** > **Azure Migrate: Server Assessment** > **Assessments**.
+1. Click **Export assessment** to download it as an Excel file.
> [!NOTE] > For performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. Ideally, after you start discovery, wait for the performance duration you specify (day/week/month) for a high-confidence rating.
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-assess-gcp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-assess-gcp.md
@@ -47,48 +47,47 @@ Run an assessment as follows:
1. On the **Servers** page > **Windows and Linux servers**, click **Assess and migrate servers**.
- ![Location of Assess and migrate servers button](./media/tutorial-assess-gcp/assess.png)
+ ![Location of Assess and migrate servers button](./media/tutorial-assess-vmware-azure-vm/assess.png)
-2. In **Azure Migrate: Server Assessment, click **Assess**.
+2. In **Azure Migrate: Server Assessment**, click **Assess**.
- ![Location of the Assess button](./media/tutorial-assess-gcp/assess-servers.png)
+ ![Location of the Assess button](./media/tutorial-assess-vmware-azure-vm/assess-servers.png)
3. In **Assess servers** > **Assessment type**, select **Azure VM**.
4. In **Discovery source**:
    - If you discovered machines using the appliance, select **Machines discovered from Azure Migrate appliance**.
    - If you discovered machines using an imported CSV file, select **Imported machines**.
-5. Specify a name for the assessment.
-6. Click **View all** to review the assessment properties.
+
+1. Click **Edit** to review the assessment properties.
- ![Location of the View all button to review assessment properties](./media/tutorial-assess-gcp/assessment-name.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assessment-name.png" alt-text="Location of the edit button to review assessment properties":::
-7. In **Assessment properties** > **Target Properties**:
+1. In **Assessment properties** > **Target Properties**:
- In **Target location**, specify the Azure region to which you want to migrate.
- - Size and cost recommendations are based on the location that you specify.
+ - Size and cost recommendations are based on the location that you specify. Once you change the target location from the default, you'll be prompted to specify **Reserved Instances** and **VM series**.
    - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#supported-geographies-azure-government).
    - In **Storage type**:
        - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
        - Alternatively, select the storage type you want to use for the VM when you migrate it.
- - In **Reserved Instances**, specify whether you want to use reserved instances for the VM when you migrate it.
+ - In **Reserved Instances**, specify whether you want to use reserved instances for the VM when you migrate it.
        - If you select to use a reserved instance, you can't specify **Discount (%)** or **VM uptime**.
        - [Learn more](https://aka.ms/azurereservedinstances).
-8. In **VM Size**:
-
- - In **Sizing criterion**, select if you want to base the assessment on machine configuration data/metadata, or on performance-based data. If you use performance data:
+ 1. In **VM Size**:
+ - In **Sizing criterion**, select if you want to base the assessment on machine configuration data/metadata, or on performance-based data. If you use performance data:
    - In **Performance history**, indicate the data duration on which you want to base the assessment.
    - In **Percentile utilization**, specify the percentile value you want to use for the performance sample.
    - In **VM Series**, specify the Azure VM series you want to consider.
        - If you're using performance-based assessment, Azure Migrate suggests a value for you.
        - Tweak settings as needed. For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series.
    - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two:
- **Details** | **Utilization** | **Add comfort factor (2.0)**
- Read IOPS | 100 | 200
- Write IOPS | 100 | 200
- Read throughput | 100 Mbps | 200 Mbps
- Write throughput | 100 Mbps | 200 Mbps
+
+ **Component** | **Effective utilization** | **Add comfort factor (2.0)**
+ --- | --- | ---
+ Cores | 2 | 4
+ Memory | 8 GB | 16 GB
-9. In **Pricing**:
+1. In **Pricing**:
    - In **Offer**, specify the [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) if you're enrolled. Server Assessment estimates the cost for that offer.
    - In **Currency**, select the billing currency for your account.
    - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
@@ -96,20 +95,30 @@ Run an assessment as follows:
    - This is useful for Azure VMs that won't run continuously.
    - Cost estimates are based on the duration specified.
    - Default is 31 days per month/24 hours per day.
    - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation.
    - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license. If you do, and your licenses are covered with active Software Assurance or Windows Server Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
-10. Click **Save** if you make changes.
+1. Click **Save** if you make changes.
+
+ ![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
+
+1. In **Assess Servers**, click **Next**.
+
+1. In **Select machines to assess** > **Assessment name**, specify a name for the assessment.
+
+1. In **Select or create a group**, select **Create New** and specify a group name.
+
+ :::image type="content" source="./media/tutorial-assess-physical/assess-group.png" alt-text="Add VMs to a group":::
+
+1. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
- ![Assessment properties](./media/tutorial-assess-gcp/assessment-properties.png)
-11. In **Assess Servers**, click **Next**.
-12. In **Select machines to assess**, select **Create New**, and specify a group name.
-13. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
-14. In **Review + create assessment, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+1. In **Review + create assessment**, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+1. After the assessment is created, view it in **Servers** > **Azure Migrate: Server Assessment** > **Assessments**.
+1. Click **Export assessment** to download it as an Excel file.
> [!NOTE] > For performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. Ideally, after you start discovery, wait for the performance duration you specify (day/week/month) for a high-confidence rating.
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-assess-hyper-v https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-assess-hyper-v.md
@@ -55,7 +55,7 @@ Run an assessment as follows:
![Location of Assess and migrate servers button](./media/tutorial-assess-vmware-azure-vm/assess.png)
-2. In **Azure Migrate: Server Assessment, click **Assess**.
+2. In **Azure Migrate: Server Assessment**, click **Assess**.
![Location of the Assess button](./media/tutorial-assess-vmware-azure-vm/assess-servers.png)
@@ -65,14 +65,13 @@ Run an assessment as follows:
    - If you discovered machines using the appliance, select **Machines discovered from Azure Migrate appliance**.
    - If you discovered machines using an imported CSV file, select **Imported machines**.
-5. Specify a name for the assessment.
-6. Click **View all** to review the assessment properties.
+1. Click **Edit** to review the assessment properties.
- ![Location of the View all button to review assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-name.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assessment-name.png" alt-text="Location of the Edit button to review assessment properties":::
-7. In **Assessment properties** > **Target Properties**:
+1. In **Assessment properties** > **Target Properties**:
- In **Target location**, specify the Azure region to which you want to migrate.
- - Size and cost recommendations are based on the location that you specify.
+ - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
    - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#supported-geographies-azure-government).
    - In **Storage type**:
        - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
@@ -80,9 +79,8 @@ Run an assessment as follows:
    - In **Reserved Instances**, specify whether you want to use reserved instances for the VM when you migrate it.
        - If you select to use a reserved instance, you can't specify **Discount (%)** or **VM uptime**.
        - [Learn more](https://aka.ms/azurereservedinstances).
-8. In **VM Size**:
-
- - In **Sizing criterion**, select if you want to base the assessment on machine configuration data/metadata, or on performance-based data. If you use performance data:
+ 1. In **VM Size**:
+ - In **Sizing criterion**, select if you want to base the assessment on machine configuration data/metadata, or on performance-based data. If you use performance data:
        - In **Performance history**, indicate the data duration on which you want to base the assessment.
        - In **Percentile utilization**, specify the percentile value you want to use for the performance sample.
        - In **VM Series**, specify the Azure VM series you want to consider.
@@ -90,10 +88,11 @@ Run an assessment as follows:
        - Tweak settings as needed. For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series.
    - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two:

      **Component** | **Effective utilization** | **Add comfort factor (2.0)**
- Cores | 2 | 4
- Memory | 8 GB | 16 GB
+ --- | --- | ---
+ Cores | 2 | 4
+ Memory | 8 GB | 16 GB
-9. In **Pricing**:
+1. In **Pricing**:
    - In **Offer**, specify the [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) if you're enrolled. Server Assessment estimates the cost for that offer.
    - In **Currency**, select the billing currency for your account.
    - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
@@ -101,20 +100,29 @@ Run an assessment as follows:
    - This is useful for Azure VMs that won't run continuously.
    - Cost estimates are based on the duration specified.
    - Default is 31 days per month/24 hours per day.
    - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation.
    - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license. If you do, and your licenses are covered with active Software Assurance or Windows Server Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
-10. Click **Save** if you make changes.
+1. Click **Save** if you make changes.
![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
-11. In **Assess Servers**, click **Next**.
-12. In **Select machines to assess**, select **Create New**, and specify a group name.
-13. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
-14. In **Review + create assessment, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+1. In **Assess Servers**, click **Next**.
+
+1. In **Select machines to assess** > **Assessment name**, specify a name for the assessment.
+
+1. In **Select or create a group**, select **Create New** and specify a group name.
+
+ :::image type="content" source="./media/tutorial-assess-hyper-v/assess-machines.png" alt-text="Create new group and add machines":::
+
+1. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
+
+1. In **Review + create assessment**, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+1. After the assessment is created, view it in **Servers** > **Azure Migrate: Server Assessment** > **Assessments**.
+1. Click **Export assessment** to download it as an Excel file.
> [!NOTE] > For performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. Ideally, after you start discovery, wait for the performance duration you specify (day/week/month) for a high-confidence rating.
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-assess-physical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-assess-physical.md
@@ -52,11 +52,11 @@ Run an assessment as follows:
1. On the **Servers** page > **Windows and Linux servers**, click **Assess and migrate servers**.
- ![Location of Assess and migrate servers button](./media/tutorial-assess-physical/assess.png)
+ ![Location of Assess and migrate servers button](./media/tutorial-assess-vmware-azure-vm/assess.png)
2. In **Azure Migrate: Server Assessment**, click **Assess**.
- ![Location of the Assess button](./media/tutorial-assess-physical/assess-servers.png)
+ ![Location of the Assess button](./media/tutorial-assess-vmware-azure-vm/assess-servers.png)
3. In **Assess servers** > **Assessment type**, select **Azure VM**.
4. In **Discovery source**:
@@ -64,14 +64,13 @@ Run an assessment as follows:
    - If you discovered machines using the appliance, select **Machines discovered from Azure Migrate appliance**.
    - If you discovered machines using an imported CSV file, select **Imported machines**.
-5. Specify a name for the assessment.
-6. Click **View all** to review the assessment properties.
+1. Click **Edit** to review the assessment properties.
- ![Location of the View all button to review assessment properties](./media/tutorial-assess-physical/assessment-name.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assessment-name.png" alt-text="Location of the edit button to review assessment properties":::
-7. In **Assessment properties** > **Target Properties**:
+1. In **Assessment properties** > **Target Properties**:
- In **Target location**, specify the Azure region to which you want to migrate.
- - Size and cost recommendations are based on the location that you specify.
+ - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
    - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#supported-geographies-azure-government).
    - In **Storage type**:
        - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
@@ -79,20 +78,21 @@ Run an assessment as follows:
    - In **Reserved Instances**, specify whether you want to use reserved instances for the VM when you migrate it.
        - If you select to use a reserved instance, you can't specify **Discount (%)** or **VM uptime**.
        - [Learn more](https://aka.ms/azurereservedinstances).
-8. In **VM Size**:
-
- - In **Sizing criterion**, select if you want to base the assessment on machine configuration data/metadata, or on performance-based data. If you use performance data:
+ 1. In **VM Size**:
+ - In **Sizing criterion**, select if you want to base the assessment on machine configuration data/metadata, or on performance-based data. If you use performance data:
    - In **Performance history**, indicate the data duration on which you want to base the assessment.
    - In **Percentile utilization**, specify the percentile value you want to use for the performance sample.
    - In **VM Series**, specify the Azure VM series you want to consider.
        - If you're using performance-based assessment, Azure Migrate suggests a value for you.
        - Tweak settings as needed. For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series.
    - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two:
+
**Component** | **Effective utilization** | **Add comfort factor (2.0)**
- Cores | 2 | 4
- Memory | 8 GB | 16 GB
+ --- | --- | ---
+ Cores | 2 | 4
+ Memory | 8 GB | 16 GB
-9. In **Pricing**:
+1. In **Pricing**:
    - In **Offer**, specify the [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) if you're enrolled. Server Assessment estimates the cost for that offer.
    - In **Currency**, select the billing currency for your account.
    - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
@@ -100,20 +100,30 @@ Run an assessment as follows:
    - This is useful for Azure VMs that won't run continuously.
    - Cost estimates are based on the duration specified.
    - Default is 31 days per month/24 hours per day.
    - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation.
    - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license. If you do, and your licenses are covered with active Software Assurance or Windows Server Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
-10. Click **Save** if you make changes.
+1. Click **Save** if you make changes.
+
+ ![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
+
+1. In **Assess Servers**, click **Next**.
+
+1. In **Select machines to assess** > **Assessment name**, specify a name for the assessment.
+
+1. In **Select or create a group**, select **Create New** and specify a group name.
+
+ :::image type="content" source="./media/tutorial-assess-physical/assess-group.png" alt-text="Add VMs to a group":::
+
+1. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
- ![Assessment properties](./media/tutorial-assess-physical/assessment-properties.png)
-11. In **Assess Servers**, click **Next**.
-12. In **Select machines to assess**, select **Create New**, and specify a group name.
-13. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
-14. In **Review + create assessment, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+1. In **Review + create assessment**, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+1. After the assessment is created, view it in **Servers** > **Azure Migrate: Server Assessment** > **Assessments**.
+1. Click **Export assessment** to download it as an Excel file.
> [!NOTE] > For performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. Ideally, after you start discovery, wait for the performance duration you specify (day/week/month) for a high-confidence rating.
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-assess-vmware-azure-vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-assess-vmware-azure-vm.md
@@ -65,14 +65,13 @@ Run an assessment as follows:
    - If you discovered machines using the appliance, select **Machines discovered from Azure Migrate appliance**.
    - If you discovered machines using an imported CSV file, select **Imported machines**.
-1. Specify a name for the assessment.
-1. Click **View all** to review the assessment properties.
+1. Click **Edit** to review the assessment properties.
    ![Location of the View all button to review assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-name.png)

1. In **Assessment properties** > **Target Properties**:
    - In **Target location**, specify the Azure region to which you want to migrate.
- - Size and cost recommendations are based on the location that you specify.
+ - Size and cost recommendations are based on the location that you specify. Once you change the target location from the default, you'll be prompted to specify **Reserved Instances** and **VM series**.
    - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#supported-geographies-azure-government).
    - In **Storage type**:
        - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
@@ -80,20 +79,21 @@ Run an assessment as follows:
    - In **Reserved Instances**, specify whether you want to use reserved instances for the VM when you migrate it.
        - If you select to use a reserved instance, you can't specify **Discount (%)** or **VM uptime**.
        - [Learn more](https://aka.ms/azurereservedinstances).
- 7. In **VM Size**:
-
- - In **Sizing criterion**, select if you want to base the assessment on machine configuration data/metadata, or on performance-based data. If you use performance data:
+ 1. In **VM Size**:
+ - In **Sizing criterion**, select if you want to base the assessment on machine configuration data/metadata, or on performance-based data. If you use performance data:
    - In **Performance history**, indicate the data duration on which you want to base the assessment.
    - In **Percentile utilization**, specify the percentile value you want to use for the performance sample.
    - In **VM Series**, specify the Azure VM series you want to consider.
        - If you're using performance-based assessment, Azure Migrate suggests a value for you.
        - Tweak settings as needed. For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series.
    - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two:
+
**Component** | **Effective utilization** | **Add comfort factor (2.0)**
- Cores | 2 | 4
- Memory | 8 GB | 16 GB
+ --- | --- | ---
+ Cores | 2 | 4
+ Memory | 8 GB | 16 GB
-8. In **Pricing**:
+1. In **Pricing**:
    - In **Offer**, specify the [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) if you're enrolled. Server Assessment estimates the cost for that offer.
    - In **Currency**, select the billing currency for your account.
    - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
@@ -101,23 +101,30 @@ Run an assessment as follows:
    - This is useful for Azure VMs that won't run continuously.
    - Cost estimates are based on the duration specified.
    - Default is 31 days per month/24 hours per day.
    - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation.
    - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license. If you do, and your licenses are covered with active Software Assurance or Windows Server Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
-9. Click **Save** if you make changes.
+1. Click **Save** if you make changes.
![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
-10. In **Assess Servers**, click **Next**.
-11. In **Select machines to assess**, select **Create New**, and specify a group name.
-12. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
+1. In **Assess Servers**, click **Next**.
+
+1. In **Select machines to assess** > **Assessment name**, specify a name for the assessment.
+1. In **Select or create a group**, select **Create New** and specify a group name.
+
![Add VMs to a group](./media/tutorial-assess-vmware-azure-vm/assess-group.png)
-13. In **Review + create assessment, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+1. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
+
+1. In **Review + create assessment**, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+
+1. After the assessment is created, view it in **Servers** > **Azure Migrate: Server Assessment** > **Assessments**.
+1. Click **Export assessment** to download it as an Excel file.
> [!NOTE] > For performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. Ideally, after you start discovery, wait for the performance duration you specify (day/week/month) for a high-confidence rating.
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-assess-vmware-azure-vmware-solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-assess-vmware-azure-vmware-solution.md
@@ -54,58 +54,63 @@ Run an assessment as follows:
![Location of Assess and migrate servers button](./media/tutorial-assess-vmware-azure-vmware-solution/assess.png)
-2. In **Azure Migrate: Server Assessment**, click **Assess**.
+1. In **Azure Migrate: Server Assessment**, click **Assess**.
-3. In **Assess servers** > **Assessment type**, select **Azure VMware Solution (AVS) (Preview)**.
-4. In **Discovery source**:
+1. In **Assess servers** > **Assessment type**, select **Azure VMware Solution (AVS) (Preview)**.
+
+1. In **Discovery source**:
    - If you discovered machines using the appliance, select **Machines discovered from Azure Migrate appliance**.
    - If you discovered machines using an imported CSV file, select **Imported machines**.
-5. Specify a name for the assessment.
-6. Click **View all** to review the assessment properties.
-
- ![Page for selecting the assessment settings](./media/tutorial-assess-vmware-azure-vmware-solution/assess-servers.png)
+1. Click **Edit** to review the assessment properties.
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/assess-servers.png" alt-text="Page for selecting the assessment settings":::
+
-7. 1n **Assessment properties** > **Target Properties**:
+1. In **Assessment properties** > **Target Properties**:
    - In **Target location**, specify the Azure region to which you want to migrate.
    - Size and cost recommendations are based on the location that you specify.
- - You can currently assess for three regions (East US, West US, West Europe)
- - In **Storage type**, leave **vSAN**. This is the default storage type for an AVS private cloud.
+ - You can currently assess for four regions (Australia East, East US, West Europe, West US).
+ - **Storage type** defaults to **vSAN**, the default storage type for an AVS private cloud.
- **Reserved Instances** aren't currently supported for AVS nodes.
-8. In **VM Size**:
- - In **Node type**, select a node type based on the workloads running on the on-premises VMs.
- - Azure Migrate recommends the node of nodes needed to migrate the VMs to AVS.
- - The default node type is AV36.
- - **FTT setting, RAID level**, select the Failure to Tolerate and RAID combination. The selected FTT option, combined with the on-premises VM disk requirement, determines the total vSAN storage required in AVS.
+1. In **VM Size**:
+ - **Node type** defaults to **AV36**. Azure Migrate recommends the number of nodes needed to migrate the VMs to AVS.
+ - In **FTT setting, RAID level**, select the Failure to Tolerate and RAID combination (see the storage sketch after this list). The selected FTT option, combined with the on-premises VM disk requirement, determines the total vSAN storage required in AVS.
- In **CPU Oversubscription**, specify the ratio of virtual cores associated with one physical core in the AVS node. Oversubscription of greater than 4:1 might cause performance degradation, but can be used for web server type workloads.
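
To make the FTT/RAID impact concrete, the sketch below applies commonly cited vSAN raw-storage multipliers to an on-premises disk requirement. The multiplier table and names are assumptions for illustration, not the exact factors the assessment uses:

```powershell
# Sketch: estimate raw vSAN storage from an FTT/RAID combination (assumed multipliers).
$vsanMultiplier = @{
    'FTT1-RAID1' = 2.0    # mirroring: 2 data copies
    'FTT1-RAID5' = 1.33   # erasure coding 3+1
    'FTT2-RAID1' = 3.0    # mirroring: 3 data copies
    'FTT2-RAID6' = 1.5    # erasure coding 4+2
}

$vmDiskRequirementGB = 500
$policy = 'FTT1-RAID5'
$rawStorageGB = [math]::Round($vmDiskRequirementGB * $vsanMultiplier[$policy])
"Estimated raw vSAN storage for ${policy}: $rawStorageGB GB"
```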
-9. In **Node Size**:
+1. In **Node Size**:
    - In **Sizing criterion**, select whether you want to base the assessment on static metadata, or on performance-based data. If you use performance data:
        - In **Performance history**, indicate the data duration on which you want to base the assessment.
        - In **Percentile utilization**, specify the percentile value you want to use for the performance sample.
    - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two:

      **Component** | **Effective utilization** | **Add comfort factor (2.0)**
- --- | --- | ---
- Cores | 2 | 4
- Memory | 8 GB | 16 GB
+ --- | --- | ---
+ Cores | 2 | 4
+ Memory | 8 GB | 16 GB
-10. In **Pricing**:
+1. In **Pricing**:
    - In **Offer**, the [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) you're enrolled in is displayed. Server Assessment estimates the cost for that offer.
    - In **Currency**, select the billing currency for your account.
    - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
-11. Click **Save** if you make changes.
+1. Click **Save** if you make changes.
- ![Assessment properties](./media/tutorial-assess-vmware-azure-vmware-solution/view-all.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/avs-view-all.png" alt-text="Assessment properties":::
-12. In **Assess Servers**, click **Next**.
-13. In **Assess Servers** > **Select machines to assess**, to create a new group of servers for assessment, select **Create New**, and specify a group name.
-14. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
-15. In **Review + create assessment**, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+1. In **Assess Servers**, click **Next**.
+
+1. In **Select machines to assess** > **Assessment name**, specify a name for the assessment.
+
+1. In **Select or create a group**, select **Create New** and specify a group name.
+
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/assess-group.png" alt-text="Add VMs to a group":::
+
+1. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
+
+1. In **Review + create assessment**, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
> [!NOTE] > For performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. Ideally, after you start discovery, wait for the performance duration you specify (day/week/month) for a high-confidence rating.
@@ -117,6 +122,11 @@ An AVS assessment describes:
- AVS readiness: Whether the on-premises VMs are suitable for migration to Azure VMware Solution (AVS).
- Number of AVS nodes: Estimated number of AVS nodes required to run the VMs.
- Utilization across AVS nodes: Projected CPU, memory, and storage utilization across all nodes.
+  - Utilization factors in the following cluster management overheads up front: vCenter Server, NSX Manager (large), NSX Edge, and, if HCX is deployed, the HCX Manager and IX appliance. Together these consume approximately 44 vCPUs (11 CPUs), 75 GB of RAM, and 722 GB of storage before compression and deduplication.
+  - Memory utilization is currently set to 100%, and deduplication and compression are set to 1.5. These will become user-defined inputs in coming releases, allowing you to further fine-tune the required sizing.
- Monthly cost estimation: The estimated monthly costs for all Azure VMware Solution (AVS) nodes running the on-premises VMs.

## View an assessment
@@ -124,8 +134,12 @@ An AVS assessment describes:
To view an assessment:

1. In **Servers** > **Azure Migrate: Server Assessment**, click the number next to **Assessments**.
-2. In **Assessments**, select an assessment to open it.
-3. Review the assessment summary. You can also edit the assessment properties, or recalculate the assessment.
+
+1. In **Assessments**, select an assessment to open it. As an example (estimations and costs are for illustration only):
+
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/avs-assessment-summary.png" alt-text="AVS Assessment summary":::
+
+1. Review the assessment summary. You can also edit the assessment properties, or recalculate the assessment.
### Review readiness
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-discover-hyper-v https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-hyper-v.md
@@ -75,10 +75,41 @@ If you just created a free Azure account, you're the owner of your subscription.
## Prepare Hyper-V hosts
-Set up an account with Administrator access on the Hyper-V hosts. The appliance uses this account for discovery.
+You can prepare Hyper-V hosts manually or by using a script. The preparation steps are summarized in the following table; the script performs them automatically.
-- Option 1: Prepare an account with Administrator access to the Hyper-V host machine.-- Option 2: If you don't want to assign Administrator permissions, create a local or domain user account, and add the user account to these groups- Remote Management Users, Hyper-V Administrators, and Performance Monitor Users.
+**Step** | **Script** | **Manual**
+--- | --- | ---
+Verify host requirements | Checks that the host is running a supported version of Hyper-V, and the Hyper-V role.<br/><br/>Enables the WinRM service, and opens ports 5985 (HTTP) and 5986 (HTTPS) on the host (needed for metadata collection). | The host must be running Windows Server 2019, Windows Server 2016, or Windows Server 2012 R2.<br/><br/> Verify inbound connections are allowed on WinRM port 5985 (HTTP), so that the appliance can connect to pull VM metadata and performance data, using a Common Information Model (CIM) session.
+Verify PowerShell version | Checks that you're running the script on a supported PowerShell version. | Check you're running PowerShell version 4.0 or later on the Hyper-V host.
+Create an account | Verifies that you have the correct permissions on the Hyper-V host.<br/><br/> Allows you to create a local user account with the correct permissions. | Option 1: Prepare an account with Administrator access to the Hyper-V host machine.<br/><br/> Option 2: Prepare a Local Admin account, or Domain Admin account, and add the account to these groups: Remote Management Users, Hyper-V Administrators, and Performance Monitor Users.
+Enable PowerShell remoting | Enables PowerShell remoting on the host, so that the Azure Migrate appliance can run PowerShell commands on the host over a WinRM connection. | To set up, on each host, open a PowerShell console as admin, and run this command: ``` powershell Enable-PSRemoting -force ```
+Set up Hyper-V integration services | Checks that the Hyper-V Integration Services is enabled on all VMs managed by the host. | [Enable Hyper-V Integration Services](/windows-server/virtualization/hyper-v/manage/manage-hyper-v-integration-services.md) on each VM.<br/><br/> If you're running Windows Server 2003, [follow these instructions](prepare-windows-server-2003-migration.md).
+Delegate credentials if VM disks are located on remote SMB shares | Delegates credentials | Run this command to enable CredSSP to delegate credentials on hosts running Hyper-V VMs with disks on SMB shares: ```powershell Enable-WSManCredSSP -Role Server -Force ```<br/><br/> You can run this command remotely on all Hyper-V hosts.<br/><br/> If you add new host nodes on a cluster they're automatically added for discovery, but you need to enable CredSSP manually.<br/><br/> When you set up the appliance, you finish setting up CredSSP by [enabling it on the appliance](#delegate-credentials-for-smb-vhds).
+
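
If you prepare hosts manually, the commands from the table can be run together in an elevated PowerShell session on each host. A hedged sketch of that manual path (it doesn't replace the Microsoft-signed script described next):

```powershell
# Sketch: manual Hyper-V host preparation, run as Administrator on each host.
# Enable PowerShell remoting so the appliance can run commands over WinRM.
Enable-PSRemoting -Force

# Only if VM disks live on remote SMB shares: allow CredSSP credential delegation.
Enable-WSManCredSSP -Role Server -Force

# Confirm WinRM is listening; ports 5985 (HTTP) and 5986 (HTTPS) must also be open.
Test-WSMan -ComputerName localhost
```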
+### Run the script
+
+1. Download the script from the [Microsoft Download Center](https://aka.ms/migrate/script/hyperv). The script is cryptographically signed by Microsoft.
+2. Validate the script integrity using either the MD5 or SHA256 hash. The hash values are listed below. Run this command to generate the hash for the script:
+
+ ```powershell
+ C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]
+ ```
+ Example usage:
+
+ ```powershell
    C:\>CertUtil -HashFile C:\Users\Administrators\Desktop\MicrosoftAzureMigrate-Hyper-V.ps1 SHA256
+ ```
+3. After validating the script integrity, run the script on each Hyper-V host with this PowerShell command:
+
+ ```powershell
    PS C:\Users\Administrators\Desktop> .\MicrosoftAzureMigrate-Hyper-V.ps1
+ ```
+Hash values are:
+
+**Hash** | **Value**
+--- | ---
+MD5 | 0ef418f31915d01f896ac42a80dc414e
+SHA256 | 0ad60e7299925eff4d1ae9f1c7db485dc9316ef45b0964148a3c07c80761ade2
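
As an alternative to CertUtil, PowerShell's built-in `Get-FileHash` cmdlet can compute and compare the hash in one step. A minimal sketch, assuming the script was saved to the path used in the example above:

```powershell
# Sketch: verify the downloaded script against the published SHA256 value.
$expected = '0ad60e7299925eff4d1ae9f1c7db485dc9316ef45b0964148a3c07c80761ade2'
$actual = (Get-FileHash -Path 'C:\Users\Administrators\Desktop\MicrosoftAzureMigrate-Hyper-V.ps1' -Algorithm SHA256).Hash

# String comparison in PowerShell is case-insensitive by default.
if ($actual -eq $expected) { 'Hash matches - the script is intact.' }
else { 'Hash mismatch - do not run the script.' }
```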
## Set up a project
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-discover-vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-vmware.md
@@ -5,7 +5,7 @@ author: vineetvikram
ms.author: vivikram
ms.manager: abhemraj
ms.topic: tutorial
-ms.date: 09/14/2020
+ms.date: 9/14/2020
ms.custom: mvc
#Customer intent: As a VMware admin, I want to discover my on-premises VMware VM inventory.
---
migrate https://docs.microsoft.com/en-us/azure/migrate/tutorial-migrate-physical-virtual-machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-migrate-physical-virtual-machines.md
@@ -159,6 +159,8 @@ On machines you want to migrate, you need to install the Mobility service agent.
- You can obtain the passphrase on the replication appliance. From the command line, run **C:\ProgramData\ASR\home\svsystems\bin\genpassphrase.exe -v** to view the current passphrase.
- Don't regenerate the passphrase. This will break connectivity and you will have to reregister the replication appliance.
+> [!NOTE]
+> For the */Platform* parameter, specify *VMware* whether you're migrating VMware VMs or physical machines.
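
As a hedged illustration of where the parameter fits, the commands below follow the Mobility service installer conventions from the Site Recovery documentation; verify the extracted installer name and flags against your downloaded version:

```powershell
# Sketch: extract and silently install the Mobility service on a Windows machine.
# Installer names and flags are assumptions based on Site Recovery conventions.
.\MobilityServiceInstaller.exe /q /x:C:\Temp\Extracted
cd C:\Temp\Extracted

# Specify VMware for both VMware VMs and physical machines.
.\UnifiedAgent.exe /Role "MS" /Platform "VMware" /Silent
```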
### Install on Windows
mysql https://docs.microsoft.com/en-us/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/quickstart-create-mysql-server-database-using-azure-portal.md
@@ -40,7 +40,7 @@ An Azure subscription is required. If you don't have an Azure subscription, crea
   Server name | **mydemoserver** | Enter a unique name. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters.
   Data source | **None** | Select **None** to create a new server from scratch. Select **Backup** only if you're restoring from a geo-backup of an existing server.
   Location | Your desired location | Select a location from the list.
- Version | The latest major version| Use the latest major version. See [all supported versions](../mysql/concepts-supported-versions.md).
+ Version | The latest major version| Use the latest major version. See [all supported versions](concepts-supported-versions.md).
   Compute + storage | Use the defaults | The default pricing tier is **General Purpose** with **4 vCores** and **100 GB** storage. Backup retention is set to **7 days**, with the **Geographically Redundant** backup option.<br/>Review the [pricing](https://azure.microsoft.com/pricing/details/mysql/) page, and update the defaults if you need to.
   Admin username | **mydemoadmin** | Enter your server admin user name. You can't use **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public** for the admin user name.
   Password | A password | A new password for the server admin user. The password must be 8 to 128 characters long and contain a combination of uppercase or lowercase letters, numbers, and non-alphanumeric characters (!, $, #, %, and so on).
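
The same server can also be created from the command line. A hedged sketch using the Azure CLI's `az mysql server create` command; the names and password are placeholders, and the SKU string corresponds to the General Purpose, 4 vCore default described above:

```powershell
# Sketch: create an equivalent server with the Azure CLI (placeholder values).
az mysql server create `
  --resource-group myresourcegroup `
  --name mydemoserver `
  --location westus2 `
  --admin-user mydemoadmin `
  --admin-password '<your-secure-password>' `
  --sku-name GP_Gen5_4 `
  --storage-size 102400 `
  --backup-retention 7 `
  --geo-redundant-backup Enabled
```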
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-data-encryption-postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-data-encryption-postgresql.md
@@ -10,7 +10,7 @@ ms.date: 01/13/2020
# Azure Database for PostgreSQL Single server data encryption with a customer-managed key
-Azure PostgreSQL leverages [Azure Storage encryption](../storage/common/storage-service-encryption.md) to encrypt data at-rest by default using Microsoft-managed keys. For Azure PostgreSQL users, it is a very similar to Transparent Data Encruption (TDE) in other databases such as SQL Server. Many organizations require full control on access to the data using a customer-managed key. Data encryption with customer-managed keys for Azure Database for PostgreSQL Single server enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you are responsible for, and in a full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
+Azure PostgreSQL leverages [Azure Storage encryption](../storage/common/storage-service-encryption.md) to encrypt data at-rest by default using Microsoft-managed keys. For Azure PostgreSQL users, it is very similar to Transparent Data Encryption (TDE) in other databases such as SQL Server. Many organizations require full control over access to the data using a customer-managed key. Data encryption with customer-managed keys for Azure Database for PostgreSQL Single server enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you are responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
Data encryption with customer-managed keys for Azure Database for PostgreSQL Single server is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](../key-vault/general/secure-your-key-vault.md) instance. The key encryption key (KEK) and data encryption key (DEK) are described in more detail later in this article.
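
A hedged sketch of wiring up the KEK with the Azure CLI; the vault and server names are placeholders, the key vault needs soft delete and purge protection, and the server's managed identity needs get, wrapKey, and unwrapKey permissions on the vault:

```powershell
# Sketch: register a Key Vault RSA key as the customer-managed KEK (placeholder names).
$keyId = az keyvault key create `
  --vault-name myKeyVault `
  --name myKEK `
  --kty RSA `
  --query key.kid -o tsv

az postgres server key create `
  --resource-group myresourcegroup `
  --server-name mydemoserver `
  --kid $keyId
```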
@@ -139,4 +139,4 @@ For Azure Database for PostgreSQL, the support for encryption of data at rest us
## Next steps
-Learn how to [set up data encryption with a customer-managed key for your Azure database for PostgreSQL Single server by using the Azure portal](howto-data-encryption-portal.md).
\ No newline at end of file
+Learn how to [set up data encryption with a customer-managed key for your Azure database for PostgreSQL Single server by using the Azure portal](howto-data-encryption-portal.md).
postgresql https://docs.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-networking.md
@@ -72,7 +72,7 @@ Here are some concepts to be familiar with when using virtual networks with Post
Learn how to create a flexible server with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).

> [!NOTE]
-> If you are using the custom DNS server then you must use a DNS forwarder to resolve the FQDN of Azure Database for MySQL - Flexible Server. Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
+> If you are using the custom DNS server then you must use a DNS forwarder to resolve the FQDN of Azure Database for PostgreSQL - Flexible Server. Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
## Public access (allowed IP addresses)

Characteristics of the public access method include:
private-link https://docs.microsoft.com/en-us/azure/private-link/create-private-link-service-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/create-private-link-service-portal.md
@@ -4,20 +4,22 @@ title: 'Quickstart - Create a Private Link service by using the Azure portal'
titlesuffix: Azure Private Link
description: Learn how to create a Private Link service by using the Azure portal in this quickstart
services: private-link
-author: malopMSFT
+author: asudbring
# Customer intent: As someone with a basic network background who's new to Azure, I want to create an Azure Private Link service by using the Azure portal
ms.service: private-link
ms.topic: quickstart
-ms.date: 02/03/2020
+ms.date: 01/18/2021
ms.author: allensu
---

# Quickstart: Create a Private Link service by using the Azure portal
-An Azure Private Link service refers to your own service that is managed by Private Link. You can give Private Link access to the service or resource that operates behind Azure Standard Load Balancer. Consumers of your service can access it privately from their own virtual networks. In this quickstart, you learn how to create a Private Link service by using the Azure portal.
+Get started creating a Private Link service that refers to your service. Give Private Link access to your service or resource deployed behind an Azure Standard Load Balancer. Users of your service have private access from their virtual network.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Sign in to the Azure portal
@@ -25,159 +27,210 @@ Sign in to the Azure portal at https://portal.azure.com.
## Create an internal load balancer
-First, create a virtual network. Next, create an internal load balancer to use with the Private Link service.
+In this section, you'll create a virtual network and an internal Azure Load Balancer.
+
+### Virtual network
+
+In this section, you create a virtual network and subnet to host the load balancer that accesses your Private Link service.
+
+1. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
+
+2. In **Create virtual network**, enter or select this information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ |------------------|-----------------------------------------------------------------|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **CreatePrivLinkService-rg** |
+ | **Instance details** | |
+ | Name | Enter **myVNet** |
+ | Region | Select **East US 2** |
+
+3. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+
+4. In the **IP Addresses** tab, enter this information:
+
+ | Setting | Value |
+ |--------------------|----------------------------|
+ | IPv4 address space | Enter **10.1.0.0/16** |
-## Virtual network and parameters
+5. Under **Subnet name**, select the word **default**.
-In this section, you create a virtual network. You also create the subnet to host the load balancer that accesses your Private Link service.
+6. In **Edit subnet**, enter this information:
-In this section you will need to replace the following parameters in the steps with the information below:
+ | Setting | Value |
+ |--------------------|----------------------------|
+ | Subnet name | Enter **mySubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
-| Parameter | Value |
-|-----------------------------|----------------------|
-| **\<resource-group-name>** | myResourceGroupLB |
-| **\<virtual-network-name>** | myVNet |
-| **\<region-name>** | East US 2 |
-| **\<IPv4-address-space>** | 10.3.0.0/16 |
-| **\<subnet-name>** | myBackendSubnet |
-| **\<subnet-address-range>** | 10.3.0.0/24 |
+7. Select **Save**.
-[!INCLUDE [virtual-networks-create-new](../../includes/virtual-networks-create-new.md)]
+8. Select the **Review + create** tab or select the **Review + create** button.
+
+9. Select **Create**.
### Create a standard load balancer
-Use the portal to create a standard internal load balancer. The name and IP address you specify are automatically configured as the load balancer's front end.
+Use the portal to create a standard internal load balancer.
-1. On the upper-left side of the portal, select **Create a resource** > **Networking** > **Load Balancer**.
+1. On the top left-hand side of the screen, select **Create a resource** > **Networking** > **Load Balancer**.
-1. On the **Basics** tab of the **Create load balancer** page, enter or select the following information:
+2. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
    | Setting | Value |
    | --- | --- |
- | **Subscription** | Select your subscription. |
- | **Resource group** | Select **myResourceGroupLB** from the box.|
- | **Name** | Enter **myLoadBalancer**. |
- | **Region** | Select **East US 2**. |
- | **Type** | Select **Internal**. |
- | **SKU** | Select **Standard**. |
- | **Virtual network** | Select **myVNet**. |
- | **IP address assignment** | Select **Static**. |
- | **Private IP address**|Enter an address that's in the address space of your virtual network and subnet. An example is 10.3.0.7. |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **CreatePrivLinkService-rg** created in the previous step.|
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **East US 2**. |
+ | Type | Select **Internal**. |
+ | SKU | Select **Standard** |
+ | Virtual network | Select **myVNet** created in the previous step. |
+ | Subnet | Select **mySubnet** created in the previous step. |
+ | IP address assignment | Select **Dynamic**. |
+ | Availability zone | Select **Zone-redundant** |
+
+3. Accept the defaults for the remaining settings, and then select **Review + create**.
-1. Accept the defaults for the remaining settings, and then select **Review + create**
+4. In the **Review + create** tab, select **Create**.
-1. On the **Review + create** tab, select **Create**.
+## Create load balancer resources
-### Create standard load balancer resources
+In this section, you configure:
-In this section, you configure load balancer settings for a back-end address pool and a health probe. You also specify load balancer rules.
+* Load balancer settings for a backend address pool.
+* A health probe.
+* A load balancer rule.
-#### Create a back-end pool
+### Create a backend pool
-A back-end address pool contains the IP addresses of the virtual NICs connected to the load balancer. This pool lets you distribute traffic to your resources. Create the back-end address pool named **myBackendPool** to include resources that load balance traffic.
+A backend address pool contains the IP addresses of the virtual network interfaces (NICs) connected to the load balancer.
-1. Select **All Services** from the leftmost menu.
-1. Select **All resources**, and then select **myLoadBalancer** from the resources list.
-1. Under **Settings**, select **Backend pools**, and then select **Add**.
-1. On the **Add a backend pool** page, enter **myBackendPool** as the name for your back-end pool, and then select **Add**.
+Create the backend address pool **myBackendPool** to include virtual machines for load-balancing traffic.
-#### Create a health probe
+1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
-Use a health probe to let the load balancer monitor resource status. Based on resource response to health checks, the health probe dynamically adds or removes resources from the load balancer rotation.
+2. Under **Settings**, select **Backend pools**, then select **Add**.
-To create a health probe to monitor the health of the resources:
+3. On the **Add a backend pool** page, enter **myBackendPool** as the name for your backend pool, and then select **Add**.
-1. Select **All resources** on the leftmost menu, and then select **myLoadBalancer** from the resource list.
+### Create a health probe
-1. Under **Settings**, select **Health probes**, and then select **Add**.
+The load balancer monitors the status of your app with a health probe.
-1. On the **Add a health probe** page, enter or select the following values:
+The health probe adds or removes VMs from the load balancer based on their response to health checks.
- - **Name**: Enter **myHealthProbe**.
- - **Protocol**: Select **TCP**.
- - **Port**: Enter **80**.
- - **Interval**: Enter **15**. This value is the number of seconds between probe attempts.
- - **Unhealthy threshold**: Enter **2**. This value is the number of consecutive probe failures that occur before a virtual machine is considered unhealthy.
+Create a health probe named **myHealthProbe** to monitor the health of the VMs.
-1. Select **OK**.
+1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
-#### Create a load balancer rule
+2. Under **Settings**, select **Health probes**, then select **Add**.
+
+ | Setting | Value |
+ | ------- | ----- |
+ | Name | Enter **myHealthProbe**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**.|
+    | Interval | Enter **15**, the number of seconds between probe attempts. |
+    | Unhealthy threshold | Select **2**, the number of consecutive probe failures that must occur before a VM is considered unhealthy. |
-A load balancer rule defines how traffic is distributed to resources. The rule defines:
+3. Leave the rest of the defaults and select **OK**.
-- The front-end IP configuration for incoming traffic.-- The back-end IP pool to receive the traffic.-- The required source and destination ports.
+### Create a load balancer rule
-The load balancer rule named **myLoadBalancerRule** listens to port 80 in the **LoadBalancerFrontEnd** front end. The rule sends network traffic to the **myBackendPool** back-end address pool on the same port 80.
+A load balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic. The source and destination port are defined in the rule.
-To create a load balancer rule:
+In this section, you'll create a load balancer rule:
-1. Select **All resources** on the leftmost menu, and then select **myLoadBalancer** from the resource list.
+* Named **myHTTPRule**.
+* In the frontend named **LoadBalancerFrontEnd**.
+* Listening on **Port 80**.
+* Directs load balanced traffic to the backend named **myBackendPool** on **Port 80**.
-1. Under **Settings**, select **Load-balancing rules**, and then select **Add**.
+1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
-1. On the **Add load-balancing rule** page, enter or select the following values if they aren't already present:
+2. Under **Settings**, select **Load-balancing rules**, then select **Add**.
- - **Name**: Enter **myLoadBalancerRule**.
- - **Frontend IP address:** Enter **LoadBalancerFrontEnd**.
- - **Protocol**: Select **TCP**.
- - **Port**: Enter **80**.
- - **Backend port**: Enter **80**.
- - **Backend pool**: Select **myBackendPool**.
- - **Health probe**: Select **myHealthProbe**.
+3. Use these values to configure the load-balancing rule:
+
+ | Setting | Value |
+ | ------- | ----- |
+ | Name | Enter **myHTTPRule**. |
+ | IP Version | Select **IPv4**. |
+ | Frontend IP address | Select **LoadBalancerFrontEnd**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**.|
+ | Backend port | Enter **80**. |
+ | Backend pool | Select **myBackendPool**.|
+ | Health probe | Select **myHealthProbe**. |
+ | Idle timeout (minutes) | Move the slider to **15** minutes. |
+ | TCP reset | Select **Enabled**. |
-1. Select **OK**.
+4. Leave the rest of the defaults and then select **OK**.
## Create a Private Link service
-In this section, you will create a Private Link service behind a standard load balancer.
+In this section, you'll create a Private Link service behind a standard load balancer.
+
+1. On the upper-left part of the page in the Azure portal, select **Create a resource**.
+
+2. Search for **Private Link** in the **Search the Marketplace** box.
-1. On the upper-left part of the page in the Azure portal, select **Create a resource** > **Networking** > **Private Link Center (Preview)**. You can also use the portal's search box to search for Private Link.
+3. Select **Create**.
-1. In **Private Link Center - Overview** > **Expose your own service so others can connect**, select **Start**.
+4. In **Overview** under **Private Link Center**, select the blue **Create private link service** button.
-1. Under **Create a private link service - Basics**, enter or select this information:
+5. In the **Basics** tab under **Create private link service**, enter or select the following information:
- | Setting | Value |
- |-------------------|------------------------------------------------------------------------------|
- | Project details: | |
- | **Subscription** | Select your subscription. |
- | **Resource Group** | Select **myResourceGroupLB**. |
- | Instance details: | |
- | **Name** | Enter **myPrivateLinkService**. |
- | **Region** | Select **East US 2**. |
+ | Setting | Value |
+ | ------- | ----- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource Group | Select **CreatePrivLinkService-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myPrivateLinkService**. |
+ | Region | Select **East US 2**. |
-1. Select **Next: Outbound settings**.
+6. Select the **Outbound settings** tab or select **Next: Outbound settings** at the bottom of the page.
-1. Under **Create a private link service - Outbound settings**, enter or select this information:
+7. In the **Outbound settings** tab, enter or select the following information:
- | Setting | Value |
- |-----------------------------------|---------------------------------------------------------------------------------|
- | **Load Balancer** | Select **myLoadBalancer**. |
- | **Load Balancer frontend IP address** | Select the front-end IP address of **myLoadBalancer**. |
- | **Source NAT Virtual network** | Select **myVNet**. |
- | **Source NAT subnet** | Select **myBackendSubnet**. |
- | **Enable TCP proxy v2** | Select **YES** or **NO** depending on whether your application expects a TCP proxy v2 header. |
- | **Private IP address settings** | Configure the allocation method and IP address for each NAT IP. |
+ | Setting | Value |
+ | ------- | ----- |
+ | Load balancer | Select **myLoadBalancer**. |
+ | Load balancer frontend IP address | Select **LoadBalancerFrontEnd (10.1.0.4)**. |
+ | Source NAT subnet | Select **mySubnet (10.1.0.0/24)**. |
+ | Enable TCP proxy V2 | Leave the default of **No**. </br> If your application expects a TCP proxy v2 header, select **Yes**. |
+ | **Private IP address settings** | |
+ | Leave the default settings | |
-1. Select **Next: Access security**.
+8. Select the **Access security** tab or select **Next: Access security** at the bottom of the page.
-1. Under **Create a private link service - Access security**, select **Visibility**, and then choose **Role-based access control only**.
-
-1. Either select **Next: Tags** > **Review + create** or choose the **Review + create** tab at the top of the page.
+9. Leave the default of **Role-based access control only** in the **Access security** tab.
-1. Review your information, and select **Create**.
+10. Select the **Tags** tab or select **Next: Tags** at the bottom of the page.
+
+11. Select the **Review + create** tab or select **Next: Review + create** at the bottom of the page.
+
+12. Select **Create** in the **Review + create** tab.
## Clean up resources
-When you are done using the Private Link service, delete the resource group to clean up the resources used in this quickstart.
+When you're done using the Private Link service, delete the resource group to clean up the resources used in this quickstart.
-1. Enter **myResourceGroupLB** in the search box at the top of the portal, and select **myResourceGroupLB** from the search results.
+1. Enter **CreatePrivLinkService-rg** in the search box at the top of the portal, and select **CreatePrivLinkService-rg** from the search results.
1. Select **Delete resource group**.
-1. In **TYPE THE RESOURCE GROUP NAME**, enter **myResourceGroup**.
+1. In **TYPE THE RESOURCE GROUP NAME**, enter **CreatePrivLinkService-rg**.
1. Select **Delete**.

## Next steps
-In this quickstart, you created an internal Azure load balancer and a Private Link service. You can also learn how to [create a private endpoint by using the Azure portal](./create-private-endpoint-portal.md).
\ No newline at end of file
+In this quickstart, you:
+
+* Created a virtual network and internal Azure Load Balancer.
+* Created a private link service.
+
+To learn more about Azure Private Endpoint, continue to:
+> [!div class="nextstepaction"]
+> [Quickstart: Create a Private Endpoint using the Azure portal](create-private-endpoint-portal.md)
\ No newline at end of file
purview https://docs.microsoft.com/en-us/azure/purview/register-scan-azure-sql-database-managed-instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-sql-database-managed-instance.md
@@ -24,19 +24,19 @@ The Azure SQL Database Managed Instance data source supports the following funct
### Known limitations
-Azure Purview doesn't support scanning of [views](https://docs.microsoft.com/sql/relational-databases/views/views?view=sql-server-ver15) in Azure SQL Managed Instance.
+Azure Purview doesn't support scanning of [views](/sql/relational-databases/views/views?view=azuresqldb-mi-current&preserve-view=true) in Azure SQL Managed Instance.
## Prerequisites

- Create a new Purview account if you don't already have one.
-- [Configure public endpoint in Azure SQL Managed Instance](https://docs.microsoft.com/azure/azure-sql/managed-instance/public-endpoint-configure)
+- [Configure public endpoint in Azure SQL Managed Instance](/azure/azure-sql/managed-instance/public-endpoint-configure)
> [!Note]
> Your organization must be able to allow public endpoint access, as **private endpoint is not yet supported** by Purview. If you use a private endpoint, the scan will not be successful.

### Setting up authentication for a scan
-Authentication to scan Azure SQL Database Managed Instance. If you need to create new authentication, you need to [authorize database access to SQL Database Managed Instance](https://docs.microsoft.com/azure/azure-sql/database/logins-create-manage). There are three authentication methods that Purview supports today:
+Authentication is required to scan Azure SQL Database Managed Instance. If you need to create new authentication, you need to [authorize database access to SQL Database Managed Instance](/azure/azure-sql/database/logins-create-manage). There are three authentication methods that Purview supports today:
- SQL authentication
- Service Principal
@@ -47,7 +47,7 @@ Authentication to scan Azure SQL Database Managed Instance. If you need to creat
> [!Note]
> Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. About **15 minutes** after you grant permission, the Purview account should have the appropriate permissions to scan the resource(s).
-You can follow the instructions in [CREATE LOGIN](https://docs.microsoft.com/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a login for Azure SQL Database Managed Instance if you don't have this available. You will need **username** and **password** for the next steps.
+You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a login for Azure SQL Database Managed Instance if you don't already have one. You will need the **username** and **password** for the next steps.
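For illustration, a minimal sketch of what the linked steps produce, using placeholder names and a placeholder password rather than values from the article:

```sql
-- Run in the master database of the managed instance.
-- [purview-scan] and the password are placeholders; substitute your own.
CREATE LOGIN [purview-scan] WITH PASSWORD = '<StrongPassword123>';

-- Then, in each database to be scanned, map a user to the login and
-- grant read access so the scanner can read metadata and sample rows.
CREATE USER [purview-scan] FOR LOGIN [purview-scan];
EXEC sp_addrolemember 'db_datareader', 'purview-scan';
```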
1. Navigate to your key vault in the Azure portal
1. Select **Settings > Secrets**
@@ -81,8 +81,8 @@ To use a service principal, you can use an existing one or create a new one.
##### Configure Azure AD authentication in the database account

The service principal or managed identity must have permission to get metadata for the database, schemas and tables. It must also be able to query the tables to sample for classification.
-- [Configure and manage Azure AD authentication with Azure SQL](https://docs.microsoft.com/azure/azure-sql/database/authentication-aad-configure)
-- Create an Azure AD user in Azure SQL Database Managed Instance by following the prerequisites and tutorial on [Create contained users mapped to Azure AD identities](https://docs.microsoft.com/azure/azure-sql/database/authentication-aad-configure?tabs=azure-powershell#create-contained-users-mapped-to-azure-ad-identities)
+- [Configure and manage Azure AD authentication with Azure SQL](/azure/azure-sql/database/authentication-aad-configure)
+- Create an Azure AD user in Azure SQL Database Managed Instance by following the prerequisites and tutorial on [Create contained users mapped to Azure AD identities](/azure/azure-sql/database/authentication-aad-configure?tabs=azure-powershell#create-contained-users-mapped-to-azure-ad-identities)
- Assign `db_owner` (**recommended**) permission to the identity

##### Add service principal to key vault and Purview's credential
purview https://docs.microsoft.com/en-us/azure/purview/register-scan-azure-sql-database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-sql-database.md
@@ -24,7 +24,7 @@ The Azure SQL Database data source supports the following functionality:
### Known limitations
-Azure Purview doesn't support scanning of [views](https://docs.microsoft.com/sql/relational-databases/views/views?view=sql-server-ver15&preserve-view=true) in Azure SQL Database.
+Azure Purview doesn't support scanning of [views](/sql/relational-databases/views/views?view=azuresqldb-current&preserve-view=true) in Azure SQL Database.
## Prerequisites
@@ -46,7 +46,7 @@ Authentication to scan Azure SQL Database. If you need to create new authenticat
> [!Note]
> Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. About **15 minutes** after you grant permission, the Purview account should have the appropriate permissions to scan the resource(s).
-You can follow the instructions in [CREATE LOGIN](https://docs.microsoft.com/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a login for Azure SQL Database if you don't have this available. You will need **username** and **password** for the next steps.
+You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a login for Azure SQL Database if you don't already have one. You will need the **username** and **password** for the next steps.
1. Navigate to your key vault in the Azure portal
1. Select **Settings > Secrets**
@@ -96,7 +96,7 @@ The service principal or managed identity must have permission to get metadata f
```

> [!Note]
- > The `Username` is your own service principal or Purview's managed identity. You can read more about [fixed-database roles and their capabilities](https://docs.microsoft.com/sql/relational-databases/security/authentication-access/database-level-roles?view=sql-server-ver15&preserve-view=true#fixed-database-roles).
+ > The `Username` is your own service principal or Purview's managed identity. You can read more about [fixed-database roles and their capabilities](/sql/relational-databases/security/authentication-access/database-level-roles?view=sql-server-ver15&preserve-view=true#fixed-database-roles).
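As a hedged sketch of that mapping, a contained user for the service principal or managed identity can be created and added to a fixed-database role like this ([purview-or-sp-name] is a placeholder, not a name from the article):

```sql
-- [purview-or-sp-name] stands in for your service principal or
-- Purview managed identity name.
CREATE USER [purview-or-sp-name] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [purview-or-sp-name];
```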
##### Add service principal to key vault and Purview's credential
purview https://docs.microsoft.com/en-us/azure/purview/register-scan-azure-synapse-analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-synapse-analytics.md
@@ -18,7 +18,7 @@ Azure Synapse Analytics (formerly SQL DW) supports full and incremental scans to
### Known limitations
-Azure Purview doesn't support scanning of [views](https://docs.microsoft.com/sql/relational-databases/views/views?view=sql-server-ver15) in Azure Synapse Analytics
+Azure Purview doesn't support scanning of [views](/sql/relational-databases/views/views?view=azure-sqldw-latest&preserve-view=true) in Azure Synapse Analytics.
## Prerequisites
@@ -28,7 +28,7 @@ Azure Purview doesn't support scanning of [views](https://docs.microsoft.com/sql
## Setting up authentication for a scan
-There are three ways to set up authentication for Azure blob storage:
+There are three ways to set up authentication for Azure Synapse Analytics:
- Managed Identity
- SQL Authentication
@@ -39,7 +39,7 @@ There are three ways to set up authentication for Azure blob storage:
### Managed Identity (Recommended)
-Your Purview account has its own Managed Identity which is basically your Purview name when you created it. You must create an Azure AD user in Azure Synapse Analytics (formerly SQL DW) with the exact Purview's Managed Identity name by following the prerequisites and tutorial on [Create Azure AD users using Azure AD applications](https://docs.microsoft.com/azure/azure-sql/database/authentication-aad-service-principal-tutorial).
+Your Purview account has its own Managed Identity, which has the same name as the Purview account you created. You must create an Azure AD user in Azure Synapse Analytics (formerly SQL DW) with the exact name of Purview's Managed Identity by following the prerequisites and tutorial on [Create Azure AD users using Azure AD applications](/azure/azure-sql/database/authentication-aad-service-principal-tutorial).
Example SQL syntax to create user and grant permission:
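The example block itself is elided from this diff; as a sketch, such syntax typically has the following shape, with the Purview account name as a placeholder and `db_datareader` as one reasonable role choice:

```sql
-- [your-purview-account] must exactly match the name of
-- Purview's Managed Identity.
CREATE USER [your-purview-account] FROM EXTERNAL PROVIDER
GO
EXEC sp_addrolemember 'db_datareader', [your-purview-account]
GO
```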
@@ -96,7 +96,7 @@ GO
### SQL authentication
-You can follow the instructions in [CREATE LOGIN](https://docs.microsoft.com/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a login for Azure Synapse Analytics (formerly SQL DW) if you don't already have one.
+You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azure-sqldw-latest&preserve-view=true#examples-1) to create a login for Azure Synapse Analytics (formerly SQL DW) if you don't already have one.
When the authentication method selected is **SQL Authentication**, you need to get your password and store it in the key vault:
purview https://docs.microsoft.com/en-us/azure/purview/register-scan-on-premises-sql-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-on-premises-sql-server.md
@@ -30,7 +30,7 @@ SQL server on-premises data source supports:
### Known limitations
-Azure Purview doesn't support scanning of [views](https://docs.microsoft.com/sql/relational-databases/views/views?view=sql-server-ver15) in SQL Server.
+Azure Purview doesn't support scanning of [views](/sql/relational-databases/views/views) in SQL Server.
## Prerequisites
security-center https://docs.microsoft.com/en-us/azure/security-center/upcoming-changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
@@ -10,7 +10,7 @@ ms.devlang: na
ms.topic: overview
ms.tgt_pltfrm: na
ms.workload: na
-ms.date: 01/05/2021
+ms.date: 01/18/2021
ms.author: memildin
---
@@ -27,11 +27,25 @@ If you're looking for the latest release notes, you'll find them in the [What's
## Planned changes
+- [Two recommendations from "Apply system updates" security control being deprecated](#two-recommendations-from-apply-system-updates-security-control-being-deprecated)
- [Enhancements to SQL data classification recommendation](#enhancements-to-sql-data-classification-recommendation)
- ["Not applicable" resources to be reported as "Compliant" in Azure Policy assessments](#not-applicable-resources-to-be-reported-as-compliant-in-azure-policy-assessments)
- [35 preview recommendations added to increase coverage of Azure Security Benchmark](#35-preview-recommendations-being-added-to-increase-coverage-of-azure-security-benchmark)
+### Two recommendations from "Apply system updates" security control being deprecated
+
+**Estimated date for change:** February 2021
+
+The following two recommendations are scheduled to be deprecated in February 2021:
+
+- **Your machines should be restarted to apply system updates**. This might result in a slight impact on your secure score.
+- **Monitoring agent should be installed on your machines**. This recommendation relates to on-premises machines only and some of its logic will be transferred to another recommendation, **Log Analytics agent health issues should be resolved on your machines**. This might result in a slight impact on your secure score.
+
+We recommend checking your continuous export and workflow automation configurations to see whether these recommendations are included in them. Also, any dashboards or other monitoring tools that might be using them should be updated accordingly.
+
+Learn more about these recommendations in the [security recommendations reference page](recommendations-reference.md).
+
### Enhancements to SQL data classification recommendation
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-cef-solution-config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cef-solution-config.md
@@ -37,6 +37,7 @@ If your security solution already has an existing connector, use the connector-s
- [Illusive Networks AMS](connect-illusive-attack-management-system.md)
- [One Identity Safeguard](connect-one-identity.md)
- [Palo Alto Networks](connect-paloalto.md)
+- [Thycotic Secret Server](connect-thycotic-secret-server.md)
- [Trend Micro Deep Security](connect-trend-micro.md)
- [Trend Micro TippingPoint](connect-trend-micro-tippingpoint.md)
- [WireX Network Forensics Platform](connect-wirex-systems.md)
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-cisco-ucs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cisco-ucs.md new file mode 100644
@@ -0,0 +1,74 @@
+---
+title: Connect Cisco Unified Computing System (UCS) data to Azure Sentinel | Microsoft Docs
+description: Learn how to use the Cisco UCS data connector to pull Cisco UCS logs into Azure Sentinel. View Cisco UCS data in workbooks, create alerts, and improve investigation.
+services: sentinel
+documentationcenter: na
+author: yelevin
+manager: rkarlin
+editor: ''
+
+ms.service: azure-sentinel
+ms.subservice: azure-sentinel
+ms.devlang: na
+ms.topic: how-to
+ms.tgt_pltfrm: na
+ms.workload: na
+ms.date: 01/17/2021
+ms.author: yelevin
+
+---
+# Connect your Cisco Unified Computing System (UCS) to Azure Sentinel
+
+> [!IMPORTANT]
+> The Cisco UCS connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article explains how to connect your Cisco Unified Computing System (UCS) appliance to Azure Sentinel. The Cisco UCS data connector allows you to easily connect your UCS logs with Azure Sentinel, so that you can view the data in workbooks, use it to create custom alerts, and incorporate it to improve investigation. Integration between Cisco UCS and Azure Sentinel makes use of Syslog.
+
+> [!NOTE]
+> Data will be stored in the geographic location of the workspace on which you are running Azure Sentinel.
+
+## Prerequisites
+
+- You must have read and write permission on the Azure Sentinel workspace.
+
+- Your Cisco UCS solution must be configured to export logs via Syslog.
+
+## Forward Cisco UCS logs to the Syslog agent
+
+Configure Cisco UCS to forward Syslog messages to your Azure Sentinel workspace via the Syslog agent.
+
+1. In the Azure Sentinel navigation menu, select **Data connectors**.
+
+1. From the **Data connectors** gallery, select the **Cisco UCS (Preview)** connector, and then **Open connector page**.
+
+1. Follow the instructions on the **Cisco UCS** connector page:
+
+ 1. Install and onboard the agent for Linux
+
+ - Choose an Azure Linux VM or a non-Azure Linux machine (physical or virtual).
+
+ 1. Configure the logs to be collected
+
+ - Select the facilities and severities in the workspace advanced settings configuration
+
+ 1. Configure and connect the Cisco UCS
+
+ - Follow [these instructions](https://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-manager/110265-setup-syslog-for-ucs.html#configsremotesyslog) to configure the Cisco UCS to forward syslog. For the remote server, use the IP address of the Linux machine you installed the Linux agent on.
+
+## Find your data
+
+After a successful connection is established, the data appears in Log Analytics under Syslog.
+
+See the **Next steps** tab in the connector page for some useful sample queries.
+
+## Validate connectivity
+
+It may take up to 20 minutes until your logs start to appear in Log Analytics.
+
+## Next steps
+
+In this document, you learned how to connect Cisco UCS to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+
+- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
+- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-data-sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-data-sources.md
@@ -73,8 +73,10 @@ The following data connection methods are supported by Azure Sentinel:
- [Okta SSO](connect-okta-single-sign-on.md)
- [Orca Security](connect-orca-security-alerts.md)
- [Perimeter 81 logs](connect-perimeter-81-logs.md)
+ - [Proofpoint On Demand (POD) Email Security](connect-proofpoint-pod.md)
- [Proofpoint TAP](connect-proofpoint-tap.md)
- [Qualys VM](connect-qualys-vm.md)
+ - [Salesforce Service Cloud](connect-salesforce-service-cloud.md)
- [Squadra Technologies secRMM](connect-squadra-secrmm.md)
- [Symantec ICDX](connect-symantec.md)
- [VMware Carbon Black Cloud Endpoint Standard](connect-vmware-carbon-black.md)
@@ -100,15 +102,19 @@ The following data connection methods are supported by Azure Sentinel:
- [Illusive Networks AMS](connect-illusive-attack-management-system.md)
- [One Identity Safeguard](connect-one-identity.md)
- [Palo Alto Networks](connect-paloalto.md)
+ - [Thycotic Secret Server](connect-thycotic-secret-server.md)
- [Trend Micro Deep Security](connect-trend-micro.md)
- [Trend Micro TippingPoint](connect-trend-micro-tippingpoint.md)
- [WireX Network Forensics Platform](connect-wirex-systems.md)
- [Zscaler](connect-zscaler.md)
- [Other CEF-based appliances](connect-common-event-format.md)
- **Firewalls, proxies, and endpoints - Syslog:**
+ - [Cisco Unified Computing System (UCS)](connect-cisco-ucs.md)
- [Infoblox NIOS](connect-infoblox.md)
+ - [Juniper SRX](connect-juniper-srx.md)
- [Pulse Connect Secure](connect-pulse-connect-secure.md)
- [Sophos XG](connect-sophos-xg-firewall.md)
+ - [Squid Proxy](connect-squid-proxy.md)
- [Symantec Proxy SG](connect-symantec-proxy-sg.md)
- [Symantec VIP](connect-symantec-vip.md)
- [Other Syslog-based appliances](connect-syslog.md)
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-infoblox https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-infoblox.md
@@ -31,7 +31,7 @@ This article explains how to connect your [Infoblox Network Identity Operating S
## Forward Infoblox logs to the Syslog agent
-Configure Infoblox to forward Syslog messages to your Azure workspace via the Syslog agent.
+Configure Infoblox to forward Syslog messages to your Azure Sentinel workspace via the Syslog agent.
1. In the Azure Sentinel portal, click **Data connectors** and select **Infoblox NIOS** connector.
@@ -45,7 +45,7 @@ After a successful connection is established, the data appears in Log Analytics
## Validate connectivity
-It may take upwards of 20 minutes until your logs start to appear in Log Analytics.
+It may take up to 20 minutes until your logs start to appear in Log Analytics.
## Next steps
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-juniper-srx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-juniper-srx.md new file mode 100644
@@ -0,0 +1,77 @@
+---
+title: Connect Juniper Networks SRX data to Azure Sentinel | Microsoft Docs
+description: Learn how to use the Juniper SRX data connector to pull Juniper SRX logs into Azure Sentinel. View Juniper SRX data in workbooks, create alerts, and improve investigation.
+services: sentinel
+documentationcenter: na
+author: yelevin
+manager: rkarlin
+editor: ''
+
+ms.service: azure-sentinel
+ms.subservice: azure-sentinel
+ms.devlang: na
+ms.topic: how-to
+ms.tgt_pltfrm: na
+ms.workload: na
+ms.date: 01/17/2021
+ms.author: yelevin
+
+---
+# Connect your Juniper SRX firewall to Azure Sentinel
+
+> [!IMPORTANT]
+> The Juniper SRX connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article explains how to connect your Juniper SRX firewall appliance to Azure Sentinel. The Juniper SRX data connector allows you to easily connect your SRX logs with Azure Sentinel, so that you can view the data in workbooks, use it to create custom alerts, and incorporate it to improve investigation. Integration between Juniper SRX and Azure Sentinel makes use of Syslog.
+
+> [!NOTE]
+> Data will be stored in the geographic location of the workspace on which you are running Azure Sentinel.
+
+## Prerequisites
+
+- You must have read and write permission on the Azure Sentinel workspace.
+
+- Your Juniper SRX solution must be configured to export logs via Syslog.
+
+## Forward Juniper SRX logs to the Syslog agent
+
+Configure Juniper SRX to forward Syslog messages to your Azure Sentinel workspace via the Syslog agent.
+
+1. In the Azure Sentinel navigation menu, select **Data connectors**.
+
+1. From the **Data connectors** gallery, select the **Juniper SRX (Preview)** connector, and then **Open connector page**.
+
+1. Follow the instructions on the **Juniper SRX** connector page:
+
+ 1. Install and onboard the agent for Linux
+
+ - Choose an Azure Linux VM or a non-Azure Linux machine (physical or virtual).
+
+ 1. Configure the logs to be collected
+
+ - Select the facilities and severities in the workspace advanced settings configuration
+
+ 1. Configure and connect the Juniper SRX
+
+ - Follow these instructions to configure the Juniper SRX to forward syslog.
+ - [Traffic logs (Security policy logs)](https://kb.juniper.net/InfoCenter/index?page=content&id=KB16509&actp=METADATA)
+ - [System logs](https://kb.juniper.net/InfoCenter/index?page=content&id=kb16502)
+ - For the remote server, use the IP address of the Linux machine you installed the Linux agent on.
+
+## Find your data
+
+After a successful connection is established, the data appears in Log Analytics under Syslog.
+
+See the **Next steps** tab in the connector page for some useful sample queries.
+
+## Validate connectivity
+
+It may take up to 20 minutes until your logs start to appear in Log Analytics.
+
+## Next steps
+
+In this document, you learned how to connect Juniper SRX to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+
+- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
+- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-proofpoint-pod https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-proofpoint-pod.md new file mode 100644
@@ -0,0 +1,68 @@
+---
+title: Connect Proofpoint On Demand Email Security data to Azure Sentinel | Microsoft Docs
+description: Learn how to use the Proofpoint On Demand Email Security data connector to pull POD Email Security logs into Azure Sentinel. View POD Email Security data in workbooks, create alerts, and improve investigation.
+services: sentinel
+documentationcenter: na
+author: yelevin
+manager: rkarlin
+editor: ''
+
+ms.service: azure-sentinel
+ms.subservice: azure-sentinel
+ms.devlang: na
+ms.topic: how-to
+ms.tgt_pltfrm: na
+ms.workload: na
+ms.date: 01/17/2021
+ms.author: yelevin
+
+---
+# Connect your Proofpoint On Demand Email Security (POD) solution to Azure Sentinel
+
+> [!IMPORTANT]
+> The Proofpoint On Demand Email Security connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article explains how to connect your Proofpoint On Demand Email Security appliance to Azure Sentinel. The POD data connector allows you to easily connect your POD logs with Azure Sentinel, so that you can view the data in workbooks, use it to create custom alerts, and incorporate it to improve investigation. Integration between Proofpoint On Demand Email Security and Azure Sentinel makes use of Websocket API.
+
+> [!NOTE]
+> Data will be stored in the geographic location of the workspace on which you are running Azure Sentinel.
+
+## Prerequisites
+
+- You must have read and write permission on the Azure Sentinel workspace.
+
+- You must have read permissions to shared keys for the workspace. [Learn more about workspace keys](../azure-monitor/platform/log-analytics-agent.md#workspace-id-and-key).
+
+- You must have read and write permissions to Azure Functions in order to create a Function App. [Learn more about Azure Functions](/azure/azure-functions/).
+
+- You must have the following Websocket API credentials: ProofpointClusterID, ProofpointToken. [Learn more about Websocket API](https://proofpointcommunities.force.com/community/s/article/Proofpoint-on-Demand-Pod-Log-API).
+
+## Configure and connect Proofpoint On Demand Email Security
+
+Proofpoint On Demand Email Security can integrate and export logs directly to Azure Sentinel.
+
+1. In the Azure Sentinel navigation menu, select **Data connectors**.
+
+1. From the **Data connectors** gallery, select **Proofpoint On Demand Email Security (Preview)** and then **Open connector page**.
+
+1. Follow the steps described in the **Configuration** section of the connector page.
+
+## Find your data
+
+After a successful connection is established, the data appears in **Logs**, under *Custom Logs*, in the following tables:
+- `ProofpointPOD_message_CL`
+- `ProofpointPOD_maillog_CL`
+
+See the **Next steps** tab in the connector page for some useful sample queries.
+
+## Validate connectivity
+
+It may take up to 60 minutes until your logs start to appear in Log Analytics.
+
+## Next steps
+
+In this document, you learned how to connect Proofpoint On Demand Email Security to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+
+- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
+- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-salesforce-service-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-salesforce-service-cloud.md new file mode 100644
@@ -0,0 +1,66 @@
+---
+title: Connect Salesforce Service Cloud data to Azure Sentinel | Microsoft Docs
+description: Learn how to use the Salesforce Service Cloud data connector to pull Salesforce logs into Azure Sentinel. View Salesforce data in workbooks, create alerts, and improve investigation.
+services: sentinel
+documentationcenter: na
+author: yelevin
+manager: rkarlin
+editor: ''
+
+ms.service: azure-sentinel
+ms.subservice: azure-sentinel
+ms.devlang: na
+ms.topic: how-to
+ms.tgt_pltfrm: na
+ms.workload: na
+ms.date: 01/17/2021
+ms.author: yelevin
+
+---
+# Connect your Salesforce Service Cloud to Azure Sentinel
+
+> [!IMPORTANT]
+> The Salesforce Service Cloud connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article explains how to connect your Salesforce Service Cloud solution to Azure Sentinel. The Salesforce Service Cloud data connector allows you to easily connect your Salesforce data with Azure Sentinel, so that you can view it in workbooks, use it to create custom alerts, and incorporate it to improve investigation. Integration between Salesforce Service Cloud and Azure Sentinel makes use of REST API.
+
+> [!NOTE]
+> Data will be stored in the geographic location of the workspace on which you are running Azure Sentinel.
+
+## Prerequisites
+
+- You must have read and write permission on the Azure Sentinel workspace.
+
+- You must have read permissions to shared keys for the workspace. [Learn more about workspace keys](../azure-monitor/platform/log-analytics-agent.md#workspace-id-and-key).
+
+- You must have read and write permissions to Azure Functions in order to create a Function App. [Learn more about Azure Functions](/azure/azure-functions/).
+
+- You must have the following Salesforce REST API credentials: **Salesforce API Username**, **Salesforce API Password**, **Salesforce Security Token**, **Salesforce Consumer Key**, **Salesforce Consumer Secret**. [Learn more about Salesforce REST API](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/quickstart.htm).
+
+## Configure and connect Salesforce Service Cloud
+
+Salesforce Service Cloud can integrate and export logs directly to Azure Sentinel.
+
+1. In the Azure Sentinel navigation menu, select **Data connectors**.
+
+1. From the **Data connectors** gallery, select **Salesforce Service Cloud (Preview)** and then **Open connector page**.
+
+1. Follow the steps described in the **Configuration** section of the connector page.
+
+## Find your data
+
+After a successful connection is established, the data appears in **Logs**, under the **CustomLogs** section, in the `SalesforceServiceCloud_CL` table.
+
+See the **Next steps** tab in the connector page for some useful sample queries.
+
+## Validate connectivity
+
+It may take up to 20 minutes until your logs start to appear in Log Analytics.
+
+## Next steps
+
+In this document, you learned how to connect Salesforce Service Cloud to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+
+- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
+- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-squid-proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-squid-proxy.md new file mode 100644
@@ -0,0 +1,68 @@
+---
+title: Connect Squid Proxy data to Azure Sentinel | Microsoft Docs
+description: Learn how to use the Squid Proxy data connector to pull Squid Proxy logs into Azure Sentinel. View Squid Proxy data in workbooks, create alerts, and improve investigation.
+services: sentinel
+documentationcenter: na
+author: yelevin
+manager: rkarlin
+editor: ''
+
+ms.service: azure-sentinel
+ms.subservice: azure-sentinel
+ms.devlang: na
+ms.topic: how-to
+ms.tgt_pltfrm: na
+ms.workload: na
+ms.date: 01/17/2021
+ms.author: yelevin
+
+---
+# Connect your Squid Proxy to Azure Sentinel
+
+> [!IMPORTANT]
+> The Squid Proxy connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article explains how to connect your Squid Proxy appliance to Azure Sentinel. The Squid Proxy data connector allows you to easily connect your Squid logs with Azure Sentinel, so that you can view the data in workbooks, use it to create custom alerts, and incorporate it to improve investigation. Integration between Squid Proxy and Azure Sentinel makes use of Syslog.
+
+> [!NOTE]
+> Data will be stored in the geographic location of the workspace on which you are running Azure Sentinel.
+
+## Prerequisites
+
+- You must have read and write permission on the Azure Sentinel workspace.
+
+## Forward Squid Proxy logs to the Syslog agent
+
+Configure Squid Proxy to forward Syslog messages to your Azure Sentinel workspace via the Syslog agent.
+
+1. In the Azure Sentinel navigation menu, select **Data connectors**.
+
+1. From the **Data connectors** gallery, select the **Squid Proxy (Preview)** connector, and then **Open connector page**.
+
+1. Follow the instructions on the **Squid Proxy** connector page:
+
+ 1. Install and onboard the agent for Linux
+
+ - Choose an Azure Linux VM or a non-Azure Linux machine (physical or virtual).
+
+ 1. Configure the logs to be collected
+
+ - In the workspace advanced settings, add a custom log type, upload a sample file, and configure as directed.
+
+## Find your data
+
+After a successful connection is established, the data appears in **Logs**, under **Custom Logs**, in the `SquidProxy_CL` table.
+
+See the **Next steps** tab in the connector page for some useful sample queries.
+
+## Validate connectivity
+
+It may take up to 20 minutes until your logs start to appear in Log Analytics.
+
+## Next steps
+
+In this document, you learned how to connect Squid Proxy to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+
+- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
+- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-thycotic-secret-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-thycotic-secret-server.md new file mode 100644
@@ -0,0 +1,79 @@
+---
+title: Connect Thycotic Secret Server to Azure Sentinel | Microsoft Docs
+description: Learn how to use the Thycotic Secret Server data connector to pull Thycotic Secret Server logs into Azure Sentinel. View Thycotic Secret Server data in workbooks, create alerts, and improve investigation.
+services: sentinel
+documentationcenter: na
+author: yelevin
+manager: rkarlin
+editor: ''
+
+ms.service: azure-sentinel
+ms.subservice: azure-sentinel
+ms.devlang: na
+ms.topic: how-to
+ms.tgt_pltfrm: na
+ms.workload: na
+ms.date: 12/13/2020
+ms.author: yelevin
+
+---
+# Connect your Thycotic Secret Server to Azure Sentinel
+
+> [!IMPORTANT]
+> The Thycotic Secret Server connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article explains how to connect your Thycotic Secret Server appliance to Azure Sentinel. The Thycotic Secret Server data connector allows you to easily connect your Thycotic Secret Server logs with Azure Sentinel, so that you can view the data in workbooks, use it to create custom alerts, and incorporate it to improve investigation. Integration between Thycotic and Azure Sentinel makes use of the CEF Data Connector to properly parse and display Secret Server Syslog messages.
+
+> [!NOTE]
+> Data will be stored in the geographic location of the workspace on which you are running Azure Sentinel.
+
+## Prerequisites
+
+- You must have read and write permissions on your Azure Sentinel workspace.
+
+- You must have read permissions to shared keys for the workspace.
+
+- Your Thycotic Secret Server must be configured to export logs via Syslog.
+
+## Send Thycotic Secret Server logs to Azure Sentinel
+
+To get its logs into Azure Sentinel, configure your Thycotic Secret Server to send Syslog messages in CEF format to your Linux-based log forwarding server (running rsyslog or syslog-ng). This server will have the Log Analytics agent installed on it, and the agent forwards the logs to your Azure Sentinel workspace.
+
+1. In the Azure Sentinel navigation menu, select **Data connectors**.
+
+1. From the **Data connectors** gallery, select **Thycotic Secret Server (Preview)**, and then **Open connector page**.
+
+1. Follow the instructions in the **Instructions** tab, under **Configuration**:
+
+ 1. Under **1. Linux Syslog agent configuration** - Do this step if you don't already have a log forwarder running, or if you need another one. See [STEP 1: Deploy the log forwarder](connect-cef-agent.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+
+ 1. Under **2. Forward Common Event Format (CEF) logs to Syslog agent** - Follow Thycotic's instructions to [configure Secret Server](https://thy.center/ss/link/syslog). This configuration should include the following elements:
+ - Log destination – the hostname and/or IP address of your log forwarding server
+ - Protocol and port – **TCP 514** (if recommended otherwise, be sure to make the parallel change in the syslog daemon on your log forwarding server)
+ - Log format – CEF
+ - Log types – all available
+
+ 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [STEP 3: Validate connectivity](connect-cef-verify.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+
+ It may take up to 20 minutes until your logs start to appear in Log Analytics.
+
+## Find your data
+
+After a successful connection is established, the data appears in **Logs**, under the **Azure Sentinel** section, in the *CommonSecurityLog* table.
+
+To query Thycotic Secret Server data in Log Analytics, copy the following into the query window, applying other filters as you choose:
+
+```kusto
+CommonSecurityLog
+| where DeviceVendor == "Thycotic Software"
+```
+
+See the **Next steps** tab in the connector page for some useful workbooks and query samples.
+
+## Next steps
+
+In this document, you learned how to connect Thycotic Secret Server to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+
+- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
+- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
\ No newline at end of file
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-trend-micro-tippingpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-trend-micro-tippingpoint.md
@@ -44,15 +44,15 @@ To get its logs into Azure Sentinel, configure your TippingPoint TPS solution to
1. Follow the instructions in the **Instructions** tab, under **Configuration**:
- 1. **1. Linux Syslog agent configuration** - Do this step if you don't already have a log forwarder running, or if you need another one. See [STEP 1: Deploy the log forwarder](connect-cef-agent.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+ 1. Under **1. Linux Syslog agent configuration** - Do this step if you don't already have a log forwarder running, or if you need another one. See [STEP 1: Deploy the log forwarder](connect-cef-agent.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
- 1. **2. Forward Trend Micro TippingPoint SMS logs to Syslog agent** - This configuration should include the following elements:
+ 1. Under **2. Forward Trend Micro TippingPoint SMS logs to Syslog agent** - This configuration should include the following elements:
- Log destination – the hostname and/or IP address of your log forwarding server
- Protocol and port – **TCP 514** (if recommended otherwise, be sure to make the parallel change in the syslog daemon on your log forwarding server)
- Log format – **ArcSight CEF Format v4.2**
- Log types – all available
- 1. **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [STEP 3: Validate connectivity](connect-cef-verify.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+ 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [STEP 3: Validate connectivity](connect-cef-verify.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
It may take up to 20 minutes until your logs start to appear in Log Analytics.
stream-analytics https://docs.microsoft.com/en-us/azure/stream-analytics/sql-database-output-managed-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/sql-database-output-managed-identity.md
@@ -118,7 +118,7 @@ Once you've created a contained database user and given access to Azure services
Once you've created a contained database user and given access to Azure services in the portal as described in the previous section, your Stream Analytics job has permission to **CONNECT** to your Azure Synapse database resource via managed identity. We recommend that you further grant the SELECT, INSERT, and ADMINISTER DATABASE BULK OPERATIONS permissions to the Stream Analytics job, as those will be needed later in the Stream Analytics workflow. The **SELECT** permission allows the job to test its connection to the table in the Azure Synapse database. The **INSERT** and **ADMINISTER DATABASE BULK OPERATIONS** permissions allow testing end-to-end Stream Analytics queries once you have configured an input and the Azure Synapse database output.
-To grant the ADMINISTER DATABASE BULK OPERATIONS permission, you will need to grant all permissions that are labeled as **CONTROL** under [Implied by database permission](/sql/t-sql/statements/grant-database-permissions-transact-sql?view=azure-sqldw-latest#remarks) to the Stream Analytics job. You need this permission because the Stream Analytics job performs the COPY statement, which requires [ADMINISTER DATABASE BULK OPERATIONS and INSERT](/sql/t-sql/statements/copy-into-transact-sql).
+To grant the ADMINISTER DATABASE BULK OPERATIONS permission, you will need to grant all permissions that are labeled as **CONTROL** under [Implied by database permission](/sql/t-sql/statements/grant-database-permissions-transact-sql?view=azure-sqldw-latest&preserve-view=true#remarks) to the Stream Analytics job. You need this permission because the Stream Analytics job performs the COPY statement, which requires [ADMINISTER DATABASE BULK OPERATIONS and INSERT](/sql/t-sql/statements/copy-into-transact-sql).
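As a sketch only, if the contained database user created for the job's managed identity is named [ASA-JOB-NAME] (a placeholder), the grants might look like this:

```sql
-- Grant the Stream Analytics job's identity the permissions
-- needed for connection tests and COPY-based output.
GRANT CONNECT TO [ASA-JOB-NAME];
GRANT SELECT, INSERT TO [ASA-JOB-NAME];
GRANT ADMINISTER DATABASE BULK OPERATIONS TO [ASA-JOB-NAME];
```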
---
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/overview-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/overview-features.md
@@ -50,7 +50,7 @@ Query languages used in Synapse SQL can have different supported features depend
| **INSERT statement** | Yes | No |
| **UPDATE statement** | Yes | No |
| **DELETE statement** | Yes | No |
-| **MERGE statement** | No | No |
+| **MERGE statement** | Yes ([preview](https://docs.microsoft.com/sql/t-sql/statements/merge-transact-sql?view=sql-server-ver15)) | No |
| **[Transactions](develop-transactions.md)** | Yes | Yes, applicable on meta-data objects. |
| **[Labels](develop-label.md)** | Yes | No |
| **Data load** | Yes. Preferred utility is [COPY](/sql/t-sql/statements/copy-into-transact-sql?toc=/azure/synapse-analytics/toc.json&bc=/azure/synapse-analytics/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) statement, but the system supports both BULK load (BCP) and [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?toc=/azure/synapse-analytics/toc.json&bc=/azure/synapse-analytics/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) for data loading. | No |
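Since the matrix now lists MERGE as a preview capability of dedicated SQL pools, a minimal hypothetical upsert (table and column names are invented for illustration):

```sql
-- Insert new customers and update e-mail addresses of existing ones.
MERGE dbo.DimCustomer AS t
USING dbo.StageCustomer AS s
    ON t.CustomerId = s.CustomerId
WHEN MATCHED THEN
    UPDATE SET t.Email = s.Email
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerId, Email) VALUES (s.CustomerId, s.Email);
```

Check the linked preview documentation for current restrictions before relying on MERGE in a dedicated pool.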
@@ -154,4 +154,4 @@ Data that is analyzed can be stored in various storage formats. The following ta
Additional information on best practices for dedicated SQL pool and serverless SQL pool can be found in the following articles:

- [Best Practices for dedicated SQL pool](best-practices-sql-pool.md)
-- [Best practices for serverless SQL pool](best-practices-sql-on-demand.md)
\ No newline at end of file
+- [Best practices for serverless SQL pool](best-practices-sql-on-demand.md)