Updates from: 09/01/2021 03:08:01
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Userjourneys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/userjourneys.md
Previously updated : 06/27/2021 Last updated : 08/31/2021
The **OrchestrationStep** element can contain the following elements:
Orchestration steps can be conditionally executed based on preconditions defined in the orchestration step. The `Preconditions` element contains a list of preconditions to evaluate. When the precondition evaluation is satisfied, the associated orchestration step skips to the next orchestration step.
-Each precondition evaluates a single claim. There are two types of preconditions:
- 
-- **Claims exist** - Specifies that the actions should be performed if the specified claims exist in the user's current claim bag.-- **Claim equals** - Specifies that the actions should be performed if the specified claim exists, and its value is equal to the specified value. The check performs a case-sensitive ordinal comparison. When checking Boolean claim type, use `True`, or `False`.-
-Azure AD B2C evaluates the preconditions in list order. The oder-based preconditions allows you set the order in which the preconditions are applied. The first precondition that satisfied overrides all the subsequent preconditions. The orchestration step is executed only if all of the preconditions are not satisfied.
+Azure AD B2C evaluates the preconditions in list order. The order-based preconditions allow you to set the order in which the preconditions are applied. The first precondition that is satisfied overrides all the subsequent preconditions. The orchestration step is executed only if none of the preconditions is satisfied.
The **Preconditions** element contains the following element:
The **Precondition** element contains the following elements:
| Element | Occurrences | Description |
| - | -- | -- |
| Value | 1:2 | The identifier of a claim type. The claim is already defined in the claims schema section in the policy file, or parent policy file. When the precondition is of type `ClaimEquals`, a second `Value` element contains the value to be checked. |
| Action | 1:1 | The action that should be performed if the precondition evaluation is satisfied. Possible value: `SkipThisOrchestrationStep`. The associated orchestration step skips to the next one. |
+
+Each precondition evaluates a single claim. There are two types of preconditions:
+ 
+- **ClaimsExist** - Specifies that the actions should be performed if the specified claims exist in the user's current claim bag.
+- **ClaimEquals** - Specifies that the actions should be performed if the specified claim exists, and its value is equal to the specified value. The check performs a case-sensitive ordinal comparison. When checking a Boolean claim type, use `True` or `False`.
+
+ If the claim is null or uninitialized, the precondition is ignored, whether `ExecuteActionsIf` is `true` or `false`. As a best practice, check both that the claim exists and that it equals a value.
+
+An example scenario would be to challenge the user for MFA if the user has `MfaPreference` set to `Phone`. To perform this conditional logic, check if the `MfaPreference` claim exists, and also check that the claim value equals `Phone`. The following XML demonstrates how to implement this logic with preconditions.
+ 
+```xml
+<Preconditions>
+ <!-- Skip this orchestration step if MfaPreference doesn't exist. -->
+ <Precondition Type="ClaimsExist" ExecuteActionsIf="false">
+ <Value>MfaPreference</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ <!-- Skip this orchestration step if MfaPreference doesn't equal Phone. -->
+ <Precondition Type="ClaimEquals" ExecuteActionsIf="false">
+ <Value>MfaPreference</Value>
+ <Value>Phone</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+</Preconditions>
+```
#### Preconditions examples
active-directory-domain-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/overview.md
# What is Azure Active Directory Domain Services?
-Azure Active Directory Domain Services (AD DS) provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/NTLM authentication. You use these domain services without the need to deploy, manage, and patch domain controllers (DCs) in the cloud.
+Azure Active Directory Domain Services (Azure AD DS) provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/NTLM authentication. You use these domain services without the need to deploy, manage, and patch domain controllers (DCs) in the cloud.
An Azure AD DS managed domain lets you run legacy applications in the cloud that can't use modern authentication methods, or where you don't want directory lookups to always go back to an on-premises AD DS environment. You can lift and shift those legacy applications from your on-premises environment into a managed domain, without needing to manage the AD DS environment in the cloud.
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/customize-application-attributes.md
Use the steps below to provision roles for a user to your application. Note that
- **Example output (PATCH)**
- ```
+ ```json
"Operations": [
- {
- "op": "Add",
- "path": "roles",
- "value": [
- {
- "value": "{\"id\":\"06b07648-ecfe-589f-9d2f-6325724a46ee\",\"value\":\"25\",\"displayName\":\"Role1234\"}"
- }
- ]
+ {
+ "op": "Add",
+ "path": "roles",
+ "value": [
+ {
+ "value": "{\"id\":\"06b07648-ecfe-589f-9d2f-6325724a46ee\",\"value\":\"25\",\"displayName\":\"Role1234\"}"
+ }
+ ]
 ```

The request formats for PATCH and POST differ. To ensure that POST and PATCH are sent in the same format, you can use the feature flag described [here](./application-provisioning-config-problem-scim-compatibility.md#flags-to-alter-the-scim-behavior).
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-nps-extension.md
Previously updated : 08/17/2021 Last updated : 08/20/2021
When you use the NPS extension for Azure AD Multi-Factor Authentication, the aut
1. **NAS/VPN Server** receives requests from VPN clients and converts them into RADIUS requests to NPS servers.
2. **NPS Server** connects to Active Directory Domain Services (AD DS) to perform the primary authentication for the RADIUS requests and, upon success, passes the request to any installed extensions.
3. **NPS Extension** triggers a request to Azure AD Multi-Factor Authentication for the secondary authentication. Once the extension receives the response, and if the MFA challenge succeeds, it completes the authentication request by providing the NPS server with security tokens that include an MFA claim, issued by Azure STS.
-4. **Azure AD MFA** communicates with Azure Active Directory (Azure AD) to retrieve the user's details and performs the secondary authentication using a verification method configured to the user.
+ >[!NOTE]
+ >Users must have access to their default authentication method to complete the MFA requirement. They cannot choose an alternative method. Their default authentication method will be used even if it's been disabled in the tenant authentication methods and MFA policies.
+1. **Azure AD MFA** communicates with Azure Active Directory (Azure AD) to retrieve the user's details and performs the secondary authentication using a verification method configured to the user.
The following diagram illustrates this high-level authentication request flow:
active-directory How To Gmsa Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-gmsa-cmdlets.md
The following prerequisites are required to use these cmdlets.
|PasswordWriteBack|See [PasswordWriteBack](../../active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md#permissions-for-password-writeback) permissions for Azure AD Connect|
|HybridExchangePermissions|See [HybridExchangePermissions](../../active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md#permissions-for-exchange-hybrid-deployment) permissions for Azure AD Connect|
|ExchangeMailPublicFolderPermissions| See [ExchangeMailPublicFolderPermissions](../../active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md#permissions-for-exchange-mail-public-folders) permissions for Azure AD Connect|
-|CloudHR| Applies 'Full control' on 'Descendant User objects' and 'Create/delete User objects' on 'This object and all descendant objects'|
+|CloudHR| Applies 'Create/delete User objects' on 'This object and all descendant objects'|
|All|Adds all the above permissions.|

You can use AADCloudSyncPermissions in one of two ways:
active-directory How To Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-prerequisites.md
You need the following to use Azure AD Connect cloud sync:
A group Managed Service Account is a managed domain account that provides automatic password management, simplified service principal name (SPN) management, the ability to delegate the management to other administrators, and also extends this functionality over multiple servers. Azure AD Connect Cloud Sync supports and uses a gMSA for running the agent. You will be prompted for administrative credentials during setup in order to create this account. The account will appear as (domain\provAgentgMSA$). For more information on a gMSA, see [Group Managed Service Accounts](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview).

### Prerequisites for gMSA:
-1. The Active Directory schema in the gMSA domain's forest needs to be updated to Windows Server 2016.
+1. The Active Directory schema in the gMSA domain's forest needs to be updated to Windows Server 2012 or later.
2. [PowerShell RSAT modules](/windows-server/remote/remote-server-administration-tools) on a domain controller
-3. At least one domain controller in the domain must be running Windows Server 2016.
+3. At least one domain controller in the domain must be running Windows Server 2012 or later.
4. A domain-joined server where the agent is being installed needs to be Windows Server 2016 or later.

### Custom gMSA account
When using OU scoping filter
- You can only sync up to 59 separate OUs for a given configuration.
- Nested OUs are supported (that is, you **can** sync an OU that has 130 nested OUs, but you **cannot** sync 60 separate OUs in the same configuration).
+### Password Hash Sync
+- Using password hash sync with InetOrgPerson is not supported.
+ ## Next steps
active-directory Migrate Android Adal Msal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/migrate-android-adal-msal.md
In MSAL, there's a hierarchy of exceptions, and each has its own set of associat
| If you're catching these errors in ADAL... | ...catch these MSAL exceptions: |
|--|--|
| *No equivalent ADALError* | `MsalArgumentException` |
-| <ul><li>`ADALError.ANDROIDKEYSTORE_FAILED`<li>`ADALError.AUTH_FAILED_USER_MISMATCH`<li>`ADALError.DECRYPTION_FAILED`<li>`ADALError.DEVELOPER_AUTHORITY_CAN_NOT_BE_VALIDED`<li>`ADALError.EVELOPER_AUTHORITY_IS_NOT_VALID_INSTANCE`<li>`ADALError.DEVELOPER_AUTHORITY_IS_NOT_VALID_URL`<li>`ADALError.DEVICE_CONNECTION_IS_NOT_AVAILABLE`<li>`ADALError.DEVICE_NO_SUCH_ALGORITHM`<li>`ADALError.ENCODING_IS_NOT_SUPPORTED`<li>`ADALError.ENCRYPTION_ERROR`<li>`ADALError.IO_EXCEPTION`<li>`ADALError.JSON_PARSE_ERROR`<li>`ADALError.NO_NETWORK_CONNECTION_POWER_OPTIMIZATION`<li>`ADALError.SOCKET_TIMEOUT_EXCEPTION`</ul> | `MsalClientException` |
+| <ul><li>`ADALError.ANDROIDKEYSTORE_FAILED`<li>`ADALError.AUTH_FAILED_USER_MISMATCH`<li>`ADALError.DECRYPTION_FAILED`<li>`ADALError.DEVELOPER_AUTHORITY_CAN_NOT_BE_VALIDED`<li>`ADALError.DEVELOPER_AUTHORITY_IS_NOT_VALID_INSTANCE`<li>`ADALError.DEVELOPER_AUTHORITY_IS_NOT_VALID_URL`<li>`ADALError.DEVICE_CONNECTION_IS_NOT_AVAILABLE`<li>`ADALError.DEVICE_NO_SUCH_ALGORITHM`<li>`ADALError.ENCODING_IS_NOT_SUPPORTED`<li>`ADALError.ENCRYPTION_ERROR`<li>`ADALError.IO_EXCEPTION`<li>`ADALError.JSON_PARSE_ERROR`<li>`ADALError.NO_NETWORK_CONNECTION_POWER_OPTIMIZATION`<li>`ADALError.SOCKET_TIMEOUT_EXCEPTION`</ul> | `MsalClientException` |
| *No equivalent ADALError* | `MsalDeclinedScopeException` |
| <ul><li>`ADALError.APP_PACKAGE_NAME_NOT_FOUND`<li>`ADALError.BROKER_APP_VERIFICATION_FAILED`<li>`ADALError.PACKAGE_NAME_NOT_FOUND`</ul> | `MsalException` |
| *No equivalent ADALError* | `MsalIntuneAppProtectionPolicyRequiredException` |
active-directory Quickstart V2 Uwp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-uwp.md
See [How the sample works](#how-the-sample-works) for an illustration.
> 1. Create and then select a new self-signed test certificate for the package:
> 1. In the **Solution Explorer**, double-click the *Package.appxmanifest* file.
> 1. Select **Packaging** > **Choose Certificate...** > **Create...**.
-> 1. Enter a password and then select **OK**.
-> 1. Select **Select from file...**, and then select the *Native_UWP_V2_TemporaryKey.pfx* file you just created, and select **OK**.
-> 1. Close the *Package.appxmanifest* file (select **OK** if prompted to save the file).
+> 1. Enter a password and then select **OK**. A certificate called *Native_UWP_V2_TemporaryKey.pfx* is created.
+> 1. Select **OK** to dismiss the **Choose a certificate** dialog, and then verify that you see *Native_UWP_V2_TemporaryKey.pfx* in Solution Explorer.
> 1. In the **Solution Explorer**, right-click the **Native_UWP_V2** project and select **Properties**.
> 1. Select **Signing**, and then select the .pfx you created in the **Choose a strong name key file** drop-down.
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
To add authentication with the Microsoft identity platform (formerly Azure AD v2
> [!NOTE]
> If you want to start directly with the new ASP.NET Core templates for Microsoft identity platform, which leverage Microsoft.Identity.Web, you can download a preview NuGet package containing project templates for .NET Core 3.1 and .NET 5.0. Then, once installed, you can directly instantiate ASP.NET Core web applications (MVC or Blazor). See [Microsoft.Identity.Web web app project templates](https://aka.ms/ms-id-web/webapp-project-templates) for details. This is the simplest approach as it will do all the steps below for you.
>
-> If you prefer to start your project with the current default ASP.NET Core web project within Visual Studio or by using `dotnet new mvc --auth SingleAuth` or `dotnet new webapp --auth SingleAuth`, you'll see code like the following:
+> If you prefer to start your project with the current default ASP.NET Core web project within Visual Studio or by using `dotnet new mvc --auth SingleOrg` or `dotnet new webapp --auth SingleOrg`, you'll see code like the following:
>
> ```c#
> services.AddAuthentication(AzureADDefaults.AuthenticationScheme)
active-directory Entitlement Management Access Package Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-resources.md
If you need to add resources to an access package, you should check whether the
## Add resource roles
-A resource role is a collection of permissions associated with a resource. The way you make resources available for users to request is by adding resource roles to your access package. You can add resource roles for groups, teams, applications, and SharePoint sites.
+A resource role is a collection of permissions associated with a resource. The way you make resources available for users to request is by adding resource roles from each of the catalog's resources to your access package. You can add resource roles that are provided by groups, teams, applications, and SharePoint sites.
**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
Once an application role is part of an access package:
Here are some considerations when selecting an application:

- Applications may also have groups assigned to their roles. You can choose to add a group in place of an application role in an access package; however, the application will not be visible to the user as part of the access package in the My Access portal.
+- The Azure portal may also show service principals for services that cannot be selected as applications. In particular, **Exchange Online** and **SharePoint Online** are services, not applications that have resource roles in the directory, so they cannot be included in an access package. Instead, use group-based licensing to establish an appropriate license for a user who needs access to those services.
+- Applications which only support Personal Microsoft Account users for authentication, and do not support organizational accounts in your directory, do not have application roles and cannot be added to access package catalogs.
1. On the **Add resource roles to access package** page, click **Applications** to open the Select applications pane.
active-directory Entitlement Management Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-troubleshoot.md
This article describes some items you should check to help you troubleshoot Azur
## Administration
-* If you get an access denied message when configuring entitlement management, and you are a Global administrator, ensure that your directory has an [Azure AD Premium P2 (or EMS E5) license](entitlement-management-overview.md#license-requirements).
+* If you get an access denied message when configuring entitlement management, and you are a Global administrator, ensure that your directory has an [Azure AD Premium P2 (or EMS E5) license](entitlement-management-overview.md#license-requirements). If you've recently renewed an expired Azure AD Premium P2 subscription, then it may take 8 hours for this license renewal to be visible.
+
+* If your tenant's Azure AD Premium P2 license has expired, then you will not be able to process new access requests or perform access reviews.
* If you get an access denied message when creating or viewing access packages, and you are a member of a Catalog creator group, you must [create a catalog](entitlement-management-catalog-create.md) prior to creating your first access package.
This article describes some items you should check to help you troubleshoot Azur
Note that the Azure portal may also show service principals for services that cannot be selected as applications. In particular, **Exchange Online** and **SharePoint Online** are services, not applications that have resource roles in the directory, so they cannot be included in an access package. Instead, use group-based licensing to establish an appropriate license for a user who needs access to those services.
+* Applications which only support Personal Microsoft Account users for authentication, and do not support organizational accounts in your directory, do not have application roles and cannot be added to access package catalogs.
* For a group to be a resource in an access package, it must be modifiable in Azure AD. Groups that originate in an on-premises Active Directory cannot be assigned as resources because their owner or member attributes cannot be changed in Azure AD. Groups that originate in Exchange Online as Distribution groups cannot be modified in Azure AD either.
* SharePoint Online document libraries and individual documents cannot be added as resources. Instead, create an [Azure AD security group](../fundamentals/active-directory-groups-create-azure-portal.md), include that group and a site role in the access package, and in SharePoint Online use that group to control access to the document library or document.
active-directory How To Connect Configure Ad Ds Connector Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md
The following table provides a summary of the permissions required on AD objects
| Feature | Permissions |
| --- | --- |
| ms-DS-ConsistencyGuid feature | Read and Write permissions to the ms-DS-ConsistencyGuid attribute documented in [Design Concepts - Using ms-DS-ConsistencyGuid as sourceAnchor](plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor). |
-| Password hash sync |<li>Replicate Directory Changes</li> <li>Replicate Directory Changes All |
+| Password hash sync |<li>Replicate Directory Changes - required for basic read only</li> <li>Replicate Directory Changes All |
| Exchange hybrid deployment | Read and Write permissions to the attributes documented in [Exchange hybrid writeback](reference-connect-sync-attributes-synchronized.md#exchange-hybrid-writeback) for users, groups, and contacts. |
| Exchange Mail Public Folder | Read permissions to the attributes documented in [Exchange Mail Public Folder](reference-connect-sync-attributes-synchronized.md#exchange-mail-public-folder) for public folders. |
| Password writeback | Read and Write permissions to the attributes documented in [Getting started with password management](../authentication/tutorial-enable-sspr-writeback.md) for users. |
This cmdlet will set the following permissions:
|Allow |AD DS Connector Account |Read all properties |Descendant Group objects|
|Allow |AD DS Connector Account |Read all properties |Descendant User objects|
|Allow |AD DS Connector Account |Read all properties |Descendant Contact objects|
+|Allow|AD DS Connector Account|Replicating Directory Changes|This object only (Domain root)|
### Configure MS-DS-Consistency-Guid Permissions
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
Previously updated : 02/27/2019 Last updated : 08/31/2021
Azure Active Directory can provide a users group membership information in token
>
> - Support for use of sAMAccountName and security identifier (SID) attributes synced from on-premises is designed to enable moving existing applications from AD FS and other identity providers. Groups managed in Azure AD do not contain the attributes necessary to emit these claims.
> - In larger organizations, the number of groups a user is a member of may exceed the limit that Azure Active Directory will add to a token: 150 groups for a SAML token, and 200 for a JWT. This can lead to unpredictable results. If your users have large numbers of group memberships, we recommend using the option to restrict the groups emitted in claims to the relevant groups for the application.
+> - Group claims have a 5-group limit if the token is issued through the implicit flow. Tokens requested via the implicit flow will have a "hasgroups":true claim only if the user is in more than 5 groups.
> - For new application development, or in cases where the application can be configured for it, and where nested group support isn't required, we recommend that in-app authorization is based on application roles rather than groups. This limits the amount of information that needs to go into the token, is more secure, and separates user assignment from app configuration.

## Group claims for applications migrating from AD FS and other identity providers
active-directory Pim Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-email-notifications.md
Title: Email notifications in PIM - Azure Active Directory | Microsoft Docs
+ Title: Email notifications in Privileged Identity Management (PIM) - Azure Active Directory | Microsoft Docs
description: Describes email notifications in Azure AD Privileged Identity Management (PIM). documentationcenter: ''
na
ms.devlang: na Previously updated : 06/30/2021 Last updated : 08/24/2021
These emails include a **PIM** prefix in the subject line. Here's an example:
- PIM: Alain Charon was permanently assigned the Backup Reader role
+## Email timing for activation approvals
+
+When users activate their role and the role setting requires approval, approvers will receive two emails for each approval:
+
+- Request to approve or deny the user's activation request (sent by the request approval engine)
+- The user's request is approved (sent by the request approval engine)
+
+Also, Global Administrators and Privileged Role Administrators receive an email for each approval:
+
+- The user's role is activated (sent by Privileged Identity Management)
+
+The first two emails sent by the request approval engine can be delayed. Currently, 90% of emails take three to ten minutes, but for 1% of customers it can take much longer, up to fifteen minutes.
+
+If an approval request is approved in the Azure portal before the first email is sent, the first email will no longer be triggered and other approvers won't be notified by email of the approval request. It might appear as if they didn't get an email, but it's the expected behavior.
+
## Notifications for Azure AD roles

Privileged Identity Management sends emails when the following events occur for Azure AD roles:
Privileged Identity Management sends emails when the following events occur for
- When a privileged role activation request is completed
- When Azure AD Privileged Identity Management is enabled
-Who receives these emails for Azure AD roles depends on your role, the event, and the notifications setting:
+Who receives these emails for Azure AD roles depends on your role, the event, and the notifications setting.
| User | Role activation is pending approval | Role activation request is completed | PIM is enabled |
| --- | --- | --- | --- |
The email includes:
The **Overview of your top roles** section lists the top five roles in your organization based on total number of permanent and eligible administrators for each role. The **Take action** link opens [Discovery & Insights](pim-security-wizard.md) where you can convert permanent administrators to eligible administrators in batches.
-## Email timing for activation approvals
-
-When users activate their role and the role setting requires approval, approvers will receive two emails for each approval:
--- Request to approve or deny the user's activation request (sent by the request approval engine)-- The user's request is approved (sent by the request approval engine)-
-Also, Global Administrators and Privileged Role Administrators receive an email for each approval:
--- The user's role is activated (sent by Privileged Identity Management)-
-The first two emails sent by the request approval engine can be delayed. Currently, 90% of emails take three to ten minutes, but for 1% customers it can be much longer, up to fifteen minutes.
-
-If an approval request is approved in the Azure portal before the first email is sent, the first email will no longer be triggered and other approvers won't be notified by email of the approval request. It might appear as if the they didn't get an email but it's the expected behavior.
-
-## PIM emails for Azure resource roles
+## Notifications for Azure resource roles
Privileged Identity Management sends emails to Owners and User Access Administrators when the following events occur for Azure resource roles:
The following shows an example email that is sent when a user is assigned an Azu
![New Privileged Identity Management email for Azure resource roles](./media/pim-email-notifications/email-resources-new.png)
+## Notifications for Privileged Access groups
+
+Privileged Identity Management sends emails to Owners only when the following events occur for Privileged Access group assignments:
+
+- When an Owner or Member role assignment is pending approval
+- When an Owner or Member role is assigned
+- When an Owner or Member role is soon to expire
+- When an Owner or Member role is eligible to extend
+- When an Owner or Member role is being renewed by an end user
+- When an Owner or Member role activation request is completed
+
+Privileged Identity Management sends emails to end users when the following events occur for Privileged Access group role assignments:
+
+- When an Owner or Member role is assigned to the user
+- When a user's Owner or Member role is expired
+- When a user's Owner or Member role is extended
+- When a user's Owner or Member role activation request is completed
+
+
## Next steps

- [Configure Azure AD role settings in Privileged Identity Management](pim-how-to-change-default-settings.md)
advisor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/advisor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Advisor description: Sample Azure Resource Graph queries for Azure Advisor showing use of resource types and tables to access Azure Advisor related resources and properties. Previously updated : 08/27/2021 Last updated : 08/31/2021
api-management Api Management Howto Developer Portal Customize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-developer-portal-customize.md
Previously updated : 11/16/2020 Last updated : 08/31/2021
Before you make your portal available to the visitors, you should personalize th
### Home page
-The default **Home** page is filled with placeholder content. You can either remove entire sections containing this content or keep the structure and adjust the elements one by one. Replace the generated text and images with your own and make sure the links point to desired locations.
+The default **Home** page is filled with placeholder content. You can either remove entire sections containing this content or keep the structure and adjust the elements one by one. Replace the generated text and images with your own and make sure the links point to desired locations. You can edit the structure and content of the home page by:
+* Dragging and dropping page elements to the desired placement on the site.
+* Selecting text and heading elements to edit and format content.
+* Verifying your buttons point to the right locations.
### Layouts

Replace the automatically generated logo in the navigation bar with your own image.
+1. In the developer portal, select the default **Contoso** logo in the top left of the navigation bar.
+1. Select the **Edit** icon.
+1. Under the **Main** section, select **Source**.
+1. In the **Media** pop-up, select one of the following:
+ * An image already uploaded in your library,
+ * **Upload file** to upload a new image file to use, or
+ * **None** to forgo using a logo.
+1. The logo updates in real-time.
+1. Select outside the pop-up windows to exit the media library.
+1. Click **Save**.
+ ### Styling
-Although you don't need to adjust any styles, you may consider adjusting particular elements. For example, change the primary color to match your brand's color.
+Although you don't need to adjust any styles, you may consider adjusting particular elements. For example, change the primary color to match your brand's color. You can do this in two ways:
+
+#### Overall site style
+
+1. In the developer portal, select the **Styles** icon from the left tool bar.
+1. Under the **Colors** section, select the color style item you want to edit.
+1. Click the **Edit** icon for that style item.
+1. Select the color from the color-picker, or enter the hex color code.
+1. Add and name another color item by clicking **Add color**.
+1. Click **Save**.
+
+#### Container style
+
+1. On the main page of the developer portal, select the container background.
+1. Click the **Edit** icon.
+1. In the pop-up, set:
+ * The background to clear, an image, a specific color, or a gradient.
+ * The container size, margin, and padding.
+ * Container position and height.
+1. Select outside the pop-up windows to exit the container settings.
+1. Click **Save**.
### Customization example
api-management Api Management Sample Send Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-sample-send-request.md
There are certain tradeoffs when using a fire-and-forget style of request. If fo
The `send-request` policy enables using an external service to perform complex processing functions and return data to the API management service that can be used for further policy processing.

### Authorizing reference tokens
-A major function of API Management is protecting backend resources. If the authorization server used by your API creates [JWT tokens](https://jwt.io/) as part of its OAuth2 flow, as [Azure Active Directory](../active-directory/hybrid/whatis-hybrid-identity.md) does, then you can use the `validate-jwt` policy to verify the validity of the token. Some authorization servers create what are called [reference tokens](https://leastprivilege.com/2015/11/25/reference-tokens-and-introspection/) that cannot be verified without making a callback to the authorization server.
+A major function of API Management is protecting backend resources. If the authorization server used by your API creates [JWT tokens](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims) as part of its OAuth2 flow, as [Azure Active Directory](../active-directory/hybrid/whatis-hybrid-identity.md) does, then you can use the `validate-jwt` policy to verify the validity of the token. Some authorization servers create what are called [reference tokens](https://leastprivilege.com/2015/11/25/reference-tokens-and-introspection/) that cannot be verified without making a callback to the authorization server.
### Standardized introspection

In the past, there has been no standardized way of verifying a reference token with an authorization server. However, a recently proposed standard, [RFC 7662](https://tools.ietf.org/html/rfc7662), was published by the IETF that defines how a resource server can verify the validity of a token.
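
Outside of the policy itself, the introspection call that RFC 7662 defines is just an authenticated form POST that returns a JSON document with an `active` field; the `send-request` policy performs essentially this call from within the gateway. A minimal PowerShell sketch (the endpoint, client credentials, and token value below are placeholders, not values from this article):

```powershell
# Placeholder introspection endpoint and client credentials - substitute your authorization server's values.
$introspectionUri = "https://login.example.com/connect/introspect"
$basicAuth = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("clientId:clientSecret"))

# RFC 7662: POST the reference token as form data; the response indicates validity in the "active" field.
$result = Invoke-RestMethod -Method Post -Uri $introspectionUri `
    -Headers @{ Authorization = "Basic $basicAuth" } `
    -ContentType "application/x-www-form-urlencoded" `
    -Body @{ token = "<reference-token>" }

if ($result.active) { "Token is valid" } else { "Token is invalid or expired" }
```
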
From the response object, you can retrieve the body and RFC 7622 tells API Manag
Alternatively, if the authorization server doesn't include the "active" field to indicate whether the token is valid, use a tool like Postman to determine what properties are set in a valid token. For example, if a valid token response contains a property called "expires_in", check whether this property name exists in the authorization server response this way:
+```xml
<when condition="@(((IResponse)context.Variables["tokenstate"]).Body.As<JObject>().Property("expires_in") == null)">
+```
### Reporting failure

You can use a `<choose>` policy to detect if the token is invalid and, if so, return a 401 response.
app-service Configure Authentication Provider Apple https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-provider-apple.md
More information about generating and validating tokens can be found in [Apple's
### Sign the client secret JWT

You'll use the `.p8` file you downloaded previously to sign the client secret JWT. This file is a [PKCS#8 file](https://en.wikipedia.org/wiki/PKCS_8) that contains the private signing key in PEM format. There are many libraries that can create and sign the JWT for you.
-There are different kinds of open-source libraries available online for creating and signing JWT tokens. For more information about generating JWT tokens, see jwt.io. For example, one way of generating the client secret is by importing the [Microsoft.IdentityModel.Tokens NuGet package](https://www.nuget.org/packages/Microsoft.IdentityModel.Tokens/) and running a small amount of C# code shown below.
+There are different kinds of open-source libraries available online for creating and signing JWT tokens. For more information about generating JWT tokens, see [JSON Web Token (JWT)](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). For example, one way of generating the client secret is by importing the [Microsoft.IdentityModel.Tokens NuGet package](https://www.nuget.org/packages/Microsoft.IdentityModel.Tokens/) and running a small amount of C# code shown below.
```csharp
using Microsoft.IdentityModel.Tokens;
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/reference-app-settings.md
The following table shows environment variables prefixes that App Service uses f
| `POSTGRESQLCONNSTR_` | Signifies a PostgreSQL connection string in the app configuration. It's injected into a .NET app as a connection string. |
| `CUSTOMCONNSTR_` | Signifies a custom connection string in the app configuration. It's injected into a .NET app as a connection string. |
| `MYSQLCONNSTR_` | Signifies a MySQL connection string in the app configuration. It's injected into a .NET app as a connection string. |
-| `AZUREFILESSTORAGE_` | A connection string to a custom Azure File storage for a container app. |
+| `AZUREFILESSTORAGE_` | A connection string to a custom Azure file share for a container app. |
| `AZUREBLOBSTORAGE_` | A connection string to a custom Azure Blobs storage for a container app. |

## Deployment
APACHE_RUN_GROUP | RUN sed -i 's!User ${APACHE_RUN_GROUP}!Group www-data!g' /etc
DOMAIN_OWNERSHIP_VERIFICATION_IDENTIFIERS -->
-## TSL/SSL
+## TLS/SSL
For more information, see [Use a TLS/SSL certificate in your code in Azure App Service](configure-ssl-certificate-in-code.md).
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/multiple-site-overview.md
description: This article provides an overview of the Azure Application Gateway
Previously updated : 07/20/2020 Last updated : 08/31/2021

# Application Gateway multiple site hosting
-Multiple site hosting enables you to configure more than one web application on the same port of an application gateway. It allows you to configure a more efficient topology for your deployments by adding up to 100+ websites to one application gateway. Each website can be directed to its own backend pool. For example, three domains, contoso.com, fabrikam.com, and adatum.com, point to the IP address of the application gateway. You'd create three multi-site listeners and configure each listener for the respective port and protocol setting.
+Multiple site hosting enables you to configure more than one web application on the same port of application gateways using public-facing listeners. It allows you to configure a more efficient topology for your deployments by adding up to 100+ websites to one application gateway. Each website can be directed to its own backend pool. For example, three domains, contoso.com, fabrikam.com, and adatum.com, point to the IP address of the application gateway. You'd create three multi-site listeners and configure each listener for the respective port and protocol setting.
You can also define wildcard host names in a multi-site listener and up to 5 host names per listener. To learn more, see [wildcard host names in listener](#wildcard-host-names-in-listener-preview).
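
As a rough Azure PowerShell sketch (the gateway, frontend, and host names below are placeholders, and the `-HostNames` parameter assumes a recent Az.Network version), adding a multi-site listener might look like this:

```powershell
# All names below are placeholders; assumes an existing application gateway with a frontend IP configuration and port.
$appGw   = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroup"
$fipConf = Get-AzApplicationGatewayFrontendIPConfig -ApplicationGateway $appGw -Name "appGatewayFrontendIP"
$port    = Get-AzApplicationGatewayFrontendPort -ApplicationGateway $appGw -Name "port80"

# Multi-site listener answering for several host names on the same port.
Add-AzApplicationGatewayHttpListener -ApplicationGateway $appGw `
    -Name "contosoListener" `
    -Protocol Http `
    -FrontendIPConfiguration $fipConf `
    -FrontendPort $port `
    -HostNames "contoso.com","www.contoso.com"

# Commit the change to the gateway.
Set-AzApplicationGateway -ApplicationGateway $appGw
```

Each additional domain gets its own listener and routing rule pointing at the appropriate backend pool, which is what lets one gateway front many sites.
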
attestation Basic Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/basic-concepts.md
Below are some basic concepts related to Microsoft Azure Attestation.
## JSON Web Token (JWT)
-[JSON Web Token](https://jwt.io/) (JWT) is an open standard [RFC7519](https://tools.ietf.org/html/rfc7519) method for securely transmitting information between parties as a JavaScript Object Notation (JSON) object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret or a public/private key pair.
+[JSON Web Token](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims) (JWT) is an open standard [RFC7519](https://tools.ietf.org/html/rfc7519) method for securely transmitting information between parties as a JavaScript Object Notation (JSON) object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret or a public/private key pair.
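
For illustration only: a JWT is three base64url-encoded segments (header, payload, signature) separated by dots, so its claims can be inspected without validating the signature. A small PowerShell sketch (the token value is a placeholder):

```powershell
# Placeholder token - substitute a real JWT. Decoding does NOT validate the signature.
$jwt = "<paste-a-real-jwt-here>"

function ConvertFrom-Base64Url([string]$s) {
    # base64url uses '-' and '_' and drops padding; restore both before decoding.
    $s = $s.Replace('-', '+').Replace('_', '/')
    switch ($s.Length % 4) { 2 { $s += '==' } 3 { $s += '=' } }
    [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($s))
}

$header, $payload, $signature = $jwt.Split('.')
ConvertFrom-Base64Url $header  | ConvertFrom-Json   # signing algorithm and key ID
ConvertFrom-Base64Url $payload | ConvertFrom-Json   # the claims
```

Inspecting a token this way is handy for debugging, but trust decisions still require validating the signature against the issuer's keys.
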
## JSON Web Key (JWK)
automation Automation Deploy Template Runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-deploy-template-runbook.md
In a text editor, copy the following text:
Save the file locally as **TemplateTest.json**.
-## Save the Resource Manager template in Azure Storage
+## Save the Resource Manager template in Azure Files
-Now we use PowerShell to create an Azure Storage file share and upload the **TemplateTest.json** file. For instructions on how to create a file share and upload a file in the Azure portal, see [Get started with Azure File storage on Windows](../storage/files/storage-dotnet-how-to-use-files.md).
+Now we use PowerShell to create an Azure file share and upload the **TemplateTest.json** file. For instructions on how to create a file share and upload a file in the Azure portal, see [Get started with Azure Files on Windows](../storage/files/storage-files-quick-create-use-windows.md).
Launch PowerShell on your local machine, and run the following commands to create a file share and upload the Resource Manager template to that file share.
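
A rough sketch of those commands, assuming the Az.Storage module and placeholder resource group, storage account, and share names:

```powershell
# Placeholder names - adjust for your environment; assumes an existing storage account.
$storageAccount = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageacct"
$context = $storageAccount.Context

# Create the file share and upload the Resource Manager template to it.
New-AzStorageShare -Name "resource-templates" -Context $context
Set-AzStorageFileContent -ShareName "resource-templates" -Source ".\TemplateTest.json" -Path "TemplateTest.json" -Context $context
```
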
automation Dsc Linux Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/dsc-linux-powershell.md
+
+ Title: Apply Linux Azure Automation State Configuration using PowerShell
+description: This article tells you how to configure a Linux virtual machine to a desired state using Azure Automation State Configuration with PowerShell.
+++ Last updated : 08/31/2021++
+# Configure Linux desired state with Azure Automation State Configuration using PowerShell
+
+In this tutorial, you'll apply an Azure Automation State Configuration with PowerShell to an Azure Linux virtual machine to check whether it complies with a desired state. The desired state is to identify if the apache2 service is present on the node.
+Azure Automation State Configuration allows you to specify configurations for your machines and ensure those machines are in a specified state over time. For more information about State Configuration, see [Azure Automation State Configuration overview](./automation-dsc-overview.md).
+
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+> - Onboard an Azure Linux VM to be managed by Azure Automation DSC
+> - Compose a configuration
+> - Install PowerShell module for Automation
+> - Import a configuration to Azure Automation
+> - Compile a configuration into a node configuration
+> - Assign a node configuration to a managed node
+> - Modify the node configuration mapping
+> - Check the compliance status of a managed node
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
+
+- An Azure Automation account. To learn more about Automation accounts, see [Automation Account authentication overview](./automation-security-overview.md).
+- An Azure Resource Manager virtual machine (VM) running Ubuntu 18.04 LTS or later. For instructions on creating an Azure Linux VM, see [Create a Linux virtual machine in Azure with PowerShell](../virtual-machines/windows/quick-create-powershell.md).
+- The PowerShell [Az Module](/powershell/azure/new-azureps-module-az) installed on the machine you'll be using to write, compile, and apply a state configuration to a target Azure Linux VM. Ensure you have the latest version. If necessary, run `Update-Module -Name Az`.
+
+## Create a configuration
+
+Review the code below and note the presence of two node [configurations](/powershell/scripting/dsc/configurations/configurations): `IsPresent` and `IsNotPresent`. This configuration calls one resource in each node block: the [nxPackage resource](/powershell/scripting/dsc/reference/resources/linux/lnxpackageresource). This resource manages the presence of the **apache2** package. Then, in a text editor, copy the following code to a local file and name it `LinuxConfig.ps1`:
+
+```powershell
+Configuration LinuxConfig
+{
+ Import-DscResource -ModuleName 'nx'
+
+ Node IsPresent
+ {
+ nxPackage apache2
+ {
+ Name = 'apache2'
+ Ensure = 'Present'
+ PackageManager = 'Apt'
+ }
+ }
+
+ Node IsNotPresent
+ {
+ nxPackage apache2
+ {
+ Name = 'apache2'
+ Ensure = 'Absent'
+ }
+ }
+}
+```
+
+## Sign in to Azure
+
+From your machine, sign in to your Azure subscription with the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) PowerShell cmdlet and follow the on-screen directions.
+
+```powershell
+# Sign in to your Azure subscription
+$sub = Get-AzSubscription -ErrorAction SilentlyContinue
+if(-not($sub))
+{
+ Connect-AzAccount
+}
+
+# If you have multiple subscriptions, set the one to use
+# Select-AzSubscription -SubscriptionId "<SUBSCRIPTIONID>"
+```
+
+## Initialize variables
+
+For efficiency and decreased chance of error when executing the cmdlets, revise the PowerShell code further below as necessary and then execute.
+
+| Variable | Value |
+|||
+|$resourceGroup| Replace `yourResourceGroup` with the actual name of your resource group.|
+|$automationAccount| Replace `yourAutomationAccount` with the actual name of your Automation account.|
+|$VM| Replace `yourVM` with the actual name of your Azure Linux VM.|
+|$configurationName| Leave as is with `LinuxConfig`. The name of the configuration used in this tutorial.|
+|$nodeConfigurationName0|Leave as is with `LinuxConfig.IsNotPresent`. The name of a node configuration used in this tutorial.|
+|$nodeConfigurationName1|Leave as is with `LinuxConfig.IsPresent`. The name of a node configuration used in this tutorial.|
+|$moduleName|Leave as is with `nx`. The name of the PowerShell module used for DSC in this tutorial.|
+|$moduleVersion| Obtain the latest version number for `nx` from the [PowerShell Gallery](https://www.powershellgallery.com/packages/nx). This tutorial uses version `1.0`.|
+
+```powershell
+$resourceGroup = "yourResourceGroup"
+$automationAccount = "yourAutomationAccount"
+$VM = "yourVM"
+$configurationName = "LinuxConfig"
+$nodeConfigurationName0 = "LinuxConfig.IsNotPresent"
+$nodeConfigurationName1 = "LinuxConfig.IsPresent"
+$moduleName = "nx"
+$moduleVersion = "1.0"
+```
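+
+If you'd rather pull the latest `nx` version from the PowerShell Gallery programmatically instead of looking it up manually, a quick check with PowerShellGet (assumed to be installed, as it is by default on recent PowerShell versions) is:
+
+```powershell
+# Query the PowerShell Gallery for the latest published nx version and reuse it in $moduleVersion.
+$moduleVersion = (Find-Module -Name nx -Repository PSGallery).Version.ToString()
+$moduleVersion
+```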
+
+## Install nx module
+
+Azure Automation uses a number of PowerShell modules to enable cmdlets in runbooks and DSC resources in DSC configurations. **nx** is the module with DSC Resources for Linux. Install the **nx** module with the [New-AzAutomationModule](/powershell/module/az.automation/new-azautomationmodule) cmdlet. For more information about modules, see [Manage modules in Azure Automation](./shared-resources/modules.md). Run the following command:
+
+```powershell
+New-AzAutomationModule `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $moduleName `
+ -ContentLinkUri "https://www.powershellgallery.com/api/v2/package/$moduleName/$moduleVersion"
+```
+
+The output should look similar to the following:
+
+ :::image type="content" source="media/dsc-linux-powershell/new-azautomationmodule-output.png" alt-text="Output from New-AzAutomationModule command.":::
+
+You can verify the installation by running the following command:
+
+```powershell
+Get-AzAutomationModule `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $moduleName
+```
+
+## Import configuration to Azure Automation
+
+Call the [Import-AzAutomationDscConfiguration](/powershell/module/az.automation/import-azautomationdscconfiguration) cmdlet to upload the configuration into your Automation account. Revise the value for `-SourcePath` with your actual path, and then run the following command:
+
+```powershell
+Import-AzAutomationDscConfiguration `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -SourcePath "path\LinuxConfig.ps1" `
+ -Published
+```
+
+The output should look similar to the following:
+
+ :::image type="content" source="media/dsc-linux-powershell/import-azautomationdscconfiguration-output.png" alt-text="Output from Import-AzAutomationDscConfiguration command.":::
+
+You can view the configuration from your Automation account by running the following command:
+
+```powershell
+Get-AzAutomationDscConfiguration `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $configurationName
+```
+
+## Compile configuration in Azure Automation
+
+Before you can apply a desired state to a node, the configuration defining that state must be compiled into one or more node configurations. Call the [Start-AzAutomationDscCompilationJob](/powershell/module/Az.Automation/Start-AzAutomationDscCompilationJob) cmdlet to compile the `LinuxConfig` configuration in Azure Automation. For more information about compilation, see [Compile DSC configurations](./automation-dsc-compile.md). Run the following command:
+
+```powershell
+Start-AzAutomationDscCompilationJob `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -ConfigurationName $configurationName
+```
+
+The output should look similar to the following:
+
+ :::image type="content" source="media/dsc-linux-powershell/start-azautomationdsccompilationjob-output.png" alt-text="Output from Start-AzAutomationDscCompilationJob command.":::
+
+You can view the compilation job from your Automation account using the following command:
+
+```powershell
+Get-AzAutomationDscCompilationJob `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -ConfigurationName $configurationName
+```
+
+Wait for the compilation job to complete before proceeding. The configuration must be compiled into a node configuration before it can be assigned to a node. Execute the following code to check the status every 5 seconds:
+
+```powershell
+while ((Get-AzAutomationDscCompilationJob `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -ConfigurationName $configurationName).Status -ne "Completed")
+{
+ Write-Output "Wait"
+ Start-Sleep -Seconds 5
+}
+Write-Output "Compilation complete"
+```
+
+After the compilation job completes, you can also view the node configuration metadata using the following command:
+
+```powershell
+Get-AzAutomationDscNodeConfiguration `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount
+```
+
+## Register the Azure Linux VM for an Automation account
+
+Register the Azure Linux VM as a Desired State Configuration (DSC) node for the Azure Automation account. The [Register-AzAutomationDscNode](/powershell/module/az.automation/register-azautomationdscnode) cmdlet only supports VMs running Windows OS. The Azure Linux VM will first need to be configured for DSC. For detailed steps, see [Get started with Desired State Configuration (DSC) for Linux](/powershell/scripting/dsc/getting-started/lnxgettingstarted).
+
+1. Use PowerShell to build the registration command that you'll later execute on your Azure Linux VM. Run the following code:
+
+ ```powershell
+ $primaryKey = (Get-AzAutomationRegistrationInfo `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount).PrimaryKey
+
+ $URL = (Get-AzAutomationRegistrationInfo `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount).Endpoint
+
+ Write-Output "sudo /opt/microsoft/dsc/Scripts/Register.py $primaryKey $URL"
+ ```
+
+ These commands obtain the Automation account's primary access key and URL, and concatenate them into the registration command. Ensure you remove any carriage returns from the output. This command will be used in a later step.
+
+1. Connect to your Azure Linux VM. If you used a password, you can use the syntax below. If you used a public-private key pair, see [SSH on Linux](./../virtual-machines/linux/mac-create-ssh-keys.md) for detailed steps. The other commands retrieve information about what packages can be installed, including what updates to currently installed packages are available, and install Python.
+
+ ```cmd
+ ssh user@IP
+
+ sudo apt-get update
+ sudo apt-get install -y python
+ ```
+
+1. Install Open Management Infrastructure (OMI). For more information on OMI, see [Open Management Infrastructure](https://github.com/Microsoft/omi). Verify the latest [release](https://github.com/Microsoft/omi/releases). Revise the release version below as needed, and then execute the commands in your ssh session:
+
+ ```bash
+ wget https://github.com/microsoft/omi/releases/download/v1.6.8-0/omi-1.6.8-0.ssl_110.ulinux.x64.deb
+
+ sudo dpkg -i ./omi-1.6.8-0.ssl_110.ulinux.x64.deb
+ ```
+
+1. Install PowerShell Desired State Configuration for Linux. For more information, see [DSC on Linux](https://github.com/microsoft/PowerShell-DSC-for-Linux). Verify the latest [release](https://github.com/microsoft/PowerShell-DSC-for-Linux/releases). Revise the release version below as needed, and then execute the commands in your ssh session:
+
+ ```bash
+ wget https://github.com/microsoft/PowerShell-DSC-for-Linux/releases/download/v1.2.1-0/dsc-1.2.1-0.ssl_110.x64.deb
+
+ sudo dpkg -i ./dsc-1.2.1-0.ssl_110.x64.deb
+ ```
+
+1. Now you can register the node using the `sudo /opt/microsoft/dsc/Scripts/Register.py <Primary Access Key> <URL>` Python script created in step 1. Run the command in your ssh session. The output should look similar to the following:
+
+ ```output
+ instance of SendConfigurationApply
+ {
+ ReturnValue=0
+ }
+
+ ```
+
+1. You can verify the registration in PowerShell using the following command:
+
+ ```powershell
+ Get-AzAutomationDscNode `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $VM
+ ```
+
+ The output should look similar to the following:
+
+ :::image type="content" source="media/dsc-linux-powershell/get-azautomationdscnode-output.png" alt-text="Output from Get-AzAutomationDscNode command.":::
+
+## Assign a node configuration
+
+Call the [Set-AzAutomationDscNode](/powershell/module/Az.Automation/Set-AzAutomationDscNode) cmdlet to set the node configuration mapping. Run the following commands:
+
+```powershell
+# Get the ID of the DSC node
+$node = Get-AzAutomationDscNode `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $VM
+
+# Set node configuration mapping
+Set-AzAutomationDscNode `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -NodeConfigurationName $nodeConfigurationName0 `
+ -NodeId $node.Id `
+ -Force
+```
+
+The output should look similar to the following:
+
+ :::image type="content" source="media/dsc-linux-powershell/set-azautomationdscnode-output.png" alt-text="Output from Set-AzAutomationDscNode command.":::
+
+## Modify the node configuration mapping
+
+Call the [Set-AzAutomationDscNode](/powershell/module/Az.Automation/Set-AzAutomationDscNode) cmdlet to modify the node configuration mapping. Here, you modify the current node configuration mapping from `LinuxConfig.IsNotPresent` to `LinuxConfig.IsPresent`. Run the following command:
+
+```powershell
+# Modify node configuration mapping
+Set-AzAutomationDscNode `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -NodeConfigurationName $nodeConfigurationName1 `
+ -NodeId $node.Id `
+ -Force
+```
+
+## Check the compliance status of a managed node
+
+Each time State Configuration does a consistency check on a managed node, the node sends a status report back to the pull server. The following example uses the [Get-AzAutomationDscNodeReport](/powershell/module/Az.Automation/Get-AzAutomationDscNodeReport) cmdlet to report on the compliance status of a managed node.
+
+```powershell
+Get-AzAutomationDscNodeReport `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -NodeId $node.Id `
+ -Latest
+```
+
+The output should look similar to the following:
+
+ :::image type="content" source="media/dsc-linux-powershell/get-azautomationdscnodereport-output.png" alt-text="Output from Get-AzAutomationDscNodeReport command.":::
+
+The first report may not be available immediately and may take up to 30 minutes after you enable a node. For more information about report data, see [Using a DSC report server](/powershell/scripting/dsc/pull-server/reportserver).
+
+## Clean up resources
+
+The following steps help you delete the resources created for this tutorial that are no longer needed.
+
+1. Remove the DSC node from management by the Automation account. Although you can't register a node through PowerShell, you can unregister it with PowerShell. Run the following commands:
+
+ ```powershell
+ # Get the ID of the DSC node
+ $NodeID = (Get-AzAutomationDscNode `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $VM).Id
+
+ Unregister-AzAutomationDscNode `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Id $NodeID `
+ -Force
+
+ # Verify using the same command from Register the Azure Linux VM for an Automation account. A blank response indicates success.
+ Get-AzAutomationDscNode `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $VM
+ ```
+
+1. Remove metadata from DSC node configurations in Automation. Run the following commands:
+
+ ```powershell
+ Remove-AzAutomationDscNodeConfiguration `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $nodeConfigurationName0 `
+ -IgnoreNodeMappings `
+ -Force
+
+ Remove-AzAutomationDscNodeConfiguration `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $nodeConfigurationName1 `
+ -IgnoreNodeMappings `
+ -Force
+
+ # Verify using the same command from Compile configuration in Azure Automation.
+ Get-AzAutomationDscNodeConfiguration `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $nodeConfigurationName0
+
+ Get-AzAutomationDscNodeConfiguration `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $nodeConfigurationName1
+ ```
+
+ Successful removal is indicated by output that looks similar to the following: `Get-AzAutomationDscNodeConfiguration : NodeConfiguration LinuxConfig.IsNotPresent not found`.
+
+1. Remove DSC configuration from Automation. Run the following command:
+
+ ```powershell
+ Remove-AzAutomationDscConfiguration `
+ -AutomationAccountName $automationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $configurationName `
+ -Force
+
+ # Verify using the same command from Import configuration to Azure Automation.
+ Get-AzAutomationDscConfiguration `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $configurationName
+ ```
+
+ Successful removal is indicated by output that looks similar to the following: `Get-AzAutomationDscConfiguration : Operation returned an invalid status code 'NotFound'`.
+
+1. Remove the nx module from Automation. Run the following command:
+
+ ```powershell
+ Remove-AzAutomationModule `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $moduleName -Force
+
+ # Verify using the same command from Install nx module.
+ Get-AzAutomationModule `
+ -ResourceGroupName $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $moduleName
+ ```
+
+ Successful removal is indicated by output that looks similar to the following: `Get-AzAutomationModule : The module was not found. Module name: nx.`.
+
+## Next steps
+
+In this tutorial, you used PowerShell to apply Azure Automation State Configuration to an Azure Linux VM and check whether it complied with a desired state. For a more thorough explanation of configuration composition, see:
+
+> [!div class="nextstepaction"]
+> [Compose DSC configurations](./compose-configurationwithcompositeresources.md)
azure-arc Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/faq.md
Azure Arc enabled Kubernetes allows you to extend AzureΓÇÖs management capabilit
## Do I need to connect my AKS clusters running on Azure to Azure Arc?
-No. All Azure Arc enabled Kubernetes features, including Azure Monitor and Azure Policy (Gatekeeper), are available on AKS (a native resource in Azure Resource Manager).
+Connecting an Azure Kubernetes Service (AKS) cluster to Azure Arc is only required to run Arc enabled services like App Services and Data Services on top of the cluster, which you can do by using the [custom locations](custom-locations.md) feature of Arc enabled Kubernetes. This is a point-in-time limitation until cluster extensions and custom locations are available natively on AKS clusters.
+
+If you don't want to use custom locations and only want management features like Azure Monitor and Azure Policy (Gatekeeper), they're available natively on AKS, and connecting to Azure Arc isn't required in those cases.
## Should I connect my AKS-HCI cluster and Kubernetes clusters on Azure Stack Hub and Azure Stack Edge to Azure Arc?
If the Azure Arc enabled Kubernetes cluster is on Azure Stack Edge, AKS on Azure
## How to address expired Azure Arc enabled Kubernetes resources?
-The Managed Service Identity (MSI) certificate associated with your Azure Arc enabled Kubernetes has an expiration window of 90 days. Once this certificate expires, the resource is considered `Expired` and all features (such as configuration, monitoring, and policy) stop working on this cluster. To get your Kubernetes cluster working with Azure Arc again:
+The system-assigned managed identity associated with your Azure Arc enabled Kubernetes cluster is used only by the Arc agents to communicate with the Azure Arc services. The certificate associated with this system-assigned managed identity has an expiration window of 90 days, and the agents keep attempting to renew the certificate between Day 46 and Day 90. Once this certificate expires, the resource is considered `Expired`, all features (such as configuration, monitoring, and policy) stop working on the cluster, and you'll need to delete and connect the cluster to Azure Arc again. It's therefore advisable to have the cluster come online at least once during the Day 46 to Day 90 window to ensure the managed identity certificate is renewed.
+
+To check when the certificate is about to expire for any given cluster, run the following command:
+
+```console
+az connectedk8s show -n <name> -g <resource-group>
+```
+
+In the output, the value of `managedIdentityCertificateExpirationTime` indicates when the managed identity certificate expires (the 90-day mark for that certificate).
+
+If the value of `managedIdentityCertificateExpirationTime` is a timestamp in the past, the `connectivityStatus` field in the output above is set to `Expired`. In that case, to get your Kubernetes cluster working with Azure Arc again:
1. Delete Azure Arc enabled Kubernetes resource and agents on the cluster.
The Managed Service Identity (MSI) certificate associated with your Azure Arc en
``` > [!NOTE]
-> `az connectedk8s delete` will also delete configurations on top of the cluster. After running `az connectedk8s connect`, recreate the configurations on the cluster, either manually or using Azure Policy.
+> `az connectedk8s delete` will also delete configurations and cluster extensions on top of the cluster. After running `az connectedk8s connect`, recreate the configurations and cluster extensions on the cluster, either manually or using Azure Policy.
## If I am already using CI/CD pipelines, can I still use Azure Arc enabled Kubernetes and configurations?
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernetes showing use of resource types and tables to access Azure Arc-enabled Kubernetes related resources and properties. Previously updated : 08/27/2021 Last updated : 08/31/2021
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc description: Sample Azure Resource Graph queries for Azure Arc showing use of resource types and tables to access Azure Arc related resources and properties. Previously updated : 08/27/2021 Last updated : 08/31/2021
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled servers description: Sample Azure Resource Graph queries for Azure Arc-enabled servers showing use of resource types and tables to access Azure Arc-enabled servers related resources and properties. Previously updated : 08/27/2021 Last updated : 08/31/2021
azure-cache-for-redis Cache How To Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-import-export-data.md
Export allows you to export the data stored in Azure Cache for Redis to Redis co
2. Select **Choose Storage Container** and select the storage account you want. The storage account must be in the same subscription and region as your cache. > [!IMPORTANT]
- > Export works with page blobs, which are supported by both classic and Resource Manager storage accounts. Export is not supported by Blob storage accounts at this time. For more information, see [Azure storage account overview](../storage/common/storage-account-overview.md).
+ >
+ > - Export works with page blobs that are supported by both classic and Resource Manager storage accounts.
+ > - Azure Cache for Redis does not support exporting to ADLS Gen2 storage accounts.
+ > - Export is not supported by Blob storage accounts at this time.
+ >
+ > For more information, see [Azure storage account overview](../storage/common/storage-account-overview.md).
> ![Storage account](./media/cache-how-to-import-export-data/cache-export-data-choose-account.png)
Yes, for PowerShell instructions see [To import an Azure Cache for Redis](cache-
On the left, if you remain on **Import data** or **Export data** for longer than 15 minutes before starting the operation, you receive an error with an error message similar to the following example:
-```output
+```azcopy
The request to import data into cache 'contoso55' failed with status 'error' and error 'One of the SAS URIs provided could not be used for the following reason: The SAS token end time (se) must be at least 1 hour from now and the start time (st), if given, must be at least 15 minutes in the past. ```
azure-functions Durable Functions Timers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-timers.md
$timerTask = Start-DurableTimer -Duration $expiryTime -NoWait
$winner = Wait-DurableTask -Task @($activityTask, $timerTask) -Any if ($winner -eq $activityTask) {
- Stop-DurableTaskTimer -Task $timerTask
+ Stop-DurableTimerTask -Task $timerTask
return $True } else {
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-python.md
To learn more about logging, see [Monitor Azure Functions](functions-monitoring.
### Log custom telemetry
-By default, some of the telemetry is collected for Functions apps. This telemetry ends up as traces in Application Insights. For more control, you can instead use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure) to send custom telemetry data to your Application Insights instance.
+Log telemetry is collected for Functions apps by the Functions runtime by default. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services is also collected by default through [Function bindings](https://docs.microsoft.com/azure/azure-functions/functions-triggers-bindings?tabs=csharp#supported-bindings). To collect custom request and dependency telemetry (outside of bindings), you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure) to send custom telemetry data to your Application Insights instance.
+ You can find the list of supported libraries [here](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib). >[!NOTE]
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-versions.md
A few features were removed, updated, or replaced after version 1.x. This sectio
In version 2.x, the following changes were made:
-* Keys for calling HTTP endpoints are always stored encrypted in Azure Blob storage. In version 1.x, keys were stored in Azure File storage by default. When upgrading an app from version 1.x to version 2.x, existing secrets that are in file storage are reset.
+* Keys for calling HTTP endpoints are always stored encrypted in Azure Blob storage. In version 1.x, keys were stored in Azure Files by default. When upgrading an app from version 1.x to version 2.x, existing secrets that are in Azure Files are reset.
* The version 2.x runtime doesn't include built-in support for webhook providers. This change was made to improve performance. You can still use HTTP triggers as endpoints for webhooks.
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/language-support-policy.md
After the language end-of-life date, function apps that use retired language ver
There are few exceptions to the retirement policy outlined above. Here is a list of languages that are approaching or have reached their end-of-life dates but continue to be supported on the platform until further notice. When these languages versions reach their end-of-life dates, they are no longer updated or patched. Because of this, we discourage you from developing and running your function apps on these language versions.
-|Language Versions |EOL Date |Expected Retirement Date|
+|Language Versions |EOL Date |Retirement Date|
|--|--|-| |.NET 5|February 2022|TBA|
-|Node 6|30 April 2019|TBA|
-|Node 8|31 December 2019|TBA|
-|Node 10|30 April 2021|TBA|
-|PowerShell Core 6| 4 September 2020|TBA|
-|Python 3.6 |23 December 2021|TBA|
+|Node 6|30 April 2019|28 February 2022|
+|Node 8|31 December 2019|28 February 2022|
+|Node 10|30 April 2021|30 September 2022|
+|PowerShell Core 6| 4 September 2020|30 September 2022|
+|Python 3.6 |23 December 2021|30 September 2022|
## Language version support timeline
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/storage-considerations.md
Azure Functions requires an Azure Storage account when you create a function app
| [Azure Table Storage](../storage/tables/table-storage-overview.md) | Used by [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). | > [!IMPORTANT]
-> When using the Consumption/Premium hosting plan, your function code and binding configuration files are stored in Azure File storage in the main storage account. When you delete the main storage account, this content is deleted and cannot be recovered.
+> When using the Consumption/Premium hosting plan, your function code and binding configuration files are stored in Azure Files in the main storage account. When you delete the main storage account, this content is deleted and cannot be recovered.
## Storage account requirements
azure-government Documentation Government Cognitiveservices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-cognitiveservices.md
Title: Cognitive Services on Azure Government | Microsoft Docs
+ Title: Cognitive Services on Azure Government
description: Guidance for developing Cognitive Services applications for Azure Government cloud: gov
ms.devlang: na
na Previously updated : 10/10/2020 Last updated : 08/30/2021
-# Cognitive Services on Azure Government ΓÇô Computer Vision, Face, and Translator
+# Cognitive Services on Azure Government
-For feature variations and limitations, see [Compare Azure Government and global Azure](compare-azure-government-global-azure.md).
+This article provides developer guidance for using Computer Vision, Face API, Text Analytics, and Translator cognitive services. For feature variations and limitations, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md).
## Prerequisites [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-* Install and Configure [Azure PowerShell](/powershell/azure/install-az-ps)
-* Connect [PowerShell with Azure Government](documentation-government-get-started-connect-with-ps.md)
+- Install and Configure [Azure PowerShell](/powershell/azure/install-az-ps)
+- Connect [PowerShell with Azure Government](documentation-government-get-started-connect-with-ps.md)
-## Part 1: Provision Cognitive Services Accounts
+## Part 1: Provision Cognitive Services accounts
-In order to access any of the Cognitive Services APIs, you must first provision a Cognitive Services account for each of the APIs you want to access. **Cognitive Services is not yet supported in the Azure Government Portal**, but you can use Azure PowerShell to access the APIs and services.
+To access any of the Cognitive Services APIs, you must first provision a Cognitive Services account for each API you want to access. You can create Cognitive Services resources in the [Azure Government portal](https://portal.azure.us/), or you can use Azure PowerShell to access the APIs and services as described in this article.
> [!NOTE]
-> You must go through the process of creating an account and retrieving a key(explained below) **for each** of the APIs you want to access.
->
+> You must go through the process of creating an account and retrieving an account key (explained below) **for each** of the APIs you want to access.
> 1. Make sure that you have the **Cognitive Services resource provider registered on your account**.
In order to access any of the Cognitive Services APIs, you must first provision
```powershell Register-AzResourceProvider -ProviderNamespace Microsoft.CognitiveServices ```
-2. In the PowerShell command below, replace "rg-name", "name-of-your-api", and "location-of-resourcegroup" with your relevant account information.
+2. In the PowerShell command below, replace `rg-name`, `name-of-your-api`, and `location-of-resourcegroup` with your relevant account information.
- Replace the "type of API" tag with any of the three following APIs you want to access:
- * ComputerVision
- * Face
- * TextTranslation
+ Replace the `type of API` tag with any of the following APIs you want to access:
+ - ComputerVision
+ - Face
+ - TextAnalytics
+ - TextTranslation
```powershell New-AzCognitiveServicesAccount -ResourceGroupName 'rg-name' -name 'name-of-your-api' -Type <type of API> -SkuName S0 -Location 'location-of-resourcegroup'
In order to access any of the Cognitive Services APIs, you must first provision
You must retrieve an account key to access the specific API.
-In the PowerShell command below, replace the "youraccountname" tag with the name that you gave the Account that you created above. Replace the 'rg-name' tag with the name of your resource group.
+In the PowerShell command below, replace the `<youraccountname>` tag with the name of the account you created above. Replace the `rg-name` tag with the name of your resource group.
```powershell Get-AzCognitiveServicesAccountKey -Name <youraccountname> -ResourceGroupName 'rg-name'
Copy and save the first key somewhere as you will need it to make calls to the A
Now you are ready to make calls to the APIs.
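+
+If you prefer to keep the key in a PowerShell variable rather than copying it manually, one option is the following sketch; `Key1` is the property name on the object returned by the cmdlet, and the account and resource group names are placeholders:
+
+```powershell
+# Store the first account key for later API calls (replace the placeholder names)
+$accountKey = (Get-AzCognitiveServicesAccountKey `
+    -Name '<youraccountname>' `
+    -ResourceGroupName 'rg-name').Key1
+```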
-## Part 2: API Quickstarts
+
+## Part 2: API Quickstarts
+ The Quickstarts below will help you get started with the APIs available through Cognitive Services in Azure Government.
-## Computer Vision API
+
+## Computer Vision
+ ### Prerequisites
-* Get the Microsoft Computer Vision API Windows SDK [here](https://github.com/Microsoft/Cognitive-vision-windows).
+- Get the [Microsoft Computer Vision API Windows SDK](https://github.com/Microsoft/Cognitive-vision-windows).
-* Make sure Visual Studio has been installed:
+- Make sure Visual Studio has been installed:
- [Visual Studio 2019](https://www.visualstudio.com/vs/), including the **Azure development** workload. >[!NOTE]
The Quickstarts below will help you to get started with the APIs available throu
> ### Variations
-* The URI for accessing the Computer Vision API in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](compare-azure-government-global-azure.md#guidance-for-developers).
-### Analyze an Image With Computer Vision API using C# <a name="AnalyzeImage"> </a>
+- The URI for accessing Computer Vision in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#guidance-for-developers).
+
+### Analyze an image with Computer Vision using C#
With the [Analyze Image method](https://westcentralus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa), you can extract visual features based on image content. You can upload an image or specify an image URL and choose which features to return, including:
-* A detailed list of tags related to the image content.
-* A description of image content in a complete sentence.
-* The coordinates, gender, and age of any faces contained in the image.
-* The ImageType (clip art or a line drawing).
-* The dominant color, the accent color, or whether an image is black & white.
-* The category defined in this [taxonomy](../cognitive-services/computer-vision/category-taxonomy.md).
-* Does the image contain adult or sexually suggestive content?
+
+- A detailed list of tags related to the image content.
+- A description of image content in a complete sentence.
+- The coordinates, gender, and age of any faces contained in the image.
+- The ImageType (clip art or a line drawing).
+- The dominant color, the accent color, or whether an image is black & white.
+- The category defined in this [taxonomy](../cognitive-services/computer-vision/category-taxonomy.md).
+- Does the image contain adult or sexually suggestive content?
### Analyze an image C# example request
namespace VisionApp1
``` ### Analyze an Image response
-A successful response is returned in JSON. Following is an example of a successful response:
+A successful response is returned in JSON. Shown below is an example of a successful response:
```json
A successful response is returned in JSON. Following is an example of a successf
} } ```
-For more information, please see [public documentation](../cognitive-services/computer-vision/index.yml) and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa) for Computer Vision API.
+For more information, see [public documentation](../cognitive-services/computer-vision/index.yml) and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa) for Computer Vision.
## Face API+ ### Prerequisites
-* Get the Microsoft Face API Windows SDK [here](https://www.nuget.org/packages/Microsoft.ProjectOxford.Face/)
-* Make sure Visual Studio has been installed:
+- Get the [Microsoft Face API Windows SDK](https://www.nuget.org/packages/Microsoft.ProjectOxford.Face/).
+
+- Make sure Visual Studio has been installed:
- [Visual Studio 2019](https://www.visualstudio.com/vs/), including the **Azure development** workload. >[!NOTE]
For more information, please see [public documentation](../cognitive-services/co
> >
-### Variations
-* The URI for accessing the Face API in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](compare-azure-government-global-azure.md#guidance-for-developers).
+### Variations
+- The URI for accessing the Face API in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#guidance-for-developers).
-### Detect Faces in images with Face API using C# <a name="Detect"> </a>
-Use the [Face - Detect method](https://westcentralus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
-to detect faces in an image and return face attributes including:
-* Face ID: Unique ID used in several Face API scenarios.
-* Face Rectangle: The left, top, width, and height indicating the location of the face in the image.
-* Landmarks: An array of 27-point face landmarks pointing to the important positions of face components.
-* Facial attributes including age, gender, smile intensity, head pose, and facial hair.
+### Detect faces in images with Face API using C#
+
+Use the [Face - Detect method](https://westcentralus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) to detect faces in an image and return face attributes including:
+
+- Face ID: Unique ID used in several Face API scenarios.
+- Face Rectangle: The left, top, width, and height indicating the location of the face in the image.
+- Landmarks: An array of 27-point face landmarks pointing to the important positions of face components.
+- Facial attributes including age, gender, smile intensity, head pose, and facial hair.
### Face detect C# example request
namespace FaceApp1
``` ### Face detect response
-A successful response is returned in JSON. Following is an example of a successful response:
+A successful response is returned in JSON. Shown below is an example of a successful response:
```json Response:
Response:
} ] ```
-For more information, please see [public documentation](../cognitive-services/Face/index.yml), and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) for Face API.
+For more information, see [public documentation](../cognitive-services/Face/index.yml), and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) for Face API.
++
+## Text Analytics
+
+For instructions on how to use Text Analytics, see [Quickstart: Use the Text Analytics client library and REST API](../cognitive-services/text-analytics/quickstarts/client-libraries-rest-api.md?tabs=version-3-1&pivots=programming-language-csharp).
+
+### Variations
+
+- The URI for accessing Text Analytics in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#guidance-for-developers).
++
+## Translator
-## Translator API
### Prerequisites
-* Make sure Visual Studio has been installed:
+- Make sure Visual Studio has been installed:
- [Visual Studio 2019](https://www.visualstudio.com/vs/), including the **Azure development** workload. >[!NOTE]
For more information, please see [public documentation](../cognitive-services/Fa
> ### Variations
-* The URI for accessing the Translator API in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](compare-azure-government-global-azure.md#guidance-for-developers).
-* [Virtual Network support](../cognitive-services/cognitive-services-virtual-networks.md) for Translator service is limited to only `US Gov Virginia` region.
+
+- The URI for accessing Translator in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#guidance-for-developers).
+- [Virtual Network support](../cognitive-services/cognitive-services-virtual-networks.md) for Translator service is limited to only `US Gov Virginia` region.
The URI for accessing the API is: - `https://<your-custom-domain>.cognitiveservices.azure.us/translator/text/v3.0`
- - You can find your custom domain endpoint in the overview blade on the Azure portal once the resource is created.
-* There are 2 regions `US Gov Virginia` and `US Gov Arizona`
-### Text Translation Method
-The below example uses [Text Translation - Translate method](../cognitive-services/translator/reference/v3-0-translate.md) to translate a string of text from a language into another specified language. There are multiple [language codes](https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation) that can be used with the Text Translation API.
+ - You can find your custom domain endpoint in the overview blade on the Azure Government portal once the resource is created.
+- There are 2 regions: `US Gov Virginia` and `US Gov Arizona`.
+
+### Text translation method
-### Text Translation C# example request
+The example below uses the [Text Translation - Translate method](../cognitive-services/translator/reference/v3-0-translate.md) to translate a string of text from one language into another specified language. There are multiple [language codes](https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation) that can be used with Translator.
+
+### Text translation C# example request
The sample is written in C#.
The sample is written in C#.
6. Replace the `text` value with text that you want to translate. 7. Run the program.
-You can also test out different languages and texts by replacing the "text", "from", and "to" variables in Program.cs.
+You can also test out different languages and texts by replacing the `text`, `from`, and `to` variables in Program.cs.
```csharp using System;
namespace TextTranslator
} } ```
-For more information, please see [public documentation](../cognitive-services/translator/translator-info-overview.md) and [public API documentation](../cognitive-services/translator/reference/v3-0-reference.md) for Translator Text API.
+For more information, see [public documentation](../cognitive-services/translator/translator-info-overview.md) and [public API documentation](../cognitive-services/translator/reference/v3-0-reference.md) for Translator.
+ ### Next Steps
-* Subscribe to the [Azure Government blog](https://blogs.msdn.microsoft.com/azuregov/)
-* Get help on Stack Overflow by using the "[azure-gov](https://stackoverflow.com/questions/tagged/azure-gov)" tag
-* Give us feedback or request new features via the [Azure Government feedback forum](https://feedback.azure.com/forums/558487-azure-government)
+
+- Subscribe to the [Azure Government blog](https://blogs.msdn.microsoft.com/azuregov/)
+- Get help on Stack Overflow by using the "[azure-gov](https://stackoverflow.com/questions/tagged/azure-gov)" tag
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/monitor-functions.md
description: Azure Monitor seamlessly integrates with your application running o
Previously updated : 06/26/2020 Last updated : 08/27/2021 # Monitoring Azure Functions with Azure Monitor Application Insights
-[Azure Functions](../../azure-functions/functions-overview.md) offers built-in integration with Azure Application Insights to monitor functions.
+[Azure Functions](../../azure-functions/functions-overview.md) offers built-in integration with Azure Application Insights to monitor functions. For languages other than .NET and .NET Core, additional language-specific workers or extensions are needed to get the full benefits of distributed tracing.
Application Insights collects log, performance, and error data, and automatically detects performance anomalies. Application Insights includes powerful analytics tools to help you diagnose issues and to understand how your functions are used. When you have the visibility into your application data, you can continuously improve performance and usability. You can even use Application Insights during local function app project development. The required Application Insights instrumentation is built into Azure Functions. The only thing you need is a valid instrumentation key to connect your function app to an Application Insights resource. The instrumentation key should be added to your application settings when your function app resource is created in Azure. If your function app doesn't already have this key, you can set it manually. For more information read more about [monitoring Azure Functions](../../azure-functions/functions-monitoring.md?tabs=cmd).
-## Distributed tracing for Java applications on Windows (public preview)
+## Distributed tracing for Java applications (public preview)
> [!IMPORTANT]
-> This feature is currently in public preview for Java Azure Functions on Windows, distributed tracing for Java Azure Functions on Linux is not supported.
-> For Consumption plan it has a cold start of 8-9 seconds.
+> This feature is currently in public preview for Java Azure Functions on both Windows and Linux.
-If your applications are written in Java you can view richer data from your functions applications, including, requests, dependencies, logs, and metrics. The additional data also lets you see and diagnose end-to-end transactions and see the application map, which aggregates many transactions to show a topological view of how the systems interact, and what the average performance and error rates are.
+If your applications are written in Java, you can view richer data from your function apps, including requests, dependencies, logs, and metrics. The additional data also lets you see and diagnose end-to-end transactions and see the application map, which aggregates many transactions to show a topological view of how the systems interact, and what the average performance and error rates are.
-The end-to-end diagnostics and the application map provide visibility into one single transaction/request. Together these two features are very helpful for finding the root cause of reliability issues and performance bottlenecks on a per request basis.
+The end-to-end diagnostics and the application map provide visibility into one single transaction/request. Together these two features are helpful for finding the root cause of reliability issues and performance bottlenecks on a per request basis.
-### How to enable distributed tracing for Java Function apps?
+### How to enable distributed tracing for Java Function apps
-Navigate to the functions app Overview blade, go to configurations. Under Application Settings, click "+ New application setting". Add the following two application settings with below values, then click Save on the upper left. DONE!
+Navigate to the function app's Overview blade and go to Configuration. Under Application settings, click "+ New application setting".
+> [!div class="mx-imgBorder"]
+> ![Under Settings, add new application settings](./media//functions/create-new-setting.png)
+
+Add the following application settings with the values below, and then click Save in the upper left.
+
+#### Windows
``` XDT_MicrosoftApplicationInsights_Java -> 1 ApplicationInsightsAgent_EXTENSION_VERSION -> ~2 ```
+#### Linux
+```
+ApplicationInsightsAgent_EXTENSION_VERSION -> ~3
+```
+
+## Distributed tracing for Python Function apps
+
+To collect custom telemetry from services such as Redis, Memcached, MongoDB, and more, you can use the [OpenCensus Python Extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure) and [log your telemetry](https://docs.microsoft.com/azure/azure-functions/functions-reference-python?tabs=azurecli-linux%2Capplication-level#log-custom-telemetry). You can find the list of supported services [here](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
+ ## Next Steps * Read more instructions and information about monitoring [Monitoring Azure Functions](../../azure-functions/functions-monitoring.md)
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
All metrics that you send using [trackMetric](./api-custom-events-metrics.md#tra
![Usage and estimated cost](./media/pre-aggregated-metrics-log-metrics/001-cost.png)
+## Quotas
+
+Pre-aggregated metrics are stored as time series in Azure Monitor, and [Azure Monitor quotas on custom metrics](../essentials/metrics-custom-overview.md#quotas-and-limits) apply.
+
+> [!NOTE]
+> Going over the quota might have unintended consequences. Azure Monitor might become unreliable in your subscription or region. To learn how to avoid exceeding the quota, see [Design limitations and considerations](../essentials/metrics-custom-overview.md#design-limitations-and-considerations).
+
## Why is collection of custom metrics dimensions turned off by default? The collection of custom metrics dimensions is turned off by default because in the future storing custom metrics with dimensions will be billed separately from Application Insights, while storing the non-dimensional custom metrics will remain free (up to a quota). You can learn about the upcoming pricing model changes on our official [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Resource Manager Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-manager-diagnostic-settings.md
The following sample creates a diagnostic setting for an Activity log by adding
## Diagnostic setting for Azure Key Vault The following sample creates a diagnostic setting for an Azure Key Vault by adding a resource of type `Microsoft.KeyVault/vaults/providers/diagnosticSettings` to the template.
+> [!IMPORTANT]
+> For Azure Key Vault, the event hub must be in the same region as the key vault.
+ ### Template file ```json
azure-monitor Rest Api Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/rest-api-walkthrough.md
Get Activity Logs without filter or select:
GET https://management.azure.com/subscriptions/089bd33f-d4ec-47fe-8ba5-0753aa5c5b33/providers/microsoft.insights/eventtypes/management/values?api-version=2015-04-01 ```
+## Troubleshooting
+
+If you receive a 429, 503, or 504 error, retry the API call after one minute.
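+
+One way to implement that retry from PowerShell is sketched below; `$uri` and `$headers` stand for whichever request from this walkthrough you were making, so adjust them to your call:
+
+```powershell
+# Retry the request up to three times, waiting one minute after 429/503/504 responses
+$maxAttempts = 3
+for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
+    try {
+        $response = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
+        break
+    }
+    catch {
+        $statusCode = $_.Exception.Response.StatusCode.value__
+        if (($statusCode -in 429, 503, 504) -and $attempt -lt $maxAttempts) {
+            Start-Sleep -Seconds 60
+        }
+        else {
+            throw
+        }
+    }
+}
+```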
++ ## Next steps * Review the [Overview of Monitoring](../overview.md).
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/customer-managed-keys.md
Content-type: application/json
"properties": { "keyVaultProperties": { "keyVaultUri": "https://key-vault-name.vault.azure.net",
- "kyName": "key-name",
+ "keyName": "key-name",
"keyVersion": "current-version" }, "sku": {
A response to GET request should look like this when the key update is complete:
"properties": { "keyVaultProperties": { "keyVaultUri": "https://key-vault-name.vault.azure.net",
- "kyName": "key-name",
+ "keyName": "key-name",
"keyVersion": "current-version" }, "provisioningState": "Succeeded",
azure-monitor Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-configure.md
The below Azure Resource Manager template creates:
* A Log Analytics workspace named "my-workspace" * Add a scoped resource to the "my-scope" AMPLS, named "my-workspace-connection" > [!NOTE]
-> The below ARM template uses API version "2019-04-01", which doesn't support setting the AMPLS access modes. When using the below template, the resulting AMPLS is set with QueryAccessMode="Open" and IngestionAccessMode="PrivateOnly", meaning it allows queries to run on resources both in and out of the AMPLS, but limits ingestion to reach only Private Link resources.
+> The ARM template below uses an older API version that doesn't support setting the AMPLS access modes. When you use this template, the resulting AMPLS is set with QueryAccessMode="Open" and IngestionAccessMode="PrivateOnly", meaning it allows queries to run on resources both in and out of the AMPLS but limits ingestion to Private Link resources only.
``` {
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-design.md
The simplest and most secure approach would be:
2. Add *all* Azure Monitor resources (Application Insights components and Log Analytics workspaces) to that AMPLS. 3. Block network egress traffic as much as possible.
-If you can't use a single Private Link and a single Azure Monitor Private Link Scope (AMPLS), the next best thing would be to create isolated Private Link connections for isolated networks. If you are (or can align with) using spoke vnets, follow the guidance in [Hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). Then, setup separate private link settings in the relevant spoke VNets. **Make sure to separate DNS zones as well**, since sharing DNS zones with other spoke networks will cause DNS overrides.
## Plan by network topology
To avoid this conflict, create only a single AMPLS object per DNS.
### Hub-and-spoke networks
-Hub-and-spoke topologies can avoid the issue of DNS overrides by setting the Private Link connection on the hub (main) VNet, and not on each spoke VNet. This setup makes sense especially if the Azure Monitor resources used by the spoke VNets are shared.
+Hub-and-spoke networks should use a single Private Link connection set on the hub (main) network, and not on each spoke VNet.
![Hub-and-spoke-single-PE](./media/private-link-security/hub-and-spoke-with-single-private-endpoint.png)
Hub-and-spoke topologies can avoid the issue of DNS overrides by setting the Pri
> You may intentionally prefer to create separate Private Links for your spoke VNets, for example to allow each VNet to access a limited set of monitoring resources. In such cases, you can create a dedicated Private Endpoint and AMPLS for each VNet, but **must also verify they don't share the same DNS zones in order to avoid DNS overrides**. ### Peered networks
-Network peering is used in various topologies, other than hub-spoke. Such networks can share reach each others' IP addresses, and most likely share the same DNS. In such cases, our recommendation is similar to Hub-spoke - select a single network that is reached by all other (relevant) networks and set the Private Link connection on that network. Avoid creating multiple Private Endpoints and AMPLS objects, since ultimately only the last one set in the DNS will apply.
+Network peering is used in various topologies, other than hub-and-spoke. Such networks can reach each other's IP addresses and most likely share the same DNS. In such cases, our recommendation is once again to create a single Private Link on a network that's accessible to your other networks. Avoid creating multiple Private Endpoints and AMPLS objects, since ultimately only the last one set in the DNS applies.
### Isolated networks
-If your networks aren't peered, **you must also separate their DNS in order to use Private Links**. After that's done, you can create a Private Link for one (or many) network, without affecting traffic of other networks. That means creating a separate Private Endpoint for each network, and a separate AMPLS object. Your AMPLS objects can link to the same workspaces/components, or to different ones.
+If your networks aren't peered, **you must also separate their DNS in order to use Private Links**. After that's done, create a separate Private Endpoint for each network, and a separate AMPLS object. Your AMPLS objects can link to the same workspaces/components, or to different ones.
### Testing locally: Edit your machine's hosts file instead of the DNS
-As a local bypass to the All or Nothing behavior, you can select not to update your DNS with the Private Link records, and instead edit the hosts files on select machines so only these machines would send requests to the Private Link endpoints.
+To test Private Links locally without affecting other clients on your network, make sure not to update your DNS when you create your Private Endpoint. Instead, edit the hosts file on your machine so it sends requests to the Private Link endpoints:
* Set up a Private Link, but when connecting to a Private Endpoint choose **not** to auto-integrate with the DNS (step 5b). * Configure the relevant endpoints on your machines' hosts files. To review the Azure Monitor endpoints that need mapping, see [Reviewing your Endpoint's DNS settings](./private-link-configure.md#reviewing-your-endpoints-dns-settings).
Choosing the proper access mode has detrimental effects on your network traffic.
* Private Only - allows the VNet to reach only Private Link resources (resources in the AMPLS). That's the most secure mode of work, preventing data exfiltration. To achieve that, traffic to Azure Monitor resources out of the AMPLS is blocked. ![Diagram of AMPLS Private Only access mode](./media/private-link-security/ampls-private-only-access-mode.png)
-* Open - allows the VNet to reach both Private Link resources and resources not in the AMPLS (if they [accept traffic from public networks](./private-link-design.md#control-network-access-to-your-resources)). While the Open access mode doesn't prevent data exfiltration, it still offers the other benefits of Private Links - traffic to Private Link resources is sent through private endpoints, validated, and sent over the Microsoft backbone. The Open mode allows for a gradual onboarding process, or a mixed mode of work, combining Private Link access to some resources and public access to others.
+* Open - allows the VNet to reach both Private Link resources and resources not in the AMPLS (if they [accept traffic from public networks](./private-link-design.md#control-network-access-to-your-resources)). While the Open access mode doesn't prevent data exfiltration, it still offers the other benefits of Private Links - traffic to Private Link resources is sent through private endpoints, validated, and sent over the Microsoft backbone. The Open mode is useful for a mixed mode of work (accessing some resources publicly and others over a Private Link), or during a gradual onboarding process.
![Diagram of AMPLS Open access mode](./media/private-link-security/ampls-open-access-mode.png) Access modes are set separately for ingestion and queries. For example, you can set the Private Only mode for ingestion and the Open mode for queries.
Your Log Analytics workspaces or Application Insights components can be set to:
That granularity allows you to set access according to your needs, per workspace. For example, you may accept ingestion only through Private Link connected networks (meaning specific VNets), but still choose to accept queries from all networks, public and private.
-Blocking queries from public networks means clients (machines, SDKs etc.) outside of the connected AMPLSs can't query data in the resource. That data includes access to logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal and that query Log Analytics data are also affected by that setting.
+Blocking queries from public networks means clients (machines, SDKs, and so on) outside of the connected AMPLSs can't query data in the resource. That data includes logs, metrics, and the live metrics stream. Blocking queries from public networks affects all experiences that run these queries, such as workbooks, dashboards, Insights in the Azure portal, and queries run from outside the Azure portal.
### Exceptions
Logs and metrics uploaded to a workspace via [Diagnostic Settings](../essentials
#### Azure Resource Manager Restricting access as explained above applies to data in the resource. However, configuration changes, including turning these access settings on or off, are managed by Azure Resource Manager. To control these settings, you should restrict access to resources using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](../roles-permissions-security.md)
-Additionally, specific experiences (such as the LogicApp connector, Update Management solution, and the Workspace Summary blade in the portal, showing the solutions dashboard) query data through Azure Resource Manager and therefore won't be able to query data unless Private Link settings are applied to the Resource Manager as well.
+> [!NOTE]
+> Queries sent through the Azure Resource Manager (ARM) API can't use Azure Monitor Private Links. These queries can only go through if the target resource allows queries from public networks (set through the Network Isolation blade, or [using the CLI](./private-link-configure.md#set-resource-access-flags)).
+>
+> The following experiences are known to run queries through the ARM API:
+> * Sentinel
+> * LogicApp connector
+> * Update Management solution
+> * Change Tracking solution
+> * VM Insights
+> * Container Insights
+> * Log Analytics' Workspace Summary blade (showing the solutions dashboard)
## Application Insights considerations * YouΓÇÖll need to add resources hosting the monitored workloads to a private link. For example, see [Using Private Endpoints for Azure Web App](../../app-service/networking/private-endpoint.md).
Storage accounts are used in the ingestion process of custom logs. By default, s
For more information on connecting your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md) ### Automation
-If you use Log Analytics solutions that require an Automation account, such as Update Management, Change Tracking, or Inventory, you should also set up a separate Private Link for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md).
+If you use Log Analytics solutions that require an Automation account (such as Update Management, Change Tracking, or Inventory) you should also create a Private Link for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md).
> [!NOTE] > Some products and Azure portal experiences query data through Azure Resource Manager and therefore won't be able to query data over a Private Link, unless Private Link settings are applied to the Resource Manager as well. To overcome this, you can configure your resources to accept queries from public networks as explained in [Controlling network access to your resources](./private-link-design.md#control-network-access-to-your-resources) (Ingestion can remain limited to Private Link networks).
azure-monitor Quick Create Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/quick-create-workspace-cli.md
- Title: Create a Log Analytics workspace using Azure CLI | Microsoft Docs
-description: Learn how to create a Log Analytics workspace to enable management solutions and data collection from your cloud and on-premises environments with Azure CLI.
--- Previously updated : 05/26/2020---
-# Create a Log Analytics workspace with Azure CLI 2.0
-
-The Azure CLI 2.0 is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use Azure CLI 2.0 to deploy a Log Analytics workspace in Azure Monitor. A Log Analytics workspace is a unique environment for Azure Monitor log data. Each workspace has its own data repository and configuration, and data sources and solutions are configured to store their data in a particular workspace. You require a Log Analytics workspace if you intend on collecting data from the following sources:
-
-* Azure resources in your subscription
-* On-premises computers monitored by System Center Operations Manager
-* Device collections from Configuration Manager
-* Diagnostic or log data from Azure storage
------ This article requires version 2.0.30 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.-
-## Create a workspace
-Create a workspace with [az deployment group create](/cli/azure/deployment/group#az_deployment_group_create). The following example creates a workspace in the *eastus* location using a Resource Manager template from your local machine. The JSON template is configured to only prompt you for the name of the workspace, and specifies a default value for the other parameters that would likely be used as a standard configuration in your environment. Or you can store the template in an Azure storage account for shared access in your organization. For further information about working with templates, see [Deploy resources with Resource Manager templates and Azure CLI](../../azure-resource-manager/templates/deploy-cli.md)
-
-For information about regions supported, see [regions Log Analytics is available in](https://azure.microsoft.com/regions/services/) and search for Azure Monitor from the **Search for a product** field.
-
-The following parameters set a default value:
-
-* location - defaults to East US
-* sku - defaults to the new Per-GB pricing tier released in the April 2018 pricing model
-
->[!WARNING]
->If creating or configuring a Log Analytics workspace in a subscription that has opted into the new April 2018 pricing model, the only valid Log Analytics pricing tier is **PerGB2018**.
->
-
-### Create and deploy template
-
-1. Copy and paste the following JSON syntax into your file:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "workspaceName": {
- "type": "String",
- "metadata": {
- "description": "Specifies the name of the workspace."
- }
- },
- "location": {
- "type": "String",
- "allowedValues": [
- "eastus",
- "westus"
- ],
- "defaultValue": "eastus",
- "metadata": {
- "description": "Specifies the location in which to create the workspace."
- }
- },
- "sku": {
- "type": "String",
- "allowedValues": [
- "Standalone",
- "PerNode",
- "PerGB2018"
- ],
- "defaultValue": "PerGB2018",
- "metadata": {
- "description": "Specifies the service tier of the workspace: Standalone, PerNode, Per-GB"
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.OperationalInsights/workspaces",
- "name": "[parameters('workspaceName')]",
- "apiVersion": "2015-11-01-preview",
- "location": "[parameters('location')]",
- "properties": {
- "sku": {
- "Name": "[parameters('sku')]"
- },
- "features": {
- "searchVersion": 1
- }
- }
- }
- ]
- }
- ```
-
-2. Edit the template to meet your requirements. Review [Microsoft.OperationalInsights/workspaces template](/azure/templates/microsoft.operationalinsights/2015-11-01-preview/workspaces) reference to learn what properties and values are supported.
-3. Save this file as **deploylaworkspacetemplate.json** to a local folder.
-4. You are ready to deploy this template. Use the following commands from the folder containing the template. When you're prompted for a workspace name, provide a name that is unique in your resource group.
-
- ```azurecli
- az deployment group create --resource-group <my-resource-group> --name <my-deployment-name> --template-file deploylaworkspacetemplate.json
- ```
-
-The deployment can take a few minutes to complete. When it finishes, you see a message similar to the following that includes the result:
-
-![Example result when deployment is complete](media/quick-create-workspace-cli/template-output-01.png)
-
-## Troubleshooting
-When you create a workspace that was deleted in the last 14 days and in [soft-delete state](../logs/delete-workspace.md#soft-delete-behavior), the operation could have different outcome depending on your workspace configuration:
-1. If you provide the same workspace name, resource group, subscription and region as in the deleted workspace, your workspace will be recovered including its data, configuration and connected agents.
-2. Workspace name must be unique per resource group. If you use a workspace name that is already exists, also in soft-delete in your resource group, you will get an error *The workspace name 'workspace-name' is not unique*, or *conflict*. To override the soft-delete and permanently delete your workspace and create a new workspace with the same name, follow these steps to recover the workspace first and perform permanent delete:
- * [Recover](../logs/delete-workspace.md#recover-workspace) your workspace
- * [Permanently delete](../logs/delete-workspace.md#permanent-workspace-delete) your workspace
- * Create a new workspace using the same workspace name
-
-## Next steps
-Now that you have a workspace available, you can configure collection of monitoring telemetry, run log searches to analyze that data, and add a management solution to provide additional data and analytic insights.
-
-* To enable data collection from Azure resources with Azure Diagnostics or Azure storage, see [Collect Azure service logs and metrics for use in Log Analytics](../essentials/resource-logs.md#send-to-log-analytics-workspace).
-* Add [System Center Operations Manager as a data source](../agents/om-agents.md) to collect data from agents reporting your Operations Manager management group and store it in your Log Analytics workspace.
-* Connect [Configuration Manager](../logs/collect-sccm.md) to import computers that are members of collections in the hierarchy.
-* Review the [monitoring solutions](../insights/solutions.md) available and how to add or remove a solution from your workspace.
-
azure-monitor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Monitor description: Sample Azure Resource Graph queries for Azure Monitor showing use of resource types and tables to access Azure Monitor related resources and properties. Previously updated : 08/27/2021 Last updated : 08/31/2021
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
You can download the latest version of the Windows agent from [here](https://aka
2. Run the following command.
- ```dos
+ ```cmd
    InstallDependencyAgent-Windows.exe /S /RebootMode=manual
    ```
You can download the latest version of the Linux agent from [here](https://aka.m
1. Sign on to the computer with an account that has administrative rights.
-2. Run the following command as root`sh InstallDependencyAgent-Linux64.bin -s`.
+2. Run the following command as root.
+
+ ```bash
+ InstallDependencyAgent-Linux64.bin -s
+ ```
If the Dependency agent fails to start, check the logs for detailed error information. On Linux agents, the log directory is */var/opt/microsoft/dependency-agent/log*.
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-functions-resource.md
Resource Manager provides the following functions for getting resource values in
* [getSecret](#getsecret) * [list*](#list) * [pickZones](#pickzones)
+* [providers (deprecated)](#providers)
* [reference](#reference) * [resourceId](#resourceid) * [subscriptionResourceId](#subscriptionresourceid)
The output from the preceding examples returns three arrays.
You can use the response from pickZones to determine whether to provide null for zones or assign virtual machines to different zones.
+## providers
+
+**The providers function has been deprecated.** We no longer recommend using it. If you used this function to get an API version for the resource provider, we recommend that you provide a specific API version in your template. Using a dynamically returned API version can break your template if the properties change between versions.
+ ## reference `reference(resourceName or resourceIdentifier, [apiVersion], ['Full'])`
azure-resource-manager Bicep Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-functions.md
The following functions are available for getting resource values:
* [listSecrets](./bicep-functions-resource.md#list) * [list*](./bicep-functions-resource.md#list) * [pickZones](./bicep-functions-resource.md#pickzones)
+* [providers (deprecated)](./bicep-functions-resource.md#providers)
* [reference](./bicep-functions-resource.md#reference) * [resourceId](./bicep-functions-resource.md#resourceid) - can be used at any scope, but the valid parameters change depending on the scope. * [subscriptionResourceId](./bicep-functions-resource.md#subscriptionresourceid)
azure-resource-manager App Service Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-limitations/app-service-move-limitations.md
Title: Move Azure App Service resources description: Use Azure Resource Manager to move App Service resources to a new resource group or subscription. Previously updated : 08/26/2021 Last updated : 08/30/2021 # Move guidance for App Service resources
When moving a Web App across subscriptions, the following guidance applies:
- App Service Environments - All App Service resources in the resource group must be moved together. - App Service Environments can't be moved to a new resource group or subscription. However, you can move a web app and app service plan to a new subscription without moving the App Service Environment. After the move, the web app is no longer hosted in the App Service Environment.-- You can move a certificate bound to a web without deleting the TLS bindings, as long as the certificate is moved with all other resources in the resource group.
+- You can move a certificate bound to a web app without deleting the TLS bindings, as long as the certificate is moved with all other resources in the resource group. However, you can't move a free App Service managed certificate. For that scenario, see [Move with free managed certificates](#move-with-free-managed-certificates).
- App Service resources can only be moved from the resource group in which they were originally created. If an App Service resource is no longer in its original resource group, move it back to its original resource group. Then, move the resource across subscriptions. For help with finding the original resource group, see the next section. ## Find original resource group
When using the portal to move your App Service resources, you may see an error i
:::image type="content" source="./media/app-service-move-limitations/show-hidden-types.png" alt-text="Show hidden types":::
+## Move with free managed certificates
+
+You can't move a free App Service managed certificate. Instead, delete the managed certificate and recreate it after moving the web app. To get instructions for deleting the certificate, use the **Migration Operations** tool.
+
+If your free App Service managed certificate gets created in an unexpected resource group, try moving the app service plan back to its original resource group. Then, recreate the free managed certificate. This issue will be fixed.
+ ## Move support To determine which App Service resources can be moved, see move support status for:
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 08/24/2021 Last updated : 08/30/2021 # Move operation support for resources
Jump to a resource provider namespace:
> | availablestacks | No | No | No | > | billingmeters | No | No | No | > | certificates | No | Yes | No |
+> | certificates (managed) | No | No | No |
> | connectiongateways | Yes | Yes | No | > | connections | Yes | Yes | No | > | customapis | Yes | Yes | No |
azure-resource-manager Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Resource Manager description: Sample Azure Resource Graph queries for Azure Resource Manager showing use of resource types and tables to access Azure Resource Manager related resources and properties. Previously updated : 08/27/2021 Last updated : 08/31/2021
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 08/16/2021 Last updated : 08/31/2021
Resource Manager provides the following functions for getting resource values in
* [extensionResourceId](#extensionresourceid) * [list*](#list) * [pickZones](#pickzones)
+* [providers (deprecated)](#providers)
* [reference](#reference) * [resourceGroup](#resourcegroup) * [resourceId](#resourceid)
To get values from parameters, variables, or the current deployment, see [Deploy
## extensionResourceId
-`extensionResourceId(resourceId, resourceType, resourceName1, [resourceName2], ...)`
+`extensionResourceId(baseResourceId, resourceType, resourceName1, [resourceName2], ...)`
Returns the resource ID for an [extension resource](../management/extension-resource-types.md), which is a resource type that is applied to another resource to add to its capabilities.
Returns the resource ID for an [extension resource](../management/extension-reso
| Parameter | Required | Type | Description | |: |: |: |: |
-| resourceId |Yes |string |The resource ID for the resource that the extension resource is applied to. |
-| resourceType |Yes |string |Type of resource including resource provider namespace. |
-| resourceName1 |Yes |string |Name of resource. |
+| baseResourceId |Yes |string |The resource ID for the resource that the extension resource is applied to. |
+| resourceType |Yes |string |Type of the extension resource including resource provider namespace. |
+| resourceName1 |Yes |string |Name of the extension resource. |
| resourceName2 |No |string |Next resource name segment, if needed. | Continue adding resource names as parameters when the resource type includes more segments.
The basic format of the resource ID returned by this function is:
```json
{scope}/providers/{extensionResourceProviderNamespace}/{extensionResourceType}/{extensionResourceName}
```
-The scope segment varies by the resource being extended.
+The scope segment varies by the base resource being extended. For example, the ID for a subscription has different segments than the ID for a resource group.
When the extension resource is applied to a **resource**, the resource ID is returned in the following format:
When the extension resource is applied to a **resource**, the resource ID is ret
```json
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{baseResourceProviderNamespace}/{baseResourceType}/{baseResourceName}/providers/{extensionResourceProviderNamespace}/{extensionResourceType}/{extensionResourceName}
```
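For example, a template output could build the ID of a management lock applied directly to a storage account. This is only a sketch to illustrate the parameter order; the storage account name `examplestorage` and lock name `examplelock` are placeholder values, not names used elsewhere in this article:

```json
"outputs": {
  "storageLockId": {
    "type": "string",
    "value": "[extensionResourceId(resourceId('Microsoft.Storage/storageAccounts', 'examplestorage'), 'Microsoft.Authorization/locks', 'examplelock')]"
  }
}
```

Here the first argument supplies the base resource ID (the scope), and the remaining arguments name the extension resource type and its name segments.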
-When the extension resource is applied to a **resource group**, the format is:
+When the extension resource is applied to a **resource group**, the returned format is:
```json
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{extensionResourceProviderNamespace}/{extensionResourceType}/{extensionResourceName}
```
-When the extension resource is applied to a **subscription**, the format is:
+An example of using this function with a resource group is shown in the next section.
+
+When the extension resource is applied to a **subscription**, the returned format is:
```json
/subscriptions/{subscriptionId}/providers/{extensionResourceProviderNamespace}/{extensionResourceType}/{extensionResourceName}
```
-When the extension resource is applied to a **management group**, the format is:
+When the extension resource is applied to a **management group**, the returned format is:
```json
/providers/Microsoft.Management/managementGroups/{managementGroupName}/providers/{extensionResourceProviderNamespace}/{extensionResourceType}/{extensionResourceName}
```
+An example of using this function with a management group is shown in the next section.
+ ### extensionResourceId example The following example returns the resource ID for a resource group lock.
The following example shows how to use the pickZones function to enable zone red
] ```
+## providers
+
+**The providers function has been deprecated.** We no longer recommend using it. If you used this function to get an API version for the resource provider, we recommend that you provide a specific API version in your template. Using a dynamically returned API version can break your template if the properties change between versions.
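+For example, instead of resolving an API version dynamically, pin a specific version directly on the resource declaration. The following is only a sketch of that approach; the storage account resource and the `2021-04-01` API version are illustrative and not taken from this article:
+
+```json
+{
+  // Pin the API version explicitly rather than looking it up at deployment time.
+  "type": "Microsoft.Storage/storageAccounts",
+  "apiVersion": "2021-04-01",
+  "name": "[parameters('storageAccountName')]",
+  "location": "[resourceGroup().location]",
+  "sku": {
+    "name": "Standard_LRS"
+  },
+  "kind": "StorageV2"
+}
+```
+
+Pinning the version keeps the deployed properties deterministic even if the resource provider introduces newer API versions later.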
+ ## reference `reference(resourceName or resourceIdentifier, [apiVersion], ['Full'])`
azure-resource-manager Template Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions.md
Title: Template functions description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values, work with strings and numerics, and retrieve deployment information. Previously updated : 10/12/2020 Last updated : 08/31/2021 # ARM template functions
Resource Manager provides the following functions for getting resource values:
* [listSecrets](template-functions-resource.md#list) * [list*](template-functions-resource.md#list) * [pickZones](template-functions-resource.md#pickzones)
+* [providers (deprecated)](template-functions-resource.md#providers)
* [reference](template-functions-resource.md#reference) * [resourceGroup](template-functions-resource.md#resourcegroup) - can only be used in deployments to a resource group. * [resourceId](template-functions-resource.md#resourceid) - can be used at any scope, but the valid parameters change depending on the scope.
azure-sql Authentication Aad Service Principal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-service-principal-tutorial.md
To grant this required permission, run the following script.
> [!NOTE] > This script must be executed by an Azure AD `Global Administrator` or a `Privileged Roles Administrator`. >
-> In **public preview**, you can assign the `Directory Readers` role to a group in Azure AD. The group owners can then add the managed identity as a member of this group, which would bypass the need for a `Global Administrator` or `Privileged Roles Administrator` to grant the `Directory Readers` role. For more information on this feature, see [Directory Readers role in Azure Active Directory for Azure SQL](authentication-aad-directory-readers-role.md).
+> You can assign the `Directory Readers` role to a group in Azure AD. The group owners can then add the managed identity as a member of this group, which would bypass the need for a `Global Administrator` or `Privileged Roles Administrator` to grant the `Directory Readers` role. For more information on this feature, see [Directory Readers role in Azure Active Directory for Azure SQL](authentication-aad-directory-readers-role.md).
- Replace `<TenantId>` with your `TenantId` gathered earlier. - Replace `<server name>` with your SQL logical server name. If your server name is `myserver.database.windows.net`, replace `<server name>` with `myserver`.
azure-sql Authentication Aad Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-service-principal.md
To enable an Azure AD object creation in SQL Database on behalf of an Azure AD a
> [!IMPORTANT] > Steps 1 and 2 must be executed in the above order. First, create or assign the server identity, followed by granting the [**Directory Readers**](../../active-directory/roles/permissions-reference.md#directory-readers) permission. Omitting one of these steps, or both will cause an execution error during an Azure AD object creation in Azure SQL on behalf of an Azure AD application. >
-> In **public preview**, you can assign the **Directory Readers** role to a group in Azure AD. The group owners can then add the managed identity as a member of this group, which would bypass the need for a **Global Administrator** or **Privileged Roles Administrator** to grant the **Directory Readers** role. For more information on this feature, see [Directory Readers role in Azure Active Directory for Azure SQL](authentication-aad-directory-readers-role.md).
+> You can assign the **Directory Readers** role to a group in Azure AD. The group owners can then add the managed identity as a member of this group, which would bypass the need for a **Global Administrator** or **Privileged Roles Administrator** to grant the **Directory Readers** role. For more information on this feature, see [Directory Readers role in Azure Active Directory for Azure SQL](authentication-aad-directory-readers-role.md).
## Troubleshooting and limitations
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-overview.md
Previously updated : 05/10/2021 Last updated : 08/30/2021 # Use auto-failover groups to enable transparent and coordinated failover of multiple databases
Be aware of the following limitations:
- Failover groups cannot be created between two servers or instances in the same Azure regions. - Failover groups cannot be renamed. You will need to delete the group and re-create it with a different name. - Database rename is not supported for instances in failover group. You will need to temporarily delete failover group to be able to rename a database.-- System databases are not replicated to the secondary instance in a failover group. Therefore, scenarios that depend on objects from the system databases will be impossible on the secondary instance unless the objects are manually created on the secondary.
+- System databases are not replicated to the secondary instance in a failover group. Therefore, scenarios that depend on objects from the system databases require those objects to be created manually on the secondary instance and also kept in sync manually after any changes made on the primary instance. The only exception is the service master key (SMK) for SQL Managed Instance, which is replicated automatically to the secondary instance when the failover group is created. Any subsequent changes to the SMK on the primary instance, however, are not replicated to the secondary instance.
## Programmatically managing failover groups
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
The following table lists the major features of SQL Server and provides informat
| [DDL statements](/sql/t-sql/statements/statements) | Most - see individual statements | Yes - see [T-SQL differences](../managed-instance/transact-sql-tsql-differences-sql-server.md) | | [DDL triggers](/sql/relational-databases/triggers/ddl-triggers) | Database only | Yes | | [Distributed partition views](/sql/t-sql/statements/create-view-transact-sql#partitioned-views) | No | Yes |
-| [Distributed transactions - MS DTC](/sql/relational-databases/native-client-ole-db-transactions/supporting-distributed-transactions) | No - see [Elastic transactions](elastic-transactions-overview.md) | No - see [Linked server differences](../managed-instance/transact-sql-tsql-differences-sql-server.md#linked-servers). Try to consolidate databases from several distributed SQL Server instances into one SQL Managed Instance during migration. |
+| [Distributed transactions - MS DTC](/sql/relational-databases/native-client-ole-db-transactions/supporting-distributed-transactions) | No - see [Elastic transactions](elastic-transactions-overview.md) | No - see [Elastic transactions](elastic-transactions-overview.md) |
| [DML triggers](/sql/relational-databases/triggers/create-dml-triggers) | Most - see individual statements | Yes | | [DMVs](/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views) | Most - see individual DMVs | Yes - see [T-SQL differences](../managed-instance/transact-sql-tsql-differences-sql-server.md) | | [Elastic query](elastic-query-overview.md) (in public preview) | Yes, with required RDBMS type. | No |
The following table lists the major features of SQL Server and provides informat
| Time zone choice | No | [Yes](../managed-instance/timezones-overview.md), and it must be configured when the SQL Managed Instance is created. | | [Trace flags](/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql) | No | Yes, but only limited set of global trace flags. See [DBCC differences](../managed-instance/transact-sql-tsql-differences-sql-server.md#dbcc) | | [Transactional Replication](../managed-instance/replication-transactional-overview.md) | Yes, [Transactional and snapshot replication subscriber only](migrate-to-database-from-sql-server.md) | Yes, in [public preview](/sql/relational-databases/replication/replication-with-sql-database-managed-instance). See the constraints [here](../managed-instance/transact-sql-tsql-differences-sql-server.md#replication). |
-| [Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption-tde) | Yes - General Purpose and Business Critical service tiers only| [Yes](transparent-data-encryption-tde-overview.md) |
+| [Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption-tde) | Yes - General Purpose, Business Critical, and Hyperscale (in preview) service tiers only| [Yes](transparent-data-encryption-tde-overview.md) |
| Windows authentication | No | No | | [Windows Server Failover Clustering](/sql/sql-server/failover-clusters/windows/windows-server-failover-clustering-wsfc-with-sql-server) | No. Other techniques that provide [high availability](high-availability-sla.md) are included with every database. Disaster recovery is discussed in [Overview of business continuity with Azure SQL Database](business-continuity-high-availability-disaster-recover-hadr-overview.md). | No. Other techniques that provide [high availability](high-availability-sla.md) are included with every database. Disaster recovery is discussed in [Overview of business continuity with Azure SQL Database](business-continuity-high-availability-disaster-recover-hadr-overview.md). |
azure-sql Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure SQL Database description: Sample Azure Resource Graph queries for Azure SQL Database showing use of resource types and tables to access Azure SQL Database related resources and properties. Previously updated : 08/27/2021 Last updated : 08/31/2021
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-create-quickstart.md
Last updated 01/27/2021
In this quickstart, you create a [single database](single-database-overview.md) in Azure SQL Database using either the Azure portal, a PowerShell script, or an Azure CLI script. You then query the database using **Query editor** in the Azure portal.
-## Prerequisite
+## Prerequisites
- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).-- You may also need the latest version of either [Azure PowerShell](/powershell/azure/install-az-ps) or the [Azure CLI](/cli/azure/install-azure-cli-windows), depending on the creation method you choose.
+- The latest version of either [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli-windows).
## Create a single database
This quickstart creates a single database in the [serverless compute tier](serve
# [Portal](#tab/azure-portal)
-To create a single database in the Azure portal this quickstart starts at the Azure SQL page.
+To create a single database in the Azure portal, this quickstart starts at the Azure SQL page.
1. Browse to the [Select SQL Deployment option](https://portal.azure.com/#create/Microsoft.AzureSQL) page. 1. Under **SQL databases**, leave **Resource type** set to **Single database**, and select **Create**.
To create a single database in the Azure portal this quickstart starts at the Az
1. On the **Basics** tab of the **Create SQL Database** form, under **Project details**, select the desired Azure **Subscription**. 1. For **Resource group**, select **Create new**, enter *myResourceGroup*, and select **OK**.
-1. For **Database name** enter *mySampleDatabase*.
+1. For **Database name**, enter *mySampleDatabase*.
1. For **Server**, select **Create new**, and fill out the **New server** form with the following values:
- - **Server name**: Enter *mysqlserver*, and add some characters for uniqueness. We can't provide an exact server name to use because server names must be globally unique for all servers in Azure, not just unique within a subscription. So enter something like mysqlserver12345, and the portal lets you know if it is available or not.
+ - **Server name**: Enter *mysqlserver*, and add some characters for uniqueness. We can't provide an exact server name to use because server names must be globally unique for all servers in Azure, not just unique within a subscription. So enter something like mysqlserver12345, and the portal lets you know if it's available or not.
- **Server admin login**: Enter *azureuser*. - **Password**: Enter a password that meets requirements, and enter it again in the **Confirm password** field. - **Location**: Select a location from the dropdown list.
az sql db create \
--capacity 2 ```
+# [Azure CLI (sql up)](#tab/azure-cli-sql-up)
+
+## Use Azure Cloud Shell
+
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com](https://shell.azure.com). Select **Copy** to copy a block of code, paste it into the Cloud Shell, and press **Enter** to run it.
+
+## Create a database and resources
+
+The [az sql up](/cli/azure/sql#az_sql_up) command simplifies the database creation process. With it, you can create a database and all of its associated resources with a single command. This includes the resource group, server name, server location, database name, and login information. The database is created with a default pricing tier of General Purpose, Provisioned, Gen5, 2 vCores.
+
+This command creates and configures a [logical server](logical-servers.md) for Azure SQL Database for immediate use. For more granular resource control during database creation, use the standard Azure CLI commands in this article.
+
+> [!NOTE]
+> When running the `az sql up` command for the first time, the Azure CLI prompts you to install the `db-up` extension. This extension is currently in preview. Accept the installation to continue. For more information about extensions, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+
+1. Run the `az sql up` command. If you omit a parameter, such as `--server-name`, that resource is created with a random name and login information assigned to it.
+
+ ```azurecli-interactive
+ az sql up \
+ --resource-group $resourceGroupName \
+ --location $location \
+ --server-name $serverName \
+ --database-name mySampleDatabase \
+ --admin-user $adminlogin \
+ --admin-password $password
+ ```
+
+2. A server firewall rule is automatically created. If the server declines your IP address, create a new firewall rule using the `az sql server firewall-rule create` command.
+
+ ```azurecli-interactive
+ az sql server firewall-rule create \
+ --resource-group $resourceGroupName \
+ --server $serverName \
+ -n AllowYourIp \
+ --start-ip-address $startip \
+ --end-ip-address $endip
+ ```
+
+3. All required resources are created, and the database is ready for queries.
# [PowerShell](#tab/azure-powershell)
To delete the resource group and all its resources, run the following Azure CLI
az group delete --name $resourceGroupName ```
+### [Azure CLI (sql up)](#tab/azure-cli-sql-up)
+
+To delete the resource group and all its resources, run the following Azure CLI command, using the name of your resource group:
+
+```azurecli-interactive
+az group delete --name $resourceGroupName
+```
+ ### [PowerShell](#tab/azure-powershell) To delete the resource group and all its resources, run the following PowerShell cmdlet, using the name of your resource group:
azure-sql Aad Security Configure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/aad-security-configure-tutorial.md
Once the Azure AD server principal (login) has been created, and provided with `
GO ```
-> [!NOTE]
-> Azure AD guest users are supported for SQL Managed Instance logins, only when added as part of an Azure AD Group. An Azure AD guest user is an account that is invited to the Azure AD instance that the managed instance belongs to, from another Azure AD instance. For example, joe@contoso.com (Azure AD account) or steve@outlook.com (Microsoft account) can be added to a group in the Azure AD aadsqlmi instance. Once the users are added to a group, a login can be created in the SQL Managed Instance **master** database for the group using the **CREATE LOGIN** syntax. Guest users who are members of this group can connect to the managed instance using their current logins (for example, joe@contoso.com or steve@outlook.com).
+Guest users are supported as individual users (they don't need to be part of an Azure AD group, although they can be), and their logins can be created directly in the **master** database (for example, joe@contoso.com) using the current login syntax.
## Create an Azure AD user from the Azure AD server principal (login)
azure-video-analyzer Manage Account Connected To Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/manage-account-connected-to-azure.md
In the **Update connection to Azure Media Services** dialog of your [Video Analy
|Application ID|The Azure AD application ID (with permissions for the specified Media Services account) that you created for this Video Analyzer for Media account. <br/><br/>To get the app ID, navigate to Azure portal. Under the Media Services account, choose your account and go to **API Access**. Select **Connect to Media Services API with service principal** -> **Azure AD App**. Copy the relevant parameters.| |Application key|The Azure AD application key associated with your Media Services account that you specified above. <br/><br/>To get the app key, navigate to Azure portal. Under the Media Services account, choose your account and go to **API Access**. Select **Connect to Media Services API with service principal** -> **Manage application** -> **Certificates & secrets**. Copy the relevant parameters.|
-## Autoscale reserved units
-
-The **Settings** page enables you to set the autoscaling of media reserved units (RU). If the option is **On**, you can allocate the maximum number of RUs and be sure that Video Analyzer for Media stops/starts RUs automatically. With this option, you don't pay extra money for idle time but also don't wait for indexing jobs to complete a long time when the indexing load is high.
-
-Autoscale doesn't scale below 1 RU or above the default limit of the Media Services account. To increase the limit, create a service request. For information about quotas and limitations and how to open a support ticket, see [Quotas and limitations](../../media-services/previous/media-services-quotas-and-limitations.md).
-
-![Autoscale reserved units Video Analyzer for Media](./media/manage-account-connected-to-azure/autoscale-reserved-units.png)
- ## Errors and warnings If your account needs some adjustments, you see relevant errors and warnings about your account configuration on the **Settings** page. The messages contain links to exact places in Azure portal where you need to make changes. This section gives more details about the error and warning messages.
azure-video-analyzer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-embed-widgets.md
If you use a video player other than Azure Media Player, you must manually manip
function jumpTo(evt) { var origin = evt.origin || evt.originalEvent.origin;
- // Validate that the event comes from the videobreakdown domain.
- if ((origin === "https://www.videobreakdown.com") && evt.data.time !== undefined){
+ // Validate that the event comes from the videoindexer domain.
+ if ((origin === "https://www.videoindexer.ai") && evt.data.time !== undefined){
// Call your player's "jumpTo" implementation. playerInstance.currentTime = evt.data.time;
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-identity.md
Azure VMware Solution private clouds are provisioned with a vCenter Server and N
## vCenter access and identity
-In Azure VMware Solution, vCenter has a built-in local user called cloudadmin and is assigned to the CloudAdmin role. The local cloudadmin user is used to set up users in Active Directory (AD). In general, the CloudAdmin role creates and manages workloads in your private cloud. But in Azure VMware Solution, the CloudAdmin role has vCenter privileges that differ from other VMware cloud solutions.
--- In a vCenter and ESXi on-premises deployment, the administrator has access to the vCenter administrator\@vsphere.local account. They can also have more AD users and groups assigned. --- In an Azure VMware Solution deployment, the administrator doesn't have access to the administrator user account. They can, however, assign AD users and groups to the CloudAdmin role on vCenter. -
-The private cloud user doesn't have access to and can't configure specific management components Microsoft supports and manages. For example, clusters, hosts, datastores, and distributed virtual switches.
> [!IMPORTANT] > Azure VMware Solution offers custom roles on vCenter but currently doesn't offer them on the Azure VMware Solution portal. For more information, see the [Create custom roles on vCenter](#create-custom-roles-on-vcenter) section later in this article.
Use the *admin* account to access NSX-T Manager. It has full privileges and lets
Now that you've covered Azure VMware Solution access and identity concepts, you may want to learn about: -- [How to enable Azure VMware Solution resource](deploy-azure-vmware-solution.md#register-the-microsoftavs-resource-provider)
+- [How to configure external identity source for vCenter](configure-identity-source-vcenter.md)
+
+- [How to enable Azure VMware Solution resource](deploy-azure-vmware-solution.md#register-the-microsoftavs-resource-provider)
+ - [Details of each privilege](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html)+ - [How Azure VMware Solution monitors and repairs private clouds](./concepts-private-clouds-clusters.md#host-monitoring-and-remediation)
azure-vmware Concepts Run Command https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-run-command.md
+
+ Title: Concepts - Run commands in Azure VMware Solution
+description: Learn about using run commands in Azure VMware Solution.
+ Last updated : 08/31/2021+++
+# Run commands in Azure VMware Solution
+
+In Azure VMware Solution, you get vCenter access with the CloudAdmin role. You can [view the privileges granted](concepts-identity.md#view-the-vcenter-privileges) to the Azure VMware Solution CloudAdmin role on your Azure VMware Solution private cloud vCenter. Run commands are a collection of PowerShell cmdlets that let you do certain operations on vCenter that require elevated privileges.
+
+Azure VMware Solution supports the following operations:
+
+- [Install and uninstall JetStream DR solution](deploy-disaster-recovery-using-jetstream.md)
+
+- [Configure an external identity source](configure-identity-source-vcenter.md)
+
+- [View and edit the storage policy](configure-storage-policy.md)
++
+>[!NOTE]
+>Run commands are executed one at a time in the order submitted.
+
+## View the status of an execution
+
+You can view the status of any run command executed, including the output, errors, warnings, and information.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Run command** > **Run execution status**.
+
+ You can sort by the execution name, package name, package version, command name, start time, end time, and status.
+
+ :::image type="content" source="media/run-command/run-execution-status.png" alt-text="Screenshot showing Run execution status tab." lightbox="media/run-command/run-execution-status.png":::
+
+1. Select the execution you want to view.
+
+ :::image type="content" source="media/run-command/run-execution-status-example.png" alt-text="Screenshot showing an example of a run execution.":::
+
+ You can view more details about the execution including the output, errors, warnings, and information.
+
+ - **Details** - Summary of the execution details, such as the name, status, package, and the name of the command that ran.
+
+ - **Output** - Message at the end of successful execution of a cmdlet. Not all cmdlets have output.
+
+ :::image type="content" source="media/run-command/run-execution-status-example-output.png" alt-text="Screenshot showing the output of a run execution.":::
+
+ - **Error** - Terminating exception that stopped the execution of a cmdlet.
+
+ :::image type="content" source="media/run-command/run-execution-status-example-error.png" alt-text="Screenshot showing the errors detected during the execution of an execution.":::
+
+ - **Warning** - Non-terminating exception that occurred during the execution of a cmdlet.
+
+ :::image type="content" source="media/run-command/run-execution-status-example-warning.png" alt-text="Screenshot showing the warnings detected during the execution of an execution.":::
+
+ - **Information** - Progress message during the execution of a cmdlet.
+
+ :::image type="content" source="medilet as it runs.":::
+++
+## Cancel or delete a job
+++
+### Method 1
+
+>[!NOTE]
+>Method 1 is irreversible.
+
+1. Select **Run command** > **Run execution status** and then select the job you want to cancel.
+
+ :::image type="content" source="media/run-command/run-execution-cancel-delete-job-method-1.png" alt-text="Screenshot showing how to cancel and delete a run command.":::
+
+2. Select **Yes** to cancel and remove the job for all users.
+++
+### Method 2
+
+1. Select **Run command** > **Packages** > **Run execution status**.
+
+2. Select **More** (...) for the job you want to cancel and delete.
+
+ :::image type="content" source="media/run-command/run-execution-cancel-delete-job-method-2.png" alt-text="Screenshot showing how to cancel and delete a run command using the ellipsis.":::
+
+3. Select **Yes** to cancel and remove the job for all users.
+++
+## Next steps
+
+Now that you've learned about the Run command concepts, you can use the Run command feature to:
+
+- [Configure storage policy](configure-storage-policy.md) - Each VM deployed to a vSAN datastore is assigned at least one VM storage policy. You can assign a VM storage policy in an initial deployment of a VM or when you perform other VM operations, such as cloning or migrating.
+
+- [Configure external identity source for vCenter](configure-identity-source-vcenter.md) - vCenter has a built-in local user called cloudadmin that is assigned to the CloudAdmin role. The local cloudadmin user is used to set up users in Active Directory (AD). With the Run command feature, you can configure Active Directory over LDAP or LDAPS for vCenter as an external identity source.
+
+- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) - Store data directly to a recovery cluster in vSAN. The data gets captured through I/O filters that run within vSphere. The underlying data store can be VMFS, VSAN, vVol, or any HCI platform.
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-storage.md
Title: Concepts - Storage
description: Learn about storage capacity, storage policies, fault tolerance, and storage integration in Azure VMware Solution private clouds. Previously updated : 07/28/2021 Last updated : 08/31/2021 # Azure VMware Solution storage concepts
Local storage in cluster hosts is used in cluster-wide vSAN datastore. All datas
## Storage policies and fault tolerance
-That default storage policy is set to RAID-1 (Mirroring), FTT-1, and thick provisioning. Unless you adjust the storage policy or you apply a new policy, the cluster continues to grow with this configuration. In a three-host cluster, FTT-1 accommodates a single host's failure. Microsoft governs failures regularly and replaces the hardware when events are detected from an architecture perspective.
+The default storage policy is set to RAID-1 (Mirroring), FTT-1, and thick provisioning. Unless you adjust the storage policy or apply a new policy, the cluster grows with this configuration. To set the storage policy, see [Configure storage policy](configure-storage-policy.md).
+
+In a three-host cluster, FTT-1 accommodates a single host's failure. Microsoft governs failures regularly and replaces the hardware when events are detected from an architecture perspective.
:::image type="content" source="media/concepts/vsphere-vm-storage-policies.png" alt-text="Screenshot that shows the vSphere Client VM Storage Policies."::: |Provisioning type |Description | |||
-|**Thick** | Is reserved or pre-allocated storage space. It protects systems by allowing them to function even if the vSAN datastore is full because the space is already reserved. For example, if you create a 10-GB virtual disk with thick provisioning, the full amount of virtual disk storage capacity is pre-allocated on the physical storage of the virtual disk and consumes all the space allocated to it in the datastore. It won't allow other virtual machines (VMs) to share the space from the datastore. |
+|**Thick** | Reserved or pre-allocated storage space. It protects systems by allowing them to function even if the vSAN datastore is full because the space is already reserved. For example, if you create a 10-GB virtual disk with thick provisioning, the full amount of virtual disk storage capacity is pre-allocated on the physical storage of the virtual disk and consumes all the space allocated to it in the datastore. It won't allow other virtual machines (VMs) to share the space from the datastore. |
|**Thin** | Consumes the space that it needs initially and grows to the data space demand used in the datastore. Outside the default (thick provision), you can create VMs with FTT-1 thin provisioning. For dedupe setup, use thin provisioning for your VM template. | >[!TIP]
You can use Azure storage services in workloads running in your private cloud. T
## Alerts and monitoring
-Microsoft provides alerts when capacity consumption exceeds 75%. You can monitor capacity consumption metrics that are integrated into Azure Monitor. For more information, see [Configure Azure Alerts in Azure VMware Solution](configure-alerts-for-azure-vmware-solution.md).
+Microsoft provides alerts when capacity consumption exceeds 75%. In addition, you can monitor capacity consumption metrics that are integrated into Azure Monitor. For more information, see [Configure Azure Alerts in Azure VMware Solution](configure-alerts-for-azure-vmware-solution.md).
## Next steps Now that you've covered Azure VMware Solution storage concepts, you may want to learn about: - [Attach disk pools to Azure VMware Solution hosts (Preview)](attach-disk-pools-to-azure-vmware-solution-hosts.md) - You can use disks as the persistent storage for Azure VMware Solution for optimal cost and performance.+
+- [Configure storage policy](configure-storage-policy.md) - Each VM deployed to a vSAN datastore is assigned at least one VM storage policy. You can assign a VM storage policy in an initial deployment of a VM or when you perform other VM operations, such as cloning or migrating.
+ - [Scale clusters in the private cloud][tutorial-scale-private-cloud] - You can scale the clusters and hosts in a private cloud as required for your application workload. Performance and availability limitations for specific services should be addressed on a case by case basis.+ - [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md) - You can use Azure NetApp to migrate and run the most demanding enterprise file-workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes. + - [vSphere role-based access control for Azure VMware Solution](concepts-identity.md) - You use vCenter to manage VM workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter and restricted administrator rights for NSX-T Manager.
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-identity-source-vcenter.md
+
+ Title: Configure external identity source for vCenter
+description: Learn how to configure Active Directory over LDAP or LDAPS for vCenter as an external identity source.
+ Last updated : 08/31/2021+++++
+# Configure external identity source for vCenter
++++
+>[!NOTE]
+>Run commands are executed one at a time in the order submitted.
+
+In this how-to, you learn how to:
+
+> [!div class="checklist"]
+> * List all existing external identity sources integrated with vCenter SSO
+> * Add Active Directory over LDAP, with or without SSL
+> * Add existing AD group to cloudadmin group
+> * Remove AD group from the cloudadmin role
+> * Remove existing external identity sources
+++
+## Prerequisites
+
+- Establish connectivity from your on-premises network to your private cloud.
+
+- If you have AD with SSL, download the certificate for AD authentication and upload it to an Azure Storage account as blob storage. Then, you'll need to [grant access to Azure Storage resources using shared access signature (SAS)](../storage/common/storage-sas-overview.md).
+
+- If you use FQDN, enable DNS resolution on your on-premises AD.
+
+
+
+## List external identity
+++
+You'll run the `Get-ExternalIdentitySources` cmdlet to list all external identity sources already integrated with vCenter SSO.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Run command** > **Packages** > **Get-ExternalIdentitySources**.
+
+ :::image type="content" source="media/run-command/run-command-overview.png" alt-text="Screenshot showing how to access the run commands available." lightbox="media/run-command/run-command-overview.png":::
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ :::image type="content" source="media/run-command/run-command-get-external-identity-sources.png" alt-text="Screenshot showing how to list external identity source. ":::
+
+ | **Field** | **Value** |
+ | | |
+ | **Retain up to** |Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **getExternalIdentity**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
++
+## Add Active Directory over LDAP
+
+You'll run the `New-AvsLDAPIdentitySource` cmdlet to add AD over LDAP as an external identity source to use with SSO into vCenter.
+
+1. Select **Run command** > **Packages** > **New-AvsLDAPIdentitySource**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | | |
+ | **Name** | User-friendly name of the external identity source, for example, **avslap.local**. |
+ | **DomainName** | The FQDN of the domain. |
+ | **DomainAlias** | For Active Directory identity sources, the domain's NetBIOS name. Add the NetBIOS name of the AD domain as an alias of the identity source if you're using SSPI authentication. |
+ | **PrimaryUrl** | Primary URL of the external identity source, for example, **ldap://yourserver:389**. |
+ | **SecondaryURL** | Secondary fallback URL to use if the primary fails. |
+ | **BaseDNUsers** | Where to look for valid users, for example, **CN=users,DC=yourserver,DC=internal**. Base DN is needed to use LDAP Authentication. |
+ | **BaseDNGroups** | Where to look for groups, for example, **CN=group1, DC=yourserver,DC= internal**. Base DN is needed to use LDAP Authentication. |
+ | **Credential** | Username and password used for authentication with the AD source (not cloudadmin). |
+ | **GroupName** | Group to give cloud admin access in your external identity source, for example, **avs-admins**. |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **addexternalIdentity**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
+++
+## Add Active Directory over LDAP with SSL
+
+You'll run the `New-AvsLDAPSIdentitySource` cmdlet to add an AD over LDAP with SSL as an external identity source to use with SSO into vCenter.
+
+1. Download the certificate for AD authentication and upload it to an Azure Storage account as blob storage.
+
+1. [Grant access to Azure Storage resources using shared access signature (SAS)](../storage/common/storage-sas-overview.md).
+
+1. Select **Run command** > **Packages** > **New-AvsLDAPSIdentitySource**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | | |
+ | **Name** | User-friendly name of the external identity source, for example, **avslap.local**. |
+ | **DomainName** | The FQDN of the domain. |
+ | **DomainAlias** | For Active Directory identity sources, the domain's NetBIOS name. Add the NetBIOS name of the AD domain as an alias of the identity source if you're using SSPI authentication. |
+ | **PrimaryUrl** | Primary URL of the external identity source, for example, **ldap://yourserver:389**. |
+ | **SecondaryURL** | Secondary fallback URL to use if the primary fails. |
+ | **BaseDNUsers** | Where to look for valid users, for example, **CN=users,DC=yourserver,DC=internal**. Base DN is needed to use LDAP Authentication. |
+ | **BaseDNGroups** | Where to look for groups, for example, **CN=group1, DC=yourserver,DC= internal**. Base DN is needed to use LDAP Authentication. |
+ | **Credential** | The username and password used for authentication with the AD source (not cloudadmin). |
+ | **CertificateSAS** | Path to SAS strings with the certificates for authentication to the AD source. |
+ | **GroupName** | Group to give cloud admin access in your external identity source, for example, **avs-admins**. |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **addexternalIdentity**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
++++
+## Add existing AD group to cloudadmin group
+
+You'll run the `Add-GroupToCloudAdmins` cmdlet to add an existing AD group to cloudadmin group. The users in this group have privileges equal to the cloudadmin (cloudadmin@vsphere.local) role defined in vCenter SSO.
+
+1. Select **Run command** > **Packages** > **Add-GroupToCloudAdmins**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | | |
+ | **GroupName** | Name of the group to add, for example, **VcAdminGroup**. |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **addADgroup**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
++++
+## Remove AD group from the cloudadmin role
+
+You'll run the `Remove-GroupFromCloudAdmins` cmdlet to remove a specified AD group from the cloudadmin role.
+
+1. Select **Run command** > **Packages** > **Remove-GroupFromCloudAdmins**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | | |
+ | **GroupName** | Name of the group to remove, for example, **VcAdminGroup**. |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **removeADgroup**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
++++++
+## Remove existing external identity sources
+
+You'll run the `Remove-ExternalIdentitySources` cmdlet to remove all existing external identity sources in bulk.
+
+1. Select **Run command** > **Packages** > **Remove-ExternalIdentitySources**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | | |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **remove_externalIdentity**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
++
+## Next steps
+
+Now that you've learned how to configure LDAP and LDAPS, you can learn more about:
+
+- [How to configure storage policy](configure-storage-policy.md) - Each VM deployed to a vSAN datastore is assigned at least one VM storage policy. You can assign a VM storage policy in an initial deployment of a VM or when you do other VM operations, such as cloning or migrating.
+
+- [Azure VMware Solution identity concepts](concepts-identity.md) - Use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter and restricted administrator rights for NSX-T Manager.
+
+
azure-vmware Configure Storage Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-storage-policy.md
+
+ Title: Configure storage policy
+description: Learn how to configure storage policy for your Azure VMware Solution virtual machines.
+ Last updated : 08/31/2021+
+#Customer intent: As an Azure service administrator, I want set the vSAN storage policies to determine how storage is allocated to the VM.
+++
+# Configure storage policy
+
+vSAN storage policies define storage requirements for your virtual machines (VMs). These policies guarantee the required level of service for your VMs because they determine how storage is allocated to the VM. Each VM deployed to a vSAN datastore is assigned at least one VM storage policy.
+
+You can assign a VM storage policy in an initial deployment of a VM or when you do other VM operations, such as cloning or migrating. Post-deployment, cloudadmin users or users with equivalent roles can't change the default storage policy for a VM. However, changing the **VM storage policy** per disk is permitted.
+
+The Run command lets authorized users change the default or existing VM storage policy to an available policy for a VM post-deployment. This doesn't change the disk-level VM storage policy, which you can always change as per your requirements.
++
+>[!NOTE]
+>Run commands are executed one at a time in the order submitted.
++
+In this how-to, you learn how to:
+
+> [!div class="checklist"]
+> * List all storage policies
+> * Set the storage policy for a VM
+> * Specify storage policy for a cluster
+++
+## Prerequisites
+
+Make sure that the [minimum number of hosts is met](https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vsphere.vmc-aws-manage-data-center-vms.doc/GUID-EDBB551B-51B0-421B-9C44-6ECB66ED660B.html).
+
+| **RAID configuration** | **Failures to tolerate (FTT)** | **Minimum hosts required** |
+| | :: | :: |
+| RAID-1 (Mirroring) <br />Default setting. | 1 | 3 |
+| RAID-5 (Erasure Coding) | 1 | 4 |
+| RAID-1 (Mirroring) | 2 | 5 |
+| RAID-6 (Erasure Coding) | 2 | 6 |
+| RAID-1 (Mirroring) | 3 | 7 |
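+
+The mirroring rows in this table follow the standard vSAN sizing rule of 2 x FTT + 1 hosts, while the erasure-coding configurations are fixed at four and six hosts. The following PowerShell helper is a minimal sketch that only reproduces the table values; the function name is hypothetical and isn't part of any Run command package.
+
+```powershell
+# Hypothetical helper that reproduces the minimum host counts from the table above.
+function Get-MinimumHostCount {
+    param(
+        [Parameter(Mandatory)][ValidateSet('RAID-1', 'RAID-5', 'RAID-6')]
+        [string] $RaidConfiguration,
+
+        [ValidateRange(1, 3)]
+        [int] $FailuresToTolerate = 1
+    )
+    switch ($RaidConfiguration) {
+        'RAID-1' { 2 * $FailuresToTolerate + 1 }   # mirroring: 2 x FTT + 1 hosts
+        'RAID-5' { 4 }                             # erasure coding, FTT = 1
+        'RAID-6' { 6 }                             # erasure coding, FTT = 2
+    }
+}
+
+Get-MinimumHostCount -RaidConfiguration 'RAID-1' -FailuresToTolerate 2   # returns 5
+```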
++
+
+
+## List storage policies
+
+You'll run the `Get-StoragePolicies` cmdlet to list the vSAN-based storage policies available to set on a VM.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Run command** > **Packages** > **Get-StoragePolicies**.
+
+ :::image type="content" source="media/run-command/run-command-overview-storage-policy.png" alt-text="Screenshot showing how to access the storage policy run commands available." lightbox="media/run-command/run-command-overview-storage-policy.png":::
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ :::image type="content" source="media/run-command/run-command-get-storage-policy.png" alt-text="Screenshot showing how to list storage policies available. ":::
+
+ | **Field** | **Value** |
+ | --- | --- |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **Get-StoragePolicies-Exec1**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
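+
+If you also have direct PowerCLI access to vCenter (for example, from a jump box), the following sketch shows a roughly equivalent way to view the same policy list. It assumes the VMware.PowerCLI module is installed and that your credentials can read storage policies; the server name is a placeholder.
+
+```powershell
+# Minimal PowerCLI sketch (assumption): list storage policies directly from vCenter.
+Connect-VIServer -Server 'vc.example.avs.azure.com' -Credential (Get-Credential)
+Get-SpbmStoragePolicy | Select-Object Name, Description
+```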
++++
+## Set storage policy on VM
+
+You'll run the `Set-AvsVMStoragePolicy` cmdlet to modify vSAN-based storage policies on an individual VM.
+
+>[!NOTE]
+>You cannot use the vSphere Client to change the default storage policy or any existing storage policies for a VM.
+
+1. Select **Run command** > **Packages** > **Set-AvsVMStoragePolicy**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | --- | --- |
+ | **VMName** | Name of the target VM. |
+ | **StoragePolicyName** | Name of the storage policy to set. For example, **RAID-FTT-1**. |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **changeVMStoragePolicy**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
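+
+For reference, the fields above map directly onto the cmdlet's parameters. The following sketch shows the equivalent call with example values; the VM name is hypothetical, and in the portal you supply these values in the Run command form rather than on a command line.
+
+```powershell
+# Example values only; the VM name is hypothetical.
+$params = @{
+    VMName            = 'sql-vm-01'
+    StoragePolicyName = 'RAID-FTT-1'
+}
+Set-AvsVMStoragePolicy @params
+```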
++
+## Specify storage policy for a cluster
+
+You'll run the `Set-ClusterDefaultStoragePolicy` cmdlet to specify the default storage policy for a cluster.
+
+1. Select **Run command** > **Packages** > **Set-ClusterDefaultStoragePolicy**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | --- | --- |
+ | **ClusterName** | Name of the cluster. |
+ | **StoragePolicyName** | Name of the storage policy to set. For example, **RAID-FTT-1**. |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **Set-ClusterDefaultStoragePolicy-Exec1**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
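+
+The same pattern applies at the cluster level; the values below are examples taken from the table above, and in the portal you supply them in the Run command form.
+
+```powershell
+# Example values only; supply these in the Run command form in the portal.
+Set-ClusterDefaultStoragePolicy -ClusterName 'Cluster-1' -StoragePolicyName 'RAID-FTT-1'
+```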
+++
+## Next steps
+
+Now that you've learned how to configure vSAN storage policies, you can learn more about:
+
+- [How to attach disk pools to Azure VMware Solution hosts (Preview)](attach-disk-pools-to-azure-vmware-solution-hosts.md) - You can use disks as the persistent storage for Azure VMware Solution for optimal cost and performance.
+
+- [How to configure external identity for vCenter](configure-identity-source-vcenter.md) - vCenter has a built-in local user called cloudadmin and assigned to the CloudAdmin role. The local cloudadmin user is used to set up users in Active Directory (AD). With the Run command feature, you can configure Active Directory over LDAP or LDAPS for vCenter as an external identity source.
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-vmware-hcx.md
In your data center, you can connect or pair the VMware HCX Cloud Manager in Azu
1. Under **Infrastructure**, select **Site Pairing** and select the **Connect To Remote Site** option (in the middle of the screen).
-1. Enter the Azure VMware Solution HCX Cloud Manager URL or IP address that you noted earlier `https://x.x.x.9`, the Azure VMware Solution cloudadmin\@vsphere.local username, and the password. Then select **Connect**.
+1. Enter the Azure VMware Solution HCX Cloud Manager URL or IP address that you noted earlier `https://x.x.x.9` and the credentials for a user which holds the CloudAdmin role in your private cloud. Then select **Connect**.
> [!NOTE] > To successfully establish a site pair: > * Your VMware HCX Connector must be able to route to your HCX Cloud Manager IP over port 443. >
- > * Use the same password that you used to sign in to vCenter. You defined this password on the initial deployment screen.
+ > * A service account from your external identity source, such as Active Directory, is recommended for site pairing connections. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
You'll see a screen showing that your VMware HCX Cloud Manager in Azure VMware Solution and your on-premises VMware HCX Connector are connected (paired).
azure-vmware Deploy Disaster Recovery Using Jetstream https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md
+
+ Title: Deploy disaster recovery using JetStream
+description: Learn how to deploy JetStream Disaster Recovery (DR) for your Azure VMware Solution private cloud and on-premises VMware workloads.
+ Last updated : 08/31/2021+++
+# Deploy disaster recovery using JetStream
+
+[JetStream Disaster Recovery (DR)](https://www.jetstreamsoft.com/product-portfolio/jetstream-dr/) is installed in a VMware vSphere environment and managed through a vCenter plug-in appliance. It provides cloud-native Continuous Data Protection (CDP), which constantly replicates virtual machine (VM) I/O operations. Instead of capturing snapshots at regular intervals, it continuously captures and replicates data as it's written to the primary storage with minimal effect on running applications. As a result, it allows you to quickly recover VMs and their data, reaching a lower recovery point objective (RPO).
+
+With Azure VMware Solution, you can store data directly to a recovery cluster in vSAN or an attached file system like Azure NetApp Files. The data gets captured through I/O filters that run within vSphere. The underlying data store can be VMFS, VSAN, vVol, or any HCI platform.
++
+JetStream DR software consists of three main components:
+
+- Management Server Virtual Appliance (MSA) is installed and configured before DR protection.
+
+- DR Virtual Appliance (DRVA) is an .ISO image that the JetStream DR MSA automatically deploys.
+
+- Host components (IO Filter packages)
+
+The MSA is used to install and configure host components on the compute cluster and then to administer JetStream DR software. The DRVA runs the data path DR components. Multiple DRVAs can run concurrently for better scalability. Each DRVA has one or more dedicated partitions attached as an iSCSI LUN or as a low-latency VDISK. The partitions are used to maintain replication logs and repositories for persistent metadata.
+
+In this article, you'll deploy and learn how to use JetStream DR in your Azure VMware Solution private cloud and on-premises VMware workloads.
++
+## Supported scenarios
+Depending on the protection services required and the type of private cloud protected, you can deploy JetStream DR in two ways:
+
+- On-premises to cloud
+
+- Cloud to cloud
+
+### On-premises to cloud deployment
+VMs running in an organization's VMware-based data center are continuously replicated to Microsoft Azure. As a result, VMs can resume operation in Azure VMware Solution if there's an incident in the on-premises data center. While the VMs are running in the recovery environment, they continue to replicate data for continued protection. After the on-premises data center is restored, the VMs and their data (including any new data generated by the VMs in the recovery environment) can return to their original data center without interruption.
++
+### Cloud to cloud deployment
+
+In this configuration, Azure VMware Solution hosts your primary environment in one data center. It protects the VMs and the data by continuously replicating to another private cloud in another of its data centers. If there is an incident, VMs and data are recovered in the second data center. This protection can be bi-directional, with data center "A" protecting "B", and vice versa.
+++
+## Prerequisites
+
+- Azure VMware Solution private cloud deployed as a secondary region.
+
+- An Ubuntu Linux jump box with an ExpressRoute connection to your Azure VMware Solution private cloud.
+
+- The latest version of PowerShell installed on the Linux jump box.
+
+- The latest third-party module from the PowerShell Gallery installed.
+
+### Protected site
+
+The protected site hosts a **service cluster** for administrative services, such as vCenter, DNS, and Active Directory, and **compute clusters** where protected line-of-business applications run. The protected site can be located on-premises or hosted in Azure VMware Solution.
+
+Any of the following storage types can be used:
+
+- Azure Blob Storage
+
+- Azure VMware Solution vSAN
+
+- Azure VMware Solution attached file system, such as Azure NetApp Files
++
+| **Item** | **Description** |
+| --- | --- |
+| **vCenter Server** | <ul><li>Supported version: 6.7</li><li>HTTPS port: If using a firewall, HTTPS port 443 must be open.</li><li>Connectivity: The JetStream DR Management Server Appliance FQDN must be reachable from vCenter. Otherwise, the plug-in installation fails.</li><li>Time: The vCenter and JetStream DR MSA clocks must be synchronized.</li></ul> |
+| **Distributed Resource Scheduler (DRS)** | It's recommended on the compute cluster for resource balancing. |
+| **Cluster** | vSphere Hosts: VMs to be protected by JetStream DR must be part of a cluster. |
+| **vSphere Host** | <ul><li>Supported version: 6.7U1 (build #10302608) or later</li><li>Connectivity: vCenter Server FQDN must be reachable from the host. Otherwise, the host configuration fails.</li><li>Time: The vSphere hosts and JetStream DR MSA clocks must be synchronized.</li><li>CIM Service: The CIM server must be enabled, which is the default setting.</li></ul> |
+| **JetStream DR MSA** | <ul><li>CPU: 64 bit, 4 vCPUs</li><li>Memory: 4 GB</li><li>Disk space: 60 GB</li><li>Network: Static or dynamically assigned (DHCP) IP addresses can be used. The FQDN must be registered with DNS.</li><li>DNS: DNS name resolution for vSphere hosts and vCenter Server</li></ul> |
+| **JetStream DRVA** | <ul><li>CPU: 4 cores</li><li>Memory: 8 GB</li><li>Network: Static or dynamically assigned (DHCP) IP addresses can be used.</li></ul> |
+| **Replication Log Store** | The protected site should expose a low-latency, flash storage device that the hosts share in the cluster for optimal performance. This device can be controlled by the JetStream DR software or provided by a third party. It's used as a repository for the replication log. The DRVA and ESXi host(s) must have direct access to this storage over iSCSI. |
+| **Ports** | When JetStream DR software is installed, a range of ports automatically opens on the source ESXi hosts. So for most users, no more action is necessary. However, in cases where the on-premises/source setup has special firewall rules blocking these ports, you'll need to open these ports manually.<br /><br />Port range: 32873-32878 |
+++
+### Recovery site
+
+An Azure VMware Solution _pilot light_ cluster is established for failover recovery. Although the recovery site is created as part of the installation process, the recovery site cluster is not fully populated or used during normal operation. Failed over compute clusters are added to the recovery site on-demand in response to a disaster event.
+
+### Network
+
+A network with the following characteristics must be established between the protected site and the recovery site.
+
+| **Items** | **Description** |
+| --- | --- |
+| **JetStream DR MSA** | A management network is required for the MSA. This network is used for access to the JetStream DR RESTful APIs and making other data path calls. If a private network is available for connecting to the object store, this private network should be added to the MSA VM as a separate network. If no private network is available, make sure the management network can be used to connect to the object store. <br /><br />A dedicated external network can be used for object store access; otherwise, data traffic is sent over the management network. |
+| **JetStream DRVA** | If the only network used is the management network, make sure it has access to both IO Filter and the object store. If multiple networks exist within the cluster, all must be attached to the DRVA VMs. |
+| **Recovery from Object Cloud Virtual Appliance (RocVA)** | If the only network used is the management network, make sure it has access to both the ESXi host(s) and the object store. If multiple networks exist within the cluster, all must be attached to the RocVA VM. The RocVA is a temporary VM created automatically when needed for VM recovery, then removed when it is no longer needed. |
+| **Object store / blob storage** | The object store/Blob Storage should be accessible to both the protected site and the recovery site. |
+| **Replication log store** | DRVAs and ESXi host(s) must have direct access to this storage over iSCSI. |
++
+## Install JetStream DR
+
+JetStream DR installation is available through the Run command functionality in the Azure VMware Solution portal. You'll complete the installation in three steps.
+++
+>[!NOTE]
+>Run commands are executed one at a time in the order submitted.
+
+### Check the current state of the system
+
+You'll run the `Invoke-PreflightJetDRSystemCheck` cmdlet to check the state of your system and whether the minimal requirements for the script are met. It also checks the vCenter configuration required to execute other cmdlets.
+
+The cmdlet checks:
+
+- PowerShell
+- vCenter FQDN
+- CloudAdmin role
+- VMware modules
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Run command** > **Packages** > **Invoke-PreflightJetDRSystemCheck**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | --- | --- |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **checkDRsystem**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
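+
+The preflight cmdlet runs inside the managed Run command environment, but you can perform loosely analogous spot checks from your jump box before submitting it. The sketch below is illustrative only; the vCenter FQDN is a placeholder, and the checks are approximations of what the cmdlet validates.
+
+```powershell
+# Local spot checks that roughly mirror what the preflight cmdlet validates (assumptions).
+$vcenterFqdn = 'vc.example.avs.azure.com'        # placeholder
+$PSVersionTable.PSVersion                        # PowerShell version
+[System.Net.Dns]::GetHostEntry($vcenterFqdn)     # vCenter FQDN resolves
+Get-Module -ListAvailable VMware.PowerCLI        # VMware modules are present
+```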
+++
+### Check cluster configuration
+
+You'll run the `Invoke-PreflightJetDRInstall` cmdlet to check the following cluster configuration:
+
+- Whether the cluster details are correct
+- Whether the cluster has at least four hosts (the minimum)
+- Whether a VM with the same name provided for installing the MSA already exists
+- Whether a **jetdr** plug-in is already present in vCenter
++
+1. Select **Run command** > **Packages** > **Invoke-PreflightJetDRInstall**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | --- | --- |
+ | **VMName** | Name of MSA VM. For example, **jetstreamServer**. |
+ | **Cluster** | Cluster name where MSA will be deployed. For example, **Cluster-1**. |
+ | **ProtectedCluster** | Cluster to be protected. For example, **Cluster-1**. |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **check_jetserverdetails**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
+
+1. If no errors are reported, you can go to the next step to deploy the JetDR MSA.
++
+### Deploy JetDR MSA
+
+You'll run the `Install-JetDR` cmdlet to deploy JetDR MSA, register vCenter to the JetDR MSA, and configure clusters. The deployment downloads the JetDR bundle from Microsoft Server Media (MMS) and creates a new user with elevated privileges assigned.
+
+1. Select **Run command** > **Packages** > **Install-JetDR**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | --- | --- |
+ | **Network** | Network mapping for the MSA to be deployed. For example, **VM Network**. |
+ | **HostName** | Hostname (FQDN) of the MSA to be deployed. |
+ | **Credential** | Credentials of root user of the MSA to be deployed. |
+ | **Gateway** | Gateway of the MSA to be deployed. Leave blank for DHCP. |
+ | **Dns** | DNS IP that MSA should use. Leave blank for DHCP. |
+ | **MSIp** | IP address of the MSA to be deployed. Leave blank for DHCP. |
+ | **Netmask** | Netmask of the MSA to be deployed. Leave blank for DHCP. |
+ | **Cluster** | Cluster name where MSA will be deployed. For example, **Cluster-1**. |
+ | **VMName** | Name of MSA VM. For example, **jetstreamServer**. |
+ | **Datastore** | Datastore where MSA will be deployed. |
+ | **ProtectedCluster** | Cluster to be protected. For example, **Cluster-1**. |
+ | **RegisterWithIp** | Register MSA with IP address instead of hostname. <ul><li>True if the hostname of the MSA is not registered in DNS.</li><li>False if the hostname of the MSA is registered in DNS.</li></ul> |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **check_jetserverdetails**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
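+
+As a reference for how the fields above fit together, the following sketch shows the equivalent parameter set with placeholder values. Omitting `Gateway`, `Dns`, `MSIp`, and `Netmask` corresponds to leaving those fields blank for DHCP; treat the direct invocation as illustrative, because in the portal you supply these values in the Run command form.
+
+```powershell
+# Placeholder values only; supply these in the Run command form in the portal.
+$params = @{
+    Network          = 'VM Network'
+    HostName         = 'jetstreamserver.contoso.local'
+    Credential       = Get-Credential -Message 'Root credentials for the MSA'
+    Cluster          = 'Cluster-1'
+    VMName           = 'jetstreamServer'
+    Datastore        = 'vsanDatastore'
+    ProtectedCluster = 'Cluster-1'
+    RegisterWithIp   = $false        # hostname of the MSA is registered in DNS
+}
+Install-JetDR @params
+```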
++
+## Uninstall JetStream DR
+
+You'll uninstall JetStream DR in two steps.
++
+### Check current state of the JetStream appliance
+
+You'll run the `Invoke-PreflightJetDRUninstall` cmdlet to diagnose the existing MSA VM and cluster configuration. It checks the current state of the JetStream DR appliance and whether the minimal requirements for the script are met:
+
+- Whether the cluster details are correct
+- Whether the cluster has at least four hosts (the minimum)
+- Whether vCenter is registered to the MSA
+
+1. Select **Run command** > **Packages** > **Invoke-PreflightJetDRUninstall**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | --- | --- |
+ | **MSIp** | IP address of the MSA VM. |
+ | **Credential** | Credentials of the root user of the MSA VM. They must be the same credentials provided at the time of installation. |
+ | **ProtectedCluster** | Name of the protected cluster, for example, **Cluster-1**. It must be the cluster that was provided at the time of installation. |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **uninstallcheck_jetserverdetails**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
++++
+### Uninstall JetDR
+
+You'll run the `Uninstall-JetDR` cmdlet to uninstall JetStream DR and its components. It unconfigures the cluster, unregisters vCenter from the MSA, and then removes the user.
+
+1. Select **Run command** > **Packages** > **Uninstall-JetDR**.
+
+1. Provide the required values or change the default values, and then select **Run**.
+
+ | **Field** | **Value** |
+ | --- | --- |
+ | **MSIp** | IP address of the MSA VM. |
+ | **Credential** | Credentials of the root user of the MSA VM. They must be the same credentials provided at the time of installation. |
+ | **ProtectedCluster** | Name of the protected cluster, for example, **Cluster-1**. It must be the cluster that was provided at the time of installation. |
+ | **Retain up to** | Retention period of the cmdlet output. The default value is 60. |
+ | **Specify name for execution** | Alphanumeric name, for example, **uninstallcheck_jetserverdetails**. |
+ | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
+
+1. Check **Notifications** to see the progress.
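+
+For reference, the fields above correspond to the following parameters. The IP address is a placeholder, and the credential prompt must receive the same root credentials used at installation; in the portal you supply these values in the Run command form.
+
+```powershell
+# Placeholder values; supply the same root credentials used during installation.
+Uninstall-JetDR -MSIp '10.0.0.10' `
+    -Credential (Get-Credential -Message 'MSA root credentials') `
+    -ProtectedCluster 'Cluster-1'
+```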
++
+## Next steps
+
+- [JetStream DR for Azure VMware Solution - Full demo](https://vimeo.com/475620858/2ce9413248)
+
+ - [Getting started with JetStream DR for Azure VMware Solution](https://vimeo.com/491880696/ec509ff8e3)
+
+ - [Configuration and Protecting VMs](https://vimeo.com/491881616/d887590fb2)
+
+ - [Failover to Azure VMware Solution](https://vimeo.com/491883564/ca9fc57092)
+
+ - [Failback to On-premises](https://vimeo.com/491884402/65ee817b60)
+
+- [JetStream DR for Azure VMware Solution Infrastructure Setup](https://vimeo.com/480574312/b5386a871c) (*technical details, no voice track*)
azure-vmware Protect Azure Vmware Solution With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/protect-azure-vmware-solution-with-application-gateway.md
The Application Gateway instance gets deployed on the hub in a dedicated subnet
## Prerequisites -- An Azure account with an active subscription.
+- An Azure account with an active subscription.
- An Azure VMware Solution private cloud deployed and running. ## Deployment and configuration
azure-vmware Rotate Cloudadmin Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/rotate-cloudadmin-credentials.md
Title: Rotate the cloudadmin credentials for Azure VMware Solution description: Learn how to rotate the vCenter Server credentials for your Azure VMware Solution private cloud. Previously updated : 08/25/2021 Last updated : 08/31/2021 #Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter CloudAdmin credentials.
Last updated 08/25/2021
# Rotate the cloudadmin credentials for Azure VMware Solution
-In this article, you'll rotate the cloudadmin credentials (vCenter Server *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time. After generating a new password, you must update VMware HCX Connector with the latest password.
- >[!IMPORTANT]
->Currently, rotating your NSX-T Manager *admin* credentials isn't supported.
+>Currently, rotating your NSX-T Manager *admin* credentials isn't supported. To rotate your NSX-T Manager password, submit a [support request](https://rc.portal.azure.com/#create/Microsoft.Support). This process might impact running HCX services.
+
+In this article, you'll rotate the cloudadmin credentials (vCenter Server *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time.
+>[!CAUTION]
+>If you use your cloudadmin user credentials to connect services to vCenter in your private cloud, those connections will stop working once you rotate your password. Those connections will also lock out the cloudadmin account unless you stop those services before rotating the password.
## Prerequisites
-If you use your cloudadmin credentials for connected services like HCX, vRealize Orchestrator, vRealize Operations Manager, or VMware Horizon, your connections stop working once you update your password. So stop these services before initiating the password rotation. Otherwise, you'll experience temporary locks on your vCenter CloudAdmin account, as these services continuously call your old credentials. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
+Determine which services connect to vCenter as *cloudadmin@vsphere.local* before you rotate the password. These services may include VMware services such as HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other third-party tools used for monitoring or provisioning.
+
+One way to determine which services authenticate to vCenter with the cloudadmin user is to inspect vSphere events using the vSphere Client for your private cloud. After you identify such services, and before rotating the password, you must stop these services. Otherwise, the services won't work after you rotate the password. You'll also experience temporary locks on your vCenter CloudAdmin account, as these services continuously attempt to authenticate using a cached version of the old credentials.
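+
+If you prefer to script this check, the following PowerCLI sketch (an assumption, not official guidance) pulls recent vSphere events and filters for the cloudadmin user name, which can help surface services that still authenticate with that account. The server name is a placeholder.
+
+```powershell
+# Hypothetical PowerCLI sketch: list recent events generated by the cloudadmin user.
+Connect-VIServer -Server 'vc.example.avs.azure.com' -Credential (Get-Credential)
+Get-VIEvent -Start (Get-Date).AddDays(-7) -MaxSamples 10000 |
+    Where-Object { $_.UserName -like '*cloudadmin*' } |
+    Select-Object CreatedTime, UserName, FullFormattedMessage |
+    Sort-Object CreatedTime -Descending
+```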
+
+Instead of using the cloudadmin user to connect services to vCenter, we recommend individual accounts for each service. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
## Reset your vCenter credentials
azure-vmware Tutorial Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-network-checklist.md
The subnets:
| | -- | :: | ::| | | Private Cloud DNS server | On-Premises DNS Server | UDP | 53 | DNS Client - Forward requests from PC vCenter for any on-premises DNS queries (check DNS section below) | | On-premises DNS Server | Private Cloud DNS server | UDP | 53 | DNS Client - Forward requests from on-premises services to Private Cloud DNS servers (check DNS section below) |
-| On-premises network | Private Cloud vCenter server | TCP(HTTP) | 80 | vCenter Server requires port 80 for direct HTTP connections. Port 80 redirects requests to HTTPS port 443. This redirection helps if you use `http://server` instead of `https://server`. <br><br>WS-Management (also requires port 443 to be open) <br><br>If you use a custom Microsoft SQL database and not the bundled SQL Server 2008 database on the vCenter Server, the SQL Reporting Services use port 80. When you install vCenter Server, the installer prompts you to change the HTTP port for the vCenter Server. Change the vCenter Server HTTP port to a custom value to ensure a successful installation. Microsoft Internet Information Services (IIS) also uses port 80. See Conflict Between vCenter Server and IIS for Port 80. |
-| Private Cloud management network | On-premises Active Directory | TCP | 389 | This port must be open on the local and all remote instances of vCenter Server. This port is the LDAP port number for the Directory Services for the vCenter Server group. The vCenter Server system needs to bind to port 389, even if you aren't joining this vCenter Server instance to a Linked Mode group. If another service is running on this port, it might be preferable to remove it or change its port to a different port. You can run the LDAP service on any port from 1025 through 65535. If this instance is serving as the Microsoft Windows Active Directory, change the port number from 389 to an available port from 1025 through 65535. This port is optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter. |
-| On-premises network | Private Cloud vCenter server | TCP(HTTPS) | 443 | This port allows you to access vCenter from an on-premises network. The default port that the vCenter Server system uses to listen for connections from the vSphere Client. To enable the vCenter Server system to receive data from the vSphere Client, open port 443 in the firewall. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients. This port is also used for the following
-| Web Browser | Hybrid Cloud Manager | TCP(HTTPS) | 9443 | Hybrid Cloud Manager Virtual Appliance Management Interface for Hybrid Cloud Manager system configuration. |
+| On-premises network | Private Cloud vCenter server | TCP(HTTP) | 80 | vCenter Server requires port 80 for direct HTTP connections. Port 80 redirects requests to HTTPS port 443. This redirection helps if you use `http://server` instead of `https://server`. |
+| Private Cloud management network | On-premises Active Directory | TCP | 389/636 | These ports are open to allow the Azure VMware Solution vCenter to communicate with any on-premises Active Directory/LDAP server(s). These ports are optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter. Port 636 is recommended for security purposes. |
+| Private Cloud management network | On-premises Active Directory Global Catalog | TCP | 3268/3269 | These ports are open to allow the Azure VMware Solution vCenter to communicate with any on-premises Active Directory/LDAP global catalog server(s). These ports are optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter. Port 3269 is recommended for security purposes. |
+| On-premises network | Private Cloud vCenter server | TCP(HTTPS) | 443 | This port allows you to access vCenter from an on-premises network. The default port that the vCenter Server system uses to listen for connections from the vSphere Client. To enable the vCenter Server system to receive data from the vSphere Client, open port 443 in the firewall. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients. |
+| On-premises network | HCX Manager | TCP(HTTPS) | 9443 | Hybrid Cloud Manager Virtual Appliance Management Interface for Hybrid Cloud Manager system configuration. |
| Admin Network | Hybrid Cloud Manager | SSH | 22 | Administrator SSH access to Hybrid Cloud Manager. |
-| HCM | Cloud Gateway | TCP(HTTPS) | 8123 | Send host-based replication service instructions to the Hybrid Cloud Gateway. |
-| HCM | Cloud Gateway | HTTP TCP(HTTPS) | 9443 | Send management instructions to the local Hybrid Cloud Gateway using the REST API. |
+| HCX Manager | Cloud Gateway | TCP(HTTPS) | 8123 | Send host-based replication service instructions to the Hybrid Cloud Gateway. |
+| HCX Manager | Cloud Gateway | HTTP TCP(HTTPS) | 9443 | Send management instructions to the local Hybrid Cloud Gateway using the REST API. |
| Cloud Gateway | L2C | TCP(HTTPS) | 443 | Send management instructions from Cloud Gateway to L2C when L2C uses the same path as the Hybrid Cloud Gateway. | | Cloud Gateway | ESXi Hosts | TCP | 80,902 | Management and OVF deployment. | | Cloud Gateway (local)| Cloud Gateway (remote) | UDP | 4500 | Required for IPSEC<br> Internet key exchange (IKEv2) to encapsulate workloads for the bidirectional tunnel. Network Address Translation-Traversal (NAT-T) is also supported. |
azure-web-pubsub Tutorial Pub Sub Messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-pub-sub-messages.md
Copy the fetched **ConnectionString** and it will be used later in this tutorial
## Set up the subscriber
-Clients connect to the Azure Web PubSub service through the standard WebSocket protocol using [JSON Web Token (JWT)](https://jwt.io/) authentication. The service SDK provides helper methods to generate the token. In this tutorial, the subscriber directly generates the token from *ConnectionString*. In real applications, we usually use a server-side application to handle the authentication/authorization workflow. Try the [Build a chat app](./tutorial-build-chat.md) tutorial to better understand the workflow.
+Clients connect to the Azure Web PubSub service through the standard WebSocket protocol using [JSON Web Token (JWT)](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims) authentication. The service SDK provides helper methods to generate the token. In this tutorial, the subscriber directly generates the token from *ConnectionString*. In real applications, we usually use a server-side application to handle the authentication/authorization workflow. Try the [Build a chat app](./tutorial-build-chat.md) tutorial to better understand the workflow.
# [C#](#tab/csharp)
Clients connect to the Azure Web PubSub service through the standard WebSocket p
The code above creates a WebSocket connection to connect to a hub in Azure Web PubSub. Hub is a logical unit in Azure Web PubSub where you can publish messages to a group of clients. [Key concepts](./key-concepts.md) contains the detailed explanation about the terms used in Azure Web PubSub.
- Azure Web PubSub service uses [JSON Web Token (JWT)](https://jwt.io/) authentication, so in the code sample we use `WebPubSubServiceClient.GenerateClientAccessUri()` in Web PubSub SDK to generate a url to the service that contains the full URL with a valid access token.
+ Azure Web PubSub service uses [JSON Web Token (JWT)](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims) authentication, so in the code sample we use `WebPubSubServiceClient.GenerateClientAccessUri()` in Web PubSub SDK to generate a url to the service that contains the full URL with a valid access token.
After the connection is established, you'll receive messages through the WebSocket connection. So we use `client.MessageReceived.Subscribe(msg => ...));` to listen to incoming messages.
Clients connect to the Azure Web PubSub service through the standard WebSocket p
The code above creates a WebSocket connection to connect to a hub in Azure Web PubSub. Hub is a logical unit in Azure Web PubSub where you can publish messages to a group of clients. [Key concepts](./key-concepts.md) contains the detailed explanation about the terms used in Azure Web PubSub.
- Azure Web PubSub service uses [JSON Web Token (JWT)](https://jwt.io/) authentication, so in the code sample we use `WebPubSubServiceClient.getAuthenticationToken()` in Web PubSub SDK to generate a url to the service that contains the full URL with a valid access token.
+ Azure Web PubSub service uses [JSON Web Token (JWT)](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims) authentication, so in the code sample we use `WebPubSubServiceClient.getAuthenticationToken()` in Web PubSub SDK to generate a url to the service that contains the full URL with a valid access token.
After connection is established, you'll receive messages through the WebSocket connection. So we use `WebSocket.on('message', ...)` to listen to incoming messages.
Clients connect to the Azure Web PubSub service through the standard WebSocket p
The code above creates a WebSocket connection to connect to a hub in Azure Web PubSub. Hub is a logical unit in Azure Web PubSub where you can publish messages to a group of clients. [Key concepts](./key-concepts.md) contains the detailed explanation about the terms used in Azure Web PubSub.
- Azure Web PubSub service uses [JSON Web Token (JWT)](https://jwt.io/) authentication, so in the code sample we use `build_authentication_token()` in Web PubSub SDK to generate a url to the service that contains the full URL with a valid access token.
+ Azure Web PubSub service uses [JSON Web Token (JWT)](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims) authentication, so in the code sample we use `build_authentication_token()` in Web PubSub SDK to generate a url to the service that contains the full URL with a valid access token.
After connection is established, you'll receive messages through the WebSocket connection. So we use `await ws.recv()` to listen to incoming messages.
Clients connect to the Azure Web PubSub service through the standard WebSocket p
The code above creates a WebSocket connection to connect to a hub in Azure Web PubSub. Hub is a logical unit in Azure Web PubSub where you can publish messages to a group of clients. [Key concepts](./key-concepts.md) contains the detailed explanation about the terms used in Azure Web PubSub.
- Azure Web PubSub service uses [JSON Web Token (JWT)](https://jwt.io/) authentication, so in the code sample we use `WebPubSubServiceClient.getAuthenticationToken(new GetAuthenticationTokenOptions())` in Web PubSub SDK to generate a url to the service that contains the full URL with a valid access token.
+ Azure Web PubSub service uses [JSON Web Token (JWT)](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims) authentication, so in the code sample we use `WebPubSubServiceClient.getAuthenticationToken(new GetAuthenticationTokenOptions())` in Web PubSub SDK to generate a url to the service that contains the full URL with a valid access token.
After connection is established, you'll receive messages through the WebSocket connection. So we use `onMessage(String message)` to listen to incoming messages.
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/archive-tier-support.md
Title: Archive Tier support description: Learn about Archive Tier Support for Azure Backup Previously updated : 08/25/2021 Last updated : 08/31/2021
Stop protection and delete data deletes all the recovery points. For recovery po
| Workloads | Preview | Generally available | | | | |
-| SQL Server in Azure VM | East US, South Central US, North Central US, West Europe, UK South | Australia East, Central India, North Europe, South East Asia, East Asia, Australia South East, Canada Central, Brazil South, Canada East, France Central, France South, Japan East, Japan West, Korea Central, Korea South, South India, UK West, Central US, East US 2, West US, West US 2, West Central US |
+| SQL Server in Azure VM | East US, South Central US, North Central US, West Europe | Australia East, Central India, North Europe, South East Asia, East Asia, Australia South East, Canada Central, Brazil South, Canada East, France Central, France South, Japan East, Japan West, Korea Central, Korea South, South India, UK West, UK South, Central US, East US 2, West US, West US 2, West Central US |
| Azure Virtual Machines | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, North Central US, Brazil South, Canada East, Canada Central, West Europe, UK South, UK West, East Asia, Japan East, South India, South East Asia, Australia East, Central India, North Europe, Australia South East, France Central, France South, Japan West, Korea Central, Korea South | None | ## Error codes and troubleshooting steps
backup Backup Instant Restore Capability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-instant-restore-capability.md
In the Azure portal, you can see a field added in the **VM Backup Policy** pane
> From Az PowerShell version 1.6.0 onwards, you can update the instant restore snapshot retention period in policy using PowerShell ```powershell
-$bkpPol = Get-AzureRmRecoveryServicesBackupProtectionPolicy -WorkloadType "AzureVM"
+$bkpPol = Get-AzRecoveryServicesBackupProtectionPolicy -WorkloadType "AzureVM"
$bkpPol.SnapshotRetentionInDays=5
-Set-AzureRmRecoveryServicesBackupProtectionPolicy -policy $bkpPol
+Set-AzRecoveryServicesBackupProtectionPolicy -policy $bkpPol
``` The default snapshot retention for each policy is set to two days. You can change the value to a minimum of 1 and a maximum of five days. For weekly policies, the snapshot retention is fixed to five days.
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-overview.md
Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtua
* **RDP and SSH directly in Azure portal:** You can get to the RDP and SSH session directly in the Azure portal using a single click seamless experience. * **Remote Session over TLS and firewall traversal for RDP/SSH:** Azure Bastion uses an HTML5 based web client that is automatically streamed to your local device. You get your RDP/SSH session over TLS on port 443, enabling you to traverse corporate firewalls securely. * **No Public IP required on the Azure VM:** Azure Bastion opens the RDP/SSH connection to your Azure virtual machine using private IP on your VM. You don't need a public IP on your virtual machine.
-* **No hassle of managing NSGs:** Azure Bastion is a fully managed platform PaaS service from Azure that is hardened internally to provide you secure RDP/SSH connectivity. You don't need to apply any NSGs to the Azure Bastion subnet. Because Azure Bastion connects to your virtual machines over private IP, you can configure your NSGs to allow RDP/SSH from Azure Bastion only. This removes the hassle of managing NSGs each time you need to securely connect to your virtual machines.
+* **No hassle of managing [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md#security-rules):** Azure Bastion is a fully managed platform PaaS service from Azure that is hardened internally to provide you secure RDP/SSH connectivity. You don't need to apply any NSGs to the Azure Bastion subnet. Because Azure Bastion connects to your virtual machines over private IP, you can configure your NSGs to allow RDP/SSH from Azure Bastion only. This removes the hassle of managing NSGs each time you need to securely connect to your virtual machines. A minimal example rule is sketched after this list.
* **Protection against port scanning:** Because you do not need to expose your virtual machines to the public Internet, your VMs are protected against port scanning by rogue and malicious users located outside your virtual network.
* **Protect against zero-day exploits. Hardening in one place only:** Azure Bastion is a fully platform-managed PaaS service. Because it sits at the perimeter of your virtual network, you don't need to worry about hardening each of the virtual machines in your virtual network. The Azure platform protects against zero-day exploits by keeping the Azure Bastion hardened and always up to date for you.
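+
+The following Az PowerShell sketch shows what such a rule might look like. The NSG name, resource group, and Azure Bastion subnet prefix are placeholders; adjust them to your environment.
+
+```powershell
+# Placeholder names; allow RDP/SSH to the VM subnet only from the AzureBastionSubnet prefix.
+$nsg = Get-AzNetworkSecurityGroup -Name 'vm-subnet-nsg' -ResourceGroupName 'rg-example'
+$nsg | Add-AzNetworkSecurityRuleConfig -Name 'Allow-Bastion-RDP-SSH' -Access Allow `
+    -Direction Inbound -Priority 100 -Protocol Tcp `
+    -SourceAddressPrefix '10.0.1.0/26' -SourcePortRange '*' `
+    -DestinationAddressPrefix '*' -DestinationPortRange ('3389', '22') |
+    Set-AzNetworkSecurityGroup
+```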
batch Batch Aad Auth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-aad-auth.md
Once you've registered your application, follow these steps in the Azure portal
1. Search for the name of your application in the list of app registrations. 1. Select the application and select **API permissions**. 1. In the **API permissions** section, select **Add a permission**.
-1. In **Select an API**, search for the Batch API. Search for each of these strings until you find the API:
- 1. **Microsoft Azure Batch**
- 1. **ddbf3205-c6bd-46ae-8127-60eb93363864** is the ID for the Batch API.
-1. Once you find the Batch API, select it and then choose **Select**.
+1. In **Select an API**, search for "Microsoft Azure Batch" to find the Batch API. **ddbf3205-c6bd-46ae-8127-60eb93363864** is the Application ID for the Batch API.
+1. Select the Batch API, then choose **Select**.
1. In **Select permissions**, select the check box next to **Access Azure Batch Service** and then select **Add permissions**. The **API permissions** section now shows that your Azure AD application has access to both Microsoft Graph and the Batch service API. Permissions are granted to Microsoft Graph automatically when you first register your app with Azure AD.
batch Batch Account Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-account-create-portal.md
Title: Create an account in the Azure portal description: Learn how to create an Azure Batch account in the Azure portal to run large-scale parallel workloads in the cloud. Previously updated : 07/01/2021 Last updated : 08/31/2021
When creating your first Batch account in user subscription mode, you need to re
1. Return to the **Subscription** page, then select **Access control (IAM)**.
-1. Assign the **Contributor** or **Owner** role to the Batch API. You can find this account by searching for **Microsoft Azure Batch** or **MicrosoftAzureBatch**. (The Object ID for the Batch API is **f520d84c-3fd3-4cc8-88d4-2ed25b00d27a**, and the Application ID is **ddbf3205-c6bd-46ae-8127-60eb93363864**.)
+1. Assign the **Contributor** or **Owner** role to the Batch API. You can find this account by searching for **Microsoft Azure Batch**. (The Application ID for this account is **ddbf3205-c6bd-46ae-8127-60eb93363864**.)
For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
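+
+If you prefer to script this role assignment, the following Az PowerShell sketch shows one way to do it. The subscription ID is a placeholder, and the sketch assumes the Az.Resources module and sufficient permissions on the subscription.
+
+```powershell
+# Assign Contributor to the Microsoft Azure Batch service principal at subscription scope.
+$subscriptionId = '00000000-0000-0000-0000-000000000000'   # placeholder
+$batchSp = Get-AzADServicePrincipal -ApplicationId 'ddbf3205-c6bd-46ae-8127-60eb93363864'
+New-AzRoleAssignment -ObjectId $batchSp.Id -RoleDefinitionName 'Contributor' `
+    -Scope "/subscriptions/$subscriptionId"
+```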
batch Batch Cli Sample Create User Subscription Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/scripts/batch-cli-sample-create-user-subscription-account.md
Title: Azure CLI Script Example - Create Batch account - user subscription description: This script creates an Azure Batch account in user subscription mode. This account allocates compute nodes into your subscription. Previously updated : 01/29/2018 Last updated : 08/31/2021 # CLI example: Create a Batch account in user subscription mode
-This script creates an Azure Batch account in user subscription mode. An account that allocates compute nodes into your subscription must be authenticated via an Azure Active Directory token. The compute nodes allocated count toward your subscription's vCPU (core) quota.
+This script creates an Azure Batch account in user subscription mode. An account that allocates compute nodes into your subscription must be authenticated via an Azure Active Directory token. The compute nodes allocated count toward your subscription's vCPU (core) quota.
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
This script creates an Azure Batch account in user subscription mode. An account
## Clean up deployment
-Run the following command to remove the
-resource group and all resources associated with it.
+Run the following command to remove the resource group and all resources associated with it.
```azurecli-interactive az group delete --name myResourceGroup
blockchain Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/service/migration-guide.md
Azure Blockchain Service team pauses the consortium, exports a snapshot of data,
### Download data
+#### Data format v1
+ Download the data using the Microsoft Support provided short-lived SAS URL link. > [!IMPORTANT]
Decrypt the data using the API access key. You can [get the key from the Azure p
> > Do not reset the API access key in between of the migration.
+#### Data format v2
+
+In this version, the SAS token is encrypted instead of the data, resulting in faster snapshot creation. *If* you choose to migrate to ConsenSys Quorum Blockchain Service, importing to Quorum Blockchain Service is also faster.
+
+After the SAS token is decrypted, data can be downloaded as normal. The data itself won't have an additional layer of encryption.
+
+> [!IMPORTANT]
+> Creating a snapshot in data format v2 is about 8-10 times faster, so you have less downtime.
+
+> [!CAUTION]
+> The default transaction node API access key 1 is used to encrypt the SAS token.
+>
+> Do not reset the API access key between or during migration.
+ You can use the data with either ConsenSys Quorum Blockchain service or your IaaS VM-based deployment. For ConsenSys Quorum Blockchain Service migration, contact ConsenSys at [qbsmigration@consensys.net](mailto:qbsmigration@consensys.net).
cloud-services-extended-support Deploy Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-sdk.md
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
```csharp public class CustomLoginCredentials : ServiceClientCredentials
- {
- private string AuthenticationToken { get; set; }
- public override void InitializeServiceClient<T>(ServiceClient<T> client)
- {
- var authenticationContext = new AuthenticationContext("https://login.windows.net/{tenantID}");
- var credential = new ClientCredential(clientId: "{clientID}", clientSecret: "{clientSecret}");
- var result = authenticationContext.AcquireTokenAsync(resource: "https://management.core.windows.net/", clientCredential: credential);
- if (result == null) throw new InvalidOperationException("Failed to obtain the JWT token");
- AuthenticationToken = result.Result.AccessToken;
- }
- public override async Task ProcessHttpRequestAsync(HttpRequestMessage request, CancellationToken cancellationToken)
- {
+ {
+ private string AuthenticationToken { get; set; }
+ public override void InitializeServiceClient<T>(ServiceClient<T> client)
+ {
+ var authenticationContext = new AuthenticationContext("https://login.windows.net/{tenantID}");
+ var credential = new ClientCredential(clientId: "{clientID}", clientSecret: "{clientSecret}");
+ var result = authenticationContext.AcquireTokenAsync(resource: "https://management.core.windows.net/", clientCredential: credential);
+ if (result == null) throw new InvalidOperationException("Failed to obtain the JWT token");
+ AuthenticationToken = result.Result.AccessToken;
+ }
+ public override async Task ProcessHttpRequestAsync(HttpRequestMessage request, CancellationToken cancellationToken)
+ {
if (request == null) throw new ArgumentNullException("request"); if (AuthenticationToken == null) throw new InvalidOperationException("Token Provider Cannot Be Null"); request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", AuthenticationToken);
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
//request.Version = new Version(apiVersion); await base.ProcessHttpRequestAsync(request, cancellationToken); }
+ }
var creds = new CustomLoginCredentials(); m_subId = Environment.GetEnvironmentVariable("AZURE_SUBSCRIPTION_ID");
If you are using a Static IP you need to reference it as a Reserved IP in Servic
## Next steps - Review [frequently asked questions](faq.yml) for Cloud Services (extended support). - Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), a [template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md).-- Visit the [Samples repository for Cloud Services (extended support)](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Samples repository for Cloud Services (extended support)](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-shell Persisting Shell Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/persisting-shell-storage.md
# Persist files in Azure Cloud Shell
-Cloud Shell utilizes Azure File storage to persist files across sessions. On initial start, Cloud Shell prompts you to associate a new or existing file share to persist files across sessions.
+Cloud Shell utilizes Azure Files to persist files across sessions. On initial start, Cloud Shell prompts you to associate a new or existing file share to persist files across sessions.
> [!NOTE] > Bash and PowerShell share the same file share. Only one file share can be associated with automatic mounting in Cloud Shell.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/overview.md
Title: What is Custom Vision?
-description: Learn how to use the Azure Custom Vision service to build custom AI image classifiers and object detectors in the Azure cloud.
+description: Learn how to use the Azure Custom Vision service to build custom AI models to detect objects or classify images.
Previously updated : 05/24/2021 Last updated : 08/25/2021 keywords: image recognition, image identifier, image recognition app, custom vision
keywords: image recognition, image identifier, image recognition app, custom vis
[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
-Azure Custom Vision is an image recognition service that lets you build, deploy, and improve your own image identifiers. An image identifier applies labels (which represent classes or objects) to images, according to their visual characteristics. Unlike the [Computer Vision](../computer-vision/overview.md) service, Custom Vision allows you to specify the labels and train custom models to detect them.
+Azure Custom Vision is an image recognition service that lets you build, deploy, and improve your own image identifier models. An image identifier applies labels (which represent classifications or objects) to images, according to their detected visual characteristics. Unlike the [Computer Vision](../computer-vision/overview.md) service, Custom Vision allows you to specify your own labels and train custom models to detect them.
This documentation contains the following types of articles: * The [quickstarts](./getting-started-build-a-classifier.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
This documentation contains the following types of articles:
## What it does
-The Custom Vision service uses a machine learning algorithm to analyze images. You, the developer, submit groups of images that feature and lack the characteristics in question. You label the images yourself at the time of submission. Then, the algorithm trains to this data and calculates its own accuracy by testing itself on those same images. Once you've trained the algorithm, you can test, retrain, and eventually use it in your image recognition app to classify new images. You can also export the model itself for offline use.
+The Custom Vision service uses a machine learning algorithm to analyze images. You, the developer, submit groups of images that feature and lack the characteristics in question. You label the images yourself at the time of submission. Then, the algorithm trains to this data and calculates its own accuracy by testing itself on those same images. Once you've trained the algorithm, you can test, retrain, and eventually use it in your image recognition app to [classify images](getting-started-build-a-classifier.md). You can also [export the model](export-your-model.md) itself for offline use.
### Classification and object detection
-Custom Vision functionality can be divided into two features. **Image classification** applies one or more labels to an image. **Object detection** is similar, but it also returns the coordinates in the image where the applied label(s) can be found.
+Custom Vision functionality can be divided into two features. **[Image classification](getting-started-build-a-classifier.md)** applies one or more labels to an image. **[Object detection](get-started-build-detector.md)** is similar, but it also returns the coordinates in the image where the applied label(s) can be found.
### Optimization The Custom Vision service is optimized to quickly recognize major differences between images, so you can start prototyping your model with a small amount of data. 50 images per label are generally a good start. However, the service is not optimal for detecting subtle differences in images (for example, detecting minor cracks or dents in quality assurance scenarios).
-Additionally, you can choose from several varieties of the Custom Vision algorithm that are optimized for images with certain subject material&mdash;for example, landmarks or retail items. For more information, see the [Build a classifier](getting-started-build-a-classifier.md) or [Build an object detector](get-started-build-detector.md) guides.
+Additionally, you can choose from several variations of the Custom Vision algorithm that are optimized for images with certain subject material&mdash;for example, landmarks or retail items. See [Select a domain](select-domain.md) for more information.
## What it includes
cognitive-services Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/quickstarts/object-detection.md
Title: "Quickstart: Object detection with Custom Vision client library"
-description: "Quickstart: Create an object detection project, add tags, upload images, train your project, and detect objects using the Custom Vision client library."
+description: "Quickstart: Create an object detection project, add custom tags, upload images, train the model, and detect objects in images using the Custom Vision client library."
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Additionally, you'll want to account for the following restrictions:
* Avoid repeating characters, words, or groups of words more than three times. For example: "aaaa", "yeah yeah yeah yeah", or "that's it that's it that's it that's it". The Speech service might drop lines with too many repetitions. * Don't use special characters or UTF-8 characters above `U+00A1`. * URIs will be rejected.
+* For some languages (for example, Japanese or Korean), importing large amounts of text data can take a long time or time out. Consider dividing the uploaded data into text files of up to 20,000 lines each; a minimal splitting sketch follows this list.
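+
+The following PowerShell sketch shows one way to split a large training text file into 20,000-line chunks before upload; the file paths are placeholders.
+
+```powershell
+# Split a large training text file into chunks of at most 20,000 lines (paths are placeholders).
+$source    = './training-sentences.txt'
+$chunkSize = 20000
+$lines     = Get-Content -Path $source
+for ($i = 0; $i -lt $lines.Count; $i += $chunkSize) {
+    $end  = [Math]::Min($i + $chunkSize, $lines.Count) - 1
+    $part = [int][Math]::Floor($i / $chunkSize) + 1
+    $lines[$i..$end] | Set-Content -Path ('./training-sentences-part{0:D3}.txt' -f $part)
+}
+```
+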
## Pronunciation data for training
Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a>
* [Inspect your data](how-to-custom-speech-inspect-data.md) * [Evaluate your data](how-to-custom-speech-evaluate-data.md) * [Train custom model](how-to-custom-speech-train-model.md)
-* [Deploy model](./how-to-custom-speech-train-model.md)
+* [Deploy model](./how-to-custom-speech-train-model.md)
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/text-to-speech.md
Previously updated : 09/01/2020 Last updated : 08/31/2021 - keywords: text to speech
-# What is text-to-speech?
+# What is neural text-to-speech?
[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
+Microsoft neural text-to-speech uses deep neural networks to make the voices of computers nearly indistinguishable from recordings of people. With human-like natural prosody and clear articulation of words, neural text-to-speech significantly reduces listening fatigue when you interact with AI systems.
+
+The patterns of stress and intonation in spoken language are called _prosody_. Traditional text-to-speech systems break prosody down into separate linguistic analysis and acoustic prediction steps governed by independent models, which can result in muffled, buzzy voice synthesis. Microsoft neural text-to-speech performs prosody prediction and voice synthesis simultaneously, using deep neural networks to overcome the limits of traditional systems in matching the patterns of stress and intonation in spoken language and to synthesize the units of speech into a computer voice. The result is a more fluid and natural-sounding voice.
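To make this concrete, here is a minimal sketch of synthesizing speech with a neural voice through the Speech SDK for Python (`azure-cognitiveservices-speech`); the key, region, and voice name are placeholders and examples, not values taken from this article.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder key and region for your Speech resource.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
# Example neural voice; see the supported languages page for the full list.
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

# With no audio config specified, output goes to the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Neural text-to-speech sounds natural.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis completed.")
elif result.reason == speechsdk.ResultReason.Canceled:
    print("Synthesis canceled:", result.cancellation_details.error_details)
```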
+ In this overview, you learn about the benefits and capabilities of the text-to-speech service, which enables your applications, tools, or devices to convert text into human-like synthesized speech. Use human-like neural voices, or create a custom voice unique to your product or brand. For a full list of supported voices, languages, and locales, see [supported languages](language-support.md#text-to-speech).
This documentation contains the following article types:
* **Tutorials** are longer guides that show you how to use the service as a component in broader business solutions.

> [!NOTE]
+>
> Bing Speech was decommissioned on October 15, 2019. If your applications, tools, or products are using the Bing Speech APIs or Custom Speech, we've created guides to help you migrate to the Speech service.
-> - [Migrate from Bing Speech to the Speech service](how-to-migrate-from-bing-speech.md)
+>
+> * [Migrate from Bing Speech to the Speech service](how-to-migrate-from-bing-speech.md)
## Core features
-* Speech synthesis - Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text-to-speech using standard, neural, or custom voices.
+* Speech synthesis - Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text-to-speech using [platform neural voices](language-support.md#text-to-speech) or [custom neural voices](custom-neural-voice.md).
-* Asynchronous synthesis of long audio - Use the [Long Audio API](long-audio-api.md) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example audio books or lectures). Unlike synthesis performed using the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and that the synthesized audio is downloaded when made available from the service. Only custom neural voices are supported.
+* Asynchronous synthesis of long audio - Use the [Long Audio API](long-audio-api.md) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example audio books or lectures). Unlike synthesis performed using the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and that the synthesized audio is downloaded when made available from the service.
-* Neural voices - Deep neural networks are used to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis are performed simultaneously, which results in more fluid and natural-sounding outputs. Neural voices can be used to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. With the human-like natural prosody and clear articulation of words, neural voices significantly reduce listening fatigue when you interact with AI systems. For a full list of neural voices, see [supported languages](language-support.md#text-to-speech).
+* Platform neural voices - Deep neural networks are used to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis are performed simultaneously, which results in more fluid and natural-sounding outputs. Neural voices can be used to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. With the human-like natural prosody and clear articulation of words, neural voices significantly reduce listening fatigue when you interact with AI systems. For a full list of platform neural voices, see [supported languages](language-support.md#text-to-speech).
-* Fine-tune TTS output with SSML - Speech Synthesis Markup Language (SSML) is an XML-based markup language used to customize text-to-speech outputs. With SSML, you can not only adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document, but also define your own lexicons or switch to different speaking styles. With the multi-lingual voices, you can also adjust the speaking languages via SSML. See [how to use SSML](speech-synthesis-markup.md) to fine-tune the voice output for your scenario.
+* Fine-tune TTS output with SSML - Speech Synthesis Markup Language (SSML) is an XML-based markup language used to customize text-to-speech outputs. With SSML, you can not only adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document, but also define your own lexicons or switch to different speaking styles. With the [multi-lingual voices](https://techcommunity.microsoft.com/t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981), you can also adjust the speaking languages via SSML. See [how to use SSML](speech-synthesis-markup.md) to fine-tune the voice output for your scenario. A short SSML sketch follows this list.
* Visemes - [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw and tongue when producing a particular phoneme. Visemes have a strong correlation with voices and phonemes. Using viseme events in Speech SDK, you can generate facial animation data, which can be used to animate faces in lip-reading communication, education, entertainment, and customer service. Viseme is currently only supported for the `en-US` English (United States) [neural voices](language-support.md#text-to-speech).
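As noted in the SSML bullet above, the sketch below feeds an SSML document to the Speech SDK for Python; the voice name and the prosody rate and pitch values are examples only.

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# Slightly slow the rate and raise the pitch for one sentence (example values only).
ssml = """
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <prosody rate="-10%" pitch="+5%">
      Thanks for calling. How can I help you today?
    </prosody>
  </voice>
</speak>
"""
result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)
```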
See the [quickstart](get-started-text-to-speech.md) to get started with text-to-
Sample code for text-to-speech is available on GitHub. These samples cover text-to-speech conversion in most popular programming languages. -- [Text-to-speech samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)-- [Text-to-speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)
+* [Text-to-speech samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
+* [Text-to-speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)
## Customization
In addition to neural voices, you can create and fine-tune custom voices unique
When using the text-to-speech service, you are billed for each character that is converted to speech, including punctuation. While the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. Here's a list of what's billable: -- Text passed to the text-to-speech service in the SSML body of the request-- All markup within the text field of the request body in the SSML format, except for `<speak>` and `<voice>` tags-- Letters, punctuation, spaces, tabs, markup, and all white-space characters-- Every code point defined in Unicode
+* Text passed to the text-to-speech service in the SSML body of the request
+* All markup within the text field of the request body in the SSML format, except for `<speak>` and `<voice>` tags
+* Letters, punctuation, spaces, tabs, markup, and all white-space characters
+* Every code point defined in Unicode
For detailed information, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).

> [!IMPORTANT]
> Each Chinese, Japanese, and Korean language character is counted as two characters for billing.
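To illustrate how these rules add up, the rough sketch below estimates billable characters for an SSML request; the tag-stripping pattern and the CJK code point ranges are simplifications for illustration, not the service's exact accounting.

```python
import re

def estimate_billable_characters(ssml: str) -> int:
    # Only the <speak> and <voice> tags themselves are excluded from billing;
    # all other markup, text, punctuation, and white space counts.
    stripped = re.sub(r"</?(?:speak|voice)\b[^>]*>", "", ssml)
    total = 0
    for ch in stripped:
        # Chinese, Japanese, and Korean characters count as two characters each.
        # These code point ranges are a rough approximation for illustration.
        if "\u1100" <= ch <= "\u11ff" or "\u3040" <= ch <= "\u9fff" or "\uac00" <= ch <= "\ud7af":
            total += 2
        else:
            total += 1
    return total

ssml = (
    '<speak version="1.0" xml:lang="en-US">'
    '<voice name="en-US-JennyNeural">Hello, <break time="300ms"/> world.</voice>'
    "</speak>"
)
print(estimate_billable_characters(ssml))
```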
+## Migrate to Neural Voice
+
+We are retiring the standard voices on **31 August 2024**, and they will no longer be supported after that date. The announcement was sent to all existing Speech subscriptions before **31 August 2021**. During the retirement period (**31 August 2021** to **31 August 2024**), existing users can continue to use their standard voices; all new users and new speech resources should move to the neural voices.
+
+**Action required**
+
+1. Review the [price](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) structure and listen to the neural voice [samples](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) at the bottom of the page to determine the right voice for your business needs.
+1. To make the change, [follow the sample code](speech-synthesis-markup.md#choose-a-voice-for-text-to-speech) to update the voice name in your speech synthesis request to a supported neural voice name in your chosen language by 31 August 2024. **Starting 1 September 2024**, standard voices will no longer be supported; use neural voices for your speech synthesis requests, in the cloud or on premises. For on-premises containers, use the [neural voice containers](../containers/container-image-tags.md) and follow the [instructions](speech-container-howto.md). A minimal sketch of the voice-name change follows this list.
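As a small illustration of the voice-name change in step 2, the sketch below swaps a standard voice for a neural voice in a Speech SDK request; both voice names are examples, so confirm the neural voices available for your locale in the supported languages list.

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")

# Before: a standard voice name (unsupported after 31 August 2024), for example:
#   speech_config.speech_synthesis_voice_name = "en-US-JessaRUS"
# After: a neural voice name from the supported languages list, for example:
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("This request now uses a neural voice.").get()
```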
+ ## Reference docs -- [Speech SDK](speech-sdk.md)-- [REST API: Text-to-speech](rest-text-to-speech.md)
+* [Speech SDK](speech-sdk.md)
+* [REST API: Text-to-speech](rest-text-to-speech.md)
## Next steps -- [Get a free Speech service subscription](overview.md#try-the-speech-service-for-free)-- [Get the Speech SDK](speech-sdk.md)
+* [Get a free Speech service subscription](overview.md#try-the-speech-service-for-free)
+* [Get the Speech SDK](speech-sdk.md)
communication-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/best-practices.md
Last updated 06/30/2021-+
communication-services Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/call-flows.md
Last updated 06/30/2021-+
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
description: Learn about Communication Services Chat concepts.
+ Last updated 06/30/2021-+
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
Title: Chat SDK overview for Azure Communication Services description: Learn about the Azure Communication Services Chat SDK. ----+++++ Last updated 06/30/2021--++ # Chat SDK overview Azure Communication Services Chat SDKs can be used to add rich, real-time chat to your applications.
-
+ ## Chat SDK capabilities The following list presents the set of features which are currently available in the Communication Services chat SDKs.
communication-services Detailed Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/detailed-call-flows.md
Last updated 06/30/2021-+ - # Call flow topologies
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
Last updated 06/30/2021-+
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/logging-and-diagnostics.md
Last updated 06/30/2021-+ - # Communication Services logs
communication-services Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/metrics.md
Last updated 06/30/2021-+ # Metrics overview
communication-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/notifications.md
Last updated 06/30/2021-+ # Communication Services notifications
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/pricing.md
Last updated 06/30/2021-+ # Pricing Scenarios
Alice makes an outbound call from an Azure Communication Services app to a telep
**Total cost for the call**: $0.04 + $0.04 = $0.08
-> [!Note]
-> Azure Communication Services direct routing leg is not charged until 08/01/2021.
-- ### Pricing example: Group audio call using JS SDK and one PSTN leg Alice and Bob are on a VOIP Call. Bob escalated the call to Charlie on Charlie's PSTN number, a US phone number beginning with `+1-425`.
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/privacy.md
Last updated 06/30/2021-+
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/reference.md
Last updated 06/30/2021-+ # Reference documentation overview
communication-services Teams Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-endpoint.md
Last updated 06/30/2021-+ # Build a custom Teams endpoint
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-interop.md
Last updated 06/30/2021-+
communication-services Certified Session Border Controllers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/certified-session-border-controllers.md
Last updated 06/30/2021-+ # List of Session Border Controllers certified for Azure Communication Services direct routing
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/concepts.md
Last updated 06/30/2021-+
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/direct-routing-infrastructure.md
Last updated 06/30/2021-+
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/direct-routing-provisioning.md
Last updated 06/30/2021-+
communication-services Messaging Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/messaging-policy.md
Last updated 06/30/2021-+
communication-services Plan Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/plan-solution.md
Last updated 06/30/2021-+
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sdk-features.md
description: Provides an overview of the SMS SDK and its offerings.
+ Last updated 06/30/2021-+ # SMS SDK overview
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sms-faq.md
Last updated 06/30/2021-+
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/telephony-concept.md
Last updated 06/30/2021-+
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/troubleshooting-info.md
Last updated 06/30/2021-+ - # Troubleshooting in Azure Communication Services
communication-services Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-library/teams-embed.md
Title: Teams Embed SDK
description: In this document, review the Teams Embed capabilities and how they work in your applications + Last updated 06/30/2021 - # Teams Embed
communication-services About Call Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/about-call-types.md
Last updated 06/30/2021-+ # Voice and video concepts
communication-services Call Automation Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-automation-apis.md
Last updated 06/30/2021-+ # Call Automation overview
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-recording.md
Last updated 06/30/2021-+
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
Last updated 06/30/2021-+ # Calling SDK overview
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/network-requirements.md
Last updated 06/30/2021-+
communication-services Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/access-tokens.md
description: Learn how to manage identities and access tokens using the Azure Co
+ Last updated 06/30/2021
communication-services Meeting Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/meeting-interop.md
Title: Getting started with Teams interop on Azure Communication Services
description: In this quickstart, you'll learn how to join a Teams meeting with the Azure Communication Chat SDK + Last updated 06/30/2021 zone_pivot_groups: acs-web-ios-android- # Quickstart: Join your chat app to a Teams meeting
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/create-communication-resource.md
Last updated 06/30/2021-+ zone_pivot_groups: acs-plat-azp-azcli-net-ps
communication-services Quick Create Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/identity/quick-create-identity.md
description: Learn how to use the Identities & Access Tokens tool in the Azure p
+ Last updated 07/19/2021
communication-services Service Principal From Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/identity/service-principal-from-cli.md
description: In this quick start we'll create an application and service princip
-++ Last updated 06/30/2021
communication-services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/identity/service-principal.md
description: Azure Active Directory lets you authorize Azure Communication Services access from applications running in Azure VMs, function apps, and other resources. + -+ Last updated 06/30/2021
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/manage-teams-identity.md
Last updated 06/30/2021-+ # Quickstart: Set up and manage Teams access tokens
communication-services Getting Started With Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/meeting/getting-started-with-teams-embed.md
Title: Quickstart - Add joining a Teams meeting to your app
description: In this quickstart, you'll learn how to add join Teams meeting capabilities to your app using Azure Communication Services. + Last updated 06/30/2021
communication-services Samples For Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/meeting/samples-for-teams-embed.md
Title: Using the Azure Communication Services Teams Embed Library description: Learn about the Communication Services Teams Embed library capabilities. + Last updated 06/30/2021-+ zone_pivot_groups: acs-plat-ios-android- # Use the Communication Services Teams Embed library
communication-services Handle Sms Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/handle-sms-events.md
Last updated 06/30/2021-+ # Quickstart: Handle SMS events for Delivery Reports and Inbound Messages
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/send.md
Last updated 06/30/2021-+ zone_pivot_groups: acs-js-csharp-java-python
communication-services Call Automation Api Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/call-automation-api-sample.md
Last updated 06/30/2021-+ zone_pivot_groups: acs-csharp-java
communication-services Call Recording Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/call-recording-sample.md
Last updated 06/30/2021-+ zone_pivot_groups: acs-csharp-java
communication-services Calling Client Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/calling-client-samples.md
Last updated 06/30/2021-+ zone_pivot_groups: acs-plat-web-ios-android-windows
communication-services Download Recording File Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/download-recording-file-sample.md
Last updated 06/30/2021-+
communication-services Get Started Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md
Title: Quickstart - Teams interop on Azure Communication Services
description: In this quickstart, you'll learn how to join a Teams meeting with the Azure Communication Calling SDK. + Last updated 06/30/2021
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
Title: Quickstart - Add video calling to your app (JavaScript)
description: In this quickstart, you'll learn how to add video calling capabilities to your app using Azure Communication Services. + Last updated 06/30/2021
communication-services Getting Started With Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/getting-started-with-calling.md
Title: Quickstart - Add voice calling to your app
description: In this quickstart, you'll learn how to add calling capabilities to your app using Azure Communication Services. + Last updated 06/30/2021
communication-services Pstn Call https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/pstn-call.md
Title: Quickstart - Call To Phone
description: In this quickstart, you'll learn how to add PSTN calling capabilities to your app using Azure Communication Services. + Last updated 06/30/2021
communication-services Building App Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/building-app-start.md
description: Learn how to create a baseline web application that supports Azure Communication Services + Last updated 06/30/2021-+
communication-services Hmac Header Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/hmac-header-tutorial.md
Last updated 06/30/2021-+
communication-services Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/postman-tutorial.md
description: Learn how to sign and make requests for ACS with Postman to send an SMS message. + Last updated 06/30/2021-+ # Tutorial: Sign and make requests with Postman
communication-services Trusted Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/trusted-service-tutorial.md
Last updated 06/30/2021-+
connectors Managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/managed.md
In an integration service environment (ISE), these managed connectors also have
[**Azure Event Grid** ISE][azure-event-grid-doc] :::column-end::: :::column:::
- [![Azure File Storage ISE icon][azure-file-storage-icon]][azure-file-storage-doc]
+ [![Azure Files ISE icon][azure-file-storage-icon]][azure-file-storage-doc]
\ \
- [**Azure File Storage** ISE][azure-file-storage-doc]
+ [**Azure Files** ISE][azure-file-storage-doc]
:::column-end::: :::column::: [![Azure Key Vault ISE icon][azure-key-vault-icon]][azure-key-vault-doc]
container-registry Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Container Registry description: Sample Azure Resource Graph queries for Azure Container Registry showing use of resource types and tables to access Azure Container Registry related resources and properties. Previously updated : 08/27/2021 Last updated : 08/31/2021
cosmos-db Database Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/database-security.md
description: Learn how Azure Cosmos DB provides database protection and data sec
Previously updated : 08/20/2021 Last updated : 08/30/2021
Each account consists of two keys: a primary key and secondary key. The purpose
Primary/secondary keys come in two versions: read-write and read-only. The read-only keys only allow read operations on the account, but do not provide access to read permissions resources.
-Primary/secondary keys can be retrieved and regenerated using the Azure portal. For instructions, see [View, copy, and regenerate access keys](sql/manage-with-cli.md#regenerate-account-key).
+### <a id="key-rotation"></a> Key rotation and regeneration
+The process of key rotation and regeneration is simple. First, make sure that **your application is consistently using either the primary key or the secondary key** to access your Azure Cosmos DB account. Then, follow the steps outlined below.
+
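A minimal sketch of that practice and of the rotation itself is shown below, assuming the `azure-identity`, `azure-mgmt-cosmosdb` (track 2), and `azure-cosmos` Python packages; the resource names are placeholders, and the `begin_regenerate_key` and `list_keys` operation names reflect recent versions of the management SDK and may differ in older releases.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient
from azure.mgmt.cosmosdb.models import DatabaseAccountRegenerateKeyParameters
from azure.cosmos import CosmosClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholders throughout
RESOURCE_GROUP = "<resource-group>"
ACCOUNT_NAME = "<cosmos-account>"
ACCOUNT_URL = f"https://{ACCOUNT_NAME}.documents.azure.com:443/"

mgmt = CosmosDBManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# The application consistently uses the primary key, so regenerate the secondary key first.
mgmt.database_accounts.begin_regenerate_key(
    RESOURCE_GROUP, ACCOUNT_NAME, DatabaseAccountRegenerateKeyParameters(key_kind="secondary")
).result()

# Validate that the new secondary key works before pointing the application at it.
keys = mgmt.database_accounts.list_keys(RESOURCE_GROUP, ACCOUNT_NAME)
client = CosmosClient(ACCOUNT_URL, credential=keys.secondary_master_key)
print([db["id"] for db in client.list_databases()])

# After the application has switched to the secondary key, regenerate the primary key.
mgmt.database_accounts.begin_regenerate_key(
    RESOURCE_GROUP, ACCOUNT_NAME, DatabaseAccountRegenerateKeyParameters(key_kind="primary")
).result()
```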
+# [SQL API](#tab/sql-api)
+
+#### If your application is currently using the primary key
+
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+1. Select **Keys** from the left menu, then select **Regenerate Secondary Key** from the ellipsis on the right of your secondary key.
+
+ :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+
+1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+
+1. Replace your primary key with the secondary key in your application.
+
+1. Go back to the Azure portal and trigger the regeneration of the primary key.
+
+ :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+
+#### If your application is currently using the secondary key
+
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+1. Select **Keys** from the left menu, then select **Regenerate Primary Key** from the ellipsis on the right of your primary key.
+
+ :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+
+1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+
+1. Replace your secondary key with the primary key in your application.
+
+1. Go back to the Azure portal and trigger the regeneration of the secondary key.
+
+ :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+
+# [Azure Cosmos DB API for MongoDB](#tab/mongo-api)
+
+#### If your application is currently using the primary key
+
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+1. Select **Connection String** from the left menu, then select **Regenerate Password** from the ellipsis on the right of your secondary password.
+
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-mongo.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+
+1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+
+1. Replace your primary key with the secondary key in your application.
+
+1. Go back to the Azure portal and trigger the regeneration of the primary key.
+
+ :::image type="content" source="./media/database-security/regenerate-primary-key-mongo.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+
+#### If your application is currently using the secondary key
+
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+1. Select **Connection String** from the left menu, then select **Regenerate Password** from the ellipsis on the right of your primary password.
+
+ :::image type="content" source="./media/database-security/regenerate-primary-key-mongo.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+
+1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+
+1. Replace your secondary key with the primary key in your application.
+
+1. Go back to the Azure portal and trigger the regeneration of the secondary key.
+
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-mongo.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+
+# [Cassandra API](#tab/Cassandra-api)
+
+#### If your application is currently using the primary key
+
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+1. Select **Connection String** from the left menu, then select **Regenerate Secondary Read-Write Password** from the ellipsis on the right of your secondary password.
+
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-cassandra.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+
+1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+
+1. Replace your primary key with the secondary key in your application.
+
+1. Go back to the Azure portal and trigger the regeneration of the primary key.
+
+ :::image type="content" source="./media/database-security/regenerate-primary-key-cassandra.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+
+#### If your application is currently using the secondary key
+
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+1. Select **Connection String** from the left menu, then select **Regenerate Primary Read-Write Password** from the ellipsis on the right of your primary password.
+
+ :::image type="content" source="./media/database-security/regenerate-primary-key-cassandra.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+
+1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+
+1. Replace your secondary key with the primary key in your application.
+
+1. Go back to the Azure portal and trigger the regeneration of the secondary key.
+
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-cassandra.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+
+# [Gremlin API](#tab/gremlin-api)
+
+#### If your application is currently using the primary key
+
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+1. Select **Keys** from the left menu, then select **Regenerate Secondary Key** from the ellipsis on the right of your secondary key.
+
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-gremlin.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+
+1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+
+1. Replace your primary key with the secondary key in your application.
+
+1. Go back to the Azure portal and trigger the regeneration of the primary key.
+
+ :::image type="content" source="./media/database-security/regenerate-primary-key-gremlin.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+
+#### If your application is currently using the secondary key
+
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+1. Select **Keys** from the left menu, then select **Regenerate Primary Key** from the ellipsis on the right of your primary key.
+
+ :::image type="content" source="./media/database-security/regenerate-primary-key-gremlin.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+
+1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+
+1. Replace your secondary key with the primary key in your application.
+
+1. Go back to the Azure portal and trigger the regeneration of the secondary key.
+
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-gremlin.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+
+# [Table API](#tab/table-api)
+
+#### If your application is currently using the primary key
+
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+1. Select **Connection String** from the left menu, then select **Regenerate Secondary Key** from the ellipsis on the right of your secondary key.
+
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-table.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+
+1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+
+1. Replace your primary key with the secondary key in your application.
+
+1. Go back to the Azure portal and trigger the regeneration of the primary key.
+
+ :::image type="content" source="./media/database-security/regenerate-primary-key-table.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+
+#### If your application is currently using the secondary key
+
+1. Navigate to your Azure Cosmos DB account on the Azure portal.
+
+1. Select **Connection String** from the left menu, then select **Regenerate Primary Key** from the ellipsis on the right of your primary key.
+
+ :::image type="content" source="./media/database-security/regenerate-primary-key-table.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+
+1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
+
+1. Replace your secondary key with the primary key in your application.
+
+1. Go back to the Azure portal and trigger the regeneration of the secondary key.
+
+ :::image type="content" source="./media/database-security/regenerate-secondary-key-table.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
++ ## Next steps
cosmos-db Create Mongodb Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/create-mongodb-dotnet.md
If you prefer the CLI, run the following command in a command window to start th
dotnet run ```
-After the application is running, navigate to [https://localhost:5001/swagger/https://docsupdatetracker.net/index.html](https://localhost:5001/swagger/https://docsupdatetracker.net/index.html) to see the [swagger documentation](https://swagger.io/) for the web api and to submit sample requests.
+After the application is running, navigate to `https://localhost:5001/swagger/index.html` to see the [Swagger documentation](https://swagger.io/) for the web API and to submit sample requests.
Select the API you would like to test and select "Try it out".
cosmos-db Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Cosmos DB description: Sample Azure Resource Graph queries for Azure Cosmos DB showing use of resource types and tables to access Azure Cosmos DB related resources and properties. Previously updated : 08/27/2021 Last updated : 08/31/2021
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/secure-access-to-data.md
Previously updated : 08/20/2021 Last updated : 08/30/2021
Primary/secondary keys provide access to all the administrative resources for th
### <a id="key-rotation"></a> Key rotation and regeneration
-The process of key rotation and regeneration is simple. First, make sure that your application is consistently using either the primary key or the secondary key to access your Azure Cosmos DB account. Then, follow the steps outlined below.
+> [!NOTE]
+> Follow the instructions described [here](database-security.md#key-rotation) to rotate and regenerate keys on the Azure Cosmos DB API for Mongo DB, Cassandra API, Gremlin API or Table API.
+
+The process of key rotation and regeneration is simple. First, make sure that **your application is consistently using either the primary key or the secondary key** to access your Azure Cosmos DB account. Then, follow the steps outlined below.
# [If your application is currently using the primary key](#tab/using-primary-key)
The process of key rotation and regeneration is simple. First, make sure that yo
1. Select **Keys** from the left menu, then select **Regenerate Secondary Key** from the ellipsis on the right of your secondary key.
- :::image type="content" source="./media/secure-access-to-data/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
1. Validate that the new secondary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that yo
1. Go back to the Azure portal and trigger the regeneration of the primary key.
- :::image type="content" source="./media/secure-access-to-data/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
# [If your application is currently using the secondary key](#tab/using-secondary-key)
The process of key rotation and regeneration is simple. First, make sure that yo
1. Select **Keys** from the left menu, then select **Regenerate Primary Key** from the ellipsis on the right of your primary key.
- :::image type="content" source="./media/secure-access-to-data/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
+ :::image type="content" source="./media/database-security/regenerate-primary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the primary key" border="true":::
1. Validate that the new primary key works consistently against your Azure Cosmos DB account. Key regeneration can take anywhere from one minute to multiple hours depending on the size of the Cosmos DB account.
The process of key rotation and regeneration is simple. First, make sure that yo
1. Go back to the Azure portal and trigger the regeneration of the secondary key.
- :::image type="content" source="./media/secure-access-to-data/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
+ :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true":::
data-factory Connector Amazon Marketplace Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-marketplace-web-service.md
Previously updated : 08/01/2018 Last updated : 08/30/2021 # Copy data from Amazon Marketplace Web Service using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Amazon Marketplace Web Service using UI
+
+Use the following steps to create a linked service to Amazon Marketplace Web Service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Amazon and select the Amazon Marketplace Web Service connector.
+
+ :::image type="content" source="media/connector-amazon-marketplace-web-service/amazon-marketplace-web-service-connector.png" alt-text="Screenshot of the Amazon Marketplace Web Service connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-amazon-marketplace-web-service/configure-amazon-marketplace-web-service-linked-service.png" alt-text="Screenshot of linked service configuration for Amazon Marketplace Web Service.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to Amazon Marketplace Web Service connector. ## Linked service properties
data-factory Connector Amazon Redshift https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-redshift.md
Previously updated : 12/09/2020 Last updated : 08/30/2021 # Copy data from Amazon Redshift using Azure Data Factory
Specifically, this Amazon Redshift connector supports retrieving data from Redsh
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Amazon Redshift using UI
+
+Use the following steps to create a linked service to Amazon Redshift in the Azure portal UI. A programmatic sketch follows these steps.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Amazon and select the Amazon Redshift connector.
+
+ :::image type="content" source="media/connector-amazon-redshift/amazon-redshift-connector.png" alt-text="Select the Amazon Redshift connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-amazon-redshift/configure-amazon-redshift-linked-service.png" alt-text="Configure a linked service to Amazon Redshift.":::
+
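As an alternative to the portal steps above, the same linked service can be defined programmatically. The following is a rough sketch assuming the `azure-identity` and `azure-mgmt-datafactory` Python packages; the `AmazonRedshiftLinkedService` model and its parameter names reflect recent versions of that SDK, and all resource names and connection values are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AmazonRedshiftLinkedService,
    LinkedServiceResource,
    SecureString,
)

# Placeholder subscription, resource group, factory, and connection values.
adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

redshift_linked_service = LinkedServiceResource(
    properties=AmazonRedshiftLinkedService(
        server="<cluster>.<region>.redshift.amazonaws.com",
        database="<database-name>",
        username="<user>",
        password=SecureString(value="<password>"),
    )
)

adf_client.linked_services.create_or_update(
    "<resource-group>", "<data-factory-name>", "AmazonRedshiftLS", redshift_linked_service
)
```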
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to Amazon Redshift connector. ## Linked service properties
data-factory Connector Amazon S3 Compatible Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-s3-compatible-storage.md
Previously updated : 05/11/2021 Last updated : 08/30/2021 # Copy data from Amazon S3 Compatible Storage by using Azure Data Factory
For the full list of Amazon S3 permissions, see [Specifying Permissions in a Pol
## Getting started +
+## Create a linked service to Amazon S3 Compatible Storage using UI
+
+Use the following steps to create a linked service to Amazon S3 Compatible Storage in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Amazon and select the Amazon S3 Compatible Storage connector.
+
+ :::image type="content" source="media/connector-amazon-s3-compatible-storage/amazon-s3-compatible-storage-connector.png" alt-text="Select the Amazon S3 Compatible Storage connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-amazon-s3-compatible-storage/configure-amazon-s3-compatible-storage-linked-service.png" alt-text="Configure a linked service to Amazon S3 Compatible Storage.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Amazon S3 Compatible Storage.
data-factory Connector Amazon Simple Storage Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-simple-storage-service.md
Use the following steps to create an Amazon S3 linked service in the Azure porta
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for Amazon and select the Amazon S3 connector.
- :::image type="content" source="media/connector-amazon-simple-storage-service/amazon-simple-storage-service-connector.png" alt-text="Select the Amazon S3 connector.":::
+ :::image type="content" source="media/connector-amazon-simple-storage-service/amazon-simple-storage-service-connector.png" alt-text="Screenshot of the Amazon S3 connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-amazon-simple-storage-service/configure-amazon-simple-storage-service-linked-service.png" alt-text="Configure an Amazon S3 linked service.":::
+ :::image type="content" source="media/connector-amazon-simple-storage-service/configure-amazon-simple-storage-service-linked-service.png" alt-text="Screenshot of configuration for an Amazon S3 linked service.":::
## Connector configuration details
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-blob-storage.md
Use the following steps to create an Azure Blob Storage linked service in the Az
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for blob and select the Azure Blob Storage connector.
Use the following steps to create an Azure Blob Storage linked service in the Az
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-blob-storage/configure-azure-blob-storage-linked-service.png" alt-text="Configure Azure Blob Storage linked service.":::
+ :::image type="content" source="media/connector-azure-blob-storage/configure-azure-blob-storage-linked-service.png" alt-text="Screenshot of configuration for Azure Blob Storage linked service.":::
## Connector configuration details
data-factory Connector Azure Cosmos Db Mongodb Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db-mongodb-api.md
Previously updated : 08/20/2021 Last updated : 08/30/2021 # Copy data to or from Azure Cosmos DB's API for MongoDB by using Azure Data Factory
You can use the Azure Cosmos DB's API for MongoDB connector to:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Azure Cosmos DB's API for MongoDB using UI
+
+Use the following steps to create a linked service to Azure Cosmos DB's API for MongoDB in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Cosmos and select the Azure Cosmos DB's API for MongoDB connector.
+
+ :::image type="content" source="media/connector-azure-cosmos-db-mongodb-api/azure-cosmos-db-mongodb-api-connector.png" alt-text="Select the Azure Cosmos DB's API for MongoDB connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-azure-cosmos-db-mongodb-api/configure-azure-cosmos-db-mongodb-api-linked-service.png" alt-text="Configure a linked service to Azure Cosmos DB's API for MongoDB.":::
+
+## Connector configuration details
+ The following sections provide details about properties you can use to define Data Factory entities that are specific to Azure Cosmos DB's API for MongoDB. ## Linked service properties
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db.md
Use the following steps to create a linked service to Azure Cosmos DB in the Azu
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for Cosmos and select the Azure Cosmos DB (SQL API) connector.
Use the following steps to create a linked service to Azure Cosmos DB in the Azu
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-cosmos-db/configure-azure-cosmos-db-linked-service.png" alt-text="Configure a linked service to Azure Cosmos DB.":::
+ :::image type="content" source="media/connector-azure-cosmos-db/configure-azure-cosmos-db-linked-service.png" alt-text="Screenshot of linked service configuration for Azure Cosmos DB.":::
## Connector configuration details
data-factory Connector Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-explorer.md
Use the following steps to create a linked service to Azure Data Explorer in the
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for Explorer and select the Azure Data Explorer (Kusto) connector.
- :::image type="content" source="media/connector-azure-data-explorer/azure-data-explorer-connector.png" alt-text="Select the Azure Data Explorer (Kusto) connector.":::
+ :::image type="content" source="media/connector-azure-data-explorer/azure-data-explorer-connector.png" alt-text="Screenshot of the Azure Data Explorer (Kusto) connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-data-explorer/configure-azure-data-explorer-linked-service.png" alt-text="Configure a linked service to Azure Data Explorer.":::
+ :::image type="content" source="media/connector-azure-data-explorer/configure-azure-data-explorer-linked-service.png" alt-text="Screenshot of linked service configuration for Azure Data Explorer.":::
## Connector configuration details
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-storage.md
Use the following steps to create an Azure Data Lake Storage Gen2 linked service
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for Data Lake and select the Azure Data Lake Storage Gen2 connector.
Use the following steps to create an Azure Data Lake Storage Gen2 linked service
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-data-lake-storage/configure-data-lake-storage-linked-service.png" alt-text="Configure Azure Data Lake Storage Gen2 linked service.":::
+ :::image type="content" source="media/connector-azure-data-lake-storage/configure-data-lake-storage-linked-service.png" alt-text="Screenshot of configuration for Azure Data Lake Storage Gen2 linked service.":::
## Connector configuration details
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-store.md
Use the following steps to create a linked service to Azure Data Lake Storage Ge
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for and select the Azure Data Lake Storage Gen1 connector.
- :::image type="content" source="media/connector-azure-data-lake-store/azure-data-lake-store-connector.png" alt-text="Select the Azure Data Lake Storage Gen1 connector.":::
+ :::image type="content" source="media/connector-azure-data-lake-store/azure-data-lake-store-connector.png" alt-text="Screenshot of the Azure Data Lake Storage Gen1 connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-data-lake-store/configure-azure-data-lake-store-linked-service.png" alt-text="Configure a linked service to Azure Data Lake Storage Gen1.":::
+ :::image type="content" source="media/connector-azure-data-lake-store/configure-azure-data-lake-store-linked-service.png" alt-text="Screenshot of linked service configuration for Azure Data Lake Storage Gen1.":::
## Connector configuration details
data-factory Connector Azure Database For Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-mariadb.md
Previously updated : 09/04/2019 Last updated : 08/30/2021 # Copy data from Azure Database for MariaDB using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Azure Database for MariaDB using UI
+
+Use the following steps to create a linked service to Azure Database for MariaDB in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Maria and select the Azure Database for MariaDB connector.
+
+ :::image type="content" source="media/connector-azure-database-for-mariadb/azure-database-for-mariadb-connector.png" alt-text="Screenshot of the Azure Database for MariaDB connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-azure-database-for-mariadb/configure-azure-database-for-mariadb-linked-service.png" alt-text="Screenshot of linked service configuration for Azure Database for MariaDB.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to Azure Database for MariaDB connector. ## Linked service properties
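As a rough illustration of what this UI flow produces, an Azure Database for MariaDB linked service in JSON might look like the sketch below. The connection string is a placeholder; the exact format depends on your server and SSL settings.

```json
{
    "name": "AzureMariaDBLinkedService",
    "properties": {
        "type": "AzureMariaDB",
        "typeProperties": {
            "connectionString": "Server=<server>.mariadb.database.azure.com;Port=3306;Database=<database>;UID=<user>;PWD=<password>"
        }
    }
}
```

For production use, consider storing the connection string or account key in Azure Key Vault instead of inline.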
data-factory Connector Azure Database For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-mysql.md
Previously updated : 03/10/2021 Last updated : 08/30/2021 # Copy and transform data in Azure Database for MySQL by using Azure Data Factory
This Azure Database for MySQL connector is supported for the following activitie
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Azure Database for MySQL using UI
+
+Use the following steps to create a linked service to Azure Database for MySQL in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for MySQL and select the Azure Database for MySQL connector.
+
+ :::image type="content" source="media/connector-azure-database-for-mysql/azure-database-for-mysql-connector.png" alt-text="Select the Azure Database for MySQL connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-azure-database-for-mysql/configure-azure-database-for-mysql-linked-service.png" alt-text="Configure a linked service to Azure Database for MySQL.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to Azure Database for MySQL connector. ## Linked service properties
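For reference, a hedged sketch of an Azure Database for MySQL linked service in JSON; all values are placeholders.

```json
{
    "name": "AzureMySqlLinkedService",
    "properties": {
        "type": "AzureMySql",
        "typeProperties": {
            "connectionString": "Server=<server>.mysql.database.azure.com;Port=3306;Database=<database>;UID=<user>;PWD=<password>"
        }
    }
}
```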
data-factory Connector Azure Database For Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-postgresql.md
Previously updated : 06/16/2021 Last updated : 08/30/2021 # Copy and transform data in Azure Database for PostgreSQL by using Azure Data Factory
Currently, data flow in Azure Data Factory supports Azure database for PostgreSQ
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Azure Database for PostgreSQL using UI
+
+Use the following steps to create a linked service to Azure Database for PostgreSQL in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for PostgreSQL and select the Azure Database for PostgreSQL connector.
+
+ :::image type="content" source="media/connector-azure-database-for-postgresql/azure-database-for-postgresql-connector.png" alt-text="Select the Azure Database for PostgreSQL connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-azure-database-for-postgresql/configure-azure-database-for-postgresql-linked-service.png" alt-text="Configure a linked service to Azure Database for PostgreSQL.":::
+
+## Connector configuration details
+ The following sections offer details about properties that are used to define Data Factory entities specific to Azure Database for PostgreSQL connector. ## Linked service properties
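A minimal sketch of an Azure Database for PostgreSQL linked service in JSON, with placeholder values only.

```json
{
    "name": "AzurePostgreSqlLinkedService",
    "properties": {
        "type": "AzurePostgreSql",
        "typeProperties": {
            "connectionString": "host=<server>.postgres.database.azure.com;port=5432;database=<database>;uid=<user>;password=<password>"
        }
    }
}
```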
data-factory Connector Azure Databricks Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-databricks-delta-lake.md
Use the following steps to create a linked service to Azure Databricks Delta Lak
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for delta and select the Azure Databricks Delta Lake connector.
- :::image type="content" source="media/connector-azure-databricks-delta-lake/azure-databricks-delta-lake-connector.png" alt-text="Select the Azure Databricks Delta Lake connector.":::
+ :::image type="content" source="media/connector-azure-databricks-delta-lake/azure-databricks-delta-lake-connector.png" alt-text="Screenshot of the Azure Databricks Delta Lake connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-databricks-delta-lake/configure-azure-databricks-delta-lake-linked-service.png" alt-text="Configure an Azure Databricks Delta Lake linked service.":::
+ :::image type="content" source="media/connector-azure-databricks-delta-lake/configure-azure-databricks-delta-lake-linked-service.png" alt-text="Screenshot of configuration for an Azure Databricks Delta Lake linked service.":::
## Connector configuration details
data-factory Connector Azure File Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-file-storage.md
Title: Copy data from/to Azure File Storage
+ Title: Copy data from/to Azure Files
-description: Learn how to copy data from Azure File Storage to supported sink data stores (or) from supported source data stores to Azure File Storage by using Azure Data Factory.
+description: Learn how to copy data from Azure Files to supported sink data stores (or) from supported source data stores to Azure Files by using Azure Data Factory.
Last updated 03/17/2021
-# Copy data from or to Azure File Storage by using Azure Data Factory
+# Copy data from or to Azure Files by using Azure Data Factory
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to copy data to and from Azure File Storage. To learn about Azure Data Factory, read the [introductory article](introduction.md).
+This article outlines how to copy data to and from Azure Files. To learn about Azure Data Factory, read the [introductory article](introduction.md).
## Supported capabilities
-This Azure File Storage connector is supported for the following activities:
+This Azure Files connector is supported for the following activities:
- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md) - [Lookup activity](control-flow-lookup-activity.md) - [GetMetadata activity](control-flow-get-metadata-activity.md) - [Delete activity](delete-activity.md)
-You can copy data from Azure File Storage to any supported sink data store, or copy data from any supported source data store to Azure File Storage. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+You can copy data from Azure Files to any supported sink data store, or copy data from any supported source data store to Azure Files. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
-Specifically, this Azure File Storage connector supports:
+Specifically, this Azure Files connector supports:
- Copying files by using account key or service shared access signature (SAS) authentications. - Copying files as-is or parsing/generating files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md).
Specifically, this Azure File Storage connector supports:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
-## Create a linked service to Azure File Storage using UI
+## Create a linked service to Azure Files using UI
-Use the following steps to create a linked service to Azure File Storage in the Azure portal UI.
+Use the following steps to create a linked service to Azure Files in the Azure portal UI.
1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New: # [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
--
-2. Search for file and select the Azure File Storage connector.
+2. Search for file and select the connector for Azure Files labeled *Azure File Storage*.
- :::image type="content" source="media/connector-azure-file-storage/azure-file-storage-connector.png" alt-text="Select the Azure File Storage connector.":::
+ :::image type="content" source="media/connector-azure-file-storage/azure-file-storage-connector.png" alt-text="Screenshot of the Azure File Storage connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-file-storage/configure-azure-file-storage-linked-service.png" alt-text="Configure a linked service to an Azure File Storage.":::
+ :::image type="content" source="media/connector-azure-file-storage/configure-azure-file-storage-linked-service.png" alt-text="Screenshot of linked service configuration for Azure File Storage.":::
## Connector configuration details
-The following sections provide details about properties that are used to define entities specific to Azure File Storage.
+The following sections provide details about properties that are used to define entities specific to Azure Files.
## Linked service properties
-This Azure File Storage connector supports the following authentication types. See the corresponding sections for details.
+The Azure Files connector supports the following authentication types. See the corresponding sections for details.
- [Account key authentication](#account-key-authentication) - [Shared access signature authentication](#shared-access-signature-authentication) >[!NOTE]
-> If you were using Azure File Storage linked service with [legacy model](#legacy-model), where on ADF authoring UI shown as "Basic authentication", it is still supported as-is, while you are suggested to use the new model going forward. The legacy model transfers data from/to storage over Server Message Block (SMB), while the new model utilizes the storage SDK which has better throughput. To upgrade, you can edit your linked service to switch the authentication method to "Account key" or "SAS URI"; no change needed on dataset or copy activity.
+> If you are using an Azure Files linked service with the [legacy model](#legacy-model), shown as "Basic authentication" in the ADF authoring UI, it is still supported as-is, but we recommend that you use the new model going forward. The legacy model transfers data to and from storage over Server Message Block (SMB), while the new model uses the storage SDK, which offers better throughput. To upgrade, edit your linked service and switch the authentication method to "Account key" or "SAS URI"; no changes are needed on the dataset or copy activity.
### Account key authentication
-Data Factory supports the following properties for Azure File Storage account key authentication:
+Data Factory supports the following properties for Azure Files account key authentication:
| Property | Description | Required | |: |: |: | | type | The type property must be set to: **AzureFileStorage**. | Yes |
-| connectionString | Specify the information needed to connect to Azure File Storage. <br/> You can also put the account key in Azure Key Vault and pull the `accountKey` configuration out of the connection string. For more information, see the following samples and the [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article. |Yes |
+| connectionString | Specify the information needed to connect to Azure Files. <br/> You can also put the account key in Azure Key Vault and pull the `accountKey` configuration out of the connection string. For more information, see the following samples and the [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article. |Yes |
| fileShare | Specify the file share. | Yes | | snapshot | Specify the date of the [file share snapshot](../storage/files/storage-snapshots-files.md) if you want to copy from a snapshot. | No | | connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or Self-hosted Integration Runtime (if your data store is located in private network). If not specified, it uses the default Azure Integration Runtime. |No |
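Putting the account key properties above together, a linked service definition might look like the following sketch; the account name, key, share name, and integration runtime name are placeholders.

```json
{
    "name": "AzureFileStorageLinkedService",
    "properties": {
        "type": "AzureFileStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>;EndpointSuffix=core.windows.net",
            "fileShare": "<file share name>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```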
Data Factory supports the following properties for using shared access signature
| Property | Description | Required | |: |: |: | | type | The type property must be set to: **AzureFileStorage**. | Yes |
-| host | Specifies the Azure File Storage endpoint as: <br/>-Using UI: specify `\\<storage name>.file.core.windows.net\<file service name>`<br/>- Using JSON: `"host": "\\\\<storage name>.file.core.windows.net\\<file service name>"`. | Yes |
-| userid | Specify the user to access the Azure File Storage as: <br/>-Using UI: specify `AZURE\<storage name>`<br/>-Using JSON: `"userid": "AZURE\\<storage name>"`. | Yes |
+| host | Specifies the Azure Files endpoint as: <br/>-Using UI: specify `\\<storage name>.file.core.windows.net\<file service name>`<br/>- Using JSON: `"host": "\\\\<storage name>.file.core.windows.net\\<file service name>"`. | Yes |
+| userid | Specify the user to access Azure Files as: <br/>-Using UI: specify `AZURE\<storage name>`<br/>-Using JSON: `"userid": "AZURE\\<storage name>"`. | Yes |
| password | Specify the storage access key. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes | | connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or Self-hosted Integration Runtime (if your data store is located in private network). If not specified, it uses the default Azure Integration Runtime. |No for source, Yes for sink |
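A corresponding sketch using the `host`, `userid`, and `password` properties from this table; all values are placeholders.

```json
{
    "name": "AzureFileStorageLegacyLinkedService",
    "properties": {
        "type": "AzureFileStorage",
        "typeProperties": {
            "host": "\\\\<storage name>.file.core.windows.net\\<file service name>",
            "userid": "AZURE\\<storage name>",
            "password": {
                "type": "SecureString",
                "value": "<storage access key>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```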
For a full list of sections and properties available for defining datasets, see
[!INCLUDE [data-factory-v2-file-formats](includes/data-factory-v2-file-formats.md)]
-The following properties are supported for Azure File Storage under `location` settings in format-based dataset:
+The following properties are supported for Azure Files under `location` settings in format-based dataset:
| Property | Description | Required | | - | | -- |
The following properties are supported for Azure File Storage under `location` s
## Copy activity properties
-For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by Azure File Storage source and sink.
+For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by Azure Files source and sink.
-### Azure File Storage as source
+### Azure Files as source
[!INCLUDE [data-factory-v2-file-formats](includes/data-factory-v2-file-formats.md)]
-The following properties are supported for Azure File Storage under `storeSettings` settings in format-based copy source:
+The following properties are supported for Azure Files under `storeSettings` settings in format-based copy source:
| Property | Description | Required | | | | | | type | The type property under `storeSettings` must be set to **AzureFileStorageReadSettings**. | Yes | | ***Locate the files to copy:*** | | | | OPTION 1: static path<br> | Copy from the given folder/file path specified in the dataset. If you want to copy all files from a folder, additionally specify `wildcardFileName` as `*`. | |
-| OPTION 2: file prefix<br>- prefix | Prefix for the file name under the given file share configured in a dataset to filter source files. Files with name starting with `fileshare_in_linked_service/this_prefix` are selected. It utilizes the service-side filter for Azure File Storage, which provides better performance than a wildcard filter. This feature is not supported when using a [legacy linked service model](#legacy-model). | No |
+| OPTION 2: file prefix<br>- prefix | Prefix for the file name under the given file share configured in a dataset to filter source files. Files whose names start with `fileshare_in_linked_service/this_prefix` are selected. This option uses the service-side filter for Azure Files, which provides better performance than a wildcard filter. It is not supported when using a [legacy linked service model](#legacy-model). | No |
| OPTION 3: wildcard<br>- wildcardFolderPath | The folder path with wildcard characters to filter source folders. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual folder name has wildcard or this escape char inside. <br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | No | | OPTION 3: wildcard<br>- wildcardFileName | The file name with wildcard characters under the given folderPath/wildcardFolderPath to filter source files. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual file name has wildcard or this escape char inside. See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes | | OPTION 4: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When using this option, do not specify file name in dataset. See more examples in [File list examples](#file-list-examples). |No |
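As an illustration, the source side of a copy activity using the wildcard option might look like the following fragment. The delimited-text format type, folder, and file patterns are assumed placeholders, not values from this change.

```json
"source": {
    "type": "DelimitedTextSource",
    "storeSettings": {
        "type": "AzureFileStorageReadSettings",
        "recursive": true,
        "wildcardFolderPath": "<folder*>",
        "wildcardFileName": "*.csv"
    },
    "formatSettings": {
        "type": "DelimitedTextReadSettings"
    }
}
```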
The following properties are supported for Azure File Storage under `storeSettin
] ```
-### Azure File Storage as sink
+### Azure Files as sink
[!INCLUDE [data-factory-v2-file-sink-formats](includes/data-factory-v2-file-sink-formats.md)]
-The following properties are supported for Azure File Storage under `storeSettings` settings in format-based copy sink:
+The following properties are supported for Azure Files under `storeSettings` settings in format-based copy sink:
| Property | Description | Required | | | | -- |
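A matching sink fragment as a hedged sketch: the `AzureFileStorageWriteSettings` type name follows the naming pattern of the read settings above, and the copy behavior value is only an example.

```json
"sink": {
    "type": "DelimitedTextSink",
    "storeSettings": {
        "type": "AzureFileStorageWriteSettings",
        "copyBehavior": "PreserveHierarchy"
    }
}
```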
data-factory Connector Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-search.md
Previously updated : 03/17/2021 Last updated : 08/30/2021 # Copy data to an Azure Cognitive Search index using Azure Data Factory
You can copy data from any supported source data store into search index. For a
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Azure Search using UI
+
+Use the following steps to create a linked service to Azure Search in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Search and select the Azure Search connector.
+
+ :::image type="content" source="media/connector-azure-search/azure-search-connector.png" alt-text="Select the Azure Search connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-azure-search/configure-azure-search-linked-service.png" alt-text="Configure a linked service to Azure Search.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to Azure Cognitive Search connector. ## Linked service properties
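For reference, a minimal sketch of an Azure Cognitive Search linked service in JSON; the service URL and admin key are placeholders.

```json
{
    "name": "AzureSearchLinkedService",
    "properties": {
        "type": "AzureSearch",
        "typeProperties": {
            "url": "https://<service>.search.windows.net",
            "key": {
                "type": "SecureString",
                "value": "<admin key>"
            }
        }
    }
}
```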
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-data-warehouse.md
Use the following steps to create an Azure Synapse Analytics linked service in t
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for Synapse and select the Azure Synapse Analytics connector.
- :::image type="content" source="media/connector-azure-sql-data-warehouse/azure-sql-data-warehouse-connector.png" alt-text="Select the Azure Synapse Analytics connector.":::
+ :::image type="content" source="media/connector-azure-sql-data-warehouse/azure-sql-data-warehouse-connector.png" alt-text="Screenshot of the Azure Synapse Analytics connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-sql-data-warehouse/configure-azure-sql-data-warehouse-linked-service.png" alt-text="Configure an Azure Synapse Analytics linked service.":::
+ :::image type="content" source="media/connector-azure-sql-data-warehouse/configure-azure-sql-data-warehouse-linked-service.png" alt-text="Screenshot of configuration for an Azure Synapse Analytics linked service.":::
## Connector configuration details
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-database.md
Use the following steps to create an Azure SQL Database linked service in the Az
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
-
+
2. Search for SQL and select the Azure SQL Database connector.
Use the following steps to create an Azure SQL Database linked service in the Az
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-sql-database/configure-azure-sql-database-linked-service.png" alt-text="Configure Azure SQL Database linked service.":::
+ :::image type="content" source="media/connector-azure-sql-database/configure-azure-sql-database-linked-service.png" alt-text="Screenshot of configuration for Azure SQL Database linked service.":::
## Connector configuration details
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-managed-instance.md
Use the following steps to create a linked service to an SQL Managed instance in
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for SQL and select the Azure SQL Server Managed Instance connector.
- :::image type="content" source="media/connector-azure-sql-managed-instance/azure-sql-managed-instance-connector.png" alt-text="Select the Azure SQL Server Managed Instance connector.":::
+ :::image type="content" source="media/connector-azure-sql-managed-instance/azure-sql-managed-instance-connector.png" alt-text="Screenshot of the Azure SQL Server Managed Instance connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-sql-managed-instance/configure-azure-sql-managed-instance-linked-service.png" alt-text="Configure a linked service to a SQL Managed instance.":::
+ :::image type="content" source="media/connector-azure-sql-managed-instance/configure-azure-sql-managed-instance-linked-service.png" alt-text="Screenshot of linked service configuration for a SQL Managed instance.":::
## Connector configuration details
data-factory Connector Azure Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-table-storage.md
Use the following steps to create an Azure Table storage linked service in the A
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for Azure Table and select the Azure Table storage connector.
- :::image type="content" source="media/connector-azure-table-storage/azure-table-storage-connector.png" alt-text="Select the Azure Table storage connector.":::
+ :::image type="content" source="media/connector-azure-table-storage/azure-table-storage-connector.png" alt-text="Screenshot of the Azure Table storage connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-table-storage/configure-azure-table-storage-linked-service.png" alt-text="Configure an Azure Table storage linked service.":::
+ :::image type="content" source="media/connector-azure-table-storage/configure-azure-table-storage-linked-service.png" alt-text="Screenshot of configuration for an Azure Table storage linked service.":::
## Connector configuration details
data-factory Connector Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-cassandra.md
Previously updated : 08/12/2019 Last updated : 08/30/2021 # Copy data from Cassandra using Azure Data Factory
The Integration Runtime provides a built-in Cassandra driver, therefore you don'
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Cassandra using UI
+
+Use the following steps to create a linked service to Cassandra in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Cassandra and select the Cassandra connector.
+
+ :::image type="content" source="media/connector-cassandra/cassandra-connector.png" alt-text="Screenshot of the Cassandra connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-cassandra/configure-cassandra-linked-service.png" alt-text="Screenshot of linked service configuration for Cassandra.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to Cassandra connector. ## Linked service properties
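A hedged sketch of a Cassandra linked service in JSON; the host, port, and credentials are placeholders, and the integration runtime reference assumes a self-hosted runtime when the cluster sits in a private network.

```json
{
    "name": "CassandraLinkedService",
    "properties": {
        "type": "Cassandra",
        "typeProperties": {
            "host": "<host>",
            "port": 9042,
            "authenticationType": "Basic",
            "username": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```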
data-factory Connector Concur https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-concur.md
Previously updated : 11/25/2020 Last updated : 08/30/2021 # Copy data from Concur using Azure Data Factory (Preview)
You can copy data from Concur to any supported sink data store. For a list of da
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Concur using UI
+
+Use the following steps to create a linked service to Concur in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Concur and select the Concur connector.
+
+ :::image type="content" source="media/connector-concur/concur-connector.png" alt-text="Screenshot of the Concur connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-concur/configure-concur-linked-service.png" alt-text="Screenshot of linked service configuration for Concur.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to Concur connector. ## Linked service properties
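As a rough sketch only, a Concur linked service in JSON might resemble the following; the property set shown (client ID, username, password) is an assumption based on the connector's credential-based sign-in and should be checked against the article's property table.

```json
{
    "name": "ConcurLinkedService",
    "properties": {
        "type": "Concur",
        "typeProperties": {
            "clientId": "<client id>",
            "username": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        }
    }
}
```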
data-factory Connector Couchbase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-couchbase.md
Previously updated : 08/12/2019 Last updated : 08/30/2021 # Copy data from Couchbase using Azure Data Factory (Preview)
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Couchbase using UI
+
+Use the following steps to create a linked service to Couchbase in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Couchbase and select the Couchbase connector.
+
+ :::image type="content" source="media/connector-couchbase/couchbase-connector.png" alt-text="Screenshot of the Couchbase connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-couchbase/configure-couchbase-linked-service.png" alt-text="Screenshot of linked service configuration for Couchbase.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to Couchbase connector. ## Linked service properties
data-factory Connector Db2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-db2.md
Use the following steps to create a linked service to DB2 in the Azure portal UI
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for DB2 and select the DB2 connector.
- :::image type="content" source="media/connector-db2/db2-connector.png" alt-text="Select the DB2 connector.":::
+ :::image type="content" source="media/connector-db2/db2-connector.png" alt-text="Screenshot of the DB2 connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-db2/configure-db2-linked-service.png" alt-text="Configure a linked service to DB2.":::
+ :::image type="content" source="media/connector-db2/configure-db2-linked-service.png" alt-text="Screenshot of linked service configuration for DB2.":::
## Connector configuration details
data-factory Connector Drill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-drill.md
Previously updated : 10/25/2019 Last updated : 08/30/2021 # Copy data from Drill using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Drill using UI
+
+Use the following steps to create a linked service to Drill in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Drill and select the Drill connector.
+
+ :::image type="content" source="media/connector-drill/drill-connector.png" alt-text="Screenshot of the Drill connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-drill/configure-drill-linked-service.png" alt-text="Screenshot of linked service configuration for Drill.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to Drill connector. ## Linked service properties
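A hedged sketch of a Drill linked service in JSON; the connection string format, port, and credentials shown here are assumed placeholders and should be verified against the article.

```json
{
    "name": "DrillLinkedService",
    "properties": {
        "type": "Drill",
        "typeProperties": {
            "connectionString": "ConnectionType=Direct;Host=<host>;Port=31010;AuthenticationType=Plain;UID=<user>;PWD=<password>"
        }
    }
}
```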
data-factory Connector Dynamics Ax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-ax.md
Previously updated : 06/12/2020 Last updated : 08/30/2021 # Copy data from Dynamics AX by using Azure Data Factory
Specifically, this Dynamics AX connector supports copying data from Dynamics AX
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Dynamics AX using UI
+
+Use the following steps to create a linked service to Dynamics AX in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Dynamics and select the Dynamics AX connector.
+
+ :::image type="content" source="media/connector-dynamics-ax/dynamics-ax-connector.png" alt-text="Select the Dynamics AX connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-dynamics-ax/configure-dynamics-ax-linked-service.png" alt-text="Configure a linked service to Dynamics AX.":::
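To show roughly what the completed linked service looks like in JSON, here is a hedged sketch that authenticates with a service principal; the URL, IDs, and secret are placeholders, and the property names should be verified against the article's property tables.

```json
{
    "name": "DynamicsAXLinkedService",
    "properties": {
        "type": "DynamicsAX",
        "typeProperties": {
            "url": "https://<instance>.cloudax.dynamics.com/data",
            "servicePrincipalId": "<application (client) ID>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<client secret>"
            },
            "tenant": "<tenant ID>",
            "aadResourceId": "https://<instance>.cloudax.dynamics.com"
        }
    }
}
```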
+
+## Connector configuration details
+ The following sections provide details about properties you can use to define Data Factory entities that are specific to Dynamics AX connector. ## Prerequisites
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
Use the following steps to create a linked service to Dynamics 365 in the Azure
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for Dynamics and select the Dynamics 365 connector.
- :::image type="content" source="media/connector-azure-blob-storage/azure-blob-storage-connector.png" alt-text="Select the Dynamics 365 connector.":::
+ :::image type="content" source="media/connector-azure-blob-storage/azure-blob-storage-connector.png" alt-text="Screenshot of the Dynamics 365 connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-azure-blob-storage/configure-azure-blob-storage-linked-service.png" alt-text="Configure a linked service to Dynamics 365.":::
+ :::image type="content" source="media/connector-azure-blob-storage/configure-azure-blob-storage-linked-service.png" alt-text="Screenshot of linked service configuration for Dynamics 365.":::
## Connector configuration details
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-file-system.md
Use the following steps to create a file system linked service in the Azure port
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for file and select the File System connector.
- :::image type="content" source="media/connector-file-system/file-system-connector.png" alt-text="Select the File System connector.":::
+ :::image type="content" source="media/connector-file-system/file-system-connector.png" alt-text="Screenshot of the File System connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-file-system/configure-file-system-linked-service.png" alt-text="Configure a File System linked service.":::
+ :::image type="content" source="media/connector-file-system/configure-file-system-linked-service.png" alt-text="Screenshot of configuration for File System linked service.":::
## Connector configuration details
data-factory Connector Ftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-ftp.md
Use the following steps to create a linked service to an FTP server in the Azure
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for FTP and select the FTP connector.
- :::image type="content" source="media/connector-ftp/ftp-connector.png" alt-text="Select the FTP connector.":::
+ :::image type="content" source="media/connector-ftp/ftp-connector.png" alt-text="Screenshot of the FTP connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-ftp/configure-ftp-linked-service.png" alt-text="Configure a linked service to an FTP server.":::
+ :::image type="content" source="media/connector-ftp/configure-ftp-linked-service.png" alt-text="Screenshot of linked service configuration for an FTP server.":::
## Connector configuration details
data-factory Connector Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-github.md
Previously updated : 06/03/2020 Last updated : 08/30/2021
The GitHub connector in Azure Data Factory is only used to receive the entity reference schema for the [Common Data Model](format-common-data-model.md) format in mapping data flow.
+## Create a linked service to GitHub using UI
+
+Use the following steps to create a linked service to GitHub in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for GitHub and select the GitHub connector.
+
+ :::image type="content" source="media/connector-github/github-connector.png" alt-text="Screenshot of the GitHub connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-github/configure-github-linked-service.png" alt-text="Screenshot of linked service configuration for GitHub.":::
++ ## Linked service properties The following properties are supported for the GitHub linked service.
data-factory Connector Google Adwords https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-adwords.md
Previously updated : 10/25/2019 Last updated : 08/30/2021 # Copy data from Google AdWords using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Google AdWords using UI
+
+Use the following steps to create a linked service to Google AdWords in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Google and select the Google AdWords connector.
+
+ :::image type="content" source="media/connector-google-adwords/google-adwords-connector.png" alt-text="Screenshot of the Google AdWords connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-google-adwords/configure-google-adwords-linked-service.png" alt-text="Screenshot of linked service configuration for Google AdWords.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to Google AdWords connector. ## Linked service properties
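A hedged sketch of a Google AdWords linked service in JSON using user authentication; all IDs, tokens, and secrets are placeholders, and the property names are assumptions to confirm against the article's property table.

```json
{
    "name": "GoogleAdWordsLinkedService",
    "properties": {
        "type": "GoogleAdWords",
        "typeProperties": {
            "clientCustomerID": "<client customer ID>",
            "developerToken": {
                "type": "SecureString",
                "value": "<developer token>"
            },
            "authenticationType": "UserAuthentication",
            "refreshToken": {
                "type": "SecureString",
                "value": "<refresh token>"
            },
            "clientId": "<client ID>",
            "clientSecret": {
                "type": "SecureString",
                "value": "<client secret>"
            }
        }
    }
}
```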
data-factory Connector Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-bigquery.md
Use the following steps to create a linked service to Google BigQuery in the Azu
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for Google and select the Google BigQuery connector.
- :::image type="content" source="media/connector-google-bigquery/google-bigquery-connector.png" alt-text="Select the Google BigQuery connector.":::
+ :::image type="content" source="media/connector-google-bigquery/google-bigquery-connector.png" alt-text="Screenshot of the Google BigQuery connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-google-bigquery/configure-google-bigquery-linked-service.png" alt-text="Configure a linked service to Google BigQuery.":::
+ :::image type="content" source="media/connector-google-bigquery/configure-google-bigquery-linked-service.png" alt-text="Screenshot of linked service configuration for Google BigQuery.":::
## Connector configuration details
data-factory Connector Google Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-cloud-storage.md
Previously updated : 03/17/2021 Last updated : 08/30/2021
For the full list of Google Cloud Storage roles and associated permissions, see
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Google Cloud Storage using UI
+
+Use the following steps to create a linked service to Google Cloud Storage in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Google and select the Google Cloud Storage (S3 API) connector.
+
+ :::image type="content" source="media/connector-google-cloud-storage/google-cloud-storage-connector.png" alt-text="Select the Google Cloud Storage (S3 API) connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-google-cloud-storage/configure-google-cloud-storage-linked-service.png" alt-text="Configure a linked service to Google Cloud Storage.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Google Cloud Storage.

## Linked service properties
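As a sketch, a Google Cloud Storage linked service definition created this way typically contains an access key pair and the storage endpoint; the values below are placeholders:

```json
{
    "name": "GoogleCloudStorageLinkedService",
    "properties": {
        "type": "GoogleCloudStorage",
        "typeProperties": {
            "accessKeyId": "<access key ID>",
            "secretAccessKey": {
                "type": "SecureString",
                "value": "<secret access key>"
            },
            "serviceUrl": "https://storage.googleapis.com"
        }
    }
}
```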
data-factory Connector Greenplum https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-greenplum.md
Previously updated : 09/04/2019 Last updated : 08/30/2021 # Copy data from Greenplum using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Greenplum using UI
+
+Use the following steps to create a linked service to Greenplum in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Greenplum and select the Greenplum connector.
+
+ :::image type="content" source="media/connector-greenplum/greenplum-connector.png" alt-text="Screenshot of the Greenplum connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-greenplum/configure-greenplum-linked-service.png" alt-text="Screenshot of linked service configuration for Greenplum.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Greenplum connector.

## Linked service properties
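For reference, a Greenplum linked service definition is typically a single connection string stored as a secure string; a minimal sketch with placeholder values:

```json
{
    "name": "GreenplumLinkedService",
    "properties": {
        "type": "Greenplum",
        "typeProperties": {
            "connectionString": {
                "type": "SecureString",
                "value": "HOST=<server>;PORT=5432;DB=<database>;UID=<user name>;PWD=<password>"
            }
        }
    }
}
```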
data-factory Connector Hbase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hbase.md
Previously updated : 08/12/2019 Last updated : 08/30/2021 # Copy data from HBase using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to HBase using UI
+
+Use the following steps to create a linked service to HBase in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for HBase and select the HBase connector.
+
+ :::image type="content" source="media/connector-hbase/hbase-connector.png" alt-text="Screenshot of the HBase connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-hbase/configure-hbase-linked-service.png" alt-text="Screenshot of linked service configuration for HBase.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to HBase connector.

## Linked service properties
data-factory Connector Hdfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hdfs.md
Previously updated : 03/17/2021 Last updated : 08/30/2021
Specifically, the HDFS connector supports:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to HDFS using UI
+
+Use the following steps to create a linked service to HDFS in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for HDFS and select the HDFS connector.
+
+ :::image type="content" source="media/connector-hdfs/hdfs-connector.png" alt-text="Select the HDFS connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-hdfs/configure-hdfs-linked-service.png" alt-text="Configure a linked service to HDFS.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to HDFS.

## Linked service properties
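A minimal sketch of an HDFS linked service definition, assuming anonymous authentication against the WebHDFS endpoint and a self-hosted integration runtime (placeholder values):

```json
{
    "name": "HDFSLinkedService",
    "properties": {
        "type": "Hdfs",
        "typeProperties": {
            "url": "http://<name node>:50070/webhdfs/v1",
            "authenticationType": "Anonymous",
            "userName": "<user name>"
        },
        "connectVia": {
            "referenceName": "<self-hosted integration runtime name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```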
data-factory Connector Hive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hive.md
Previously updated : 11/17/2020 Last updated : 08/30/2021 + # Copy and transform data from Hive using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Hive using UI
+
+Use the following steps to create a linked service to Hive in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Hive and select the Hive connector.
+
+ :::image type="content" source="media/connector-hive/hive-connector.png" alt-text="Select the Hive connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-hive/configure-hive-linked-service.png" alt-text="Configure a linked service to Hive.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Hive connector.

## Linked service properties

The following properties are supported for Hive linked service:

| Property | Description | Required |
|:--- |:--- |:--- |
| type | The type property must be set to: **Hive** | Yes |
| host | IP address or host name of the Hive server, separated by ';' for multiple hosts (only when serviceDiscoveryMode is enabled). | Yes |
-| port | The TCP port that the Hive server uses to listen for client connections. If you connect to Azure HDInsights, specify port as 443. | Yes |
+| port | The TCP port that the Hive server uses to listen for client connections. If you connect to Azure HDInsight, specify port as 443. | Yes |
| serverType | The type of Hive server. <br/>Allowed values are: **HiveServer1**, **HiveServer2**, **HiveThriftServer** | No |
| thriftTransportProtocol | The transport protocol to use in the Thrift layer. <br/>Allowed values are: **Binary**, **SASL**, **HTTP** | No |
| authenticationType | The authentication method used to access the Hive server. <br/>Allowed values are: **Anonymous**, **Username**, **UsernameAndPassword**, **WindowsAzureHDInsightService**. Kerberos authentication is not currently supported. | Yes |
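Putting the table above together, a sketch of a Hive linked service definition that targets Azure HDInsight over HTTP on port 443 might look like the following; the `username` and `password` properties are assumed here for the username-based authentication types, and all values are placeholders:

```json
{
    "name": "HiveLinkedService",
    "properties": {
        "type": "Hive",
        "typeProperties": {
            "host": "<cluster>.azurehdinsight.net",
            "port": 443,
            "serverType": "HiveServer2",
            "thriftTransportProtocol": "HTTP",
            "authenticationType": "WindowsAzureHDInsightService",
            "username": "<user name>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        }
    }
}
```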
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-http.md
Use the following steps to create a linked service to an HTTP source in the Azur
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for HTTP and select the HTTP connector.
- :::image type="content" source="media/connector-http/http-connector.png" alt-text="Select the HTTP connector.":::
+ :::image type="content" source="media/connector-http/http-connector.png" alt-text="Screenshot of the HTTP connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-http/configure-http-linked-service.png" alt-text="Configure an HTTP linked service.":::
+ :::image type="content" source="media/connector-http/configure-http-linked-service.png" alt-text="Screenshot of configuration for an HTTP linked service.":::
## Connector configuration details
data-factory Connector Hubspot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hubspot.md
Previously updated : 12/18/2020 Last updated : 08/30/2021 # Copy data from HubSpot using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to HubSpot using UI
+
+Use the following steps to create a linked service to HubSpot in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for HubSpot and select the HubSpot connector.
+
+ :::image type="content" source="media/connector-hubspot/hubspot-connector.png" alt-text="Select the HubSpot connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-hubspot/configure-hubspot-linked-service.png" alt-text="Configure a linked service to HubSpot.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to HubSpot connector.

## Linked service properties
data-factory Connector Impala https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-impala.md
Previously updated : 09/04/2019 Last updated : 08/30/2021 # Copy data from Impala by using Azure Data Factory
Data Factory provides a built-in driver to enable connectivity. Therefore, you d
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Impala using UI
+
+Use the following steps to create a linked service to Impala in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Impala and select the Impala connector.
+
+ :::image type="content" source="media/connector-impala/impala-connector.png" alt-text="Screenshot of the Impala connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-impala/configure-impala-linked-service.png" alt-text="Screenshot of linked service configuration for Impala.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to the Impala connector.

## Linked service properties
data-factory Connector Informix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-informix.md
Previously updated : 03/17/2021 Last updated : 08/30/2021
To use this Informix connector, you need to:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Informix using UI
+
+Use the following steps to create a linked service to Informix in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Informix and select the Informix connector.
+
+ :::image type="content" source="media/connector-informix/informix-connector.png" alt-text="Screenshot of the Informix connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-informix/configure-informix-linked-service.png" alt-text="Screenshot of linked service configuration for Informix.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Informix connector.

## Linked service properties
data-factory Connector Jira https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-jira.md
Previously updated : 10/25/2019 Last updated : 08/30/2021 # Copy data from Jira using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Jira using UI
+
+Use the following steps to create a linked service to Jira in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Jira and select the Jira connector.
+
+ :::image type="content" source="media/connector-jira/jira-connector.png" alt-text="Select the Jira connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-jira/configure-jira-linked-service.png" alt-text="Configure a linked service to Jira.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Jira connector.

## Linked service properties
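A minimal sketch of a Jira linked service definition, assuming a cloud-hosted Jira site and basic credentials (placeholder values):

```json
{
    "name": "JiraLinkedService",
    "properties": {
        "type": "Jira",
        "typeProperties": {
            "host": "<company>.atlassian.net",
            "port": 443,
            "username": "<user name>",
            "password": {
                "type": "SecureString",
                "value": "<password or API token>"
            }
        }
    }
}
```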
data-factory Connector Magento https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-magento.md
Previously updated : 08/01/2019 Last updated : 08/30/2021 # Copy data from Magento using Azure Data Factory (Preview)
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Magento using UI
+
+Use the following steps to create a linked service to Magento in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Magento and select the Magento connector.
+
+ :::image type="content" source="media/connector-magento/magento-connector.png" alt-text="Screenshot of the Magento connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-magento/configure-magento-linked-service.png" alt-text="Screenshot of linked service configuration for Magento.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Magento connector.

## Linked service properties
data-factory Connector Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mariadb.md
Previously updated : 08/12/2019 Last updated : 08/30/2021 # Copy data from MariaDB using Azure Data Factory
This connector currently supports MariaDB versions 10.0 to 10.2.
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to MariaDB using UI
+
+Use the following steps to create a linked service to MariaDB in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Maria and select the MariaDB connector.
+
+ :::image type="content" source="media/connector-mariadb/mariadb-connector.png" alt-text="Screenshot of the MariaDB connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-mariadb/configure-mariadb-linked-service.png" alt-text="Screenshot of linked service configuration for MariaDB.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to MariaDB connector.

## Linked service properties
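A sketch of a MariaDB linked service definition; this variant assumes the password is kept in Azure Key Vault rather than inline, and all names are placeholders:

```json
{
    "name": "MariaDBLinkedService",
    "properties": {
        "type": "MariaDB",
        "typeProperties": {
            "connectionString": "Server=<server>;Port=3306;Database=<database>;UID=<user name>",
            "pwd": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "<Azure Key Vault linked service name>",
                    "type": "LinkedServiceReference"
                },
                "secretName": "<secret name>"
            }
        }
    }
}
```

Referencing a Key Vault secret keeps the credential out of the pipeline JSON; an inline `PWD=` value inside a secure connection string works as well.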
data-factory Connector Marketo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-marketo.md
Previously updated : 06/04/2020 Last updated : 08/30/2021
Currently, Marketo instance which is integrated with external CRM is not support
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Marketo using UI
+
+Use the following steps to create a linked service to Marketo in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Marketo and select the Marketo connector.
+
+ :::image type="content" source="media/connector-marketo/marketo-connector.png" alt-text="Screenshot of the Marketo connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-marketo/configure-marketo-linked-service.png" alt-text="Screenshot of linked service configuration for Marketo.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Marketo connector.

## Linked service properties
data-factory Connector Microsoft Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-microsoft-access.md
Previously updated : 08/20/2021 Last updated : 08/30/2021 # Copy data from and to Microsoft Access using Azure Data Factory
To use this Microsoft Access connector, you need to:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Microsoft Access using UI
+
+Use the following steps to create a linked service to Microsoft Access in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Access and select the Microsoft Access connector.
+
+ :::image type="content" source="media/connector-microsoft-access/microsoft-access-connector.png" alt-text="Select the Microsoft Access connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-microsoft-access/configure-microsoft-access-linked-service.png" alt-text="Configure a linked service to Microsoft Access.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Microsoft Access connector.

## Linked service properties
data-factory Connector Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb-atlas.md
Previously updated : 06/01/2021 Last updated : 08/30/2021 # Copy data from or to MongoDB Atlas using Azure Data Factory
If you use Azure Integration Runtime for copy, make sure you add the effective r
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to MongoDB Atlas using UI
+
+Use the following steps to create a linked service to MongoDB Atlas in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for MongoDB and select the MongoDB Atlas connector.
+
+ :::image type="content" source="media/connector-mongodb-atlas/mongodb-atlas-connector.png" alt-text="Select the MongoDB Atlas connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-mongodb-atlas/configure-mongodb-atlas-linked-service.png" alt-text="Configure a linked service to MongoDB Atlas.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to MongoDB Atlas connector.

## Linked service properties
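A minimal sketch of a MongoDB Atlas linked service definition, assuming the standard `mongodb+srv` connection string (placeholder values):

```json
{
    "name": "MongoDbAtlasLinkedService",
    "properties": {
        "type": "MongoDbAtlas",
        "typeProperties": {
            "connectionString": {
                "type": "SecureString",
                "value": "mongodb+srv://<user name>:<password>@<cluster address>"
            },
            "database": "<database name>"
        }
    }
}
```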
data-factory Connector Mongodb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb-legacy.md
Previously updated : 08/12/2019 Last updated : 08/30/2021 # Copy data from MongoDB using Azure Data Factory (legacy)
The Integration Runtime provides a built-in MongoDB driver, therefore you don't
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to MongoDB using UI
+
+Use the following steps to create a linked service to MongoDB in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Mongo and select the MongoDB connector.
+
+ :::image type="content" source="media/connector-mongodb-legacy/mongodb-legacy-connector.png" alt-text="Screenshot of the MongoDB connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-mongodb-legacy/configure-mongodb-legacy-linked-service.png" alt-text="Screenshot of linked service configuration for MongoDB.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to MongoDB connector.

## Linked service properties
data-factory Connector Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb.md
Previously updated : 06/01/2021 Last updated : 08/30/2021 # Copy data from or to MongoDB by using Azure Data Factory
Specifically, this MongoDB connector supports **versions up to 4.2**.
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to MongoDB using UI
+
+Use the following steps to create a linked service to MongoDB in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for MongoDB and select the MongoDB connector.
+
+ :::image type="content" source="media/connector-mongodb/mongodb-connector.png" alt-text="Select the MongoDB connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-mongodb/configure-mongodb-linked-service.png" alt-text="Configure a linked service to MongoDB.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to MongoDB connector.
data-factory Connector Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mysql.md
Previously updated : 09/09/2020 Last updated : 08/30/2021
The Integration Runtime provides a built-in MySQL driver starting from version 3
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to MySQL using UI
+
+Use the following steps to create a linked service to MySQL in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for MySQL and select the MySQL connector.
+
+ :::image type="content" source="media/connector-mysql/mysql-connector.png" alt-text="Select the MySQL connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-mysql/configure-mysql-linked-service.png" alt-text="Configure a linked service to MySQL.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to MySQL connector.

## Linked service properties
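A minimal sketch of a MySQL linked service definition using a secure connection string; the values are placeholders and the SSL-related settings are optional:

```json
{
    "name": "MySqlLinkedService",
    "properties": {
        "type": "MySql",
        "typeProperties": {
            "connectionString": {
                "type": "SecureString",
                "value": "Server=<server>;Port=3306;Database=<database>;UID=<user name>;PWD=<password>;SSLMode=1;UseSystemTrustStore=0"
            }
        }
    }
}
```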
data-factory Connector Netezza https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-netezza.md
Previously updated : 05/28/2020 Last updated : 08/30/2021 # Copy data from Netezza by using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity. You don't
## Get started
-You can create a pipeline that uses a copy activity by using the .NET SDK, the Python SDK, Azure PowerShell, the REST API, or an Azure Resource Manager template. See the [Copy Activity tutorial](quickstart-create-data-factory-dot-net.md) for step-by-step instructions on how to create a pipeline that has a copy activity.
+You can create a pipeline that uses a copy activity by using the .NET SDK, the Python SDK, Azure PowerShell, the REST API, or an Azure Resource Manager template. See the [Copy Activity tutorial](quickstart-create-data-factory-dot-net.md) for step-by-step instructions to create a pipeline with a copy activity.
+
+## Create a linked service to Netezza using UI
+
+Use the following steps to create a linked service to Netezza in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Netezza and select the Netezza connector.
+
+ :::image type="content" source="media/connector-netezza/netezza-connector.png" alt-text="Screenshot of the Netezza connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-netezza/configure-netezza-linked-service.png" alt-text="Screenshot of linked service configuration for Netezza.":::
+
+## Connector configuration details
The following sections provide details about properties you can use to define Data Factory entities that are specific to the Netezza connector.
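As a sketch, a Netezza linked service definition is again a secure connection string, usually routed through a self-hosted integration runtime when the server sits on a private network (placeholder values):

```json
{
    "name": "NetezzaLinkedService",
    "properties": {
        "type": "Netezza",
        "typeProperties": {
            "connectionString": {
                "type": "SecureString",
                "value": "SERVER=<server>;PORT=<port>;DATABASE=<database>;UID=<user name>;PWD=<password>"
            }
        },
        "connectVia": {
            "referenceName": "<self-hosted integration runtime name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```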
data-factory Connector Odata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-odata.md
Use the following steps to create a linked service to an OData store in the Azur
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for OData and select the OData connector.
- :::image type="content" source="media/connector-odata/odata-connector.png" alt-text="Select the OData connector.":::
+ :::image type="content" source="media/connector-odata/odata-connector.png" alt-text="Screenshot of the OData connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-odata/configure-odata-linked-service.png" alt-text="Configure a linked service to an OData store.":::
+ :::image type="content" source="media/connector-odata/configure-odata-linked-service.png" alt-text="Screenshot of linked service configuration for an OData store.":::
## Connector configuration details
data-factory Connector Odbc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-odbc.md
Use the following steps to create a linked service to an ODBC data store in the
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for ODBC and select the ODBC connector.
- :::image type="content" source="media/connector-odbc/odbc-connector.png" alt-text="Select the ODBC connector.":::
+ :::image type="content" source="media/connector-odbc/odbc-connector.png" alt-text="Screenshot of the ODBC connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-odbc/configure-odbc-linked-service.png" alt-text="Configure a linked service to an ODBC data store.":::
+ :::image type="content" source="media/connector-odbc/configure-odbc-linked-service.png" alt-text="Screenshot of linked service configuration for an ODBC data store.":::
## Connector configuration details
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-office-365.md
Use the following steps to create a linked service to Office 365 in the Azure po
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for Office and select the Office 365 connector.
- :::image type="content" source="media/connector-office-365/office-365-connector.png" alt-text="Select the Office 365 connector.":::
+ :::image type="content" source="media/connector-office-365/office-365-connector.png" alt-text="Screenshot of the Office 365 connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-office-365/configure-office-365-linked-service.png" alt-text="Configure a linked service to Office 365.":::
+ :::image type="content" source="media/connector-office-365/configure-office-365-linked-service.png" alt-text="Screenshot of linked service configuration for Office 365.":::
## Connector configuration details
data-factory Connector Oracle Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle-cloud-storage.md
Previously updated : 05/11/2021 Last updated : 08/30/2021
To copy data from Oracle Cloud Storage, please refer [here](https://docs.oracle.
## Getting started +
+## Create a linked service to Oracle Cloud Storage using UI
+
+Use the following steps to create a linked service to Oracle Cloud Storage in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Oracle and select the Oracle Cloud Storage connector.
+
+ :::image type="content" source="media/connector-oracle-cloud-storage/oracle-cloud-storage-connector.png" alt-text="Screenshot of the Oracle Cloud Storage connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-oracle-cloud-storage/configure-oracle-cloud-storage-linked-service.png" alt-text="Screenshot of linked service configuration for Oracle Cloud Storage.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Oracle Cloud Storage.
data-factory Connector Oracle Eloqua https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle-eloqua.md
Previously updated : 08/01/2019 Last updated : 08/30/2021 # Copy data from Oracle Eloqua using Azure Data Factory (Preview)
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Oracle Eloqua using UI
+
+Use the following steps to create a linked service to Oracle Eloqua in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Oracle and select the Oracle Eloqua connector.
+
+ :::image type="content" source="media/connector-oracle-eloqua/oracle-eloqua-connector.png" alt-text="Screenshot of the Oracle Eloqua connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-oracle-eloqua/configure-oracle-eloqua-linked-service.png" alt-text="Screenshot of linked service configuration for Oracle Eloqua.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Oracle Eloqua connector.

## Linked service properties
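A rough sketch of an Oracle Eloqua linked service definition, assuming the endpoint and `sitename\username` credential format used by Eloqua; all values are placeholders:

```json
{
    "name": "EloquaLinkedService",
    "properties": {
        "type": "Eloqua",
        "typeProperties": {
            "endpoint": "<company>.<instance>.eloqua.com",
            "username": "<site name>\\<user name>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        }
    }
}
```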
data-factory Connector Oracle Responsys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle-responsys.md
Previously updated : 08/01/2019 Last updated : 08/30/2021 # Copy data from Oracle Responsys using Azure Data Factory (Preview)
Azure Data Factory provides a built-in driver to enable connectivity, therefore
You can create a pipeline with copy activity using .NET SDK, Python SDK, Azure PowerShell, REST API, or Azure Resource Manager template. See [Copy activity tutorial](quickstart-create-data-factory-dot-net.md) for step-by-step instructions to create a pipeline with a copy activity.
+## Create a linked service to Oracle Responsys using UI
+
+Use the following steps to create a linked service to Oracle Responsys in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Oracle and select the Oracle Responsys connector.
+
+ :::image type="content" source="media/connector-oracle-responsys/oracle-responsys-connector.png" alt-text="Screenshot of the Oracle Responsys connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-oracle-responsys/configure-oracle-responsys-linked-service.png" alt-text="Screenshot of linked service configuration for Oracle Responsys.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Oracle Responsys connector.

## Linked service properties
data-factory Connector Oracle Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle-service-cloud.md
Previously updated : 08/01/2019 Last updated : 08/30/2021 # Copy data from Oracle Service Cloud using Azure Data Factory (Preview)
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Oracle Service Cloud using UI
+
+Use the following steps to create a linked service to Oracle Service Cloud in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Oracle and select the Oracle Service Cloud connector.
+
+ :::image type="content" source="media/connector-oracle-service-cloud/oracle-service-cloud-connector.png" alt-text="Select the Oracle Service Cloud connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-oracle-service-cloud/configure-oracle-service-cloud-linked-service.png" alt-text="Configure a linked service to Oracle Service Cloud.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Oracle Service Cloud connector.

## Linked service properties
data-factory Connector Oracle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle.md
Use the following steps to create a linked service to Oracle in the Azure portal
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
--
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for Oracle and select the Oracle connector.
- :::image type="content" source="media/connector-oracle/oracle-connector.png" alt-text="Select the Oracle connector.":::
+ :::image type="content" source="media/connector-oracle/oracle-connector.png" alt-text="Screenshot of the Oracle connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-oracle/configure-oracle-linked-service.png" alt-text="Configure a linked service to Oracle.":::
+ :::image type="content" source="media/connector-oracle/configure-oracle-linked-service.png" alt-text="Screenshot of linked service configuration for Oracle.":::
## Connector configuration details
data-factory Connector Paypal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-paypal.md
Previously updated : 08/01/2019 Last updated : 08/30/2021 # Copy data from PayPal using Azure Data Factory (Preview)
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to PayPal using UI
+
+Use the following steps to create a linked service to PayPal in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for PayPal and select the PayPal connector.
+
+ :::image type="content" source="media/connector-paypal/paypal-connector.png" alt-text="Screenshot of the PayPal connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-paypal/configure-paypal-linked-service.png" alt-text="Screenshot of linked service configuration for PayPal.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to PayPal connector.

## Linked service properties
data-factory Connector Phoenix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-phoenix.md
Previously updated : 09/04/2019 Last updated : 08/30/2021 # Copy data from Phoenix using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Phoenix using UI
+
+Use the following steps to create a linked service to Phoenix in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Phoenix and select the Phoenix connector.
+
+ :::image type="content" source="media/connector-phoenix/phoenix-connector.png" alt-text="Screenshot of the Phoenix connector.":::
++
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-phoenix/configure-phoenix-linked-service.png" alt-text="Screenshot of linked service configuration for Phoenix.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to Phoenix connector.

## Linked service properties
data-factory Connector Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-postgresql.md
Previously updated : 02/19/2020 Last updated : 08/30/2021 # Copy data from PostgreSQL by using Azure Data Factory
The Integration Runtime provides a built-in PostgreSQL driver starting from vers
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to PostgreSQL using UI
+
+Use the following steps to create a linked service to PostgreSQL in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Postgre and select the PostgreSQL connector.
+
+ :::image type="content" source="media/connector-postgresql/postgresql-connector.png" alt-text="Select the PostgreSQL connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-postgresql/configure-postgresql-linked-service.png" alt-text="Configure a linked service to PostgreSQL.":::
+
+## Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to PostgreSQL connector.

## Linked service properties
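A minimal sketch of a PostgreSQL linked service definition with a secure connection string (placeholder values):

```json
{
    "name": "PostgreSqlLinkedService",
    "properties": {
        "type": "PostgreSql",
        "typeProperties": {
            "connectionString": {
                "type": "SecureString",
                "value": "Server=<server>;Database=<database>;Port=5432;UID=<user name>;Password=<password>"
            }
        }
    }
}
```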
data-factory Connector Presto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-presto.md
Previously updated : 12/18/2020 Last updated : 08/30/2021 # Copy data from Presto using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Presto using UI
+
+Use the following steps to create a linked service to Presto in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Presto and select the Presto connector.
+
+ :::image type="content" source="media/connector-presto/presto-connector.png" alt-text="Screenshot of the Presto connector.":::
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-presto/configure-presto-linked-service.png" alt-text="Screenshot of linked service configuration for Presto.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the Presto connector.

## Linked service properties
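For orientation, a Presto linked service definition typically follows the shape sketched below; the server version, port, and authentication values are placeholders, not recommendations, and should be confirmed against the article's property table:

```json
{
    "name": "PrestoLinkedService",
    "properties": {
        "type": "Presto",
        "typeProperties": {
            "host": "<presto host>",
            "serverVersion": "<server version>",
            "catalog": "<catalog>",
            "port": "<port>",
            "authenticationType": "LDAP",
            "username": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        }
    }
}
```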
data-factory Connector Quickbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-quickbooks.md
Previously updated : 01/15/2021 Last updated : 08/30/2021 # Copy data from QuickBooks Online using Azure Data Factory (Preview)
This connector supports QuickBooks OAuth 2.0 authentication.
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to QuickBooks using UI
+
+Use the following steps to create a linked service to QuickBooks in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for QuickBooks and select the QuickBooks connector.
+
+ :::image type="content" source="media/connector-quickbooks/quickbooks-connector.png" alt-text="Screenshot of the QuickBooks connector.":::
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-quickbooks/configure-quickbooks-linked-service.png" alt-text="Screenshot of linked service configuration for QuickBooks.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the QuickBooks connector.

## Linked service properties
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-rest.md
Use the following steps to create a REST linked service in the Azure portal UI.
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for REST and select the REST connector.
data-factory Connector Salesforce Marketing Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce-marketing-cloud.md
Previously updated : 07/17/2020 Last updated : 08/30/2021 # Copy data from Salesforce Marketing Cloud using Azure Data Factory
The Salesforce Marketing Cloud connector supports OAuth 2 authentication, and it
You can create a pipeline with copy activity using .NET SDK, Python SDK, Azure PowerShell, REST API, or Azure Resource Manager template. See [Copy activity tutorial](quickstart-create-data-factory-dot-net.md) for step-by-step instructions to create a pipeline with a copy activity.
+## Create a linked service to Salesforce Marketing Cloud using UI
+
+Use the following steps to create a linked service to Salesforce Marketing Cloud in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Salesforce and select the Salesforce Marketing Cloud connector.
+
+ :::image type="content" source="media/connector-salesforce-marketing-cloud/salesforce-marketing-cloud-connector.png" alt-text="Select the Salesforce Marketing Cloud connector.":::
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-salesforce-marketing-cloud/configure-salesforce-marketing-cloud-linked-service.png" alt-text="Configure a linked service to Salesforce Marketing Cloud.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the Salesforce Marketing Cloud connector.

## Linked service properties
data-factory Connector Salesforce Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce-service-cloud.md
Previously updated : 03/17/2021 Last updated : 08/30/2021 # Copy data from and to Salesforce Service Cloud by using Azure Data Factory
You might also receive the "REQUEST_LIMIT_EXCEEDED" error message in both scenar
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Salesforce Service Cloud using UI
+
+Use the following steps to create a linked service to Salesforce Service Cloud in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Salesforce and select the Salesforce Service Cloud connector.
+
+ :::image type="content" source="media/connector-salesforce-service-cloud/salesforce-service-cloud-connector.png" alt-text="Select the Salesforce Service Cloud connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-salesforce-service-cloud/configure-salesforce-service-cloud-linked-service.png" alt-text="Configure a linked service to Salesforce Service Cloud.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the Salesforce Service Cloud connector.

## Linked service properties
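As a reference sketch (the environment URL and credential placeholders are illustrative), a Salesforce Service Cloud linked service authored in JSON generally looks like:

```json
{
    "name": "SalesforceServiceCloudLinkedService",
    "properties": {
        "type": "SalesforceServiceCloud",
        "typeProperties": {
            "environmentUrl": "https://<instance>.my.salesforce.com",
            "username": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            },
            "securityToken": {
                "type": "SecureString",
                "value": "<security token>"
            }
        }
    }
}
```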
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce.md
Use the following steps to create a linked service to Salesforce in the Azure po
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for Salesforce and select the Salesforce connector.
- :::image type="content" source="media/connector-salesforce/salesforce-connector.png" alt-text="Select the Salesforce connector.":::
+ :::image type="content" source="media/connector-salesforce/salesforce-connector.png" alt-text="Screenshot of the Salesforce connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-salesforce/configure-salesforce-linked-service.png" alt-text="Configure a linked service to Salesforce.":::
+ :::image type="content" source="media/connector-salesforce/configure-salesforce-linked-service.png" alt-text="Screenshot of linked service configuration for Salesforce.":::
## Connector configuration details
data-factory Connector Sap Business Warehouse Open Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-business-warehouse-open-hub.md
Previously updated : 07/30/2021 Last updated : 08/30/2021 # Copy data from SAP Business Warehouse via Open Hub using Azure Data Factory
data-factory Connector Sap Business Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-business-warehouse.md
Previously updated : 09/04/2019 Last updated : 08/30/2021 # Copy data from SAP Business Warehouse using Azure Data Factory
To use this SAP Business Warehouse connector, you need to:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to SAP BW using UI
+
+Use the following steps to create a linked service to SAP BW in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for SAP and select the SAP BW via MDX connector.
+
+ :::image type="content" source="media/connector-sap-business-warehouse/sap-business-warehouse-connector.png" alt-text="Select the SAP BW via MDX connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-sap-business-warehouse/configure-sap-business-warehouse-linked-service.png" alt-text="Configure a linked service to SAP BW.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the SAP Business Warehouse connector.

## Linked service properties
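A minimal JSON sketch of an SAP BW via MDX linked service is shown below, assuming a self-hosted integration runtime; the property names are indicative and should be confirmed against the table in the article:

```json
{
    "name": "SapBwLinkedService",
    "properties": {
        "type": "SapBw",
        "typeProperties": {
            "server": "<server name>",
            "systemNumber": "<system number>",
            "clientId": "<client ID>",
            "userName": "<SAP user name>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<self-hosted integration runtime name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```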
data-factory Connector Sap Cloud For Customer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-cloud-for-customer.md
Previously updated : 03/17/2021 Last updated : 08/30/2021 # Copy data from SAP Cloud for Customer (C4C) using Azure Data Factory
Specifically, this connector enables Azure Data Factory to copy data from/to SAP
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to SAP Cloud for Customer using UI
+
+Use the following steps to create a linked service to SAP Cloud for Customer in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for SAP and select the SAP Cloud for Customer connector.
+
+ :::image type="content" source="media/connector-sap-cloud-for-customer/sap-cloud-for-customer-connector.png" alt-text="Select the SAP Cloud for Customer connector.":::
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-sap-cloud-for-customer/configure-sap-cloud-for-customer-linked-service.png" alt-text="Configure a linked service to SAP Cloud for Customer.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the SAP Cloud for Customer connector.

## Linked service properties
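For reference, an SAP Cloud for Customer linked service in JSON roughly takes the following shape; the OData URL shown is a placeholder and the properties should be checked against the article's table:

```json
{
    "name": "SapCloudForCustomerLinkedService",
    "properties": {
        "type": "SapCloudForCustomer",
        "typeProperties": {
            "url": "https://<tenant>.crm.ondemand.com/sap/c4c/odata/v1/c4codata/",
            "username": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        }
    }
}
```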
data-factory Connector Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-ecc.md
Use the following steps to create a linked service to SAP ECC in the Azure porta
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for SAP and select the SAP ECC connector.
- :::image type="content" source="media/connector-sap-ecc/sap-ecc-connector.png" alt-text="Select the SAP ECC connector.":::
+ :::image type="content" source="media/connector-sap-ecc/sap-ecc-connector.png" alt-text="Screenshot of the SAP ECC connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-sap-ecc/configure-sap-ecc-linked-service.png" alt-text="Configure a linked service to SAP ECC.":::
+ :::image type="content" source="media/connector-sap-ecc/configure-sap-ecc-linked-service.png" alt-text="Screenshot of linked service configuration for SAP ECC.":::
## Connector configuration details
data-factory Connector Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-hana.md
Use the following steps to create a linked service to SAP HANA in the Azure port
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for SAP and select the SAP HANA connector.
- :::image type="content" source="media/connector-sap-hana/sap-hana-connector.png" alt-text="Select the SAP HANA connector.":::
+ :::image type="content" source="media/connector-sap-hana/sap-hana-connector.png" alt-text="Screenshot of the SAP HANA connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-sap-hana/configure-sap-hana-linked-service.png" alt-text="Configure a linked service to SAP HANA.":::
+ :::image type="content" source="media/connector-sap-hana/configure-sap-hana-linked-service.png" alt-text="Screenshot of linked service configuration for SAP HANA.":::
## Connector configuration details
data-factory Connector Sap Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-table.md
Use the following steps to create a linked service to an SAP table in the Azure
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for SAP and select the SAP table connector.
- :::image type="content" source="media/connector-sap-table/sap-table-connector.png" alt-text="Select the SAP table connector.":::
+ :::image type="content" source="media/connector-sap-table/sap-table-connector.png" alt-text="Screenshot of the SAP table connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-sap-table/configure-sap-table-linked-service.png" alt-text="Configure an SAP table linked service.":::
+ :::image type="content" source="media/connector-sap-table/configure-sap-table-linked-service.png" alt-text="Screenshot of configuration for an SAP table linked service.":::
## Connector configuration details
data-factory Connector Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-servicenow.md
Previously updated : 08/01/2019 Last updated : 08/30/2021 # Copy data from ServiceNow using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to ServiceNow using UI
+
+Use the following steps to create a linked service to ServiceNow in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for ServiceNow and select the ServiceNow connector.
+
+ :::image type="content" source="media/connector-servicenow/servicenow-connector.png" alt-text="Select the ServiceNow connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-servicenow/configure-servicenow-linked-service.png" alt-text="Configure a linked service to ServiceNow.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the ServiceNow connector.

## Linked service properties
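A minimal sketch of the equivalent JSON definition, using basic authentication and placeholder values (confirm the property names in the article's table):

```json
{
    "name": "ServiceNowLinkedService",
    "properties": {
        "type": "ServiceNow",
        "typeProperties": {
            "endpoint": "https://<instance>.service-now.com",
            "authenticationType": "Basic",
            "username": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        }
    }
}
```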
data-factory Connector Sftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sftp.md
Use the following steps to create an SFTP linked service in the Azure portal UI.
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for SFTP and select the SFTP connector.
- :::image type="content" source="media/connector-sftp/sftp-connector.png" alt-text="Select the SFTP connector.":::
+ :::image type="content" source="media/connector-sftp/sftp-connector.png" alt-text="Screenshot of the SFTP connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-sftp/configure-sftp-linked-service.png" alt-text="Configure an SFTP linked service.":::
+ :::image type="content" source="media/connector-sftp/configure-sftp-linked-service.png" alt-text="Screenshot of configuration for an SFTP linked service.":::
## Connector configuration details
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sharepoint-online-list.md
Use the following steps to create a linked service to a SharePoint Online List i
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for SharePoint and select the SharePoint Online List connector.
- :::image type="content" source="media/connector-sharepoint-online-list/sharepoint-online-list-connector.png" alt-text="Select the SharePoint Online List connector.":::
+ :::image type="content" source="media/connector-sharepoint-online-list/sharepoint-online-list-connector.png" alt-text="Screenshot of the SharePoint Online List connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-sharepoint-online-list/configure-sharepoint-online-list-linked-service.png" alt-text="Configure a linked service to a SharePoint Online List.":::
+ :::image type="content" source="media/connector-sharepoint-online-list/configure-sharepoint-online-list-linked-service.png" alt-text="Screenshot of linked service configuration for a SharePoint Online List.":::
## Connector configuration details
data-factory Connector Shopify https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-shopify.md
Previously updated : 08/01/2019 Last updated : 08/30/2021 # Copy data from Shopify using Azure Data Factory (Preview)
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Shopify using UI
+
+Use the following steps to create a linked service to Shopify in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Shopify and select the Shopify connector.
+
+ :::image type="content" source="media/connector-shopify/shopify-connector.png" alt-text="Screenshot of the Shopify connector.":::
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-shopify/configure-shopify-linked-service.png" alt-text="Screenshot of linked service configuration for Shopify.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the Shopify connector.

## Linked service properties
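For reference, a Shopify linked service is usually just a host plus an API access token; a hedged JSON sketch with placeholder values follows (verify against the article's property table):

```json
{
    "name": "ShopifyLinkedService",
    "properties": {
        "type": "Shopify",
        "typeProperties": {
            "host": "<store name>.myshopify.com",
            "accessToken": {
                "type": "SecureString",
                "value": "<API access token>"
            }
        }
    }
}
```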
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-snowflake.md
Use the following steps to create a linked service to Snowflake in the Azure por
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for Snowflake and select the Snowflake connector.
- :::image type="content" source="media/connector-snowflake/snowflake-connector.png" alt-text="Select the Snowflake connector.":::
+ :::image type="content" source="media/connector-snowflake/snowflake-connector.png" alt-text="Screenshot of the Snowflake connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-snowflake/configure-snowflake-linked-service.png" alt-text="Configure a linked service to Snowflake.":::
+ :::image type="content" source="media/connector-snowflake/configure-snowflake-linked-service.png" alt-text="Screenshot of linked service configuration for Snowflake.":::
## Connector configuration details
data-factory Connector Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-spark.md
Previously updated : 09/04/2019 Last updated : 08/30/2021 # Copy data from Spark using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Spark using UI
+
+Use the following steps to create a linked service to Spark in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Spark and select the Spark connector.
+
+ :::image type="content" source="media/connector-spark/spark-connector.png" alt-text="Screenshot of the Spark connector.":::
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-spark/configure-spark-linked-service.png" alt-text="Screenshot of linked service configuration for Spark.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the Spark connector.

## Linked service properties
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sql-server.md
Use the following steps to create a SQL Server linked service in the Azure porta
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Synapse Analytics](#tab/synapse-analytics)
+ # [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
2. Search for SQL and select the SQL Server connector.
- :::image type="content" source="media/connector-sql-server/sql-server-connector.png" alt-text="Select the SQL Server connector.":::
+ :::image type="content" source="media/connector-sql-server/sql-server-connector.png" alt-text="Screenshot of the SQL Server connector.":::
1. Configure the service details, test the connection, and create the new linked service.
- :::image type="content" source="media/connector-sql-server/configure-sql-server-linked-service.png" alt-text="Configure a SQL Server linked service.":::
+ :::image type="content" source="media/connector-sql-server/configure-sql-server-linked-service.png" alt-text="Screenshot of configuration for SQL Server linked service.":::
## Connector configuration details
data-factory Connector Square https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-square.md
Previously updated : 08/03/2020 Last updated : 08/30/2021 # Copy data from Square using Azure Data Factory (Preview)
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Square using UI
+
+Use the following steps to create a linked service to Square in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Square and select the Square connector.
+
+ :::image type="content" source="media/connector-square/square-connector.png" alt-text="Screenshot of the Square connector.":::
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-square/configure-square-linked-service.png" alt-text="Screenshot of linked service configuration for Square.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the Square connector.

## Linked service properties
data-factory Connector Sybase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sybase.md
Previously updated : 06/10/2020 Last updated : 08/30/2021 # Copy data from Sybase using Azure Data Factory
To use this Sybase connector, you need to:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Sybase using UI
+
+Use the following steps to create a linked service to Sybase in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Sybase and select the Sybase connector.
+
+ :::image type="content" source="media/connector-sybase/sybase-connector.png" alt-text="Select the Sybase connector.":::
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-sybase/configure-sybase-linked-service.png" alt-text="Configure a linked service to Sybase.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the Sybase connector.

## Linked service properties
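A rough JSON sketch of a Sybase linked service, assuming basic authentication through a self-hosted integration runtime (placeholders are illustrative; check the article's table for supported properties):

```json
{
    "name": "SybaseLinkedService",
    "properties": {
        "type": "Sybase",
        "typeProperties": {
            "server": "<server name>",
            "database": "<database name>",
            "authenticationType": "Basic",
            "username": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<self-hosted integration runtime name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```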
data-factory Connector Teradata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-teradata.md
Previously updated : 01/22/2021 Last updated : 08/30/2021
If you use Self-hosted Integration Runtime, note it provides a built-in Teradata
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Teradata using UI
+
+Use the following steps to create a linked service to Teradata in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Teradata and select the Teradata connector.
+
+ :::image type="content" source="media/connector-teradata/teradata-connector.png" alt-text="Select the Teradata connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-teradata/configure-teradata-linked-service.png" alt-text="Configure a linked service to Teradata.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the Teradata connector.

## Linked service properties
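As an indicative sketch only (the connection string keys are placeholders; check the article's table for the supported properties), a Teradata linked service might be authored as:

```json
{
    "name": "TeradataLinkedService",
    "properties": {
        "type": "Teradata",
        "typeProperties": {
            "connectionString": "DBCName=<server name>;Uid=<username>;Pwd=<password>"
        },
        "connectVia": {
            "referenceName": "<self-hosted integration runtime name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```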
data-factory Connector Vertica https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-vertica.md
Previously updated : 09/04/2019 Last updated : 08/30/2021 # Copy data from Vertica using Azure Data Factory
Azure Data Factory provides a built-in driver to enable connectivity, therefore
You can create a pipeline with copy activity using .NET SDK, Python SDK, Azure PowerShell, REST API, or Azure Resource Manager template. See [Copy activity tutorial](quickstart-create-data-factory-dot-net.md) for step-by-step instructions to create a pipeline with a copy activity.
+## Create a linked service to Vertica using UI
+
+Use the following steps to create a linked service to Vertica in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Vertica and select the Vertica connector.
+
+ :::image type="content" source="media/connector-vertica/vertica-connector.png" alt-text="Screenshot of the Vertica connector.":::
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-vertica/configure-vertica-linked-service.png" alt-text="Screenshot of linked service configuration for Vertica.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the Vertica connector.

## Linked service properties
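For orientation, a Vertica linked service is defined mainly by a connection string; a minimal sketch with placeholder values (verify the keys against the article's table):

```json
{
    "name": "VerticaLinkedService",
    "properties": {
        "type": "Vertica",
        "typeProperties": {
            "connectionString": "Server=<server>;Port=<port>;Database=<database>;UID=<username>;PWD=<password>"
        }
    }
}
```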
data-factory Connector Web Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-web-table.md
Previously updated : 08/01/2019 Last updated : 08/30/2021 # Copy data from Web table by using Azure Data Factory
To use this Web table connector, you need to set up a Self-hosted Integration Ru
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Web Table using UI
+
+Use the following steps to create a linked service to Web Table in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Web and select the Web Table connector.
+
+ :::image type="content" source="media/connector-web-table/web-table-connector.png" alt-text="Select the Web Table connector.":::
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-web-table/configure-web-table-linked-service.png" alt-text="Configure a linked service to Web Table.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the Web table connector.

## Linked service properties
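A hedged sketch of the JSON behind a Web Table linked service, assuming anonymous authentication through a self-hosted integration runtime; the example URL is a placeholder and the type name should be confirmed against the article:

```json
{
    "name": "WebLinkedService",
    "properties": {
        "type": "Web",
        "typeProperties": {
            "url": "https://<website URL>/",
            "authenticationType": "Anonymous"
        },
        "connectVia": {
            "referenceName": "<self-hosted integration runtime name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```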
data-factory Connector Xero https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-xero.md
Previously updated : 01/26/2021 Last updated : 08/30/2021 # Copy data from Xero using Azure Data Factory
Specifically, this Xero connector supports:
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Xero using UI
+
+Use the following steps to create a linked service to Xero in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Xero and select the Xero connector.
+
+ :::image type="content" source="media/connector-xero/xero-connector.png" alt-text="Select the Xero connector.":::
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-xero/configure-xero-linked-service.png" alt-text="Configure a linked service to Xero.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the Xero connector.

## Linked service properties
data-factory Connector Zoho https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-zoho.md
Previously updated : 08/03/2020 Last updated : 08/30/2021 # Copy data from Zoho using Azure Data Factory (Preview)
Azure Data Factory provides a built-in driver to enable connectivity, therefore
[!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
+## Create a linked service to Zoho using UI
+
+Use the following steps to create a linked service to Zoho in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Create a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
+
+2. Search for Zoho and select the Zoho connector.
+
+ :::image type="content" source="media/connector-zoho/zoho-connector.png" alt-text="Select the Zoho connector.":::
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-zoho/configure-zoho-linked-service.png" alt-text="Configure a linked service to Zoho.":::
+
+## Connector configuration details
+ The following sections provide details about properties that are used to define Data Factory entities specific to the Zoho connector.

## Linked service properties
data-factory Copy Activity Fault Tolerance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-fault-tolerance.md
path | The path of the log files. | Specify the path that you use to store the l
> The following are the prerequisites for enabling fault tolerance in the copy activity when copying binary files.
> For skipping particular files when they are being deleted from the source store:
> - The source dataset and sink dataset have to be in binary format, and the compression type cannot be specified.
-> - The supported data store types are Azure Blob storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure File Storage, File System, FTP, SFTP, Amazon S3, Google Cloud Storage and HDFS.
+> - The supported data store types are Azure Blob storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure Files, File System, FTP, SFTP, Amazon S3, Google Cloud Storage and HDFS.
> - Only when you specify multiple files in the source dataset, which can be a folder, wildcard, or a list of files, can the copy activity skip the particular error files. If a single file is specified in the source dataset to be copied to the destination, the copy activity fails if any error occurs.
>
> For skipping particular files when access to them is forbidden from the source store:
> - The source dataset and sink dataset have to be in binary format, and the compression type cannot be specified.
-> - The supported data store types are Azure Blob storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure File Storage, SFTP, Amazon S3 and HDFS.
+> - The supported data store types are Azure Blob storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure Files, SFTP, Amazon S3 and HDFS.
> - Only when you specify multiple files in the source dataset, which can be a folder, wildcard, or a list of files, can the copy activity skip the particular error files. If a single file is specified in the source dataset to be copied to the destination, the copy activity fails if any error occurs.
>
> For skipping particular files when they are verified to be inconsistent between the source and destination store:
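For orientation, the skip behaviors described in this note surface in the copy activity's `typeProperties`, roughly as sketched below for a binary source and sink; the flag names and the log location are indicative and should be verified against the settings table in the full article:

```json
"typeProperties": {
    "source": { "type": "BinarySource" },
    "sink": { "type": "BinarySink" },
    "skipErrorFile": {
        "fileMissing": true,
        "fileForbidden": true,
        "dataInconsistency": true
    },
    "logSettings": {
        "enableCopyActivityLog": true,
        "logLocationSettings": {
            "linkedServiceName": {
                "referenceName": "<log storage linked service>",
                "type": "LinkedServiceReference"
            },
            "path": "<container>/<folder>"
        }
    }
}
```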
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-overview.md
The [copy activity monitoring](copy-activity-monitoring.md) experience shows you
## Resume from last failed run
-Copy activity supports resume from last failed run when you copy large size of files as-is with binary format between file-based stores and choose to preserve the folder/file hierarchy from source to sink, e.g. to migrate data from Amazon S3 to Azure Data Lake Storage Gen2. It applies to the following file-based connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md) [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
+Copy activity supports resuming from the last failed run when you copy large files as-is in binary format between file-based stores and choose to preserve the folder/file hierarchy from source to sink, for example, to migrate data from Amazon S3 to Azure Data Lake Storage Gen2. It applies to the following file-based connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md), and [SFTP](connector-sftp.md).
You can leverage the copy activity resume in the following two ways:
data-factory Copy Activity Performance Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance-troubleshooting.md
When the copy activity performance doesn't meet your expectation, to troubleshoo
- Check whether you can [copy files based on datetime partitioned file path or name](tutorial-incremental-copy-partitioned-file-name-copy-data-tool.md). This approach avoids putting the burden of listing on the source side.
- - Check if you can use data store's native filter instead, specifically "**prefix**" for Amazon S3/Azure Blob/Azure File Storage and "**listAfter/listBefore**" for ADLS Gen1. Those filters are data store server-side filter and would have much better performance.
+ - Check if you can use data store's native filter instead, specifically "**prefix**" for Amazon S3/Azure Blob storage/Azure Files and "**listAfter/listBefore**" for ADLS Gen1. Those filters are data store server-side filter and would have much better performance.
- Consider splitting a single large data set into several smaller data sets, and letting those copy jobs run concurrently, each tackling a portion of the data. You can do this with Lookup/GetMetadata + ForEach + Copy. Refer to the [Copy files from multiple containers](solution-template-copy-files-multiple-containers.md) or [Migrate data from Amazon S3 to ADLS Gen2](solution-template-migration-s3-azure.md) solution templates as a general example.
When the copy performance doesn't meet your expectation, to troubleshoot single
- Check whether you can [copy files based on datetime partitioned file path or name](tutorial-incremental-copy-partitioned-file-name-copy-data-tool.md). This approach avoids putting the burden of listing on the source side.
- - Check if you can use data store's native filter instead, specifically "**prefix**" for Amazon S3/Azure Blob/Azure File Storage and "**listAfter/listBefore**" for ADLS Gen1. Those filters are data store server-side filter and would have much better performance.
+ - Check if you can use data store's native filter instead, specifically "**prefix**" for Amazon S3/Azure Blob storage/Azure Files and "**listAfter/listBefore**" for ADLS Gen1. Those filters are data store server-side filter and would have much better performance.
- Consider splitting a single large data set into several smaller data sets, and letting those copy jobs run concurrently, each tackling a portion of the data. You can do this with Lookup/GetMetadata + ForEach + Copy. Refer to the [Copy files from multiple containers](solution-template-copy-files-multiple-containers.md) or [Migrate data from Amazon S3 to ADLS Gen2](solution-template-migration-s3-azure.md) solution templates as a general example.
data-factory Copy Activity Preserve Metadata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-preserve-metadata.md
When you use Azure Data Factory copy activity to copy data from source to sink,
## <a name="preserve-metadata"></a> Preserve metadata for lake migration
-When you migrate data from one data lake to another including [Amazon S3](connector-amazon-simple-storage-service.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), and [Azure File Storage](connector-azure-file-storage.md), you can choose to preserve the file metadata along with data.
+When you migrate data from one data lake to another including [Amazon S3](connector-amazon-simple-storage-service.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), and [Azure Files](connector-azure-file-storage.md), you can choose to preserve the file metadata along with data.
Copy activity supports preserving the following attributes during data copy:
**Handle differences in metadata:** Amazon S3 and Azure Storage allow different sets of characters in the keys of customer specified metadata. When you choose to preserve metadata using copy activity, ADF automatically replaces the invalid characters with '_'.
-When you copy files as-is from Amazon S3/Azure Data Lake Storage Gen2/Azure Blob/Azure File Storage to Azure Data Lake Storage Gen2/Azure Blob/Azure File Storage with binary format, you can find the **Preserve** option on the **Copy Activity** > **Settings** tab for activity authoring or the **Settings** page in Copy Data Tool.
+When you copy files as-is from Amazon S3/Azure Data Lake Storage Gen2/Azure Blob storage/Azure Files to Azure Data Lake Storage Gen2/Azure Blob storage/Azure Files with binary format, you can find the **Preserve** option on the **Copy Activity** > **Settings** tab for activity authoring or the **Settings** page in Copy Data Tool.
![Copy activity preserve metadata](./media/copy-activity-preserve-metadata/copy-activity-preserve-metadata.png)
data-factory Create Azure Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-integration-runtime.md
Use the following steps to create an Azure IR using UI.
:::image type="content" source="media/doc-common-process/get-started-page-manage-button-synapse.png" alt-text="The home page Manage button"::: -- 2. Select **Integration runtimes** on the left pane, and then select **+New**. # [Azure Data Factory](#tab/data-factory)
Use the following steps to create an Azure IR using UI.
:::image type="content" source="media/doc-common-process/manage-new-integration-runtime-synapse.png" alt-text="Screenshot that highlights Integration runtimes in the left pane and the +New button."::: -- 3. On the **Integration runtime setup** page, select **Azure, Self-Hosted**, and then select **Continue**. 1. On the following page, select **Azure** to create an Azure IR, and then select **Continue**.
data-factory Data Access Strategies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-access-strategies.md
For more information about supported network security mechanisms on data stores
| | Azure Data Explorer | - | - | Yes* | Yes* | - |
| | Azure Data Lake Gen1 | - | - | Yes | - | Yes |
| | Azure Database for MariaDB, MySQL, PostgreSQL | - | - | Yes | - | Yes |
- | | Azure File Storage | Yes | - | Yes | - | . |
- | | Azure Storage (Blob, ADLS Gen2) | Yes | Yes (MSI auth only) | Yes | - | . |
+ | | Azure Files | Yes | - | Yes | - | . |
+ | | Azure Blob storage and ADLS Gen2 | Yes | Yes (MSI auth only) | Yes | - | . |
| | Azure SQL DB, Azure Synapse Analytics, SQL MI | Yes (only Azure SQL DB/DW) | - | Yes | - | Yes |
| | Azure Key Vault (for fetching secrets/connection string) | Yes | Yes | Yes | - | - |
| Other PaaS/SaaS data stores | AWS S3, Salesforce, Google Cloud Storage, etc. | - | - | Yes | - | - |
For more information about supported network security mechanisms on data stores
| | Azure Data Explorer | - | - |
| | Azure Data Lake Gen1 | Yes | - |
| | Azure Database for MariaDB, MySQL, PostgreSQL | Yes | - |
- | | Azure File Storage | Yes | - |
- | | Azure Storage (Blog, ADLS Gen2) | Yes | Yes (MSI auth only) |
+ | | Azure Files | Yes | - |
+ | | Azure Blob storage and ADLS Gen2 | Yes | Yes (MSI auth only) |
| | Azure SQL DB, Azure Synapse Analytics, SQL MI | Yes | - |
| | Azure Key Vault (for fetching secrets/connection string) | Yes | Yes |
| Other PaaS/SaaS data stores | AWS S3, Salesforce, Google Cloud Storage, etc. | Yes | - |
data-factory Data Factory Ux Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-ux-troubleshoot-guide.md
File format dataset can be used with all the file-based connectors, for example,
On the ADF authoring UI, when you use a file format dataset in an activity - including the Copy, Lookup, GetMetadata, and Delete activities - and in the dataset you want to point to a linked service of a different type from the current one (for example, switch from File System to ADLS Gen2), you would see the following warning message. To make sure it's a clean switch, upon your consent, the pipelines and activities that reference this dataset will be modified to use the new type as well, and any existing data store settings that are incompatible with the new type will be cleared because they no longer apply.
-To learn more on which the supported data store settings for each connector, you can go to the corresponding connector article -> copy activity properties to see the detailed property list. Refer to [Amazon S3](connector-amazon-simple-storage-service.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), and [SFTP](connector-sftp.md).
+To learn more about the supported data store settings for each connector, go to the corresponding connector article's copy activity properties section to see the detailed property list. Refer to [Amazon S3](connector-amazon-simple-storage-service.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), and [SFTP](connector-sftp.md).
![Warning message](media/data-factory-ux-troubleshoot-guide/warning-message.png)
data-factory Delete Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/delete-activity.md
Here are some recommendations for using the Delete activity:
- [Azure Blob storage](connector-azure-blob-storage.md) - [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md) - [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md)-- [Azure File Storage](connector-azure-file-storage.md)
+- [Azure Files](connector-azure-file-storage.md)
- [File System](connector-file-system.md) - [FTP](connector-ftp.md) - [SFTP](connector-sftp.md)
data-factory Format Avro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-avro.md
Follow this article when you want to **parse the Avro files or write the data into Avro format**.
-Avro format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
+Avro format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
## Dataset properties
data-factory Format Binary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-binary.md
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Binary format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
+Binary format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
You can use Binary dataset in [Copy activity](copy-activity-overview.md), [GetMetadata activity](control-flow-get-metadata-activity.md), or [Delete activity](delete-activity.md). When using Binary dataset, ADF does not parse file content but treat it as-is.
data-factory Format Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-delimited-text.md
Follow this article when you want to **parse the delimited text files or write the data into delimited text format**.
-Delimited text format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
+Delimited text format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
## Dataset properties
data-factory Format Excel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-excel.md
Follow this article when you want to **parse the Excel files**. The service supports both ".xls" and ".xlsx".
-Excel format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md). It is supported as source but not sink.
+Excel format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md). It is supported as source but not sink.
>[!NOTE] >".xls" format is not supported while using [HTTP](connector-http.md).
data-factory Format Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-json.md
Follow this article when you want to **parse the JSON files or write the data into JSON format**.
-JSON format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
+JSON format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
## Dataset properties
data-factory Format Orc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-orc.md
Follow this article when you want to **parse the ORC files or write the data into ORC format**.
-ORC format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
+ORC format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
## Dataset properties
data-factory Format Parquet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-parquet.md
Follow this article when you want to **parse the Parquet files or write the data into Parquet format**.
-Parquet format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
+Parquet format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).
## Dataset properties
data-factory Format Xml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-xml.md
Follow this article when you want to **parse the XML files**.
-XML format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md). It is supported as source but not sink.
+XML format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md). It is supported as source but not sink.
## Dataset properties
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-event-trigger.md
This section shows you how to create a storage event trigger within the Azure Da
# [Azure Synapse](#tab/synapse-analytics) :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" alt-text="Screenshot of Author page to create a new storage event trigger in the Azure Synapse UI."::: -- 5. Select your storage account from the Azure subscription dropdown or manually using its Storage account resource ID. Choose which container you wish the events to occur on. Container selection is required, but be mindful that selecting all containers can lead to a large number of events. > [!NOTE]
data-factory Join Azure Ssis Integration Runtime Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network.md
For firewall appliance to allow outbound traffic, you need to allow outbound to
- Port 445 with destination as Azure Storage (only required when you execute SSIS package stored in Azure Files).
- If you use Azure Firewall, you can specify network rule with Storage Service Tag, otherwise you might allow destination as specific azure file storage url in firewall appliance.
+ If you use Azure Firewall, you can specify a network rule with the Storage service tag; otherwise, you can allow the specific Azure file share URL as the destination in your firewall appliance.
> [!NOTE] > For Azure SQL and Storage, if you configure Virtual Network service endpoints on your subnet, then traffic between the Azure-SSIS IR and Azure SQL in the same region, or Azure Storage in the same region or its paired region, is routed directly to the Microsoft Azure backbone network instead of your firewall appliance.
data-factory Supported File Formats And Compression Codecs Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/supported-file-formats-and-compression-codecs-legacy.md
Last updated 12/10/2019
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-*This article applies to the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), and [SFTP](connector-sftp.md).*
+*This article applies to the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), and [SFTP](connector-sftp.md).*
>[!IMPORTANT] >Data Factory introduced a new format-based dataset model; see the corresponding format article for details: <br>- [Avro format](format-avro.md)<br>- [Binary format](format-binary.md)<br>- [Delimited text format](format-delimited-text.md)<br>- [JSON format](format-json.md)<br>- [ORC format](format-orc.md)<br>- [Parquet format](format-parquet.md)<br>The rest of the configurations mentioned in this article are still supported as-is for backward compatibility. We recommend that you use the new model going forward.
data-factory Supported File Formats And Compression Codecs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/supported-file-formats-and-compression-codecs.md
# Supported file formats and compression codecs by copy activity in Azure Data Factory and Azure Synapse pipelines [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-*This article applies to the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).*
+*This article applies to the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure Files](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), [Oracle Cloud Storage](connector-oracle-cloud-storage.md) and [SFTP](connector-sftp.md).*
[!INCLUDE [data-factory-v2-file-formats](includes/data-factory-v2-file-formats.md)]
data-lake-analytics Migrate Azure Data Lake Analytics To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/migrate-azure-data-lake-analytics-to-synapse.md
Title: Migrate Azure Data Analytics to Azure Synapse Analytics.
-description: This article describes how to migrate from Azure Data Analytics to Azure Synapse Analytics.
+ Title: Migrate Azure Data Lake Analytics to Azure Synapse Analytics.
+description: This article describes how to migrate from Azure Data Lake Analytics to Azure Synapse Analytics.
-+ Last updated 08/25/2021
-# Migrate Azure Data Analytics to Azure Synapse Analytics
+# Migrate Azure Data Lake Analytics to Azure Synapse Analytics
Microsoft launched Azure Synapse Analytics, which aims to bring data lakes and data warehouses together for a unified big data analytics experience. It helps customers gather and analyze all of their varied data, solve data inefficiency, and work together. Moreover, Synapse's integration with Azure Machine Learning and Power BI improves organizations' ability to get insights from their data and to apply machine learning across all of their smart apps.
-The document shows you how to do the migration from ADLA to Azure Synapse Analytics.
+This document shows you how to migrate from Azure Data Lake Analytics to Azure Synapse Analytics.
## Recommended approach - Step 1: Assess readiness - Step 2: Prepare to migrate - Step 3: Migrate data and application workloads-- Step 4: Cutover from ADLA to Azure Synapse Analytics
+- Step 4: Cutover from Azure Data Lake Analytics to Azure Synapse Analytics
### Step 1: Assess readiness
-1. Look at [Apache Spark on Azure Synapse Analytics](../synapse-analytics/spark/apache-spark-overview.md), and understand key differences of ADLA and Spark on Azure Synapse Analytics.
+1. Look at [Apache Spark on Azure Synapse Analytics](../synapse-analytics/spark/apache-spark-overview.md), and understand key differences of Azure Data Lake Analytics and Spark on Azure Synapse Analytics.
- |Item | ADLA | Spark on Synapse |
+ |Item | Azure Data Lake Analytics | Spark on Synapse |
| | | | | Pricing |Per Analytic Unit-hour |Per vCore-hour| |Engine |Azure Data Lake Analytics |Apache Spark
The document shows you how to do the migration from ADLA to Azure Synapse Analyt
3. Transform or re-create your job orchestration pipelines to new Spark program.
-### Step 4: Cut over from ADLA to new Azure Analytics Services
+### Step 4: Cut over from Azure Data Lake Analytics to Azure Synapse Analytics
-After you're confident that your applications and workloads are stable, you can begin using Azure Synapse Analytics to satisfy your business scenarios. Turn off any remaining pipelines that are running on ADLA and decommission your ADLA accounts.
+After you're confident that your applications and workloads are stable, you can begin using Azure Synapse Analytics to satisfy your business scenarios. Turn off any remaining pipelines that are running on Azure Data Lake Analytics and decommission your Azure Data Lake Analytics accounts.
<a name="questionnaire"></a> ## Questionnaire for Migration Assessment |Category |Questions |Reference| | | | |
-|Evaluate the size of the Migration |How many ADLA accounts do you have? How many pipelines are in use? How many U-SQL scripts are in use?| The more data and scripts to be migrated, the more UDO/UDF are used in scripts, the more difficult it is to migrate. The time and resources required for migration need to be well planned according to the scale of the project.|
+|Evaluate the size of the migration |How many Azure Data Lake Analytics accounts do you have? How many pipelines are in use? How many U-SQL scripts are in use?| The more data and scripts there are to migrate, and the more UDOs/UDFs are used in the scripts, the more difficult the migration is. The time and resources required for migration need to be well planned according to the scale of the project.|
|Data source |What's the size of the data source? What kinds of data formats are processed? |[Understand Apache Spark data formats for Azure Data Lake Analytics U-SQL developers](understand-spark-data-formats.md)| |Data output |Will you keep the output data for later use? If the output data is saved in U-SQL tables, how will you handle it? | If the output data will be used often and is saved in U-SQL tables, you need to change the scripts and change the output data to a Spark-supported data format.| |Data migration |Have you made the storage migration plan? |[Migrate Azure Data Lake Storage from Gen1 to Gen2](../storage/blobs/data-lake-storage-migrate-gen1-to-gen2.md) |
databox-online Azure Stack Edge Gpu System Requirements Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-system-requirements-rest.md
We recommend that you review the information carefully before you connect to the
| Feature | Azure Storage | Azure Stack Edge Blob storage | ||-||
-| Azure File storage | Cloud-based SMB file shares supported | Not supported |
+| Azure Files | Cloud-based SMB and NFS file shares supported | Not supported |
| Storage account type | General-purpose and Azure Blob storage accounts | General-purpose v1 only| | Blob name | 1,024 characters (2,048 bytes) | 880 characters (1,760 bytes)|
-| Block blob maximum size | 4.75 TB (100 MB X 50,000 blocks) | 4.75 TB (100 MB x 50,000 blocks) for Azure Stack Edge|
-| Page blob maximum size | 8 TB | 1 TB |
-| Page blob page size | 512 bytes | 4 KB |
+| Block blob maximum size | 4.75 TiB (100 MiB X 50,000 blocks) | 4.75 TiB (100 MiB x 50,000 blocks) for Azure Stack Edge|
+| Page blob maximum size | 8 TiB | 1 TiB |
+| Page blob page size | 512 bytes | 4 KiB |
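
For reference, the block blob limit in the table follows directly from the block size and block count. The following is a minimal sketch of the arithmetic, using the values quoted in the table above:

```python
# Block blob maximum size: 50,000 blocks of 100 MiB each.
block_size_mib = 100
max_blocks = 50_000

total_mib = block_size_mib * max_blocks      # 5,000,000 MiB
total_tib = total_mib / (1024 * 1024)        # convert MiB to TiB

print(f"{total_tib:.2f} TiB")                # ~4.77 TiB
```

The result, roughly 4.77 TiB, is what the documentation states as approximately 4.75 TiB.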
## Supported API versions
databox Data Box Deploy Export Ordered https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-export-ordered.md
The following xml shows an example of blob names, blob prefixes, and Azure Files
<BlobPathPrefix>/8mbfiles/</BlobPathPrefix> <BlobPathPrefix>/64mbfiles/</BlobPathPrefix> </BlobList>
- <!-- FileList/prefix/Share list for Azure File storage for export  -->
+ <!-- FileList/prefix/Share list for Azure Files for export -->
<AzureFileList> <FilePathPrefix>/64mbfiles/</FilePathPrefix> <FilePathPrefix>/4mbfiles/prefix2/subprefix</FilePathPrefix>
databox Data Box System Requirements Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-system-requirements-rest.md
We recommend that you review the information carefully before you connect to the
| Feature | Azure Storage | Data Box Blob storage | ||-||
-| Azure File storage | Cloud-based SMB file shares supported | Not supported |
+| Azure Files | Cloud-based SMB and NFS file shares supported | Not supported |
| Service encryption for data at Rest | 256-bit AES encryption | 256-bit AES encryption | | Storage account type | General-purpose and Azure blob storage accounts | General-purpose v1 only| | Blob name | 1,024 characters (2,048 bytes) | 880 characters (1,760 bytes)|
-| Block blob maximum size | 4.75 TB (100 MB X 50,000 blocks) | 4.75 TB (100 MB x 50,000 blocks) for Azure Data Box v 3.0 onwards.|
-| Page blob maximum size | 8 TB | 1 TB |
-| Page blob page size | 512 bytes | 4 KB |
+| Block blob maximum size | 4.75 TiB (100 MiB x 50,000 blocks) | 4.75 TiB (100 MiB x 50,000 blocks) for Azure Data Box v 3.0 onwards.|
+| Page blob maximum size | 8 TiB | 1 TiB |
+| Page blob page size | 512 bytes | 4 KiB |
## Supported API versions
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-overview.md
Title: Azure DDoS Protection Standard Overview description: Learn how the Azure DDoS Protection Standard, when combined with application design best practices, provides defense against DDoS attacks.-+ documentationcenter: na -+ ms.devlang: na na
ddos-protection Ddos Protection Partner Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-partner-onboarding.md
mms.devlang: na Last updated 08/28/2020-+ # Partnering with Azure DDoS Protection Standard This article describes partnering opportunities enabled by the Azure DDoS Protection Standard. This article is designed to help product managers and business development roles understand the investment paths and provide insight into the partnering value propositions.
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-ddos-protection.md
Title: Manage Azure DDoS Protection Standard using the Azure portal
description: Learn how to use Azure DDoS Protection Standard to mitigate an attack. documentationcenter: na--+ editor: '' tags: azure-resource-manager
na Last updated 05/17/2019-+
dms Dms Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/dms-overview.md
Previously updated : 02/20/2020 Last updated : 09/01/2021 # What is Azure Database Migration Service? Azure Database Migration Service is a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms with minimal downtime (online migrations). + ## Migrate databases to Azure with familiar tools Azure Database Migration Service integrates some of the functionality of our existing tools and services. It provides customers with a comprehensive, highly available solution. The service uses the [Data Migration Assistant](/sql/dma/dma-overview) to generate assessment reports that provide recommendations to guide you through the changes required prior to performing a migration. It's up to you to perform any remediation required. When you're ready to begin the migration process, Azure Database Migration Service performs all of the required steps. You can fire and forget your migration projects with peace of mind, knowing that the process takes advantage of best practices as determined by Microsoft.
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/migration-using-azure-data-studio.md
+
+ Title: Migrate using Azure Data Studio
+description: Learn how to use the Azure SQL Migration extension in Azure Data Studio to migrate databases with Azure Database Migration Service.
++++++++ Last updated : 09/01/2021+++
+# Migrate databases with Azure SQL Migration extension for Azure Data Studio (Preview)
+
+The Azure SQL Migration extension for [Azure Data Studio](/sql/azure-data-studio/what-is-azure-data-studio.md) enables you to use the new SQL Server assessment and migration capability in Azure Data Studio.
+
+## Architecture of Azure SQL Migration extension for Azure Data Studio
+
+Azure Database Migration Service (DMS) is one of the core components in the overall architecture. DMS provides a reliable migration orchestrator to enable database migrations to Azure SQL.
+Create or reuse an existing DMS using the Azure SQL Migration extension in Azure Data Studio (ADS).
+DMS uses Azure Data Factory's self-hosted integration runtime to access and upload valid backup files from your on-premises network share or your Azure Storage account.
+
+The workflow of the migration process is illustrated below.
++
+1. **Source SQL Server**: SQL Server instance on-premises, private cloud, or any public cloud virtual machine. All editions of SQL Server 2008 and above are supported.
+1. **Target Azure SQL**: Supported Azure SQL targets are Azure SQL Managed Instance or SQL Server on Azure Virtual Machines (registered with SQL IaaS Agent extension in [Full management mode](../azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md#management-modes))
+1. **Network File Share**: Server Message Block (SMB) network file share where backup files are stored for the database(s) to be migrated. Azure Storage blob containers and Azure Storage file share are also supported.
+1. **Azure Data Studio**: Download and install the [Azure SQL Migration extension in Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+1. **Azure DMS**: Azure service that orchestrates migration pipelines to do data movement activities from on-premises to Azure. DMS is associated with Azure Data Factory's (ADF) self-hosted integration runtime (IR) and provides the capability to register and monitor the self-hosted IR.
+1. **Self-hosted integration runtime (IR)**: Self-hosted IR should be installed on a machine that can connect to the source SQL Server and the backup files location. DMS provides the authentication keys and registers the self-hosted IR.
+1. **Backup files upload to Azure Storage**: DMS uses self-hosted IR to upload valid backup files from the on-premises backup location to your provisioned Azure Storage account. Data movement activities and pipelines are automatically created in the migration workflow to upload the backup files.
+1. **Restore backups on target Azure SQL**: DMS restores backup files from your Azure Storage account to the supported target Azure SQL.
+ > [!IMPORTANT]
+ > With online migration mode, DMS continuously uploads the source backup files to Azure Storage and restores them to the target until you complete the final step of cutting over to the target.
+ >
+ > In offline migration mode, DMS uploads the source backup files to Azure Storage and restores them to the target without requiring you to perform a cutover.
+
+## Prerequisites
+
+Azure Database Migration Service prerequisites that are common across all supported migration scenarios include the need to:
+
+* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio.md)
+* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension.md) from the Azure Data Studio marketplace
+* Have an Azure account that is assigned to one of the built-in roles listed below:
+ - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
+ - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Owner or Contributor role for the Azure subscription.
+* Create a target [Azure SQL Managed Instance](../azure-sql/managed-instance/instance-create-quickstart.md) or [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/create-sql-vm-portal.md).
+* Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission.
+* Use one of the following storage options for the full database and transaction log backup files:
+ - SMB network share
+ - Azure storage account file share or blob container
+
+ > [!IMPORTANT]
+ > - If your database backup files are provided in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure storage account in the same region where the Azure Database Migration Service instance is created.
+ > - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration.
+ > - You should take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server).
+ > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
+ > - Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
+* Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
+* The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) needs to be migrated to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine before migrating data. To learn more, see [Migrate a certificate of a TDE-protected database to Azure SQL Managed Instance](../azure-sql/managed-instance/tde-certificate-migrate.md) and [Move a TDE Protected Database to Another SQL Server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).
+ > [!TIP]
+ > If your database contains sensitive data that is protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), migration process using Azure Data Studio with DMS will automatically migrate your Always Encrypted keys to your target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine.
+
+* If your database backups are in a network file share, provide a machine to install [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups. The migration wizard provides the download link and authentication keys to download and install your self-hosted integration runtime. In preparation for the migration, ensure that the machine where you plan to install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled:
+
+ | Domain names | Outbound ports | Description |
+ | -- | -- | |
 | Public Cloud: `{datafactory}.{region}.datafactory.azure.net`<br> or `*.frontend.clouddatahub.net` <br> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br> China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to the Data Migration service. <br>For newly created Data Factories in the public cloud, locate the FQDN from your self-hosted integration runtime key, which is in the format `{datafactory}.{region}.datafactory.azure.net`. For older Data Factories, if you don't see the FQDN in your self-hosted integration runtime key, use `*.frontend.clouddatahub.net` instead. |
+ | `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled auto-update, you can skip configuring this domain. |
+ | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account for uploading database backups from your network share |
+
+ > [!TIP]
+ > If your database backup files are already provided in an Azure storage account, self-hosted integration runtime is not required during the migration process.
+
+* When using self-hosted integration runtime, make sure that the machine where the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located. Outbound port 445 should be enabled to allow access to the network file share; a quick connectivity check is sketched after this list.
+* If you're using the Azure Database Migration Service for the first time, ensure that the Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](/quickstart-create-data-migration-service-portal.md#register-the-resource-provider).
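
Before installing the self-hosted integration runtime, it can help to verify that the file share host is reachable on port 445 from that machine. The following is a minimal sketch; the host name is a hypothetical placeholder for your own file share server:

```python
import socket

def can_reach_smb(host: str, port: int = 445, timeout_s: float = 5.0) -> bool:
    """Return True if a TCP connection to the SMB port on the file share host succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

# Hypothetical file share host; replace with your own server name.
print(can_reach_smb("fileserver.contoso.com"))
```

A successful TCP connection only confirms that the port isn't blocked by a firewall; it doesn't validate SMB authentication or share permissions.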
+
+### Recommendations for using self-hosted integration runtime for database migrations
+- Use a single self-hosted integration runtime for multiple source SQL Server databases.
+- Install only one instance of self-hosted integration runtime on any single machine.
+- Associate only one self-hosted integration runtime with one DMS.
+- The self-hosted integration runtime uses resources (memory / CPU) on the machine where it's installed. Install the self-hosted integration runtime on a machine that is different from your source SQL Server. However, having the self-hosted integration runtime close to the data source reduces the time for the self-hosted integration runtime to connect to the data source.
+- Use the self-hosted integration runtime only when you have your database backups in an on-premises SMB network share. Self-hosted integration runtime isn't required for database migrations if your source database backups are already in Azure storage blob container.
+- We recommend up to 10 concurrent database migrations per self-hosted integration runtime on a single machine. To increase the number of concurrent database migrations, scale out the self-hosted integration runtime to up to four nodes or create separate self-hosted integration runtimes on different machines.
+- Configure self-hosted integration runtime to auto-update to automatically apply any new features, bug fixes, and enhancements that are released. To learn more, see [Self-hosted Integration Runtime Auto-update](../data-factory/self-hosted-integration-runtime-auto-update.md).
+
+## Known issues and limitations
+- Overwriting existing databases using DMS in your target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine isn't supported.
+- Configuring high availability and disaster recovery on your target to match source topology is not supported by DMS.
+- The following server objects aren't supported:
+ - Logins
+ - SQL Server Agent jobs
+ - Credentials
+ - SSIS packages
+ - Server roles
+ - Server audit
+- Automating migrations with Azure Data Studio using PowerShell / CLI isn't supported.
+- Migrating to Azure SQL Database isn't supported.
+- Azure storage accounts secured by specific firewall rules or configured with a private endpoint are not supported for migrations.
+- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL Migration extension in Azure Data Studio and can be reused for further database migrations.
+> [!IMPORTANT]
+> **Known issue when migrating multiple databases to SQL Server on Azure VM:** Concurrently migrating multiple databases to the same SQL Server on Azure VM results in migration failures for most databases. Ensure you only migrate a single database to a SQL Server on Azure VM at any point in time.
+
+### Regions
+Migrate SQL Server database(s) to your target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine in any of the following regions during Preview:
+- Australia East
+- Australia Southeast
+- Canada Central
+- Canada East
+- Central India
+- Central US
+- East US
+- East US 2
+- France Central
+- Japan East
+- North Central US
+- South Central US
+- Southeast Asia
+- South India
+- UK South
+- West Europe
+- West US
+- West US 2
+
+## Pricing
+- Azure Database Migration Service is free to use with the Azure SQL Migration extension in Azure Data Studio. You can migrate multiple SQL Server databases using the Azure Database Migration Service at no charge for the service or the Azure SQL Migration extension.
+- There's no data movement or data ingress cost for migrating your databases from on-premises to Azure. If the source database is moved from another region or an Azure VM, you may incur [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) based on your bandwidth provider and routing scenario.
+- Provide your own machine or on-premises server to install Azure Data Studio.
+- A self-hosted integration runtime is needed to access database backups from your on-premises network share.
+
+## Next steps
+
+- For an overview and installation of the Azure SQL Migration extension, see [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension.md).
event-hubs Authenticate Shared Access Signature https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/authenticate-shared-access-signature.md
SHA-256('https://<yournamespace>.servicebus.windows.net/'+'\n'+ 1438205742)
The token contains the non-hashed values so that the recipient can recompute the hash with the same parameters, verifying that the issuer is in possession of a valid signing key.
-The resource URI is the full URI of the Service Bus resource to which access is claimed. For example, http://<namespace>.servicebus.windows.net/<entityPath> or `sb://<namespace>.servicebus.windows.net/<entityPath>;` that is, `http://contoso.servicebus.windows.net/eh1`.
+The resource URI is the full URI of the Service Bus resource to which access is claimed. For example, `http://<namespace>.servicebus.windows.net/<entityPath>` or `sb://<namespace>.servicebus.windows.net/<entityPath>`; that is, `http://contoso.servicebus.windows.net/eh1`.
The URI must be percent-encoded. The shared access authorization rule used for signing must be configured on the entity specified by this URI, or by one of its hierarchical parents. For example, `http://contoso.servicebus.windows.net/eh1` or `http://contoso.servicebus.windows.net` in the previous example.
-A SAS token is valid for all resources prefixed with the <resourceURI> used in the signature-string.
+A SAS token is valid for all resources prefixed with the `<resourceURI>` used in the signature-string.
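
For illustration, the signature computation described above can be sketched in Python. This is a minimal example with placeholder namespace, policy, and key values, not an official SDK call:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri: str, policy_name: str, key: str, ttl_seconds: int = 3600) -> str:
    """Build a SAS token: HMAC-SHA256 over the percent-encoded resource URI and expiry."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    string_to_sign = f"{encoded_uri}\n{expiry}"
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    )
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}"
        f"&se={expiry}&skn={policy_name}"
    )

# Placeholder values for illustration only.
print(generate_sas_token("https://contoso.servicebus.windows.net/eh1", "RootManageSharedAccessKey", "<signing-key>"))
```

Because the percent-encoded resource URI is embedded in the token (the `sr` field), the token is valid for all resources prefixed with that URI, as noted above.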
> [!NOTE] > You generate an access token for Event Hubs using shared access policy. For more information, see [Shared access authorization policy](authorize-access-shared-access-signature.md#shared-access-authorization-policies).
event-hubs Event Hubs For Kafka Ecosystem Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-for-kafka-ecosystem-overview.md
Title: Use event hub from Apache Kafka app - Azure Event Hubs | Microsoft Docs description: This article provides information on Apache Kafka support by Azure Event Hubs. Previously updated : 09/25/2020 Last updated : 08/30/2021 # Use Azure Event Hubs from Apache Kafka applications Event Hubs provides an endpoint compatible with the Apache Kafka® producer and consumer APIs that can be used by most existing Apache Kafka client applications as an alternative to running your own Apache Kafka cluster. Event Hubs supports Apache Kafka's producer and consumer APIs clients at version 1.0 and above.
Scale in Event Hubs is controlled by how many [throughput units (TUs)](event-hub
### Is Apache Kafka the right solution for your workload?
-Coming from building applications using Apache Kafka, it will also useful to understand that Azure Event Hubs is part of a fleet of services which also includes [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md), and [Azure Event Grid](../event-grid/overview.md).
+Coming from building applications using Apache Kafka, it's also useful to understand that Azure Event Hubs is part of a fleet of services, which also includes [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md), and [Azure Event Grid](../event-grid/overview.md).
-While some providers of commercial distributions of Apache Kafka might suggest that Apache Kafka is a one-stop-shop for all your messaging platform needs, the reality is that Apache Kafka does not implement, for instance, the [competing-consumer](/azure/architecture/patterns/competing-consumers) queue pattern, does not have support for [publish-subscribe](/azure/architecture/patterns/publisher-subscriber) at a level that allows subscribers access to the incoming messages based on server-evaluated rules other than plain offsets, and it has no facilities to track the lifecycle of a job initiated by a message or sidelining faulty messages into a dead-letter queue, all of which are foundational for many enterprise messaging scenarios.
+While some providers of commercial distributions of Apache Kafka might suggest that Apache Kafka is a one-stop-shop for all your messaging platform needs, the reality is that Apache Kafka doesn't implement, for instance, the [competing-consumer](/azure/architecture/patterns/competing-consumers) queue pattern, doesn't have support for [publish-subscribe](/azure/architecture/patterns/publisher-subscriber) at a level that allows subscribers access to the incoming messages based on server-evaluated rules other than plain offsets, and it has no facilities to track the lifecycle of a job initiated by a message or sidelining faulty messages into a dead-letter queue, all of which are foundational for many enterprise messaging scenarios.
-To understand the differences between patterns and which pattern is best covered by which service, please review the [Asynchronous messaging options in Azure](/azure/architecture/guide/technology-choices/messaging) guidance. As an Apache Kafka user, you may find that communication paths you have so far realized with Kafka, can be realized with far less basic complexity and yet more powerful capabilities using either Event Grid or Service Bus.
+To understand the differences between patterns and which pattern is best covered by which service, see the [Asynchronous messaging options in Azure](/azure/architecture/guide/technology-choices/messaging) guidance. As an Apache Kafka user, you may find that communication paths you have so far realized with Kafka can be realized with far less complexity and yet more powerful capabilities using either Event Grid or Service Bus.
-If you need specific features of Apache Kafka that are not available through the Event Hubs for Apache Kafka interface or if your implementation pattern exceeds the [Event Hubs quotas](event-hubs-quotas.md), you can also run a [native Apache Kafka cluster in Azure HDInsight](../hdinsight/kafk).
+If you need specific features of Apache Kafka that aren't available through the Event Hubs for Apache Kafka interface or if your implementation pattern exceeds the [Event Hubs quotas](event-hubs-quotas.md), you can also run a [native Apache Kafka cluster in Azure HDInsight](../hdinsight/kafk).
## Security and authentication
-Every time you publish or consume events from an Event Hubs for Kafka, your client is trying to access the Event Hubs resources. You want to ensure that the resources are accessed using an authorized entity. When using Apache Kafka protocol with your clients, you can set your configuration for authentication and encryption using the SASL mechanisms. When using Event Hubs for Kafka requires the TLS-encryption (as all data in transit with Event Hubs is TLS encrypted). It can be done specifying the SASL_SSL option in your configuration file.
+Every time you publish or consume events from Event Hubs for Kafka, your client is trying to access the Event Hubs resources. You want to ensure that the resources are accessed using an authorized entity. When using the Apache Kafka protocol with your clients, you can set your configuration for authentication and encryption using the SASL mechanisms. Using Event Hubs for Kafka requires TLS encryption (all data in transit with Event Hubs is TLS encrypted), which you enable by specifying the SASL_SSL option in your configuration file.
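
As a minimal sketch of these settings in a Kafka client, assuming the `confluent_kafka` Python package, with placeholder namespace, event hub, and connection-string values (the PLAIN mechanism shown here corresponds to the SAS option described below):

```python
from confluent_kafka import Producer

# Placeholder namespace and event hub names; the password is the namespace or
# event hub connection string used with the PLAIN mechanism. Port 9093 is the
# Kafka endpoint on the Event Hubs namespace.
producer = Producer({
    "bootstrap.servers": "mynamespace.servicebus.windows.net:9093",
    "security.protocol": "SASL_SSL",   # TLS encryption is required by Event Hubs
    "sasl.mechanism": "PLAIN",
    "sasl.username": "$ConnectionString",
    "sasl.password": "<event hubs connection string>",
})

producer.produce("my-event-hub", value=b"hello from a Kafka client")
producer.flush()
```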
Azure Event Hubs provides multiple options to authorize access to your secure resources. - OAuth 2.0 - Shared access signature (SAS)
-#### OAuth 2.0
+### OAuth 2.0
Event Hubs integrates with Azure Active Directory (Azure AD), which provides an **OAuth 2.0** compliant centralized authorization server. With Azure AD, you can use Azure role-based access control (Azure RBAC) to grant fine grained permissions to your client identities. You can use this feature with your Kafka clients by specifying **SASL_SSL** for the protocol and **OAUTHBEARER** for the mechanism. For details about Azure roles and levels for scoping access, see [Authorize access with Azure AD](authorize-access-azure-active-directory.md). ```properties
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginMo
sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler ```
-#### Shared Access Signature (SAS)
+### Shared Access Signature (SAS)
Event Hubs also provides the **Shared Access Signatures (SAS)** for delegated access to Event Hubs for Kafka resources. Authorizing access using OAuth 2.0 token-based mechanism provides superior security and ease of use over SAS. The built-in roles can also eliminate the need for ACL-based authorization, which has to be maintained and managed by the user. You can use this feature with your Kafka clients by specifying **SASL_SSL** for the protocol and **PLAIN** for the mechanism. ```properties
For more **samples** that show how to use OAuth with Event Hubs for Kafka, see [
## Other Event Hubs features
-The Event Hubs for Apache Kafka feature is one of three protocols concurrently available on Azure Event Hubs, complementing HTTP and AMQP. You can write with any of these protocols and read with any another, so that your current Apache Kafka producers can continue publishing via Apache Kafka, but your reader can benefit from the the native integration with Event Hubs' AMQP interface, such as Azure Stream Analytics or Azure Functions. Reversely, you can readily integrate Azure Event Hubs into AMQP routing networks as an target endpoint, and yet read data through Apache Kafka integrations.
+The Event Hubs for Apache Kafka feature is one of three protocols concurrently available on Azure Event Hubs, complementing HTTP and AMQP. You can write with any of these protocols and read with any another, so that your current Apache Kafka producers can continue publishing via Apache Kafka, but your reader can benefit from the native integration with Event Hubs' AMQP interface, such as Azure Stream Analytics or Azure Functions. Reversely, you can readily integrate Azure Event Hubs into AMQP routing networks as a target endpoint, and yet read data through Apache Kafka integrations.
-Additionally, Event Hubs features such as [Capture](event-hubs-capture-overview.md), which enables extremely cost efficient long term archival via Azure Blob Storage and Azure Data Lake Storage, and [Geo Disaster-Recovery](event-hubs-geo-dr.md) also work with the Event Hubs for Kafka feature.
+Additionally, Event Hubs features such as [Capture](event-hubs-capture-overview.md), which enables extremely cost efficient long-term archival via Azure Blob Storage and Azure Data Lake Storage, and [Geo Disaster-Recovery](event-hubs-geo-dr.md) also work with the Event Hubs for Kafka feature.
## Apache Kafka feature differences The goal of Event Hubs for Apache Kafka is to provide access to Azure Event Hub's capabilities to applications that are locked into the Apache Kafka API and would otherwise have to be backed by an Apache Kafka cluster.
-As explained [above](#is-apache-kafka-the-right-solution-for-your-workload), the Azure Messaging fleet provides rich and robust coverage for a multitude of messaging scenarios, and although the following features are not currently supported through Event Hubs' support for the Apache Kafka API, we point out where and how the desired capability is available.
+As explained [above](#is-apache-kafka-the-right-solution-for-your-workload), the Azure Messaging fleet provides rich and robust coverage for a multitude of messaging scenarios, and although the following features aren't currently supported through Event Hubs' support for the Apache Kafka API, we point out where and how the desired capability is available.
### Transactions
-[Azure Service Bus](../service-bus-messaging/service-bus-transactions.md) has robust transaction support that allows receiving and settling messages and sessions while sending outbound messages resulting from message processing to multiple target entities under the consistency protection of a transaction. The feature set not only allows for exactly-once processing of each message in a sequence, but also avoids the risk of another consumer inadvertently reprocessing the same messages as it would be the case with Apache Kafka. Service Bus is the recommended service for transactional message workloads.
+[Azure Service Bus](../service-bus-messaging/service-bus-transactions.md) has robust transaction support that allows receiving and settling messages and sessions while sending outbound messages resulting from message processing to multiple target entities under the consistency protection of a transaction. The feature set not only allows for exactly once processing of each message in a sequence, but also avoids the risk of another consumer inadvertently reprocessing the same messages as it would be the case with Apache Kafka. Service Bus is the recommended service for transactional message workloads.
### Compression
The payload of any Event Hub event is a byte stream and the content can be compr
### Log Compaction
-Apache Kafka log compaction is a feature that allows evicting all but the last record of each key from a partition, which effectively turns an Apache Kafka topic into a key-value store where the last value added overrides the previous one. This feature is presently not implemented by Azure Event Hubs. The key-value store pattern, even with frequent updates, is far better supported by database services like [Azure Cosmos DB](../cosmos-db/introduction.md). Please refer to the [Log Projection](event-hubs-federation-overview.md#log-projections) topic in the Event Hubs federation guidance for more details.
+Apache Kafka log compaction is a feature that allows evicting all but the last record of each key from a partition, which effectively turns an Apache Kafka topic into a key-value store where the last value added overrides the previous one. This feature is presently not implemented by Azure Event Hubs. The key-value store pattern, even with frequent updates, is far better supported by database services like [Azure Cosmos DB](../cosmos-db/introduction.md). For more information, see [Log Projection](event-hubs-federation-overview.md#log-projections).
### Kafka Streams
-Kafka Streams is a client library for stream analytics that is part of the Apache Kafka open source project, but is separate from the Apache Kafka event stream broker.
+Kafka Streams is a client library for stream analytics that is part of the Apache Kafka open-source project, but is separate from the Apache Kafka event stream broker.
-The most common reason Azure Event Hubs customers ask for Kafka Streams support is because they are interested in Confluent's "ksqlDB" product. "ksqlDB" is a proprietary shared source project that is [licensed such](https://github.com/confluentinc/ksql/blob/master/LICENSE) that no vendor "offering software-as-a-service, platform-as-a-service, infrastructure-as-a-service or other similar online services that competes with Confluent products or services" is permitted to use or offer "ksqlDB" support. Practically, if you use ksqlDB, you must either operate Kafka yourself or you must use ConfluentΓÇÖs cloud offerings. The licensing terms might also affect Azure customers who offer services for a purpose excluded by the license.
+The most common reason Azure Event Hubs customers ask for Kafka Streams support is because they're interested in Confluent's "ksqlDB" product. "ksqlDB" is a proprietary shared-source project that is [licensed such](https://github.com/confluentinc/ksql/blob/master/LICENSE) that no vendor "offering software-as-a-service, platform-as-a-service, infrastructure-as-a-service, or other similar online services that compete with Confluent products or services" is permitted to use or offer "ksqlDB" support. Practically, if you use ksqlDB, you must either operate Kafka yourself or you must use Confluent's cloud offerings. The licensing terms might also affect Azure customers who offer services for a purpose excluded by the license.
Standalone and without ksqlDB, Kafka Streams has fewer capabilities than many alternative frameworks and services, most of which have built-in streaming SQL interfaces, and all of which integrate with Azure Event Hubs today:
Standalone and without ksqlDB, Kafka Streams has fewer capabilities than many al
- [Apache Flink](event-hubs-kafka-flink-tutorial.md) - [Akka Streams](event-hubs-kafka-akka-streams-tutorial.md)
-The listed services and frameworks can generally acquire event streams and reference data directly from a diverse set of sources through adapters. Kafka Streams can only acquire data from Apache Kafka and your analytics projects are therefore locked into Apache Kafka. To use data from other sources, you are required to first import data into Apache Kafka with the Kafka Connect framework.
+The listed services and frameworks can generally acquire event streams and reference data directly from a diverse set of sources through adapters. Kafka Streams can only acquire data from Apache Kafka and your analytics projects are therefore locked into Apache Kafka. To use data from other sources, you're required to first import data into Apache Kafka with the Kafka Connect framework.
If you must use the Kafka Streams framework on Azure, [Apache Kafka on HDInsight](../hdinsight/kafk) will provide you with that option. Apache Kafka on HDInsight provides full control over all configuration aspects of Apache Kafka, while being fully integrated with various aspects of the Azure platform, from fault/update domain placement to network isolation to monitoring integration.
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-linkvnet-arm.md
Register-AzProviderFeature -FeatureName ExpressRouteVnetPeeringGatewayBypass -Pr
``` > [!NOTE]
+> Any connections configured for FastPath in the target subscription will be enrolled in this preview. We do not advise enabling this preview in production subscriptions.
> If you already have FastPath configured and want to enroll in the preview feature, you need to do the following: > 1. Enroll in the FastPath preview feature with the Azure PowerShell command above. > 1. Disable and then re-enable FastPath on the target connection.
frontdoor Front Door Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-geo-filtering.md
na ms.devlang: na Previously updated : 09/28/2020 Last updated : 08/31/2021
By default, Azure Front Door will respond to all user requests regardless of the
A WAF policy contains a set of custom rules. The rule consists of match conditions, an action, and a priority. In a match condition, you define a match variable, operator, and match value. For a geo filtering rule, a match variable is REMOTE_ADDR, the operator is GeoMatch, and the value is a two letter country/region code of interest. "ZZ" country code or "Unknown" country captures IP addresses that are not yet mapped to a country in our dataset. You may add ZZ to your match condition to avoid false positives. You can combine a GeoMatch condition and a REQUEST_URI string match condition to create a path-based geo-filtering rule. - You can configure a geo-filtering policy for your Front Door by using [Azure PowerShell](front-door-tutorial-geo-filtering.md) or by using a [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering).
+> [!IMPORTANT]
+> Include the country code **ZZ** whenever you use geo-filtering. The **ZZ** country code (or *Unknown* country) captures IP addresses that are not yet mapped to a country in our dataset. This avoids false positives.
+ ## Country/Region code reference |Country/Region code | Country/Region name |
genomics Business Continuity Genomics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/business-continuity-genomics.md
description: This overview describes the capabilities that Microsoft Genomics pr
keywords: business continuity, disaster recovery -+ -+ Last updated 04/06/2018
genomics File Support Ticket Genomics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/file-support-ticket-genomics.md
description: This article describes how to file a support request to contact Microsoft Genomics if you're not able to resolve your issue with the troubleshooting guide or FAQ. keywords: troubleshooting, error, debugging, support -+ -+ Last updated 05/23/2018
genomics Overview What Is Genomics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/overview-what-is-genomics.md
Title: What is Microsoft Genomics?
description: Learn how Microsoft Genomics can power genome sequencing, using a cloud implementation of Burrows-Wheeler Aligner (BWA) and Genome Analysis Toolkit (GATK). -+ -+ Last updated 12/07/2017
genomics Quickstart Input Bam https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/quickstart-input-bam.md
Title: Submit a workflow using BAM file input
description: This article demonstrates how to submit a workflow to the Microsoft Genomics service if your input file is a single BAM file. -+ -+ Last updated 12/07/2017
genomics Quickstart Input Multiple https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/quickstart-input-multiple.md
description: This article demonstrates how to submit a workflow to the Microsoft Genomics service if your input file is multiple FASTQ or BAM files from the same sample. -+ -+ Last updated 02/05/2018
genomics Quickstart Input Pair Fastq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/quickstart-input-pair-fastq.md
Title: Submit a workflow using FASTQ file inputs
description: This article demonstrates how to submit a workflow to the Microsoft Genomics service if your input files are a single pair of FASTQ files. -+ -+ Last updated 12/07/2017
genomics Quickstart Input Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/quickstart-input-sas.md
Title: Workflow using shared access signatures
description: This article demonstrates how to submit a workflow to the Microsoft Genomics service using shared access signatures (SAS) instead of storage account keys. -+ -+ Last updated 03/02/2018
genomics Quickstart Run Genomics Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/quickstart-run-genomics-workflow-portal.md
Title: 'Quickstart: Run a workflow - Microsoft Genomics' description: The quickstart shows how to load input data into Azure Blob Storage and run a workflow through the Microsoft Genomics service. -+ -+ Last updated 01/11/2019
genomics Troubleshooting Guide Genomics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/troubleshooting-guide-genomics.md
description: Learn about troubleshooting strategies for using Microsoft Genomics, including error messages and how to resolve them. keywords: troubleshooting, error, debugging --++
genomics Version Release History Genomics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/version-release-history-genomics.md
Title: Version release history
description: The release history of updates to the Microsoft Genomics Python client for fixes and new functionality. -+ -+ Last updated 01/11/2019
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/resource-graph-samples.md
Title: Azure Resource Graph sample queries for management groups description: Sample Azure Resource Graph queries for management groups showing use of resource types and tables to access management group details. Previously updated : 08/27/2021 Last updated : 08/31/2021
governance Guest Configuration Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration-custom.md
A challenge in previous versions of DSC has been correcting drift at scale
without a lot of custom code and reliance on WinRM remote connections. Guest configuration solves this problem. Users of guest configuration have control over drift correction through
-[Remediation On Demand](/guest-configuration-policy-effects.md#remediation-on-demand-applyandmonitor).
+[Remediation On Demand](./guest-configuration-policy-effects.md#remediation-on-demand-applyandmonitor).
+
+## Maximum size of custom configuration package
+
+In Azure Automation state configuration, DSC configurations were
+[limited in size](../../../automation/automation-dsc-compile.md#compile-your-dsc-configuration-in-windows-powershell).
+Guest configuration supports a total package size of 100 MB (before
+compression). There is no specific limit on the size of the MOF file within
+the package.
## Special requirements for Get
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Policy description: Sample Azure Resource Graph queries for Azure Policy showing use of resource types and tables to access Azure Policy related resources and properties. Previously updated : 08/27/2021 Last updated : 08/31/2021
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/reference/supported-tables-resources.md
Title: Supported Azure Resource Manager resource types description: Provide a list of the Azure Resource Manager resource types supported by Azure Resource Graph and Change History. Previously updated : 08/27/2021 Last updated : 08/31/2021
For sample queries for this table, see [Resource Graph sample queries for guestc
- Sample query: [Count machines in scope of guest configuration policies](../samples/samples-by-category.md#count-machines-in-scope-of-guest-configuration-policies) - Sample query: [Count of non-compliant guest configuration assignments](../samples/samples-by-category.md#count-of-non-compliant-guest-configuration-assignments) - Sample query: [Find all reasons a machine is non-compliant for guest configuration assignments](../samples/samples-by-category.md#find-all-reasons-a-machine-is-non-compliant-for-guest-configuration-assignments)
- - Sample query: [Query details of guest configuration assignment reports](../samples/samples-by-category.md#query-details-of-guest-configuration-assignment-reports)
## healthresources
For sample queries for this table, see [Resource Graph sample queries for resour
- Citrix.Services/XenDesktopEssentials (Citrix Virtual Desktops Essentials) - conexlink.mycloudit/accounts - crypteron.datasecurity/apps
+- dynatrace.observability/monitors
- GitHub.Enterprise/accounts (GitHub AE) - gridpro.evops/accounts - gridpro.evops/accounts/eventrules
For sample queries for this table, see [Resource Graph sample queries for resour
- Sample query: [Count virtual machines by OS type](../samples/samples-by-category.md#count-virtual-machines-by-os-type) - Sample query: [Count virtual machines by OS type with extend](../samples/samples-by-category.md#count-virtual-machines-by-os-type-with-extend) - Sample query: [List all extensions installed on a virtual machine](../samples/samples-by-category.md#list-all-extensions-installed-on-a-virtual-machine)
+ - Sample query: [List machines that are not running and the last compliance status](../samples/samples-by-category.md#list-machines-that-are-not-running-and-the-last-compliance-status)
- Sample query: [List of virtual machines by availability state and power state with Resource Ids and resource Groups](../samples/samples-by-category.md#list-of-virtual-machines-by-availability-state-and-power-state-with-resource-ids-and-resource-groups) - Sample query: [List virtual machines with their network interface and public IP](../samples/samples-by-category.md#list-virtual-machines-with-their-network-interface-and-public-ip) - Sample query: [Show all virtual machines ordered by name in descending order](../samples/samples-by-category.md#show-all-virtual-machines-ordered-by-name-in-descending-order)
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.ConnectedCache/cacheNodes (Connected Cache Resources) - Microsoft.ConnectedVehicle/platformAccounts (Connected Vehicle Platforms) - microsoft.connectedvmwarevsphere/clusters
+- microsoft.connectedvmwarevsphere/datastores
- microsoft.connectedvmwarevsphere/resourcepools - Microsoft.connectedVMwareVSphere/vCenters (VMware vCenters) - Microsoft.ConnectedVMwarevSphere/VirtualMachines (VMware + AVS virtual machines)
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.DataProtection/BackupVaults (Backup vaults) - Microsoft.DataProtection/resourceGuards (Resource Guards (Preview)) - microsoft.dataprotection/resourceoperationgatekeepers
+- microsoft.datareplication/replicationfabrics
- microsoft.datareplication/replicationvaults - Microsoft.DataShare/accounts (Data Shares) - Microsoft.DBforMariaDB/servers (Azure Database for MariaDB servers)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.hardwaresecuritymodules/dedicatedhsms - Microsoft.HDInsight/clusterpools (HDInsight cluster pools) - Microsoft.HDInsight/clusterpools/clusters (HDInsight gen2 clusters)
+- microsoft.hdinsight/clusterpools/clusters/sessionclusters
- Microsoft.HDInsight/clusters (HDInsight clusters) - Microsoft.HealthBot/healthBots (Azure Health Bot) - Microsoft.HealthcareApis/services (Azure API for FHIR)
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.HpcWorkbench/instances (HPC Workbenches (preview)) - Microsoft.HybridCompute/machines (Servers - Azure Arc) - Sample query: [Get count and percentage of Arc-enabled servers by domain](../samples/samples-by-category.md#get-count-and-percentage-of-arc-enabled-servers-by-domain)
+ - Sample query: [List all extensions installed on an Azure Arc-enabled server](../samples/samples-by-category.md#list-all-extensions-installed-on-an-azure-arc-enabled-server)
- microsoft.hybridcompute/machines/extensions
+ - Sample query: [List all extensions installed on an Azure Arc-enabled server](../samples/samples-by-category.md#list-all-extensions-installed-on-an-azure-arc-enabled-server)
- Microsoft.HybridCompute/privateLinkScopes (Azure Arc Private Link Scopes) - Microsoft.HybridData/dataManagers (StorSimple Data Managers) - Microsoft.HybridNetwork/devices (Azure Network Function Manager ΓÇô Devices (Preview))
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.Search/searchServices (Search services) - microsoft.security/automations - microsoft.security/iotsecuritysolutions
+- microsoft.security/securityconnectors
- Microsoft.SecurityDetonation/chambers (Security Detonation Chambers) - Microsoft.ServiceBus/namespaces (Service Bus Namespaces) - Microsoft.ServiceFabric/clusters (Service Fabric clusters)
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/samples-by-category.md
Title: List of sample Azure Resource Graph queries by category description: List sample queries for Azure Resource-Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 08/27/2021 Last updated : 08/31/2021
governance Samples By Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/samples-by-table.md
Title: List of sample Azure Resource Graph queries by table description: List sample queries for Azure Resource-Graph. Tables include Resources, ResourceContainers, PolicyResources, and more. Previously updated : 08/27/2021 Last updated : 08/31/2021
guides Azure Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/guides/operations/azure-operations-guide.md
You manage access to the virtual machine over the public IP address by using a n
Finally, as with the management of any computer system, you should provide security for an Azure virtual machine at the operating system by using security credentials and software firewalls.
-## Azure Storage
-
-Azure Storage is a Microsoft-managed service that provides durable, scalable, and redundant storage. You can add an Azure storage account as a resource to any resource group by using any resource deployment method. Azure includes four storage types: Blob storage, File Storage, Table storage, and Queue storage. When deploying a storage account, two account types are available, general-purpose and blob storage. A general-purpose storage account gives you access to all four storage types. Blob storage accounts are similar to general-purpose accounts, but contain specialized blobs that include hot and cold access tiers. For more information on blob storage, see [Azure Blob storage](../../storage/blobs/storage-blob-storage-tiers.md).
-
-Azure storage accounts can be configured with different levels of redundancy:
--- **Locally redundant storage** provides high availability by ensuring that three copies of all data are made synchronously before a write is deemed successful. These copies are stored in a single facility in a single region. The replicas reside in separate fault domains and upgrade domains. This means the data is available even if a storage node that's holding your data fails or is taken offline to be updated.--- **Geo-redundant storage** makes three synchronous copies of the data in the primary region for high availability, and then asynchronously makes three replicas in a paired region for disaster recovery.--- **Read-access geo-redundant storage** is geo-redundant storage plus the ability to read the data in the secondary region. This ability makes it suitable for partial disaster recovery. If there's a problem with the primary region, you can change your application to have read-only access to the paired region.
+## Azure storage
+Azure provides Azure Blob storage, Azure Files, Azure Table storage, and Azure Queue storage to address a variety of storage use cases, all with high durability, scalability, and redundancy guarantees. Azure storage services are managed through an Azure storage account that can be deployed as a resource to any resource group by using any resource deployment method.
### Use cases- Each storage type has a different use case. #### Blob storage
+The word *blob* is an acronym for *binary large object*. Blobs are unstructured files like those that you store on your computer. Blob storage can store any type of text or binary data, such as a document, media file, or application installer. Blob storage is also referred to as object storage.
-The word *blob* is an acronym for *binary large object*. Blobs are unstructured files like those that you store on your computer. Blob storage can store any type of text or binary data, such as a document, media file, or application installer. Blob storage is also referred to as object storage. Azure Blob storage also holds Azure Virtual Machines data disks.
-
-Azure Storage supports three kinds of blobs:
--- **Block blobs** are used to hold ordinary files up to 195 GB in size (4 MB × 50,000 blocks). The primary use case for block blobs is the storage of files that are read from beginning to end, such as media files or image files for websites. They are named block blobs because files larger than 64 MB must be uploaded as small blocks. These blocks are then consolidated (or committed) into the final blob.
+Azure Blob storage supports three kinds of blobs:
-- **Page blobs** are used to hold random-access files up to 1 TB in size. Page blobs are used primarily as the backing storage for the VHDs that provide durable disks for Azure Virtual Machines, the IaaS compute service in Azure. They are named page blobs because they provide random read/write access to 512-byte pages.
+- **Block blobs** are used to hold ordinary files up to 195 GiB in size (4 MiB × 50,000 blocks). The primary use case for block blobs is the storage of files that are read from beginning to end, such as media files or image files for websites. They are named block blobs because files larger than 64 MiB must be uploaded as small blocks. These blocks are then consolidated (or committed) into the final blob.
-- **Append blobs** consist of blocks like block blobs, but they are optimized for append operations. These are frequently used for logging information from one or more sources to the same blob. For example, you might write all of your trace logging to the same append blob for an application that's running on multiple VMs. A single append blob can be up to 195 GB.
+- **Page blobs** are used to hold random-access files up to 1 TiB in size. Page blobs are used primarily as the backing storage for the VHDs that provide durable disks for Azure Virtual Machines, the IaaS compute service in Azure. They are named page blobs because they provide random read/write access to 512-byte pages.
-For more information, see [Get started with Azure Blob storage using .NET](../../storage/blobs/storage-quickstart-blobs-dotnet.md).
+- **Append blobs** consist of blocks like block blobs, but they are optimized for append operations. These are frequently used for logging information from one or more sources to the same blob. For example, you might write all of your trace logging to the same append blob for an application that's running on multiple VMs. A single append blob can be up to 195 GiB.
-#### File storage
+For more information, see [What is Azure Blob storage](../../storage/blobs/storage-blobs-overview.md).
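As a quick illustration of working with block blobs, the following Azure CLI sketch creates a container and uploads a local file; the account, container, and file names are placeholders rather than values from this article.

```bash
# Create a container in an existing storage account (placeholder names).
az storage container create \
  --account-name mystorageacct \
  --name mycontainer \
  --auth-mode login

# Upload a local file as a block blob; the CLI handles the block splitting and commit.
az storage blob upload \
  --account-name mystorageacct \
  --container-name mycontainer \
  --name report.pdf \
  --file ./report.pdf \
  --auth-mode login
```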
-Azure File storage is a service that offers file shares in the cloud by using the standard Server Message Block (SMB) protocol. The service supports both SMB 2.1 and SMB 3.0. With Azure File storage, you can migrate applications that rely on file shares to Azure quickly and without costly rewrites. Applications running on Azure virtual machines, in cloud services, or from on-premises clients can mount a file share in the cloud. This is similar to how a desktop application mounts a typical SMB share. Any number of application components can then mount and access the File storage share simultaneously.
+#### Azure Files
+Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard Server Message Block (SMB) and Network File System (NFS) protocols. The service supports SMB 3.1.1, SMB 3.0, SMB 2.1, and NFS 4.1. With Azure Files, you can migrate applications that rely on file shares to Azure quickly and without costly rewrites. Applications running on Azure virtual machines, in cloud services, or from on-premises clients can mount a file share in the cloud.
-Because a File storage share is a standard SMB file share, applications running in Azure can access data in the share via file system I/O APIs. Developers can therefore use their existing code and skills to migrate existing applications. IT pros can use PowerShell cmdlets to create, mount, and manage File storage shares as part of the administration of Azure applications.
+Because Azure file shares expose standard SMB or NFS endpoints, applications running in Azure can access data in the share via file system I/O APIs. Developers can therefore use their existing code and skills to migrate existing applications. IT pros can use PowerShell cmdlets to create, mount, and manage Azure file shares as part of the administration of Azure applications.
-For more information, see [Get started with Azure File storage on Windows](../../storage/files/storage-how-to-use-files-windows.md) or [How to use Azure File storage with Linux](../../storage/files/storage-how-to-use-files-linux.md).
+For more information, see [What is Azure Files](../../storage/files/storage-files-introduction.md).
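For example, a share can be created with the Azure CLI and then mounted over SMB on a Linux host. This is a minimal sketch; the account, share, and key values are placeholders, and the mount requires the cifs-utils package.

```bash
# Create a file share in an existing storage account (placeholder names).
az storage share create --account-name mystorageacct --name myshare

# Mount the share over SMB 3.0 on a Linux host (requires cifs-utils; the key is a placeholder).
sudo mkdir -p /mnt/myshare
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/myshare \
  -o vers=3.0,username=mystorageacct,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino
```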
#### Table storage- Azure Table storage is a service that stores structured NoSQL data in the cloud. Table storage is a key/attribute store with a schema-less design. Because Table storage is schema-less, it's easy to adapt your data as the needs of your application evolve. Access to data is fast and cost-effective for all kinds of applications. Table storage is typically significantly lower in cost than traditional SQL for similar volumes of data. You can use Table storage to store flexible datasets, such as user data for web applications, address books, device information, and any other type of metadata that your service requires. You can store any number of entities in a table. A storage account can contain any number of tables, up to the capacity limit of the storage account.
You can use Table storage to store flexible datasets, such as user data for web
For more information, see [Get started with Azure Table storage](../../cosmos-db/tutorial-develop-table-dotnet.md). #### Queue storage- Azure Queue storage provides cloud messaging between application components. In designing applications for scale, application components are often decoupled so that they can scale independently. Queue storage delivers asynchronous messaging for communication between application components, whether they are running in the cloud, on the desktop, on an on-premises server, or on a mobile device. Queue storage also supports managing asynchronous tasks and building process workflows. For more information, see [Get started with Azure Queue storage](../../storage/queues/storage-dotnet-how-to-use-queues.md).
In addition to deploying Azure resources individually, you can use the Azure Pow
#### Command-line interface (CLI)
-As with the PowerShell module, the Azure command-line Interface provides deployment automation and can be used on Windows, OS X, or Linux systems. You can use the Azure CLI **storage account create** command to create a storage account. For more information, see [Using the Azure CLI with Azure Storage.](../../storage/blobs/storage-quickstart-blobs-cli.md)
+As with the PowerShell module, the Azure command-line interface (CLI) provides deployment automation and can be used on Windows, macOS, or Linux systems. You can use the Azure CLI **storage account create** command to create a storage account. For more information, see [Using the Azure CLI with Azure Storage](../../storage/blobs/storage-quickstart-blobs-cli.md).
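For example, a minimal sketch of the command looks like the following; the resource group, account name, location, and SKU are placeholders.

```bash
# Create a general-purpose v2 storage account with locally redundant storage.
az storage account create \
  --name mystorageacct \
  --resource-group myResourceGroup \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2
```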
Likewise, you can use the Azure CLI to deploy an Azure Resource Manager template. For more information, see [Deploy resources with Resource Manager templates and Azure CLI](../../azure-resource-manager/templates/deploy-cli.md).
-### Access and security for Azure Storage
+### Access and security for Azure storage services
-Azure Storage is accessed in various ways, including though the Azure portal, during VM creation and operation, and from Storage client libraries.
+Azure storage services are accessed in various ways, including through the Azure portal, during VM creation and operation, and from Storage client libraries.
#### Virtual machine disks
-When you're deploying a virtual machine, you also need to create a storage account to hold the virtual machine operating system disk and any additional data disks. You can select an existing storage account or create a new one. Because the maximum size of a blob is 1,024 GB, a single VM disk has a maximum size of 1,023 GB. To configure a larger data disk, you can present multiple data disks to the virtual machine and pool them together as a single logical disk. For more information, see "Manage Azure disks" for [Windows](../../virtual-machines/windows/tutorial-manage-data-disk.md) and [Linux](../../virtual-machines/linux/tutorial-manage-disks.md).
+When you're deploying a virtual machine, you also need to create a storage account to hold the virtual machine operating system disk and any additional data disks. You can select an existing storage account or create a new one. Because the maximum size of a blob is 1,024 GiB, a single VM disk has a maximum size of 1,023 GiB. To configure a larger data disk, you can present multiple data disks to the virtual machine and pool them together as a single logical disk. For more information, see "Manage Azure disks" for [Windows](../../virtual-machines/windows/tutorial-manage-data-disk.md) and [Linux](../../virtual-machines/linux/tutorial-manage-disks.md).
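As an illustration of presenting multiple data disks that can then be pooled inside the guest OS, the following Azure CLI sketch attaches two new empty data disks to an existing VM; the names and sizes are placeholders.

```bash
# Attach two new empty data disks to an existing VM (placeholder names and sizes).
az vm disk attach --resource-group myResourceGroup --vm-name myVM --name myDataDisk1 --new --size-gb 512
az vm disk attach --resource-group myResourceGroup --vm-name myVM --name myDataDisk2 --new --size-gb 512
```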
#### Storage tools
Azure storage accounts can be accessed through many different storage explorers,
#### Storage API
-Storage resources can be accessed by any language that can make HTTP/HTTPS requests. Additionally, Azure Storage offers programming libraries for several popular languages. These libraries simplify working with Azure Storage by handling details such as synchronous and asynchronous invocation, batching of operations, exception management, and automatic retries. For more information, see [Azure Storage service REST API reference](/rest/api/storageservices/Azure-Storage-Services-REST-API-Reference).
+Storage resources can be accessed by any language that can make HTTP/HTTPS requests. Additionally, the Azure storage services offer programming libraries for several popular languages. These libraries simplify working with the Azure storage platform by handling details such as synchronous and asynchronous invocation, batching of operations, exception management, and automatic retries. For more information, see [Azure storage services REST API reference](/rest/api/storageservices/Azure-Storage-Services-REST-API-Reference).
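For instance, the blobs in a container can be listed with a plain HTTPS request against the Blob service REST API. This sketch assumes a pre-generated SAS token and uses placeholder account and container names.

```bash
# List the blobs in a container using the Blob service REST API.
# <sas-token> stands for a pre-generated SAS query string such as "sv=...&ss=b&...&sig=...".
curl "https://mystorageacct.blob.core.windows.net/mycontainer?restype=container&comp=list&<sas-token>"
```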
#### Storage access keys
hdinsight R Server Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/r-server/r-server-storage.md
ML Services on HDInsight can use different storage solutions to persist data, co
- [Azure Blob storage](https://azure.microsoft.com/services/storage/blobs/) - [Azure Data Lake Storage Gen1](https://azure.microsoft.com/services/storage/data-lake-storage/)-- [Azure File storage](https://azure.microsoft.com/services/storage/files/)
+- [Azure Files](https://azure.microsoft.com/services/storage/files/)
-You also have the option of accessing multiple Azure storage accounts or containers with your HDInsight cluster. Azure File storage is a convenient data storage option for use on the edge node that enables you to mount an Azure storage file share to, for example, the Linux file system. But Azure File shares can be mounted and used by any system that has a supported operating system such as Windows or Linux.
+You also have the option of accessing multiple Azure storage accounts or containers with your HDInsight cluster. Azure Files is a convenient data storage option for use on the edge node that enables you to mount an Azure file share to, for example, the Linux file system. But Azure file shares can be mounted and used by any system that has a supported operating system such as Windows or Linux.
When you create an Apache Hadoop cluster in HDInsight, you specify either an **Azure Blob storage** account or **Data Lake Storage Gen1**. A specific storage container from that account holds the file system for the cluster that you create (for example, the Hadoop Distributed File System). For more information and guidance, see:
hadoop fs -copyFromLocal /usr/lib64/R Server-7.4.1/library/RevoScaleR/SampleData
hadoop fs ΓÇôls adl://rkadl1.azuredatalakestore.net/share ```
-## Use Azure File storage with ML Services on HDInsight
+## Use Azure Files with ML Services on HDInsight
There's also a convenient data storage option for use on the edge node called [Azure Files](https://azure.microsoft.com/services/storage/files/). It enables you to mount an Azure Storage file share to the Linux file system. This option can be handy for storing data files, R scripts, and result objects that might be needed later, especially when it makes sense to use the native file system on the edge node rather than HDFS. A major benefit of Azure Files is that the file shares can be mounted and used by any system that has a supported OS such as Windows or Linux. For example, it can be used by another HDInsight cluster that you or someone on your team has, by an Azure VM, or even by an on-premises system. For more information, see: -- [How to use Azure File storage with Linux](../../storage/files/storage-how-to-use-files-linux.md)-- [How to use Azure File storage on Windows](../../storage/files/storage-dotnet-how-to-use-files.md)
+- [How to use Azure Files with Linux](../../storage/files/storage-how-to-use-files-linux.md)
+- [How to use Azure Files on Windows](../../storage/files/storage-dotnet-how-to-use-files.md)
## Next steps
healthcare-apis Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/authentication-authorization.md
The Healthcare APIs typically expect a [JSON Web Token](https://en.wikipedia.org
[ ![JSON web token signature.](media/azure-access-token.png) ](media/azure-access-token.png#lightbox)
-You can use online tools such as [https://jwt.ms](https://jwt.ms/) or [https://jwt.io](https://jwt.io/) to view the token content. For example, you can view the claims details.
+You can use online tools such as [https://jwt.ms](https://jwt.ms/) to view the token content. For example, you can view the claims details.
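If you'd rather not paste a token into a website, the claims can also be inspected locally with standard shell tools. This is a rough sketch only: JWT segments are base64url-encoded without padding, so the decode step may need minor adjustment.

```bash
# Print the claims (middle segment) of an access token stored in $TOKEN.
# JWT segments are base64url-encoded without padding, so appending one or two '='
# characters may be needed for a clean decode.
echo "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+' | base64 --decode
```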
|**Claim type** |**Value** |**Notes** | |||-|
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
In this quickstart, you'll learn how to deploy the IoT Connector in the Azure po
It's important that you have the following prerequisites completed before you begin the steps of creating an IoT Connector instance in Azure Healthcare APIs. * [Azure account](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc)
-* [Resource group deployed in the Azure portal](https://docs.microsoft.com/azure/azure-resource-manager/management/manage-resource-groups-portal)
-* [Event Hubs namespace and Event Hub deployed in the Azure portal](https://docs.microsoft.com/azure/event-hubs/event-hubs-create)
+* [Resource group deployed in the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md)
+* [Event Hubs namespace and Event Hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md)
* [Workspace deployed in Azure Healthcare APIs](../workspace-overview.md) * [FHIR service deployed in Azure Healthcare APIs](../fhir/fhir-portal-quickstart.md)
Under the **Basics** tab, complete the required fields under **Instance details*
The Event Hub name is the name of the **Event Hubs Instance** that you've deployed.
- For information about Azure Event Hubs, see [Quickstart: Create an Event Hub using Azure portal](https://docs.microsoft.com/azure/event-hubs/event-hubs-create#create-an-event-hubs-namespace).
+ For information about Azure Event Hubs, see [Quickstart: Create an Event Hub using Azure portal](../../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace).
3. Enter the **Consumer Group**.
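If you prefer to script these Event Hubs prerequisites rather than use the portal, a minimal Azure CLI sketch looks like the following; the resource group, namespace, event hub, and consumer group names are placeholders.

```bash
# Create the Event Hubs namespace, event hub, and consumer group referenced in the steps above.
az eventhubs namespace create --resource-group myResourceGroup --name myehnamespace --location eastus
az eventhubs eventhub create --resource-group myResourceGroup --namespace-name myehnamespace --name devicedata
az eventhubs eventhub consumer-group create --resource-group myResourceGroup --namespace-name myehnamespace \
  --eventhub-name devicedata --name iotconnector
```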
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-architecture.md
Title: Architectural concepts in Azure IoT Central | Microsoft Docs
description: This article introduces key concepts relating the architecture of Azure IoT Central Previously updated : 12/19/2020 Last updated : 08/31/2021
In Azure IoT Central, the data that a device can exchange with your application
To learn more about how devices connect to your Azure IoT Central application, see [Device connectivity](concepts-get-connected.md).
-## Azure IoT Edge devices
+### Azure IoT Edge devices
-As well as devices created using the [Azure IoT SDKs](https://github.com/Azure/azure-iot-sdks), you can also connect [Azure IoT Edge devices](../../iot-edge/about-iot-edge.md) to an IoT Central application. IoT Edge lets you run cloud intelligence and custom logic directly on IoT devices managed by IoT Central. The IoT Edge runtime enables you to:
+In addition to devices created using the [Azure IoT SDKs](https://github.com/Azure/azure-iot-sdks), you can also connect [Azure IoT Edge devices](../../iot-edge/about-iot-edge.md) to an IoT Central application. IoT Edge lets you run cloud intelligence and custom logic directly on IoT devices managed by IoT Central. You can also use IoT Edge as a gateway to enable other downstream devices to connect to IoT Central.
-- Install and update workloads on the device.-- Maintain IoT Edge security standards on the device.-- Ensure that IoT Edge modules are always running.-- Report module health to the cloud for remote monitoring.-- Manage communication between downstream leaf devices and an IoT Edge device, between modules on an IoT Edge device, and between an IoT Edge device and the cloud.-
-![Azure IoT Central with Azure IoT Edge](./media/concepts-architecture/iotedge.png)
-
-IoT Central enables the following capabilities to for IoT Edge devices:
--- Device templates to describe the capabilities of an IoT Edge device, such as:
- - Deployment manifest upload capability, which helps you manage a manifest for a fleet of devices.
- - Modules that run on the IoT Edge device.
- - The telemetry each module sends.
- - The properties each module reports.
- - The commands each module responds to.
- - The relationships between an IoT Edge gateway device and downstream device.
- - Cloud properties that aren't stored on the IoT Edge device.
- - Customizations that change how the UI shows device capabilities.
- - Device views and forms.
-
- For more information, see the [Connect Azure IoT Edge devices to an Azure IoT Central application](./concepts-iot-edge.md) article.
--- The ability to provision IoT Edge devices at scale using Azure IoT device provisioning service-- Rules and actions.-- Custom dashboards and analytics.-- Continuous data export of telemetry from IoT Edge devices.-
-### IoT Edge device types
-
-IoT Central classifies IoT Edge device types as follows:
--- Leaf devices. An IoT Edge device can have downstream leaf devices, but these devices aren't provisioned in IoT Central.-- Gateway devices with downstream devices. Both gateway device and downstream devices are provisioned in IoT Central-
-![IoT Central with IoT Edge Overview](./media/concepts-architecture/gatewayedge.png)
-
-> [!NOTE]
-> IoT Central currently doesn't support connecting an IoT Edge device as a downstream device to an IoT Edge gateway. This is because all devices that connect to IoT Central are provisioned using the Device Provisioning Service (DPS) and DPS doesn't support nested IoT Edge scenarios.
-
-### IoT Edge patterns
-
-IoT Central supports the following IoT Edge device patterns:
-
-#### IoT Edge as leaf device
-
-![IoT Edge as leaf device](./media/concepts-architecture/edgeasleafdevice.png)
-
-The IoT Edge device is provisioned in IoT Central and any downstream devices and their telemetry is represented as coming from the IoT Edge device. Downstream devices connected to the IoT Edge device aren't provisioned in IoT Central.
-
-#### IoT Edge gateway device connected to downstream devices with identity
-
-![IoT Edge with downstream device identity](./medieviceidentity.png)
-
-The IoT Edge device is provisioned in IoT Central along with the downstream devices connected to the IoT Edge device. Runtime support for provisioning downstream devices through the gateway isn't currently supported.
-
-#### IoT Edge gateway device connected to downstream devices with identity provided by the IoT Edge gateway
-
-![IoT Edge with downstream device without identity](./medieviceidentity.png)
-
-The IoT Edge device is provisioned in IoT Central along with the downstream devices connected to the IoT Edge device. Runtime support of gateway providing identity to downstream devices and provisioning of downstream devices isn't currently supported. If you bring your own identity translation module, IoT Central can support this pattern.
+To learn more, see [Connect Azure IoT Edge devices to an Azure IoT Central application](concepts-iot-edge.md).
## Cloud gateway
In an Azure IoT Central application, you can [create and run jobs](howto-manage-
## Role-based access control (RBAC)
-Every IoT Central application has its own built-in RBAC system. An [administrator can define access rules](howto-manage-users-roles.md) for an Azure IoT Central application using one of the predefined roles or by creating a custom role. Roles determine what areas of the application a user has access to and what actions they can perform.
+Every IoT Central application has its own built-in RBAC system. An [administrator can define access rules](howto-manage-users-roles.md) for an Azure IoT Central application using one of the predefined roles or by creating a custom role. Roles determine what areas of the application a user has access to and what they can do.
## Security
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-iot-edge.md
Title: Azure IoT Edge and Azure IoT Central | Microsoft Docs
description: Understand how to use Azure IoT Edge with an IoT Central application. Previously updated : 02/19/2021 Last updated : 08/31/2021
Azure IoT Edge moves cloud analytics and custom business logic to devices so you
This article describes:
+* IoT Edge gateway patterns with IoT Central.
* How IoT Edge devices connect to an IoT Central application. * How to use IoT Central to manage your IoT Edge devices.
To learn more about IoT Edge, see [What is Azure IoT Edge?](../../iot-edge/about
## IoT Edge
+![Azure IoT Central with Azure IoT Edge](./media/concepts-iot-edge/iotedge.png)
+ IoT Edge is made up of three components: * *IoT Edge modules* are containers that run Azure services, partner services, or your own code. Modules are deployed to IoT Edge devices, and run locally on those devices. To learn more, see [Understand Azure IoT Edge modules](../../iot-edge/iot-edge-modules.md). * The *IoT Edge runtime* runs on each IoT Edge device, and manages the modules deployed to each device. The runtime consists of two IoT Edge modules: *IoT Edge agent* and *IoT Edge hub*. To learn more, see [Understand the Azure IoT Edge runtime and its architecture](../../iot-edge/iot-edge-runtime.md). * A *cloud-based interface* enables you to remotely monitor and manage IoT Edge devices. IoT Central is an example of a cloud interface.
+IoT Central enables the following capabilities for IoT Edge devices:
+
+* Device templates to describe the capabilities of an IoT Edge device, such as:
+ * Deployment manifest upload capability, which helps you manage a manifest for a fleet of devices.
+ * Modules that run on the IoT Edge device.
+ * The telemetry each module sends.
+ * The properties each module reports.
+ * The commands each module responds to.
+ * The relationships between an IoT Edge gateway device and downstream device.
+ * Cloud properties that aren't stored on the IoT Edge device.
+ * Customizations that change how the UI shows device capabilities.
+ * Device views and forms.
+* The ability to provision IoT Edge devices at scale using Azure IoT device provisioning service.
+* Rules and actions.
+* Custom dashboards and analytics.
+* Continuous data export of telemetry from IoT Edge devices.
+ An IoT Edge device can be: * A standalone device composed of modules. * A *gateway device*, with downstream devices connecting to it.
-## IoT Edge as a gateway
+![IoT Central with IoT Edge Overview](./media/concepts-iot-edge/gatewayedge.png)
-An IoT Edge device can operate as a gateway that provides a connection between other downstream devices on the network and your IoT Central application.
+A gateway device can be a:
-There are two gateway patterns:
-
-* In the *transparent gateway* pattern, the IoT Edge hub module behaves like IoT Central and handles connections from devices registered in IoT Central. Messages pass from downstream devices to IoT Central as if there's no gateway between them.
+* *Transparent gateway* where the IoT Edge hub module behaves like IoT Central and handles connections from devices registered in IoT Central. Messages pass from downstream devices to IoT Central as if there's no gateway between them.
> [!NOTE] > IoT Central currently doesn't support connecting an IoT Edge device as a downstream device to an IoT Edge transparent gateway. This is because all devices that connect to IoT Central are provisioned using the Device Provisioning Service (DPS) and DPS doesn't support nested IoT Edge scenarios.
-* In the *translation gateway* pattern, devices that can't connect to IoT Central on their own, connect to a custom IoT Edge module instead. The module in the IoT Edge device processes incoming downstream device messages and then forwards them to IoT Central.
+* *Translation gateway* where devices that can't connect to IoT Central on their own connect to a custom IoT Edge module instead. The module in the IoT Edge device processes incoming downstream device messages and then forwards them to IoT Central.
-The transparent and translation gateway patterns aren't mutually exclusive. A single IoT Edge device can function as both a transparent gateway and a translation gateway.
+A single IoT Edge device can function as both a transparent gateway and a translation gateway.
To learn more about the IoT Edge gateway patterns, see [How an IoT Edge device can be used as a gateway](../../iot-edge/iot-edge-as-gateway.md).
+## IoT Edge patterns
+
+IoT Central supports the following IoT Edge device patterns:
+
+### IoT Edge as leaf device
+
+![IoT Edge as leaf device](./media/concepts-iot-edge/edgeasleafdevice.png)
+
+The IoT Edge device is provisioned in IoT Central and any downstream devices and their telemetry is represented as coming from the IoT Edge device. Downstream devices connected to the IoT Edge device aren't provisioned in IoT Central.
+
+### IoT Edge gateway device connected to downstream devices with identity
+
+![IoT Edge with downstream device identity](./medieviceidentity.png)
+
+The IoT Edge device is provisioned in IoT Central along with the downstream devices connected to the IoT Edge device. Runtime support for provisioning downstream devices through the gateway isn't currently supported.
+
+### IoT Edge gateway device connected to downstream devices with identity provided by the IoT Edge gateway
+
+![IoT Edge with downstream device without identity](./medieviceidentity.png)
+
+The IoT Edge device is provisioned in IoT Central along with the downstream devices connected to the IoT Edge device. Currently, IoT Central doesn't have runtime support for a gateway to provide an identity and to provision downstream devices. If you bring your own identity translation module, IoT Central can support this pattern.
+ ### Downstream device relationships with a gateway and modules Downstream devices can connect to an IoT Edge gateway device through the *IoT Edge hub* module. In this scenario, the IoT Edge device is a transparent gateway:
iot-edge How To Install Iot Edge On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-on-windows.md
Install IoT Edge for Linux on Windows onto your target device if you have not al
Deploy-Eflow ```
- The `Deploy-Eflow` command takes optional parameters that help you customize your deployment.
+ >[!TIP]
+ >By default, the `Deploy-Eflow` command creates your Linux virtual machine with 1 GB of RAM, 1 vCPU core, and 16 GB of disk space. However, the resources your VM needs are highly dependent on the workloads you deploy. If your VM does not have sufficient memory to support your workloads, it will fail to start.
+ >
+ >You can customize the virtual machine's available resources using the `Deploy-Eflow` command's optional parameters.
+ >
+ >For example, the command below creates a virtual machine with 4 vCPU cores, 4 GB of RAM, and 20 GB of disk space:
+ >
+ > ```powershell
+ > Deploy-Eflow -cpuCount 4 -memoryInMB 4096 -vmDiskSize 20
+ > ```
+ >
+ >For information about all the optional parameters available, see [PowerShell functions for IoT Edge for Linux on Windows](reference-iot-edge-for-linux-on-windows-functions.md#deploy-eflow).
You can assign a GPU to your deployment to enable GPU-accelerated Linux modules. To gain access to these features, you will need to install the prerequisites detailed in [GPU acceleration for Azure IoT Edge for Linux on Windows](gpu-acceleration.md).
Install IoT Edge for Linux on Windows onto your target device if you have not al
>[!WARNING] >Enabling hardware device passthrough may increase security risks. Microsoft recommends a device mitigation driver from your GPU's vendor, when applicable. For more information, see [Deploy graphics devices using discrete device assignment](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda). -- 1. Enter 'Y' to accept the license terms. 1. Enter 'O' or 'R' to toggle **Optional diagnostic data** on or off, depending on your preference.
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
The **Deploy-Eflow** command is the main deployment method. The deployment comma
| acceptOptionalTelemetry | **Yes** or **No** | A shortcut to accept/deny optional telemetry and bypass the telemetry prompt. | | cpuCount | Integer value between 1 and the device's CPU cores | Number of CPU cores for the VM.<br><br>**Default value**: 1 vCore. | | memoryInMB | Integer value between 1024 and the maximum amount of free memory of the device |Memory allocated for the VM.<br><br>**Default value**: 1024 MB. |
-| vmDiskSize | Between 8 GB and 256 GB | Maximum disk size of the dynamically expanding virtual hard disk.<br><br>**Default value**: 16 GB. |
+| vmDiskSize | Between 8 GB and 256 GB | Maximum disk size of the dynamically expanding virtual hard disk.<br><br>**Default value**: 10 GB. |
| vswitchName | Name of the virtual switch | Name of the virtual switch assigned to the EFLOW VM. | | vswitchType | **Internal** or **External** | Type of the virtual switch assigned to the EFLOW VM. | | ip4Address | IPv4 Address in the range of the DCHP Server Scope | Static Ipv4 address of the EFLOW VM. _NOTE: Only supported with ICS Default Switch_. |
iot-fundamentals Iot Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/iot-support-help.md
If you do submit a new question to Stack Overflow, please use one or more of the
- [Azure Maps](https://stackoverflow.com/questions/tagged/azure-maps) - [Azure Percept](https://stackoverflow.com/questions/tagged/azure-percept)
-## Submit feedback on Azure Feedback
-
-<div class='icon is-large'>
- <img alt='UserVoice' src='https://docs.microsoft.com/media/logos/logo-uservoice.svg'>
-</div>
-
-To request new features, post them on Azure Feedback. Share your ideas for making Azure IoT services work better for the applications you develop:
-
-| Service | Azure Feedback URL |
-|-||
-| Azure IoT (Hub, DPS, SDKs) | https://feedback.azure.com/forums/321918-azure-iot |
-| Azure IoT Central | https://feedback.azure.com/forums/911455-azure-iot-central |
-| Azure IoT Device Catalog | https://feedback.azure.com/forums/916948-azure-iot-device-catalog |
-| Azure IoT Edge | https://feedback.azure.com/forums/907045-azure-iot-edge |
-| Azure IoT Solution Accelerators | https://feedback.azure.com/forums/916438-azure-iot-solution-accelerators |
-| Azure Maps | https://feedback.azure.com/forums/909172-azure-maps |
-| Azure Time Series Insights | https://feedback.azure.com/forums/906859-azure-time-series-insights |
-| Azure Digital Twins | https://feedback.azure.com/forums/916621-azure-digital-twins |
-| Azure Sphere | https://feedback.azure.com/forums/915433-azure-sphere |
- ## Stay informed of updates and new releases <div class='icon is-large'>
iot-hub Tutorial X509 Openssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-openssl.md
You now have both a root CA certificate and a subordinate CA certificate. You ca
1. In the Azure portal, navigate to your IoTHub and select **Settings > Certificates**.
-1. Select **Add** to add your new subordinate CA certificate.
+2. Select **Add** to add your new subordinate CA certificate.
-1. Enter a display name in the **Certificate Name** field, and select the PEM certificate file you created previously.
+3. Enter a display name in the **Certificate Name** field, and select the PEM certificate file you created previously.
-1. Select **Save**. Your certificate is shown in the certificate list with a status of **Unverified**. The verification process will prove that you own the certificate.
+> [!NOTE]
+> The .crt certificates created above are the same as .pem certificates. You can simply change the extension when uploading a certificate to prove possession, or you can use the following OpenSSL command:
+
+```bash
+openssl x509 -in mycert.crt -out mycert.pem -outform PEM
+```
+
+4. Select **Save**. Your certificate is shown in the certificate list with a status of **Unverified**. The verification process will prove that you own the certificate.
-1. Select the certificate to view the **Certificate Details** dialog.
+5. Select the certificate to view the **Certificate Details** dialog.
-1. Select **Generate Verification Code**. For more information, see [Prove Possession of a CA certificate](tutorial-x509-prove-possession.md).
+6. Select **Generate Verification Code**. For more information, see [Prove Possession of a CA certificate](tutorial-x509-prove-possession.md).
-1. Copy the verification code to the clipboard. You must set the verification code as the certificate subject. For example, if the verification code is BB0C656E69AF75E3FB3C8D922C1760C58C1DA5B05AAA9D0A, add that as the subject of your certificate as shown in step 9.
+7. Copy the verification code to the clipboard. You must set the verification code as the certificate subject. For example, if the verification code is BB0C656E69AF75E3FB3C8D922C1760C58C1DA5B05AAA9D0A, add that as the subject of your certificate as shown in step 9.
-1. Generate a private key.
+8. Generate a private key.
```bash
openssl genpkey -out pop.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
```
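To put the verification code into the certificate subject, you can pass it as the Common Name when creating a CSR from the key generated above. A sketch with a placeholder verification code (you can also type the code at the interactive Common Name prompt instead):

```bash
# Placeholder: substitute the verification code copied from the portal.
openssl req -new -key pop.key -out pop.csr -subj "/CN={verification-code}"
```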
iot-hub Tutorial X509 Self Sign https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-self-sign.md
You can authenticate a device to your IoT Hub using two self-signed device certificates. This is sometimes called thumbprint authentication because the certificates contain thumbprints (hash values) that you submit to the IoT hub. The following steps tell you how to create two self-signed certificates.
+> [!NOTE]
+> This example was created using Cygwin64 for Windows. Cygwin is an open-source tool collection that allows Unix or Linux applications to be run on Windows from within a Linux-like interface. Cygwin64 includes OpenSSL. If you are using Linux, you probably already have OpenSSL installed.
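To confirm that OpenSSL is available in your shell before you begin, you can check its version:

```bash
# Prints the installed OpenSSL version; an error here means OpenSSL isn't on your PATH.
openssl version
```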
+ ## Step 1 - Create a key for the first certificate ```bash
openssl req -text -in device1.csr -noout
## Step 4 - Self-sign certificate 1 ```bash
-openssl x509 -req -days 365 -in device1.csr -signkey device1.key -out device.crt
+openssl x509 -req -days 365 -in device1.csr -signkey device1.key -out device1.crt
```
-## Step 5 - Create a key for certificate 2
+## Step 5 - Create a key for the second certificate
+
+```bash
+openssl genpkey -out device2.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
+```
+
+## Step 6 - Create a CSR for the second certificate
When prompted, specify the same device ID that you used for certificate 1.
Organization Name (eg, company) [Default Company Ltd]:.
Organizational Unit Name (eg, section) []:. Common Name (eg, your name or your server hostname) []:{your-device-id} Email Address []:- ```
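The command that produces this prompt sequence follows the same pattern as for the first certificate; a sketch, assuming the `device2.key` created in step 5:

```bash
# Create a CSR for the second certificate; answer the prompts as described above.
openssl req -new -key device2.key -out device2.csr
```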
-## Step 6 - Self-sign certificate 2
+## Step 7 - Self-sign certificate 2
```bash openssl x509 -req -days 365 -in device2.csr -signkey device2.key -out device2.crt ```
-## Step 7 - Retrieve the thumbprint for certificate 1
+## Step 8 - Retrieve the thumbprint for certificate 1
```bash
-openssl x509 -in device.crt -noout -fingerprint
+openssl x509 -in device1.crt -noout -fingerprint
```
-## Step 8 - Retrieve the thumbprint for certificate 2
+## Step 9 - Retrieve the thumbprint for certificate 2
```bash openssl x509 -in device2.crt -noout -fingerprint ```
-## Step 9 - Create a new IoT device
+## Step 10 - Create a new IoT device
Navigate to your IoT Hub in the Azure portal and create a new IoT device identity with the following characteristics:
Navigate to your IoT Hub in the Azure portal and create a new IoT device identit
* Select the **X.509 Self-Signed** authentication type. * Paste the hex string thumbprints that you copied from your device primary and secondary certificates. Make sure that the hex strings have no colon delimiters. + ## Next Steps Go to [Testing Certificate Authentication](tutorial-x509-test-certificate.md) to determine if your certificate can authenticate your device to your IoT Hub.
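If you prefer to script this step rather than use the portal, the Azure CLI IoT extension provides an equivalent command. A sketch with placeholder values (as noted above, remove the colon delimiters from the thumbprints first):

```bash
# Placeholders: replace the hub name, device ID, and thumbprints with your own values.
az iot hub device-identity create \
  --hub-name {your-iot-hub-name} \
  --device-id {your-device-id} \
  --auth-method x509_thumbprint \
  --primary-thumbprint {device1-thumbprint-without-colons} \
  --secondary-thumbprint {device2-thumbprint-without-colons}
```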
key-vault Overview Renew Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/overview-renew-certificate.md
This article discusses how to renew your Azure Key Vault certificates.
To get notified about certificate lifetime events, you need to add a certificate contact. Certificate contacts contain the contact information used to send notifications triggered by certificate lifetime events. The contact information is shared by all the certificates in the key vault. A notification is sent to all the specified contacts for an event on any certificate in the key vault. ### Steps to set certificate notifications:
-First, add a certificate contact to your key vault. You can add using Azure portal or PowerShell cmdlet [`Add-AzureKeyVaultCertificateContact`](/powershell/module/azurerm.keyvault/add-azurekeyvaultcertificatecontact).
+First, add a certificate contact to your key vault. You can add a contact by using the Azure portal or the PowerShell cmdlet [Add-AzKeyVaultCertificateContact](/powershell/module/az.keyvault/add-azkeyvaultcertificatecontact).
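For example, a contact can be added with a single cmdlet call; a sketch with placeholder vault name and email address:

```powershell
# Placeholder values: replace the vault name and email address with your own.
Add-AzKeyVaultCertificateContact -VaultName "ContosoKeyVault" -EmailAddress "admin@contoso.com"
```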
Second, configure when you want to be notified about the certificate expiration. To configure the lifecycle attributes of the certificate, see [Configure certificate autorotation in Key Vault](./tutorial-rotate-certificates.md#update-lifecycle-attributes-of-a-stored-certificate).
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/overview.md
# What is Azure Key Vault Managed HSM?
-Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs. For pricing information please see Managed HSM Pools section on [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/).
+Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs (Hardware Security Modules). For pricing information, see the Managed HSM Pools section on the [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/).
## Why use Managed HSM?
key-vault Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Key Vault description: Sample Azure Resource Graph queries for Azure Key Vault showing use of resource types and tables to access Azure Key Vault related resources and properties. Previously updated : 08/27/2021 Last updated : 08/31/2021
load-balancer Load Balancer Multiple Ip Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-multiple-ip-cli.md
This article describes how to use Azure Load Balancer with multiple IP addresses on a secondary network interface (NIC). For this scenario, we have two VMs running Windows, each with a primary and a secondary NIC. Each of the secondary NICs has two IP configurations. Each VM hosts both websites contoso.com and fabrikam.com. Each website is bound to one of the IP configurations on the secondary NIC. We use Azure Load Balancer to expose two frontend IP addresses, one for each website, to distribute traffic to the respective IP configuration for the website. This scenario uses the same port number across both frontends, as well as both backend pool IP addresses.
-![LB scenario image](./media/load-balancer-multiple-ip/lb-multi-ip.PNG)
- ## Steps to load balance on multiple IP configurations To achieve the scenario outlined in this article complete the following steps:
load-balancer Load Balancer Multiple Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-multiple-ip-powershell.md
This article describes how to use Azure Load Balancer with multiple IP addresses on a secondary network interface (NIC). For this scenario, we have two VMs running Windows, each with a primary and a secondary NIC. Each of the secondary NICs has two IP configurations. Each VM hosts both websites contoso.com and fabrikam.com. Each website is bound to one of the IP configurations on the secondary NIC. We use Azure Load Balancer to expose two frontend IP addresses, one for each website, to distribute traffic to the respective IP configuration for the website. This scenario uses the same port number across both frontends, as well as both backend pool IP addresses.
-![LB scenario image](./media/load-balancer-multiple-ip/lb-multi-ip.PNG)
- ## Steps to load balance on multiple IP configurations [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
load-balancer Load Balancer Multiple Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-multiple-ip.md
Title: Load balancing on multiple IP configurations - Azure portal
+ Title: 'Tutorial: Load balance multiple IP configurations - Azure portal'
-description: In this article, learn about load balancing across primary and secondary IP configurations using the Azure portal.
-
+description: In this article, learn about load balancing across primary and secondary NIC configurations using the Azure portal.
---- Previously updated : 09/25/2017 ++ Last updated : 08/08/2021+
-# Load balancing on multiple IP configurations by using the Azure portal
+# Tutorial: Load balance multiple IP configurations using the Azure portal
-> [!div class="op_single_selector"]
-> * [Portal](load-balancer-multiple-ip.md)
-> * [PowerShell](load-balancer-multiple-ip-powershell.md)
-> * [CLI](load-balancer-multiple-ip-cli.md)
+To host multiple websites, you can use an additional network interface associated with a virtual machine. Azure Load Balancer supports load balancing across these configurations to keep the websites highly available.
-In this article, we're going to show you how to use Azure Load Balancer with multiple IP addresses on a secondary network interface controller (NIC). The following diagram illustrates our scenario:
+In this tutorial, you learn how to:
-![Load balancer scenario](./media/load-balancer-multiple-ip/lb-multi-ip.PNG)
+> [!div class="checklist"]
+> * Create and configure a virtual network, subnet, and NAT gateway.
+> * Create two Windows Server virtual machines
+> * Create a secondary NIC and network configurations for each virtual machine
+> * Create two Internet Information Services (IIS) websites on each virtual machine
+> * Bind the websites to the network configurations
+> * Create and configure an Azure Load Balancer
+> * Test the load balancer
-In our scenario, we're using the following configuration:
+## Prerequisites
-- Two virtual machines (VMs) that are running Windows.-- Each VM has a primary and a secondary NIC.-- Each secondary NIC has two IP configurations.-- Each VM hosts two websites: contoso.com and fabrikam.com.-- Each website is bound to an IP configuration on the secondary NIC.-- Azure Load Balancer is used to expose two front-end IP addresses, one for each website. The front-end addresses are used to distribute traffic to the respective IP configuration for each website.-- The same port number is used for both front-end IP addresses and back-end pool IP addresses.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Prerequisites
+## Create virtual network
-Our scenario example assumes that you have a resource group named **contosofabrikam** that is configured as follows:
+In this section, you'll create a virtual network for the load balancer and virtual machines.
-- The resource group includes a virtual network named **myVNet**.-- The **myVNet** network includes two VMs named **VM1** and **VM2**.-- VM1 and VM2 are in the same availability set named **myAvailset**. -- VM1 and VM2 each have a primary NIC named **VM1NIC1** and **VM2NIC1**, respectively. -- VM1 and VM2 each have a secondary NIC named **VM1NIC2** and **VM2NIC2**, respectively.
+1. Sign in to the [Azure portal](https://portal.azure.com).
-For more information about creating VMs with multiple NICs, see [Create a VM with multiple NICs by using PowerShell](../virtual-machines/windows/multiple-nics.md).
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
-## Perform load balancing on multiple IP configurations
+3. In **Virtual networks**, select **+ Create**.
-Complete the following steps to achieve the scenario outlined in this article.
+4. In **Create virtual network**, enter or select this information in the **Basics** tab:
-### Step 1: Configure the secondary NICs
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **Create new**. </br> In **Name** enter **TutorialLBIP-rg**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet** |
+ | Region | Select **(Europe) West Europe** |
-For each VM in your virtual network, add the IP configuration for the secondary NIC:
+5. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
-1. Browse to the Azure portal: https://portal.azure.com. Sign in with your Azure account.
+6. In the **IP Addresses** tab, enter this information:
-2. In the upper left of the screen, select the **Resource Group** icon. Then select the resource group where your VMs are located (for example, **contosofabrikam**). The **Resource groups** pane displays all of the resources and NICs for the VMs.
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.1.0.0/16** |
-3. For the secondary NIC of each VM, add the IP configuration:
+7. Under **Subnet name**, select the word **default**.
- 1. Select the secondary NIC that you want to configure.
-
- 2. Select **IP configurations**. In the next pane, near the top, select **Add**.
+8. In **Edit subnet**, enter this information:
- 3. Under **Add IP configurations**, add a second IP configuration to the NIC:
+ | Setting | Value |
+ |--|-|
+ | Subnet name | Enter **myBackendSubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
- 1. Enter a name for the secondary IP configuration. (For example, for VM1 and VM2, name the IP configuration **VM1NIC2-ipconfig2** and **VM2NIC2-ipconfig2**, respectively.)
+9. Select **Save**.
- 2. For the **Private IP address**, **Allocation** setting, select **Static**.
+10. Select the **Security** tab.
- 3. Select **OK**.
+11. Under **BastionHost**, select **Enable**. Enter this information:
-After the second IP configuration for the secondary NIC is complete, it's displayed under the **IP configurations** settings for the given NIC.
+ | Setting | Value |
+ |--|-|
+ | Bastion name | Enter **myBastionHost** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
+ | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
-### Step 2: Create the load balancer
-Create your load balancer for the configuration:
+12. Select the **Review + create** tab or select the **Review + create** button.
-1. Browse to the Azure portal: https://portal.azure.com. Sign in with your Azure account.
+13. Select **Create**.
-2. In the upper left of the screen, select **Create a resource** > **Networking** > **Load Balancer**. Next, select **Create**.
+## Create NAT gateway
-3. Under **Create load balancer**, type a name for your load balancer. In this scenario, we're using the name **mylb**.
+In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
-4. Under **Public IP address**, create a new public IP called **PublicIP1**.
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
-5. Under **Resource Group**, select the existing resource group for your VMs (for example, **contosofabrikam**). Select the location to deploy your load balancer to, and then select **OK**.
+2. In **NAT gateways**, select **+ Create**.
-The load balancer starts to deploy. Deployment can take a few minutes to successfully complete. After deployment is complete, the load balancer is displayed as a resource in your resource group.
+3. In **Create network address translation (NAT) gateway**, enter or select the following information:
-### Step 3: Configure the front-end IP pool
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialLBIP-rg**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Availability zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **15**. |
-For each website (contoso.com and fabrikam.com), configure the front-end IP pool on your load balancer:
+4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
-1. In the portal, select **More services**. In the filter box, type **Public IP address** and then select **Public IP addresses**. In the next pane, near the top, select **Add**.
+5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
-2. Configure two public IP addresses (**PublicIP1** and **PublicIP2**) for both websites (contoso.com and fabrikam.com):
+6. Enter **myNATgatewayIP** in **Name** in **Add a public IP address**.
- 1. Type a name for your front-end IP address.
+7. Select **OK**.
- 2. For **Resource Group**, select the existing resource group for your VMs (for example, **contosofabrikam**).
+8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
- 3. For **Location**, select the same location as the VMs.
+9. In **Virtual network** in the **Subnet** tab, select **myVNet**.
- 4. Select **OK**.
+10. Select **myBackendSubnet** under **Subnet name**.
- After the public IP addresses are created, they are displayed under the **Public IP** addresses.
+11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
-3. <a name="step3-3"></a>In the portal, select **More services**. In the filter box, type **load balancer** and then select **Load Balancer**.
+12. Select **Create**.
-4. Select the load balancer (**mylb**) that you want to add the front-end IP pool to.
+## Create virtual machines
-5. Under **Settings**, select **Frontend IP configuration**. In the next pane, near the top, select **Add**.
+In this section, you'll create two virtual machines to host the IIS websites.
-6. Type a name for your front-end IP address (for example, **contosofe** or **fabrikamfe**).
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-7. <a name="step3-7"></a>Select **IP address**. Under **Choose Public IP address**, select the IP addresses for your front-end (**PublicIP1** or **PublicIP2**).
+2. In **Virtual machines**, select **+ Create** then **+ Virtual machine**.
-8. Create the second front-end IP address by repeating <a href="#step3-3">step 3</a> through <a href="#step3-7">step 7</a> in this section.
+3. In **Create virtual machine**, enter or select the following information:
-After the front-end pool is configured, the IP addresses are displayed under your load balancer **Frontend IP configuration** settings.
-
-### Step 4: Configure the back-end pool
+ | Setting | Value |
+ |--|-|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **TutorialLBIP-rg** |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM1** |
+ | Region | Select **(Europe) West Europe** |
+ | Availability Options | Select **Availability zones** |
+ | Availability zone | Select **1** |
+ | Image | Select **Windows Server 2019 Datacenter - Gen1** |
+ | Azure Spot instance | Leave the default of unchecked. |
+ | Size | Choose VM size or take default setting |
+ | **Administrator account** | |
+ | Username | Enter a username |
+ | Password | Enter a password |
+ | Confirm password | Reenter password |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None** |
-For each website (contoso.com and fabrikam.com), configure the back-end address pool on your load balancer:
-
-1. In the portal, select **More services**. In the filter box, type **load balancer** and then select **Load Balancer**.
+3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+4. In the Networking tab, select or enter:
-2. Select the load balancer (**mylb**) that you want to add the back-end pool to.
+ | Setting | Value |
+ |-|-|
+ | **Network interface** | |
+ | Virtual network | **myVNet** |
+ | Subnet | **myBackendSubnet** |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced**|
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGrule** </br> Select **Add** </br> Select **OK** |
+
+7. Select **Review + create**.
+
+8. Review the settings, and then select **Create**.
-3. Under **Settings**, select **Backend Pools**. Type a name for your back-end pool (for example, **contosopool** or **fabrikampool**). In the next pane, near the top, select **Add**.
+9. Follow the steps 1 to 8 to create another VM with the following values and all the other settings the same as **myVM1**:
-4. For **Associated to**, select **Availability set**.
+ | Setting | VM 2 |
+ | - | - |
+ | Name | **myVM2** |
+ | Availability zone | **2** |
+ | Network security group | Select the existing **myNSG** |
-5. For **Availability set**, select **myAvailset**.
-6. Add the target network IP configurations for both VMs:
+## Create secondary network configurations
- ![Configure back-end pools for load balancer](./media/load-balancer-multiple-ip/lb-backendpool.PNG)
-
- 1. For **Target virtual machine**, select the VM that you want to add to the back-end pool (for example, **VM1** or **VM2**).
+In this section, you'll change the private IP address of the existing NIC of each virtual machine to **Static**. Next, you'll add a new NIC resource to each virtual machine with a **Static** private IP address configuration.
- 2. For **Network IP configuration**, select the IP configuration of the secondary NIC for the VM that you selected in the previous step (for example, **VM1NIC2-ipconfig2** or **VM2NIC2-ipconfig2**).
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-7. Select **OK**.
+2. Select **myVM1**.
+
+3. If the virtual machine is running, stop the virtual machine.
+
+4. Select **Networking** in **Settings**.
+
+5. In **Networking**, select the name of the network interface next to **Network interface**. The network interface will begin with the name of the VM and have a random number assigned. In this example, **myVM1266**.
+
+ :::image type="content" source="./media/load-balancer-multiple-ip/myvm1-nic.png" alt-text="Screenshot of myVM1 networking configuration in Azure portal.":::
+
+6. In the network interface page, select **IP configurations** in **Settings**.
+
+7. In **IP configurations**, select **ipconfig1**.
+
+ :::image type="content" source="./media/load-balancer-multiple-ip/myvm1-ipconfig1.png" alt-text="Screenshot of myVM1 network interface configuration.":::
+
+8. Select **Static** in **Assignment** in the **ipconfig1** configuration.
+
+9. Select **Save**.
+
+10. Return to the **Overview** page of **myVM1**.
+
+11. Select **Networking** in **Settings**.
+
+12. In the **Networking** page, select **Attach network interface**.
+
+ :::image type="content" source="./media/load-balancer-multiple-ip/myvm1-attach-nic.png" alt-text="Screenshot of myVM1 attach network interface.":::
+
+13. In **Attach network interface**, select **Create and attach network interface**.
+
+14. In **Create network interface**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Resource group | Select **TutorialLBIP-rg**. |
+ | **Network interface** | |
+ | Name | Enter **myVM1NIC2** |
+ | Subnet | Select **myBackendSubnet (10.1.0.0/24)**. |
+ | NIC network security group | Select **Advanced**. |
+ | Configure network security group | Select **myNSG**. |
+ | Private IP address assignment | Select **Static**. |
+ | Private IP address | Enter **10.1.0.6**. |
+
+15. Select **Create**.
+
+16. Start the virtual machine.
+
+17. Repeat steps 1 through 16 for **myVM2**, replacing the following information:
+
+ | Setting | myVM2 |
+ | | -- |
+ | Name | **myVM2NIC2** |
+ | Private IP address | **10.1.0.7** |
+
+## Configure virtual machines
+
+You'll connect to **myVM1** and **myVM2** with Azure Bastion and configure the secondary network configuration in this section. You'll add a route for the gateway for the secondary network configuration. You'll then install IIS on each virtual machine and customize the websites to display the hostname of the virtual machine.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM1**.
+
+3. Start **myVM1**.
+
+4. In **Overview**, select **Connect** then **Bastion**.
+
+5. Select **Use Bastion**.
+
+6. Enter the username and password you entered when you created the virtual machine.
+
+7. Select **Allow** for Bastion to use the clipboard.
+
+8. On the server desktop, navigate to Start > Windows Administrative Tools > Windows PowerShell > Windows PowerShell.
+
+9. In the PowerShell window, execute the `route print` command, which returns output similar to the following output for a virtual machine with two attached network interfaces:
+
+ ```console
+ ===========================================================================
+ Interface List
+ 6...00 22 48 86 00 53 ......Microsoft Hyper-V Network Adapter #2
+ 13...00 22 48 83 0b da ......Microsoft Hyper-V Network Adapter #3
+ 1...........................Software Loopback Interface 1
+ ===========================================================================
+ ```
+ In this example, **Microsoft Hyper-V Network Adapter #3 (interface 13)** is the secondary network interface that doesn't have a default gateway assigned to it.
+
+10. In the PowerShell window, execute the `ipconfig /all` command to see which IP address is assigned to the secondary network interface. In this example, 10.1.0.6 is assigned to interface 13. No default gateway address is returned for the secondary network interface.
+
+11. To route all traffic for addresses outside the subnet to the gateway, execute the following command:
+
+ ```console
+ route -p add 0.0.0.0 MASK 0.0.0.0 10.1.0.1 METRIC 5015 IF 13
+ ```
+
+ In this example, **10.1.0.1** is the default gateway for the virtual network you created previously.
+
+12. Execute the following commands in the PowerShell windows to install and configure IIS and the test websites:
+
+ ```powershell
+ ## Install IIS and the management tools. ##
+ Install-WindowsFeature -Name Web-Server -IncludeManagementTools
+
+ ## Set the binding for the Default website to 10.1.0.4:80. ##
+ $para1 = @{
+ Name = 'Default Web Site'
+ BindingInformation = '10.1.0.4:80:'
+ Protocol = 'http'
+ }
+ New-IISSiteBinding @para1
+
+ ## Remove the default site binding. ##
+ $para2 = @{
+ Name = 'Default Web Site'
+ BindingInformation = '*:80:'
+ }
+ Remove-IISSiteBinding @para2 -Force
+
+ ## Remove the default htm file. ##
+ Remove-Item c:\inetpub\wwwroot\iisstart.htm
+
+ ## Add a new htm file that displays the Contoso website. ##
+ $para3 = @{
+ Path = 'c:\inetpub\wwwroot\iisstart.htm'
+ Value = $("Hello World from www.contoso.com" + "-" + $env:computername)
+ }
+ Add-Content @para3
+
+ ## Create folder to host website. ##
+ $para4 = @{
+ Path = 'c:\inetpub\'
+ Name = 'fabrikam'
+ Type = 'directory'
+ }
+ New-Item @para4
+
+ ## Create a new website and site binding for the second IP address 10.1.0.6. ##
+ $para5 = @{
+ Name = 'Fabrikam'
+ PhysicalPath = 'c:\inetpub\fabrikam'
+ BindingInformation = '10.1.0.6:80:'
+ }
+ New-IISSite @para5
+
+ ## Add a new htm file that displays the Fabrikam website. ##
+ $para6 = @{
+ Path = 'C:\inetpub\fabrikam\iisstart.htm'
+ Value = $("Hello World from www.fabrikam.com" + "-" + $env:computername)
+
+ }
+ Add-Content @para6
+
+ ```
+13. Close the Bastion connection to **myVM1**.
+
+14. Repeat steps 1 through 13 for **myVM2**. Use the PowerShell code below for **myVM2** for the IIS install.
+
+ ```powershell
+ ## Install IIS and the management tools. ##
+ Install-WindowsFeature -Name Web-Server -IncludeManagementTools
+
+ ## Set the binding for the Default website to 10.1.0.5:80. ##
+ $para1 = @{
+ Name = 'Default Web Site'
+ BindingInformation = '10.1.0.5:80:'
+ Protocol = 'http'
+ }
+ New-IISSiteBinding @para1
+
+ ## Remove the default site binding. ##
+ $para2 = @{
+ Name = 'Default Web Site'
+ BindingInformation = '*:80:'
+ }
+ Remove-IISSiteBinding @para2
+
+ ## Remove the default htm file. ##
+ Remove-Item C:\inetpub\wwwroot\iisstart.htm
+
+ ## Add a new htm file that displays the Contoso website. ##
+ $para3 = @{
+ Path = 'c:\inetpub\wwwroot\iisstart.htm'
+ Value = $("Hello World from www.contoso.com" + "-" + $env:computername)
+ }
+ Add-Content @para3
+
+ ## Create folder to host website. ##
+ $para4 = @{
+ Path = 'c:\inetpub\'
+ Name = 'fabrikam'
+ Type = 'directory'
+ }
+ New-Item @para4
+
+ ## Create a new website and site binding for the second IP address 10.1.0.7. ##
+ $para5 = @{
+ Name = 'Fabrikam'
+ PhysicalPath = 'c:\inetpub\fabrikam'
+ BindingInformation = '10.1.0.7:80:'
+ }
+ New-IISSite @para5
+
+ ## Add a new htm file that displays the Fabrikam website. ##
+ $para6 = @{
+ Path = 'C:\inetpub\fabrikam\iisstart.htm'
+ Value = $("Hello World from www.fabrikam.com" + "-" + $env:computername)
+ }
+ Add-Content @para6
+
+ ```
+
+## Create load balancer
+
+In this section, you'll create a zone-redundant load balancer that load balances the virtual machines.
-After the back-end pool is configured, the addresses are displayed under your load balancer **Backend pool** settings.
+With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
-### Step 5: Configure the health probe
+During the creation of the load balancer, you'll configure:
-Configure a health probe for your load balancer:
+* Two frontend IP addresses, one for each website.
+* Two backend pools, one for each website.
+* Two inbound load-balancing rules, one for each website.
-1. In the portal, select **More services**. In the filter box, type **load balancer** and then select **Load Balancer**.
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-2. Select the load balancer (**mylb**) that you want to add the health probe to.
+2. In the **Load balancer** page, select **Create**.
-3. Under **Settings**, select **Health probe**. In the next pane, near the top, select **Add**.
+3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
-4. Type a name for the health probe (for example, **HTTP**). Select **OK**.
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialLBIP-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **(Europe) West Europe**. |
+ | Type | Select **Public**. |
+ | SKU | Leave the default **Standard**. |
+ | Tier | Leave the default **Regional**. |
-### Step 6: Configure load balancing rules
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
-For each website (contoso.com and fabrikam.com), configure the load balancing rules:
-
-1. <a name="step6-1"></a>Under **Settings**, select **Load balancing rules**. In the next pane, near the top, select **Add**.
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-2. For **Name**, type a name for the load balancing rule (for example, **HTTPc** for contoso.com, or **HTTPf** for fabrikam.com).
+6. Enter **Frontend-contoso** in **Name**.
-3. For **Frontend IP address**, select the front-end IP address that you previously created (for example, **contosofe** or **fabrikamfe**).
+7. Select **IPv4** for the **IP version**.
-4. For **Port** and **Backend port**, keep the default value **80**.
+ > [!NOTE]
+ > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
-5. For **Floating IP (direct server return)**, select **Disabled**.
+8. Select **IP address** for the **IP type**.
-6. <a name="step6-6"></a>Select **OK**.
+ > [!NOTE]
+ > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/public-ip-address-prefix.md).
-7. Create the second load balancer rule by repeating <a href="#step6-1">step 1</a> through <a href="#step6-6">step 6</a> in this section.
+9. Select **Create new** in **Public IP address**.
-After the rules are configured, they are displayed under your load balancer **Load balancing rules** settings.
+10. In **Add a public IP address**, enter **myPublicIP-contoso** for **Name**.
-### Step 7: Configure DNS records
+11. Select **Zone-redundant** in **Availability zone**.
-As the last step, configure your DNS resource records to point to the respective front-end IP addresses for your load balancer. You can host your domains in Azure DNS. For more information about using Azure DNS with Load Balancer, see [Using Azure DNS with other Azure services](../dns/dns-for-azure-services.md).
+ > [!NOTE]
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
+
+12. Leave the default of **Microsoft Network** for **Routing preference**.
+
+13. Select **OK**.
+
+14. Select **Add**.
+
+14. Select **+ Add a frontend IP**.
+
+15. Enter **Frontend-fabrikam** in **Name**.
+
+7. Select **IPv4** for the **IP version**.
+
+8. Select **IP address** for the **IP type**.
+
+9. Select **Create new** in **Public IP address**.
+
+10. In **Add a public IP address**, enter **myPublicIP-fabrikam** for **Name**.
+
+11. Select **Zone-redundant** in **Availability zone**.
+
+14. Select **Add**.
+
+15. Select **Next: Backend pools** at the bottom of the page.
+
+16. In the **Backend pools** tab, select **+ Add a backend pool**.
+
+17. Enter **myBackendPool-contoso** for **Name** in **Add backend pool**.
+
+18. Select **myVNet** in **Virtual network**.
+
+19. Select **NIC** for **Backend Pool Configuration**.
+
+20. Select **IPv4** for **IP version**.
+
+21. In **Virtual machines**, select **+ Add**.
+
+22. Select **myVM1** and **myVM2** that correspond with **ipconfig1 (10.1.0.4)** and **ipconfig1 (10.1.0.5)**.
+
+23. Select **Add**.
+
+21. Select **Add**.
+
+22. Select **+ Add a backend pool**.
+
+23. Enter **myBackendPool-fabrikam** for **Name** in **Add backend pool**.
+
+24. Select **myVNet** in **Virtual network**.
+
+19. Select **NIC** for **Backend Pool Configuration**.
+
+20. Select **IPv4** for **IP version**.
+
+21. In **Virtual machines**, select **+ Add**.
+
+22. Select **myVM1** and **myVM2** that correspond with **ipconfig1 (10.1.0.6)** and **ipconfig1 (10.1.0.7)**.
+
+23. Select **Add**.
+
+21. Select **Add**.
+
+22. Select the **Next: Inbound rules** button at the bottom of the page.
+
+23. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+
+24. In **Add load balancing rule**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHTTPRule-contoso** |
+ | IP Version | Select **IPv4**. |
+ | Frontend IP address | Select **Frontend-contoso**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**. |
+ | Backend port | Enter **80**. |
+ | Backend pool | Select **myBackendPool-contoso**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe-contoso**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
+ | TCP reset | Select **Enabled**. |
+ | Floating IP | Select **Disabled**. |
+ | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
+
+25. Select **Add**.
+
+26. Select **Add a load balancing rule**.
+
+27. In **Add load balancing rule**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHTTPRule-fabrikam** |
+ | IP Version | Select **IPv4**. |
+ | Frontend IP address | Select **Frontend-fabrikam**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**. |
+ | Backend port | Enter **80**. |
+ | Backend pool | Select **myBackendPool-fabrikam**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe-fabrikam**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
+ | TCP reset | Select **Enabled**. |
+ | Floating IP | Select **Disabled**. |
+ | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
+
+25. Select **Add**.
+
+26. Select the blue **Review + create** button at the bottom of the page.
+
+27. Select **Create**.
+
+ > [!NOTE]
+ > In this example, we created a NAT gateway to provide outbound internet access. The outbound rules tab in the configuration is bypassed because it's optional and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
+ > For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md)
+
+## Test load balancer
+
+In this section, you'll discover the public IP address for each website. You'll enter the IP into a browser to test the websites you created earlier.
+
+1. In the search box at the top of the portal, enter **Public IP**. Select **Public IP addresses** in the search results.
+
+2. Select **myPublicIP-contoso**.
+
+3. Copy the **IP address** in the overview page of **myPublicIP-contoso**.
+
+ :::image type="content" source="./media/load-balancer-multiple-ip/public-ip-contoso.png" alt-text="Screenshot of myPublicIP-contoso public IP address.":::
+
+4. Open a web browser and paste the public IP address into the address bar.
+
+ :::image type="content" source="./media/load-balancer-multiple-ip/test-contoso.png" alt-text="Screenshot of contoso website in web browser.":::
+
+5. Return to **Public IP addresses**. Select **myPublicIP-fabrikam**.
+
+6. Copy the **IP address** in the overview page of **myPublicIP-fabrikam**.
+
+ :::image type="content" source="./media/load-balancer-multiple-ip/public-ip-fabrikam.png" alt-text="Screenshot of myPublicIP-fabrikam public IP address.":::
+
+7. Open a web browser and paste the public IP address into the address bar.
+
+ :::image type="content" source="./media/load-balancer-multiple-ip/test-fabrikam.png" alt-text="Screenshot of fabrikam website in web browser.":::
+
+8. To test the load balancer, refresh the browser or shut down one of the virtual machines.
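You can also test each frontend from a command line. A quick sketch using curl with placeholders for the addresses you copied above:

```bash
# Placeholders: replace with the public IP addresses of the two frontends.
curl http://{myPublicIP-contoso-address}
curl http://{myPublicIP-fabrikam-address}
```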
+
+## Clean up resources
+
+If you're not going to continue to use this application, delete
+the virtual machines and load balancer with the following steps:
+
+1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results.
+
+2. Select **TutorialLBIP-rg** in **Resource groups**.
+
+3. Select **Delete resource group**.
+
+4. Enter **TutorialLBIP-rg** in **TYPE THE RESOURCE GROUP NAME:**. Select **Delete**.
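If you prefer the command line, deleting the resource group removes all of the resources created in this tutorial. An Azure CLI sketch:

```bash
# Deletes the resource group and everything in it without prompting.
az group delete --name TutorialLBIP-rg --yes --no-wait
```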
## Next steps-- Learn more about how to combine load balancing services in Azure in [Using load-balancing services in Azure](../traffic-manager/traffic-manager-load-balancing-azure.md).-- Learn how you can use different types of logs to manage and troubleshoot load balancer in [Azure Monitor logs for Azure Load Balancer](./monitor-load-balancer.md).+
+Advance to the next article to learn how to create a cross-region load balancer:
+
+> [!div class="nextstepaction"]
+> [Create a cross-region load balancer using the Azure portal](tutorial-cross-region-portal.md)
load-balancer Update Load Balancer With Vm Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/update-load-balancer-with-vm-scale-set.md
To add multiple IP configurations:
1. On the left menu, select **All resources**. Then select **MyLoadBalancer** from the resource list. 1. Under **Settings**, select **Frontend IP configuration**. Then select **Add**. 1. On the **Add frontend IP address** page, enter the values and select **OK**.
-1. Follow [step 5](./load-balancer-multiple-ip.md#step-5-configure-the-health-probe) and [step 6](./load-balancer-multiple-ip.md#step-5-configure-the-health-probe) in this tutorial if new load-balancing rules are needed.
+1. Refer to [Manage rules for Azure Load Balancer - Azure portal](manage-rules-how-to.md) if new load-balancing rules are needed.
1. Create a new set of inbound NAT rules by using the newly created front-end IP configurations if needed. An example is found in the previous section. ## Multiple Virtual Machine Scale Sets behind a single Load Balancer
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/resource-curated-environments.md
This article lists the curated environments in Azure Machine Learning. Curated e
## PyTorch
-**Name** - AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu
+**Name** - AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu
**Description** - An environment for deep learning with PyTorch containing the AzureML Python SDK and additional python packages. **Dockerfile configuration** - The following Dockerfile can be customized for your personal workflows:
migrate Concepts Dependency Visualization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/concepts-dependency-visualization.md
The differences between agentless visualization and agent-based visualization ar
**Requirement** | **Agentless** | **Agent-based** | |
-**Support** | In preview for servers on VMware only. [Review](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless) supported operating systems. | In general availability (GA).
+**Support** | Available for servers on VMware only. [Review](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless) supported operating systems. | In general availability (GA).
**Agent** | No agents needed on servers you want to analyze. | Agents required on each on-premises server that you want to analyze. **Log Analytics** | Not required. | Azure Migrate uses the [Service Map](../azure-monitor/vm/service-map.md) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency analysis.<br/><br/> You associate a Log Analytics workspace with a project. The workspace must reside in the East US, Southeast Asia, or West Europe regions. The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions). **Process** | Captures TCP connection data. After discovery, it gathers data at intervals of five minutes. | Service Map agents installed on a server gather data about TCP processes, and inbound/outbound connections for each process.
migrate Deploy Appliance Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/deploy-appliance-script.md
Check that the zipped file is secure, before you deploy it.
### Run the script
-1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
+1. Extract the zipped file to a folder on the server that will host the appliance.
+> [!NOTE]
+> Make sure you don't run the script on a server with an existing Azure Migrate appliance. Running the script on the Azure Migrate appliance will remove the working configuration and replace it with the newly defined configuration.
+ 2. Launch PowerShell on the above server with administrative (elevated) privilege. 3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file. 4. Run the script named **AzureMigrateInstaller.ps1** by running the following command:
Check that the zipped file is secure, before you deploy it.
### Run the script
-1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
+1. Extract the zipped file to a folder on the server that will host the appliance.
+> [!NOTE]
+> Make sure you don't run the script on an existing Azure Migrate appliance. Running the script on the Azure Migrate appliance will remove the working configuration and replace it with the newly defined configuration.
+ 2. Launch PowerShell on the above server with administrative (elevated) privilege. 3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file. 4. Run the script named **AzureMigrateInstaller.ps1** by running the following command:
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-physical.md
Set up an account that the appliance can use to access the physical servers.
- The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users. - If Remote management Users group isn't present, then add user account to the group: **WinRMRemoteWMIUsers_**. - The account needs these permissions for appliance to create a CIM connection with the server and pull the required configuration and performance metadata from the WMI classes listed [here.](migrate-appliance.md#collected-dataphysical)-- In some cases, adding the account to these groups may not return the required data from WMI classes as the account might be filtered by [UAC](/windows/win32/wmisdk/user-account-control-and-wmi). To overcome the UAC filtering, user account needs to have necessary permissions on CIMV2 Namespace and sub-namespaces on the target server. You can follow the steps [here](troubleshoot-appliance.md#access-is-denied-when-connecting-to-physical-servers-during-validation) to enable the required permissions.
+- In some cases, adding the account to these groups may not return the required data from WMI classes because the account might be filtered by [UAC](/windows/win32/wmisdk/user-account-control-and-wmi). To overcome the UAC filtering, the user account needs the necessary permissions on the CIMV2 namespace and sub-namespaces on the target server. You can follow the steps [here](troubleshoot-appliance.md#access-is-denied-error-occurs-when-you-connect-to-physical-servers-during-validation) to enable the required permissions.
> [!Note] > For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers.
migrate Prepare For Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/prepare-for-agentless-migration.md
The preparation script executes the following changes based on the OS type of th
Azure Migrate will attempt to install the Microsoft Azure Linux Agent (waagent), a secure, lightweight process that manages Linux & FreeBSD provisioning, and VM interaction with the Azure Fabric Controller. [Learn more](../virtual-machines/extensions/agent-linux.md) about the functionality enabled for Linux and FreeBSD IaaS deployments via the Linux agent.
- Review the list of [required packages](../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL6, RHEL7, CentOS7 (6 should be supported like RHEL), Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04, Ubuntu 19.04, Ubuntu 19.10, and Ubuntu 20.04 when using the agentless method of VMware migration. Follow these instructions to [install the Linux Agent manually](../virtual-machines/extensions/agent-linux.md#installation) for other OS versions.
+ Review the list of [required packages](../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL 8/7/6, CentOS 8/7/6, Ubuntu 14.04/16.04/18.04/19.04/19.10/20.04, SUSE 15 SP0/15 SP1/12, Debian 9/8/7, and Oracle 7 when using the agentless method of VMware migration. Follow these instructions to [install the Linux Agent manually](../virtual-machines/extensions/agent-linux.md#installation) for other OS versions.
You can use the command to verify the service status of the Azure Linux Agent to make sure it's running. The service name might be **walinuxagent** or **waagent**. Once the hydration changes are done, the script will unmount all the partitions mounted, deactivate volume groups, and then flush the devices.
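A quick way to check the Azure Linux Agent service status mentioned above (the unit name is typically **waagent** or **walinuxagent**, depending on the distribution):

```bash
# Try both common service names for the Azure Linux Agent.
sudo systemctl status waagent || sudo systemctl status walinuxagent
```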
migrate Prepare For Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/prepare-for-migration.md
The following table summarizes the steps performed automatically for the operati
Learn more about steps for [running a Linux VM on Azure](../virtual-machines/linux/create-upload-generic.md), and get instructions for some of the popular Linux distributions.
-Review the list of [required packages](../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL6, RHEL7, CentOS7 (6 should be supported similar to RHEL), Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04, Ubuntu 19.04, Ubuntu 19.10, and Ubuntu 20.04 when using the agentless method of VMware migration.
+Review the list of [required packages](../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL 8/7/6, CentOS 8/7/6, Ubuntu 14.04/16.04/18.04/19.04/19.10/20.04, SUSE 15 SP0/15 SP1/12, Debian 9/8/7, and Oracle 7 when using the agentless method of VMware migration.
## Check Azure VM requirements
migrate Troubleshoot Appliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-appliance.md
Title: Troubleshoot Azure Migrate appliance
+ Title: Troubleshoot the Azure Migrate appliance
description: Get help to troubleshoot problems that might occur with the Azure Migrate appliance.
Last updated 07/01/2020
# Troubleshoot the Azure Migrate appliance
-This article helps you troubleshoot issues when deploying the [Azure Migrate](migrate-services-overview.md) appliance, and using the appliance to discover on-premises servers.
+This article helps you troubleshoot issues when you deploy the [Azure Migrate](migrate-services-overview.md) appliance and use the appliance to discover on-premises servers.
## What's supported? [Review](migrate-appliance.md) the appliance support requirements.
-## "Invalid OVF manifest entry" during appliance set up
+## "Invalid OVF manifest entry" error occurs during appliance setup
-**Error**
+You get the error "The provided manifest file is invalid: Invalid OVF manifest entry" when you set up an appliance by using the OVA template.
-You are getting the error "The provided manifest file is invalid: Invalid OVF manifest entry" when setting up an appliance using OVA template.
-
-**Remediation**
+### Remediation
1. Verify that the Azure Migrate appliance OVA file is downloaded correctly by checking its hash value. [Learn more](./tutorial-discover-vmware.md). If the hash value doesn't match, download the OVA file again and retry the deployment.
-2. If deployment still fails, and you're using the VMware vSphere client to deploy the OVF file, try deploying it through the vSphere web client. If deployment still fails, try using a different web browser.
-3. If you're using the vSphere web client and trying to deploy it on vCenter Server 6.5 or 6.7, try to deploy the OVA directly on the ESXi host:
+1. If deployment still fails and you're using the VMware vSphere client to deploy the OVF file, try deploying it through the vSphere web client. If deployment still fails, try using a different web browser.
+1. If you're using the vSphere web client and trying to deploy it on vCenter Server 6.5 or 6.7, try to deploy the OVA directly on the ESXi host:
- Connect to the ESXi host directly (instead of vCenter Server) with the web client (https://<*host IP Address*>/ui). - In **Home** > **Inventory**, select **File** > **Deploy OVF template**. Browse to the OVA and complete the deployment.
-4. If the deployment still fails, contact Azure Migrate support.
-
-## Connectivity check failing during 'Set up prerequisites'
+1. If the deployment still fails, contact Azure Migrate support.
-**Error**
+## Connectivity check fails during the prerequisites setup
-You are getting an error in the connectivity check on the appliance.
+You get an error in the connectivity check on the appliance.
-**Remediation**
+### Remediation
1. Ensure that you can connect to the required [URLs](./migrate-appliance.md#url-access) from the appliance.
-1. Check if there is a proxy or firewall blocking access to these URLs. If you are required to create an allowlist, make sure that you include all of the URLs.
-1. If there is a proxy server configured on-premises, make sure that you provide the proxy details correctly by selecting **Setup proxy** in the same step. Make sure that you provide the authorization credentials if the proxy needs them.
-1. Ensure that the server has not been previously used to set up the [replication appliance](./migrate-replication-appliance.md) or that you have the mobility service agent installed on the server.
-
-## Connectivity check failing for aka.ms URL during 'Set up prerequisites'
+1. Check if there's a proxy or firewall blocking access to these URLs. If you're required to create an allowlist, make sure that you include all of the URLs.
+1. If there's a proxy server configured on-premises, enter the proxy details correctly by selecting **Setup proxy** in the same step. Enter the authorization credentials if the proxy needs them.
+1. Ensure that the server hasn't been previously used to set up the [replication appliance](./migrate-replication-appliance.md) or that you have the mobility service agent installed on the server.
-**Error**
+## Connectivity check fails for the aka.ms URL during the prerequisites setup
-You are getting an error in the connectivity check on the appliance for aka.ms URL.
+You get an error in the connectivity check on the appliance for the aka.ms URL.
-**Remediation**
+### Remediation
-1. Ensure that you have connectivity to internet and have allowlisted the URL-aka.ms/* to download the latest versions of the services.
-2. Check if there is a proxy/firewall blocking access to this URL. Ensure that you have provided the proxy details correctly in the prerequisites step of the configuration manager.
-3. You can go back to the appliance configuration manager and rerun prerequisites to initiate auto-update.
-3. If retry doesn't help, you can download the *latestcomponents.json* file from [here](https://aka.ms/latestapplianceservices) to check the latest versions of the services that are failing and manually update them from the download links in the file.
+1. Ensure that you have connectivity to the internet and have added the URL aka.ms/* to the allowlist to download the latest versions of the services.
+1. Check if there's a proxy or firewall blocking access to this URL. Ensure that you've provided the proxy details correctly in the prerequisites step of the configuration manager.
+1. Go back to the appliance configuration manager and rerun prerequisites to initiate auto-update.
+1. If retry doesn't help, download the *latestcomponents.json* file from [this website](https://aka.ms/latestapplianceservices) to check the latest versions of the services that are failing. Manually update them from the download links in the file.
- If you have enabled the appliance for **private endpoint connectivity**, and don't want to allow access to this URL over internet, you can [disable auto-update](./migrate-appliance.md#turn-off-auto-update), as the aka.ms link is required for this service.
+ If you've enabled the appliance for **private endpoint connectivity** and don't want to allow access to this URL over the internet, [disable auto-update](./migrate-appliance.md#turn-off-auto-update) because the aka.ms link is required for this service.
>[!Note]
- >If you disable auto-update service, the services running on the appliance will not get the latest updates automatically. To get around this, [update the appliance services manually](./migrate-appliance.md#manually-update-an-older-version).
+ >If you disable the auto-update service, the services running on the appliance won't get the latest updates automatically. To get around this situation, [update the appliance services manually](./migrate-appliance.md#manually-update-an-older-version).
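
As a rough sketch of the last step in the preceding list (downloading *latestcomponents.json* to check service versions manually), the file can be pulled with PowerShell. The output path is only an example; any writable folder works.

````
# Download the component catalog referenced in the last step of the list above.
Invoke-WebRequest -Uri "https://aka.ms/latestapplianceservices" -OutFile "C:\Temp\latestcomponents.json" -UseBasicParsing

# Inspect the versions listed in the file.
Get-Content "C:\Temp\latestcomponents.json" | ConvertFrom-Json
````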
-## Auto Update check failing during 'Set up prerequisites'
+## Auto-update check fails during the prerequisites setup
-**Error**
+You get an error in the auto-update check on the appliance.
-You are getting an error in the auto update check on the appliance.
-
-**Remediation**
+### Remediation
1. Make sure that you created an allowlist for the [required URLs](./migrate-appliance.md#url-access) and that no proxy or firewall setting is blocking them.
1. If the update of any appliance component is failing, either rerun the prerequisites or [manually update the appliance services](./migrate-appliance.md#manually-update-an-older-version).
-## Time sync check failing during 'Set up prerequisites'
-
-**Error**
+## Time sync check fails during the prerequisites setup
An error about time synchronization indicates that the server clock might be out of synchronization with the current time by more than five minutes.
-**Remediation**
+### Remediation
-- Ensure that the appliance server time is synchronized with the internet time by checking the date and time settings from control panel.
+- Ensure that the appliance server time is synchronized with the internet time by checking the date and time settings from Control Panel.
- You can also change the clock time on the appliance server to match the current time by following these steps:
  1. Open an admin command prompt on the server.
- 2. To check the time zone, run **w32tm /tz**.
- 3. To synchronize the time, run **w32tm /resync**.
-
-## VDDK check failing during 'Set up prerequisites' on VMware appliance
+ 1. To check the time zone, run **w32tm /tz**.
+ 1. To synchronize the time, run **w32tm /resync**.
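
The two `w32tm` commands from the preceding steps can be run from an elevated PowerShell or Command Prompt window on the appliance server. Checking the Windows Time service is an optional extra step; W32Time is the default service name on Windows.

````
# Check the configured time zone and force a resync, as described in the steps above.
w32tm /tz
w32tm /resync

# Optional: confirm that the Windows Time service (default name W32Time) is running.
Get-Service -Name W32Time
````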
-**Error**
+## VDDK check fails during the prerequisites setup on the VMware appliance
-The VDDK check failed as appliance could not find the required VDDK kit installed on the appliance. This can result in failures with ongoing replication.
+The Virtual Disk Development Kit (VDDK) check failed because the appliance couldn't find the required VDDK installed on the appliance. This issue can result in failures with ongoing replication.
-**Remediation**
+### Remediation
-1. Ensure that you have downloaded VDDK kit 6.7 and have copied its files to- **C:\Program Files\VMware\VMware Virtual Disk Development Kit** on the appliance server.
-2. Ensure that no other software or application is using another version of the VDDK Kit on the appliance.
+1. Ensure that you've downloaded VDDK 6.7 and have copied its files to **C:\Program Files\VMware\VMware Virtual Disk Development Kit** on the appliance server.
+1. Ensure that no other software or application is using another version of the VDDK on the appliance.
-## Getting project key related error during appliance registration
+## Project key-related error occurs during appliance registration
-**Error**
+ You're having issues when you try to register the appliance by using the Azure Migrate project key copied from the project.
-You are having issues when you try to register the appliance using the Azure Migrate project key copied from the project.
+### Remediation
-**Remediation**
+1. Ensure that you've copied the correct key from the project. On the **Azure Migrate: Discovery and Assessment** card in your project, select **Discover**. Then select **Manage Existing appliance** in step 1. Select the appliance name for which you previously generated a key from the dropdown menu. Copy the corresponding key.
+1. Ensure that you're pasting the key to the appliance of the right **cloud type** (Public/US Gov) and **appliance type** (VMware/Hyper-V/Physical or other). Check at the top of the appliance configuration manager to confirm the cloud and scenario type.
-1. Ensure that you've copied the correct key from the project: On the **Azure Migrate: Discovery and Assessment** card in your project, select **Discover**, and then select **Manage Existing appliance** in step 1. Select the appliance name (for which you previously generated a key) from the drop-down menu and copy the corresponding key.
-2. Ensure that you're pasting the key to the appliance of the right **cloud type** (Public/US Gov) and **appliance type** (VMware/Hyper-V/Physical or other). Check at the top of appliance configuration manager to confirm the cloud and scenario type.
+## "Failed to connect to the Azure Migrate project" error occurs during appliance registration
-## "Failed to connect to the Azure Migrate project" during appliance registration
+After a successful sign-in with an Azure user account, the appliance registration step fails with the message, "Failed to connect to the Azure Migrate project. Check the error detail and follow the remediation steps by clicking Retry."
-**Error**
+This issue happens when the Azure user account that was used to sign in from the appliance configuration manager is different from the user account that was used to generate the Azure Migrate project key on the portal.
-After a successful login with an Azure user account, the appliance registration step fails with the message, **"Failed to connect to the Azure Migrate project. Check the error detail and follow the remediation steps by clicking Retry"**.
+### Remediation
-This issue happens when the Azure user account that was used to log in from the appliance configuration manager is different from the user account that was used to generate the Azure Migrate project key on the portal.
+You have two options:
-**Remediation**
-1. To complete the registration of the appliance, use the same Azure user account that generated the Azure Migrate project key on the portal
- OR
-1. Assign the required roles and [permissions](./tutorial-discover-vmware.md#prepare-an-azure-user-account) to the other Azure user account being used for appliance registration
+- To complete the registration of the appliance, use the same Azure user account that generated the Azure Migrate project key on the portal.
+- You can also assign the required roles and [permissions](./tutorial-discover-vmware.md#prepare-an-azure-user-account) to the other Azure user account being used for appliance registration.
-## "Azure Active Directory (AAD) operation failed with status Forbidden" during appliance registration
+## "Azure Active Directory (AAD) operation failed with status Forbidden" error occurs during appliance registration
-**Error**
+You're unable to complete registration because of insufficient Azure Active Directory privileges and get the error "Azure Active Directory (AAD) operation failed with status Forbidden."
-You are unable to complete registration due to insufficient AAD privileges and get the error, "Azure Active Directory (AAD) operation failed with status Forbidden".
+### Remediation
-**Remediation**
+Ensure that you have the [required permissions](./tutorial-discover-vmware.md#prepare-an-azure-user-account) to create and manage Azure Active Directory applications in Azure. You should have the **Application Developer** role *or* the user role with **User can register applications** allowed at the tenant level.
-Ensure that you have the [required permissions](./tutorial-discover-vmware.md#prepare-an-azure-user-account) to create and manage AAD Applications in Azure. You should have the **Application Developer** role OR the user role with **User can register applications** allowed at the tenant level.
+## "Forbidden to access Key Vault" error occurs during appliance registration
-## "Forbidden to access Key Vault" during appliance registration
+The Azure Key Vault create or update operation failed for "{KeyVaultName}" because of the error "{KeyVaultErrorMessage}."
-**Error**
+This issue usually happens when the Azure user account used to register the appliance is different from the account used to generate the Azure Migrate project key on the portal (that is, when the key vault was created).
-Azure Key Vault create or update operation failed for "{KeyVaultName}" due to the error: "{KeyVaultErrorMessage}".
+### Remediation
-This usually happens when the Azure user account that was used to register the appliance is different from the account used to generate the Azure Migrate project key on the portal (that is, when the Key vault was created).
-
-**Remediation**
-
-1. Ensure that the currently logged in user account on the appliance has the required permissions on the Key Vault (mentioned in the error message). The user account needs permissions as mentioned [here](./tutorial-discover-vmware.md#prepare-an-azure-user-account).
-2. Go to the Key Vault and ensure that your user account has an access policy with all the _Key, Secret and Certificate_ permissions assigned under Key vault Access Policy. [Learn more](../key-vault/general/assign-access-policy-portal.md)
-3. If you have enabled the appliance for **private endpoint connectivity**, ensure that the appliance is either hosted in the same VNet where the Key Vault has been created or it is connected to the Azure VNet (where Key Vault has been created) over a private link. Make sure that the Key Vault private link is resolvable from the appliance. Go to **Azure Migrate**: **Discovery** and **assessment**> **Properties** to find the details of private endpoints for resources like the Key Vault created during the Azure Migrate key creation. [Learn more](./troubleshoot-network-connectivity.md)
-4. If you have the required permissions and connectivity, re-try the registration on the appliance after some time.
+1. Ensure that the currently signed-in user account on the appliance has the required permissions on the key vault mentioned in the error message. The user account needs permissions as mentioned at [this website](./tutorial-discover-vmware.md#prepare-an-azure-user-account).
+1. Go to the key vault and ensure that your user account has an access policy with all the **Key**, **Secret**, and **Certificate** permissions assigned under **Key Vault Access Policy**. [Learn more](../key-vault/general/assign-access-policy-portal.md).
+1. If you enabled the appliance for **private endpoint connectivity**, ensure that the appliance is either hosted in the same virtual network where the key vault was created or it's connected to the Azure virtual network where the key vault was created over a private link. Make sure that the key vault private link is resolvable from the appliance. Go to **Azure Migrate: Discovery and assessment** > **Properties** to find the details of private endpoints for resources like the key vault created during the Azure Migrate key creation. [Learn more](./troubleshoot-network-connectivity.md).
+1. If you have the required permissions and connectivity, retry the registration on the appliance after some time.
## Unable to connect to vCenter Server during validation
-**Error**
-
-If you get this connection error, you might be unable to connect to vCenter Server *Servername*.com:9443. The error details indicate that there's no endpoint listening at `https://\*servername*.com:9443/sdk` that can accept the message.
+If you get this connection error, you might be unable to connect to vCenter Server *Servername*.com:9443. The error details indicate there's no endpoint listening at `https://\*servername*.com:9443/sdk` that can accept the message.
-**Remediation**
+### Remediation
- Check whether you're running the latest version of the appliance. If you're not, upgrade the appliance to the [latest version](./migrate-appliance.md).
-- If the issue still occurs in the latest version, the appliance might be unable to resolve the specified vCenter Server name, or the specified port might be wrong. By default, if the port is not specified, the collector will try to connect to port number 443.
+- If the issue still occurs in the latest version, the appliance might be unable to resolve the specified vCenter Server name, or the specified port might be wrong. By default, if the port isn't specified, the collector tries to connect to port number 443.
1. Ping *Servername*.com from the appliance.
- 2. If step 1 fails, try to connect to the vCenter server using the IP address.
- 3. Identify the correct port number to connect to vCenter Server.
- 4. Verify that vCenter Server is up and running.
-
-## Server credentials (domain) failing validation on VMware appliance
-
-**Error**
-
-You are getting "Validation failed" for domain credentials added on VMware appliance to perform software inventory, agentless dependency analysis.
+ 1. If step 1 fails, try to connect to the vCenter server by using the IP address.
+ 1. Identify the correct port number to connect to the vCenter server.
+ 1. Verify that the vCenter server is up and running.
-**Remediation**
+## Server credentials (domain) fails validation on the VMware appliance
-1. Check that you have provided the correct domain name and credentials
-1. Ensure that the domain is reachable from the appliance to validate the credentials. The appliance may be having line of sight issues or the domain name may not be resolvable from the appliance server.
-1. You can select **Edit** to update the domain name or credentials, and select **Revalidate credentials** to validate the credentials again after some time
+You get "Validation failed" for domain credentials added on the VMware appliance to perform software inventory and agentless dependency analysis.
-## "Access is denied" when connecting to Hyper-V hosts or clusters during validation
+### Remediation
-**Error**
+1. Check that you've provided the correct domain name and credentials.
+1. Ensure that the domain is reachable from the appliance to validate the credentials. The appliance might be having line-of-sight issues, or the domain name might not be resolvable from the appliance server.
+1. Select **Edit** to update the domain name or credentials. Select **Revalidate credentials** to validate the credentials again after some time.
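
To check whether the domain is resolvable and reachable from the appliance (step 2 above), a quick PowerShell sketch like this can help. The domain name is a placeholder, and port 389 assumes a domain controller listening for LDAP.

````
# Confirm that the domain name used with the credentials resolves from the appliance server.
Resolve-DnsName -Name "contoso.local"

# Confirm that a domain controller answers on the LDAP port (389). The domain name is a placeholder.
Test-NetConnection -ComputerName "contoso.local" -Port 389
````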
-You are unable to validate the added Hyper-V host/cluster due to an error-"Access is denied".
+## "Access is denied" error occurs when you connect to Hyper-V hosts or clusters during validation
-**Remediation**
+You're unable to validate the added Hyper-V host or cluster because of the error "Access is denied."
-1. Ensure that you have met all the [prerequisites for the Hyper-V hosts](./migrate-support-matrix-hyper-v.md#hyper-v-host-requirements).
-1. Check the steps [**here**](./tutorial-discover-hyper-v.md#prepare-hyper-v-hosts) on how to prepare the Hyper-V hosts manually or using a provisioning PowerShell script.
+### Remediation
-## "The server does not support WS-Management Identify operations" during validation
+1. Ensure that you've met all the [prerequisites for the Hyper-V hosts](./migrate-support-matrix-hyper-v.md#hyper-v-host-requirements).
+1. Check the steps on [this website](./tutorial-discover-hyper-v.md#prepare-hyper-v-hosts) on how to prepare the Hyper-V hosts manually or by using a provisioning PowerShell script.
-**Error**
+## "The server does not support WS-Management Identify operations" error occurs during validation
-You are not able to validate Hyper-V clusters on the appliance due to the error: "The server does not support WS-Management Identify operations. Skip the TestConnection part of the request and try again."
+You're unable to validate Hyper-V clusters on the appliance because of the error "The server does not support WS-Management Identify operations. Skip the TestConnection part of the request and try again."
-**Remediation**
+### Remediation
-This is usually seen when you have provided a proxy configuration on the appliance. The appliance connects to the clusters using the short name for the cluster nodes, even if you have provided the FQDN of the node. Add the short name for the cluster nodes to the bypass proxy list on the appliance, the issue gets resolved and validation of the Hyper-V cluster succeeds.
+This error usually occurs when you've provided a proxy configuration on the appliance. The appliance connects to the clusters by using the short name for the cluster nodes, even if you've provided the FQDN of the node. Add the short name for the cluster nodes to the bypass proxy list on the appliance to resolve the issue so that validation of the Hyper-V cluster succeeds.
-## "Can't connect to host or cluster" during validation on Hyper-V appliance
+## "Can't connect to host or cluster" error occurs during validation on a Hyper-V appliance
-**Error**
+The error "Can't connect to a host or cluster because the server name can't be resolved. WinRM error code: 0x803381B9" might occur if the Azure DNS service for the appliance can't resolve the cluster or host name you provided.
-"Can't connect to a host or cluster because the server name can't be resolved. WinRM error code: 0x803381B9" might occur if the Azure DNS service for the appliance can't resolve the cluster or host name you provided.
+This issue usually happens when you've added the IP address of a host that can't be resolved by DNS. You might also see this error for hosts in a cluster. It indicates that the appliance can connect to the cluster, but the cluster returns host names that aren't FQDNs.
-This usually happens when you have added the IP address of a host which cannot be resolved by DNS. You might also see this error for hosts in a cluster. This indicates that the appliance can connect to the cluster, but the cluster returns host names that are not FQDNs.
+### Remediation
-**Remediation**
-
-To resolve this error, update the hosts file on the appliance by adding a mapping of the IP address and host names:
+To resolve this error, update the hosts file on the appliance by adding a mapping of the IP address and host names.
1. Open Notepad as an admin.
1. Open the C:\Windows\System32\Drivers\etc\hosts file.
1. Add the IP address and host name in a row. Repeat for each host or cluster where you see this error.
1. Save and close the hosts file.
-1. Check whether the appliance can connect to the hosts, using the appliance management app. After 30 minutes, you should see the latest information for these hosts in the Azure portal.
-
-## "Unable to connect to server" during validation of Physical servers
+1. Check whether the appliance can connect to the hosts by using the appliance management app. After 30 minutes, you should see the latest information for these hosts in the Azure portal.
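
A minimal PowerShell sketch of the hosts-file update described in the preceding steps, run from an elevated prompt on the appliance; the IP address and host name are placeholders.

````
# Append an IP-to-host-name mapping for each host or cluster node that fails name resolution.
Add-Content -Path "C:\Windows\System32\Drivers\etc\hosts" -Value "10.0.0.10 hyperv-host-01"

# Verify that the new entry was written.
Get-Content -Path "C:\Windows\System32\Drivers\etc\hosts" | Select-Object -Last 5
````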
-**Remediation**
+## "Unable to connect to server" error occurs during validation of physical servers
-- Ensure there is connectivity from the appliance to the target server.
-- If it is a Linux server, ensure password-based authentication is enabled using the following steps:
- 1. Log in to the linux server and open the ssh configuration file using the command 'vi /etc/ssh/sshd_config'
- 2. Set "PasswordAuthentication" option to yes. Save the file.
- 3. Restart ssh service by running "service sshd restart"
-- If it is a Windows server, ensure the port 5985 is open to allow for remote WMI calls.
-- If you are discovering a GCP Linux server and using a root user, use the following commands to change the default setting for root login
- 1. Log in to the linux server and open the ssh configuration file using the command 'vi /etc/ssh/sshd_config'
- 2. Set "PermitRootLogin" option to yes.
- 3. Restart ssh service by running "service sshd restart"
+### Remediation
-## "Failed to fetch BIOS GUID" for server during validation
+- Ensure there's connectivity from the appliance to the target server.
+- If it's a Linux server, ensure password-based authentication is enabled by following these steps:
+ 1. Sign in to the Linux server, and open the ssh configuration file by using the command **vi /etc/ssh/sshd_config**.
+ 1. Set the **PasswordAuthentication** option to yes. Save the file.
+ 1. Restart the ssh service by running **service sshd restart**.
+- If it's a Windows server, ensure the port 5985 is open to allow for remote WMI calls.
+- If you're discovering a GCP Linux server and using a root user, use the following commands to change the default setting for the root login:
+ 1. Sign in to the Linux server, and open the ssh configuration file by using the command **vi /etc/ssh/sshd_config**.
+ 1. Set the **PermitRootLogin** option to yes.
+ 1. Restart the ssh service by running **service sshd restart**.
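
To spot-check connectivity from the appliance to a target server before revalidating, a sketch along these lines can be used. The server names are placeholders, and port 22 assumes the default SSH port on the Linux target.

````
# Windows target: confirm that port 5985 (WinRM, used for remote WMI calls) is open.
Test-NetConnection -ComputerName "windows-target-01" -Port 5985

# Linux target: confirm that the SSH port is open (22 is the default; adjust if yours differs).
Test-NetConnection -ComputerName "linux-target-01" -Port 22
````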
-**Error**
+## "Failed to fetch BIOS GUID" error occurs for the server during validation
-The validation of a physical server fails on the appliance with the error message-"Failed to fetch BIOS GUID".
+The validation of a physical server fails on the appliance with the error message "Failed to fetch BIOS GUID."
-**Remediation**
+### Remediation
**Linux servers:**
-Connect to the target server that is failing validation and run the following commands to see if it returns the BIOS GUID of the server:
+
+Connect to the target server that's failing validation. Run the following commands to see if it returns the BIOS GUID of the server:
+ ````
cat /sys/class/dmi/id/product_uuid
dmidecode | grep -i uuid | awk '{print $2}'
````
-You can also run the commands from command prompt on the appliance server by making an SSH connection with the target Linux server using the following command:
+You can also run the commands from the command prompt on the appliance server by making an SSH connection with the target Linux server by using the following command:
````
ssh <username>@<servername>
````
**Windows servers:**
-Run the following code in PowerShell from the appliance server for the target server that is failing validation to see if it returns the BIOS GUID of the server:
+
+Run the following code in PowerShell from the appliance server for the target server that's failing validation to see if it returns the BIOS GUID of the server:
+ ````
[CmdletBinding()]
Param(
$HostIntance = $Session.QueryInstances($HostNS, "WQL", "Select UUID from Win32_C
$HostIntance | fl *
````
-On executing the code above, you need to provide the hostname of the target server which can be IP address/FQDN/hostname. After that you will be prompted to provide the credentials to connect to the server.
+When you run the preceding code, you need to provide the hostname of the target server. It can be an IP address, FQDN, or hostname. After that, you're prompted to provide the credentials to connect to the server.
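
As an alternative spot check (not the script from the article), the same UUID can be read with the CIM cmdlets from the appliance server. This sketch assumes WinRM connectivity to the target; the server name is a placeholder.

````
# Query Win32_ComputerSystemProduct on the target server and return its UUID (the BIOS GUID).
$cred    = Get-Credential
$session = New-CimSession -ComputerName "windows-target-01" -Credential $cred
Get-CimInstance -CimSession $session -ClassName Win32_ComputerSystemProduct | Select-Object -Property UUID
Remove-CimSession -CimSession $session
````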
-## "No suitable authentication method found" for server during validation
+## "No suitable authentication method found" error occurs for the server during validation
-**Error**
+You get the error "No suitable authentication method found" when you try to validate a Linux server through the physical appliance.
-You are getting this error when you are trying to validate a Linux server through the physical appliance- "No suitable authentication method found".
+### Remediation
-**Remediation**
+Ensure password-based authentication is enabled on the Linux server by following these steps:
-Ensure password-based authentication is enabled on the linux server using the following steps:
+1. Sign in to the Linux server. Open the ssh configuration file by using the command **vi /etc/ssh/sshd_config**.
+1. Set the **PasswordAuthentication** option to **yes**. Save the file.
+1. Restart the ssh service by running **service sshd restart**.
-1. Log in to the linux server and open the ssh configuration file using the command 'vi /etc/ssh/sshd_config'
-2. Set "PasswordAuthentication" option to yes. Save the file.
-3. Restart ssh service by running "service sshd restart"
+## "Access is denied" error occurs when you connect to physical servers during validation
-## "Access is denied" when connecting to physical servers during validation
+You get the error "WS-Management service cannot process the request. The WMI service returned an access denied error" when you try to validate a Windows server through the physical appliance.
-**Error**
+### Remediation
-You are getting this error when you are trying to validate a Windows server through the physical appliance- "WS-Management service cannot process the request. The WMI service returned an access denied error."
-
-**Remediation**
-
-- If you are getting this error, make sure that the user account provided (domain/local) on the appliance configuration manager has been added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
-- If Remote management Users group isn't present then add user account to the group: WinRMRemoteWMIUsers_.
-- You can also check if the WS-Management protocol is enabled on the server by running following command in the command prompt of the target server.
+- If you get this error, make sure that the user account provided (domain/local) on the appliance configuration manager was added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
+- If the Remote Management Users group isn't present, add the user account to the group WinRMRemoteWMIUsers_.
+- You can also check if the WS-Management protocol is enabled on the server by running the following command in the command prompt of the target server:
````
winrm qc
````
-- If you are still facing the issue, make sure that the user account has access permissions to CIMV2 Namespace and sub-namespaces in WMI Control Panel. You can set the access by following these steps:
- 1. Go to the server which is failing validation on the appliance
- 2. Search and select 'Run' from the Start menu. In the 'Run' dialog box, type wmimgmt.msc in the 'Open:' text field and press enter.
- 3. The wmimgmt console will open where you can find "WMI Control (Local)" in the left panel. Right-click on it and select 'Properties' from the menu.
- 4. In the 'WMI Control (Local) Properties' dialog box, click on 'Securities' tab.
- 5. On the Securities tab, expand the "Root" folder in the namespace tree and select "cimv2" namespace.
- 6. Click on 'Security' button that will open 'Security for ROOT\cimv2' dialog box.
- 7. Under 'Group or users names' section, click on 'Add' button to open 'Select Users, Computers, Service Accounts or Groups' dialog box.
- 8. Search for the user account, select it and click on 'OK' button to return to the 'Security for ROOT\cimv2' dialog box.
- 9. In the 'Group or users names' section, select the user account just added and check if the following permissions are allowed:<br/>
- Enable account <br/>
- Remote enable
- 10. Click on "Apply" to enable the permissions set on the user account.
-
-- The same steps are also applicable on a local user account for non-domain/workgroup servers but in some cases, [UAC](/windows/win32/wmisdk/user-account-control-and-wmi) filtering may block some WMI properties as the commands run as a standard user, so you can either use a local administrator account or disable UAC so that the local user account is not filtered and instead becomes a full administrator.
-- Disabling Remote UAC by changing the registry entry that controls Remote UAC is not recommended but may be necessary in a workgroup. The registry entry is HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\system\LocalAccountTokenFilterPolicy. When the value of this entry is zero (0), Remote UAC access token filtering is enabled. When the value is 1, remote UAC is disabled.
-## Appliance is disconnected
+- If you're still facing the issue, make sure that the user account has access permissions to CIMV2 Namespace and sub-namespaces in the WMI Control Panel. You can set the access by following these steps:
-**Error**
+ 1. Go to the server that's failing validation on the appliance.
+ 1. Search and select **Run** from the **Start** menu. In the **Run** dialog, enter **wmimgmt.msc** in the **Open** text box and select **Enter**.
+ 1. The wmimgmt console opens where you can find **WMI Control (Local)** in the left pane. Right-click it, and select **Properties** from the menu.
+ 1. In the **WMI Control (Local) Properties** dialog, select the **Securities** tab.
+ 1. On the **Securities** tab, expand the **Root** folder in the namespace tree and select the **cimv2** namespace.
+ 1. Select **Security** to open the **Security for ROOT\cimv2** dialog.
+ 1. Under the **Group or users names** section, select **Add** to open the **Select Users, Computers, Service Accounts or Groups** dialog.
+ 1. Search for the user account, select it, and select **OK** to return to the **Security for ROOT\cimv2** dialog.
+ 1. In the **Group or users names** section, select the user account just added. Check if the following permissions are allowed:<br/>
+ - Enable account <br/>
+ - Remote enable
+ 1. Select **Apply** to enable the permissions set on the user account.
+
+- The same steps are also applicable on a local user account for non-domain/workgroup servers. In some cases, [UAC](/windows/win32/wmisdk/user-account-control-and-wmi) filtering might block some WMI properties as the commands run as a standard user, so you can either use a local administrator account or disable UAC so that the local user account isn't filtered and instead becomes a full administrator.
+- Disabling Remote UAC by changing the registry entry that controls Remote UAC isn't recommended but might be necessary in a workgroup. The registry entry is HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\system\LocalAccountTokenFilterPolicy. When the value of this entry is zero (0), Remote UAC access token filtering is enabled. When the value is 1, remote UAC is disabled.
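
A small PowerShell sketch of the checks described in this list, run in an elevated session on the target server. The group name and registry path come from the bullets above; setting the Remote UAC value to 1 carries the trade-off already noted, so only do it if that trade-off is acceptable.

````
# Confirm that the account used by the appliance is a member of the required local group.
Get-LocalGroupMember -Name "Remote Management Users"

# Inspect the Remote UAC token-filtering value described above (missing or 0 = filtering enabled).
$path = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
Get-ItemProperty -Path $path -Name "LocalAccountTokenFilterPolicy" -ErrorAction SilentlyContinue

# Only if needed for a workgroup server: set the value to 1 to disable Remote UAC filtering.
New-ItemProperty -Path $path -Name "LocalAccountTokenFilterPolicy" -Value 1 -PropertyType DWord -Force
````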
+
+## Appliance is disconnected
-You are getting "appliance is disconnected" error message when you try to enable replication on a few VMware servers from the portal.
+You get an "Appliance is disconnected" error message when you try to enable replication on a few VMware servers from the portal.
-This can happen if the appliance is in a shut-down state or the DRA service on the appliance cannot communicate with Azure.
+This error can occur if the appliance is in a shut-down state or the DRA service on the appliance can't communicate with Azure.
-**Remediation**
+### Remediation
- 1. Go to the appliance configuration manager and rerun prerequisites to see the status of the DRA service under **View appliance services**.
- 1. If the service is not running, stop and restart the service from the command prompt, using following commands:
+ 1. Go to the appliance configuration manager, and rerun prerequisites to see the status of the DRA service under **View appliance services**.
+ 1. If the service isn't running, stop and restart the service from the command prompt by using the following commands:
````
net stop dra
migrate Troubleshoot Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-assessment.md
This article helps you troubleshoot issues with assessment and dependency visual
## Assessment readiness issues
-Fix assessment readiness issues as follows:
+The following table lists fixes for assessment readiness issues.
**Issue** | **Fix**
--- | ---
-Unsupported boot type | Azure doesn't support VMs with an EFI boot type. We recommend that you convert the boot type to BIOS before you run a migration. <br/><br/>You can use Azure Migrate Server Migration to handle the migration of such VMs. It will convert the boot type of the VM to BIOS during the migration.
-Conditionally supported Windows operating system | The operating system has passed its end-of-support date, and needs a Custom Support Agreement (CSA) for [support in Azure](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading before you migrate to Azure. Review information about [preparing servers running Windows Server 2003](prepare-windows-server-2003-migration.md) for migration to Azure.
+Unsupported boot type | Azure doesn't support VMs with an EFI boot type. Convert the boot type to BIOS before you run a migration. <br/><br/>You can use Azure Migrate Server Migration to handle the migration of such VMs. It will convert the boot type of the VM to BIOS during the migration.
+Conditionally supported Windows operating system | The operating system has passed its end-of-support date and needs a Custom Support Agreement for [support in Azure](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading before you migrate to Azure. Review information about [preparing servers running Windows Server 2003](prepare-windows-server-2003-migration.md) for migration to Azure.
Unsupported Windows operating system | Azure supports only [selected Windows OS versions](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading the server before you migrate to Azure.
-Conditionally endorsed Linux OS | Azure endorses only [selected Linux OS versions](../virtual-machines/linux/endorsed-distros.md). Consider upgrading the server before you migrate to Azure. Also refer [here](#linux-vms-are-conditionally-ready-in-an-azure-vm-assessment) for more details.
+Conditionally endorsed Linux OS | Azure endorses only [selected Linux OS versions](../virtual-machines/linux/endorsed-distros.md). Consider upgrading the server before you migrate to Azure. For more information, see [this website](#linux-vms-are-conditionally-ready-in-an-azure-vm-assessment).
Unendorsed Linux OS | The server might start in Azure, but Azure provides no operating system support. Consider upgrading to an [endorsed Linux version](../virtual-machines/linux/endorsed-distros.md) before you migrate to Azure.
Unknown operating system | The operating system of the VM was specified as "Other" in vCenter Server. This behavior blocks Azure Migrate from verifying the Azure readiness of the VM. Make sure that the operating system is [supported](./migrate-support-matrix-vmware-migration.md#azure-vm-requirements) by Azure before you migrate the server.
-Unsupported bit version | VMs with a 32-bit operating systems might boot in Azure, but we recommended that you upgrade to 64-bit before you migrate to Azure.
+Unsupported bit version | VMs with a 32-bit operating systems might boot in Azure, but we recommend that you upgrade to 64-bit before you migrate to Azure.
Requires a Microsoft Visual Studio subscription | The server is running a Windows client operating system, which is supported only through a Visual Studio subscription.
VM not found for the required storage performance | The storage performance (input/output operations per second [IOPS] and throughput) required for the server exceeds Azure VM support. Reduce storage requirements for the server before migration.
VM not found for the required network performance | The network performance (in/out) required for the server exceeds Azure VM support. Reduce the networking requirements for the server.
VM not found in the specified location | Use a different target location before migration.
-One or more unsuitable disks | One or more disks attached to the VM don't meet Azure requirements.A<br/><br/> Azure Migrate: Discovery and assessment assesses the disks based on the disk limits for Ultra disks (64 TB).<br/><br/> For each disk attached to the VM, make sure that the size of the disk is < 64 TB (supported by Ultra SSD disks).<br/><br/> If it isn't, reduce the disk size before you migrate to Azure, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits. Make sure that the performance (IOPS and throughput) needed by each disk is supported by Azure [managed virtual machine disks](../azure-resource-manager/management/azure-subscription-service-limits.md#storage-limits).
-One or more unsuitable network adapters. | Remove unused network adapters from the server before migration.
+One or more unsuitable disks | One or more disks attached to the VM don't meet Azure requirements.<br/><br/> Azure Migrate: Discovery and assessment assesses the disks based on the disk limits for Ultra disks (64 TB).<br/><br/> For each disk attached to the VM, make sure that the size of the disk is <64 TB (supported by Ultra SSD disks).<br/><br/> If it isn't, reduce the disk size before you migrate to Azure, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits. Make sure that the performance (IOPS and throughput) needed by each disk is supported by Azure [managed virtual machine disks](../azure-resource-manager/management/azure-subscription-service-limits.md#storage-limits).
+One or more unsuitable network adapters | Remove unused network adapters from the server before migration.
Disk count exceeds limit | Remove unused disks from the server before migration.
Disk size exceeds limit | Azure Migrate: Discovery and assessment supports disks with up to 64-TB size (Ultra disks). Shrink disks to less than 64 TB before migration, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits.
Disk unavailable in the specified location | Make sure the disk is in your target location before you migrate.
Disk unavailable for the specified redundancy | The disk should use the redundancy storage type defined in the assessment settings (LRS by default).
-Could not determine disk suitability because of an internal error | Try creating a new assessment for the group.
+Couldn't determine disk suitability because of an internal error | Try creating a new assessment for the group.
VM with required cores and memory not found | Azure couldn't find a suitable VM type. Reduce the memory and number of cores of the on-premises server before you migrate.
-Could not determine VM suitability because of an internal error | Try creating a new assessment for the group.
-Could not determine suitability for one or more disks because of an internal error | Try creating a new assessment for the group.
-Could not determine suitability for one or more network adapters because of an internal error | Try creating a new assessment for the group.
-No VM size found for offer currency Reserved Instance | Server marked Not suitable because the VM size was not found for the selected combination of RI, offer and currency. Edit the assessment properties to choose the valid combinations and recalculate the assessment.
-Conditionally ready Internet Protocol | Only applicable to Azure VMware Solution (AVS) assessments. AVS does not support IPv6 internet addresses factor. Contact the AVS team for remediation guidance if your server is detected with IPv6.
+Couldn't determine VM suitability because of an internal error | Try creating a new assessment for the group.
+Couldn't determine suitability for one or more disks because of an internal error | Try creating a new assessment for the group.
+Couldn't determine suitability for one or more network adapters because of an internal error | Try creating a new assessment for the group.
+No VM size found for offer currency Reserved Instance (RI) | Server marked "not suitable" because the VM size wasn't found for the selected combination of RI, offer, and currency. Edit the assessment properties to choose the valid combinations and recalculate the assessment.
+Conditionally ready Internet Protocol | Only applicable to Azure VMware Solution assessments. Azure VMware Solution doesn't support IPv6 internet addresses. Contact the Azure VMware Solution team for remediation guidance if your server is detected with IPv6.
-## Suggested migration tool in import-based AVS assessment marked as unknown
+## Suggested migration tool in an import-based Azure VMware Solution assessment is unknown
-For servers imported via a CSV file, the default migration tool in and AVS assessment is unknown. Though, for servers in VMware environment, its is recommended to use the VMware Hybrid Cloud Extension (HCX) solution. [Learn More](../azure-vmware/configure-vmware-hcx.md).
+For servers imported via a CSV file, the default migration tool in an Azure VMware Solution assessment is unknown. For servers in a VMware environment, use the VMware Hybrid Cloud Extension (HCX) solution. [Learn more](../azure-vmware/configure-vmware-hcx.md).
## Linux VMs are "conditionally ready" in an Azure VM assessment
-In the case of VMware and Hyper-V VMs, Azure VM assessment marks Linux VMs as "Conditionally ready" due to a known gap.
+In the case of VMware and Hyper-V VMs, an Azure VM assessment marks Linux VMs as "conditionally ready" because of a known gap.
- The gap prevents it from detecting the minor version of the Linux OS installed on the on-premises VMs.
-- For example, for RHEL 6.10, currently Azure VM assessment detects only RHEL 6 as the OS version. This is because the vCenter Server ar the Hyper-V host do not provide the kernel version for Linux VM operating systems.
-- Because Azure endorses only specific versions of Linux, the Linux VMs are currently marked as conditionally ready in Azure VM assessment.
+- For example, for RHEL 6.10, currently an Azure VM assessment detects only RHEL 6 as the OS version. This behavior occurs because the vCenter Server and the Hyper-V host don't provide the kernel version for Linux VM operating systems.
+- Because Azure endorses only specific versions of Linux, the Linux VMs are currently marked as "conditionally ready" in an Azure VM assessment.
- You can determine whether the Linux OS running on the on-premises VM is endorsed in Azure by reviewing [Azure Linux support](../virtual-machines/linux/endorsed-distros.md).
- After you've verified the endorsed distribution, you can ignore this warning.
-This gap can be addressed by enabling [application discovery](./how-to-discover-applications.md) on the VMware VMs. Azure VM assessment uses the operating system detected from the VM using the guest credentials provided. This operating system data identifies the right OS information in the case of both Windows and Linux VMs.
+This gap can be addressed by enabling [application discovery](./how-to-discover-applications.md) on the VMware VMs. An Azure VM assessment uses the operating system detected from the VM by using the guest credentials provided. This operating system data identifies the right OS information in the case of both Windows and Linux VMs.
## Operating system version not available
-For physical servers, the operating system minor version information should be available. If not available, contact Microsoft Support. For servers in VMware environment, Azure Migrate uses the operating system information specified for the VM in vCenter Server. However, vCenter Server doesn't provide the minor version for operating systems. To discover the minor version, you need to set up [application discovery](./how-to-discover-applications.md). For Hyper-V VMs, operating system minor version discovery is not supported.
+For physical servers, the operating system minor version information should be available. If it isn't available, contact Microsoft Support. For servers in a VMware environment, Azure Migrate uses the operating system information specified for the VM in vCenter Server. But vCenter Server doesn't provide the minor version for operating systems. To discover the minor version, set up [application discovery](./how-to-discover-applications.md). For Hyper-V VMs, operating system minor version discovery isn't supported.
## Azure SKUs bigger than on-premises in an Azure VM assessment
-Azure VM assessment might recommend Azure VM SKUs with more cores and memory than current on-premises allocation based on the type of assessment:
+An Azure VM assessment might recommend Azure VM SKUs with more cores and memory than the current on-premises allocation based on the type of assessment:
- The VM SKU recommendation depends on the assessment properties.
-- This is affected by the type of assessment you perform in Azure VM assessment: *Performance-based*, or *As on-premises*.
-- For performance-based assessments, Azure VM assessment considers the utilization data of the on-premises VMs (CPU, memory, disk, and network utilization) to determine the right target VM SKU for your on-premises VMs. It also adds a comfort factor when determining effective utilization.
-- For on-premises sizing, performance data is not considered, and the target SKU is recommended based on-premises allocation.
+- The recommendation is affected by the type of assessment you perform in an Azure VM assessment. The two types are **Performance-based** or **As on-premises**.
+- For performance-based assessments, the Azure VM assessment considers the utilization data of the on-premises VMs (CPU, memory, disk, and network utilization) to determine the right target VM SKU for your on-premises VMs. It also adds a comfort factor when determining effective utilization.
+- For on-premises sizing, performance data isn't considered, and the target SKU is recommended based on on-premises allocation.
-To show how this can affect recommendations, let's take an example:
+Let's look at an example recommendation:
-We have an on-premises VM with four cores and eight GB of memory, with 50% CPU utilization and 50% memory utilization, and a specified comfort factor of 1.3.
+We have an on-premises VM with four cores and 8 GB of memory, with 50% CPU utilization and 50% memory utilization, and a specified comfort factor of 1.3.
- If the assessment is **As on-premises**, an Azure VM SKU with four cores and 8 GB of memory is recommended.
-- If the assessment is performance-based, based on effective CPU and memory utilization (50% of 4 cores * 1.3 = 2.6 cores and 50% of 8-GB memory * 1.3 = 5.3-GB memory), the cheapest VM SKU of four cores (nearest supported core count) and eight GB of memory (nearest supported memory size) is recommended.
+- If the assessment is **Performance-based**, based on effective CPU and memory utilization (50% of 4 cores * 1.3 = 2.6 cores and 50% of 8-GB memory * 1.3 = 5.3-GB memory), the cheapest VM SKU of four cores (nearest supported core count) and 8 GB of memory (nearest supported memory size) is recommended.
- [Learn more](concepts-assessment-calculation.md#types-of-assessments) about assessment sizing.
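
The arithmetic in the performance-based example above can be written out as a short sketch. The inputs mirror the example, and the comfort factor is applied exactly as described in the preceding bullets.

````
# Inputs from the example: 4 cores, 8 GB memory, 50% CPU and memory utilization, comfort factor 1.3.
$cores = 4; $memoryGB = 8
$cpuUtilization = 0.5; $memoryUtilization = 0.5
$comfortFactor = 1.3

# Effective utilization = allocation * utilization * comfort factor.
$effectiveCores    = $cores    * $cpuUtilization    * $comfortFactor
$effectiveMemoryGB = $memoryGB * $memoryUtilization * $comfortFactor

# The assessment then picks the cheapest SKU with at least this many cores and this much memory.
"{0} effective cores, {1} GB effective memory" -f $effectiveCores, $effectiveMemoryGB
````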
-## Why is the recommended Azure disk SKUs bigger than on-premises in an Azure VM assessment?
+## Why is the recommended Azure disk SKU bigger than on-premises in an Azure VM assessment?
-Azure VM assessment might recommend a bigger disk based on the type of assessment.
+Azure VM assessment might recommend a bigger disk based on the type of assessment:
- Disk sizing depends on two assessment properties: sizing criteria and storage type.
-- If the sizing criteria is **Performance-based**, and the storage type is set to **Automatic**, the IOPS, and throughput values of the disk are considered when identifying the target disk type (Standard HDD, Standard SSD, Premium, or Ultra disk). A disk SKU from the disk type is then recommended, and the recommendation considers the size requirements of the on-premises disk.
-- If the sizing criteria is **Performance-based**, and the storage type is **Premium**, a premium disk SKU in Azure is recommended based on the IOPS, throughput, and size requirements of the on-premises disk. The same logic is used to perform disk sizing when the sizing criteria is **As on-premises** and the storage type is **Standard HDD**, **Standard SSD**, **Premium**, or **Ultra disk**.
+- If the sizing criteria is **Performance-based** and the storage type is set to **Automatic**, the IOPS and throughput values of the disk are considered when identifying the target disk type (Standard HDD, Standard SSD, Premium, or Ultra disk). A disk SKU from the disk type is then recommended, and the recommendation considers the size requirements of the on-premises disk.
+- If the sizing criteria is **Performance-based** and the storage type is **Premium**, a premium disk SKU in Azure is recommended based on the IOPS, throughput, and size requirements of the on-premises disk. The same logic is used to perform disk sizing when the sizing criteria is **As on-premises** and the storage type is **Standard HDD**, **Standard SSD**, **Premium**, or **Ultra disk**.
-As an example, if you have an on-premises disk with 32 GB of memory, but the aggregated read and write IOPS for the disk is 800 IOPS, Azure VM assessment recommends a premium disk (because of the higher IOPS requirements), and then recommends a disk SKU that can support the required IOPS and size. The nearest match in this example would be P15 (256 GB, 1100 IOPS). Even though the size required by the on-premises disk was 32 GB, Azure VM assessment recommends a larger disk because of the high IOPS requirement of the on-premises disk.
+For example, say you have an on-premises disk with 32 GB of memory, but the aggregated read and write IOPS for the disk is 800 IOPS. The Azure VM assessment recommends a premium disk because of the higher IOPS requirements. It also recommends a disk SKU that can support the required IOPS and size. The nearest match in this example would be P15 (256 GB, 1100 IOPS). Even though the size required by the on-premises disk was 32 GB, the Azure VM assessment recommended a larger disk because of the high IOPS requirement of the on-premises disk.
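
The selection logic in this example can be sketched as a lookup over premium disk SKUs: pick the smallest SKU whose size and IOPS both cover the on-premises disk. The P15 values match the example above; the other entries are illustrative and should be checked against current Azure disk documentation.

````
# Partial, illustrative premium disk table (size in GB, IOPS). Verify values before relying on them.
$premiumSkus = @(
    [pscustomobject]@{ Name = 'P10'; SizeGB = 128; Iops = 500  },
    [pscustomobject]@{ Name = 'P15'; SizeGB = 256; Iops = 1100 },
    [pscustomobject]@{ Name = 'P20'; SizeGB = 512; Iops = 2300 }
)

# On-premises disk from the example: 32 GB, 800 aggregated read/write IOPS.
$requiredSizeGB = 32; $requiredIops = 800

# Smallest SKU that satisfies both the size and the IOPS requirement -> P15 in this example.
$premiumSkus |
    Where-Object { $_.SizeGB -ge $requiredSizeGB -and $_.Iops -ge $requiredIops } |
    Sort-Object SizeGB |
    Select-Object -First 1
````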
-## Why is performance data missing for some/all VMs in my assessment report?
+## Why is performance data missing for some or all VMs in my assessment report?
-For "Performance-based" assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance cannot collect performance data for the on-premises VMs. Please check:
+For **Performance-based** assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance can't collect performance data for the on-premises VMs. Make sure to check:
-- If the VMs are powered on for the duration for which you are creating the assessment
-- If only memory counters are missing and you are trying to assess Hyper-V VMs, check if you have dynamic memory enabled on these VMs. There is a known issue currently due to which Azure Migrate appliance cannot collect memory utilization for such VMs.
-- If all of the performance counters are missing, ensure the port access requirements for assessment are met. Learn more about the port access requirements for [VMware](./migrate-support-matrix-vmware.md#port-access-requirements), [Hyper-V](./migrate-support-matrix-hyper-v.md#port-access) and [physical](./migrate-support-matrix-physical.md#port-access) assessment.
-Note- If any of the performance counters are missing, Azure Migrate: Discovery and assessment falls back to the allocated cores/memory on-premises and recommends a VM size accordingly.
+- If the VMs are powered on for the duration for which you're creating the assessment.
+- If only memory counters are missing and you're trying to assess Hyper-V VMs, check if you have dynamic memory enabled on these VMs. Because of a known issue, currently the Azure Migrate appliance can't collect memory utilization for such VMs.
+- If all of the performance counters are missing, ensure the port access requirements for assessment are met. Learn more about the port access requirements for [VMware](./migrate-support-matrix-vmware.md#port-access-requirements), [Hyper-V](./migrate-support-matrix-hyper-v.md#port-access), and [physical](./migrate-support-matrix-physical.md#port-access) assessment.
+If any of the performance counters are missing, Azure Migrate: Discovery and assessment falls back to the allocated cores/memory on-premises and recommends a VM size accordingly.
-## Why is performance data missing for some/all servers in my Azure VM and/or AVS assessment report?
+## Why is performance data missing for some or all servers in my Azure VM or Azure VMware Solution assessment report?
-For "Performance-based" assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance cannot collect performance data for the on-premises servers. Please check:
-- If the servers are powered on for the duration for which you are creating the assessment
-- If only memory counters are missing and you are trying to assess servers in Hyper-V environment. In this scenario, please enable dynamic memory on the servers and 'Recalculate' the assessment to reflect the latest changes. The appliance can collect memory utilization values for severs in Hyper-V environment only when the server has dynamic memory enabled.
+For **Performance-based** assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance can't collect performance data for the on-premises servers. Make sure to check:
+- If the servers are powered on for the duration for which you're creating the assessment.
+- If only memory counters are missing and you're trying to assess servers in a Hyper-V environment. In this scenario, enable dynamic memory on the servers and recalculate the assessment to reflect the latest changes. The appliance can collect memory utilization values for servers in a Hyper-V environment only when the server has dynamic memory enabled.
- If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed.

> [!Note]
> If any of the performance counters are missing, Azure Migrate: Discovery and assessment falls back to the allocated cores/memory on-premises and recommends a VM size accordingly.
-## Why is performance data missing for some/all SQL instances/databases in my Azure SQL assessment?
+## Why is performance data missing for some or all SQL instances or databases in my Azure SQL assessment?
-To ensure performance data is collected, please check:
+To ensure performance data is collected, make sure to check:
-- If the SQL Servers are powered on for the duration for which you are creating the assessment
-- If the connection status of the SQL agent in Azure Migrate is 'Connected' and check the last heartbeat
-- If Azure Migrate connection status for all SQL instances is 'Connected' in the discovered SQL instance blade
-- If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed
+- If the SQL servers are powered on for the duration for which you're creating the assessment.
+- If the connection status of the SQL agent in Azure Migrate is **Connected**, and also check the last heartbeat.
+- If the Azure Migrate connection status for all SQL instances is **Connected** in the discovered SQL instance pane.
+- If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed.
-If any of the performance counters are missing, Azure SQL assessment recommends the smallest Azure SQL configuration for that instance/database.
+If any of the performance counters are missing, the Azure SQL assessment recommends the smallest Azure SQL configuration for that instance or database.
## Why is the confidence rating of my assessment low?
-The confidence rating is calculated for "Performance-based" assessments based on the percentage of [available data points](./concepts-assessment-calculation.md#ratings) needed to compute the assessment. Below are the reasons why an assessment could get a low confidence rating:
-
-- You did not profile your environment for the duration for which you are creating the assessment. For example, if you are creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you cannot wait for the duration, please change the performance duration to a smaller period and **Recalculate** the assessment.
-- Assessment is not able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, please ensure that:
- - Servers are powered on for the duration of the assessment
- - Outbound connections on ports 443 are allowed
- - For Hyper-V Servers dynamic memory is enabled
- - The connection status of agents in Azure Migrate are 'Connected' and check the last heartbeat
- - For For Azure SQL assessments, Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance blade
+The confidence rating is calculated for **Performance-based** assessments based on the percentage of [available data points](./concepts-assessment-calculation.md#ratings) needed to compute the assessment. An assessment could get a low confidence rating for the following reasons:
- Please **Recalculate** the assessment to reflect the latest changes in confidence rating.
+- You didn't profile your environment for the duration for which you're creating the assessment. For example, if you're creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you can't wait for the duration, change the performance duration to a shorter period and recalculate the assessment.
+- Assessment isn't able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, ensure that:
+ - Servers are powered on for the duration of the assessment.
+ - Outbound connections on ports 443 are allowed.
+ - For Hyper-V Servers, dynamic memory is enabled.
+ - The connection status of agents in Azure Migrate is "Connected." Also check the last heartbeat.
+ - For Azure SQL assessments, Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance pane.
-- For Azure VM and AVS assessments, few servers were created after discovery had started. For example, if you are creating an assessment for the performance history of last one month, but few servers were created in the environment only a week ago. In this case, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-assessment-calculation.md#confidence-ratings-performance-based)
+ Recalculate the assessment to reflect the latest changes in confidence rating.
-- For Azure SQL assessments, few SQL instances or databases were created after discovery had started. For example, if you are creating an assessment for the performance history of last one month, but few SQL instances or databases were created in the environment only a week ago. In this case, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-azure-sql-assessment-calculation.md#confidence-ratings)
+- For Azure VM and Azure VMware Solution assessments, a few servers were created after discovery started. For example, say you're creating an assessment for the performance history of the past month, but a few servers were created in the environment only a week ago. In this case, the performance data for the new servers won't be available for the entire duration, and the confidence rating will be low. [Learn more](./concepts-assessment-calculation.md#confidence-ratings-performance-based).
+- For Azure SQL assessments, a few SQL instances or databases were created after discovery started. For example, say you're creating an assessment for the performance history of the past month, but a few SQL instances or databases were created in the environment only a week ago. In this case, the performance data for the new instances or databases won't be available for the entire duration, and the confidence rating will be low. [Learn more](./concepts-azure-sql-assessment-calculation.md#confidence-ratings).
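To make the relationship between collected data points and the confidence rating concrete, here's a minimal sketch. The 20% star bands are an assumption for illustration; the linked concepts article documents the authoritative thresholds.

```python
# Minimal sketch: map the percentage of available data points to a 1-5 star
# confidence rating. The 20% bands are an assumption for illustration.

def confidence_rating(expected_points: int, collected_points: int) -> int:
    """Return a star rating from the share of expected data points that were collected."""
    if expected_points == 0:
        return 0
    availability = 100 * collected_points / expected_points
    for stars, upper_bound in enumerate((20, 40, 60, 80, 100), start=1):
        if availability <= upper_bound:
            return stars
    return 5

# Example: a server profiled every 10 minutes for one week should yield about
# 1,008 data points; collecting only 400 of them lands in a low band.
print(confidence_rating(expected_points=1008, collected_points=400))  # 2
```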
## Is the operating system license included in an Azure VM assessment?
-Azure VM assessment currently considers the operating system license cost only for Windows servers. License costs for Linux servers aren't currently considered.
+An Azure VM assessment currently considers the operating system license cost only for Windows servers. License costs for Linux servers aren't currently considered.
## How does performance-based sizing work in an Azure VM assessment?
-Azure VM assessment continuously collects performance data of on-premises servers and uses it to recommend the VM SKU and disk SKU in Azure. [Learn how](concepts-assessment-calculation.md#calculate-sizing-performance-based) performance-based data is collected.
+An Azure VM assessment continuously collects performance data of on-premises servers and uses it to recommend the VM SKU and disk SKU in Azure. [Learn how](concepts-assessment-calculation.md#calculate-sizing-performance-based) performance-based data is collected.
-## Can I migrate my disks to Ultra disk using Azure Migrate?
+## Can I migrate my disks to an Ultra disk by using Azure Migrate?
-No. Currently, both Azure Migrate and Azure Site Recovery do not support migration to Ultra disks. Find steps to deploy Ultra disk [here](https://docs.microsoft.com/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal#deploy-an-ultra-disk)
+No. Currently, both Azure Migrate and Azure Site Recovery don't support migration to Ultra disks. Find steps to deploy an Ultra disk at [this website](https://docs.microsoft.com/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal#deploy-an-ultra-disk).
## Why are the provisioned IOPS and throughput in my Ultra disk more than my on-premises IOPS and throughput?
-As per the [official pricing page](https://azure.microsoft.com/pricing/details/managed-disks/), Ultra Disk is billed based on the provisioned size, provisioned IOPS and provisioned throughput. As per an example provided:
-If you provisioned a 200 GiB Ultra Disk, with 20,000 IOPS and 1,000 MB/second and deleted it after 20 hours, it will map to the disk size offer of 256 GiB and you'll be billed for the 256 GiB, 20,000 IOPS and 1,000 MB/second for 20 hours.
+As per the [official pricing page](https://azure.microsoft.com/pricing/details/managed-disks/), Ultra disk is billed based on the provisioned size, provisioned IOPS, and provisioned throughput. For example, if you provisioned a 200-GiB Ultra disk with 20,000 IOPS and 1,000 MB/second and deleted it after 20 hours, it will map to the disk size offer of 256 GiB. You'll be billed for 256 GiB, 20,000 IOPS, and 1,000 MB/second for 20 hours.
-IOPS to be provisioned = (Throughput discovered) *1024/256
+IOPS to be provisioned = (Throughput discovered) * 1024 / 256
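The arithmetic above can be sketched as follows. The list of Ultra disk size offers is an assumption for illustration (confirm the current tiers on the pricing page); the IOPS conversion mirrors the formula above.

```python
import bisect

# Assumed Ultra disk size offers in GiB, for illustration only; confirm the
# current tiers on the managed disks pricing page.
ULTRA_DISK_SIZE_TIERS = [4, 8, 16, 32, 64, 128, 256, 512] + [1024 * i for i in range(1, 65)]

def billed_size_gib(provisioned_gib: float) -> int:
    """Round the provisioned size up to the next size offer (for example, 200 GiB -> 256 GiB)."""
    return ULTRA_DISK_SIZE_TIERS[bisect.bisect_left(ULTRA_DISK_SIZE_TIERS, provisioned_gib)]

def provisioned_iops(discovered_throughput_mbps: float) -> float:
    """IOPS to be provisioned = (throughput discovered) * 1024 / 256."""
    return discovered_throughput_mbps * 1024 / 256

print(billed_size_gib(200))     # 256
print(provisioned_iops(1000))   # 4000.0 IOPS for 1,000 MB/second of discovered throughput
```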
## Does the Ultra disk recommendation consider latency?
-No, currently only disk size, total throughput and total IOPS is used for sizing and costing.
+No, currently only disk size, total throughput, and total IOPS are used for sizing and costing.
-## I can see M series supports Ultra disk, but in my assessment where Ultra disk was recommended, it says "No VM found for this location"?
+## I can see M series supports Ultra disk, but in my assessment where Ultra disk was recommended, it says "No VM found for this location"
-This is possible as not all VM sizes that support Ultra disk are present in all Ultra disk supported regions. Change the target assessment region to get the VM size for this server.
+This result is possible because not all VM sizes that support Ultra disk are present in all Ultra disk supported regions. Change the target assessment region to get the VM size for this server.
-## Why is my assessment showing a warning that it was created with an invalid combination of Reserved Instances, VM uptime and Discount (%)?
+## Why is my assessment showing a warning that it was created with an invalid combination of Reserved Instances, VM uptime, and Discount (%)?
-When you select 'Reserved instances', the 'Discount (%)' and 'VM uptime' properties are not applicable. As your assessment was created with an invalid combination of these properties, the edit and recalculate buttons are disabled. Please create a new assessment. [Learn more](./concepts-assessment-calculation.md#whats-an-assessment).
+When you select **Reserved Instances**, the **Discount (%)** and **VM uptime** properties aren't applicable. As your assessment was created with an invalid combination of these properties, the **Edit** and **Recalculate** buttons are disabled. Create a new assessment. [Learn more](./concepts-assessment-calculation.md#whats-an-assessment).
-## I do not see performance data for some network adapters on my physical servers
+## I don't see performance data for some network adapters on my physical servers
-This can happen if the physical server has Hyper-V virtualization enabled. On these servers, due to a product gap, Azure Migrate currently discovers both the physical and virtual network adapters. The network throughput is captured only on the virtual network adapters discovered.
+This issue can happen if the physical server has Hyper-V virtualization enabled. On these servers, because of a product gap, Azure Migrate currently discovers both the physical and virtual network adapters. The network throughput is captured only on the virtual network adapters discovered.
-## Recommended Azure VM SKU for my physical server is oversized
+## The recommended Azure VM SKU for my physical server is oversized
-This can happen if the physical server has Hyper-V virtualization enabled. On these servers, Azure Migrate currently discovers both the physical and virtual network adapters. Hence, the no. of network adapters discovered is higher than actual. As Azure VM assessment picks an Azure VM that can support the required number of network adapters, this can potentially result in an oversized VM. [Learn more](./concepts-assessment-calculation.md#calculating-sizing) about the impact of no. of network adapters on sizing. This is a product gap that will be addressed going forward.
+This issue can happen if the physical server has Hyper-V virtualization enabled. On these servers, Azure Migrate currently discovers both the physical and virtual network adapters. As a result, the number of network adapters discovered is higher than the actual number. The Azure VM assessment picks an Azure VM that can support the required number of network adapters, which can potentially result in an oversized VM, as shown in the sketch that follows. [Learn more](./concepts-assessment-calculation.md#calculating-sizing) about the impact of the number of network adapters on sizing. This product gap will be addressed going forward.
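Here's a minimal sketch of how the required NIC count constrains the candidate VM sizes. The SKU list and its NIC limits are hypothetical values for illustration only; the point is that an inflated NIC count (physical plus virtual adapters) pushes the recommendation to a larger size, or to "Not ready" if no size supports the count.

```python
# Hypothetical SKU catalog for illustration only; a real assessment evaluates
# the full Azure VM size list for the target region.
CANDIDATE_SKUS = [
    {"name": "Standard_D2s_v3",  "cores": 2,  "memory_gb": 8,   "max_nics": 2},
    {"name": "Standard_D8s_v3",  "cores": 8,  "memory_gb": 32,  "max_nics": 4},
    {"name": "Standard_D64s_v3", "cores": 64, "memory_gb": 256, "max_nics": 8},
]

def recommend_sku(required_cores: int, required_memory_gb: int, required_nics: int) -> str:
    """Pick the smallest SKU that satisfies cores, memory, and NIC count."""
    eligible = [s for s in CANDIDATE_SKUS
                if s["cores"] >= required_cores
                and s["memory_gb"] >= required_memory_gb
                and s["max_nics"] >= required_nics]
    if not eligible:
        return "Not ready"  # no size supports the discovered NIC count
    return min(eligible, key=lambda s: (s["cores"], s["memory_gb"]))["name"]

# A 2-core, 8-GB server with 2 NICs fits the smallest size, but doubling the
# NIC count (physical + virtual adapters discovered) forces a larger SKU.
print(recommend_sku(2, 8, 2))   # Standard_D2s_v3
print(recommend_sku(2, 8, 4))   # Standard_D8s_v3 (oversized)
```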
-## Readiness category "Not ready" for my physical server
+## The readiness category is marked "Not ready" for my physical server
-Readiness category may be incorrectly marked as "Not Ready" in the case of a physical server that has Hyper-V virtualization enabled. On these servers, due to a product gap, Azure Migrate currently discovers both the physical and virtual adapters. Hence, the no. of network adapters discovered is higher than actual. In both as-on-premises and performance-based assessments, Azure VM assessment picks an Azure VM that can support the required number of network adapters. If the number of network adapters is discovered to be being higher than 32, the maximum no. of NICs supported on Azure VMs, the server will be marked "Not ready". [Learn more](./concepts-assessment-calculation.md#calculating-sizing) about the impact of no. of NICs on sizing.
+The readiness category might be incorrectly marked as "Not ready" in the case of a physical server that has Hyper-V virtualization enabled. On these servers, because of a product gap, Azure Migrate currently discovers both the physical and virtual adapters. As a result, the number of network adapters discovered