Updates from: 02/12/2021 04:16:39
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/language-customization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/language-customization.md
@@ -177,7 +177,7 @@ https://wingtiptoysb2c.blob.core.windows.net/fr/wingtip/unified.html
## Add custom languages
-You can also add languages that Microsoft currently does not provide translations for. You'll need to provide the translations for all the strings in the user flow. Language and locale codes are limited to those in the ISO 639-1 standard.
+You can also add languages that Microsoft currently does not provide translations for. You'll need to provide the translations for all the strings in the user flow. Language and locale codes are limited to those in the ISO 639-1 standard. The locale code format should be "ISO_639-1_code"-"CountryCode", for example en-GB. For more information on locale ID formats, see https://docs.microsoft.com/openspecs/office_standards/ms-oe376/6c085406-a698-4e12-9d4d-c3b0ee3dbc4a
1. In your Azure AD B2C tenant, select **User flows**. 2. Click the user flow where you want to add custom languages, and then click **Languages**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-mfasettings.md
@@ -62,9 +62,7 @@ To block a user, complete the following steps:
1. Browse to **Azure Active Directory** > **Security** > **MFA** > **Block/unblock users**. 1. Select **Add** to block a user.
-1. Select the **Replication Group**, then choose *Azure Default*.
-
- Enter the username for the blocked user as `username\@domain.com`, then provide a comment in the *Reason* field.
+1. Enter the username for the blocked user as `username@domain.com`, then provide a comment in the *Reason* field.
1. When ready, select **OK** to block the user. ### Unblock a user
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-conditional-access-users-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
@@ -39,6 +39,9 @@ The following options are available to include when creating a Conditional Acces
- Users and groups - Allows targeting of specific sets of users. For example, organizations can select a group that contains all members of the HR department when an HR app is selected as the cloud app. A group can be any type of group in Azure AD, including dynamic or assigned security and distribution groups. Policy will be applied to nested users and groups.
+> [!IMPORTANT]
+> There is a limit to the number of individual users that can be added directly to a Conditional Access policy. If a large number of individual users need to be added directly to a Conditional Access policy, we recommend placing the users in a group and assigning the group to the Conditional Access policy instead.
+ > [!WARNING] > If users or groups are a member of over 2048 groups their access may be blocked. This limit applies to both direct and nested group membership.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-js-initializing-client-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-js-initializing-client-applications.md
@@ -151,9 +151,9 @@ This MSAL.js 2.x code sample on GitHub demonstrates instantiation of a [PublicCl
<!-- LINKS - External --> [msal-browser]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/msal-browser/ [msal-core]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/msal-core/
-[msal-js-acquiretokenredirect]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/msal-core/classes/_useragentapplication_.useragentapplication.html#acquiretokenredirect
-[msal-js-configuration]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/msal-core/modules/_configuration_.html
-[msal-js-handleredirectpromise]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/msal-browser/classes/_src_app_publicclientapplication_.publicclientapplication.html#handleredirectpromise
-[msal-js-loginredirect]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/msal-core/classes/_useragentapplication_.useragentapplication.html#loginredirect
-[msal-js-publicclientapplication]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/msal-browser/classes/_src_app_publicclientapplication_.publicclientapplication.html
-[msal-js-useragentapplication]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/msal-core/modules/_useragentapplication_.html
+[msal-js-acquiretokenredirect]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal.useragentapplication.html#acquiretokenredirect
+[msal-js-configuration]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/modules/_azure_msal.html#configuration
+[msal-js-handleredirectpromise]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.publicclientapplication.html#handleredirectpromise
+[msal-js-loginredirect]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal.useragentapplication.html#loginredirect
+[msal-js-publicclientapplication]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.publicclientapplication.html
+[msal-js-useragentapplication]: https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal.useragentapplication.html
active-directory https://docs.microsoft.com/en-us/azure/active-directory/devices/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/faq.md
@@ -301,6 +301,11 @@ If a password is changed outside the corporate network (for example, by using Az
- For iOS and Android, you can use the Microsoft Authenticator application **Settings** > **Device Registration** and select **Unregister device**. - For macOS, you can use the Microsoft Intune Company Portal application to unenroll the device from management and remove any registration.
+For Windows 10 devices, this process can be automated with the [Workplace Join (WPJ) removal tool](https://download.microsoft.com/download/8/e/f/8ef13ae0-6aa8-48a2-8697-5b1711134730/WPJCleanUp.zip).
+
+> [!NOTE]
+> This tool removes all SSO accounts on the device. After this operation, all applications will lose SSO state, and the device will be unenrolled from management tools (MDM) and unregistered from the cloud. The next time an application tries to sign in, users will be asked to add the account again.
+ ### Q: How can I block users from adding additional work accounts (Azure AD registered) on my corporate Windows 10 devices?
active-directory https://docs.microsoft.com/en-us/azure/active-directory/devices/troubleshoot-hybrid-join-windows-legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-legacy.md
@@ -38,6 +38,7 @@ This article provides you with troubleshooting guidance on how to resolve potent
**What you should know:** - Hybrid Azure AD join for downlevel Windows devices works slightly differently than it does in Windows 10. Many customers do not realize that they need AD FS (for federated domains) or Seamless SSO configured (for managed domains).
+- Seamless SSO doesn't work in private browsing mode on Firefox and Microsoft Edge browsers. It also doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode.
- For customers with federated domains, if the Service Connection Point (SCP) was configured such that it points to the managed domain name (for example, contoso.onmicrosoft.com, instead of contoso.com), then Hybrid Azure AD Join for downlevel Windows devices will not work. - The same physical device appears multiple times in Azure AD when multiple domain users sign-in the downlevel hybrid Azure AD joined devices. For example, if *jdoe* and *jharnett* sign-in to a device, a separate registration (DeviceID) is created for each of them in the **USER** info tab. - You can also get multiple entries for a device on the user info tab because of a reinstallation of the operating system or a manual re-registration.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/groups-naming-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-naming-policy.md
@@ -73,8 +73,6 @@ To configure naming policy, one of the following roles is required:
Selected administrators can be exempted from these policies, across all group workloads and endpoints, so that they can create groups using blocked words and with their own naming conventions. The following are the list of administrator roles exempted from the group naming policy. - Global administrator-- Partner Tier 1 Support-- Partner Tier 2 Support - User administrator ## Configure naming policy in Azure portal
active-directory https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/licensing-service-plan-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
@@ -5,7 +5,7 @@ description: Identifier map to manage Azure Active Directory licensing in the Az
keywords: Azure Active Directory licensing service plans documentationcenter: ''-+ editor: ''
@@ -14,8 +14,8 @@
Last updated 12/02/2020--++ #Nick Kramer is minding this reference until it can be automated
@@ -42,8 +42,7 @@ When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| AZURE ACTIVE DIRECTORY PREMIUM P1 | AAD_PREMIUM | 078d2b04-f1bd-4111-bbd4-b4b1b354cef4 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9) | | AZURE ACTIVE DIRECTORY PREMIUM P2 | AAD_PREMIUM_P2 | 84a661c4-e949-4bd2-a560-ed7766fcaf2b | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998) | | AZURE INFORMATION PROTECTION PLAN 1 | RIGHTSMANAGEMENT | c52ea49f-fe5d-4e95-93ba-1de91d380f89 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) | AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) |
-| COMMON AREA PHONE | MCOCAP | 295a8eb0-f78d-45c7-8b5b-1eed5ed02dff | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>
-MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) |
+| COMMON AREA PHONE | MCOCAP | 295a8eb0-f78d-45c7-8b5b-1eed5ed02dff | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) |
| COMMUNICATIONS CREDITS | MCOPSTNC | 47794cd0-f0e5-45c5-9033-2eb6b5fc84e0 | MCOPSTNC (505e180f-f7e0-4b65-91d4-00d670bbd18c) | COMMUNICATIONS CREDITS (505e180f-f7e0-4b65-91d4-00d670bbd18c) | | DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN ENTERPRISE EDITION | DYN365_ENTERPRISE_PLAN1 | ea126fc5-a19e-42e2-a731-da9d437bffcf | DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>FLOW FOR DYNAMICS 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>PROJECT ONLINE SERVICE (fe71d6c3-a2ea-4499-9778-da042bf08063) | | DYNAMICS 365 FOR CUSTOMER SERVICE ENTERPRISE EDITION | DYN365_ENTERPRISE_CUSTOMER_SERVICE | 749742bf-0d37-4158-a120-33567104deeb | DYN365_ENTERPRISE_CUSTOMER_SERVICE (99340b49-fb81-4b1e-976b-8f2ae8e9394f)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>DYNAMICS 365 FOR CUSTOMER SERVICE (99340b49-fb81-4b1e-976b-8f2ae8e9394f)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
@@ -97,7 +96,7 @@ MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | MICROSOFT 365 PHONE SYSTEM
| MICROSOFT 365 PHONE SYSTEM FOR TELSTRA | MCOEV_TELSTRA | ffaf2d68-1c95-4eb3-9ddd-59b81fba0f61 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | | MICROSOFT 365 PHONE SYSTEM_USGOV_DOD | MCOEV_USGOV_DOD | b0e7de67-e503-4934-b729-53d595ba5cd1 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | | MICROSOFT 365 PHONE SYSTEM_USGOV_GCCHIGH | MCOEV_USGOV_GCCHIGH | 985fcb26-7b94-475b-b512-89356697be71 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
-| MICROSOFT 365 PHONE SYSTEM - VIRTUAL USER | MCOEV_VIRTUALUSER | 440eaaa8-b3e0-484b-a8be-62870b9ba70a | MCOEV_VIRTUALUSER (f47330e9-c134-43b3-9993-e7f004506889) | MICROSOFT 365 PHONE SYSTEM VIRTUAL USER (f47330e9-c134-43b3-9993-e7f004506889)
+| MICROSOFT 365 PHONE SYSTEM - VIRTUAL USER | PHONESYSTEM_VIRTUALUSER | 440eaaa8-b3e0-484b-a8be-62870b9ba70a | MCOEV_VIRTUALUSER (f47330e9-c134-43b3-9993-e7f004506889) | MICROSOFT 365 PHONE SYSTEM VIRTUAL USER (f47330e9-c134-43b3-9993-e7f004506889)
| Microsoft Defender Advanced Threat Protection | WIN_DEF_ATP | 111046dd-295b-4d6d-9724-d52ac90bd1f2 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender Advanced Threat Protection (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | MICROSOFT DYNAMICS CRM ONLINE BASIC | CRMPLAN2 | 906af65a-2970-46d5-9b58-4e9aa50f0657 | CRMPLAN2 (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE BASIC(bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | | MICROSOFT DYNAMICS CRM ONLINE | CRMSTANDARD | d17b27af-3f49-4822-99f9-56a661538792 | CRMSTANDARD (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MDM_SALES_COLLABORATION (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>NBPROFESSIONALFORCRM (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE PROFESSIONAL(f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS MARKETING SALES COLLABORATION - ELIGIBILITY CRITERIA APPLY (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>MICROSOFT SOCIAL ENGAGEMENT PROFESSIONAL - ELIGIBILITY CRITERIA APPLY (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
active-directory https://docs.microsoft.com/en-us/azure/active-directory/external-identities/reset-redemption-status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/reset-redemption-status.md
@@ -25,7 +25,7 @@ After a guest user has redeemed your invitation for B2B collaboration, there mig
- The user has moved to a different company, but they still need the same access to your resources - The user's responsibilities have been passed along to another user
-To manage these scenarios previously, you had to manually delete the guest user's account from your directory and reinvite the user. Now you can use PowerShell or the Microsoft Graph invitation API to reset the user's redemption status and reinvite the user while retaining the user's object ID, group memberships, and app assignments. When the user redeems the new invitation, the new email address becomes the user's UPN. The user can subsequently sign in using the new email or an email you've added to the `otherMails` property of the user object.
+To manage these scenarios previously, you had to manually delete the guest user's account from your directory and reinvite the user. Now you can use PowerShell or the Microsoft Graph invitation API to reset the user's redemption status and reinvite the user while retaining the user's object ID, group memberships, and app assignments. When the user redeems the new invitation, the UPN of the user doesn't change, but the user's sign-in name changes to the new email. The user can subsequently sign in using the new email or an email you've added to the `otherMails` property of the user object.
## Use PowerShell to reset redemption status
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/8-secure-access-sensitivity-labels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/8-secure-access-sensitivity-labels.md
@@ -61,7 +61,7 @@ As you think about governing external access to your content, determine the foll
* What defaults should be in place for HBI data, sites, or Microsoft 365 Groups?
-* Where will you use sensitivity labels to [label and monitor](/microsoft-365/compliance/label-analytics?view=o365-worldwide), versus to [enforce encryption](/microsoft-365/compliance/encryption-sensitivity-labels?view=o365-worldwide) or to [enforce container access restrictions](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites?view=o365-worldwide)?
+* Where will you use sensitivity labels to [label and monitor](/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide), versus to [enforce encryption](/microsoft-365/compliance/encryption-sensitivity-labels?view=o365-worldwide) or to [enforce container access restrictions](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites?view=o365-worldwide)?
**For email and content**
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/service-accounts-computer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-computer.md
@@ -0,0 +1,98 @@
+
+ Title: Securing computer accounts | Azure Active Directory
+description: A guide to securing on-premises computer accounts.
+++++++ Last updated : 2/15/2021++++++
+# Securing computer accounts
+
+The computer account, or LocalSystem account, is a built-in, highly privileged account with access to virtually all resources on the local computer. This account is not associated with any signed-on user account. Services running as LocalSystem access network resources by presenting the computer's credentials to remote servers. It presents credentials in the form <domain_name>\<computer_name>$. A computer account's pre-defined name is NT AUTHORITY\SYSTEM. It can be used to start a service and provide security context for that service.
+
+![Picture 4](./media/securing-service-accounts/secure-computer-accounts-image-1.png)
+
+## Benefits of using the computer account
+
+The computer account provides the following benefits.
+
+* **Unrestricted local access**: The computer account provides complete access to the machine's local resources.
+
+* **Automatic password management**: The computer account removes the need for you to manually change passwords. Instead, this account is a member of Active Directory and the account password is changed automatically. It also eliminates the need to register the service principal name for the service.
+
+* **Limited access rights off-machine**: The default Access Control List (ACL) in Active Directory Domain Services permits minimal access for computer accounts. If this service were to be hacked, it would only have limited access to resources on your network.
+
+## Assess security posture of computer accounts
+
+The following table shows potential challenges and associated mitigations when using computer accounts.
+
+| Issues| Mitigations |
+| - | - |
+| Computer accounts are subject to deletion and recreation when the computer leaves and rejoins the domain.| Validate the need to add a computer to an AD group and verify which computer account has been added to a group using the example scripts provided on this page.|
+| If you add a computer account to a group, all services running as LocalSystem on that computer are given access rights of the group.| Be selective of the group memberships of your computer account. Avoid making computer accounts members of any domain administrator groups because the associated service has complete access to Active Directory Domain Services. |
+| Improper network defaults for LocalSystem| Do not assume that the computer account has the default limited access to network resources. Instead, check group memberships for this account carefully. |
+| Unknown services running as LocalSystem| Ensure that all services running under the LocalSystem account are Microsoft services or trusted services from third parties. |
++
+## Find services running under the computer account
+
+Use the following PowerShell cmdlet to find services running under the LocalSystem context.
+
+```powershell
+
+Get-WmiObject win32_service | select Name, StartName | Where-Object {($_.StartName -eq "LocalSystem")}
+```
+
+**Find computer accounts that are members of a specific group**
+
+Use the following PowerShell cmdlet to find computer accounts that are members of a specific group.
+
+```powershell
+Get-ADComputer -Filter {Name -Like "*"} -Properties MemberOf | Where-Object {[STRING]$_.MemberOf -like "Your_Group_Name_here*"} | Select Name, MemberOf
+```
+
+**Find computer accounts that are members of privileged groups**
+
+Use the following PowerShell cmdlet to find computer accounts that are members of identity administrator groups (Domain Admins, Enterprise Admins, Administrators).
+
+```powershell
+Get-ADGroupMember -Identity Administrators -Recursive | Where objectClass -eq "computer"
+```
+## Move from computer accounts
+
+> [!IMPORTANT]
+> Computer accounts are highly privileged accounts and should be used only when your service needs unrestricted access to local resources on the machine, and you cannot use a managed service account (MSA).
+
+* Check with your service owner if their service can be run using an MSA, and use a group managed service account (gMSA) or a standalone managed service account (sMSA) if your service supports it.
+
+* Use a domain user account with just the privileges needed to run your service.
+
+## Next Steps
+
+See the following articles on securing service accounts
+
+* [Introduction to on-premises service accounts](service-accounts-on-premises.md)
+
+* [Secure group managed service accounts](service-accounts-group-managed.md)
+
+* [Secure standalone managed service accounts](service-accounts-standalone-managed.md)
+
+* [Secure computer accounts](service-accounts-computer.md)
+
+* [Secure user accounts](service-accounts-user-on-premises.md)
+
+* [Govern on-premises service accounts](service-accounts-govern-on-premises.md)
+
+
+
+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/service-accounts-govern-on-premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-govern-on-premises.md
@@ -0,0 +1,208 @@
+
+ Title: Governing on-premises service accounts | Azure Active Directory
+description: A guide to creating and running an account lifecycle process for service accounts
+++++++ Last updated : 2/15/2021++++++
+# Governing on-premises service accounts
+
+There are four types of on-premises service accounts in Windows Active Directory:
+
+* [Group managed service accounts](service-accounts-group-managed.md) (gMSAs)
+
+* [standalone managed service accounts](service-accounts-standalone-managed.md) (sMSAs)
+
+* [Computer accounts](service-accounts-computer.md)
+
+* [User accounts functioning as service accounts](service-accounts-user-on-premises.md)
++
+It is critical to govern service accounts closely to:
+
+* Protect service accounts based on their use-case requirements and purpose.
+
+* Manage the lifecycle of service accounts and their credentials.
+
+* Assess service accounts based on the risk they'll be exposed to and the permissions they carry.
+
+* Ensure that Active Directory and Azure Active Directory have no stale service accounts with potentially far-reaching permissions.
+
+## Principles for creating a new service account
+
+Use the following criteria when creating a new service account.
+
+| Principles| Considerations |
+| - |- |
+| Service account mapping| Tie the service account to a single service, application, or script. |
+| Ownership| Ensure that there's an owner who requests and assumes responsibility for the account. |
+| Scope| Define the scope clearly and anticipate usage duration for the service account. |
+| Purpose| Create service accounts for a single specific purpose. |
+| Privilege| Apply the principle of least privilege by: <br>Never assigning them to built-in groups like administrators.<br> Removing local machine privileges where appropriate.<br>Tailoring access and using Active Directory delegation for directory access.<br>Using granular access permissions.<br>Setting account expirations and location-based restrictions on user-based service accounts |
+| Monitor and audit use| Monitor sign-in data and ensure it matches the intended usage. Set alerts for anomalous usage. |
+
+### Enforce least privilege for user accounts and limit account overuse
+
+Use the following settings with user accounts that are used as service accounts; a combined PowerShell sketch follows the list:
+
+* [**Account Expiry**](https://docs.microsoft.com/powershell/module/activedirectory/set-adaccountexpiration?view=winserver2012-ps): set the service account to automatically expire a set time after its review period unless it's determined that it should continue
+
+* **LogonWorkstations**: restrict permissions for where the service account can sign in. If it runs locally on a machine and accesses only resources on that machine, restrict it from logging on anywhere else.
+
+* [**Cannot change password**](https://docs.microsoft.com/powershell/module/addsadministration/set-aduser?view=win10-ps): prevent the service account from changing its own password by setting the parameter to true.
+
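+The settings above can be applied with the Active Directory PowerShell module. The following is a minimal sketch only; the account name `svc.hr-web`, the date, and the workstation name are placeholder assumptions:
+
+```powershell
+# Expire the account at the end of its review period (adjust the date to your policy).
+Set-ADAccountExpiration -Identity "svc.hr-web" -DateTime "2021-12-31"
+
+# Restrict where the account can sign in, and prevent it from changing its own password.
+Set-ADUser -Identity "svc.hr-web" -LogonWorkstations "HR-WEBSERVER" -CannotChangePassword $true
+```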
+
+## Build a lifecycle management process
+
+To maintain security of your service accounts, you must manage them from the time you identify the need until they're decommissioned.
+
+Use the following process for lifecycle management of service accounts:
+
+1. Collect usage information for the account
+1. Onboard the service account and app to configuration management database (CMDB)
+1. Perform risk assessment or formal review
+1. Create the service account and apply restrictions.
+1. Schedule and perform recurring reviews. Adjust permissions and scopes as necessary.
+1. Deprovision account when appropriate.
+
+### Collect usage information for the service account
+
+Collect the relevant business information for each service account. The below table shows minimum information to be collected, but you should collect everything necessary to make the business case for the accounts' existence.
+
+| Data| Details |
+| - | - |
+| Owner| User or group that is accountable for the service account |
+| Purpose| Purpose of the service account |
+| Permissions (Scopes)| Expected set of permissions |
+| Configuration management database (CMDB) links| Cross-link service account with target script/application and owner(s) |
+| Risk| Risk and business impact scoring based on security risk assessment |
+| Lifetime| Anticipated maximum lifetime to enable scheduling of account expiration or recertification |
++
+
+
+Ideally, make the request for an account self-service, and require the relevant information. The owner can be an application or business owner, an IT member, or an infrastructure owner. Using a tool such as Microsoft Forms for the request and its associated information makes it easy to port the data to your CMDB inventory tool if the account is approved.
+
+### Onboard service account to CMDB
+
+Store the collected information in a CMDB-type application. In addition to the business information, include all dependencies to other infrastructure, apps, and processes. This central repository will make it easier to:
+
+* Assess risk.
+
+* Configure the service account with required restrictions.
+
+* Understand relevant functional and security dependencies.
+
+* Conduct regular reviews for security and continued need.
+
+* Contact the owner(s) for reviewing, retiring, and changing the service account.
+
+Consider a service account that is used to run a web site and has privileges to connect to one or more SQL databases. Information stored in your CMDB for this service account could be:
+
+|Data | Details|
+| - | - |
+| Owner, Deputy| John Bloom, Anna Mayers |
+| Purpose| Run the HR webpage and connect to HR-databases. Can impersonate end user when accessing databases. |
+| Permissions, Scopes| HR-WEBServer: log on locally, run web page<br>HR-SQL1: log on locally, Read on all HR* database<br>HR-SQL2: log on locally, READ on SALARY* database |
+| Cost Center| 883944 |
+| Risk Assessed| Medium; Business Impact: Medium; private information; Medium |
+| Account Restrictions| Log on to: only aforementioned servers; Cannot change password; MBI-Password Policy; |
+| Lifetime| unrestricted |
+| Review Cycle| Bi-annually (by owner, by security team, by privacy) |
+
+### Perform risk assessment or formal review of service account usage
+
+Given its permissions and purpose, assess the risk the account may pose to its associated application or service and to your infrastructure if it is compromised. Consider both direct and indirect risk.
+
+* What would an adversary gain direct access to?
+
+* What other information or systems can the service account access?
+
+* Can the account be used to grant additional permissions?
+
+* How will you know when permissions change?
+
+The risk assessment, once conducted and documented, may have impact on:
+
+* Account restrictions
+
+* Account lifetime
+
+* Account review requirements (cadence and reviewers)
+
+### Create a service account and apply account restrictions
+
+Create the service account only after the relevant information is documented in your CMDB and you have performed a risk assessment. Account restrictions should be aligned with the risk assessment. Consider the following restrictions when relevant to your assessment; a PowerShell sketch follows the list:
+
+* [Account Expiry](https://docs.microsoft.com/powershell/module/activedirectory/set-adaccountexpiration?view=winserver2012-ps)
+
+ * For all user accounts used as service accounts, define a realistic and definite end date for use. Set this using the "Account Expires" flag. For more details, refer to [Set-ADAccountExpiration](https://docs.microsoft.com/powershell/module/addsadministration/set-adaccountexpiration?view=win10-ps).
+
+* Log On To ([LogonWorkstation](https://docs.microsoft.com/powershell/module/addsadministration/set-aduser?view=win10-ps))
+
+* [Password Policy](https://docs.microsoft.com/azure/active-directory-domain-services/password-policy) requirements
+
+* Creation in an [OU location](https://docs.microsoft.com/windows-server/identity/ad-ds/plan/delegating-administration-of-account-ous-and-resource-ous) that ensures management only for privileged users
+
+* Set up and collect auditing [that detects changes](https://docs.microsoft.com/windows/security/threat-protection/auditing/audit-directory-service-changes) to the service account, and [service account use](https://www.manageengine.com/products/active-directory-audit/how-to/audit-kerberos-authentication-events.html).
+
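+The restrictions above might be applied at creation time along the following lines. This is a sketch only; the OU path, account name, expiration date, and workstation name are placeholder assumptions:
+
+```powershell
+# Create the account in an OU that only privileged users can manage, with an
+# expiration date, an initial password, and no ability to change its own password.
+New-ADUser -Name "svc.hr-web" `
+    -Path "OU=Service Accounts,DC=contoso,DC=com" `
+    -AccountPassword (Read-Host -AsSecureString "Initial password") `
+    -AccountExpirationDate "2022-06-30" `
+    -CannotChangePassword $true `
+    -Enabled $true
+
+# Restrict where the account is allowed to sign in.
+Set-ADUser -Identity "svc.hr-web" -LogonWorkstations "HR-WEBSERVER"
+```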
+When ready to put into production, grant access to the service account securely.
+
+### Schedule regular reviews of service accounts
+
+Set up regular reviews of service accounts classified as medium and high risk. Reviews should include:
+
+* Owner attestation to the continued need for the account, and justification of privileges and scopes.
+
+* Review by privacy and security teams, including evaluation of upstream and downstream connections.
+
+* Data from audits, ensuring the account is being used only for its intended purposes
+
+### Deprovision service accounts
+
+In your deprovisioning process, first remove permissions and monitor, then remove the account if appropriate.
+
+Deprovision service accounts when:
+
+* The script or application the service account was created for is retired.
+
+* The function within the script or application, which the service account is used for (for example, access to a specific resource) is retired.
+
+* The service account has been replaced with a different service account.
+
+After removing all permissions, use the following process to remove the account; a PowerShell sketch follows the steps.
+
+1. Once the associated application or script is deprovisioned, monitor sign-ins and resource access for the associated service account(s) to be sure it is not used in another process. If you are sure it is no longer needed, go to the next step.
+
+2. Disable the service account from signing in and be sure it is no longer needed. Create a business policy for the time accounts should remain disabled.
+
+3. Delete the service account after the disabled period defined in your business policy has elapsed.
+
+ * For MSAs, you can [uninstall it](https://docs.microsoft.com/powershell/module/activedirectory/uninstall-adserviceaccount?view=winserver2012-ps) using PowerShell or delete manually from the managed service account container.
+
+ * For computer or user accounts, you can manually delete the account in Active Directory.
+
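+As a sketch of the steps above, assuming a user account named `svc.hr-web` and an MSA named `svc.app` (both placeholders) are being retired:
+
+```powershell
+# Step 2: prevent further sign-ins while you confirm the account is no longer needed.
+Disable-ADAccount -Identity "svc.hr-web"
+
+# Step 3: after the disabled period in your policy has passed, delete the account.
+Remove-ADUser -Identity "svc.hr-web"
+
+# For an MSA, uninstall it from its host and then remove it from the directory.
+Uninstall-ADServiceAccount -Identity "svc.app"
+Remove-ADServiceAccount -Identity "svc.app"
+```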
+## Next steps
+See the following articles on securing service accounts
+
+* [Introduction to on-premises service accounts](service-accounts-on-premises.md)
+
+* [Secure group managed service accounts](service-accounts-group-managed.md)
+
+* [Secure standalone managed service accounts](service-accounts-standalone-managed.md)
+
+* [Secure computer accounts](service-accounts-computer.md)
+
+* [Secure user accounts](service-accounts-user-on-premises.md)
+
+* [Govern on-premises service accounts](service-accounts-govern-on-premises.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/service-accounts-group-managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-group-managed.md
@@ -0,0 +1,141 @@
+
+ Title: Securing group managed service accounts | Azure Active Directory
+description: A guide to securing group managed service accounts computer accounts.
+++++++ Last updated : 2/15/2021++++++
+# Securing group managed service accounts
+
+Group managed service accounts (gMSAs) are managed domain accounts that are used for securing services. gMSAs can run on a single server, or in a server farm, such as systems behind a Network Load Balancer (NLB) or an Internet Information Services (IIS) server. Once you configure your services to use a gMSA principal, password management for that account is handled by Windows.
+
+## Benefits of using gMSAs
+
+gMSAs offer a single identity solution with greater security while reducing administrative overhead by:
+
+* **Setting strong passwords**. gMSAs use 240 byte randomly generated complex passwords. The complexity and length of gMSA passwords minimizes the likelihood of a service getting compromised by brute force or dictionary attacks.
+
+* **Cycling passwords regularly**. gMSAs shift password management to Windows, which changes the password every 30 days. Service and domain administrators no longer need to schedule password changes or manage service outages to keep service accounts secure.
+
+* **Supporting deployment to server farms**. The ability to deploy gMSAs to multiple servers allows for the support of load balanced solutions where multiple hosts run the same service.
+
+* **Supporting simplified Service Principal Name (SPN) management**. You can set up SPN using PowerShell at the time of account creation. In addition, services that support automatic SPN registrations may do so against the gMSA, provided gMSA permissions are correctly set.
+
+## When to use gMSAs
+
+Use gMSAs as the preferred account type for on-premises services unless a service, such as Failover Clustering, doesn't support it.
+
+> [!IMPORTANT]
+> You must test your service with gMSAs prior to deployment into production. To do so, set up a test environment and ensure the application can use the gMSA, and access the resources it needs to access. For more information, see [Support for group managed service accounts](https://docs.microsoft.com/system-center/scom/support-group-managed-service-accounts?view=sc-om-2019).
++
+If a service doesn't support the use of gMSAs, your next best option is to use a standalone Managed Service Account (sMSA). sMSAs provide the same functionality as a gMSA, but are intended for deployment on a single server only.
+
+If neither a gMSA nor an sMSA is supported by your service, then the service must be configured to run as a standard user account. Service and domain administrators are required to observe strong password management processes to keep the account secure.
+
+## Assess the security posture of gMSAs
+
+gMSAs are inherently more secure than standard user accounts, which require ongoing password management. However, it's important to consider gMSAs' scope of access as you look at their overall security posture.
+
+The following table shows potential security issues and mitigations for using gMSAs.
+
+| Security issues| Mitigations |
+| - | - |
+| gMSA is a member of privileged groups. | Review your group memberships. To do so, you can create a PowerShell script to enumerate all group memberships, and then filter the resulting CSV file by the names of your gMSAs (see the sketch after this table). <br>Remove the gMSA from privileged groups.<br> Grant the gMSA only the rights and permissions it requires to run its service (consult with your service vendor).
+| gMSA has read/write access to sensitive resources. | Audit access to sensitive resources. Archive audit logs to a SIEM, for example Azure Log Analytics or Azure Sentinel, for analysis. Remove unnecessary resource permissions if an undesirable level of access is detected. |
++
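+A minimal sketch of the enumeration approach described in the table above; the output path is a placeholder assumption:
+
+```powershell
+# Export each gMSA together with its group memberships so the CSV can be filtered and reviewed.
+Get-ADServiceAccount -Filter * -Properties MemberOf |
+    Where-Object { $_.ObjectClass -eq "msDS-GroupManagedServiceAccount" } |
+    Select-Object Name, @{Name = "MemberOf"; Expression = { $_.MemberOf -join ";" }} |
+    Export-Csv -Path "C:\Temp\gmsa-memberships.csv" -NoTypeInformation
+```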
+## Find gMSAs
+
+Your organization may already have gMSAs created. Use the PowerShell commands in this section to retrieve these accounts.
+
+To work effectively, gMSAs must be in the Managed Service Accounts organizational unit (OU).
+
+
+![Screen shot of managed service account OU.](./media/securing-service-accounts/secure-gmsa-image-1.png)
+
+To find service MSAs that may not be in that container, use the following commands.
+
+**To find all service accounts, including gMSAs and sMSAs:**
++
+```powershell
+
+Get-ADServiceAccount -Filter *
+
+# This PowerShell cmdlet will return all Managed Service Accounts (both gMSAs and sMSAs). An administrator can differentiate between the two by examining the ObjectClass attribute on returned accounts.
+
+# For gMSA accounts, ObjectClass = msDS-GroupManagedServiceAccount
+
+# For sMSA accounts, ObjectClass = msDS-ManagedServiceAccount
+
+# To filter results to only gMSAs:
+
+Get-ADServiceAccount -Filter * | Where-Object { $_.ObjectClass -eq "msDS-GroupManagedServiceAccount" }
+```
+
+## Manage gMSAs
+
+You can use the following Active Directory PowerShell cmdlets for managing gMSAs:
+
+`Get-ADServiceAccount`
+
+`Install-ADServiceAccount`
+
+`New-ADServiceAccount`
+
+`Remove-ADServiceAccount`
+
+`Set-ADServiceAccount`
+
+`Test-ADServiceAccount`
+
+`Uninstall-ADServiceAccount`
+
+> [!NOTE]
+> Beginning with Windows Server 2012, the *-ADServiceAccount cmdlets work with gMSAs by default. For more information on usage of the above cmdlets, see [**Getting Started with Group Managed Service Accounts**](https://docs.microsoft.com/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts).
+
+## Move to a gMSA
+gMSAs are the most secure type of service account for on-premises needs. If you can move to one, you should. Additionally, consider moving your services to Azure and your service accounts to Azure Active Directory.
+
+1. Ensure that the [KDS Root Key is deployed in the forest](https://docs.microsoft.com/windows-server/security/group-managed-service-accounts/create-the-key-distribution-services-kds-root-key). This is a one-time operation.
+
+2. [Create a new gMSA](https://docs.microsoft.com/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts).
+
+3. Install the new gMSA on each host running the service.
+ > [!NOTE]
+ > For more information on creation and installation of gMSA on a host, prior to configuring your service to use gMSA, see [Getting Started with Group Managed Service Accounts](https://docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj128431(v=ws.11))
+
+
+4. Change your service identity to gMSA and specify a blank password.
+
+5. Validate that your service is working under the new gMSA identity.
+
+6. Delete the old service account identity.
+
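+Steps 2 and 3 above might look like the following, with `Test-ADServiceAccount` as a quick check before step 4. The account name, DNS host name, and host group are placeholder assumptions:
+
+```powershell
+# Step 2: create the gMSA and allow the service hosts to retrieve its password.
+New-ADServiceAccount -Name "gmsa-web" `
+    -DNSHostName "gmsa-web.contoso.com" `
+    -PrincipalsAllowedToRetrieveManagedPassword "WebServerHosts"
+
+# Step 3: on each host that runs the service, install the gMSA.
+Install-ADServiceAccount -Identity "gmsa-web"
+
+# Confirm the host can use the gMSA before changing the service identity.
+Test-ADServiceAccount -Identity "gmsa-web"
+```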
+
+
+## Next steps
+See the following articles on securing service accounts
+
+* [Introduction to on-premises service accounts](service-accounts-on-premises.md)
+
+* [Secure group managed service accounts](service-accounts-group-managed.md)
+
+* [Secure standalone managed service accounts](service-accounts-standalone-managed.md)
+
+* [Secure computer accounts](service-accounts-computer.md)
+
+* [Secure user accounts](service-accounts-user-on-premises.md)
+
+* [Govern on-premises service accounts](service-accounts-govern-on-premises.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/service-accounts-on-premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-on-premises.md
@@ -0,0 +1,137 @@
+
+ Title: Introduction to Active Directory service accounts | Azure Active Directory
+description: An introduction to the types of service accounts in Active Directory, and how to secure them.
+++++++ Last updated : 2/15/2021++++++
+# Introduction to Active Directory service accounts
+
+A service has a primary security identity that determines the access rights for local and network resources. The security context for a Microsoft Win32 service is determined by the service account that is used to start the service. A service account is used to:
+* identify and authenticate a service
+* successfully start a service
+* access or execute code or an application
+* start a process.
+
+## Types of on-premises service accounts
+
+Based on your use case, you can use a managed service account (MSA), a computer account, or a user account to run a service. Services must be tested to confirm they can use a managed service account. If they can, you should use one.
+
+### Group MSA accounts
+
+Use [group managed service accounts](service-accounts-group-managed.md) (gMSAs) whenever possible for services running in your on-premises environment. gMSAs provide a single identity solution for a service running on a server farm, or behind a network load balancer. They can also be used for a service running on a single server. [gMSAs have specific requirements that must be met](https://docs.microsoft.com/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts).
+
+### Standalone MSA accounts
+
+If you can't use a gMSA, use a [standalone managed service account](service-accounts-standalone-managed.md) (sMSA). sMSAs require at least Windows Server 2008 R2. Unlike gMSAs, sMSAs run only on one server. They can be used for multiple services on that server.
+
+### Computer account
+
+If you can't use an MSA, investigate using a [computer account](service-accounts-computer.md). The LocalSystem account is a predefined local account that has extensive privileges on the local computer, and acts as the computer identity on the network.
+Services that run as a LocalSystem account access network resources by using the credentials of the computer account in the format
+<domain_name>\<computer_name>.
+
+NT AUTHORITY\SYSTEM is the predefined name for the LocalSystem account. It can be used to start a service and provide the security context for that service.
+
+> [!NOTE]
+> When a computer account is used, you cannot tell which service on the computer is using that account, and therefore cannot audit which service is making changes.
+
+### User account
+
+If you can't use an MSA, investigate using a [user account](service-accounts-user-on-premises.md). User accounts can be a domain user account or a local user account.
+
+A domain user account enables the service to take full advantage of the service security features of Windows and Microsoft Active Directory Domain Services. The service will have the local and network access granted to the account. It will also have the permissions of any groups of which the account is a member. Domain service accounts support Kerberos mutual authentication.
+
+A local user account (name format: ".\UserName") exists only in the SAM database of the host computer; it doesn't have a user object in Active Directory Domain Services. A local account can't be authenticated by the domain. So, a service that runs in the security context of a local user account doesn't have access to network resources (except as an anonymous user). Services running in the local user context can't support Kerberos mutual authentication in which the service is authenticated by its clients. For these reasons, local user accounts are typically inappropriate for directory-enabled services.
+
+> [!IMPORTANT]
+> Service accounts should not be members of any privileged groups, as privileged group membership confers permissions that may be a security risk. Each service should have its own service account for auditing and security purposes.
+
+## Choose the right type of service account
++
+| Criteria| gMSA| sMSA| Computer account| User account |
+| - | - | - | - | - |
+| App runs on single server| Yes| Yes. Use a gMSA if possible| Yes. Use an MSA if possible| Yes. Use MSA if possible. |
+| App runs on multiple servers| Yes| No| No. Account is tied to the server| Yes. Use MSA if possible. |
+| App runs behind load balancers| Yes| No| No| Yes. Use only if you can't use a gMSA |
+| App runs on Windows Server 2008 R2| No| Yes| Yes. Use MSA if possible.| Yes. Use MSA if possible. |
+| Runs on Windows server 2012| Yes| Yes. Use gMSA if possible| Yes. Use MSA if possible| Yes. Use MSA if possible. |
+| Requirement to restrict service account to single server| No| Yes| Yes. Use sMSA if possible| No. |
++
+
+
+### Use server logs and PowerShell to investigate
+
+You can use server logs to determine which servers, and how many servers, an application is running on.
+
+You can run the following PowerShell command to get a listing of the Windows Server version for all servers on your network.
+
+```PowerShell
+Get-ADComputer -Filter 'operatingsystem -like "*server*" -and enabled -eq "true"' `
+    -Properties Name,Operatingsystem,OperatingSystemVersion,IPv4Address |
+    Sort-Object -Property Operatingsystem |
+    Select-Object -Property Name,Operatingsystem,OperatingSystemVersion,IPv4Address |
+    Out-GridView
+```
+
+## Find on-premises service accounts
+
+We recommend that you add a prefix such as "svc." to all accounts used as service accounts. This naming convention will make them easier to find and manage. Also consider using the description attribute of the service account to record the owner of the service account; this may be a team alias or a security team owner.
+
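+As a sketch of how such a prefix helps, assuming the "svc." convention above:
+
+```powershell
+# List accounts that follow the service account naming convention, with their description (owner).
+Get-ADUser -Filter 'Name -like "svc.*"' -Properties Description |
+    Select-Object Name, Description
+```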
+Finding on-premises service accounts is key to ensuring their security. And, it can be difficult for non-MSA accounts. We recommend reviewing all the accounts that have access to your important on-premises resources, and determining which computer or user accounts may be acting as service accounts. You can also use the following methods to find accounts.
+
+* The articles for each type of account have detailed steps for finding that account type. For links to these articles, see the Next steps section of this article.
+
+## Document service accounts
+
+Once you have found the service accounts in your on-premises environment, document the following information about each account.
+
+* The owner. The person accountable for maintaining the account.
+
+* The purpose. The application the account represents, or other purpose.
+
+* Permission scopes. What permissions does it have, and should it have? What if any groups is it a member of?
+
+* Risk profile. What is the risk to your business if this account is compromised? If high risk, use an MSA.
+
+* Anticipated lifetime and periodic attestation. How long do you anticipate this account being live? How often must the owner review and attest to ongoing need?
+
+* Password security. For user and local computer accounts, document where the password is stored. Ensure passwords are kept secure, and document who has access. Consider using [Privileged Identity Management](../privileged-identity-management/pim-configure.md) to secure stored passwords.
+
+
+
+## Next steps
+
+See the following articles on securing service accounts
+
+* [Introduction to on-premises service accounts](service-accounts-on-premises.md)
+
+* [Secure group managed service accounts](service-accounts-group-managed.md)
+
+* [Secure standalone managed service accounts](service-accounts-standalone-managed.md)
+
+* [Secure computer accounts](service-accounts-computer.md)
+
+* [Secure user accounts](service-accounts-user-on-premises.md)
+
+* [Govern on-premises service accounts](service-accounts-govern-on-premises.md)
+
+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/service-accounts-standalone-managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-standalone-managed.md
@@ -0,0 +1,131 @@
+
+ Title: Securing standalone managed service accounts | Azure Active Directory
+description: A guide to securing standalone managed service accounts.
+++++++ Last updated : 2/15/2021++++++
+# Securing standalone managed service accounts
+
+Standalone Managed Service Accounts (sMSAs) are managed domain accounts used to secure one or more services running on a server. They cannot be reused across multiple servers. sMSAs provide automatic password management, simplified service principal name (SPN) management, and the ability to delegate management to other administrators.
+
+In Active Directory, sMSAs are tied to a specific server that runs a service. You can find these accounts listed in the Active Directory Users and Computers snap-in of the Microsoft Management Console.
+
+![A screen shot of the Active Directory users and computers snap-in showing the managed service accounts OU.](./media/securing-service-accounts/secure-standalone-msa-image-1.png)
+
+Managed Service Accounts were introduced with the Windows Server 2008 R2 Active Directory schema and require a minimum OS level of Windows Server 2008 R2.
+
+## Benefits of using sMSAs
+
+sMSAs offer greater security than user accounts used as service accounts, while reducing administrative overhead by:
+
+* Setting strong passwords. sMSAs use 240 byte randomly generated complex passwords. The complexity and length of sMSA passwords minimizes the likelihood of a service getting compromised by brute force or dictionary attacks.
+
+* Cycling passwords regularly. Windows automatically changes the sMSA password every 30 days. Service and domain administrators don't need to schedule password changes or manage the associated downtime.
+
+* Simplifying SPN management. Service principal names are automatically updated if the Domain Functional Level (DFL) is Windows Server 2008 R2. For instance, the service principal name is automatically updated in the following scenarios:
+
+ * The host computer account is renamed.
+
+ * The DNS name of the host computer is changed.
+
+ * When adding or removing additional sam-accountname or dns-hostname parameters using [PowerShell](https://docs.microsoft.com/powershell/module/addsadministration/set-adserviceaccount?view=win10-ps)
+
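+As a sketch of the last scenario above, the linked cmdlet might be used as follows; the account name `svc.app` and the DNS host name are placeholder assumptions:
+
+```powershell
+# Change the dns-hostname of the sMSA; the SPN follows automatically
+# when the domain functional level is Windows Server 2008 R2 or later.
+Set-ADServiceAccount -Identity "svc.app" -DNSHostName "app01.contoso.com"
+```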
+## When to use sMSAs
+
+sMSAs can simplify management and security tasks. Use sMSAs when you have one or more services deployed to a single server and you cannot use a gMSA.
+
+> [!NOTE]
+> While you can use sMSAs for more than one service, we recommend that each service have its own identity for auditing purposes.
+
+If the creator of the software can't tell you if it can use an MSA, you must test your application. To do so, create a test environment and ensure it can access all required resources. See [create and install an sMSA](https://docs.microsoft.com/archive/blogs/askds/managed-service-accounts-understanding-implementing-best-practices-and-troubleshooting) for step-by-step directions.
+
+### Assess security posture of sMSAs
+
+sMSAs are inherently more secure than standard user accounts, which require ongoing password management. However, it's important to consider sMSAs' scope of access as part of their overall security posture.
+
+The following table shows how to mitigate potential security issues posed by sMSAs.
+
+| Security issues| Mitigations |
+| - | - |
+| sMSA is a member of privileged groups|Remove the sMSA from elevated privileged groups (such as Domain Admins). <br> Use the least privileged model and grant the sMSA only the rights and permissions it requires to run its service(s). <br> If you're unsure of the required permissions, consult the service creator. |
+| sMSA has read/write access to sensitive resources.|Audit access to sensitive resources. Archive audit logs to a SIEM (Azure Log Analytics or Azure Sentinel) for analysis. <br> Remediate resource permissions if an undesirable level of access is detected. |
+| By default, the sMSA password rollover frequency is 30 days.| You can use Group Policy to tune the duration to meet enterprise security requirements (a sketch for checking the effective value follows this table). <br> Set the password expiration duration by using the following path: <br>Computer Configuration\Policies\Windows Settings\Security Settings\Security Options\Domain member: Maximum machine account password age |
+++
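+
+If you want to confirm the effective rollover interval on a given host, one option (an assumption on my part, not from the article) is to read the Netlogon MaximumPasswordAge value that the Group Policy setting above controls:
+
+```PowerShell
+# Minimal sketch: read the effective machine account password age (in days) on a host.
+Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters' -Name MaximumPasswordAge |
+    Select-Object MaximumPasswordAge
+```
+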
+### Challenges with sMSAs
+
+The challenges associated with sMSAs are as follows:
+
+| Challenges| Mitigations |
+| - | - |
+| They can be used on a single server only.| Use gMSAs if you need to use the account across servers. |
+| They cannot be used across domains.| Use gMSAs if you need to use the account across domains. |
+| Not all applications support sMSAs.| Use gMSAs if possible. If not, use a standard user account or a computer account, as recommended by the application creator. |
++
+## Find sMSAs
+
+On any domain controller, run DSA.msc and expand the Managed Service Accounts container to view all sMSAs.
+
+The following PowerShell command returns all sMSAs and gMSAs in the Active Directory domain.
+
+`Get-ADServiceAccount -Filter *`
+
+The following command returns only sMSAs in the Active Directory domain.
+
+`Get-ADServiceAccount -Filter * | where { $_.objectClass -eq "msDS-ManagedServiceAccount" }`
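+
+To keep a record of what you find, a sketch like the following can export the inventory, including the computer each account is linked to (the `HostComputers` property is assumed to be populated for installed sMSAs):
+
+```PowerShell
+# Minimal sketch: export an inventory of sMSAs with their linked computers and SPNs.
+Get-ADServiceAccount -Filter * -Properties HostComputers, ServicePrincipalNames |
+    Where-Object { $_.objectClass -eq 'msDS-ManagedServiceAccount' } |
+    Select-Object Name, HostComputers, ServicePrincipalNames |
+    Export-Csv -Path .\smsa-inventory.csv -NoTypeInformation
+```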
+
+## Manage sMSAs
+
+You can use the following Active Directory PowerShell cmdlets to manage sMSAs (a retirement sketch follows the list):
+
+`Get-ADServiceAccount`
+
+`Install-ADServiceAccount`
+
+`New-ADServiceAccount`
+
+`Remove-ADServiceAccount`
+
+`Set-ADServiceAccount`
+
+`Test-ADServiceAccount`
+
+`Uninstall-ADServiceAccount`
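+
+For example, a minimal sketch for retiring an sMSA with these cmdlets (the account name is hypothetical) looks like this:
+
+```PowerShell
+# Minimal sketch (hypothetical name): uninstall an sMSA from its host, then delete it from AD.
+Uninstall-ADServiceAccount -Identity svc-web01
+Remove-ADServiceAccount -Identity svc-web01
+```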
+
+## Move to sMSAs
+
+If an application service supports sMSA but not gMSAs, and is currently using a user account or computer account for the security context, [create and install an sMSA](https://docs.microsoft.com/archive/blogs/askds/managed-service-accounts-understanding-implementing-best-practices-and-troubleshooting) on the server.
+
+Ideally, move resources to Azure, and use Azure Managed Identities or service principals.
+
+
+
+## Next steps
+See the following articles on securing service accounts:
+
+* [Introduction to on-premises service accounts](service-accounts-on-premises.md)
+
+* [Secure group managed service accounts](service-accounts-group-managed.md)
+
+* [Secure standalone managed service accounts](service-accounts-standalone-managed.md)
+
+* [Secure computer accounts](service-accounts-computer.md)
+
+* [Secure user accounts](service-accounts-user-on-premises.md)
+
+* [Govern on-premises service accounts](service-accounts-govern-on-premises.md)
+
+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/service-accounts-user-on-premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-user-on-premises.md
@@ -0,0 +1,131 @@
+
+ Title: Securing user-based service accounts | Azure Active Directory
+description: A guide to securing on-premises user accounts.
+Last updated: 2/15/2021
+# Securing user-based service accounts in Active Directory
+
+On-premises user accounts are the traditional approach for securing services running on Windows. Use these accounts as a last resort, when group managed service accounts (gMSAs) and standalone managed service accounts (sMSAs) are not supported by your service. See [Introduction to on-premises service accounts](service-accounts-on-premises.md) for information on selecting the best type of account to use. Also investigate whether you can move your service to use an Azure service account such as a managed identity or a service principal.
+
+On-premises user accounts can be created to provide a security context for services and granted privileges required for the services to access local and network resources. They require manual password management much like any other Active Directory (AD) user account. Service and domain administrators are required to observe strong password management processes to keep these accounts secure.
+
+When using a user account as a service account, use it for a single service only. Name it in a way that makes clear that it's a service account and which service it's for.
+
+## Benefits and challenges
+
+Benefits
+
+On-premises user accounts are the most versatile account type for use with services. User accounts used as service accounts can be controlled by all the policies that govern normal user accounts. That said, use them only if you can't use an MSA. Also evaluate whether a computer account is a better option.
+
+Challenges with on-premises user accounts
+
+The following challenges are associated with the use of on-premises user accounts.
+
+| Challenges| Mitigations |
+| - | - |
+| Password management is a manual process that may lead to weaker security and service downtime.| Ensure that password complexity and password changes are governed by a robust process that ensures regular updates with strong passwords. <br> Coordinate the password change with a password update on the service to minimize service downtime (see the sketch after this table). |
+| Identifying on-premises user accounts that are acting as service accounts can be difficult.| Document and maintain records of service accounts deployed in your environment. <br> Track the account name and the resources to which they're assigned access. <br> Consider adding a prefix of svc_ to all user accounts used as service accounts. |
++
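+
+As a hedged sketch of what a coordinated password change could look like (account, domain, and service names are hypothetical; your process may differ):
+
+```PowerShell
+# Minimal sketch (hypothetical names): reset the account password in AD, then update
+# the credentials stored on the Windows service that runs under it, and restart it.
+$newPassword = Read-Host -AsSecureString -Prompt 'New service account password'
+Set-ADAccountPassword -Identity svc_HRDataConnector -NewPassword $newPassword -Reset
+
+$plain   = [System.Net.NetworkCredential]::new('', $newPassword).Password
+$service = Get-CimInstance -ClassName Win32_Service -Filter "Name='HRDataConnector'"
+Invoke-CimMethod -InputObject $service -MethodName Change -Arguments @{
+    StartName     = 'CONTOSO\svc_HRDataConnector'
+    StartPassword = $plain
+}
+Restart-Service -Name HRDataConnector
+```
+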
+## Find on-premises user accounts used as service accounts
+
+On-premises user accounts are just like any other AD user account. Consequently, it can be difficult to find these accounts as there's no single attribute of a user account that identifies it as a service account.
+
+We recommend that you create an easily identifiable naming convention for any user account used as a service account.
+
+For example, add "service-" as a prefix, and name the service: "service-HRDataConnector".
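+
+With a convention like that in place, a simple sketch such as the following (the "service-" prefix is assumed) can enumerate the matching accounts:
+
+```PowerShell
+# Minimal sketch: list user accounts that follow the "service-" naming convention.
+Get-ADUser -Filter "Name -like 'service-*'" -Properties Description, PasswordLastSet |
+    Select-Object Name, SamAccountName, Description, PasswordLastSet
+```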
+
+You can use some of the indicators below to find these service accounts. However, this approach might not find all such accounts.
+
+* Accounts trusted for delegation.
+
+* Accounts with service principal names.
+
+* Accounts whose password is set to never expire.
+
+You can run the following PowerShell commands to find the on-premises user accounts created for services.
+
+### Find accounts trusted for delegation
+
+```PowerShell
+
+Get-ADObject -Filter {(msDS-AllowedToDelegateTo -like '*') -or (UserAccountControl -band 0x0080000) -or (UserAccountControl -band 0x1000000)} -prop samAccountName,msDS-AllowedToDelegateTo,servicePrincipalName,userAccountControl | select DistinguishedName,ObjectClass,samAccountName,servicePrincipalName, @{name='DelegationStatus';expression={if($_.UserAccountControl -band 0x80000){'AllServices'}else{'SpecificServices'}}}, @{name='AllowedProtocols';expression={if($_.UserAccountControl -band 0x1000000){'Any'}else{'Kerberos'}}}, @{name='DestinationServices';expression={$_.'msDS-AllowedToDelegateTo'}}
+
+```
+
+### Find accounts with service principal names
+
+```PowerShell
+
+Get-ADUser -Filter * -Properties servicePrincipalName | where {$_.servicePrincipalName -ne $null}
+
+```
+
+
+
+### Find accounts with passwords set to never expire
+
+```PowerShell
+
+Get-ADUser -Filter * -Properties PasswordNeverExpires | where {$_.PasswordNeverExpires -eq $true}
+
+```
++
+You can also audit access to sensitive resources, and archive audit logs to a security information and event management (SIEM) system. By using systems such as Azure Log Analytics or Azure Sentinel, you can search for and analyze service accounts.
+
+## Assess security of on-premises user accounts
+
+Assess the security of your on-premises user accounts used as service accounts against the following criteria (a query sketch follows the list):
+
+* What is the password management policy?
+
+* Is the account a member of any privileged groups?
+
+* Does the account have read/write access to important resources?
+
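+The following is a minimal sketch (the account name is hypothetical) that gathers these attributes in a single query:
+
+```PowerShell
+# Minimal sketch (hypothetical name): pull the attributes relevant to the assessment.
+Get-ADUser -Identity svc_HRDataConnector -Properties MemberOf, PasswordLastSet, PasswordNeverExpires, ServicePrincipalNames |
+    Select-Object Name, PasswordLastSet, PasswordNeverExpires, ServicePrincipalNames, MemberOf
+```
+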
+### Mitigate potential security issues
+
+The following table shows potential security issues and corresponding mitigations for on-premises user accounts.
+
+| Security issues| Mitigations |
+| - | - |
+| Password management|* Ensure that password complexity and password change are governed by a robust process that ensures regular updates with strong password requirements. <br> * Coordinate password change with a password update to minimize service downtime. |
+| Account is a member of privileged groups.| Review group memberships. Remove the account from privileged groups. Grant the account only the rights and permissions it requires to run its service (consult with service vendor). For example, you may be able to deny sign-in locally or deny interactive sign-in. |
+| Account has read/write access to sensitive resources.| Audit access to sensitive resources. Archive audit logs to a SIEM (Azure Log Analytics or Azure Sentinel) for analysis. Remediate resource permissions if an undesirable level of access is detected. |
++
+## Move to more secure account types
+
+Microsoft does not recommend that customers use on-premises user accounts as service accounts. For any service using this type of account, assess whether it can instead be configured to use a gMSA or an sMSA.
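+
+If the assessment shows an MSA will work, repointing the Windows service is typically the last step. The following is a hedged sketch (service and account names are hypothetical); for managed service accounts the account name ends with `$`, no password is supplied, and the account needs the "Log on as a service" right on the host.
+
+```PowerShell
+# Minimal sketch (hypothetical names): reconfigure a service to run under an sMSA.
+sc.exe config HRDataConnector obj= 'CONTOSO\svc-web01$'
+Restart-Service -Name HRDataConnector
+```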
+
+Additionally, evaluate whether the service itself can be moved to Azure so that more secure service account types can be used.
+
+## Next steps
+See the following articles on securing service accounts:
+
+* [Introduction to on-premises service accounts](service-accounts-on-premises.md)
+
+* [Secure group managed service accounts](service-accounts-group-managed.md)
+
+* [Secure standalone managed service accounts](service-accounts-standalone-managed.md)
+
+* [Secure computer accounts](service-accounts-computer.md)
+
+* [Secure user accounts](service-accounts-user-on-premises.md)
+
+* [Govern on-premises service accounts](service-accounts-govern-on-premises.md)
+
+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-health-agent-install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
@@ -28,7 +28,7 @@ The following table lists requirements for using Azure AD Connect Health.
| Requirement | Description | | | |
-| Azure AD Premium is installed. |Azure AD Connect Health is a feature of Azure AD Premium. For more information, see [Sign up for Azure AD Premium](../fundamentals/active-directory-get-started-premium.md). <br /><br />To start a free 30-day trial, see [Start a trial](https://azure.microsoft.com/trial/get-started-active-directory/). |
+| You have an Azure AD Premium (P1 or P2) subscription. |Azure AD Connect Health is a feature of Azure AD Premium (P1 or P2). For more information, see [Sign up for Azure AD Premium](../fundamentals/active-directory-get-started-premium.md). <br /><br />To start a free 30-day trial, see [Start a trial](https://azure.microsoft.com/trial/get-started-active-directory/). |
| You're a global administrator in Azure AD. |By default, only global administrators can install and configure the health agents, access the portal, and do any operations within Azure AD Connect Health. For more information, see [Administering your Azure AD directory](../fundamentals/active-directory-whatis.md). <br /><br /> By using Azure role-based access control (Azure RBAC), you can allow other users in your organization to access Azure AD Connect Health. For more information, see [Azure RBAC for Azure AD Connect Health](how-to-connect-health-operations.md#manage-access-with-azure-rbac). <br /><br />**Important**: Use a work or school account to install the agents. You can't use a Microsoft account. For more information, see [Sign up for Azure as an organization](../fundamentals/sign-up-organization.md). | | The Azure AD Connect Health agent is installed on each targeted server. | Health agents must be installed and configured on targeted servers so that they can receive data and provide monitoring and analytics capabilities. <br /><br />For example, to get data from your Active Directory Federation Services (AD FS) infrastructure, you must install the agent on the AD FS server and the Web Application Proxy server. Similarly, to get data from your on-premises Azure AD Domain Services (Azure AD DS) infrastructure, you must install the agent on the domain controllers. | | The Azure service endpoints have outbound connectivity. | During installation and runtime, the agent requires connectivity to Azure AD Connect Health service endpoints. If firewalls block outbound connectivity, add the [outbound connectivity endpoints](how-to-connect-health-agent-install.md#outbound-connectivity-to-the-azure-service-endpoints) to the allow list. |
@@ -189,7 +189,7 @@ To verify the agent has been installed, look for the following services on the s
![Screenshot showing the running Azure AD Connect Health for Sync services on the server.](./media/how-to-connect-health-agent-install/services.png) > [!NOTE]
-> Remember that you must have Azure AD Premium to use Azure AD Connect Health. If you don't have Azure AD Premium, you can't complete the configuration in the Azure portal. For more information, see the [requirements](how-to-connect-health-agent-install.md#requirements).
+> Remember that you must have Azure AD Premium (P1 or P2) to use Azure AD Connect Health. If you don't have Azure AD Premium, you can't complete the configuration in the Azure portal. For more information, see the [requirements](how-to-connect-health-agent-install.md#requirements).
> >
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-install-prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
@@ -68,6 +68,7 @@ To read more about securing your Active Directory environment, see [Best practic
- You must configure TLS/SSL certificates. For more information, see [Managing SSL/TLS protocols and cipher suites for AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-protocols-in-ad-fs) and [Managing SSL certificates in AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap). - You must configure name resolution. - If your global administrators have MFA enabled, the URL https://secure.aadcdn.microsoftonline-p.com *must* be in the trusted sites list. You're prompted to add this site to the trusted sites list when you're prompted for an MFA challenge and it hasn't been added before. You can use Internet Explorer to add it to your trusted sites.
+- If you plan to use Azure AD Connect Health for syncing, ensure that the prerequisites for Azure AD Connect Health are also met. For more information, see [Azure AD Connect Health agent installation](how-to-connect-health-agent-install.md).
#### Harden your Azure AD Connect server We recommend that you harden your Azure AD Connect server to decrease the security attack surface for this critical component of your IT environment. Following these recommendations will help to mitigate some security risks to your organization.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/reference-connect-health-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-health-faq.md
@@ -23,7 +23,7 @@ This article includes answers to frequently asked questions (FAQs) about Azure A
## General questions **Q: I manage multiple Azure AD directories. How do I switch to the one that has Azure Active Directory Premium?**
-To switch between different Azure AD tenants, select the currently signed-in **User Name** on the upper-right corner, and then choose the appropriate account. If the account is not listed here, select **Sign out**, and then use the global admin credentials of the directory that has Azure Active Directory Premium enabled to sign in.
+To switch between different Azure AD tenants, select the currently signed-in **User Name** on the upper-right corner, and then choose the appropriate account. If the account is not listed here, select **Sign out**, and then use the global admin credentials of the directory that has Azure Active Directory Premium (P1 or P2) enabled to sign in.
**Q: What version of identity roles are supported by Azure AD Connect Health?**
@@ -41,8 +41,8 @@ Note that the features provided by the service may differ based on the role and
**Q: How many licenses do I need to monitor my infrastructure?**
-* The first Connect Health Agent requires at least one Azure AD Premium license.
-* Each additional registered agent requires 25 additional Azure AD Premium licenses.
+* The first Connect Health Agent requires at least one Azure AD Premium (P1 or P2) license.
+* Each additional registered agent requires 25 additional Azure AD Premium (P1 or P2) licenses.
* Agent count is equivalent to the total number of agents that are registered across all monitored roles (AD FS, Azure AD Connect, and/or AD DS). * AAD Connect Health licensing does not require you to assign the license to specific users. You only need to have the requisite number of valid licenses.
@@ -206,4 +206,4 @@ The agent certification will be automatic renewed **6 months** before its expira
* [Using Azure AD Connect Health with AD FS](how-to-connect-health-adfs.md) * [Using Azure AD Connect Health for sync](how-to-connect-health-sync.md) * [Using Azure AD Connect Health with AD DS](how-to-connect-health-adds.md)
-* [Azure AD Connect Health version history](reference-connect-health-version-history.md)
+* [Azure AD Connect Health version history](reference-connect-health-version-history.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-resource-roles-start-access-review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-start-access-review.md
@@ -11,7 +11,7 @@ na
ms.devlang: na Previously updated : 12/08/2020 Last updated : 02/11/2021
@@ -25,7 +25,7 @@ This article describes how to create one or more access reviews for privileged A
## Prerequisites
-[Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator)
+To create access reviews, you must be assigned the [Owner](../../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) Azure role for the resource.
## Open access reviews
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/delegate-by-task https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/delegate-by-task.md
@@ -106,7 +106,7 @@ View sync service metrics and alerts | Reader ([see documentation](../fundamenta
Task | Least privileged role | Additional roles - | | -
-Manage domains | Global Administrator |
+Manage domains | Domain Name Administrator |
Read all configuration | Directory readers | Default user role ([see documentation](../fundamentals/users-default-permissions.md)) ## Domain Services
@@ -213,9 +213,9 @@ Read sign-in logs | Reports reader | Security Reader, Security administrator
Task | Least privileged role | Additional roles - | | - Delete all existing app passwords generated by the selected users | Global Administrator |
-Disable MFA | Global Administrator |
-Enable MFA | Global Administrator |
-Manage MFA service settings | Global Administrator |
+Disable MFA | Authentication Administrator (via PowerShell) | Privileged Authentication Administrator (via PowerShell)
+Enable MFA | Authentication Administrator (via PowerShell) | Privileged Authentication Administrator (via PowerShell)
+Manage MFA service settings | Authentication Policy Administrator |
Require selected users to provide contact methods again | Authentication Administrator | Restore multi-factor authentication on all remembered devices  | Authentication Administrator |
@@ -223,15 +223,15 @@ Restore multi-factor authentication on all remembered devices  | Authentication
Task | Least privileged role | Additional roles - | | -
-Block/unblock users | Global Administrator |
-Configure account lockout | Global Administrator |
-Configure caching rules | Global Administrator |
-Configure fraud alert | Global Administrator
-Configure notifications | Global Administrator |
-Configure one-time bypass | Global Administrator |
-Configure phone call settings | Global Administrator |
-Configure providers | Global Administrator |
-Configure server settings | Global Administrator |
+Block/unblock users | Authentication Policy Administrator |
+Configure account lockout | Authentication Policy Administrator |
+Configure caching rules | Authentication Policy Administrator |
+Configure fraud alert | Authentication Policy Administrator
+Configure notifications | Authentication Policy Administrator |
+Configure one-time bypass | Authentication Policy Administrator |
+Configure phone call settings | Authentication Policy Administrator |
+Configure providers | Authentication Policy Administrator |
+Configure server settings | Authentication Policy Administrator |
Read activity report | Global reader | Read all configuration | Global reader | Read server status | Global reader |
@@ -360,4 +360,4 @@ Submit support ticket | Service Administrator | Application Administrator, Azure
## Next steps * [How to assign or remove azure AD administrator roles](manage-roles-portal.md)
-* [Azure AD administrator roles reference](permissions-reference.md)
+* [Azure AD administrator roles reference](permissions-reference.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/permissions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
@@ -72,19 +72,45 @@ Users in this role can create and manage all aspects of attack simulation creati
### [Authentication Administrator](#authentication-administrator-permissions)
-Users with this role can set or reset non-password credentials for some users and can update passwords for all users. Authentication administrators can require users who are non-administrators or assigned to some roles to re-register against existing non-password credentials (for example, MFA or FIDO), and can also revoke **remember MFA on the device**, which prompts for MFA on the next sign-in. Whether an Authentication Administrator can reset a user's password depends on the role the user is assigned. For a list of the roles that an Authentication Administrator can reset passwords for, see [Password reset permissions](#password-reset-permissions).
+Users with this role can set or reset any authentication method (including passwords) for non-administrators and some roles. Authentication administrators can require users who are non-administrators or assigned to some roles to re-register against existing non-password credentials (for example, MFA or FIDO), and can also revoke **remember MFA on the device**, which prompts for MFA on the next sign-in. For a list of the roles for which an Authentication Administrator can read or update authentication methods, see [Password reset permissions](#password-reset-permissions).
-The [Privileged Authentication Administrator](#privileged-authentication-administrator) role has permission can force re-registration and multi-factor authentication for all users.
+The [Privileged authentication administrator](#privileged-authentication-administrator) role has permission to force re-registration and multi-factor authentication for all users.
+
+The [Authentication policy administrator](#authentication-policy-administrator) role has permissions to set the tenant's authentication method policy that determines which methods each user can register and use.
+
+| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy |
+| - | - | - | - | - | - |
+| Authentication administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No |
+| Privileged authentication administrator| Yes for all users | Yes for all users |No | No |No |
+| Authentication policy administrator | No |No | Yes | Yes | Yes |
> [!IMPORTANT] > Users with this role can change credentials for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the credentials of a user may mean the ability to assume that user's identity and permissions. For example: >
->* Application Registration and Enterprise Application owners, who can manage credentials of apps they own. Those apps may have privileged permissions in Azure AD and elsewhere not granted to Authentication Administrators. Through this path an Authentication Administrator may be able to assume the identity of an application owner and then further assume the identity of a privileged application by updating the credentials for the application.
+>* Application Registration and Enterprise Application owners, who can manage credentials of apps they own. Those apps may have privileged permissions in Azure AD and elsewhere not granted to Authentication Administrators. Through this path an Authentication Administrator can assume the identity of an application owner and then further assume the identity of a privileged application by updating the credentials for the application.
>* Azure subscription owners, who may have access to sensitive or private information or critical configuration in Azure. >* Security Group and Microsoft 365 group owners, who can manage group membership. Those groups may grant access to sensitive or private information or critical configuration in Azure AD and elsewhere. >* Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems. >* Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information.
+> [!IMPORTANT]
+> This role is not currently capable of managing per-user MFA in the legacy MFA management portal. The same functions can be accomplished by using the [Set-MsolUser](https://docs.microsoft.com/powershell/module/msonline/set-msoluser) cmdlet in the Azure AD PowerShell (MSOnline) module.
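+
+As a hedged illustration of the PowerShell path mentioned above (the MSOnline module must be installed and you must be connected with `Connect-MsolService`; the UPN is hypothetical), enabling per-user MFA looks roughly like this:
+
+```PowerShell
+# Minimal sketch (hypothetical UPN): enable per-user MFA with the MSOnline module.
+Connect-MsolService
+
+$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
+$mfa.RelyingParty = '*'
+$mfa.State = 'Enabled'
+
+Set-MsolUser -UserPrincipalName 'user@contoso.com' -StrongAuthenticationRequirements @($mfa)
+```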
+
+### [Authentication Policy Administrator](#authentication-policy-administrator-permissions)
+
+Users with this role can configure the authentication methods policy, tenant-wide MFA settings, and password protection policy. This role grants permission to manage Password Protection settings: smart lockout configurations and the custom banned passwords list.
+
+The [Authentication administrator](#authentication-administrator) and [Privileged authentication administrator](#privileged-authentication-administrator) roles have permission to manage registered authentication methods on users and can force re-registration and multi-factor authentication for all users.
+
+| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy |
+| - | - | - | - | - | - |
+| Authentication administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No |
+| Privileged authentication administrator| Yes for all users | Yes for all users |No | No |No |
+| Authentication policy administrator | No |No | Yes | Yes | Yes |
+
+> [!IMPORTANT]
+> This role is not currently capable of managing MFA settings in the legacy MFA management portal.
+ ### [Azure AD Joined Device Local Administrator](#azure-ad-joined-device-local-administrator-permissions)/Device Administrators This role is available for assignment only as an additional local administrator in [Device settings](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/DeviceSettings/menuId/). Users with this role become local machine administrators on all Windows 10 devices that are joined to Azure Active Directory. They do not have the ability to manage device objects in Azure Active Directory.
@@ -180,6 +206,10 @@ Do not use. This role is automatically assigned to the Azure AD Connect service,
+### [Directory Writers](#directory-writers-permissions) Users in this role can read and update basic information of users, groups, and service principals. Assign this role only to applications that don't support the [Consent Framework](../develop/quickstart-register-app.md). It should not be assigned to any users.
+### [Domain Name Administrator](#domain-name-administrator-permissions)
+
+Users with this role can manage (read, add, verify, update, and delete) domain names. They can also read directory information about users, groups, and applications, as these objects possess domain dependencies. For on-premises environments, users with this role can configure domain names for federation so that associated users are always authenticated on-premises. These users can then sign in to Azure AD-based services with their on-premises passwords via single sign-on. Federation settings need to be synced via Azure AD Connect, so users also have permissions to manage Azure AD Connect.
+ ### [Dynamics 365 administrator / CRM Administrator](#crm-service-administrator-permissions) Users with this role have global permissions within Microsoft Dynamics 365 Online, when the service is present, as well as the ability to manage support tickets and monitor service health. More information at [Use the service admin role to manage your Azure AD organization](/dynamics365/customer-engagement/admin/use-service-admin-role-manage-tenant).
@@ -348,7 +378,30 @@ Users with this role can register printers and manage printer status in the Micr
### [Privileged Authentication Administrator](#privileged-authentication-administrator-permissions)
-Users with this role can set or reset non-password credentials for all users, including Global Administrators, and can update passwords for all users. Privileged Authentication Administrators can force users to re-register against existing non-password credential (such as MFA or FIDO) and revoke 'remember MFA on the device', prompting for MFA on the next sign-in of all users.
+Users with this role can set or reset any authentication method (including passwords) for any user, including Global Administrators. Privileged Authentication Administrators can force users to re-register against existing non-password credential (such as MFA or FIDO) and revoke 'remember MFA on the device', prompting for MFA on the next sign-in of all users.
+
+The [Authentication administrator](#authentication-administrator) role has permission to force re-registration and multi-factor authentication for standard users and users with some admin roles.
+
+The [Authentication policy administrator](#authentication-policy-administrator) role has permissions to set the tenant's authentication method policy that determines which methods each user can register and use.
+
+| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy |
+| - | - | - | - | - | - |
+| Authentication administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No |
+| Privileged authentication administrator| Yes for all users | Yes for all users |No | No |No |
+| Authentication policy administrator | No |No | Yes | Yes | Yes |
+
+> [!IMPORTANT]
+> Users with this role can change credentials for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the credentials of a user may mean the ability to assume that user's identity and permissions. For example:
+>
+>* Application Registration and Enterprise Application owners, who can manage credentials of apps they own. Those apps may have privileged permissions in Azure AD and elsewhere not granted to Authentication Administrators. Through this path an Authentication Administrator can assume the identity of an application owner and then further assume the identity of a privileged application by updating the credentials for the application.
+>* Azure subscription owners, who may have access to sensitive or private information or critical configuration in Azure.
+>* Security Group and Microsoft 365 group owners, who can manage group membership. Those groups may grant access to sensitive or private information or critical configuration in Azure AD and elsewhere.
+>* Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems.
+>* Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information.
++
+> [!IMPORTANT]
+> This role is not currently capable of managing per-user MFA in the legacy MFA management portal. The same functions can be accomplished by using the [Set-MsolUser](https://docs.microsoft.com/powershell/module/msonline/set-msoluser) cmdlet in the Azure AD PowerShell (MSOnline) module.
### [Privileged Role Administrator](#privileged-role-administrator-permissions)
@@ -597,6 +650,23 @@ Allowed to view, set and reset authentication method information for any non-adm
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. | > | microsoft.directory/users/password/update | Update passwords for all users in the Microsoft 365 organization. See online documentation for more detail. |
+### Authentication Policy Administrator permissions
+
+Allowed to view and set authentication methods policy, password protection policy, and tenant-wide MFA settings.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.directory/organization/strongAuthentication/update | Update strong auth properties of an organization in Azure Active Directory. |
+> | microsoft.directory/userCredentialPolicies/create | Create credential policies for users in Azure Active Directory. |
+> | microsoft.directory/userCredentialPolicies/delete | Delete credential policies for users in Azure Active Directory. |
+> | microsoft.directory/userCredentialPolicies/standard/read | Read standard properties of credential policies for users in Azure Active Directory. |
+> | microsoft.directory/userCredentialPolicies/owners/read | Read owners of credential policies for users in Azure Active Directory. |
+> | microsoft.directory/userCredentialPolicies/policyAppliedTo/read | Read policy.appliesTo navigation link in Azure Active Directory. |
+> | microsoft.directory/userCredentialPolicies/basic/update | Update basic policies for users in Azure Active Directory. |
+> | microsoft.directory/userCredentialPolicies/owners/update | Update owners of credential policies for users in Azure Active Directory. |
+> | microsoft.directory/userCredentialPolicies/tenantDefault/update | Update policy.isOrganizationDefault property in Azure Active Directory. |
+ ### Azure AD Joined Device Local Administrator permissions Users assigned to this role are added to the local administrators group on Azure AD-joined devices.
@@ -963,6 +1033,16 @@ Can read & write basic directory information. For granting access to application
> | microsoft.directory/users/reprocessLicenseAssignment | Reprocess license assignments for a user in Azure Active Directory. | > | microsoft.directory/users/userPrincipalName /update | Update the users.userPrincipalName property in Azure Active Directory. |
+### Domain Name Administrator permissions
+
+Can manage domain names in cloud and on-premises.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.directory/domains/allProperties/allTasks | Create and delete domains, and read and update all properties in Azure Active Directory. |
+> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
+ ### Exchange Service Administrator permissions Can manage all aspects of the Exchange product.
@@ -1960,6 +2040,7 @@ Graph displayName | Azure portal display name | directoryRoleTemplateId
Application Administrator | Application administrator | 9B895D92-2CD3-44C7-9D02-A6AC2D5EA5C3 Application Developer | Application developer | CF1C38E5-3621-4004-A7CB-879624DCED7C Authentication Administrator | Authentication administrator | c4e39bd9-1100-46d3-8c65-fb160da0071f
+Authentication Policy Administrator | Authentication policy administrator | 0526716b-113d-4c15-b2c8-68e3c22b9f80
Attack Payload Author | Attack payload author | 9c6df0f2-1e7c-4dc3-b195-66dfbd24aa8f Attack Simulation Administrator | Attack simulation administrator | c430b396-e693-46cc-96f3-db01bf8bb62a Azure AD Joined Device Local Administrator | Azure AD joined device local administrator | 9f06204d-73c1-4d4c-880a-6edb90606fd8
@@ -1981,6 +2062,7 @@ Device Users | Deprecated | d405c6df-0af8-4e3b-95e4-4d06e542189e
Directory Readers | Directory readers | 88d8e3e3-8f55-4a1e-953a-9b9898b8876b Directory Synchronization Accounts | Not shown because it shouldn't be used | d29b2b05-8046-44ba-8758-1e26182fcf32 Directory Writers | Directory Writers | 9360feb5-f418-4baa-8175-e2a00bac4301
+Domain Name Administrator | Domain name administrator | 8329153b-31d0-4727-b945-745eb3bc5f31
Dynamics 365 Administrator | Dynamics 365 administrator | 44367163-eba1-44c3-98af-f5787879f96a Exchange Administrator | Exchange administrator | 29232cdf-9323-42fd-ade2-1d097af3e4de External Id User flow Administrator | External Id User flow Administrator | 6e591065-9bad-43ed-90f3-e9424366d2f0
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/aws-multi-accounts-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/aws-multi-accounts-tutorial.md
@@ -1,6 +1,6 @@
Title: 'Tutorial: Azure Active Directory integration with Amazon Web Services (AWS) to connect multiple accounts | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure AD and Amazon Web Services (AWS) (Legacy Tutorial).
+ Title: 'Tutorial: Azure Active Directory integration with Amazon Web Services to connect multiple accounts | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure AD and Amazon Web Services (legacy tutorial).
@@ -13,273 +13,273 @@ Last updated 12/24/2020
-# Tutorial: Azure Active Directory integration with Amazon Web Services (AWS) (Legacy Tutorial)
+# Tutorial: Azure Active Directory integration with Amazon Web Services
-In this tutorial, you learn how to integrate Azure Active Directory (Azure AD) with Amazon Web Services (AWS) (Legacy Tutorial).
+In this tutorial, you learn how to integrate Azure Active Directory (Azure AD) with Amazon Web Services (AWS) (legacy tutorial).
-Integrating Amazon Web Services (AWS) with Azure AD provides you with the following benefits:
+This integration provides the following benefits:
-- You can control in Azure AD who has access to Amazon Web Services (AWS).-- You can enable your users to automatically get signed-on to Amazon Web Services (AWS) (Single Sign-On) with their Azure AD accounts.-- You can manage your accounts in one central location - the Azure portal.
+- You can control in Azure AD who has access to AWS.
+- You can enable your users to automatically sign in to AWS by using single sign-on (SSO) with their Azure AD accounts.
+- You can manage your accounts in one central location, the Azure portal.
-![Amazon Web Services (AWS) in the results list](./media/aws-multi-accounts-tutorial/amazonwebservice.png)
+![Diagram of Azure AD integration with AWS.](./media/aws-multi-accounts-tutorial/amazonwebservice.png)
> [!NOTE]
-> Please note connecting one AWS app to all your AWS accounts is not our recommended approach. Instead we recommend you to use [this](./amazon-web-service-tutorial.md) approach to configure multiple instances of AWS account to Multiple instances of AWS apps in Azure AD. You should only use this approach if you have few AWS Accounts and Roles in it, this model is not scalable as the AWS accounts and roles inside these accounts grow. This approach does not use AWS Role import functionality using Azure AD User Provisioning, so you have to manually add/update/delete the roles. For other limitations on this approach please see the details below.
+> We recommend that you *not* connect one AWS app to all your AWS accounts. Instead, we recommend that you use [Azure AD SSO integration with AWS](./amazon-web-service-tutorial.md) to configure multiple instances of your AWS account to multiple instances of AWS apps in Azure AD.
-**Please note that we do not recommend to use this approach for following reasons:**
+We recommend that you *not* connect one AWS app to all your AWS accounts, for the following reasons:
+
+* Use this approach only if you have a small number of AWS accounts and roles, because this model isn't scalable as the number of AWS accounts and the roles within them increase. The approach doesn't use AWS role-import functionality with Azure AD user provisioning, so you have to manually add, update, or delete the roles.
* You have to use the Microsoft Graph Explorer approach to patch all the roles to the app. We don't recommend using the manifest file approach.
-* We have seen customers reporting that after adding ~1200 app roles for a single AWS app, any operation on the app started throwing the errors related to size. There is a hard limit of size on the application object.
+* Customers report that after they've added ~1,200 app roles for a single AWS app, any further operation on the app starts throwing errors related to size. There is a hard size limit on the application object.
-* You have to manually update the role as the roles get added in any of the accounts, which is a Replace approach and not Append unfortunately. Also if your accounts are growing then this becomes n x n relationship with accounts and roles.
+* You have to manually update the roles as they get added in any of the accounts. This is unfortunately a *replace* approach, not an *append* approach. Also, if your account numbers are growing, this becomes an *n* &times; *n* relationship with accounts and roles.
-* All the AWS accounts will be using the same Federation Metadata XML file and at the time of certificate rollover you have to drive this massive exercise to update the Certificate on all the AWS accounts at the same time
+* All the AWS accounts use the same federation metadata XML file. At the time of certificate rollover, updating the certificate on all the AWS accounts at the same time can be a massive exercise.
## Prerequisites
-To configure Azure AD integration with Amazon Web Services (AWS), you need the following items:
+To configure Azure AD integration with AWS, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Amazon Web Services (AWS) single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD subscription, you can get a [one-month trial](https://azure.microsoft.com/pricing/free-trial/).
+* An AWS SSO-enabled subscription.
> [!NOTE]
-> To test the steps in this tutorial, we do not recommend using a production environment.
-
-To test the steps in this tutorial, you should follow these recommendations:
--- Do not use your production environment, unless it is necessary.-- If you don't have an Azure AD trial environment, you can [get a one-month trial](https://azure.microsoft.com/pricing/free-trial/).
+> We do not recommend that you test the steps in this tutorial in a production environment unless it is necessary.
## Scenario description
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
+In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Amazon Web Services (AWS) supports **SP and IDP** initiated SSO
+AWS supports SP-initiated and IDP-initiated SSO.
-## Adding Amazon Web Services (AWS) from the gallery
+## Add AWS from the gallery
-To configure the integration of Amazon Web Services (AWS) into Azure AD, you need to add Amazon Web Services (AWS) from the gallery to your list of managed SaaS apps.
+To configure the integration of AWS into Azure AD, you add AWS from the gallery to your list of managed software as a service (SaaS) apps.
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Amazon Web Services (AWS)** in the search box.
-1. Select **Amazon Web Services (AWS)** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. Sign in to the Azure portal by using either a work or school account, or a personal Microsoft account.
+1. On the left pane, select the Azure AD service you want to work with.
+1. Go to **Enterprise Applications**, and then select **All Applications**.
+1. To add an application, select **New application**.
+1. In the **Add from the gallery** section, type **Amazon Web Services** in the search box.
+1. In the results list, select **Amazon Web Services**, and then add the app. In a few seconds, the app is added to your tenant.
-1. Once the application is added, go to **Properties** page and copy the **Object ID**.
+1. Go to the **Properties** pane, and then copy the value that's displayed in the **Object ID** box.
- ![Object ID](./media/aws-multi-accounts-tutorial/tutorial-amazonwebservices-properties.png)
+ ![Screenshot of the Object ID box on the Properties pane.](./media/aws-multi-accounts-tutorial/tutorial-amazonwebservices-properties.png)
## Configure and test Azure AD SSO
-In this section, you configure and test Azure AD single sign-on with Amazon Web Services (AWS) based on a test user called "Britta Simon".
+In this section, you configure and test Azure AD single sign-on with AWS based on a test user called "Britta Simon."
-For single sign-on to work, Azure AD needs to know what the counterpart user in Amazon Web Services (AWS) is to a user in Azure AD. In other words, a link relationship between an Azure AD user and the related user in Amazon Web Services (AWS) needs to be established.
+For single sign-on to work, Azure AD needs to know what the counterpart user in AWS is to the Azure AD user. In other words, a link relationship between the Azure AD user and the same user in AWS needs to be established.
-In Amazon Web Services (AWS), assign the value of the **user name** in Azure AD as the value of the **Username** to establish the link relationship.
+In AWS, assign the value of the **user name** in Azure AD as the value of the AWS **Username** to establish the link relationship.
-To configure and test Azure AD single sign-on with Amazon Web Services (AWS), perform the following steps:
+To configure and test Azure AD single sign-on with AWS, do the following:
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
-2. **[Configure Amazon Web Services (AWS) SSO](#configure-amazon-web-services-aws-sso)** - to configure the Single Sign-On settings on application side.
-3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** to enable your users to use this feature.
+1. **[Configure AWS SSO](#configure-aws-sso)** to configure SSO settings on the application side.
+1. **[Test SSO](#test-sso)** to verify that the configuration works.
### Configure Azure AD SSO
-In this section, you enable Azure AD single sign-on in the Azure portal and configure single sign-on in your Amazon Web Services (AWS) application.
+In this section, you enable Azure AD SSO in the Azure portal and configure SSO in your AWS application by doing the following:
-**To configure Azure AD single sign-on with Amazon Web Services (AWS), perform the following steps:**
+1. In the Azure portal, on the left pane of the **Amazon Web Services (AWS)** application integration page, select **Single sign-on**.
-1. In the Azure portal, on the **Amazon Web Services (AWS)** application integration page, select **Single sign-on**.
+ ![Screenshot of the "Single sign-on" command.](common/select-sso.png)
- ![Configure single sign-on link](common/select-sso.png)
+1. On the **Select a single sign-on method** pane, select **SAML/WS-Fed** mode to enable single sign-on.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+ ![Screenshot of the "Select a single sign-on method" pane.](common/select-saml-option.png)
- ![Single sign-on select mode](common/select-saml-option.png)
+1. On the **Set up Single Sign-On with SAML** pane, select the **Edit** button (pencil icon).
-3. On the **Set up Single Sign-On with SAML** page, click **pencil** icon to open **Basic SAML Configuration** dialog.
+ ![Screenshot of the Edit button on the "Set up Single Sign-On with SAML" pane.](common/edit-urls.png)
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. The **Basic SAML Configuration** pane opens. Skip this section, because the app is preintegrated with Azure. Select **Save**.
-4. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure and click **Save**.
+ The AWS application expects the SAML assertions in a specific format. You can manage the values of these attributes from the **User Attributes & Claims** section on the **Application integration** page.
+
+1. On the **Set up Single Sign-On with SAML** page, select the **Edit** button.
-5. Amazon Web Services (AWS) application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes & Claims** section on application integration page. On the **Set up Single Sign-On with SAML** page, click **Edit** button to open **User Attributes & Claims** dialog.
+ ![Screenshot of the Edit button on the "User Attributes" pane.](common/edit-attribute.png)
- ![Screenshot shows User Attributes with the edit control called out.](common/edit-attribute.png)
+1. In the **User Claims** section of the **User Attributes** pane, configure the SAML token attribute by using the values in the following table:
-6. In the **User Claims** section on the **User Attributes** dialog, configure SAML token attribute as shown in the image above and perform the following steps:
-
- | Name | Source Attribute | Namespace |
+ | Name | Source attribute | Namespace |
| | | | | RoleSessionName | user.userprincipalname | `https://aws.amazon.com/SAML/Attributes` | | Role | user.assignedroles | `https://aws.amazon.com/SAML/Attributes`|
- | SessionDuration | "provide a value between 900 seconds (15 minutes) to 43200 seconds (12 hours)" | `https://aws.amazon.com/SAML/Attributes` |
-
- 1. Click **Add new claim** to open the **Manage user claims** dialog.
-
- ![Screenshot shows User claims with Add new claim and Save called out.](common/new-save-attribute.png)
+ | SessionDuration | "provide a value from 900 seconds (15 minutes) to 43200 seconds (12 hours)" | `https://aws.amazon.com/SAML/Attributes` |
+
+ a. Select **Add new claim** and then, on the **Manage user claims** pane, do the following:
- ![Screenshot shows Manage user claims where you can enter the values described in this step.](common/new-attribute-details.png)
+ ![Screenshot of "Add new claim" and "Save" buttons on the "User claims" pane.](common/new-save-attribute.png)
- b. In the **Name** textbox, type the attribute name shown for that row.
+ ![Screenshot of the "Manage user claims" pane.](common/new-attribute-details.png)
- c. In the **Namespace** textbox, type the Namespace value shown for that row.
+ b. In the **Name** box, enter the attribute name.
- d. Select Source as **Attribute**.
+ c. In the **Namespace** box, enter the namespace value.
- e. From the **Source attribute** list, type the attribute value shown for that row.
+ d. For the **Source**, select **Attribute**.
- f. Click **Ok**
+ e. In the **Source attribute** drop-down list, select the attribute.
- g. Click **Save**.
+ f. Select **Ok**, and then select **Save**.
- >[!NOTE]
- >For more information about roles in Azure AD, see [here](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui--preview).
+ >[!NOTE]
+ >For more information about roles in Azure AD, see [Add app roles to your application and receive them in the token](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui--preview).
-7. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select **Download** to download the federation metadata XML file, and then save it to your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot of the "Federation Metadata XML" download link on the "SAML Signing Certificate" pane.](common/metadataxml.png)
-### Configure Amazon Web Services (AWS) SSO
+### Configure AWS SSO
-1. In a different browser window, sign-on to your Amazon Web Services (AWS) company site as administrator.
+1. In a new browser window, sign in to your AWS company site as administrator.
-1. Click **AWS Home**.
+1. Select the **AWS Home** icon.
- ![Configure Single Sign-On home][11]
+ ![Screenshot of the "AWS Home" icon.][11]
-1. Click **Identity and Access Management**.
+1. On the **AWS services** pane, under **Security, Identity & Compliance**, select **IAM (Identity & Access Management)**.
- ![Configure Single Sign-On Identity][12]
+ ![Screenshot of the "Identity and Access Management" link on the "AWS Services" pane.][12]
-1. Click **Identity Providers**, and then click **Create Provider**.
+1. On the left pane, select **Identity Providers**, and then select **Create Provider**.
- ![Configure Single Sign-On Provider][13]
+ ![Screenshot of the "Create Provider" button.][13]
-1. On the **Configure Provider** dialog page, perform the following steps:
+1. On the **Configure Provider** pane, do the following:
- ![Configure Single Sign-On dialog][14]
+ ![Screenshot of the "Configure Provider" pane.][14]
- a. As **Provider Type**, select **SAML**.
+ a. In the **Provider Type** drop-down list, select **SAML**.
- b. In the **Provider Name** textbox, type a provider name (for example: *WAAD*).
+ b. In the **Provider Name** box, enter a provider name (for example, *WAAD*).
- c. To upload your downloaded **metadata file** from Azure portal, click **Choose File**.
+ c. Next to the **Metadata Document** box, select **Choose File** to upload your downloaded federation metadata XML file to the Azure portal.
- d. Click **Next Step**.
+ d. Select **Next Step**.
-1. On the **Verify Provider Information** dialog page, click **Create**.
+1. On the **Verify Provider Information** pane, select **Create**.
- ![Configure Single Sign-On Verify][15]
+ ![Screenshot of the "Verify Provider Information" pane.][15]
-1. Click **Roles**, and then click **Create role**.
+1. On the left pane, select **Roles**, and then select **Create role**.
- ![Configure Single Sign-On Roles][16]
+ ![Screenshot of the "Create role" button on the Roles pane.][16]
> [!NOTE]
- > The combined length of the Role ARN and the SAML provider ARN for a role being imported must be 240 characters or less.
+ > The combined length of the role Amazon Resource Name (ARN) and the SAML provider ARN for a role that's being imported must be 240 or fewer characters.
-1. On the **Create role** page, perform the following steps:
+1. On the **Create role** page, do the following:
- ![Configure Single Sign-On Trust][19]
+ ![Screenshot of the "SAML 2.0 federation" trusted entity button on the "Create role" page.][19]
- a. Select **SAML 2.0 federation** under **Select type of trusted entity**.
+ a. Under **Select type of trusted entity**, select **SAML 2.0 federation**.
- b. Under **Choose a SAML 2.0 Provider section**, select the **SAML provider** you have created previously (for example: *WAAD*)
+ b. Under **Choose a SAML 2.0 provider**, select the SAML provider that you created previously (for example, *WAAD*).
c. Select **Allow programmatic and AWS Management Console access**.
- d. Click **Next: Permissions**.
+ d. Select **Next: Permissions**.
-1. Search **Administrator Access** in the search bar and select the **AdministratorAccess** checkbox and then click **Next: Tags**.
+1. In the search box, enter **Administrator Access**, select the **AdministratorAccess** check box, and then select **Next: Tags**.
- ![Screenshot shows AdministratorAccess selected as a Policy name.](./media/aws-multi-accounts-tutorial/administrator-access.png)
+ ![Screenshot of the "Policy name" list with the AdministratorAccess policy selected.](./media/aws-multi-accounts-tutorial/administrator-access.png)
-1. On the **Add tags (optional)** section, perform the following steps:
+1. On the **Add tags (optional)** pane, do the following:
- ![Add tags](./media/aws-multi-accounts-tutorial/config2.png)
+ ![Screenshot of the "Add tags (optional)" pane.](./media/aws-multi-accounts-tutorial/config2.png)
- a. In the **Key** textbox, enter the key name for ex: Azureadtest.
+ a. In the **Key** box, enter the key name (for example, *Azureadtest*).
- b. In the **Value (optional)** textbox, enter the key value using the following format `accountname-aws-admin`. The account name should be in all lowercase.
+ b. In the **Value (optional)** box, enter the key value in the following format: `<accountname-aws-admin>`. The account name should be in all lowercase letters.
- c. Click **next: Review**.
+ c. Select **Next: Review**.
-1. On the **Review** dialog, perform the following steps:
+1. On the **Review** pane, do the following:
- ![Configure Single Sign-On Review][34]
+ ![Screenshot of the Review pane, with the "Role name" and "Role description" boxes highlighted.][34]
- a. In the **Role name** textbox, enter the value in the following pattern `accountname-aws-admin`.
+ a. In the **Role name** box, enter the value in the following format: `<accountname-aws-admin>`.
- b. In the **Role description** textbox, enter the same value which you have used for the role name.
+ b. In the **Role description** box, enter the value that you used for the role name.
- c. Click **Create Role**.
+ c. Select **Create role**.
- d. Create as many roles as needed and map them to the Identity Provider.
+ d. Create as many roles as you need, and map them to the identity provider.
> [!NOTE]
- > Similarly create remaining other roles like accountname-finance-admin, accountname-read-only-user, accountname-devops-user, accountname-tpm-user with different policies to be attached. Later also these role policies can be changed as per requirements per AWS account but its always better to keep same policies for each role across the AWS accounts.
+ > Similarly, you can create other roles, such as *accountname-finance-admin*, *accountname-read-only-user*, *accountname-devops-user*, or *accountname-tpm-user*, each with a different policy attached to it. You can change these role policies later, according to the requirements for each AWS account. It's a good idea to keep the same policies for each role across the AWS accounts.
-1. Please make a note of account ID for that AWS account either from EC2 properties or IAM dashboard as highlighted below:
+1. Be sure to note the account ID for the AWS account either from the Amazon Elastic Compute Cloud (Amazon EC2) properties pane or the IAM dashboard, as shown in the following screenshot:
- ![Screenshot shows where the account I D appears in the A W S window.](./media/aws-multi-accounts-tutorial/aws-accountid.png)
+ ![Screenshot showing where the account ID is displayed on the "Identity and Access Management" pane.](./media/aws-multi-accounts-tutorial/aws-accountid.png)
-1. Now sign into Azure portal and navigate to **Groups**.
+1. Sign in to the Azure portal, and then go to **Groups**.
-1. Create new groups with the same name as that of IAM Roles created earlier and note down the **Object IDs** of these new groups.
+1. Create new groups with the same name as that of the IAM roles you created earlier, and then note the value in the **Object Id** box of each of these new groups.
- ![Select Administrator Access1](./media/aws-multi-accounts-tutorial/copy-objectids.png)
+ ![Screenshot of the account details for a new group.](./media/aws-multi-accounts-tutorial/copy-objectids.png)
-1. Sign out from current AWS account and login with other account where you want to configure single sign on with Azure AD.
+1. Sign out of the current AWS account, and then sign in to another account where you want to configure SSO with Azure AD.
-1. Once all the roles are created in the accounts, they show up in the **Roles** list for those accounts.
+1. After you've created all the roles in the accounts, they're displayed in the **Roles** list for those accounts.
- ![Screenshot shows the Roles list with Role name, Description, and Trusted entities.](./media/aws-multi-accounts-tutorial/tutorial-amazonwebservices-listofroles.png)
+ ![Screenshot of the roles list, showing each role's name, description, and trusted entities.](./media/aws-multi-accounts-tutorial/tutorial-amazonwebservices-listofroles.png)
-1. We need to capture all the Role ARN and Trusted Entities for all the roles across all the accounts, which we need to map manually with Azure AD application.
+You next need to capture all the role ARNs and trusted entities for all roles across all accounts. You'll need to map them manually to the Azure AD application. To do so:
-1. Click on the roles to copy **Role ARN** and **Trusted Entities** values. You need these values for all the roles that you need to create in Azure AD.
+1. Select each role to copy its role ARN and trusted entity values. You'll need them for all the roles that you'll create in Azure AD.
- ![Roles setup2](./media/aws-multi-accounts-tutorial/tutorial-amazonwebservices-role-summary.png)
+ ![Screenshot of the Summary pane for the role ARNs and trusted entities.](./media/aws-multi-accounts-tutorial/tutorial-amazonwebservices-role-summary.png)
-1. Perform the above step for all the roles in all the accounts and store all of them in format **Role ARN,Trusted entities** in a notepad.
+1. Repeat the preceding step for all the roles in all the accounts, and then store them in a text file in the following format: `<Role ARN>,<Trusted entities>`.
-1. Open [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) in another window.
+1. Open [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer), and then do the following:
- 1. Sign in to the Microsoft Graph Explorer site using the Global Admin/Co-admin credentials for your tenant.
+ a. Sign in to the Microsoft Graph Explorer site with the Global Admin or Co-admin credentials for your tenant.
- 1. You need to have sufficient permissions to create the roles. Click on **modify permissions** to get the required permissions.
+ b. You need sufficient permissions to create the roles. Select **modify permissions**.
- ![Microsoft Graph Explorer dialog box1](./media/aws-multi-accounts-tutorial/graph-explorer-new9.png)
+ ![Screenshot of the "modify permissions" link on the Microsoft Graph Explorer Authentication pane.](./media/aws-multi-accounts-tutorial/graph-explorer-new9.png)
- 1. Select following permissions from the list (if you don't have these already) and click "Modify Permissions"
+ c. In the permissions list, if you don't already have the permissions that are shown in the following screenshot, select each one, and then select **Modify Permissions**.
- ![Microsoft Graph Explorer dialog box2](./media/aws-multi-accounts-tutorial/graph-explorer-new10.png)
+ ![Screenshot of the Microsoft Graph Explorer permissions list, with the appropriate permissions highlighted.](./media/aws-multi-accounts-tutorial/graph-explorer-new10.png)
- 1. This will ask you to login again and accept the consent. After accepting the consent, you are logged into the Microsoft Graph Explorer again.
+   d. Sign in to Graph Explorer again, and accept the consent prompt for the requested permissions.
- 1. Change the version dropdown to **beta**. To fetch all the Service Principals from your tenant, use the following query: `https://graph.microsoft.com/beta/servicePrincipals`. If you are using multiple directories, then you can use following pattern, which has your primary domain in it: `https://graph.microsoft.com/beta/contoso.com/servicePrincipals`.
+ e. At the top of the pane, select **GET** for the method, select **beta** for the version, and then, in the query box, enter either of the following:
+
+ * To fetch all the service principals from your tenant, use `https://graph.microsoft.com/beta/servicePrincipals`.
+ * If you're using multiple directories, use `https://graph.microsoft.com/beta/contoso.com/servicePrincipals`, which contains your primary domain.
- ![Microsoft Graph Explorer dialog box3](./media/aws-multi-accounts-tutorial/graph-explorer-new1.png)
+ ![Screenshot of the Microsoft Graph Explorer query "Request Body" pane.](./media/aws-multi-accounts-tutorial/graph-explorer-new1.png)
- 1. From the list of Service Principals fetched, get the one you need to modify. You can also use the Ctrl+F to search the application from all the listed ServicePrincipals. You can use following query by using the **Service Principal Object ID** which you have copied from Azure AD Properties page to get to the respective Service Principal.
+ f. From the list of service principals, get the one you need to modify.
+
+    You can also press Ctrl+F to search for the application among the listed service principals. To get a specific service principal, include in the query the service principal object ID, which you copied earlier from the Azure AD Properties pane, as shown here:
- `https://graph.microsoft.com/beta/servicePrincipals/<objectID>`.
+ `https://graph.microsoft.com/beta/servicePrincipals/<objectID>`.
- ![Microsoft Graph Explorer dialog box4](./media/aws-multi-accounts-tutorial/graph-explorer-new2.png)
+ ![Screenshot showing a service principal query that includes the object ID.](./media/aws-multi-accounts-tutorial/graph-explorer-new2.png)
- 1. Extract the appRoles property from the service principal object.
+ g. Extract the appRoles property from the service principal object.
- ![Microsoft Graph Explorer dialog box5](./media/aws-multi-accounts-tutorial/graph-explorer-new3.png)
+ ![Screenshot of the code for extracting the appRoles property from the service principal object.](./media/aws-multi-accounts-tutorial/graph-explorer-new3.png)
- 1. You now need to generate new roles for your application.
+ h. You now need to generate new roles for your application.
- 1. Below JSON is an example of appRoles object. Create a similar object to add the roles you want for your application.
+ i. The following JSON code is an example of an appRoles object. Create a similar object to add the roles you want for your application.
``` {
@@ -321,47 +321,47 @@ In this section, you enable Azure AD single sign-on in the Azure portal and conf
``` > [!Note]
- > You can only add new roles after the **msiam_access** for the patch operation. Also, you can add as many roles as you want per your Organization need. Azure AD will send the **value** of these roles as the claim value in SAML response.
+ > You can add new roles only after you've added *msiam_access* for the patch operation. You can also add as many roles as you want, depending on your organization's needs. Azure AD sends the *value* of these roles as the claim value in the SAML response.
- 1. Go back to your Microsoft Graph Explorer and change the method from **GET** to **PATCH**. Patch the Service Principal object to have desired roles by updating appRoles property similar to the one shown above in the example. Click **Run Query** to execute the patch operation. A success message confirms the creation of the role for your Amazon Web Services application.
+ j. In Microsoft Graph Explorer, change the method from **GET** to **PATCH**. Patch the service principal object with the roles you want by updating the appRoles property, like the one shown in the preceding example. Select **Run Query** to execute the patch operation. A success message confirms the creation of the role for your AWS application.
- ![Microsoft Graph Explorer dialog box6](./media/aws-multi-accounts-tutorial/graph-explorer-new11.png)
+ ![Screenshot of the Microsoft Graph Explorer pane, with the method changed to PATCH.](./media/aws-multi-accounts-tutorial/graph-explorer-new11.png)
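
   If you prefer to script steps e through j rather than work in the Graph Explorer UI, the following Python sketch shows the same GET and PATCH calls against the beta endpoint. It assumes you already have an access token with permission to update service principals; the token, service principal object ID, AWS account ID, and role name are placeholders, and the `value` of each new role is the `<Role ARN>,<Trusted entities>` pair captured earlier.

   ```python
   import json
   import uuid

   import requests

   GRAPH = "https://graph.microsoft.com/beta"
   TOKEN = "<access-token>"                       # placeholder: token allowed to update service principals
   SP_OBJECT_ID = "<service-principal-object-id>" # placeholder: copied from the Azure AD Properties pane

   headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

   # GET the service principal first so the existing appRoles (including msiam_access) are preserved.
   sp = requests.get(f"{GRAPH}/servicePrincipals/{SP_OBJECT_ID}", headers=headers)
   sp.raise_for_status()
   app_roles = sp.json()["appRoles"]

   # Append one role per "<Role ARN>,<Trusted entities>" pair captured from the AWS accounts.
   app_roles.append({
       "id": str(uuid.uuid4()),
       "displayName": "accountname-aws-admin",
       "description": "accountname-aws-admin",
       "isEnabled": True,
       "allowedMemberTypes": ["User"],
       "value": "arn:aws:iam::<account-id>:role/accountname-aws-admin,"
                "arn:aws:iam::<account-id>:saml-provider/WAAD",
   })

   # PATCH the service principal with the updated appRoles collection.
   patch = requests.patch(
       f"{GRAPH}/servicePrincipals/{SP_OBJECT_ID}",
       headers=headers,
       data=json.dumps({"appRoles": app_roles}),
   )
   patch.raise_for_status()  # a 204 No Content response indicates the roles were created
   ```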
-1. After the Service Principal is patched with more roles, you can assign Users/Groups to the respective roles. This can be done by going to portal and navigating to the Amazon Web Services application. Click on the **Users and Groups** tab on the top.
+1. After the service principal is patched with more roles, you can assign users and groups to their respective roles. You do this in the Azure portal by going to the AWS application and then selecting the **Users and Groups** tab at the top.
-1. We recommend you to create new groups for every AWS role so that you can assign that particular role in that group. Note that this is one to one mapping for one group to one role. You can then add the members who belong to that group.
+1. We recommend that you create a new group for every AWS role so that you can assign that particular role in the group. This one-to-one mapping means that one group is assigned to one role. You can then add the members who belong to that group.
-1. Once the Groups are created, select the group and assign to the application.
+1. After you've created the groups, select the group and assign it to the application.
- ![Configure Single Sign-On Add1](./media/aws-multi-accounts-tutorial/graph-explorer-new5.png)
+ ![Screenshot of the "Users and groups" pane.](./media/aws-multi-accounts-tutorial/graph-explorer-new5.png)
> [!Note]
- > Nested groups are not supported when assigning groups.
+ > Nested groups are not supported when you assign groups.
-1. To assign the role to the group, select the role and click on **Assign** button in the bottom of the page.
+1. To assign the role to the group, select the role, and then select **Assign**.
- ![Configure Single Sign-On Add2](./media/aws-multi-accounts-tutorial/graph-explorer-new6.png)
+ ![Screenshot of the "Add Assignment" pane.](./media/aws-multi-accounts-tutorial/graph-explorer-new6.png)
> [!Note]
- > Please note that you need to refresh your session in Azure portal to see new roles.
+ > After you've assigned the roles, you can view them by refreshing your Azure portal session.
### Test SSO
-In this section, you test your Azure AD single sign-on configuration using the My Apps.
+In this section, you test your Azure AD single sign-on configuration by using Microsoft My Apps.
-When you click the Amazon Web Services (AWS) tile in the My Apps, you should get Amazon Web Services (AWS) application page with option to select the role.
+When you select the **AWS** tile in My Apps, the AWS application page opens with an option to select the role.
-![Test single sign-on1](./media/aws-multi-accounts-tutorial/tutorial-amazonwebservices-test-screen.png)
+![Screenshot of the AWS page for testing SSO.](./media/aws-multi-accounts-tutorial/tutorial-amazonwebservices-test-screen.png)
You can also verify the SAML response to see the roles being passed as claims.
-![Test single sign-on2](./media/aws-multi-accounts-tutorial/tutorial-amazonwebservices-test-saml.png)
+![Screenshot of the SAML response.](./media/aws-multi-accounts-tutorial/tutorial-amazonwebservices-test-saml.png)
-For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+For more information about My Apps, see [Sign in and start apps from the My Apps portal](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Amazon Web Services (AWS) you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+After you configure AWS, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. For more information, see [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
<!--Image references-->
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/jfrog-artifactory-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/jfrog-artifactory-tutorial.md
@@ -78,21 +78,24 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
`<servername>.jfrog.io` b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<servername>.jfrog.io/<servername>/webapp/saml/loginResponse`
+
+ - For Artifactory 6.x: `https://<servername>.jfrog.io/artifactory/webapp/saml/loginResponse`
+ - For Artifactory 7.x: `https://<servername>.jfrog.io/<servername>/webapp/saml/loginResponse`
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<servername>.jfrog.io/<servername>/webapp/`
+ - For Artifactory 6.x: `https://<servername>.jfrog.io/<servername>/webapp/`
+ - For Artifactory 7.x: `https://<servername>.jfrog.io/ui/login`
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [JFrog Artifactory Client support team](https://support.jfrog.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. JFrog Artifactory application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click **Edit** icon to open User Attributes dialog.
+1. The JFrog Artifactory application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the User Attributes dialog.
![Screenshot shows User Attributes with the edit control called out.](common/edit-attribute.png)
-1. In addition to above, JFrog Artifactory application expects few more attributes to be passed back in SAML response. In the **User Attributes & Claims** section on the **Group Claims (Preview)** dialog, perform the following steps:
+1. In addition to the above, JFrog Artifactory expects a number of additional attributes to be passed back in the SAML response. In the **User Attributes & Claims** section on the **Group Claims (Preview)** dialog, perform the following steps:
a. Click the **pen** next to **Groups returned in claim**.
@@ -104,17 +107,20 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
c. Click **Save**.
-4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
+4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, locate **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificateraw.png)
+ ![The Certificate download link](./media/jfrog-artifactory-tutorial/certificate-base.png)
-6. On the **Set up JFrog Artifactory** section, copy the appropriate URL(s) based on your requirement.
+6. Configure the Artifactory **SAML Service Provider Name** with the value of the **Identifier** field (see step 4). In the **Set up JFrog Artifactory** section, copy the appropriate URL(s) based on your requirement.
+
+ - For Artifactory 6.x: `https://<servername>.jfrog.io/artifactory/webapp/saml/loginResponse`
+ - For Artifactory 7.x: `https://<servername>.jfrog.io/<servername>/webapp/saml/loginResponse`
![Copy configuration URLs](common/copy-configuration-urls.png) ### Configure JFrog Artifactory SSO
-To configure single sign-on on **JFrog Artifactory** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [JFrog Artifactory support team](https://support.jfrog.com). They set this setting to have the SAML SSO connection set properly on both sides.
+Single sign-on on the **JFrog Artifactory** side is configured by the Artifactory admin in the SAML configuration screen.
### Create an Azure AD test user
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/rstudio-server-pro-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/rstudio-server-pro-tutorial.md
@@ -1,6 +1,6 @@
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with RStudio Server Pro SAML Authentication | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and RStudio Server Pro SAML Authentication.
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with RStudio Server Pro | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and RStudio Server Pro.
@@ -13,12 +13,12 @@ Last updated 10/28/2020
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with RStudio Server Pro SAML Authentication
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with RStudio Server Pro
-In this tutorial, you'll learn how to integrate RStudio Server Pro SAML Authentication with Azure Active Directory (Azure AD). When you integrate RStudio Server Pro SAML Authentication with Azure AD, you can:
+In this tutorial, you'll learn how to integrate RStudio Server Pro (RSP) with Azure Active Directory (Azure AD). When you integrate RSP with Azure AD, you can:
-* Control in Azure AD who has access to RStudio Server Pro SAML Authentication.
-* Enable your users to be automatically signed-in to RStudio Server Pro SAML Authentication with their Azure AD accounts.
+* Control in Azure AD who has access to RSP.
+* Enable your users to be automatically signed-in to RSP with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
@@ -26,17 +26,17 @@ In this tutorial, you'll learn how to integrate RStudio Server Pro SAML Authenti
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* RStudio Server Pro SAML Authentication single sign-on (SSO) enabled subscription.
+* An RSP installation (version 1.4 or later).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* RStudio Server Pro SAML Authentication supports **SP and IDP** initiated SSO
+* RSP supports **SP and IDP** initiated SSO.
-## Adding RStudio Server Pro SAML Authentication from the gallery
+## Adding RStudio Server Pro from the gallery
-To configure the integration of RStudio Server Pro SAML Authentication into Azure AD, you need to add RStudio Server Pro SAML Authentication from the gallery to your list of managed SaaS apps.
+To configure the integration of RSP into Azure AD, you need to add RStudio Server Pro SAML Authentication from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service.
@@ -46,17 +46,17 @@ To configure the integration of RStudio Server Pro SAML Authentication into Azur
1. Select **RStudio Server Pro SAML Authentication** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for RStudio Server Pro SAML Authentication
+## Configure and test Azure AD SSO for RStudio Server Pro
-Configure and test Azure AD SSO with RStudio Server Pro SAML Authentication using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in RStudio Server Pro SAML Authentication.
+Configure and test Azure AD SSO with RSP using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in RSP.
To configure and test Azure AD SSO with RStudio Server Pro SAML Authentication, perform the following steps: 1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure RStudio Server Pro SAML Authentication SSO](#configure-rstudio-server-pro-saml-authentication-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create RStudio Server Pro SAML Authentication test user](#create-rstudio-server-pro-saml-authentication-test-user)** - to have a counterpart of B.Simon in RStudio Server Pro SAML Authentication that is linked to the Azure AD representation of user.
+1. **[Configure RStudio Server Pro SSO](#configure-rstudio-server-pro-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create RStudio Server Pro test user](#create-rstudio-server-pro-test-user)** - to have a counterpart of B.Simon in RStudio Server Pro that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO
@@ -72,18 +72,18 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields: a. In the **Identifier** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.rstudioservices.com/<PATH>/saml/metadata`
+ `https://<RSP-SERVER>/<PATH>/saml/metadata`
b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.rstudioservices.com/<PATH>/saml/acs`
+ `https://<RSP-SERVER>/<PATH>/saml/acs`
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.rstudioservices.com`
+ `https://<RSP-SERVER>/<PATH>/`
> [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [RStudio Server Pro SAML Authentication Client support team](mailto:support@rstudio.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual URI of your RSP installation. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
@@ -113,13 +113,27 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure RStudio Server Pro SAML Authentication SSO
+## Configure RStudio Server Pro SSO
-To configure single sign-on on **RStudio Server Pro SAML Authentication** side, you need to send the **App Federation Metadata Url** to [RStudio Server Pro SAML Authentication support team](mailto:support@rstudio.com). They set this setting to have the SAML SSO connection set properly on both sides.
+1. Update the RSP configuration file `/etc/rstudio/rserver.conf` with the following:
-### Create RStudio Server Pro SAML Authentication test user
+ ```
+ auth-saml=1
+ auth-saml-metadata-url=<federation-metadata-URI>
+ auth-saml-sp-name-id-format=emailaddress
+ auth-saml-sp-attribute-username=NameID
+ auth-saml-sp-base-uri=<RSP-Server-URI>
+ ```
-In this section, you create a user called B.Simon in RStudio Server Pro SAML Authentication. Work with [RStudio Server Pro SAML Authentication support team](mailto:support@rstudio.com) to add the users in the RStudio Server Pro SAML Authentication platform. Users must be created and activated before you use single sign-on.
+2. Restart RSP by running the following:
+
+ ```
+ sudo rstudio-server restart
+ ```
+
+### Create RStudio Server Pro test user
+
+All users who will use RSP must be provisioned on the server. You can create users with the `useradd` or `adduser` command.
## Test SSO
@@ -139,4 +153,4 @@ You can also use Microsoft Access Panel to test the application in any mode. Whe
## Next steps
-Once you configure RStudio Server Pro SAML Authentication you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure RStudio Server Pro, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
aks https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-autoscaler.md
@@ -270,6 +270,9 @@ az aks nodepool update \
If you wish to re-enable the cluster autoscaler on an existing cluster, you can re-enable it using the [az aks nodepool update][az-aks-nodepool-update] command, specifying the `--enable-cluster-autoscaler`, `--min-count`, and `--max-count` parameters.
+> [!NOTE]
+> If you plan to use the cluster autoscaler with node pools that span multiple zones and use zone-related scheduling features such as volume topological scheduling, we recommend having one node pool per zone and enabling `--balance-similar-node-groups` through the autoscaler profile. This ensures that the autoscaler can scale up successfully and tries to keep the sizes of the node pools balanced.
+ ## Next steps This article showed you how to automatically scale the number of AKS nodes. You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][aks-scale-apps].
aks https://docs.microsoft.com/en-us/azure/aks/concepts-clusters-workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-clusters-workloads.md
@@ -3,7 +3,7 @@ Title: Concepts - Kubernetes basics for Azure Kubernetes Services (AKS)
description: Learn the basic cluster and workload components of Kubernetes and how they relate to features in Azure Kubernetes Service (AKS) Previously updated : 06/03/2019 Last updated : 12/07/2020
@@ -27,8 +27,8 @@ Azure Kubernetes Service (AKS) provides a managed Kubernetes service that reduce
A Kubernetes cluster is divided into two components: -- *Control plane* nodes provide the core Kubernetes services and orchestration of application workloads.-- *Nodes* run your application workloads.
+- The *control plane* provides the core Kubernetes services and orchestration of application workloads.
+- *Nodes* run your application workloads.
![Kubernetes control plane and node components](media/concepts-clusters-workloads/control-plane-and-nodes.png)
aks https://docs.microsoft.com/en-us/azure/aks/manage-azure-rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/manage-azure-rbac.md
@@ -282,3 +282,4 @@ az group delete -n MyResourceGroup
[az-feature-list]: /cli/azure/feature#az-feature-list [az-feature-register]: /cli/azure/feature#az-feature-register [az-aks-install-cli]: /cli/azure/aks?view=azure-cli-latest#az-aks-install-cli&preserve-view=true
+[az-provider-register]: /cli/azure/provider?view=azure-cli-latest#az-provider-register
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-access-restriction-policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-access-restriction-policies.md
@@ -8,7 +8,7 @@
ms.assetid: 034febe3-465f-4840-9fc6-c448ef520b0f Previously updated : 11/23/2020 Last updated : 02/09/2021
@@ -18,7 +18,7 @@ This topic provides a reference for the following API Management policies. For i
## <a name="AccessRestrictionPolicies"></a> Access restriction policies -- [Check HTTP header](#CheckHTTPHeader) - Enforces existence and/or value of a HTTP Header.
+- [Check HTTP header](#CheckHTTPHeader) - Enforces existence and/or value of an HTTP header.
- [Limit call rate by subscription](#LimitCallRate) - Prevents API usage spikes by limiting call rate, on a per subscription basis. - [Limit call rate by key](#LimitCallRateByKey) - Prevents API usage spikes by limiting call rate, on a per key basis. - [Restrict caller IPs](#RestrictCallerIPs) - Filters (allows/denies) calls from specific IP addresses and/or address ranges.
@@ -76,7 +76,7 @@ This policy can be used in the following policy [sections](./api-management-howt
## <a name="LimitCallRate"></a> Limit call rate by subscription
-The `rate-limit` policy prevents API usage spikes on a per subscription basis by limiting the call rate to a specified number per a specified time period. When this policy is triggered the caller receives a `429 Too Many Requests` response status code.
+The `rate-limit` policy prevents API usage spikes on a per subscription basis by limiting the call rate to a specified number per a specified time period. When the call rate is exceeded, the caller receives a `429 Too Many Requests` response status code.
> [!IMPORTANT] > This policy can be used only once per policy document.
@@ -94,18 +94,25 @@ The `rate-limit` policy prevents API usage spikes on a per subscription basis by
```xml <rate-limit calls="number" renewal-period="seconds"> <api name="API name" id="API id" calls="number" renewal-period="seconds" />
- <operation name="operation name" id="operation id" calls="number" renewal-period="seconds" />
+ <operation name="operation name" id="operation id" calls="number" renewal-period="seconds"
+ retry-after-header-name="header name"
+ retry-after-variable-name="policy expression variable name"
+ remaining-calls-header-name="header name"
+ remaining-calls-variable-name="policy expression variable name"
+ total-calls-header-name="header name"/>
</api> </rate-limit> ``` ### Example
+In the following example, the per subscription rate limit is 20 calls per 90 seconds. After each policy execution, the remaining calls allowed in the time period are stored in the variable `remainingCallsPerSubscription`.
+ ```xml <policies> <inbound> <base />
- <rate-limit calls="20" renewal-period="90" />
+ <rate-limit calls="20" renewal-period="90" remaining-calls-variable-name="remainingCallsPerSubscription"/>
</inbound> <outbound> <base />
@@ -127,7 +134,12 @@ The `rate-limit` policy prevents API usage spikes on a per subscription basis by
| -- | -- | -- | - | | name | The name of the API for which to apply the rate limit. | Yes | N/A | | calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Yes | N/A |
-| renewal-period | The time period in seconds after which the quota resets. | Yes | N/A |
+| renewal-period | The time period in seconds after which the rate resets. | Yes | N/A |
+| retry-after-header-name | The name of a response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
+| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
+| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
+| remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
+| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. | No | N/A |
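
For callers, the optional headers described above make it easy to back off when the limit is hit. The following sketch assumes the policy was configured with `retry-after-header-name="Retry-After"` and `remaining-calls-header-name="Remaining-Calls"`; the gateway URL and subscription key are placeholders.

```python
import time

import requests

URL = "https://<apim-instance>.azure-api.net/<api>/<operation>"  # placeholder gateway URL
HEADERS = {"Ocp-Apim-Subscription-Key": "<subscription-key>"}    # placeholder subscription key

for attempt in range(5):
    response = requests.get(URL, headers=HEADERS)
    if response.status_code != 429:
        # Matches the remaining-calls-header-name configured in the policy.
        print("Calls left in this renewal period:", response.headers.get("Remaining-Calls"))
        break
    # Matches the retry-after-header-name configured in the policy.
    wait_seconds = int(response.headers.get("Retry-After", "5"))
    time.sleep(wait_seconds)
```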
### Usage
@@ -142,7 +154,7 @@ This policy can be used in the following policy [sections](./api-management-howt
> [!IMPORTANT] > This feature is unavailable in the **Consumption** tier of API Management.
-The `rate-limit-by-key` policy prevents API usage spikes on a per key basis by limiting the call rate to a specified number per a specified time period. The key can have an arbitrary string value and is typically provided using a policy expression. Optional increment condition can be added to specify which requests should be counted towards the limit. When this policy is triggered the caller receives a `429 Too Many Requests` response status code.
+The `rate-limit-by-key` policy prevents API usage spikes on a per key basis by limiting the call rate to a specified number per a specified time period. The key can have an arbitrary string value and is typically provided using a policy expression. An optional increment condition can be added to specify which requests should be counted towards the limit. When the call rate is exceeded, the caller receives a `429 Too Many Requests` response status code.
For more information and examples of this policy, see [Advanced request throttling with Azure API Management](./api-management-sample-flexible-throttling.md).
@@ -158,13 +170,16 @@ For more information and examples of this policy, see [Advanced request throttli
<rate-limit-by-key calls="number" renewal-period="seconds" increment-condition="condition"
- counter-key="key value" />
+ counter-key="key value"
+ retry-after-header-name="header name" retry-after-variable-name="policy expression variable name"
+ remaining-calls-header-name="header name" remaining-calls-variable-name="policy expression variable name"
+ total-calls-header-name="header name"/>
``` ### Example
-In the following example, the rate limit is keyed by the caller IP address.
+In the following example, the rate limit of 10 calls per 60 seconds is keyed by the caller IP address. After each policy execution, the remaining calls allowed in the time period are stored in the variable `remainingCallsPerIP`.
```xml <policies>
@@ -173,7 +188,8 @@ In the following example, the rate limit is keyed by the caller IP address.
<rate-limit-by-key calls="10" renewal-period="60" increment-condition="@(context.Response.StatusCode == 200)"
- counter-key="@(context.Request.IpAddress)"/>
+ counter-key="@(context.Request.IpAddress)"
+ remaining-calls-variable-name="remainingCallsPerIP"/>
</inbound> <outbound> <base />
@@ -193,8 +209,13 @@ In the following example, the rate limit is keyed by the caller IP address.
| - | -- | -- | - | | calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Yes | N/A | | counter-key | The key to use for the rate limit policy. | Yes | N/A |
-| increment-condition | The boolean expression specifying if the request should be counted towards the quota (`true`). | No | N/A |
-| renewal-period | The time period in seconds after which the quota resets. | Yes | N/A |
+| increment-condition | The boolean expression specifying if the request should be counted towards the rate (`true`). | No | N/A |
+| renewal-period | The time period in seconds after which the rate resets. | Yes | N/A |
+| retry-after-header-name | The name of a response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
+| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
+| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
+| remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
+| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. | No | N/A |
### Usage
@@ -315,7 +336,7 @@ This policy can be used in the following policy [sections](./api-management-howt
> [!IMPORTANT] > This feature is unavailable in the **Consumption** tier of API Management.
-The `quota-by-key` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. The key can have an arbitrary string value and is typically provided using a policy expression. Optional increment condition can be added to specify which requests should be counted towards the quota. If multiple policies would increment the same key value, it is incremented only once per request. When the call limit is reached, the caller receives a `403 Forbidden` response status code.
+The `quota-by-key` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. The key can have an arbitrary string value and is typically provided using a policy expression. An optional increment condition can be added to specify which requests should be counted towards the quota. If multiple policies would increment the same key value, it is incremented only once per request. When the quota is exceeded, the caller receives a `403 Forbidden` response status code.
For more information and examples of this policy, see [Advanced request throttling with Azure API Management](./api-management-sample-flexible-throttling.md).
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-sample-flexible-throttling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-sample-flexible-throttling.md
@@ -36,14 +36,14 @@ Within Azure API Management, rate limits are typically propagated faster across
> Due to the distributed nature of throttling architecture, rate limiting is never completely accurate. The difference between the configured and the real number of allowed requests vary based on request volume and rate, backend latency, and other factors. ## Product-based throttling
-To date, the rate throttling capabilities have been limited to being scoped to a particular Product subscription, defined in the Azure portal. This is useful for the API provider to apply limits on the developers who have signed up to use their API, however, it does not help, for example, in throttling individual end users of the API. It is possible that for single user of the developer's application to consume the entire quota and then prevent other customers of the developer from being able to use the application. Also, several customers who might generate a high volume of requests may limit access to occasional users.
+Rate throttling capabilities that are scoped to a particular subscription are useful for the API provider to apply limits on the developers who have signed up to use their API. However, it does not help, for example, in throttling individual end users of the API. It is possible for a single user of the developer's application to consume the entire quota and then prevent other customers of the developer from being able to use the application. Also, a few customers who generate a high volume of requests might limit access for occasional users.
## Custom key-based throttling > [!NOTE] > The `rate-limit-by-key` and `quota-by-key` policies are not available when in the Consumption tier of Azure API Management.
-The new [rate-limit-by-key](./api-management-access-restriction-policies.md#LimitCallRateByKey) and [quota-by-key](./api-management-access-restriction-policies.md#SetUsageQuotaByKey) policies provide a more flexible solution to traffic control. These new policies allow you to define expressions to identify the keys that are used to track traffic usage. The way this works is easiest illustrated with an example.
+The [rate-limit-by-key](./api-management-access-restriction-policies.md#LimitCallRateByKey) and [quota-by-key](./api-management-access-restriction-policies.md#SetUsageQuotaByKey) policies provide a more flexible solution to traffic control. These policies allow you to define expressions to identify the keys that are used to track traffic usage. The way this works is easiest illustrated with an example.
## IP address throttling The following policies restrict a single client IP address to only 10 calls every minute, with a total of 1,000,000 calls and 10,000 kilobytes of bandwidth per month.
@@ -73,7 +73,7 @@ If an end user is authenticated, then a throttling key can be generated based on
This example shows how to extract the Authorization header, convert it to `JWT` object and use the subject of the token to identify the user and use that as the rate limiting key. If the user identity is stored in the `JWT` as one of the other claims, then that value could be used in its place. ## Combined policies
-Although the new throttling policies provide more control than the existing throttling policies, there is still value combining both capabilities. Throttling by product subscription key ([Limit call rate by subscription](./api-management-access-restriction-policies.md#LimitCallRate) and [Set usage quota by subscription](./api-management-access-restriction-policies.md#SetUsageQuota)) is a great way to enable monetizing of an API by charging based on usage levels. The finer grained control of being able to throttle by user is complementary and prevents one user's behavior from degrading the experience of another.
+Although the user-based throttling policies provide more control than the subscription-based throttling policies, there is still value combining both capabilities. Throttling by product subscription key ([Limit call rate by subscription](./api-management-access-restriction-policies.md#LimitCallRate) and [Set usage quota by subscription](./api-management-access-restriction-policies.md#SetUsageQuota)) is a great way to enable monetizing of an API by charging based on usage levels. The finer grained control of being able to throttle by user is complementary and prevents one user's behavior from degrading the experience of another.
## Client driven throttling When the throttling key is defined using a [policy expression](./api-management-policy-expressions.md), then it is the API provider that is choosing how the throttling is scoped. However, a developer might want to control how they rate limit their own customers. This could be enabled by the API provider by introducing a custom header to allow the developer's client application to communicate the key to the API.
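
As a sketch of that pattern, the client below sends a per-customer key in a custom header; the header name `Rate-Key`, the gateway URL, and the subscription key are assumptions for illustration. On the API Management side, the provider would read that header in the policy's `counter-key` expression.

```python
import requests

URL = "https://<apim-instance>.azure-api.net/<api>/<operation>"  # placeholder gateway URL
SUBSCRIPTION_KEY = "<subscription-key>"                          # placeholder subscription key


def call_api(customer_id: str) -> requests.Response:
    # The developer's application decides how to partition its own customers
    # and communicates that key to the API in a custom header.
    return requests.get(
        URL,
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Rate-Key": customer_id,  # assumed header name used by the provider's rate-limit-by-key policy
        },
    )


print(call_api("customer-42").status_code)
```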
app-service https://docs.microsoft.com/en-us/azure/app-service/environment/management-addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/management-addresses.md
@@ -5,7 +5,7 @@
ms.assetid: a7738a24-89ef-43d3-bff1-77f43d5a3952 Previously updated : 11/20/2020 Last updated : 02/11/2021
@@ -23,6 +23,7 @@ The addresses noted below can be configured in a route table to avoid asymmetric
|--|--| | All public regions | 13.66.140.0, 13.67.8.128, 13.69.64.128, 13.69.227.128, 13.70.73.128, 13.71.170.64, 13.71.194.129, 13.75.34.192, 13.75.127.117, 13.77.50.128, 13.78.109.0, 13.89.171.0, 13.94.141.115, 13.94.143.126, 13.94.149.179, 20.36.106.128, 20.36.114.64, 23.102.135.246, 23.102.188.65, 40.69.106.128, 40.70.146.128, 40.71.13.64, 40.74.100.64, 40.78.194.128, 40.79.130.64, 40.79.178.128, 40.83.120.64, 40.83.121.56, 40.83.125.161, 40.112.242.192, 51.140.146.64, 51.140.210.128, 52.151.25.45, 52.162.106.192, 52.165.152.214, 52.165.153.122, 52.165.154.193, 52.165.158.140, 52.174.22.21, 52.178.177.147, 52.178.184.149, 52.178.190.65, 52.178.195.197, 52.187.56.50, 52.187.59.251, 52.187.63.19, 52.187.63.37, 52.224.105.172, 52.225.177.153, 52.231.18.64, 52.231.146.128, 65.52.172.237, 70.37.57.58, 104.44.129.141, 104.44.129.243, 104.44.129.255, 104.44.134.255, 104.208.54.11, 104.211.81.64, 104.211.146.128, 157.55.208.185, 191.233.203.64, 191.236.154.88, 52.181.183.11 | | Microsoft Azure Government | 23.97.29.209, 13.72.53.37, 13.72.180.105, 52.181.183.11, 52.227.80.100, 52.182.93.40, 52.244.79.34, 52.238.74.16 |
+| Azure China | 42.159.4.236, 42.159.80.125 |
## Configuring a Network Security Group
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/faq.md
@@ -1,130 +0,0 @@
- Title: Azure App Configuration FAQ
-description: Read answers to frequently asked questions (FAQ) about Azure App Configuration, such as how it's different from Azure Key Vault.
---- Previously updated : 02/19/2020---
-# Azure App Configuration FAQ
-
-This article answers frequently asked questions about Azure App Configuration.
-
-## How is App Configuration different from Azure Key Vault?
-
-App Configuration helps developers manage application settings and control feature availability. It aims to simplify many of the tasks of working with complex configuration data.
-
-App Configuration supports:
--- Hierarchical namespaces-- Labeling-- Extensive queries-- Batch retrieval-- Specialized management operations-- A feature-management user interface-
-App Configuration complements Key Vault, and the two should be used side by side in most application deployments.
-
-## Should I store secrets in App Configuration?
-
-Although App Configuration provides hardened security, Key Vault is still the best place for storing application secrets. Key Vault provides hardware-level encryption, granular access policies, and management operations such as certificate rotation.
-
-You can create App Configuration values that reference secrets stored in Key Vault. For more information, see [Use Key Vault references in an ASP.NET Core app](./use-key-vault-references-dotnet-core.md).
-
-## Does App Configuration encrypt my data?
-
-Yes. App Configuration encrypts all key values it holds, and it encrypts network communication. Key names and labels are used as indexes for retrieving configuration data and aren't encrypted.
-
-## Where does data stored in App Configuration reside?
-
-Customer data stored in App Configuration resides in the region where the customer's App Configuration store was created. App Configuration may replicate data to [paired regions](../best-practices-availability-paired-regions.md) for data resiliency, but it won't replicate or move customer data outside their Geo as defined by [data residency in Azure](https://azure.microsoft.com/global-infrastructure/data-residency/). Customers and end users may move, copy, or access their customer data from any location globally.
-
-## How is App Configuration different from Azure App Service settings?
-
-Azure App Service allows you to define app settings for each App Service instance. These settings are passed as environment variables to the application code. You can associate a setting with a specific deployment slot, if you want. For more information, see [Configure app settings](../app-service/configure-common.md#configure-app-settings).
-
-In contrast, Azure App Configuration allows you to define settings that can be shared among multiple apps. This includes apps running in App Service, as well as other platforms. Your application code accesses these settings through the configuration providers for .NET and Java, through the Azure SDK, or directly via REST APIs.
-
-You can also import and export settings between App Service and App Configuration. This capability allows you to quickly set up a new App Configuration store based on existing App Service settings. You can also share configuration with an existing app that relies on App Service settings.
-
-## Are there any size limitations on keys and values stored in App Configuration?
-
-There's a limit of 10 KB for a single key-value item.
-
-## How should I store configurations for multiple environments (test, staging, production, and so on)?
-
-You control who can access App Configuration at a per-store level. Use a separate store for each environment that requires different permissions. This approach provides the best security isolation.
-
-If you do not need security isolation between environments, you can use labels to differentiate between configuration values. [Use labels to enable different configurations for different environments](./howto-labels-aspnet-core.md) provides a complete example.
-
-## What are the recommended ways to use App Configuration?
-
-See [best practices](./howto-best-practices.md).
-
-## How much does App Configuration cost?
-
-There are two pricing tiers:
--- Free tier-- Standard tier.-
-If you created a store prior to the introduction of the Standard tier, it automatically moved to the Free tier upon general availability. You can choose to upgrade to the Standard tier or remain on the Free tier.
-
-You can't downgrade a store from the Standard tier to the Free tier. You can create a new store in the Free tier and then import configuration data into that store.
-
-## Which App Configuration tier should I use?
-
-Both App Configuration tiers offer core functionality, including config settings, feature flags, Key Vault references, basic management operations, metrics, and logs.
-
-The following are considerations for choosing a tier.
--- **Resources per subscription**: A resource consists of a single configuration store. Each subscription is limited to one configuration store in the free tier. Subscriptions can have an unlimited number of configuration stores in the standard tier.-- **Storage per resource**: In the free tier, each configuration store is limited to 10 MB of storage. In the standard tier, each configuration store can use up to 1 GB of storage.-- **Revision history**: App Configuration stores a history of all changes made to keys. In the free tier, this history is stored for seven days. In the standard tier, this history is stored for 30 days.-- **Requests quota**: Free tier stores are limited to 1,000 requests per day. When a store reaches 1,000 requests, it returns HTTP status code 429 for all requests until midnight UTC.-
- Standard tier stores are limited to 20,000 requests per hour. When the quota is exhausted, HTTP status code 429 is returned for all requests until the end of the hour.
--- **Service level agreement**: The standard tier has an SLA of 99.9% availability. The free tier doesn't have an SLA.-- **Security features**: Both tiers include basic security functionality, including encryption with Microsoft-managed keys, authentication via HMAC or Azure Active Directory, Azure RBAC support, managed identity, and service tags. The Standard tier offers more advanced security functionality, including Private Link support and encryption with customer-managed keys.-- **Cost**: Standard tier stores have a daily usage charge. The first 200,000 requests each day are included in the daily charge. There's also an overage charge for requests past the daily allocation. There's no cost to use a free tier store.-
-## Can I upgrade a store from the Free tier to the Standard tier? Can I downgrade a store from the Standard tier to the Free tier?
-
-You can upgrade from the Free tier to the Standard tier at any time.
-
-You can't downgrade a store from the Standard tier to the Free tier. You can create a new store in the Free tier and then [import configuration data into that store](howto-import-export-data.md).
-
-## Are there any limits on the number of requests made to App Configuration?
-
-In App Configuration, when reading key-values, data will be paginated and each request can read up to 100 key-values. When writing key-values, each request can create or update one key-value. This is supported through the REST API, App Configuration SDKs, and configuration providers. Configuration stores in the Free tier are limited to 1,000 requests per day. Configuration stores in the Standard tier may experience temporary throttling when the request rate exceeds 20,000 requests per hour.
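For instance, listing key-values with the Azure CLI (a sketch with a hypothetical store name) retrieves the data in pages of up to 100 key-values per request behind the scenes:

```console
# Sketch: list every key-value in the store (hypothetical store name).
az appconfig kv list --name my-config-store --all
```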
-
-When a store reaches its limit, it will return HTTP status code 429 for all requests made until the time period expires. The `retry-after-ms` header in the response gives a suggested wait time (in milliseconds) before retrying the request.
-
-If your application regularly experiences HTTP status code 429 responses, consider redesigning it to reduce the number of requests made. For more information, see [Reduce requests made to App Configuration](./howto-best-practices.md#reduce-requests-made-to-app-configuration)
-
-## My application receives HTTP status code 429 responses. Why?
-
-You'll receive an HTTP status code 429 response under these circumstances:
-
-* Exceeding the daily request limit for a store in the Free tier.
-* Temporary throttling due to a high request rate for a store in the Standard tier.
-* Excessive bandwidth usage.
-* Attempting to create or modify a key when the storage quota is exceeded.
-
-Check the body of the 429 response for the specific reason why the request failed.
-
-## How can I receive announcements on new releases and other information related to App Configuration?
-
-Subscribe to our [GitHub announcements repo](https://github.com/Azure/AppConfiguration-Announcements).
-
-## How can I report an issue or give a suggestion?
-
-You can reach us directly on [GitHub](https://github.com/Azure/AppConfiguration/issues).
-
-## Next steps
-
-* [About Azure App Configuration](./overview.md)
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/howto-best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-best-practices.md
@@ -88,7 +88,7 @@ App Configuration is regional service. For applications with different configura
## Client Applications in App Configuration
-Excessive requests to App Configuration can result in throttling or overage charges. Applications take advantage of the caching and intelligent refreshing currently available to optimize the number of requests they send. This process can be mirrored in high volume client applications by avoiding direct connections to the configuration store. Instead, client applications connect to a custom service, and this service communicates with the configuration store. This proxy solution can ensure the client applications do not approach the throttling limit on the configuration store. For more information on throttling, see [the FAQ](./faq.md#are-there-any-limits-on-the-number-of-requests-made-to-app-configuration).
+Excessive requests to App Configuration can result in throttling or overage charges. Applications take advantage of the caching and intelligent refreshing currently available to optimize the number of requests they send. This process can be mirrored in high volume client applications by avoiding direct connections to the configuration store. Instead, client applications connect to a custom service, and this service communicates with the configuration store. This proxy solution can ensure the client applications do not approach the throttling limit on the configuration store. For more information on throttling, see [the FAQ](./faq.yml#are-there-any-limits-on-the-number-of-requests-made-to-app-configuration).
## Next steps
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-baseline.md
@@ -436,7 +436,7 @@ For more information, see the following references:
- [Authorize access to Azure App Configuration using Azure Active Directory](concept-enable-rbac.md) -- [App Configuration Data Encryption](faq.md#does-app-configuration-encrypt-my-data)
+- [App Configuration Data Encryption](faq.yml#does-app-configuration-encrypt-my-data)
- [Azure Role Based Access Control (RBAC)](../role-based-access-control/overview.md)
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/use-feature-flags-dotnet-core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/use-feature-flags-dotnet-core.md
@@ -216,7 +216,7 @@ By convention, the `FeatureManagement` section of this JSON document is used for
## Use dependency injection to access IFeatureManager
-For some operations, such as manually checking feature flag values, you need to get an instance of [IFeatureManager](/dotnet/api/microsoft.featuremanagement.ifeaturemanage). In ASP.NET Core MVC, you can access the feature manager `IFeatureManager` through dependency injection. In the following example, an argument of type `IFeatureManager` is added to the signature of the constructor for a controller. The runtime automatically resolves the reference and provides an of the interface when calling the constructor. If you're using an application template in which the controller already has one or more dependency injection arguments in the constructor, such as `ILogger`, you can just add `IFeatureManager` as an additional argument:
+For some operations, such as manually checking feature flag values, you need to get an instance of [IFeatureManager](https://docs.microsoft.com/dotnet/api/microsoft.featuremanagement.ifeaturemanager?view=azure-dotnet-preview). In ASP.NET Core MVC, you can access the feature manager `IFeatureManager` through dependency injection. In the following example, an argument of type `IFeatureManager` is added to the signature of the constructor for a controller. The runtime automatically resolves the reference and provides an instance of the interface when calling the constructor. If you're using an application template in which the controller already has one or more dependency injection arguments in the constructor, such as `ILogger`, you can just add `IFeatureManager` as an additional argument:
### [.NET 5.x](#tab/core5x)
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/create-data-controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller.md
@@ -7,7 +7,7 @@
Previously updated : 09/22/2020 Last updated : 02/11/2021
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/create-postgresql-hyperscale-server-group-kubernetes-native-tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-kubernetes-native-tools.md
@@ -39,14 +39,14 @@ data:
password: <your base64 encoded password> kind: Secret metadata:
- name: example-login-secret
+ name: pg1-login-secret
type: Opaque apiVersion: arcdata.microsoft.com/v1alpha1 kind: postgresql-12 metadata: generation: 1
- name: example
+ name: pg1
spec: engine: extensions:
@@ -102,7 +102,7 @@ echo '<your string to encode here>' | base64
### Customizing the name
-The template has a value of 'example' for the name attribute. You can change this but it must be characters that follow the DNS naming standards. You must also change the name of the secret to match. For example, if you change the name of the PostgreSQL Hyperscale server group to 'postgres1', you must change the name of the secret from 'example-login-secret' to 'postgres1-login-secret'
+The template has a value of 'pg1' for the name attribute. You can change this but it must be characters that follow the DNS naming standards. You must also change the name of the secret to match. For example, if you change the name of the PostgreSQL Hyperscale server group to 'pg2', you must change the name of the secret from 'pg1-login-secret' to 'pg2-login-secret'
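As a quick check, a sketch assuming a hypothetical manifest file `pg2.yaml` and the `arc` namespace used elsewhere in this article:

```console
# Hypothetical file and namespace names; apply the edited manifest, then confirm the renamed secret exists.
kubectl create -n arc -f pg2.yaml
kubectl get secret pg2-login-secret -n arc
```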
### Customizing the engine version
@@ -147,10 +147,10 @@ kubectl create -n <your target namespace> -f <path to your yaml file>
Creating the PostgreSQL Hyperscale server group will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands: > [!NOTE]
-> The example commands below assume that you created a PostgreSQL Hyperscale server group named 'postgres1' and Kubernetes namespace with the name 'arc'. If you used a different namespace/PostgreSQL Hyperscale server group name, you can replace 'arc' and 'postgres1' with your names.
+> The example commands below assume that you created a PostgreSQL Hyperscale server group named 'pg1' and Kubernetes namespace with the name 'arc'. If you used a different namespace/PostgreSQL Hyperscale server group name, you can replace 'arc' and 'pg1' with your names.
```console
-kubectl get postgresql-12/postgres1 --namespace arc
+kubectl get postgresql-12/pg1 --namespace arc
``` ```console
@@ -163,7 +163,7 @@ You can also check on the creation status of any particular pod by running a com
kubectl describe po/<pod name> --namespace arc #Example:
-#kubectl describe po/postgres1-0 --namespace arc
+#kubectl describe po/pg1-0 --namespace arc
``` ## Troubleshooting creation problems
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/create-postgresql-hyperscale-server-group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group.md
@@ -7,7 +7,7 @@
Previously updated : 09/22/2020 Last updated : 02/11/2021
@@ -73,9 +73,16 @@ azdata arc postgres server create -n <name> --workers <# worker nodes with #>=2>
#azdata arc postgres server create -n postgres01 --workers 2 ```
+> [!IMPORTANT]
+> - The storage class used for backups (_--storage-class-backups -scb_) defaults to the data controller's data storage class if it is not provided.
+> - To restore a server group to a separate server group (for example, a point in time restore), you must configure your server group to use PVCs with the ReadWriteMany access mode. This must be configured when the server group is created and cannot be changed afterward. For more details read:
+> - [Create a server group that is ready for backups and restores](backup-restore-postgresql-hyperscale.md#create-a-server-group-that-is-ready-for-backups-and-restores)
+> - [Limitations of Azure Arc enabled PostgreSQL Hyperscale](limitations-postgresql-hyperscale.md)
++ > [!NOTE] > - **There are other command-line parameters available. See the complete list of options by running `azdata arc postgres server create --help`.**
-> - The storage class used for backups (_--storage-class-backups -scb_) defaults to the data controller's data storage class if it is not provided.
+>
> - The unit accepted by the --volume-size-* parameters is a Kubernetes resource quantity (an integer followed by one of these SI suffixes (T, G, M, K, m) or their power-of-two equivalents (Ti, Gi, Mi, Ki)).
> - Names must be 12 characters or fewer in length and conform to DNS naming conventions.
> - You will be prompted to enter the password for the _postgres_ standard administrative user. You can skip the interactive prompt by setting the `AZDATA_PASSWORD` session environment variable before you run the create command.
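For illustration, a minimal sketch that sets the backup storage class explicitly (hypothetical values; run `azdata arc postgres server create --help` for the authoritative parameter list):

```console
# Sketch: create a two-worker server group and pin the backup storage class (hypothetical storage class name).
azdata arc postgres server create -n postgres01 --workers 2 --storage-class-backups default
```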
@@ -194,4 +201,4 @@ psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655
- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-postgresql-hyperscale-server-group.md) - [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Expanding Persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)-- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
+- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/create-sql-managed-instance-using-kubernetes-native-tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-sql-managed-instance-using-kubernetes-native-tools.md
@@ -7,7 +7,7 @@
Previously updated : 09/22/2020 Last updated : 02/11/2021
@@ -40,13 +40,13 @@ data:
username: <your base64 encoded user name. 'sa' is not allowed> kind: Secret metadata:
- name: example-login-secret
+ name: sql1-login-secret
type: Opaque apiVersion: sql.arcdata.microsoft.com/v1alpha1 kind: sqlmanagedinstance metadata:
- name: example
+ name: sql1
spec: limits: memory: 4Gi
@@ -57,15 +57,9 @@ spec:
service: type: LoadBalancer storage:
- backups:
- className: default
- size: 5Gi
data: className: default size: 5Gi
- datalogs:
- className: default
- size: 5Gi
logs: className: default size: 1Gi
@@ -102,7 +96,7 @@ echo '<your string to encode here>' | base64
### Customizing the name
-The template has a value of 'example' for the name attribute. You can change this but it must be characters that follow the DNS naming standards. You must also change the name of the secret to match. For example, if you change the name of the SQL managed instance to 'sql1', you must change the name of the secret from 'example-login-secret' to 'sql1-login-secret'
+The template has a value of 'sql1' for the name attribute. You can change this but it must be characters that follow the DNS naming standards. You must also change the name of the secret to match. For example, if you change the name of the SQL managed instance to 'sql2', you must change the name of the secret from 'sql1-login-secret' to 'sql2-login-secret'
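As a sanity check, a sketch with hypothetical names that verifies the renamed secret exists after the edited manifest is applied:

```console
# Hypothetical names; confirm the secret referenced by the renamed instance is present.
kubectl get secret sql2-login-secret -n arc
```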
### Customizing the resource requirements
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/limitations-postgresql-hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/limitations-postgresql-hyperscale.md
@@ -0,0 +1,66 @@
+
+ Title: Limitations of Azure Arc enabled PostgreSQL Hyperscale
+description: Limitations of Azure Arc enabled PostgreSQL Hyperscale
++++++ Last updated : 02/11/2021+++
+# Limitations of Azure Arc enabled PostgreSQL Hyperscale
+
+This article describes limitations of Azure Arc enabled PostgreSQL Hyperscale.
++
+## Backup and restore
+
+- Point in time restore (restoring to a specific date and time) to the same server group is not supported. When doing a point in time restore, you must restore to a different server group that you deployed before the restore. After restoring to the new server group, you may delete the server group of origin.
+- Restoring the entire content of a backup (as opposed to restoring up to a specific point in time) to the same server group is supported for PostgreSQL version 12. It is not supported for PostgreSQL version 11 due to a limitation of the PostgreSQL engine with timelines. To restore the entire content of a backup for a PostgreSQL server group of version 11, you must restore it to a different server group.
++
+## Databases
+
+Hosting more than one database in a server group is not supported.
++
+## Security
+
+Managing users and roles is not supported. For now, continue to use the postgres standard user.
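For example, a connection sketch that reuses the placeholder endpoint shown elsewhere in this digest (substitute your own server group IP address, port, and password):

```console
# Sketch: connect with the standard postgres administrative user (placeholder endpoint and password).
psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655
```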
+
+## Roles and responsibilities
+
+Roles and responsibilities are split differently between Microsoft and its customers for Azure PaaS services (platform as a service) and for Azure hybrid services (like Azure Arc enabled PostgreSQL Hyperscale).
+
+### Frequently asked questions
+
+The table below summarizes answers to frequently asked questions regarding support roles and responsibilities.
+
+| Question | Azure Platform As A Service (PaaS) | Azure Arc hybrid services |
+|:-|:-:|:-:|
+| Who provides the infrastructure? | Microsoft | Customer |
+| Who provides the software?* | Microsoft | Microsoft |
+| Who does the operations? | Microsoft | Customer |
+| Does Microsoft provide SLAs? | Yes | No |
+| Who's in charge of SLAs? | Microsoft | Customer |
+
+\* Azure services
+
+__Why doesn't Microsoft provide SLAs on Azure Arc hybrid services?__ Because Microsoft neither owns nor operates the infrastructure; customers do.
+
+## Next steps
+
+- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
+
+- **Create your own.** Follow these steps to deploy on your own Kubernetes cluster:
+ 1. [Install the client tools](install-client-tools.md)
+ 2. [Create the Azure Arc data controller](create-data-controller.md)
+ 3. [Create an Azure Database for PostgreSQL Hyperscale server group on Azure Arc](create-postgresql-hyperscale-server-group.md)
+
+- **Learn**
+ - [Read more about Azure Arc enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services)
+ - [Read about Azure Arc](https://aka.ms/azurearc)
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/postgresql-hyperscale-server-group-placement-on-kubernetes-cluster-nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/postgresql-hyperscale-server-group-placement-on-kubernetes-cluster-nodes.md
@@ -7,29 +7,29 @@
Previously updated : 09/22/2020 Last updated : 02/11/2021 # Azure Arc enabled PostgreSQL Hyperscale server group placement
-In this article we are taking an example to illustrate how the PostgreSQL instances of Azure Arc enabled PostgreSQL Hyperscale server group are placed on the physical nodes of the Kubernetes cluster that hosts them.
+In this article, we are taking an example to illustrate how the PostgreSQL instances of Azure Arc enabled PostgreSQL Hyperscale server group are placed on the physical nodes of the Kubernetes cluster that hosts them.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Configuration
-In this example we are using an Azure Kubernetes Service (AKS) cluster that has four physical nodes.
+In this example, we are using an Azure Kubernetes Service (AKS) cluster that has four physical nodes.
:::image type="content" source="media/migrate-postgresql-data-into-postgresql-hyperscale-server-group/1_cluster_portal.png" alt-text="4 node AKS cluster in Azure portal":::
-List the physical nodes of the Kubernetes cluster by running the command:
+List the physical nodes of the Kubernetes cluster. Run the command:
```console kubectl get nodes ```
-Which shows the four physical nodes inside the Kubernetes cluster:
+`kubectl` returns four physical nodes inside the Kubernetes cluster:
```output NAME STATUS ROLES AGE VERSION
@@ -51,22 +51,22 @@ List the pods with the command:
```console kubectl get pods -n arc3 ```
-Which produces the following output:
+`kubectl` returns:
```output NAME READY STATUS RESTARTS AGE …
-postgres01-0 3/3 Running 0 9h
-postgres01-1 3/3 Running 0 9h
-postgres01-2 3/3 Running 0 9h
+postgres01c-0 3/3 Running 0 9h
+postgres01w-0 3/3 Running 0 9h
+postgres01w-1 3/3 Running 0 9h
```
-Each of those pods host a PostgreSQL instance. Together they form the Azure Arc enabled PostgreSQL Hyperscale server group:
+Each of those pods hosts a PostgreSQL instance. Together, the pods form the Azure Arc enabled PostgreSQL Hyperscale server group:
```output Pod name Role in the server group
-postgres01-0 Coordinator
-postgres01-1 Worker
-postgres01-2 Worker
+postgres01c-0 Coordinator
+postgres01w-0 Worker
+postgres01w-1 Worker
``` ## Placement
@@ -74,13 +74,13 @@ Let's look at how Kubernetes places the pods of the server group. Describe eac
For example, for the Coordinator, run the following command: ```console
-kubectl describe pod postgres01-0 -n arc3
+kubectl describe pod postgres01c-0 -n arc3
```
-Which produces the following output:
+`kubectl` returns:
```output
-Name: postgres01-0
+Name: postgres01c-0
Namespace: arc3 Priority: 0 Node: aks-agentpool-42715708-vmss000000
@@ -98,10 +98,10 @@ As we run this command for each of the pods, we summarize the current placement
And note also, in the description of the pods, the names of the containers that each pod hosts. For example, for the second worker, run the following command: ```console
-kubectl describe pod postgres01-2 -n arc3
+kubectl describe pod postgres01w-1 -n arc3
```
-Which produces the following output:
+`kubectl` returns:
```output …
@@ -128,7 +128,7 @@ The architecture looks like:
:::image type="content" source="media/migrate-postgresql-data-into-postgresql-hyperscale-server-group/3_pod_placement.png" alt-text="3 pods each placed on separate nodes":::
-It means that, at this point, each PostgreSQL instance constituting the Azure Arc enabled PostgreSQL Hyperscale server group is hosted on specific physical host within the Kubernetes container. This is the best configuration to help get the most performance out of the Azure Arc enabled PostgreSQL Hyperscale server group as each role (coordinator and workers) uses the resources of each physical node. Those resources are not shared among several PostgreSQL roles.
+It means that, at this point, each PostgreSQL instance constituting the Azure Arc enabled PostgreSQL Hyperscale server group is hosted on a specific physical host within the Kubernetes cluster. This configuration offers the best performance for the Azure Arc enabled PostgreSQL Hyperscale server group, as each role (coordinator and workers) uses the resources of its own physical node. Those resources are not shared among several PostgreSQL roles.
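As an aside, instead of describing each pod one by one, a single command such as the following (a sketch using the namespace from this example) also shows which node hosts each pod:

```console
# Sketch: the wide output adds a NODE column that shows pod placement.
kubectl get pods -n arc3 -o wide
```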
## Scale out Azure Arc enabled PostgreSQL Hyperscale
@@ -169,23 +169,23 @@ kubectl get pods -n arc3
```output NAME READY STATUS RESTARTS AGE …
-postgres01-0 3/3 Running 0 11h
-postgres01-1 3/3 Running 0 11h
-postgres01-2 3/3 Running 0 11h
-postgres01-3 3/3 Running 0 5m2s
+postgres01c-0 3/3 Running 0 11h
+postgres01w-0 3/3 Running 0 11h
+postgres01w-1 3/3 Running 0 11h
+postgres01w-2 3/3 Running 0 5m2s
``` And describe the new pod to identify on which of the physical nodes of the Kubernetes cluster it is hosted. Run the command: ```console
-kubectl describe pod postgres01-3 -n arc3
+kubectl describe pod postgres01w-2 -n arc3
``` To identify the name of the hosting node: ```output
-Name: postgres01-3
+Name: postgres01w-2
Namespace: arc3 Priority: 0 Node: aks-agentpool-42715708-vmss000000
@@ -200,7 +200,7 @@ The placement of the PostgreSQL instances on the physical nodes of the cluster i
|Worker|postgres01-2|aks-agentpool-42715708-vmss000003
|Worker|postgres01-3|aks-agentpool-42715708-vmss000000
-And notice that the pod of the new worker (postgres01-3) has been placed on the same node as the coordinator.
+And notice that the pod of the new worker (postgres01w-2) has been placed on the same node as the coordinator.
The architecture looks like:
@@ -215,19 +215,19 @@ Using the same commands as above; we see what each physical node is hosting:
|Other pods names\* |Usage|Kubernetes physical node hosting the pods |-|-|-
-|bootstrapper-jh48b|This is a service which handles incoming requests to create, edit, and delete custom resources such as SQL managed instances, PostgreSQL Hyperscale server groups, and data controllers|aks-agentpool-42715708-vmss000003
+|bootstrapper-jh48b|A service which handles incoming requests to create, edit, and delete custom resources such as SQL managed instances, PostgreSQL Hyperscale server groups, and data controllers|aks-agentpool-42715708-vmss000003
|control-gwmbs||aks-agentpool-42715708-vmss000002
-|controldb-0|This is the controller data store which is used to store configuration and state for the data controller.|aks-agentpool-42715708-vmss000001
-|controlwd-zzjp7|This is the controller "watch dog" service that keeps an eye on the availability of the data controller.|aks-agentpool-42715708-vmss000000
-|logsdb-0|This is an Elastic Search instance that is used to store all the logs collected across all the Arc data services pods. Elasticsearch, receives data from `Fluentbit` container of each pod|aks-agentpool-42715708-vmss000003
-|logsui-5fzv5|This is a Kibana instance that sits on top of the Elastic Search database to present a log analytics GUI.|aks-agentpool-42715708-vmss000003
-|metricsdb-0|This is an InfluxDB instance that is used to store all the metrics collected across all the Arc data services pods. InfluxDB, receives data from the `Telegraf` container of each pod|aks-agentpool-42715708-vmss000000
-|metricsdc-47d47|This is a daemonset deployed on all the Kubernetes nodes in the cluster to collect node-level metrics about the nodes.|aks-agentpool-42715708-vmss000002
-|metricsdc-864kj|This is a daemonset deployed on all the Kubernetes nodes in the cluster to collect node-level metrics about the nodes.|aks-agentpool-42715708-vmss000001
-|metricsdc-l8jkf|This is a daemonset deployed on all the Kubernetes nodes in the cluster to collect node-level metrics about the nodes.|aks-agentpool-42715708-vmss000003
-|metricsdc-nxm4l|This is a daemonset deployed on all the Kubernetes nodes in the cluster to collect node-level metrics about the nodes.|aks-agentpool-42715708-vmss000000
-|metricsui-4fb7l|This is a Grafana instance that sits on top of the InfluxDB database to present a monitoring dashboard GUI.|aks-agentpool-42715708-vmss000003
-|mgmtproxy-4qppp|This is a web application proxy layer that sits in front of the Grafana and Kibana instances.|aks-agentpool-42715708-vmss000002
+|controldb-0|The controller data store which is used to store configuration and state for the data controller.|aks-agentpool-42715708-vmss000001
+|controlwd-zzjp7|The controller "watch dog" service that keeps an eye on the availability of the data controller.|aks-agentpool-42715708-vmss000000
+|logsdb-0|An Elasticsearch instance that is used to store all the logs collected across all the Arc data services pods. Elasticsearch receives data from the `Fluentbit` container of each pod|aks-agentpool-42715708-vmss000003
+|logsui-5fzv5|A Kibana instance that sits on top of the Elastic Search database to present a log analytics GUI.|aks-agentpool-42715708-vmss000003
+|metricsdb-0|An InfluxDB instance that is used to store all the metrics collected across all the Arc data services pods. InfluxDB receives data from the `Telegraf` container of each pod|aks-agentpool-42715708-vmss000000
+|metricsdc-47d47|A daemon set deployed on all the Kubernetes nodes in the cluster to collect node-level metrics about the nodes.|aks-agentpool-42715708-vmss000002
+|metricsdc-864kj|A daemon set deployed on all the Kubernetes nodes in the cluster to collect node-level metrics about the nodes.|aks-agentpool-42715708-vmss000001
+|metricsdc-l8jkf|A daemon set deployed on all the Kubernetes nodes in the cluster to collect node-level metrics about the nodes.|aks-agentpool-42715708-vmss000003
+|metricsdc-nxm4l|A daemon set deployed on all the Kubernetes nodes in the cluster to collect node-level metrics about the nodes.|aks-agentpool-42715708-vmss000000
+|metricsui-4fb7l|A Grafana instance that sits on top of the InfluxDB database to present a monitoring dashboard GUI.|aks-agentpool-42715708-vmss000003
+|mgmtproxy-4qppp|A web application proxy layer that sits in front of the Grafana and Kibana instances.|aks-agentpool-42715708-vmss000002
> \* The suffix on pod names will vary on other deployments. Also, we are listing here only the pods hosted inside the Kubernetes namespace of the Azure Arc Data Controller.
@@ -235,7 +235,7 @@ The architecture looks like:
:::image type="content" source="media/migrate-postgresql-data-into-postgresql-hyperscale-server-group/5_full_list_of_pods.png" alt-text="All pods in namespace on various nodes":::
-This means that the coordinator nodes (Pod 1) of the Azure Arc enabled Postgres Hyperscale server group shares the same physical resources as the third worker node (Pod 4) of the server group. That is acceptable as the coordinator node is typically using very little resources in comparison to what a Worker node may be using. From this you may infer that you should carefully chose:
+As described above, the coordinator node (Pod 1) of the Azure Arc enabled Postgres Hyperscale server group shares the same physical resources as the third worker node (Pod 4) of the server group. That is acceptable because the coordinator node typically uses very few resources in comparison to what a worker node may be using. For this reason, carefully choose:
- the size of the Kubernetes cluster and the characteristics of each of its physical nodes (memory, vCore)
- the number of physical nodes inside the Kubernetes cluster
- the applications or workloads you host on the Kubernetes cluster.
@@ -318,38 +318,38 @@ kubectl get pods -n arc3
NAME READY STATUS RESTARTS AGE …
-postgres01-0 3/3 Running 0 13h
-postgres01-1 3/3 Running 0 13h
-postgres01-2 3/3 Running 0 13h
-postgres01-3 3/3 Running 0 179m
-postgres01-4 3/3 Running 0 3m13s
+postgres01c-0 3/3 Running 0 13h
+postgres01w-0 3/3 Running 0 13h
+postgres01w-1 3/3 Running 0 13h
+postgres01w-2 3/3 Running 0 179m
+postgres01w-3 3/3 Running 0 3m13s
``` The shape of the server group is now: |Server group role|Server group pod |-|--
-|Coordinator|postgres01-0
-|Worker|postgres01-1
-|Worker|postgres01-2
-|Worker|postgres01-3
-|Worker|postgres01-4
+|Coordinator|postgres01c-0
+|Worker|postgres01w-0
+|Worker|postgres01w-1
+|Worker|postgres01w-2
+|Worker|postgres01w-3
-Let's describe the postgres01-4 pod to identify in what physical node it is hosted:
+Let's describe the postgres01w-3 pod to identify on which physical node it is hosted:
```console
-kubectl describe pod postgres01-4 -n arc3
+kubectl describe pod postgres01w-3 -n arc3
``` And observe on what node it runs: |Server group role|Server group pod|Node |-|--|
-|Coordinator|postgres01-0|aks-agentpool-42715708-vmss000000
-|Worker|postgres01-1|aks-agentpool-42715708-vmss000002
-|Worker|postgres01-2|aks-agentpool-42715708-vmss000003
-|Worker|postgres01-3|aks-agentpool-42715708-vmss000000
-|Worker|postgres01-4|aks-agentpool-42715708-vmss000004
+|Coordinator|postgres01c-0|aks-agentpool-42715708-vmss000000
+|Worker|postgres01w-0|aks-agentpool-42715708-vmss000002
+|Worker|postgres01w-1|aks-agentpool-42715708-vmss000003
+|Worker|postgres01w-2|aks-agentpool-42715708-vmss000000
+|Worker|postgres01w-3|aks-agentpool-42715708-vmss000004
And the architecture looks like:
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
@@ -7,7 +7,7 @@
Previously updated : 12/09/2020 Last updated : 02/11/2021 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc enabled data services so that I can leverage the capability of the feature.
@@ -16,6 +16,12 @@
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
+## January 2021
+
+### New capabilities and features
+
+Azure Data CLI (`azdata`) version number: 20.3.0. Download at [https://aka.ms/azdata](https://aka.ms/azdata).
+ ## December 2020 ### New capabilities & features
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/data/what-is-azure-arc-enabled-postgres-hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/what-is-azure-arc-enabled-postgres-hyperscale.md
@@ -7,7 +7,7 @@
Previously updated : 09/22/2020 Last updated : 02/11/2021
@@ -43,13 +43,13 @@ This is the hyperscale form factor of the Postgres database engine available as
This is the hyperscale form factor of the Postgres database engine that is available with Azure Arc enabled data services. It is also powered by the Citus extension that enables the hyperscale experience. In this form factor, our customers provide the infrastructure that hosts the systems and operate them. ## Next steps-- **Create**
- > **Just want to try things out? You do not have a Kubernetes cluster available? We provide you with a sandbox:**
- > Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
-
- - **Create:**
- - [Install the client tools](install-client-tools.md)
- - [Create the Azure Arc data controller](create-data-controller.md) (requires installing the client tools first)
- - [Create an Azure Database for PostgreSQL Hyperscale server group on Azure Arc](create-postgresql-hyperscale-server-group.md) (Requires creation of an Azure Arc data controller first.)
-- [**Read more about Azure Arc enabled data services**](https://azure.microsoft.com/services/azure-arc/hybrid-data-services)-- [**Read about Azure Arc**](https://aka.ms/azurearc)
+- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
+
+- **Create your own.** Follow these steps to deploy on your own Kubernetes cluster:
+ 1. [Install the client tools](install-client-tools.md)
+ 2. [Create the Azure Arc data controller](create-data-controller.md)
+ 3. [Create an Azure Database for PostgreSQL Hyperscale server group on Azure Arc](create-postgresql-hyperscale-server-group.md)
+
+- **Learn**
+ - [Read more about Azure Arc enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services)
+ - [Read about Azure Arc](https://aka.ms/azurearc)
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/connect-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/connect-cluster.md
@@ -1,18 +1,18 @@
Title: "Connect an Azure Arc-enabled Kubernetes cluster (Preview)"
+ Title: "Connect an Azure Arc enabled Kubernetes cluster (Preview)"
# Previously updated : 05/19/2020 Last updated : 02/09/2021
-description: "Connect an Azure Arc-enabled Kubernetes cluster with Azure Arc"
+description: "Connect an Azure Arc enabled Kubernetes cluster with Azure Arc"
keywords: "Kubernetes, Arc, Azure, K8s, containers"
-# Connect an Azure Arc-enabled Kubernetes cluster (Preview)
+# Connect an Azure Arc enabled Kubernetes cluster (Preview)
This article covers the process of connecting any Cloud Native Computing Foundation (CNCF) certified Kubernetes cluster, such as AKS-engine on Azure, AKS-engine on Azure Stack Hub, GKE, EKS, and VMware vSphere cluster to Azure Arc.
@@ -23,11 +23,11 @@ Verify you have prepared the following prerequisites:
* An up-and-running Kubernetes cluster. If you do not have an existing Kubernetes cluster, you can use one of the following guides to create a test cluster: * Create a Kubernetes cluster using [Kubernetes in Docker (kind)](https://kind.sigs.k8s.io/). * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes).
-* A kubeconfig file to access the cluster and cluster-admin role on the cluster for deployment of Arc-enabled Kubernetes agents.
+* A kubeconfig file to access the cluster and cluster-admin role on the cluster for deployment of Arc enabled Kubernetes agents.
* The user or service principal used with `az login` and `az connectedk8s connect` commands must have the 'Read' and 'Write' permissions on the 'Microsoft.Kubernetes/connectedclusters' resource type. The "Kubernetes Cluster - Azure Arc Onboarding" role has these permissions and can be used for role assignments on the user or service principal. * Helm 3 for the onboarding the cluster using a connectedk8s extension. [Install the latest release of Helm 3](https://helm.sh/docs/intro/install) to meet this requirement.
-* Azure CLI version 2.15+ for installing the Azure Arc-enabled Kubernetes CLI extensions. [Install Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest&preserve-view=true) or update to the latest version.
-* Install the Arc-enabled Kubernetes CLI extensions:
+* Azure CLI version 2.15+ for installing the Azure Arc enabled Kubernetes CLI extensions. [Install Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest&preserve-view=true) or update to the latest version.
+* Install the Arc enabled Kubernetes CLI extensions:
* Install the `connectedk8s` extension, which helps you connect Kubernetes clusters to Azure:
@@ -68,7 +68,7 @@ Azure Arc agents require the following protocols/ports/outbound URLs to function
| `https://mcr.microsoft.com` | Required to pull container images for Azure Arc agents. | | `https://eus.his.arc.azure.com`, `https://weu.his.arc.azure.com` | Required to pull system-assigned managed identity certificates. |
-## Register the two providers for Azure Arc-enabled Kubernetes:
+## Register the two providers for Azure Arc enabled Kubernetes:
```console az provider register --namespace Microsoft.Kubernetes
@@ -109,14 +109,14 @@ eastus AzureArcTest
Next, we will connect our Kubernetes cluster to Azure using `az connectedk8s connect`: 1. Verify connectivity to your Kubernetes cluster via one of the following:
- 1. `KUBECONFIG`
- 1. `~/.kube/config`
- 1. `--kube-config`
+ * `KUBECONFIG`
+ * `~/.kube/config`
+ * `--kube-config`
1. Deploy Azure Arc agents for Kubernetes using Helm 3 into the `azure-arc` namespace:
-```console
-az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest
-```
+ ```console
+ az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest
+ ```
**Output:**
@@ -165,14 +165,13 @@ Name Location ResourceGroup
AzureArcTest1 eastus AzureArcTest ```
-You can also view this resource on the [Azure portal](https://portal.azure.com/). Open the portal in your browser and navigate to the resource group and the Azure Arc-enabled Kubernetes resource, based on the resource name and resource group name inputs used earlier in the `az connectedk8s connect` command.
-
+You can also view this resource on the [Azure portal](https://portal.azure.com/). Open the portal in your browser and navigate to the resource group and the Azure Arc enabled Kubernetes resource, based on the resource name and resource group name inputs used earlier in the `az connectedk8s connect` command.
> [!NOTE]
-> After onboarding the cluster, it takes around 5 to 10 minutes for the cluster metadata (cluster version, agent version, number of nodes, etc.) to surface on the overview page of the Azure Arc-enabled Kubernetes resource in Azure portal.
+> After onboarding the cluster, it takes around 5 to 10 minutes for the cluster metadata (cluster version, agent version, number of nodes, etc.) to surface on the overview page of the Azure Arc enabled Kubernetes resource in Azure portal.
## Connect using an outbound proxy server
-If your cluster is behind an outbound proxy server, Azure CLI and the Arc-enabled Kubernetes agents need to route their requests via the outbound proxy server:
+If your cluster is behind an outbound proxy server, Azure CLI and the Arc enabled Kubernetes agents need to route their requests via the outbound proxy server:
1. Check the version of `connectedk8s` extension installed on your machine:
@@ -207,13 +206,13 @@ If your cluster is behind an outbound proxy server, Azure CLI and the Arc-enable
``` > [!NOTE]
-> 1. Specifying `excludedCIDR` under `--proxy-skip-range` is important to ensure in-cluster communication is not broken for the agents.
-> 2. While `--proxy-http`, `--proxy-https`, and `--proxy-skip-range` are expected for most outbound proxy environments, `--proxy-cert` is only required if trusted certificates from proxy need to be injected into trusted certificate store of agent pods.
-> 3. The above proxy specification is currently applied only for Arc agents and not for the flux pods used in sourceControlConfiguration. The Arc-enabled Kubernetes team is actively working on this feature and it will be available soon.
+> * Specifying `excludedCIDR` under `--proxy-skip-range` is important to ensure in-cluster communication is not broken for the agents.
+> * While `--proxy-http`, `--proxy-https`, and `--proxy-skip-range` are expected for most outbound proxy environments, `--proxy-cert` is only required if trusted certificates from proxy need to be injected into trusted certificate store of agent pods.
+> * The above proxy specification is currently applied only for Arc agents and not for the flux pods used in sourceControlConfiguration. The Arc enabled Kubernetes team is actively working on this feature and it will be available soon.
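For reference, a hedged sketch of such a connect command using only the flags named in the note above (placeholder proxy values; see the article for the exact, current syntax):

```console
# Sketch with placeholder values; substitute your proxy endpoints and the CIDR ranges to exclude.
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest \
  --proxy-https https://<proxy-server-ip-or-name>:<port> \
  --proxy-http http://<proxy-server-ip-or-name>:<port> \
  --proxy-skip-range <excludedCIDR>
```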
## Azure Arc agents for Kubernetes
-Azure Arc-enabled Kubernetes deploys a few operators into the `azure-arc` namespace. You can view these deployments and pods using:
+Azure Arc enabled Kubernetes deploys a few operators into the `azure-arc` namespace. You can view these deployments and pods using:
```console kubectl -n azure-arc get deployments,pods
@@ -241,7 +240,7 @@ pod/metrics-agent-58b765c8db-n5l7k 2/2 Running 0 16h
pod/resource-sync-agent-5cf85976c7-522p5 3/3 Running 0 16h ```
-Azure Arc-enabled Kubernetes consists of a few agents (operators) that run in your cluster deployed to the `azure-arc` namespace.
+Azure Arc enabled Kubernetes consists of a few agents (operators) that run in your cluster deployed to the `azure-arc` namespace.
| Agents (Operators) | Description | | | |
@@ -250,7 +249,7 @@ Azure Arc-enabled Kubernetes consists of a few agents (operators) that run in yo
| `deployment.apps/metrics-agent` | Collects performance metrics of other Arc agents. | | `deployment.apps/cluster-metadata-operator` | Gathers cluster metadata, such as cluster version, node count, and Azure Arc agent version. | | `deployment.apps/resource-sync-agent` | Syncs the above mentioned cluster metadata to Azure. |
-| `deployment.apps/clusteridentityoperator` | Azure Arc-enabled Kubernetes currently supports system-assigned identity. `clusteridentityoperator` maintains the managed service identity (MSI) certificate used by other agents for communication with Azure. |
+| `deployment.apps/clusteridentityoperator` | Azure Arc enabled Kubernetes currently supports system-assigned identity. `clusteridentityoperator` maintains the managed service identity (MSI) certificate used by other agents for communication with Azure. |
| `deployment.apps/flux-logs-agent` | Collects logs from the flux operators deployed as a part of source control configuration. | ## Delete a connected cluster
@@ -258,13 +257,13 @@ Azure Arc-enabled Kubernetes consists of a few agents (operators) that run in yo
You can delete a `Microsoft.Kubernetes/connectedcluster` resource using the Azure CLI or Azure portal.
-* **Deletion using Azure CLI**: Use the following Azure CLI command to initiate deletion of the Azure Arc-enabled Kubernetes resource.
+* **Deletion using Azure CLI**: Use the following Azure CLI command to initiate deletion of the Azure Arc enabled Kubernetes resource.
```console az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest ``` This command removes the `Microsoft.Kubernetes/connectedCluster` resource and any associated `sourcecontrolconfiguration` resources in Azure. The Azure CLI uses `helm uninstall` to remove the agents running on the cluster as well.
-* **Deletion on Azure portal**: Deletion of the Azure Arc-enabled Kubernetes resource on Azure portal deletes the `Microsoft.Kubernetes/connectedcluster` resource and any associated `sourcecontrolconfiguration` resources in Azure, but it *does not* remove the agents running on the cluster.
+* **Deletion on Azure portal**: Deletion of the Azure Arc enabled Kubernetes resource on Azure portal deletes the `Microsoft.Kubernetes/connectedcluster` resource and any associated `sourcecontrolconfiguration` resources in Azure, but it *does not* remove the agents running on the cluster.
To remove the agents running on the cluster, run the following command:
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/create-onboarding-service-principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/create-onboarding-service-principal.md
@@ -1,27 +1,27 @@
Title: "Create an Azure Arc-enabled onboarding Service Principal (Preview)"
+ Title: "Create an Azure Arc enabled onboarding service principal (Preview)"
# Previously updated : 05/19/2020 Last updated : 02/09/2021
-description: "Create an Azure Arc-enabled onboarding Service Principal "
+description: "Create an Azure Arc enabled onboarding service principal "
keywords: "Kubernetes, Arc, Azure, containers"
-# Create an Azure Arc-enabled onboarding Service Principal (Preview)
+# Create an Azure Arc enabled onboarding service principal (Preview)
## Overview
-It is possible to use service principals having a role assignment with limited privileges for onboarding Kubernetes clusters to Azure Arc. This is useful in continuous integration and continuous deployment (CI/CD) pipelines like Azure Pipelines and GitHub Actions.
+You can onboard Kubernetes clusters to Azure Arc using service principals with limited-privilege role assignments. This capability is useful in continuous integration and continuous deployment (CI/CD) pipelines, like Azure Pipelines and GitHub Actions.
-The following steps provide a walkthrough on using service principals for onboarding Kubernetes clusters to Azure Arc.
+Walk through the following steps to learn how to use service principals for onboarding Kubernetes clusters to Azure Arc.
-## Create a new Service Principal
+## Create a new service principal
-Create a new Service Principal with an informative name. Note that this name must be unique for your Azure Active Directory tenant:
+Create a new service principal with an informative name that is unique for your Azure Active Directory tenant.
```console az ad sp create-for-RBAC --skip-assignment --name "https://azure-arc-for-k8s-onboarding"
@@ -41,16 +41,16 @@ az ad sp create-for-RBAC --skip-assignment --name "https://azure-arc-for-k8s-onb
## Assign permissions
-After creating the new Service Principal, assign the "Kubernetes Cluster - Azure Arc Onboarding" role to the newly created principal. This is a built-in Azure role with limited permissions, which only allows the principal to register clusters to Azure. The principal cannot update, delete, or modify any other clusters or resources within the subscription.
+Assign the "Kubernetes Cluster - Azure Arc Onboarding" role to the newly created service principal. This built-in Azure role with limited permissions only allows the principal to register clusters to Azure. The principal with this assigned role cannot update, delete, or modify any other clusters or resources within the subscription.
Given the limited abilities, customers can easily re-use this principal to onboard multiple clusters.
-Permissions may be further limited by passing in the appropriate `--scope` argument when assigning the role. This allows customers to restrict cluster registration. The following scenarios are supported by various `--scope` parameters:
+You can limit permissions further by passing in the appropriate `--scope` argument when assigning the role. This allows customers to restrict cluster registration. The following scenarios are supported by various `--scope` parameters:
| Resource | `scope` argument| Effect | | - | - | - |
-| Subscription | `--scope /subscriptions/0b1f6471-1bf0-4dda-aec3-111122223333` | Service principal can register any cluster in an existing Resource Group in the given subscription |
-| Resource Group | `--scope /subscriptions/0b1f6471-1bf0-4dda-aec3-111122223333/resourceGroups/myGroup` | Service principal can __only__ register clusters in the Resource Group `myGroup` |
+| Subscription | `--scope /subscriptions/0b1f6471-1bf0-4dda-aec3-111122223333` | Service principal can register any cluster in an existing Resource Group in the given subscription. |
+| Resource Group | `--scope /subscriptions/0b1f6471-1bf0-4dda-aec3-111122223333/resourceGroups/myGroup` | Service principal can __only__ register clusters in the Resource Group `myGroup`. |
```console az role assignment create \
@@ -74,9 +74,9 @@ az role assignment create \
} ```
-## Use Service Principal with the Azure CLI
+## Use service principal with the Azure CLI
-Reference the newly created Service Principal:
+Reference the newly created service principal with the following commands:
```azurecli az login --service-principal -u mySpnClientId -p mySpnClientSecret --tenant myTenantID
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/deploy-azure-iot-edge-workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/deploy-azure-iot-edge-workloads.md
@@ -3,7 +3,7 @@ Title: "Deploy Azure IoT Edge workloads (Preview)"
# Previously updated : 05/19/2020 Last updated : 02/10/2021
@@ -16,29 +16,35 @@ keywords: "Kubernetes, Arc, Azure, K8s, containers"
## Overview
-Azure Arc and Azure IoT Edge complement each other's capabilities quite well. Azure Arc provides mechanisms for cluster operators to the configure the foundational components of a cluster as well as apply and enforce cluster policies. And IoT Edge allows application operators to remotely deploy and manage the workloads at scale with convenient cloud ingestion and bi-directional communication primitives. The diagram below illustrates this:
+Azure Arc and Azure IoT Edge easily complement each other's capabilities.
+
+Azure Arc provides mechanisms for cluster operators to configure the foundational components of a cluster, and apply and enforce cluster policies.
+
+Azure IoT Edge allows application operators to remotely deploy and manage the workloads at scale with convenient cloud ingestion and bi-directional communication primitives.
+
+The diagram below illustrates Azure Arc and Azure IoT Edge's relationship:
![IoT Arc configuration](./media/edge-arc.png) ## Pre-requisites
-* [Register an IoT Edge device](../../iot-edge/quickstart-linux.md#register-an-iot-edge-device) and [deploy the simulated temperature sensor module](../../iot-edge/quickstart-linux.md#deploy-a-module). Be sure to note the device's connection string.
+* [Register an IoT Edge device](../../iot-edge/quickstart-linux.md#register-an-iot-edge-device) and [deploy the simulated temperature sensor module](../../iot-edge/quickstart-linux.md#deploy-a-module). Note the device's connection string for the *values.yaml* mentioned below.
* Use [IoT Edge's support for Kubernetes](https://aka.ms/edgek8sdoc) to deploy it via Azure Arc's Flux operator.
-* Download the [**values.yaml**](https://github.com/Azure/iotedge/blob/preview/iiot/kubernetes/charts/edge-kubernetes/values.yaml) file for IoT Edge Helm chart and replace the **deviceConnectionString** placeholder at the end of the file with the one noted in Step 1. You can set any other supported chart installation options as required. Create a namespace for the IoT Edge workload and create a secret in it:
+* Download the [*values.yaml*](https://github.com/Azure/iotedge/blob/preview/iiot/kubernetes/charts/edge-kubernetes/values.yaml) file for IoT Edge Helm chart and replace the `deviceConnectionString` placeholder at the end of the file with the connection string you noted earlier. Set any other supported chart installation options as needed. Create a namespace for the IoT Edge workload and generate a secret in it:
- ```
- $ kubectl create ns iotedge
+ ```
+ $ kubectl create ns iotedge
- $ kubectl create secret generic dcs --from-file=fully-qualified-path-to-values.yaml --namespace iotedge
- ```
+ $ kubectl create secret generic dcs --from-file=fully-qualified-path-to-values.yaml --namespace iotedge
+ ```
- You can also set this up remotely using the [cluster config example](./use-gitops-connected-cluster.md).
+ You can also set up remotely using the [cluster config example](./use-gitops-connected-cluster.md).
## Connect a cluster
-Use the `az` CLI `connectedk8s` extension to connect a Kubernetes cluster to Azure Arc:
+Use the `az` Azure CLI `connectedk8s` extension to connect a Kubernetes cluster to Azure Arc:
``` az connectedk8s connect --name AzureArcIotEdge --resource-group AzureArcTest
@@ -46,21 +52,21 @@ Use the `az` CLI `connectedk8s` extension to connect a Kubernetes cluster to Azu
## Create a configuration for IoT Edge
-Example repo: https://github.com/veyalla/edgearc
+The [example Git repo](https://github.com/veyalla/edgearc) points to the IoT Edge Helm chart and references the secret created in the pre-requisites section.
-This repo points to the IoT Edge Helm chart and references the secret created in the pre-requisites section.
+Use the `az` Azure CLI `k8sconfiguration` extension to create a configuration that links the connected cluster to the Git repo:
-1. Use the `az` CLI `k8sconfiguration` extension to create a configuration to link the connected cluster to the git repo:
+ ```
+ az k8sconfiguration create --name iotedge --cluster-name AzureArcIotEdge --resource-group AzureArcTest --operator-instance-name iotedge --operator-namespace azure-arc-iot-edge --enable-helm-operator --helm-operator-chart-version 0.6.0 --helm-operator-chart-values "--set helm.versions=v3" --repository-url "git://github.com/veyalla/edgearc.git" --cluster-scoped
+ ```
- ```
- az k8sconfiguration create --name iotedge --cluster-name AzureArcIotEdge --resource-group AzureArcTest --operator-instance-name iotedge --operator-namespace azure-arc-iot-edge --enable-helm-operator --helm-operator-chart-version 0.6.0 --helm-operator-chart-values "--set helm.versions=v3" --repository-url "git://github.com/veyalla/edgearc.git" --cluster-scoped
- ```
+In a few minutes, you should see the IoT Edge workload modules deployed into your cluster's `iotedge` namespace.
- In a minute or two, you should see the IoT Edge workload modules deployed into the `iotedge` namespace in your cluster. You can view the logs of the `SimulatedTemperatureSensor` pod in that namespace to see the sample values being generated. You can also watch the messages arrive at your IoT hub by using the [Azure IoT Hub Toolkit extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
+View the `SimulatedTemperatureSensor` pod logs in that namespace to see the sample values being generated. You can also watch the messages arrive at your IoT hub by using the [Azure IoT Hub Toolkit extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
## Cleanup
-You can remove the configuration using:
+Remove the configuration using:
``` az k8sconfiguration delete -g AzureArcTest --cluster-name AzureArcIotEdge --name iotedge
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/use-azure-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/use-azure-policy.md
@@ -3,7 +3,7 @@ Title: "Use Azure Policy to apply cluster configurations at scale (Preview)"
# Previously updated : 05/19/2020 Last updated : 02/10/2021
@@ -15,39 +15,58 @@ keywords: "Kubernetes, Arc, Azure, K8s, containers"
## Overview
-Use Azure Policy to enforce that each `Microsoft.Kubernetes/connectedclusters` resource or Git-Ops enabled `Microsoft.ContainerService/managedClusters` resource has specific `Microsoft.KubernetesConfiguration/sourceControlConfigurations` applied on it. To use Azure Policy you select an existing policy definition and create a policy assignment. When creating the policy assignment you set the scope for the assignment: this will be an Azure resource group or subscription. You also set the parameters for the `sourceControlConfiguration` that will be created. Once the assignment is created the Policy engine will identify all `connectedCluster` or `managedCluster` resources that are located within the scope and will apply the `sourceControlConfiguration` to each one.
+You can use Azure Policy to enforce either of the following resources to have specific `Microsoft.KubernetesConfiguration/sourceControlConfigurations` applied:
+* `Microsoft.Kubernetes/connectedclusters` resource.
+* GitOps-enabled `Microsoft.ContainerService/managedClusters` resource.
-If you are using multiple Git repos as the sources of truth for each cluster (for instance, one repo for central IT/cluster operator and other repos for application teams), you can enable this by using multiple policy assignments, each policy assignment configured to use a different Git repo.
+To use Azure Policy, select an existing policy definition and create a policy assignment. When creating the policy assignment:
+1. Set the scope for the assignment.
+ * The scope will be an Azure resource group or subscription.
+2. Set the parameters for the `sourceControlConfiguration` that will be created.
+
+Once the assignment is created, the Azure Policy engine identifies all `connectedCluster` or `managedCluster` resources located within the scope and applies the `sourceControlConfiguration` to each one.
+
+You can enable multiple Git repos as the sources of truth for each cluster by using multiple policy assignments. Each policy assignment would be configured to use a different Git repo; for example, one repo for the central IT/cluster operator and other repos for application teams.
## Prerequisite
-Ensure that you have `Microsoft.Authorization/policyAssignments/write` permissions on the scope (subscription or resource group) where you want to create this policy assignment.
+Verify you have `Microsoft.Authorization/policyAssignments/write` permissions on the scope (subscription or resource group) where you will create this policy assignment.
## Create a policy assignment
-1. In the Azure portal, navigate to Policy, and in the **Authoring** section of the sidebar, select **Definitions**.
-2. Choose the "Deploy GitOps to Kubernetes cluster" built-in policy in the "Kubernetes" category, and click on **Assign**.
-3. Set the **Scope** to the management group, subscription, or resource group where the policy assignment will apply.
-4. If you want to exclude any resources from the policy scope, then set **Exclusions**.
-5. Give the policy assignment a **Name** and **Description** that you can use to identify it easily.
-6. Ensure that **Policy enforcement** is set to *Enabled*.
-7. Select **Next**.
-8. Set parameter values that will be used during the creation of the `sourceControlConfiguration`.
-9. Select **Next**.
-10. Enable **Create a remediation task**.
-11. Assure that **Create a managed identity** is checked, and that the identity will have **Contributor** permissions. See [this doc](../../governance/policy/assign-policy-portal.md) and [the comment in this doc](../../governance/policy/how-to/remediate-resources.md) for more information on the permissions you need.
-12. Select **Review + create**.
-
-After the policy assignment is created, for any new `connectedCluster` resource (or `managedCluster` resource with the GitOps agents installed) that is located within the scope of the assignment, the `sourceControlConfiguration` will be applied. For existing clusters, you will need to manually run a remediation task. It typically takes from 10-20 minutes for the policy assignment to take effect.
+1. In the Azure portal, navigate to **Policy**.
+1. In the **Authoring** section of the sidebar, select **Definitions**.
+1. In the "Kubernetes" category, choose the "Deploy GitOps to Kubernetes cluster" built-in policy.
+1. Click on **Assign**.
+1. Set the **Scope** to the management group, subscription, or resource group to which the policy assignment will apply.
+ * If you want to exclude any resources from the policy scope, set **Exclusions**.
+1. Give the policy assignment an easily identifiable **Name** and **Description**.
+1. Ensure **Policy enforcement** is set to **Enabled**.
+1. Select **Next**.
+1. Set the parameter values to be used while creating the `sourceControlConfiguration`.
+1. Select **Next**.
+1. Enable **Create a remediation task**.
+1. Verify **Create a managed identity** is checked, and that the identity will have **Contributor** permissions.
+ * For more information, see the [Create a policy assignment quickstart](../../governance/policy/assign-policy-portal.md) and the [Remediate non-compliant resources with Azure Policy article](../../governance/policy/how-to/remediate-resources.md).
+1. Select **Review + create**.
+
+After creating the policy assignment, the `sourceControlConfiguration` will be applied for any of the following resources located within the scope of the assignment:
+* New `connectedCluster` resources.
+* New `managedCluster` resources with the GitOps agents installed.
+
+For existing clusters, you will need to manually run a remediation task. The policy assignment typically takes 10 to 20 minutes to take effect.
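If you prefer the command line, the remediation task can also be created with the Azure CLI. The following is a sketch, assuming the policy assignment was created at the `AzureArcTest` resource group scope; the assignment name is a placeholder.

```azurecli
# Create a remediation task so clusters that existed before the assignment
# also get the sourceControlConfiguration applied.
# "gitops-policy-assignment" is a hypothetical assignment name - substitute your own.
az policy remediation create \
  --name remediate-existing-clusters \
  --resource-group AzureArcTest \
  --policy-assignment gitops-policy-assignment
```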
## Verify a policy assignment
-1. In the Azure portal, navigate to one of your `connectedCluster` resources, and in the **Settings** section of the sidebar, select **Policies**. (The UX for AKS cluster is not implemented yet, but is coming.)
-2. In the list, you should see the policy assignment that you created above, and the **Compliance state** should be *Compliant*.
-3. In the **Settings** section of the sidebar, select **Configurations**.
-4. In the list, you should see the `sourceControlConfiguration` that the policy assignment created.
-5. Use **kubectl** to interrogate the cluster: you should see the namespace and artifacts that were created by the `sourceControlConfiguration`.
-6. Within 5 minutes, you should see in the cluster the artifacts that are described in the manifests in the configured Git repo.
+1. In the Azure portal, navigate to one of your `connectedCluster` resources.
+1. In the **Settings** section of the sidebar, select **Policies**.
+ * The AKS cluster UX is not implemented yet.
+ * In the policies list, you should see the policy assignment that you created earlier with the **Compliance state** set as *Compliant*.
+1. In the **Settings** section of the sidebar, select **Configurations**.
+ * In the configurations list, you should see the `sourceControlConfiguration` that the policy assignment created.
+1. Use `kubectl` to interrogate the cluster.
+ * You should see the namespace and artifacts that were created by the `sourceControlConfiguration`.
+ * Within 5 minutes, you should see in the cluster the artifacts that are described in the manifests in the configured Git repo.
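As a minimal sketch of the `kubectl` verification above, the following commands list the namespaces and the artifacts created by the `sourceControlConfiguration`; the `cluster-config` namespace is an assumption based on the example configuration used in these articles.

```console
# List namespaces; the operator namespace created by the configuration should appear.
kubectl get ns --show-labels

# Inspect the artifacts deployed into that namespace (assumes the namespace is cluster-config).
kubectl -n cluster-config get all
```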
## Next steps
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/use-gitops-connected-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/use-gitops-connected-cluster.md
@@ -3,46 +3,62 @@ Title: "Deploy configurations using GitOps on Arc enabled Kubernetes cluster (Pr
# Previously updated : 05/19/2020 Last updated : 02/09/2021
-description: "Use GitOps to configure an Azure Arc-enabled Kubernetes cluster (Preview)"
+description: "Use GitOps to configure an Azure Arc enabled Kubernetes cluster (Preview)"
keywords: "GitOps, Kubernetes, K8s, Azure, Arc, Azure Kubernetes Service, AKS, containers" # Deploy configurations using GitOps on Arc enabled Kubernetes cluster (Preview)
-GitOps, as it relates to Kubernetes, is the practice of declaring the desired state of Kubernetes configuration (deployments, namespaces, etc.) in a Git repository followed by a polling and pull-based deployment of these configurations to the cluster using an operator. This document covers the setup of such workflows on Azure Arc enabled Kubernetes clusters.
+In relation to Kubernetes, GitOps is the practice of declaring the desired state of Kubernetes cluster configurations (deployments, namespaces, etc.) in a Git repository. This declaration is followed by a polling and pull-based deployment of these cluster configurations using an operator.
-The connection between your cluster and a Git repository is created in Azure Resource Manager as a `Microsoft.KubernetesConfiguration/sourceControlConfigurations` extension resource. The `sourceControlConfiguration` resource properties represent where and how Kubernetes resources should flow from Git to your cluster. The `sourceControlConfiguration` data is stored encrypted at rest in an Azure Cosmos DB database to ensure data confidentiality.
+This article covers the setup of GitOps workflows on Azure Arc enabled Kubernetes clusters.
-The `config-agent` running in your cluster is responsible for watching for new or updated `sourceControlConfiguration` extension resources on the Azure Arc enabled Kubernetes resource, for deploying a Flux operator to watch the Git repository for each `sourceControlConfiguration`, and applying any updates made to any `sourceControlConfiguration`. It is possible to create multiple `sourceControlConfiguration` resources on the same Azure Arc enabled Kubernetes cluster to achieve multi-tenancy. You can create each `sourceControlConfiguration` with a different `namespace` scope to limit deployments to within the respective namespaces.
+The connection between your cluster and a Git repository is created as a `Microsoft.KubernetesConfiguration/sourceControlConfigurations` extension resource in Azure Resource Manager. The `sourceControlConfiguration` resource properties represent where and how Kubernetes resources should flow from Git to your cluster. The `sourceControlConfiguration` data is stored encrypted at rest in an Azure Cosmos DB database to ensure data confidentiality.
-The Git repository can contain YAML-format manifests that describe any valid Kubernetes resources, including Namespaces, ConfigMaps, Deployments, DaemonSets, etc. It may also contain Helm charts for deploying applications. A common set of scenarios includes defining a baseline configuration for your organization, which might include common Azure roles and bindings, monitoring or logging agents, or cluster-wide services.
+The `config-agent` running in your cluster is responsible for:
+* Tracking new or updated `sourceControlConfiguration` extension resources on the Azure Arc enabled Kubernetes resource.
+* Deploying a Flux operator to watch the Git repository for each `sourceControlConfiguration`.
+* Applying any updates made to any `sourceControlConfiguration`.
-The same pattern can be used to manage a larger collection of clusters, which may be deployed across heterogeneous environments. For example, you may have one repository that defines the baseline configuration for your organization and apply that to tens of Kubernetes clusters at once. [Azure policy can automate](use-azure-policy.md) the creation of a `sourceControlConfiguration` with a specific set of parameters on all Azure Arc enabled Kubernetes resources under a scope (subscription or resource group).
+You can create multiple `sourceControlConfiguration` resources on the same Azure Arc enabled Kubernetes cluster to achieve multi-tenancy. Limit deployments to within the respective namespaces by creating each `sourceControlConfiguration` with a different `namespace` scope.
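For example, the following sketch creates two namespace-scoped configurations on the same cluster, one per team. The repository URLs and configuration names are placeholders; the parameters follow the same `az k8sconfiguration create` syntax shown later in this article.

```azurecli
# Configuration for team-a, limited to the team-a namespace.
az k8sconfiguration create --name team-a-config --cluster-name AzureArcTest1 --resource-group AzureArcTest \
  --operator-instance-name team-a-config --operator-namespace team-a \
  --repository-url https://github.com/contoso/team-a-config \
  --scope namespace --cluster-type connectedClusters

# Configuration for team-b, limited to the team-b namespace.
az k8sconfiguration create --name team-b-config --cluster-name AzureArcTest1 --resource-group AzureArcTest \
  --operator-instance-name team-b-config --operator-namespace team-b \
  --repository-url https://github.com/contoso/team-b-config \
  --scope namespace --cluster-type connectedClusters
```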
-This getting started guide will walk you through applying a set of configurations with cluster-admin scope.
+The Git repository can contain:
+* YAML-format manifests describing any valid Kubernetes resources, including Namespaces, ConfigMaps, Deployments, DaemonSets, etc.
+* Helm charts for deploying applications.
+
+A common set of scenarios includes defining a baseline configuration for your organization, such as common Azure roles and bindings, monitoring or logging agents, or cluster-wide services.
+
+The same pattern can be used to manage a larger collection of clusters, which may be deployed across heterogeneous environments. For example, you have one repository that defines the baseline configuration for your organization, which applies to multiple Kubernetes clusters at once. [Azure Policy can automate](use-azure-policy.md) the creation of a `sourceControlConfiguration` with a specific set of parameters on all Azure Arc enabled Kubernetes resources within a scope (subscription or resource group).
+
+Walk through the following steps to learn how to apply a set of configurations with `cluster-admin` scope.
## Before you begin
-This article assumes that you have an existing Azure Arc enabled Kubernetes connected cluster. If you need a connected cluster, see the [connect a cluster quickstart](./connect-cluster.md).
+Verify you have an existing Azure Arc enabled Kubernetes connected cluster. If you need a connected cluster, see the [Connect an Azure Arc enabled Kubernetes cluster quickstart](./connect-cluster.md).
## Create a configuration
-The [example repository](https://github.com/Azure/arc-k8s-demo) used in this document is structured around the persona of a cluster operator who would like to provision a few namespaces, deploy a common workload, and provide some team-specific configuration. Using this repository creates the following resources on your cluster:
+The [example repository](https://github.com/Azure/arc-k8s-demo) used in this article is structured around the persona of a cluster operator who would like to provision a few namespaces, deploy a common workload, and provide some team-specific configuration. Using this repository creates the following resources on your cluster:
+
-**Namespaces:** `cluster-config`, `team-a`, `team-b`
-**Deployment:** `cluster-config/azure-vote`
-**ConfigMap:** `team-a/endpoints`
+* **Namespaces:** `cluster-config`, `team-a`, `team-b`
+* **Deployment:** `cluster-config/azure-vote`
+* **ConfigMap:** `team-a/endpoints`
-The `config-agent` polls Azure for new or updated `sourceControlConfiguration` every 30 seconds, which is the maximum time taken by `config-agent` to pick up a new or updated configuration.
-If you are associating a private repository with the `sourceControlConfiguration`, ensure that you also complete the steps in [Apply configuration from a private Git repository](#apply-configuration-from-a-private-git-repository).
+The `config-agent` polls Azure for new or updated `sourceControlConfiguration` resources every 30 seconds, so it can take up to 30 seconds for a new or updated configuration to be picked up.
+
+If you are associating a private repository with the `sourceControlConfiguration`, complete the steps in [Apply configuration from a private Git repository](#apply-configuration-from-a-private-git-repository).
### Using Azure CLI
-Use the Azure CLI extension for `k8sconfiguration` to link a connected cluster to the [example Git repository](https://github.com/Azure/arc-k8s-demo). We will give this configuration a name `cluster-config`, instruct the agent to deploy the operator in the `cluster-config` namespace, and give the operator `cluster-admin` permissions.
+Use the Azure CLI extension for `k8sconfiguration` to link a connected cluster to the [example Git repository](https://github.com/Azure/arc-k8s-demo).
+1. Name this configuration `cluster-config`.
+1. Instruct the agent to deploy the operator in the `cluster-config` namespace.
+1. Give the operator `cluster-admin` permissions.
```azurecli az k8sconfiguration create --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest --operator-instance-name cluster-config --operator-namespace cluster-config --repository-url https://github.com/Azure/arc-k8s-demo --scope cluster --cluster-type connectedClusters
@@ -89,97 +105,94 @@ Command group 'k8sconfiguration' is in preview. It may be changed/removed in a f
| Parameter | Format | | - | - |
-| --repository-url | http[s]://server/repo[.git] or git://server/repo[.git]
+| `--repository-url` | http[s]://server/repo[.git] or git://server/repo[.git]
#### Use a private Git repo with SSH and Flux-created keys | Parameter | Format | Notes | - | - | - |
-| --repository-url | ssh://user@server/repo[.git] or user@server:repo[.git] | `git@` may substitute for `user@`
+| `--repository-url` | ssh://user@server/repo[.git] or user@server:repo[.git] | `git@` may replace `user@`
> [!NOTE]
-> The public key generated by Flux must be added to the user account in your Git service provider. If the key is added to the repo instead of > the user account, use `git@` in place of `user@` in the URL. [View more details](#apply-configuration-from-a-private-git-repository)
+> The public key generated by Flux must be added to the user account in your Git service provider. If the key is added to the repo instead of the user account, use `git@` in place of `user@` in the URL. Jump to the [Apply configuration from a private Git repository](#apply-configuration-from-a-private-git-repository) section for more details.
#### Use a private Git repo with SSH and user-provided keys | Parameter | Format | Notes | | - | - | - |
-| --repository-url | ssh://user@server/repo[.git] or user@server:repo[.git] | `git@` may substitute for `user@` |
-| --ssh-private-key | base64-encoded key in [PEM format](https://aka.ms/PEMformat) | Provide key directly |
-| --ssh-private-key-file | full path to local file | Provide full path to local file that contains the PEM-format key
+| `--repository-url` | ssh://user@server/repo[.git] or user@server:repo[.git] | `git@` may replace `user@` |
+| `--ssh-private-key` | base64-encoded key in [PEM format](https://aka.ms/PEMformat) | Provide key directly |
+| `--ssh-private-key-file` | full path to local file | Provide full path to local file that contains the PEM-format key
> [!NOTE]
-> Provide your own private key directly or in a file. The key must be in [PEM format](https://aka.ms/PEMformat) and end with newline (\n). The associated public key must be added to the user account in your Git service provider. If the key is added to the repo instead of the user account, use `git@` in place of `user@`. [View more details](#apply-configuration-from-a-private-git-repository)
+> Provide your own private key directly or in a file. The key must be in [PEM format](https://aka.ms/PEMformat) and end with newline (\n). The associated public key must be added to the user account in your Git service provider. If the key is added to the repo instead of the user account, use `git@` in place of `user@`. Jump to the [Apply configuration from a private Git repository](#apply-configuration-from-a-private-git-repository) section for more details.
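A sketch of the user-provided key scenario, combining the parameters above with the base command from earlier in this article; the key path and repository URL are placeholders.

```azurecli
# Link a private repo over SSH using your own PEM-format key file
# (use --ssh-private-key with a base64-encoded string to pass the key inline instead).
az k8sconfiguration create --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest \
  --operator-instance-name cluster-config --operator-namespace cluster-config \
  --repository-url git@github.com:contoso/private-repo.git \
  --ssh-private-key-file /home/user/.ssh/id_rsa \
  --scope cluster --cluster-type connectedClusters
```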
#### Use a private Git host with SSH and user-provided known hosts | Parameter | Format | Notes | | - | - | - |
-| --repository-url | ssh://user@server/repo[.git] or user@server:repo[.git] | `git@` may substitute for `user@` |
-| --ssh-known-hosts | base64-encoded | known hosts content provided directly |
-| --ssh-known-hosts-file | full path to local file | known hosts content provided in a local file
+| `--repository-url` | ssh://user@server/repo[.git] or user@server:repo[.git] | `git@` may replace `user@` |
+| `--ssh-known-hosts` | base64-encoded | Provide known hosts content directly |
+| `--ssh-known-hosts-file` | full path to local file | Provide known hosts content in a local file |
> [!NOTE]
-> The Flux operator maintains a list of common Git hosts in its known hosts file in order to authenticate the Git repo before establishing the SSH connection. If you are using an uncommon Git repo or your own Git host, you may need to supply the host key to ensure that Flux can identify your repo. You can provide your known hosts content directly or in a file. [View known hosts content format specification](https://aka.ms/KnownHostsFormat).
-> You can use this in conjunction with one of the SSH key scenarios described above.
+> In order to authenticate the Git repo before establishing the SSH connection, the Flux operator maintains a list of common Git hosts in its known hosts file. If you are using an uncommon Git repo or your own Git host, you may need to supply the host key to ensure that Flux can identify your repo. You can provide your known_hosts content directly or in a file; see the [known_hosts content format specification](https://aka.ms/KnownHostsFormat). This option can be used in conjunction with one of the SSH key scenarios described above.
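For a self-hosted Git server, a sketch like the following captures the host key with `ssh-keyscan` and supplies it alongside an SSH key scenario; the host name, paths, and repository URL are placeholders.

```azurecli
# Capture the host key of the private Git host into a known_hosts file.
ssh-keyscan git.contoso.com > /tmp/known_hosts

# Supply the known_hosts content together with a user-provided SSH key.
az k8sconfiguration create --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest \
  --operator-instance-name cluster-config --operator-namespace cluster-config \
  --repository-url git@git.contoso.com:ops/cluster-config.git \
  --ssh-private-key-file /home/user/.ssh/id_rsa \
  --ssh-known-hosts-file /tmp/known_hosts \
  --scope cluster --cluster-type connectedClusters
```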
#### Use a private Git repo with HTTPS | Parameter | Format | Notes | | - | - | - |
-| --repository-url | https://server/repo[.git] | HTTPS with basic auth |
-| --https-user | raw or base64-encoded | HTTPS username |
-| --https-key | raw or base64-encoded | HTTPS personal access token or password
+| `--repository-url` | https://server/repo[.git] | HTTPS with basic auth |
+| `--https-user` | raw or base64-encoded | HTTPS username |
+| `--https-key` | raw or base64-encoded | HTTPS personal access token or password
> [!NOTE]
-> HTTPS Helm release private auth is supported only with Helm operator chart version >= 1.2.0. Version 1.2.0 is used by default.
-> HTTPS Helm release private auth is not supported currently for Azure Kubernetes Services managed clusters.
-> If you need Flux to access the Git repo through your proxy, then you will need to update the Azure Arc agents with the proxy settings. [More information](./connect-cluster.md#connect-using-an-outbound-proxy-server)
+> HTTPS Helm release private auth is supported only with Helm operator chart version 1.2.0+ (default).
+> HTTPS Helm release private auth is not supported currently for Azure Kubernetes Services-managed clusters.
+> If you need Flux to access the Git repo through your proxy, then you will need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./connect-cluster.md#connect-using-an-outbound-proxy-server).
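A sketch of the HTTPS scenario using a personal access token; the user name, token, and repository URL are placeholders.

```azurecli
# Link a private repo over HTTPS with basic auth.
az k8sconfiguration create --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest \
  --operator-instance-name cluster-config --operator-namespace cluster-config \
  --repository-url https://github.com/contoso/private-repo.git \
  --https-user gitops-bot \
  --https-key <personal-access-token> \
  --scope cluster --cluster-type connectedClusters
```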
#### Additional Parameters
-To customize the configuration, here are more parameters you can use:
-
-`--enable-helm-operator` : *Optional* switch to enable support for Helm chart deployments.
-
-`--helm-operator-params` : *Optional* chart values for Helm operator (if enabled). For example, '--set helm.versions=v3'.
+Customize the configuration with the following optional parameters:
-`--helm-operator-version` : *Optional* chart version for Helm operator (if enabled). Use '1.2.0' or greater. Default: '1.2.0'.
-
-`--operator-namespace` : *Optional* name for the operator namespace. Default: 'default'. Max 23 characters.
-
-`--operator-params` : *Optional* parameters for operator. Must be given within single quotes. For example, ```--operator-params='--git-readonly --sync-garbage-collection --git-branch=main' ```
+| Parameter | Description |
+| - | - |
+| `--enable-helm-operator`| Switch to enable support for Helm chart deployments. |
+| `--helm-operator-params` | Chart values for Helm operator (if enabled). For example, `--set helm.versions=v3`. |
+| `--helm-operator-version` | Chart version for Helm operator (if enabled). Use version 1.2.0+. Default: '1.2.0'. |
+| `--operator-namespace` | Name for the operator namespace. Default: 'default'. Max: 23 characters. |
+| `--operator-params` | Parameters for operator. Must be given within single quotes. For example, ```--operator-params='--git-readonly --sync-garbage-collection --git-branch=main' ```
-Options supported in --operator-params
+##### Options supported in `--operator-params`:
| Option | Description | | - | - |
-| --git-branch | Branch of Git repo to use for Kubernetes manifests. Default is 'master'. Newer repositories have root branch named 'main', in which case you need to set --git-branch=main. |
-| --git-path | Relative path within the Git repo for Flux to locate Kubernetes manifests. |
-| --git-readonly | Git repo will be considered read-only; Flux will not attempt to write to it. |
-| --manifest-generation | If enabled, Flux will look for .flux.yaml and run Kustomize or other manifest generators. |
-| --git-poll-interval | Period at which to poll Git repo for new commits. Default is '5m' (5 minutes). |
-| --sync-garbage-collection | If enabled, Flux will delete resources that it created, but are no longer present in Git. |
-| --git-label | Label to keep track of sync progress, used to tag the Git branch. Default is 'flux-sync'. |
-| --git-user | Username for Git commit. |
-| --git-email | Email to use for Git commit. |
+| `--git-branch` | Branch of Git repo to use for Kubernetes manifests. Default is 'master'. Newer repositories have root branch named `main`, in which case you need to set `--git-branch=main`. |
+| `--git-path` | Relative path within the Git repo for Flux to locate Kubernetes manifests. |
+| `--git-readonly` | Git repo will be considered read-only; Flux will not attempt to write to it. |
+| `--manifest-generation` | If enabled, Flux will look for .flux.yaml and run Kustomize or other manifest generators. |
+| `--git-poll-interval` | Period at which to poll Git repo for new commits. Default is `5m` (5 minutes). |
+| `--sync-garbage-collection` | If enabled, Flux will delete resources that it created, but are no longer present in Git. |
+| `--git-label` | Label to keep track of sync progress. Used to tag the Git branch. Default is `flux-sync`. |
+| `--git-user` | Username for Git commit. |
+| `--git-email` | Email to use for Git commit.
-* If '--git-user' or '--git-email' are not set (which means that you don't want Flux to write to the repo), then --git-readonly will automatically be set (if you have not already set it).
+If you don't want Flux to write to the repo and `--git-user` or `--git-email` are not set, then `--git-readonly` will automatically be set.
-For more information, see [Flux documentation](https://aka.ms/FluxcdReadme).
+For more information, see the [Flux documentation](https://aka.ms/FluxcdReadme).
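For example, if your repo keeps its manifests in a subfolder of a `main` root branch, the operator parameters might look like the following sketch; the repository URL and path are placeholders.

```azurecli
az k8sconfiguration create --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest \
  --operator-instance-name cluster-config --operator-namespace cluster-config \
  --repository-url https://github.com/contoso/cluster-config \
  --operator-params='--git-readonly --git-branch=main --git-path=clusters/prod --sync-garbage-collection' \
  --scope cluster --cluster-type connectedClusters
```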
> [!TIP]
-> It is possible to create a sourceControlConfiguration in the Azure portal in the **GitOps** tab of the Azure Arc enabled Kubernetes resource.
+> You can create a `sourceControlConfiguration` in the Azure portal in the **GitOps** tab of the Azure Arc enabled Kubernetes resource.
## Validate the sourceControlConfiguration
-Using the Azure CLI validate that the `sourceControlConfiguration` was successfully created.
+Use the Azure CLI to validate that the `sourceControlConfiguration` was successfully created.
```azurecli az k8sconfiguration show --name cluster-config --cluster-name AzureArcTest1 --resource-group AzureArcTest --cluster-type connectedClusters ```
-Note that the `sourceControlConfiguration` resource is updated with compliance status, messages, and debugging information.
+The `sourceControlConfiguration` resource will be updated with compliance status, messages, and debugging information.
**Output:**
@@ -224,7 +237,7 @@ Command group 'k8sconfiguration' is in preview. It may be changed/removed in a f
When a `sourceControlConfiguration` is created or updated, a few things happen under the hood:
-1. The Azure Arc `config-agent` is monitoring Azure Resource Manager for new or updated configurations (`Microsoft.KubernetesConfiguration/sourceControlConfigurations`) and notices the new `Pending` configuration.
+1. The Azure Arc `config-agent` monitors Azure Resource Manager for new or updated configurations (`Microsoft.KubernetesConfiguration/sourceControlConfigurations`) and notices the new `Pending` configuration.
1. The `config-agent` reads the configuration properties and creates the destination namespace. 1. The Azure Arc `controller-manager` prepares a Kubernetes Service Account with the appropriate permission (`cluster` or `namespace` scope) and then deploys an instance of `flux`. 1. If using the option of SSH with Flux-generated keys, `flux` generates an SSH key and logs the public key.
@@ -232,19 +245,23 @@ When a `sourceControlConfiguration` is created or updated, a few things happen u
While the provisioning process happens, the `sourceControlConfiguration` will move through a few state changes. Monitor progress with the `az k8sconfiguration show ...` command above:
-1. `complianceStatus` -> `Pending`: represents the initial and in-progress states
-1. `complianceStatus` -> `Installed`: `config-agent` was able to successfully configure the cluster and deploy `flux` without error
-1. `complianceStatus` -> `Failed`: `config-agent` encountered an error deploying `flux`, details should be available in `complianceStatus.message` response body
+| State change | Description |
+| - | - |
+| `complianceStatus` -> `Pending` | Represents the initial and in-progress states. |
+| `complianceStatus` -> `Installed` | `config-agent` was able to successfully configure the cluster and deploy `flux` without error. |
+| `complianceStatus` -> `Failed` | `config-agent` encountered an error deploying `flux`; details should be available in the `complianceStatus.message` response body. |
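To watch just that state from the command line, a `--query` filter can be added to the `show` command used above; this returns the `complianceStatus` object, including any error message.

```azurecli
az k8sconfiguration show --name cluster-config --cluster-name AzureArcTest1 \
  --resource-group AzureArcTest --cluster-type connectedClusters \
  --query 'complianceStatus'
```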
## Apply configuration from a private Git repository
-If you are using a private Git repo, then you need to configure the SSH public key in your repo. You can configure the public key either on the specific Git repo or on the Git user that has access to the repo. The SSH public key will be either the one you provide or the one that Flux generates.
+If you are using a private Git repo, you need to configure the SSH public key in your repo. The SSH public key will either be the one that Flux generates or the one you provide. You can configure the public key either on the specific Git repo or on the Git user that has access to the repo.
-**Get your own public key**
+### Get your own public key
If you generated your own SSH keys, then you already have the private and public keys.
-**Get the public key using Azure CLI (useful if Flux generates the keys)**
+#### Get the public key using Azure CLI
+
+The following is useful if Flux generates the keys.
```console $ az k8sconfiguration show --resource-group <resource group name> --cluster-name <connected cluster name> --name <configuration name> --cluster-type connectedClusters --query 'repositoryPublicKey'
@@ -252,45 +269,51 @@ Command group 'k8sconfiguration' is in preview. It may be changed/removed in a f
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAREDACTED" ```
-**Get the public key from the Azure portal (useful if Flux generates the keys)**
+#### Get the public key from the Azure portal
+
+The following is useful if Flux generates the keys.
1. In the Azure portal, navigate to the connected cluster resource. 2. In the resource page, select "GitOps" and see the list of configurations for this cluster. 3. Select the configuration that uses the private Git repository. 4. In the context window that opens, at the bottom of the window, copy the **Repository public key**.
-If you are using GitHub, use one of the following 2 options:
+#### Add public key using GitHub
-**Option 1: Add the public key to your user account (applies to all repos in your account)**
+Use one of the following options:
-1. Open GitHub, click on your profile icon at the top right corner of the page.
-2. Click on **Settings**
-3. Click on **SSH and GPG keys**
-4. Click on **New SSH key**
-5. Supply a Title
-6. Paste the public key (minus any surrounding quotation marks)
-7. Click on **Add SSH key**
+* Option 1: Add the public key to your user account (applies to all repos in your account):
+ 1. Open GitHub and click on your profile icon at the top-right corner of the page.
+ 2. Click on **Settings**.
+ 3. Click on **SSH and GPG keys**.
+ 4. Click on **New SSH key**.
+ 5. Supply a Title.
+ 6. Paste the public key without any surrounding quotes.
+ 7. Click on **Add SSH key**.
-**Option 2: Add the public key as a deploy key to the Git repo (applies to only this repo)**
+* Option 2: Add the public key as a deploy key to the Git repo (applies to only this repo):
+ 1. Open GitHub and navigate to your repo.
+ 1. Click on **Settings**.
+ 1. Click on **Deploy keys**.
+ 1. Click on **Add deploy key**.
+ 1. Supply a Title.
+ 1. Check **Allow write access**.
+ 1. Paste the public key without any surrounding quotes.
+ 1. Click on **Add key**.
-1. Open GitHub, navigate to your repo, to **Settings**, then to **Deploy keys**
-2. Click on **Add deploy key**
-3. Supply a Title
-4. Check **Allow write access**
-5. Paste the public key (minus any surrounding quotation marks)
-6. Click on **Add key**
+#### Add public key using an Azure DevOps repository
-**If you are using an Azure DevOps repository, add the key to your SSH keys**
+Use the following steps to add the key to your SSH keys:
-1. Under **User Settings** in the top right (next to the profile image), click **SSH public keys**
-1. Select **+ New Key**
-1. Supply a name
-1. Paste the public key without any surrounding quotes
-1. Click **Add**
+1. Under **User Settings** in the top right (next to the profile image), click **SSH public keys**.
+1. Select **+ New Key**.
+1. Supply a name.
+1. Paste the public key without any surrounding quotes.
+1. Click **Add**.
## Validate the Kubernetes configuration
-After `config-agent` has installed the `flux` instance, resources held in the Git repository should begin to flow to the cluster. Check to see that the namespaces, deployments, and resources have been created:
+After `config-agent` has installed the `flux` instance, resources held in the Git repository should begin to flow to the cluster. Check to see that the namespaces, deployments, and resources have been created with the following command:
```console kubectl get ns --show-labels
@@ -329,7 +352,7 @@ memcached 1/1 1 1 3h memcached memcached:1
## Further exploration
-You can explore the other resources deployed as part of the configuration repository:
+You can explore the other resources deployed as part of the configuration repository using:
```console kubectl -n team-a get cm -o yaml
@@ -338,10 +361,10 @@ kubectl -n itops get all
## Delete a configuration
-Delete a `sourceControlConfiguration` using the Azure CLI or Azure portal. After you initiate the delete command, the `sourceControlConfiguration` resource will be deleted immediately in Azure, and full deletion of the associated objects from the cluster should happen within 10 minutes. If the `sourceControlConfiguration` is in a failed state when it is deleted, the full deletion of associated objects can take up to an hour.
+Delete a `sourceControlConfiguration` using the Azure CLI or Azure portal. After you initiate the delete command, the `sourceControlConfiguration` resource will be deleted immediately in Azure. Full deletion of the associated objects from the cluster should happen within 10 minutes. If the `sourceControlConfiguration` is in a failed state when removed, the full deletion of associated objects can take up to an hour.
> [!NOTE]
-> After a sourceControlConfiguration with namespace scope is created, it's possible for users with `edit` role binding on the namespace to deploy workloads on this namespace. When this `sourceControlConfiguration` with namespace scope gets deleted, the namespace is left intact and will not be deleted to avoid breaking these other workloads. If needed you can delete this namespace manually with kubectl.
+> After a `sourceControlConfiguration` with `namespace` scope is created, users with `edit` role binding on the namespace can deploy workloads on this namespace. When this `sourceControlConfiguration` with namespace scope gets deleted, the namespace is left intact and will not be deleted to avoid breaking these other workloads. If needed, you can delete this namespace manually with `kubectl`.
> Any changes to the cluster that were the result of deployments from the tracked Git repo are not deleted when the `sourceControlConfiguration` is deleted. ```azurecli
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/use-gitops-with-helm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/use-gitops-with-helm.md
@@ -3,29 +3,29 @@ Title: "Deploy Helm Charts using GitOps on Arc enabled Kubernetes cluster(Previe
# Previously updated : 05/19/2020 Last updated : 02/09/2021
-description: "Use GitOps with Helm for an Azure Arc-enabled cluster configuration (Preview)"
+description: "Use GitOps with Helm for an Azure Arc enabled cluster configuration (Preview)"
keywords: "GitOps, Kubernetes, K8s, Azure, Helm, Arc, AKS, Azure Kubernetes Service, containers" # Deploy Helm Charts using GitOps on Arc enabled Kubernetes cluster (Preview)
-Helm is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. Similar to Linux package managers such as APT and Yum, Helm is used to manage Kubernetes charts, which are packages of pre-configured Kubernetes resources.
+Helm is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. Similar to Linux package managers like APT and Yum, Helm is used to manage Kubernetes charts, which are packages of pre-configured Kubernetes resources.
This article shows you how to configure and use Helm with Azure Arc enabled Kubernetes. ## Before you begin
-This article assumes that you have an existing Azure Arc enabled Kubernetes connected cluster. If you need a connected cluster, see the [connect a cluster quickstart](./connect-cluster.md).
+Verify you have an existing Azure Arc enabled Kubernetes connected cluster. If you need a connected cluster, see the [Connect an Azure Arc enabled Kubernetes cluster quickstart](./connect-cluster.md).
## Overview of using GitOps and Helm with Azure Arc enabled Kubernetes
- The Helm operator provides an extension to Flux that automates Helm Chart releases. A Chart release is described through a Kubernetes custom resource named HelmRelease. Flux synchronizes these resources from git to the cluster, and the Helm operator makes sure Helm charts are released as specified in the resources.
+ The Helm operator provides an extension to Flux that automates Helm Chart releases. A Helm Chart release is described via a Kubernetes custom resource named HelmRelease. Flux synchronizes these resources from Git to the cluster, while the Helm operator makes sure Helm Charts are released as specified in the resources.
- The [example repository](https://github.com/Azure/arc-helm-demo) used in this document is structured in the following way:
+ The [example repository](https://github.com/Azure/arc-helm-demo) used in this article is structured in the following way:
```console ├── charts
@@ -40,7 +40,7 @@ This article assumes that you have an existing Azure Arc enabled Kubernetes conn
└── app.yaml ```
-In the git repo we have two directories, one containing a Helm chart and one containing the releases config. In the `releases` directory, the `app.yaml` contains the HelmRelease config shown below:
+In the Git repo we have two directories: one containing a Helm Chart and one containing the releases config. In the `releases` directory, the `app.yaml` contains the HelmRelease config, shown below:
```yaml apiVersion: helm.fluxcd.io/v1
@@ -60,19 +60,21 @@ spec:
The Helm release config contains the following fields:

-- `metadata.name` is mandatory, and needs to follow Kubernetes naming conventions
-- `metadata.namespace` is optional, and determines where the release is created
-- `spec.releaseName` is optional, and if not provided the release name will be $namespace-$name
-- `spec.chart.path` is the directory containing the chart, given relative to the repository root
-- `spec.values` are user customizations of default parameter values from the chart itself
+| Field | Description |
+| - | - |
+| `metadata.name` | Mandatory field. Needs to follow Kubernetes naming conventions. |
+| `metadata.namespace` | Optional field. Determines where the release is created. |
+| `spec.releaseName` | Optional field. If not provided the release name will be `$namespace-$name`. |
+| `spec.chart.path` | The directory containing the Chart, given relative to the repository root. |
+| `spec.values` | User customizations of default parameter values from the Chart itself. |
-The options specified in the HelmRelease spec.values will override the options specified in values.yaml from the chart source.
+The options specified in the HelmRelease `spec.values` will override the options specified in `values.yaml` from the Chart source.
-You can learn more about the HelmRelease in the official [Helm Operator documentation](https://docs.fluxcd.io/projects/helm-operator/en/stable/)
+You can learn more about the HelmRelease in the official [Helm Operator documentation](https://docs.fluxcd.io/projects/helm-operator/en/stable/).
## Create a configuration
-Using the Azure CLI extension for `k8sconfiguration`, let's link our connected cluster to the example git repository. We will give this configuration a name `azure-arc-sample` and deploy the Flux operator in the `arc-k8s-demo` namespace.
+Using the Azure CLI extension for `k8sconfiguration`, link your connected cluster to the example Git repository. Give this configuration the name `azure-arc-sample` and deploy the Flux operator in the `arc-k8s-demo` namespace.
```console az k8sconfiguration create --name azure-arc-sample --cluster-name AzureArcTest1 --resource-group AzureArcTest --operator-instance-name flux --operator-namespace arc-k8s-demo --operator-params='--git-readonly --git-path=releases' --enable-helm-operator --helm-operator-version='0.6.0' --helm-operator-params='--set helm.versions=v3' --repository-url https://github.com/Azure/arc-helm-demo.git --scope namespace --cluster-type connectedClusters
@@ -80,11 +82,11 @@ az k8sconfiguration create --name azure-arc-sample --cluster-name AzureArcTest1
### Configuration Parameters
-To customize the creation of configuration, [learn about additional parameters you may use](./use-gitops-connected-cluster.md#additional-parameters).
+To customize the creation of the configuration, [learn about additional parameters you may use](./use-gitops-connected-cluster.md#additional-parameters).
## Validate the Configuration
-Using the Azure CLI, validate that the `sourceControlConfiguration` was successfully created.
+Using the Azure CLI, verify the `sourceControlConfiguration` was successfully created.
```console az k8sconfiguration show --name azure-arc-sample --cluster-name AzureArcTest1 --resource-group AzureArcTest --cluster-type connectedClusters
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/manage-howto-migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-howto-migrate.md
@@ -0,0 +1,34 @@
+
+ Title: How to migrate Azure Arc enabled servers across regions
+description: Learn how to migrate an Azure Arc enabled server from one region to another.
Last updated : 02/10/2021+++
+# How to migrate Azure Arc enabled servers across regions
+
+There are scenarios in which you'd want to move your existing Azure Arc enabled server from one region to another. For example, the machine was registered in the wrong region, or you need to move it to improve manageability or for governance reasons.
+
+To migrate an Azure Arc enabled server from one Azure region to another, you have to uninstall the VM extensions, delete the resource in Azure, and re-create it in the other region. Before you perform these steps, you should audit the machine to verify which VM extensions are installed.
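One way to audit the installed extensions is the `connectedmachine` Azure CLI extension. The following is a sketch under that assumption; the machine name is a placeholder and the CLI extension must be installed first.

```azurecli
# Requires the connectedmachine CLI extension: az extension add --name connectedmachine
az connectedmachine extension list \
  --machine-name MyArcServer \
  --resource-group AzureArcTest \
  --output table
```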
+
+> [!NOTE]
+> While installed extensions continue to run and perform their normal operation after this procedure is complete, you won't be able to manage them. If you attempt to redeploy the extensions on the machine, you may experience unpredictable behavior.
+
+## Move machine to other region
+
+> [!NOTE]
+> This operation results in downtime during the migration.
+
+1. Remove VM extensions installed from the [Azure portal](manage-vm-extensions-portal.md#uninstall-extension), using the [Azure CLI](manage-vm-extensions-cli.md#remove-an-installed-extension), or using [Azure PowerShell](manage-vm-extensions-powershell.md#remove-an-installed-extension).
+
+2. Use the **azcmagent** tool with the [Disconnect](manage-agent.md#disconnect) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. Disconnecting the machine from Arc enabled servers does not remove the Connected Machine agent, and you do not need to remove the agent as part of this process. You can run this manually while logged on interactively, or automate using the same service principal you used to onboard multiple agents, or with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md). If you did not use a service principal to register the machine with Azure Arc enabled servers, see the following [article](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to create a service principal.
+
+3. Re-register the Connected Machine agent with Arc enabled servers in the other region. Run the `azcmagent` tool with the [Connect](manage-agent.md#connect) parameter to complete this step (a sketch of steps 2 and 3 follows this list).
+
+4. Redeploy the VM extensions that were originally deployed to the machine from Arc enabled servers. If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure policy, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
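A minimal sketch of steps 2 and 3, assuming a service principal is used for both operations; the credential values, resource group, and region are placeholders, and flag availability may vary by `azcmagent` version.

```console
# Step 2: disconnect the machine and delete its Azure resource (run on the machine itself).
azcmagent disconnect \
  --service-principal-id "<appId>" \
  --service-principal-secret "<secret>"

# Step 3: re-register the machine with Arc enabled servers in the target region.
azcmagent connect \
  --service-principal-id "<appId>" \
  --service-principal-secret "<secret>" \
  --tenant-id "<tenantId>" \
  --subscription-id "<subscriptionId>" \
  --resource-group "AzureArcTest" \
  --location "westeurope"
```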
+
+## Next steps
+
+* Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
+
+* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with the [Azure Monitor for VMs](../../azure-monitor/insights/vminsights-enable-policy.md) policy, and much more.
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-high-availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-high-availability.md
@@ -16,8 +16,8 @@ Azure Cache for Redis implements high availability by using multiple VMs, called
| Option | Description | Availability | Standard | Premium | Enterprise |
| - | - | - | :---: | :---: | :---: |
| [Standard replication](#standard-replication)| Dual-node replicated configuration in a single datacenter with automatic failover | 99.9% |✔|✔|-|
-| [Zone redundancy](#zone-redundancy) | Multi-node replicated configuration across AZs, with automatic failover | 99.95% (Premium tier), 99.99% (Enterprise tiers) |-|Preview|✔|
-| [Geo-replication](#geo-replication) | Linked cache instances in two regions, with user-controlled failover | 99.99% (Premium tier) |-|✔|-|
+| [Zone redundancy](#zone-redundancy) | Multi-node replicated configuration across AZs, with automatic failover | 99.95% (Premium tier), 99.99% (Enterprise tiers) |-|Preview|Preview|
+| [Geo-replication](#geo-replication) | Linked cache instances in two regions, with user-controlled failover | 99.9% (Premium tier, single region) |-|✔|-|
## Standard replication
@@ -60,7 +60,7 @@ Azure Cache for Redis distributes nodes in a zone redundant cache in a round-rob
A zone redundant cache provides automatic failover. When the current primary node is unavailable, one of the replicas will take over. Your application may experience higher cache response time if the new primary node is located in a different AZ. AZs are geographically separated. Switching from one AZ to another alters the physical distance between where your application and cache are hosted. This change impacts round-trip network latencies from your application to the cache. The extra latency is expected to fall within an acceptable range for most applications. We recommend that you test your application to ensure that it can perform well with a zone-redundant cache.
-### Enterprise tiers
+### Enterprise and Enterprise Flash tiers
A cache in either Enterprise tier runs on a Redis Enterprise cluster. It requires an odd number of server nodes at all times to form a quorum. By default, it's comprised of three nodes, each hosted on a dedicated VM. An Enterprise cache has two same-sized *data nodes* and one smaller *quorum node*. An Enterprise Flash cache has three same-sized data nodes. The Enterprise cluster divides Redis data into partitions internally. Each partition has a *primary* and at least one *replica*. Each data node holds one or more partitions. The Enterprise cluster ensures that the primary and replica(s) of any partition are never colocated on the same data node. Partitions replicate data asynchronously from primaries to their corresponding replicas.
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-how-to-geo-replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
@@ -1,15 +1,16 @@
Title: Configure geo-replication for a Premium Azure Cache for Redis instance
+ Title: Configure geo-replication for Premium Azure Cache for Redis instances
description: Learn how to replicate your Azure Cache for Redis Premium instances across Azure regions + Last updated 02/08/2021
-# Configure geo-replication for a Premium Azure Cache for Redis instance
+# Configure geo-replication for Premium Azure Cache for Redis instances
-In this article, you'll learn how to configure a geo-replicated Azure Cache instance using the Azure portal.
+In this article, you'll learn how to configure a geo-replicated Azure Cache using the Azure portal.
Geo-replication links together two Premium Azure Cache for Redis instances and creates a data replication relationship. These cache instances are usually located in different Azure regions, though they aren't required to. One instance acts as the primary, and the other as the secondary. The primary handles read and write requests and propagates changes to the secondary. This process continues until the link between the two instances is removed.
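The link can also be created with the Azure CLI instead of the portal. The following is a sketch, assuming two existing Premium caches; the cache names, resource group, and the secondary cache's resource ID are placeholders.

```azurecli
# Link an existing Premium cache as the secondary of another Premium cache.
az redis server-link create \
  --name contoso-cache-primary \
  --resource-group ContosoRG \
  --replication-role Secondary \
  --server-to-link /subscriptions/<subscription-id>/resourceGroups/ContosoRG2/providers/Microsoft.Cache/Redis/contoso-cache-secondary
```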
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-overview.md
@@ -41,8 +41,8 @@ Azure Cache for Redis is available in the following tiers:
| Basic | An OSS Redis cache running on a single VM. This tier has no service-level agreement (SLA) and is ideal for development/test and non-critical workloads. | | Standard | An OSS Redis cache running on two VMs in a replicated configuration. | | Premium | High-performance OSS Redis caches. This tier offers higher throughput, lower latency, better availability, and more features. Premium caches are deployed on more powerful VMs compared to those for Basic or Standard caches. |
-| Enterprise | High-performance caches powered by Redis Labs' Redis Enterprise software. This tier supports Redis modules including RediSearch, RedisBloom, and RedisTimeSeries. In addition, it offers even higher availability than the Premium tier. |
-| Enterprise Flash | Cost-effective large caches powered by Redis Labs' Redis Enterprise software. This tier extends Redis data storage to non-volatile memory, which is cheaper than DRAM, on a VM. It reduces the overall per-GB memory cost. |
+| Enterprise (Preview) | High-performance caches powered by Redis Labs' Redis Enterprise software. This tier supports Redis modules including RediSearch, RedisBloom, and RedisTimeSeries. In addition, it offers even higher availability than the Premium tier. |
+| Enterprise Flash (Preview) | Cost-effective large caches powered by Redis Labs' Redis Enterprise software. This tier extends Redis data storage to non-volatile memory, which is cheaper than DRAM, on a VM. It reduces the overall per-GB memory cost. |
### Feature comparison The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/) provides a detailed comparison of each tier. The following table helps describe some of the features supported by tier:
@@ -52,11 +52,11 @@ The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
| [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/cache/v1_0/) |-|✔|✔|✔|✔|
| Data encryption |✔|✔|✔|✔|✔|
| [Network isolation](cache-how-to-premium-vnet.md) |✔|✔|✔|✔|✔|
-| [Scaling](cache-how-to-scale.md) |✔|✔|✔|✔|✔|
-| [Zone redundancy](cache-how-to-zone-redundancy.md) |-|-|Preview|✔|✔|
-| [Geo-replication](cache-how-to-geo-replication.md) |-|-|✔|✔|✔|
-| [Data persistence](cache-how-to-premium-persistence.md) |-|-|✔|Preview|Preview|
+| [Scaling](cache-how-to-scale.md) |✔|✔|✔|-|-|
| [OSS cluster](cache-how-to-premium-clustering.md) |-|-|✔|✔|✔|
+| [Data persistence](cache-how-to-premium-persistence.md) |-|-|✔|-|-|
+| [Zone redundancy](cache-how-to-zone-redundancy.md) |-|-|Preview|Preview|Preview|
+| [Geo-replication](cache-how-to-geo-replication.md) |-|-|✔|-|-|
| [Modules](https://redis.io/modules) |-|-|-|✔|✔|
| [Import/Export](cache-how-to-import-export-data.md) |-|-|✔|✔|✔|
| [Scheduled updates](cache-administration.md#schedule-updates) |✔|✔|✔|-|-|
@@ -76,7 +76,7 @@ You should consider the following when choosing an Azure Cache for Redis tier:
You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier is not supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to automate a scaling operation](cache-how-to-scale.md#how-to-automate-a-scaling-operation).
-### Enterprise tier requirements
+### Enterprise and Enterprise Flash tier requirements
The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis from Redis Labs. Customers will obtain and pay for a license to this software through an Azure Marketplace offer. Azure Cache for Redis will facilitate the license acquisition so that you won't have to do it separately. To purchase in the Azure Marketplace, you must have the following prerequisites: * Your Azure subscription has a valid payment instrument. Azure credits or free MSDN subscriptions are not supported.
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/quickstart-create-redis-enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md
@@ -10,7 +10,7 @@
Last updated 02/08/2021 #Customer intent: As a developer new to Azure Cache for Redis, I want to create an instance of Azure Cache for Redis Enterprise tier.
-# Quickstart: Create a Redis Enterprise cache
+# Quickstart: Create a Redis Enterprise cache (Preview)
Azure Cache for Redis' Enterprise tiers provide fully integrated and managed [Redis Enterprise](https://redislabs.com/redis-enterprise/) on Azure. They're currently available as a preview. There are two new tiers in this preview: * Enterprise, which uses volatile memory (DRAM) on a virtual machine to store data
@@ -18,7 +18,7 @@ Azure Cache for Redis' Enterprise tiers provide fully integrated and managed [Re
## Prerequisites
-You'll need an Azure subscription before you begin. If you don't have one, create an [account](https://azure.microsoft.com/). For more information, see [Enterprise tier requirements](cache-overview.md#enterprise-tier-requirements).
+You'll need an Azure subscription before you begin. If you don't have one, create an [account](https://azure.microsoft.com/). For more information, see [Enterprise tier requirements](cache-overview.md#enterprise-and-enterprise-flash-tier-requirements).
## Create a cache 1. To create a cache, sign in to the Azure portal using the link in your preview invitation and select **Create a resource**.
azure-government https://docs.microsoft.com/en-us/azure/azure-government/compare-azure-government-global-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
@@ -464,7 +464,8 @@ Azure Security Center is deployed on Azure Government regions but not DoD region
### [Azure Sentinel](../sentinel/overview.md) The following **features have known limitations** in Azure Government: - Office 365 data connector
- - The Office 365 data connector can be used only for [Office 365 GCC](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc), [Office 365 GCC High](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod), and [Office 365 DoD](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod).
+ - The Office 365 data connector can be used only for [Office 365 GCC High and Office 365 DoD](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod). Office 365 GCC can be accessed only from global (commercial) Azure.
+ - AWS CloudTrail data connector - The AWS CloudTrail data connector can be used only for [AWS in the Public Sector](https://aws.amazon.com/government-education/).
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/how-to-use-map-control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-map-control.md
@@ -15,6 +15,8 @@
The Map Control client-side JavaScript library allows you to render maps and embedded Azure Maps functionality into your web or mobile application.
+This documentation uses the Azure Maps Web SDK; however, the Azure Maps services can be used with any map control. [Here](open-source-projects.md#third-part-map-control-plugins) are some popular open-source map controls for which the Azure Maps team has created plugins.
+ ## Prerequisites To use the Map Control in a web page, you must have one of the following prerequisites:
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/migrate-from-bing-maps-web-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-bing-maps-web-app.md
@@ -28,9 +28,9 @@ Web apps that use Bing Maps often use the Bing Maps V8 JavaScript SDK. The Azure
If migrating an existing web application, check to see if it is using an open-source map control library such as Cesium, Leaflet, and OpenLayers. If it is and you would prefer to continue to use that library, you can connect it to the Azure Maps tile services ([road tiles](/rest/api/maps/render/getmaptile) \| [satellite tiles](/rest/api/maps/render/getmapimagerytile)). The links below provide details on how to use Azure Maps in some commonly used open-source map control libraries.
-* Cesium - A 3D map control for the web. [Code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Raster%20Tiles%20in%20Cesium%20JS) \| [Documentation](https://cesiumjs.org/)
-* Leaflet – Lightweight 2D map control for the web. [Code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Azure%20Maps%20Raster%20Tiles%20in%20Leaflet%20JS) \| [Documentation](https://leafletjs.com/)
-* OpenLayers - A 2D map control for the web that supports projections. [Code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Raster%20Tiles%20in%20OpenLayers) \| [Documentation](https://openlayers.org/)
+* [Cesium](https://cesiumjs.org/) - A 3D map control for the web. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=Cesium) \| [Plugin repo]()
+* [Leaflet](https://leafletjs.com/) – Lightweight 2D map control for the web. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=leaflet) \| [Plugin repo]()
+* [OpenLayers](https://openlayers.org/) - A 2D map control for the web that supports projections. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=openlayers) \| [Plugin repo]()
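For context on what these controls consume, the Azure Maps tile services referenced above are plain REST endpoints that any map control can request. The following is a minimal PowerShell sketch that downloads a single road tile; the tileset ID, tile coordinates, and query parameters are illustrative assumptions and should be verified against the Render API references linked above.

```powershell
# Illustrative sketch: fetch one Azure Maps road tile, the same kind of request an
# open-source control such as Leaflet or OpenLayers would make through a plugin.
# The subscription key, tileset ID, and tile coordinates below are placeholder assumptions.
$subscriptionKey = "<your-azure-maps-subscription-key>"
$tileUrl = "https://atlas.microsoft.com/map/tile" +
    "?api-version=2.0&tilesetId=microsoft.base.road" +
    "&zoom=6&x=10&y=22&subscription-key=$subscriptionKey"

# Save the tile image locally to confirm the tile service responds.
Invoke-WebRequest -Uri $tileUrl -OutFile "road-tile.png"
```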
If developing using a JavaScript framework, one of the following open-source projects may be useful:
@@ -59,7 +59,7 @@ The following table lists key API features in the Bing Maps V8 JavaScript SDK an
| Tile Layers | ✓ | | KML Layer | ✓ | | Contour layer | [Samples](https://azuremapscodesamples.azurewebsites.net/?search=contour) |
-| Data binning layer | [Samples](https://azuremapscodesamples.azurewebsites.net/?search=data%20binning) |
+| Data binning layer | Included in the open-source Azure Maps [Gridded Data Source module](https://github.com/Azure-Samples/azure-maps-gridded-data-source) |
| Animated tile layer | Included in the open-source Azure Maps [Animation module](https://github.com/Azure-Samples/azure-maps-animations) | | Drawing tools | ✓ | | Geocoder service | ✓ |
@@ -67,10 +67,10 @@ The following table lists key API features in the Bing Maps V8 JavaScript SDK an
| Distance Matrix service | ✓ | | Spatial Data service | N/A | | Satellite/Aerial imagery | ✓ |
-| Birds eye imagery | Planned |
-| Streetside imagery | Planned |
+| Birds eye imagery | N/A |
+| Streetside imagery | N/A |
| GeoJSON support | ✓ |
-| GeoXML support | ✓ |
+| GeoXML support | ✓ [Spatial IO module](how-to-use-spatial-io-module.md) |
| Well-Known Text support | ✓ | | Custom map styles | Partial |
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/migrate-from-bing-maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-bing-maps.md
@@ -42,8 +42,8 @@ The following table provides a high-level list of Bing Maps features and the rel
| Web SDK | ✓ | | Android SDK | ✓ | | iOS SDK | Planned |
-| UWP SDK | Planned |
-| WPF SDK | Planned |
+| UWP SDK | N/A |
+| WPF SDK | N/A |
| REST Service APIs | ✓ | | Autosuggest | ✓ | | Directions (including truck) | ✓ |
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/migrate-from-google-maps-web-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-google-maps-web-services.md
@@ -49,7 +49,7 @@ The table shows the Azure Maps service APIs, which have a similar functionality
The following service APIs aren't currently available in Azure Maps: -- Geolocation
+- Geolocation - Azure Maps does have a service called Geolocation, but it currently provides only IP address to location information and does not support cell tower or WiFi triangulation (see the sketch after this list).
- Places details and photos - Phone numbers and website URL are available in the Azure Maps search API. - Map URLs - Nearest Roads - This is achievable using the Web SDK as shown [here](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Basic%20snap%20to%20road%20logic), but not available as a service currently.
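As a rough illustration of the IP-to-location behavior noted in the Geolocation bullet above, the following hedged PowerShell sketch calls the Azure Maps Geolocation endpoint; the endpoint path, api-version, and response fields are assumptions to verify against the Geolocation API reference.

```powershell
# Illustrative sketch: resolve an IP address to a country/region with the Azure Maps
# Geolocation service. The endpoint, api-version, and response shape are assumptions to verify.
$subscriptionKey = "<your-azure-maps-subscription-key>"
$ipAddress = "2001:4898:80e8:b::189"   # example IP address

$response = Invoke-RestMethod -Uri ("https://atlas.microsoft.com/geolocation/ip/json" +
    "?api-version=1.0&ip=$ipAddress&subscription-key=$subscriptionKey")

# The service returns the ISO country/region code for the IP address; it does not
# return cell-tower or WiFi-based positions.
$response.countryRegion.isoCode
```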
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/migrate-from-google-maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-google-maps.md
@@ -43,7 +43,7 @@ The table provides a high-level list of Azure Maps features, which correspond to
| REST Service APIs | ✓ | | Directions (Routing) | ✓ | | Distance Matrix | ✓ |
-| Elevation | Planned |
+| Elevation | ✓ (Preview) |
| Geocoding (Forward/Reverse) | ✓ | | Geolocation | N/A | | Nearest Roads | ✓ |
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/open-source-projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/open-source-projects.md
@@ -61,11 +61,14 @@ The following is a list of open-source projects that extend the capabilities of
| [Implement IoT spatial analytics using Azure Maps](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing) | Tracking and capturing relevant events that occur in space and time is a common IoT scenario. | **Third party map control plugins**
+<a name="third-part-map-control-plugins"></a>
| Project Name | Description | |-|-|
+| [Azure Maps Cesium plugin](https://github.com/azure-samples/azure-maps-cesium) | A [Cesium JS](https://cesium.com/cesiumjs/) plugin that makes it easy to integrate Azure Maps services such as [tile layers](https://docs.microsoft.com/rest/api/maps/renderv2/getmaptilepreview) and [geocoding services](https://docs.microsoft.com/rest/api/maps/search). |
| [Azure Maps Leaflet plugin](https://github.com/azure-samples/azure-maps-leaflet) | A [leaflet](https://leafletjs.com/) JavaScript plugin that makes it easy to overlay tile layers from the [Azure Maps tile services](https://docs.microsoft.com/rest/api/maps/renderv2/getmaptilepreview). |
-
+ | [Azure Maps OpenLayers plugin](https://github.com/azure-samples/azure-maps-openlayers) | An [OpenLayers](https://www.openlayers.org/) JavaScript plugin that makes it easy to overlay tile layers from the [Azure Maps tile services](https://docs.microsoft.com/rest/api/maps/renderv2/getmaptilepreview). |
+ **Tools and resources** | Project Name | Description |
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/quick-demo-map-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/quick-demo-map-app.md
@@ -19,6 +19,8 @@ This article shows you how to use Azure Maps to create a map that gives users an
* Get your primary key to use in the demo web application. * Download and open the demo map application.
+This quickstart uses the Azure Maps Web SDK; however, the Azure Maps services can be used with any map control. [Here](open-source-projects.md#third-part-map-control-plugins) are some popular open-source map controls that the Azure Maps team has created plugins for.
+ ## Prerequisites * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/supported-browsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/supported-browsers.md
@@ -69,6 +69,8 @@ You might want to target older browsers that don't support WebGL or that have on
Additional code samples using Azure Maps in Leaflet can be found [here](https://azuremapscodesamples.azurewebsites.net/?search=leaflet).
+[Here](open-source-projects.md#third-part-map-control-plugins) are some popular open-source map controls that the Azure Maps team has created plugins for.
+ ## Next steps Learn more about the Azure Maps Web SDK:
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/annotations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/annotations.md
@@ -77,7 +77,7 @@ Create a separate API key for each of your Azure Pipelines release templates.
Now, whenever you use the release template to deploy a new release, an annotation is sent to Application Insights. The annotations can be viewed in the following locations:
-The usage pane where you also have the ability to manually create release annotations:
+The **Usage** pane where you also have the ability to manually create release annotations:
![Screenshot of bar chart with number of user visits displayed over a period of hours. Release annotations appear as green checkmarks above the chart indicating the moment in time that a release occurred](./media/annotations/usage-pane.png)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/worker-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/worker-service.md
@@ -104,9 +104,9 @@ Full example is shared [here](https://github.com/microsoft/ApplicationInsights-d
```json { "ApplicationInsights":
- {
+ {
"InstrumentationKey": "putinstrumentationkeyhere"
- },
+ },
"Logging": { "LogLevel":
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-collection-rule-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/data-collection-rule-overview.md
@@ -115,7 +115,7 @@ The sample data collection rule below is for virtual machines with Azure Managem
{ "name": "cloudSecurityTeamEvents", "streams": [
- "Microsoft-WindowsEvent"
+ "Microsoft-Event"
], "scheduledTransferPeriod": "PT1M", "xPathQueries": [
@@ -125,7 +125,7 @@ The sample data collection rule below is for virtual machines with Azure Managem
{ "name": "appTeam1AppEvents", "streams": [
- "Microsoft-WindowsEvent"
+ "Microsoft-Event"
], "scheduledTransferPeriod": "PT5M", "xPathQueries": [
@@ -178,7 +178,7 @@ The sample data collection rule below is for virtual machines with Azure Managem
"streams": [ "Microsoft-Perf", "Microsoft-Syslog",
- "Microsoft-WindowsEvent"
+ "Microsoft-Event"
], "destinations": [ "centralWorkspace"
@@ -192,4 +192,4 @@ The sample data collection rule below is for virtual machines with Azure Managem
## Next steps -- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) and an association to it from a virtual machine using the Azure Monitor agent.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) and an association to it from a virtual machine using the Azure Monitor agent.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/logs-export-logic-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/logs-export-logic-app.md
@@ -65,7 +65,7 @@ Go to **Logic Apps** in the Azure portal and click **Add**. Select a **Subscript
Click **Review + create** and then **Create**. When the deployment is complete, click **Go to resource** to open the **Logic Apps Designer**. ## Create a trigger for the logic app
-Under **Start with a common trigger**, select **Recurrence**. This creates a logic app that automatically runs at a regular interval. In the **Frequency** box of the action, select **Hour** and in the **Interval** box, enter **1** to run the workflow once per day.
+Under **Start with a common trigger**, select **Recurrence**. This creates a logic app that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day** and in the **Interval** box, enter **1** to run the workflow once per day.
![Recurrence action](media/logs-export-logicapp/recurrence-action.png)
@@ -208,4 +208,4 @@ Go to the **Storage accounts** menu in the Azure portal and select your storage
- Learn more about [log queries in Azure Monitor](../log-query/log-query-overview.md). - Learn more about [Logic Apps](../../logic-apps/index.yml)-- Learn more about [Power Automate](https://flow.microsoft.com).
+- Learn more about [Power Automate](https://flow.microsoft.com).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/whats-new.md
@@ -5,12 +5,56 @@
Previously updated : 01/11/2021 Last updated : 02/10/2021 # What's new in Azure Monitor documentation? This article lists Azure Monitor articles that are either new or have been significantly updated. It will be refreshed the first week of each month to include article updates from the previous month.
+## January 2021
+
+### General
+- [Azure Monitor FAQ](faq.md) - Added entry on device information for Application Insights.
+### Agents
+- [Collecting Event Tracing for Windows (ETW) Events for analysis in Azure Monitor Logs](platform/data-sources-event-tracing-windows.md) - New article.
+- [Data Collection Rules in Azure Monitor (preview)](platform/data-collection-rule-overview.md) - Added links to PowerShell and CLI samples.
+
+### Alerts
+- [Configure Azure to connect ITSM tools using Secure Export](platform/itsm-connector-secure-webhook-connections-azure-configuration.md) - New article.
+- [Connector status errors in the ITSMC dashboard](platform/itsmc-dashboard-errors.md) - New article.
+- [Investigate errors by using the ITSMC dashboard](platform/itsmc-dashboard.md) - New article.
+- [Troubleshooting Azure metric alerts](platform/alerts-troubleshoot-metric.md) - Added sections for dynamic thresholds.
+- [Troubleshoot problems in IT Service Management Connector](platform/itsmc-troubleshoot-overview.md) - New article.
+
+### Application Insights
+- [Azure Application Insights telemetry correlation](app/correlation.md) - Added trace correlation when one module calls another in OpenCensus Python.
+- [Application Insights for web pages](app/javascript.md) - New article.
+- [Click Analytics Auto-collection plugin for Application Insights JavaScript SDK](app/javascript-click-analytics-plugin.md) - New article.
+- [Monitor your apps without code changes - auto-instrumentation for Azure Monitor Application Insights](app/codeless-overview.md) - Added Python column.
+- [React plugin for Application Insights JavaScript SDK](app/javascript-react-plugin.md) - New article.
+- [Telemetry processors (preview) - Azure Monitor Application Insights for Java](app/java-standalone-telemetry-processors.md) - Rewritten.
+- [Usage analysis with Azure Application Insights](app/usage-overview.md) - New article.
+- [Use Application Change Analysis in Azure Monitor to find web-app issues](app/change-analysis.md) - Added error messages.
++
+### Insights
+- [Azure Monitor for Azure Data Explorer (preview)](insights/data-explorer.md) - New article.
+
+### Logs
+- [Azure Monitor customer-managed key](platform/customer-managed-keys.md) - Introduced user-assigned managed identity.
+- [Azure Monitor Logs Dedicated Clusters](log-query/logs-dedicated-clusters.md) - Updated response code.
+- [Cross service query - Azure Monitor and Azure Data Explorer (Preview)](platform/azure-monitor-troubleshooting-logs.md) - New article.
+
+### Metrics
+- [Azure Monitor Metrics metrics aggregation and display explained](platform/metrics-aggregation-explained.md) - New article.
+
+### Platform Logs
+- [Azure Monitor Resource Logs supported services and categories](platform/resource-logs-categories.md) - New article.
+
+### Visualizations
+- [Azure Monitor workbooks data sources](platform/workbooks-data-sources.md) - Added merge and change analysis.
++ ## December 2020 ### General
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-manage-snapshots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-manage-snapshots.md
@@ -13,7 +13,7 @@
na ms.devlang: na Previously updated : 11/18/2020 Last updated : 02/10/2021 # Manage snapshots by using Azure NetApp Files
@@ -182,7 +182,9 @@ If you do not want to [restore the entire snapshot to a volume](#restore-a-snaps
The mounted volume contains a snapshot directory named `.snapshot` (in NFS clients) or `~snapshot` (in SMB clients) that is accessible to the client. The snapshot directory contains subdirectories corresponding to the snapshots of the volume. Each subdirectory contains the files of the snapshot. If you accidentally delete or overwrite a file, you can restore the file to the parent read-write directory by copying the file from a snapshot subdirectory to the read-write directory.
-If you do not see the snapshot directory, it might be hidden because the Hide Snapshot Path option is currently enabled. You can [edit the Hide Snapshot Path option](#edit-the-hide-snapshot-path-option) to disable it.
+You can control access to the snapshot directories by using the [Hide Snapshot Path option](#edit-the-hide-snapshot-path-option). This option controls whether the directory should be hidden from the clients. Therefore, it also controls access to files and folders in the snapshots.
+
+NFSv4.1 does not show the `.snapshot` directory (`ls -la`). However, when the Hide Snapshot Path option is not set, you can still access the `.snapshot` directory via NFSv4.1 by using the `cd <snapshot-path>` command from the client command line.
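For the SMB case, restoring a single file is simply a copy out of the `~snapshot` directory back into the read-write directory, as described above. The following is a minimal PowerShell sketch; the share path, snapshot name, and file name are hypothetical.

```powershell
# Hypothetical paths: adjust the mounted SMB share, snapshot name, and file name for your volume.
$share        = "\\anf-smb.contoso.com\volume1"
$snapshotName = "daily-2021-02-10-0100"

# Copy the file from the read-only snapshot directory back into the read-write directory.
Copy-Item -Path (Join-Path $share "~snapshot\$snapshotName\reports\budget.xlsx") `
          -Destination (Join-Path $share "reports\budget.xlsx")
```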
### Restore a file by using a Linux NFS client
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-troubleshoot-resource-provider-errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-troubleshoot-resource-provider-errors.md
@@ -14,7 +14,7 @@
na ms.devlang: na Previously updated : 10/18/2019 Last updated : 02/10/2021 # Troubleshoot Azure NetApp Files Resource Provider errors
@@ -23,6 +23,16 @@ This article describes common Azure NetApp Files Resource Provider errors, their
## Common Azure NetApp Files Resource Provider errors
+***Creation of `netAppAccounts` has been restricted in this region.***
+
+This situation occurs when the subscription is waitlisted for Azure NetApp Files and the user attempts to create a NetApp account.
+
+* Cause:
+Azure Resource Provider for Azure NetApp Files is not registered successfully.
+
+* Solution:
+Complete all the steps described in [Azure NetApp resource provider registration](azure-netapp-files-register.md#resource-provider) after your subscription is waitlisted.
+ ***BareMetalTenantId cannot be changed.*** This error occurs when you try to update or patch a volume and the `BaremetalTenantId` property has a changed value.
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-videos.md
@@ -21,5 +21,5 @@ This article provides references to videos that contain in-depth discussions abo
Several videos are available to help you learn more about Azure NetApp Files:
-* [Microsoft Ignite 2019: Run your most demanding enterprise file workloads with Azure NetApp Files](https://myignite.techcommunity.microsoft.com/sessions/82938?source=sessions) provides a brief introduction to Azure NetApp Files, including use cases and demo, and then goes deeper on the capabilities and roadmap.
+* [Microsoft Ignite 2019: Run your most demanding enterprise file workloads with Azure NetApp Files](https://azure.microsoft.com/resources/videos/ignite-2018-taking-on-the-most-demanding-enterprise-file-workloads-with-azure-netapp-files/) provides a brief introduction to Azure NetApp Files, including use cases and demo, and then goes deeper on the capabilities and roadmap.
* [Azure NetApp Files talks by Kirk Ryan](https://www.youtube.com/channel/UCq1jZkyVXqMsMSIvScBE2qg/playlists) are a series of videos, tutorials, and demonstrations dedicated to Azure NetApp Files.
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/volume-hard-quota-guidelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/volume-hard-quota-guidelines.md
@@ -21,7 +21,7 @@
From the beginning of the service, Azure NetApp Files has been using a capacity-pool provisioning and automatic growth mechanism. Azure NetApp Files volumes are thinly provisioned on an underlying, customer-provisioned capacity pool of a selected tier and size. Volume sizes (quotas) are used to provide performance and capacity, and the quotas can be adjusted on-the-fly at any time. This behavior means that, currently, the volume quota is a performance lever used to control bandwidth to the volume. Currently, underlying capacity pools automatically grow when the capacity fills up. > [!IMPORTANT]
-> The Azure NetApp Files behavior of volume and capacity pool provisioning will change to a *manual* and *controllable* mechanism. **Starting from March 15th, 2021, volume sizes (quota) will manage bandwidth performance, as well as provisioned capacity, and underlying capacity pools will no longer grow automatically.**
+> The Azure NetApp Files behavior of volume and capacity pool provisioning will change to a *manual* and *controllable* mechanism. **Starting from April 1, 2021, volume sizes (quota) will manage bandwidth performance, as well as provisioned capacity, and underlying capacity pools will no longer grow automatically.**
## Reasons for the change to volume hard quota
@@ -273,4 +273,4 @@ You can submit bugs and feature requests by clicking **New Issue** on the [ANFCa
## Next steps * [Resize a capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md)
-* [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md)
+* [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
@@ -359,6 +359,9 @@ For SQL Database limits, see [SQL Database resource limits for single databases]
For Azure Synapse Analytics limits, see [Azure Synapse resource limits](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-service-capacity-limits.md).
+## Azure Files and Azure File Sync
+To learn more about the limits for Azure Files and File Sync, see [Azure Files scalability and performance targets](../../storage/files/storage-files-scale-targets.md).
+ ## Storage limits <!--like # storage accts -->
@@ -374,16 +377,6 @@ For more information on limits for standard storage accounts, see [Scalability t
[!INCLUDE [storage-blob-scale-targets](../../../includes/storage-blob-scale-targets.md)]
-### Azure Files limits
-
-For more information on Azure Files limits, see [Azure Files scalability and performance targets](../../storage/files/storage-files-scale-targets.md).
--
-### Azure File Sync limits
-- ### Azure Queue storage limits [!INCLUDE [storage-queues-scale-targets](../../../includes/storage-queues-scale-targets.md)]
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/lock-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/lock-resources.md
@@ -27,7 +27,7 @@ Resource Manager locks apply only to operations that happen in the management pl
Applying locks can lead to unexpected results because some operations that don't seem to modify the resource actually require actions that are blocked by the lock. Locks will prevent any operations that require a POST request to the Azure Resource Manager API. Some common examples of the operations that are blocked by locks are:
-* A read-only lock on a **storage account** prevents all users from listing the keys. The list keys operation is handled through a POST request because the returned keys are available for write operations.
+* A read-only lock on a **storage account** prevents users from listing the account keys. The Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is handled through a POST request to protect access to the account keys, which provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who do not possess the account keys must use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue).
* A read-only lock on an **App Service** resource prevents Visual Studio Server Explorer from displaying files for the resource because that interaction requires write access.
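As a minimal sketch of the storage-account scenario in the first bullet above (resource names are hypothetical), the following PowerShell applies a read-only lock and then attempts to list the account keys, which fails because key listing is a POST operation blocked by the lock:

```powershell
# Hypothetical resource names; adjust for your environment.
$rg      = "myResourceGroup"
$account = "mystorageaccount"

# Apply a read-only lock to the storage account.
New-AzResourceLock -LockName "storage-read-only" -LockLevel ReadOnly `
    -ResourceGroupName $rg -ResourceName $account `
    -ResourceType "Microsoft.Storage/storageAccounts" -Force

# Listing keys is a POST operation, so it is blocked while the read-only lock is in place.
Get-AzStorageAccountKey -ResourceGroupName $rg -Name $account
```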
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cli.md
@@ -101,7 +101,7 @@ The preceding example requires a publicly accessible URI for the template, which
To deploy remote linked templates with relative path that are stored in a storage account, use `query-string` to specify the SAS token:
-```azurepowershell
+```azurecli-interactive
az deployment group create \ --name linkedTemplateWithRelativePath \ --resource-group myResourceGroup \
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-functions-resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
@@ -241,7 +241,7 @@ The possible uses of list* are shown in the following table.
| Microsoft.DevTestLab/labs/virtualMachines | [ListApplicableSchedules](/rest/api/dtl/virtualmachines/listapplicableschedules) | | Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2020-06-01-preview/databaseaccounts/listconnectionstrings) | | Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2020-06-01-preview/databaseaccounts/listkeys) |
-| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2020-04-01/notebookworkspaces/listconnectioninfo) |
+| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2020-06-01/notebookworkspaces/listconnectioninfo) |
| Microsoft.DomainRegistration | [listDomainRecommendations](/rest/api/appservice/domains/listrecommendations) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) | | Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/version2020-06-01/domains/listsharedaccesskeys) |
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-user-defined-functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-user-defined-functions.md
@@ -2,8 +2,9 @@
Title: User-defined functions in templates description: Describes how to define and use user-defined functions in an Azure Resource Manager template (ARM template). Previously updated : 03/09/2020 Last updated : 02/11/2021 + # User-defined functions in ARM template Within your template, you can create your own functions. These functions are available for use in your template. User-defined functions are separate from the [standard template functions](template-functions.md) that are automatically available within your template. Create your own functions when you have complicated expressions that are used repeatedly in your template.
@@ -38,7 +39,7 @@ Your functions require a namespace value to avoid naming conflicts with template
## Use the function
-The following example shows a template that includes a user-defined function. It uses that function to get a unique name for a storage account. The template has a parameter named `storageNamePrefix` that it passes as a parameter to the function.
+The following example shows a template that includes a user-defined function to get a unique name for a storage account. The template has a parameter named `storageNamePrefix` that is passed as a parameter to the function.
```json {
@@ -87,6 +88,12 @@ The following example shows a template that includes a user-defined function. It
} ```
+During deployment, the `storageNamePrefix` parameter is passed to the function:
+
+* The template defines a parameter named `storageNamePrefix`.
+* The function uses `namePrefix` because you can only use parameters defined in the function. For more information, see [Limitations](#limitations).
+* In the template's `resources` section, the `name` element uses the function and passes the `storageNamePrefix` value to the function's `namePrefix`.
+ ## Limitations When defining a user function, there are some restrictions:
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/authentication-aad-service-principal-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-service-principal-tutorial.md
@@ -8,7 +8,7 @@
Previously updated : 10/21/2020 Last updated : 02/11/2021 # Tutorial: Create Azure AD users using Azure AD applications
@@ -228,35 +228,27 @@ Once a service principal is created in Azure AD, create the user in SQL Database
```powershell # PowerShell script for creating a new SQL user called myapp using application AppSP with secret-
- $tenantId = "<TenantId>" # tenantID (Azure Directory ID) were AppSP resides
- $clientId = "<ClientId>" # AppID also ClientID for AppSP
- $clientSecret = "<ClientSecret>" # client secret for AppSP
- $Resource = "https://database.windows.net/"
+ # AppSP is part of an Azure AD admin for the Azure SQL server below
- $adalPath = "${env:ProgramFiles}\WindowsPowerShell\Modules\AzureRM.profile\5.8.3"
- # To install the latest AzureRM.profile version execute -Install-Module -Name AzureRM.profile
- $adal = "$adalPath\Microsoft.IdentityModel.Clients.ActiveDirectory.dll"
- $adalforms = "$adalPath\Microsoft.IdentityModel.Clients.ActiveDirectory.WindowsForms.dll"
- [System.Reflection.Assembly]::LoadFrom($adal) | Out-Null
- $resourceAppIdURI = 'https://database.windows.net/'
-
- # Set Authority to Azure AD Tenant
- $authority = 'https://login.windows.net/' + $tenantId
-
- $ClientCred = [Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential]::new($clientId, $clientSecret)
- $authContext = [Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext]::new($authority)
- $authResult = $authContext.AcquireTokenAsync($resourceAppIdURI,$ClientCred)
- $Tok = $authResult.Result.CreateAuthorizationHeader()
- $Tok=$Tok.Replace("Bearer ","")
- Write-host "token"
- $Tok
- Write-host " "
-
+ # Download latest MSAL - https://www.powershellgallery.com/packages/MSAL.PS
+ Import-Module MSAL.PS
+
+ $tenantId = "<TenantId>" # tenantID (Azure Directory ID) where AppSP resides
+ $clientId = "<ClientId>" # AppID also ClientID for AppSP
+ $clientSecret = "<ClientSecret>" # Client secret for AppSP
+ $scopes = "https://database.windows.net/.default" # The end-point
+
+ $result = Get-MsalToken -ClientId $clientId -ClientSecret (ConvertTo-SecureString $clientSecret -AsPlainText -Force) -TenantId $tenantId -Scopes $scopes
+
+ $Tok = $result.AccessToken
+ #Write-host "token"
+ $Tok
+
$SQLServerName = "<server name>" # Azure SQL logical server name
- Write-Host "Create SQL connectionstring"
- $conn = New-Object System.Data.SqlClient.SQLConnection
$DatabaseName = "<database name>" # Azure SQL database name
+
+ Write-Host "Create SQL connection string"
+ $conn = New-Object System.Data.SqlClient.SQLConnection
$conn.ConnectionString = "Data Source=$SQLServerName.database.windows.net;Initial Catalog=$DatabaseName;Connect Timeout=30" $conn.AccessToken = $Tok
@@ -270,20 +262,11 @@ Once a service principal is created in Azure AD, create the user in SQL Database
Write-host "results" $command.ExecuteNonQuery()
- $conn.Close()
+ $conn.Close()
``` Alternatively, you can use the code sample in the blog, [Azure AD Service Principal authentication to SQL DB - Code Sample](https://techcommunity.microsoft.com/t5/azure-sql-database/azure-ad-service-principal-authentication-to-sql-db-code-sample/ba-p/481467). Modify the script to execute a DDL statement `CREATE USER [myapp] FROM EXTERNAL PROVIDER`. The same script can be used to create a regular Azure AD user or a group in SQL Database.
- > [!NOTE]
- > If you need to install the module AzureRM.profile, you will need to open PowerShell as an administrator. You can use the following commands to automatically install the latest AzureRM.profile version, and set `$adalpath` for the above script:
- >
- > ```powershell
- > Install-Module AzureRM.profile -force
- > Import-Module AzureRM.profile
- > $version = (Get-Module -Name AzureRM.profile).Version.toString()
- > $adalPath = "${env:ProgramFiles}\WindowsPowerShell\Modules\AzureRM.profile\${version}"
- > ```
2. Check if the user *myapp* exists in the database by executing the following command:
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/authentication-aad-service-principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-service-principal.md
@@ -8,7 +8,7 @@
Previously updated : 10/21/2020 Last updated : 02/11/2021 # Azure Active Directory service principal with Azure SQL
@@ -47,7 +47,7 @@ Supporting this functionality is useful in Azure AD application automation proce
To enable an Azure AD object creation in SQL Database and Azure Synapse on behalf of an Azure AD application, the following settings are required:
-1. Assign the server identity
+1. Assign the server identity. The assigned server identity is a system-assigned managed identity (MSI). Currently, the server identity for Azure SQL does not support user-assigned managed identities (UMI).
- For a new Azure SQL logical server, execute the following PowerShell command: ```powershell
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/data-discovery-and-classification-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/data-discovery-and-classification-overview.md
@@ -11,7 +11,7 @@
Previously updated : 12/01/2020 Last updated : 02/11/2021 tags: azure-synapse # Data Discovery & Classification
@@ -178,6 +178,13 @@ You can use the REST API to programmatically manage classifications and recommen
- [List Current By Database](/rest/api/sql/sensitivitylabels/listcurrentbydatabase): Gets the current sensitivity labels of the specified database. - [List Recommended By Database](/rest/api/sql/sensitivitylabels/listrecommendedbydatabase): Gets the recommended sensitivity labels of the specified database. +
+## FAQ - Advanced classification capabilities
+
+**Question**: Will [Azure Purview](https://docs.microsoft.com/azure/purview/overview) replace SQL Data Discovery & Classification or will SQL Data Discovery & Classification be retired soon?
+**Answer**: We continue to support SQL Data Discovery & Classification and encourage you to adopt [Azure Purview](https://docs.microsoft.com/azure/purview/overview), which has richer capabilities for advanced classification and data governance. If we decide to retire any service, feature, API, or SKU, you will receive advance notice, including a migration or transition path. Learn more about Microsoft Lifecycle policies here.
++ ## <a id="next-steps"></a>Next steps - Consider configuring [Azure SQL Auditing](../../azure-sql/database/auditing-overview.md) for monitoring and auditing access to your classified sensitive data.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/sql-vulnerability-assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-vulnerability-assessment.md
@@ -1,7 +1,7 @@
Title: SQL Vulnerability Assessment
+ Title: SQL vulnerability assessment
-description: Learn how to configure SQL Vulnerability Assessment and interpret the assessment reports on Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics.
+description: Learn how to configure SQL vulnerability assessment and interpret the assessment reports on Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics.
@@ -11,24 +11,24 @@
Previously updated : 09/21/2020 Last updated : 02/11/2021 tags: azure-synapse
-# SQL Vulnerability Assessment helps you identify database vulnerabilities
+# SQL vulnerability assessment helps you identify database vulnerabilities
[!INCLUDE[appliesto-sqldb-sqlmi-asa](../includes/appliesto-sqldb-sqlmi-asa.md)]
-SQL Vulnerability Assessment is an easy-to-configure service that can discover, track, and help you remediate potential database vulnerabilities. Use it to proactively improve your database security.
+SQL vulnerability assessment is an easy-to-configure service that can discover, track, and help you remediate potential database vulnerabilities. Use it to proactively improve your database security.
-Vulnerability Assessment is part of the [Azure Defender for SQL](azure-defender-for-sql.md) offering, which is a unified package for advanced SQL security capabilities. Vulnerability Assessment can be accessed and managed via the central Azure Defender for SQL portal.
+Vulnerability assessment is part of the [Azure Defender for SQL](azure-defender-for-sql.md) offering, which is a unified package for advanced SQL security capabilities. Vulnerability assessment can be accessed and managed via the central Azure Defender for SQL portal.
> [!NOTE]
-> Vulnerability Assessment is supported for Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. Databases in Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics are referred to collectively in the remainder of this article as databases, and the server is referring to the [server](logical-servers.md) that hosts databases for Azure SQL Database and Azure Synapse.
+> Vulnerability assessment is supported for Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. Databases in Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics are referred to collectively in the remainder of this article as databases, and the server is referring to the [server](logical-servers.md) that hosts databases for Azure SQL Database and Azure Synapse.
-## What is SQL Vulnerability Assessment?
+## What is SQL vulnerability assessment?
-SQL Vulnerability Assessment is a service that provides visibility into your security state. Vulnerability Assessment includes actionable steps to resolve security issues and enhance your database security. It can help you to monitor a dynamic database environment where changes are difficult to track and improve your SQL security posture.
+SQL vulnerability assessment is a service that provides visibility into your security state. Vulnerability assessment includes actionable steps to resolve security issues and enhance your database security. It can help you to monitor a dynamic database environment where changes are difficult to track and improve your SQL security posture.
-Vulnerability Assessment is a scanning service built into Azure SQL Database. The service employs a knowledge base of rules that flag security vulnerabilities. It highlights deviations from best practices, such as misconfigurations, excessive permissions, and unprotected sensitive data.
+Vulnerability assessment is a scanning service built into Azure SQL Database. The service employs a knowledge base of rules that flag security vulnerabilities. It highlights deviations from best practices, such as misconfigurations, excessive permissions, and unprotected sensitive data.
The rules are based on Microsoft's best practices and focus on the security issues that present the biggest risks to your database and its valuable data. They cover database-level issues and server-level security issues, like server firewall settings and server-level permissions.
@@ -38,52 +38,77 @@ Results of the scan include actionable steps to resolve each issue and provide c
- Feature configurations - Database settings
-## Configure Vulnerability Assessment
+## Configure vulnerability assessment
-The following steps configure the vulnerability assessment:
+Take the following steps to configure the vulnerability assessment:
-1. Go to your Azure SQL Database, SQL Managed Instance Database, or Azure Synapse resource in the [Azure portal](https://portal.azure.com).
+1. In the [Azure portal](https://portal.azure.com), open the specific resource in Azure SQL Database, SQL Managed Instance Database, or Azure Synapse.
-2. Under the **Security** heading, select **Security Center**.
+1. Under the **Security** heading, select **Security Center**.
-3. Then click **Configure** on the link to open the Azure Defender for SQL settings pane for either the entire server or managed instance.
+1. Select **Configure** on the link to open the Azure Defender for SQL settings pane for either the entire server or managed instance.
+
+ :::image type="content" source="media/sql-vulnerability-assessment/opening-sql-configuration.png" alt-text="Opening the Defender for SQL configuration":::
> [!NOTE]
- > SQL Vulnerability Assessment requires **Azure Defender for SQL** plan to be able to run scans. For more information about how to enable Azure Defender for SQL, see [Azure Defender for SQL](azure-defender-for-sql.md).
+ > SQL vulnerability assessment requires the **Azure Defender for SQL** plan to be able to run scans. For more information about how to enable Azure Defender for SQL, see [Azure Defender for SQL](azure-defender-for-sql.md).
-4. Configure a storage account where your scan results for all databases on the server or managed instance will be stored. For information about storage accounts, see [About Azure storage accounts](../../storage/common/storage-account-create.md).
+1. In the **Server settings** page, define the Azure Defender for SQL settings:
- > [!NOTE]
- > For more information about storing Vulnerability Assessment scans behind firewalls and VNets, see [Store Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets](sql-database-vulnerability-assessment-storage.md).
+ :::image type="content" source="media/sql-vulnerability-assessment/sql-vulnerability-scan-settings.png" alt-text="Configuring the SQL vulnerability assessment scans":::
+
+ 1. Configure a storage account where your scan results for all databases on the server or managed instance will be stored. For information about storage accounts, see [About Azure storage accounts](../../storage/common/storage-account-create.md).
+
+ > [!TIP]
+ > For more information about storing vulnerability assessment scans behind firewalls and VNets, see [Store vulnerability assessment scan results in a storage account accessible behind firewalls and VNets](sql-database-vulnerability-assessment-storage.md).
+
+ 1. To configure vulnerability assessments to automatically run weekly scans to detect security misconfigurations, set **Periodic recurring scans** to **On**. The results are sent to the email addresses you provide in **Send scan reports to**. You can also send email notification to admins and subscription owners by enabling **Also send email notification to admins and subscription owners**.
+
+1. SQL vulnerability assessment scans can also be run on-demand:
-5. Configure SQL Vulnerability Assessment to automatically run periodic recurring scans once a week to detect any change security misconfiguration automatically. To do so, enable **Periodic recurring scans** under the storage account selection. A scan result summary is sent to the email addresses you provide in **Send scan reports to**. You can also send email notification to admins and subscription owners by enabling **Also send email notification to admins and subscription owners**.
+ 1. From the resource's **Security Center** page, select **View additional findings in Vulnerability Assessment** to access the scan results from previous scans.
-6. SQL Vulnerability Assessment can run scan on demand. After you configured SQL Vulnerability Assessment, select **Scan** to scan your database for vulnerabilities.
+ :::image type="content" source="media/sql-vulnerability-assessment/view-additional-findings-link.png" alt-text="Opening the scan results and manual scan options":::
+
+ 1. To run an on-demand scan of your database for vulnerabilities, select **Scan** from the toolbar:
+
+ :::image type="content" source="media/sql-vulnerability-assessment/on-demand-vulnerability-scan.png" alt-text="Select scan to run an on-demand vulnerability assessment scan of your SQL resource":::
-![Scan a database](./media/sql-vulnerability-assessment/pp_va_initialize.png)
> [!NOTE] > The scan is lightweight and safe. It takes a few seconds to run and is entirely read-only. It doesn't make any changes to your database. ## Remediate vulnerabilities
-1. When your scan is finished, your scan report is automatically displayed in the Azure portal. The report presents an overview of your security state. It lists how many issues were found and their respective severities. Results include warnings on deviations from best practices and a snapshot of your security-related settings, such as database principals and roles, and their associated permissions.
+When a vulnerability scan completes, the report is displayed in the Azure portal. The report presents:
+
+- An overview of your security state
+- The number of issues that were found
+- A summary by severity of the risks
+- A list of the findings for further investigation
++
+To remediate the vulnerabilities discovered:
+
+1. Review your results and determine which of the report's findings are true security issues for your environment.
-![View the report](./media/sql-vulnerability-assessment/pp_main_getstarted.png)
+1. Select each failed result to understand its impact and why the security check failed.
-2. Review your results and determine the findings in the report that are true security issues in your environment. Drill down to each failed result to understand the impact of the findings and why each security check failed. Use the actionable remediation information provided by the report to resolve issues.
+ > [!TIP]
+ > The findings details page includes actionable remediation information explaining how to resolve the issue.
-![Analyze the report](./media/sql-vulnerability-assessment/pp_fail_rule_show_remediation.png)
+ :::image type="content" source="media/sql-vulnerability-assessment/examining-vulnerability-findings.gif" alt-text="Examining the findings from a vulnerability scan":::
-3. As you review your assessment results, you can mark specific results as being an acceptable *baseline* in your environment. The baseline is essentially a customization of how the results are reported. Results that match the baseline are considered as passing in subsequent scans. After you've established your baseline security state, Vulnerability Assessment only reports on deviations from the baseline. In this way, you can focus your attention on the relevant issues.
+1. As you review your assessment results, you can mark specific results as being an acceptable *baseline* in your environment. A baseline is essentially a customization of how the results are reported. In subsequent scans, results that match the baseline are considered as passes. After you've established your baseline security state, vulnerability assessment only reports on deviations from the baseline. In this way, you can focus your attention on the relevant issues.
-![Set your baseline](./media/sql-vulnerability-assessment/pp_fail_rule_show_baseline.png)
+ :::image type="content" source="media/sql-vulnerability-assessment/baseline-approval.png" alt-text="Approving a finding as a baseline for future scans":::
-4. After you finish setting up your **Rule Baselines**, run a new scan to view the customized report. Vulnerability Assessment now reports only the security issues that deviate from your approved baseline state.
+1. If you change the baselines, use the **Scan** button to run an on-demand scan and view the customized report. Any findings you've added to the baseline will now appear in **Passed** with an indication that they've passed because of the baseline changes.
-![View your customized report](./media/sql-vulnerability-assessment/pp_pass_main_with_baselines.png)
+ :::image type="content" source="media/sql-vulnerability-assessment/passed-per-custom-baseline.png" alt-text="Passed assessments indicating they've passed per custom baseline":::
-Vulnerability Assessment can now be used to monitor that your database maintains a high level of security at all times, and that your organizational policies are met.
+Your vulnerability assessment scans can now be used to ensure that your database maintains a high level of security, and that your organizational policies are met.
## Advanced capabilities
@@ -93,7 +118,7 @@ Select **Export Scan Results** to create a downloadable Excel report of your sca
### View scan history
-Select **Scan History** in the Vulnerability Assessment pane to view a history of all scans previously run on this database. Select a particular scan in the list to view the detailed results of that scan.
+Select **Scan History** in the vulnerability assessment pane to view a history of all scans previously run on this database. Select a particular scan in the list to view the detailed results of that scan.
## Manage vulnerability assessments programmatically
@@ -107,33 +132,33 @@ You can use Azure PowerShell cmdlets to programmatically manage your vulnerabili
| Cmdlet name as a link | Description | | :-- | :- |
-| [Clear-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline](/powershell/module/az.sql/Clear-azSqlDatabaseVulnerabilityAssessmentRuleBaseline) | Clears the Vulnerability Assessment rule baseline.<br/>First, set the baseline before you use this cmdlet to clear it. |
-| [Clear-AzSqlDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Clear-azSqlDatabaseVulnerabilityAssessmentSetting) | Clears the Vulnerability Assessment settings of a database. |
-| [Clear-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline](/powershell/module/az.sql/Clear-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline) | Clears the Vulnerability Assessment rule baseline of a managed database.<br/>First, set the baseline before you use this cmdlet to clear it. |
-| [Clear-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Clear-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting) | Clears the Vulnerability Assessment settings of a managed database. |
-| [Clear-AzSqlInstanceVulnerabilityAssessmentSetting](/powershell/module/az.sql/Clear-AzSqlInstanceVulnerabilityAssessmentSetting) | Clears the Vulnerability Assessment settings of a managed instance. |
-| [Convert-AzSqlDatabaseVulnerabilityAssessmentScan](/powershell/module/az.sql/Convert-azSqlDatabaseVulnerabilityAssessmentScan) | Converts Vulnerability Assessment scan results of a database to an Excel file. |
-| [Convert-AzSqlInstanceDatabaseVulnerabilityAssessmentScan](/powershell/module/az.sql/Convert-AzSqlInstanceDatabaseVulnerabilityAssessmentScan) | Converts Vulnerability Assessment scan results of a managed database to an Excel file. |
-| [Get-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline](/powershell/module/az.sql/Get-azSqlDatabaseVulnerabilityAssessmentRuleBaseline) | Gets the Vulnerability Assessment rule baseline of a database for a given rule. |
-| [Get-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline](/powershell/module/az.sql/Get-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline) | Gets the Vulnerability Assessment rule baseline of a managed database for a given rule. |
-| [Get-AzSqlDatabaseVulnerabilityAssessmentScanRecord](/powershell/module/az.sql/Get-azSqlDatabaseVulnerabilityAssessmentScanRecord) | Gets all Vulnerability Assessment scan records associated with a given database. |
-| [Get-AzSqlInstanceDatabaseVulnerabilityAssessmentScanRecord](/powershell/module/az.sql/Get-AzSqlInstanceDatabaseVulnerabilityAssessmentScanRecord) | Gets all Vulnerability Assessment scan records associated with a given managed database. |
-| [Get-AzSqlDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Get-azSqlDatabaseVulnerabilityAssessmentSetting) | Returns the Vulnerability Assessment settings of a database. |
-| [Get-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Get-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting) | Returns the Vulnerability Assessment settings of a managed database. |
-| [Set-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline](/powershell/module/az.sql/Set-azSqlDatabaseVulnerabilityAssessmentRuleBaseline) | Sets the Vulnerability Assessment rule baseline. |
-| [Set-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline](/powershell/module/az.sql/Set-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline) | Sets the Vulnerability Assessment rule baseline for a managed database. |
-| [Start-AzSqlDatabaseVulnerabilityAssessmentScan](/powershell/module/az.sql/Start-azSqlDatabaseVulnerabilityAssessmentScan) | Triggers the start of a Vulnerability Assessment scan on a database. |
-| [Start-AzSqlInstanceDatabaseVulnerabilityAssessmentScan](/powershell/module/az.sql/Start-AzSqlInstanceDatabaseVulnerabilityAssessmentScan) | Triggers the start of a Vulnerability Assessment scan on a managed database. |
-| [Update-AzSqlDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-azSqlDatabaseVulnerabilityAssessmentSetting) | Updates the Vulnerability Assessment settings of a database. |
-| [Update-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting) | Updates the Vulnerability Assessment settings of a managed database. |
-| [Update-AzSqlInstanceVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-AzSqlInstanceVulnerabilityAssessmentSetting) | Updates the Vulnerability Assessment settings of a managed instance. |
+| [Clear-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline](/powershell/module/az.sql/Clear-azSqlDatabaseVulnerabilityAssessmentRuleBaseline) | Clears the vulnerability assessment rule baseline.<br/>First, set the baseline before you use this cmdlet to clear it. |
+| [Clear-AzSqlDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Clear-azSqlDatabaseVulnerabilityAssessmentSetting) | Clears the vulnerability assessment settings of a database. |
+| [Clear-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline](/powershell/module/az.sql/Clear-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline) | Clears the vulnerability assessment rule baseline of a managed database.<br/>First, set the baseline before you use this cmdlet to clear it. |
+| [Clear-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Clear-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting) | Clears the vulnerability assessment settings of a managed database. |
+| [Clear-AzSqlInstanceVulnerabilityAssessmentSetting](/powershell/module/az.sql/Clear-AzSqlInstanceVulnerabilityAssessmentSetting) | Clears the vulnerability assessment settings of a managed instance. |
+| [Convert-AzSqlDatabaseVulnerabilityAssessmentScan](/powershell/module/az.sql/Convert-azSqlDatabaseVulnerabilityAssessmentScan) | Converts vulnerability assessment scan results of a database to an Excel file. |
+| [Convert-AzSqlInstanceDatabaseVulnerabilityAssessmentScan](/powershell/module/az.sql/Convert-AzSqlInstanceDatabaseVulnerabilityAssessmentScan) | Converts vulnerability assessment scan results of a managed database to an Excel file. |
+| [Get-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline](/powershell/module/az.sql/Get-azSqlDatabaseVulnerabilityAssessmentRuleBaseline) | Gets the vulnerability assessment rule baseline of a database for a given rule. |
+| [Get-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline](/powershell/module/az.sql/Get-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline) | Gets the vulnerability assessment rule baseline of a managed database for a given rule. |
+| [Get-AzSqlDatabaseVulnerabilityAssessmentScanRecord](/powershell/module/az.sql/Get-azSqlDatabaseVulnerabilityAssessmentScanRecord) | Gets all vulnerability assessment scan records associated with a given database. |
+| [Get-AzSqlInstanceDatabaseVulnerabilityAssessmentScanRecord](/powershell/module/az.sql/Get-AzSqlInstanceDatabaseVulnerabilityAssessmentScanRecord) | Gets all vulnerability assessment scan records associated with a given managed database. |
+| [Get-AzSqlDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Get-azSqlDatabaseVulnerabilityAssessmentSetting) | Returns the vulnerability assessment settings of a database. |
+| [Get-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Get-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting) | Returns the vulnerability assessment settings of a managed database. |
+| [Set-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline](/powershell/module/az.sql/Set-azSqlDatabaseVulnerabilityAssessmentRuleBaseline) | Sets the vulnerability assessment rule baseline. |
+| [Set-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline](/powershell/module/az.sql/Set-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline) | Sets the vulnerability assessment rule baseline for a managed database. |
+| [Start-AzSqlDatabaseVulnerabilityAssessmentScan](/powershell/module/az.sql/Start-azSqlDatabaseVulnerabilityAssessmentScan) | Triggers the start of a vulnerability assessment scan on a database. |
+| [Start-AzSqlInstanceDatabaseVulnerabilityAssessmentScan](/powershell/module/az.sql/Start-AzSqlInstanceDatabaseVulnerabilityAssessmentScan) | Triggers the start of a vulnerability assessment scan on a managed database. |
+| [Update-AzSqlDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-azSqlDatabaseVulnerabilityAssessmentSetting) | Updates the vulnerability assessment settings of a database. |
+| [Update-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting) | Updates the vulnerability assessment settings of a managed database. |
+| [Update-AzSqlInstanceVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-AzSqlInstanceVulnerabilityAssessmentSetting) | Updates the vulnerability assessment settings of a managed instance. |
| &nbsp; | &nbsp; |
-For a script example, see [Azure SQL Vulnerability Assessment PowerShell support](/archive/blogs/sqlsecurity/azure-sql-vulnerability-assessment-now-with-powershell-support).
+For a script example, see [Azure SQL vulnerability assessment PowerShell support](/archive/blogs/sqlsecurity/azure-sql-vulnerability-assessment-now-with-powershell-support).
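
For orientation, here's a minimal sketch of how a few of the cmdlets listed above might be chained together. All resource names, the scan ID, the rule ID, and the baseline rows are placeholders, and parameter names should be verified against the linked cmdlet reference pages.

```powershell
# Point vulnerability assessment at a storage account for scan results (illustrative names).
Update-AzSqlDatabaseVulnerabilityAssessmentSetting `
    -ResourceGroupName "myResourceGroup" `
    -ServerName "myServer" `
    -DatabaseName "myDatabase" `
    -StorageAccountName "myvascansstorage"

# Trigger an on-demand scan, then export its results to an Excel file.
Start-AzSqlDatabaseVulnerabilityAssessmentScan `
    -ResourceGroupName "myResourceGroup" `
    -ServerName "myServer" `
    -DatabaseName "myDatabase" `
    -ScanId "myScan01"

Convert-AzSqlDatabaseVulnerabilityAssessmentScan `
    -ResourceGroupName "myResourceGroup" `
    -ServerName "myServer" `
    -DatabaseName "myDatabase" `
    -ScanId "myScan01"

# Approve current results as the baseline for a rule. The row format depends on the
# specific rule; the rule ID and values below are placeholders only.
$baselineRows = @("Principal1", "db_datareader"), @("Principal2", "db_owner")
Set-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline `
    -ResourceGroupName "myResourceGroup" `
    -ServerName "myServer" `
    -DatabaseName "myDatabase" `
    -RuleId "VA2108" `
    -BaselineResult $baselineRows
```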
### Using Resource Manager templates
-To configure Vulnerability Assessment baselines by using Azure Resource Manager templates, use the `Microsoft.Sql/servers/databases/vulnerabilityAssessments/rules/baselines` type.
+To configure vulnerability assessment baselines by using Azure Resource Manager templates, use the `Microsoft.Sql/servers/databases/vulnerabilityAssessments/rules/baselines` type.
Ensure that you have enabled `vulnerabilityAssessments` before you add baselines.
@@ -216,4 +241,4 @@ To handle Boolean types as true/false, set the baseline result with binary input
- Learn more about [Azure Defender for SQL](azure-defender-for-sql.md). - Learn more about [data discovery and classification](data-discovery-and-classification-overview.md).-- Learn about [Storing Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets](sql-database-vulnerability-assessment-storage.md).
+- Learn more about [Storing vulnerability assessment scan results in a storage account accessible behind firewalls and VNets](sql-database-vulnerability-assessment-storage.md).
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/netapp-files-with-azure-vmware-solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/netapp-files-with-azure-vmware-solution.md
@@ -2,7 +2,7 @@
Title: Azure NetApp Files with Azure VMware Solution description: Use Azure NetApp Files with Azure VMware Solution VMs to migrate and sync data across on-premises servers, Azure VMware Solution VMs, and cloud infrastructures. Previously updated : 02/08/2021 Last updated : 02/10/2021 # Azure NetApp Files with Azure VMware Solution
@@ -11,7 +11,7 @@ In this article, we'll walk through the steps of integrating Azure NetApp Files
## Azure NetApp Files overview
-[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) is an Azure service for migration and running the most demanding enterprise file-workloads in the cloud. This includes databases, SAP, and high-performance computing applications, with no code changes.
+[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) is an Azure service for migrating and running the most demanding enterprise file workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes.
### Features (services where Azure NetApp Files is used)
@@ -26,7 +26,7 @@ Azure NetApp Files is available in many Azure regions and supports cross-region
## Reference architecture
-The following diagram illustrates a connection via Azure ExpressRoute to an Azure VMware Solution private cloud. The Azure VMware Solution environment accesses the Azure NetApp Files share, which is mounted on Azure VMware Solution VMs.
+The following diagram illustrates a connection via Azure ExpressRoute to an Azure VMware Solution private cloud. The Azure VMware Solution environment accesses the Azure NetApp Files share mounted on Azure VMware Solution VMs.
![Diagram showing NetApp Files for Azure VMware Solution architecture.](media/net-app-files/net-app-files-topology.png)
@@ -78,11 +78,13 @@ The following steps include verification of the pre-configured Azure NetApp File
:::image type="content" source="media/net-app-files/configuration-of-volume.png" alt-text="Screenshot showing configuration details of a volume.":::
- You can see that the volume anfvolume has a size of 200 GiB and is in capacity pool anfpool1. It's exported as an NFS file share via 10.22.3.4:/ANFVOLUME. One private IP from the Azure Virtual Network (VNet) was created for Azure NetApp Files and the NFS path to mount on the VM. To learn about Azure NetApp Files volume performance by size or "Quota," see [Performance considerations for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-performance-considerations.md).
+ You can see that anfvolume has a size of 200 GiB and is in capacity pool anfpool1. It's exported as an NFS file share via 10.22.3.4:/ANFVOLUME. One private IP from the Azure Virtual Network (VNet) was created for Azure NetApp Files and the NFS path to mount on the VM.
+
+ To learn about Azure NetApp Files volume performance by size or "Quota," see [Performance considerations for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-performance-considerations.md).
## Verify pre-configured Azure VMware Solution VM share mapping
-To make an Azure NetApp Files share accessible to an Azure VMware Solution VM, it's important to understand SMB and NFS share mapping. Only after configuring the SMB or NFS volumes, can they be mounted as documented here.
+To make your Azure NetApp Files share accessible to your Azure VMware Solution VM, you'll need to understand SMB and NFS share mapping. You can mount the volumes as documented here only after the SMB or NFS volumes are configured; a brief mapping sketch follows the list below.
- SMB share: Create an Active Directory connection before deploying an SMB volume. The specified domain controllers must be accessible by the delegated subnet of Azure NetApp Files for a successful connection. Once the Active Directory is configured within the Azure NetApp Files account, it will appear as a selectable item while creating SMB volumes.
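
For illustration only, once an SMB volume exists it could be mapped from a Windows VM running in the private cloud roughly as follows; the UNC path and drive letter are hypothetical placeholders, not values from this article.

```powershell
# Map an Azure NetApp Files SMB volume to a drive letter (hypothetical share path).
New-PSDrive -Name "Z" -PSProvider FileSystem -Root "\\anf-smb-endpoint.contoso.local\anfsmbvolume" -Persist
```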
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/protect-azure-vmware-solution-with-application-gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/protect-azure-vmware-solution-with-application-gateway.md
@@ -2,7 +2,7 @@
Title: Use Azure Application Gateway to protect your web apps on Azure VMware Solution description: Configure Azure Application Gateway to securely expose your web apps running on Azure VMware Solution. Previously updated : 02/08/2021 Last updated : 02/10/2021 # Use Azure Application Gateway to protect your web apps on Azure VMware Solution
@@ -52,7 +52,7 @@ The Application Gateway instance is deployed on the hub in a dedicated subnet. I
4. Add a backend pool of the VMs that run on Azure VMware Solution infrastructure. Provide the details of web servers that run on the Azure VMware Solution private cloud and select **Add**. Then select **Next: Configuration>**.
-1. On the **Configuration** tab, select **Add a routing rule**.
+5. On the **Configuration** tab, select **Add a routing rule**.
6. On the **Listener** tab, provide the details for the listener. If HTTPS is selected, you must provide a certificate, either from a PFX file or an existing Azure Key Vault certificate.
@@ -62,7 +62,7 @@ The Application Gateway instance is deployed on the hub in a dedicated subnet. I
9. If you want to configure path-based rules, select **Add multiple targets to create a path-based rule**.
-10. Add a path-based rule and select **Add**. Repeat to add additional path-based rules.
+10. Add a path-based rule and select **Add**. Repeat to add more path-based rules.
11. When you have finished adding path-based rules, select **Add** again; then select **Next: Tags>**.
@@ -72,7 +72,7 @@ The Application Gateway instance is deployed on the hub in a dedicated subnet. I
## Configuration examples
-In this section, you'll learn how to configure Application Gateway with Azure VMware Solution VMs as the backend pools for these use cases:
+Now we'll configure Application Gateway with Azure VMware Solution VMs as backend pools for the following use cases:
- [Hosting multiple sites](#hosting-multiple-sites) - [Routing by URL](#routing-by-url)
@@ -89,7 +89,7 @@ This procedure shows you how to define backend address pools using VMs running o
:::image type="content" source="media/protect-azure-vmware-solution-with-application-gateway/app-gateway-multi-backend-pool.png" alt-text="Screenshot showing summary of a web server's details in VSphere Client.":::
- We've used Windows Server 2016 with Internet Information Services (IIS) role installed to illustrate this tutorial. Once the VMs are installed, run the following PowerShell commands to configure IIS on each of the VMs.
+ We've used Windows Server 2016 with the Internet Information Services (IIS) role installed. Once the VMs are installed, run the following PowerShell commands to configure IIS on each of the VMs.
```powershell Install-WindowsFeature -Name Web-Server
@@ -116,7 +116,7 @@ This procedure shows you how to define backend address pools using VMs running o
### Routing by URL
-This procedure shows you how to define backend address pools using VMs running on an Azure VMware Solution private cloud on an existing application gateway. You then create routing rules that make sure web traffic arrives at the appropriate servers in the pools.
+The following steps define backend address pools on an existing application gateway, using VMs running on an Azure VMware Solution private cloud. You then create routing rules that make sure web traffic arrives at the appropriate servers in the pools.
1. In your private cloud, create a virtual machine pool to represent the web farm.
backup https://docs.microsoft.com/en-us/azure/backup/sap-hana-backup-support-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sap-hana-backup-support-matrix.md
@@ -20,14 +20,14 @@ Azure Backup supports the backup of SAP HANA databases to Azure. This article su
| **Topology** | SAP HANA running in Azure Linux VMs only | HANA Large Instances (HLI) | | **Regions** | **GA:**<br> **Americas** – Central US, East US 2, East US, North Central US, South Central US, West US 2, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** – Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China North, China East2, China North 2 <br> **Europe** – West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West <br> **Africa / ME** – South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA | | **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2 <br><br> As of August 1st, 2020, SAP HANA backup for RHEL (7.4, 7.6, 7.7 & 8.1) is generally available. | |
-| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x <= SPS04 Rev 53, SPS05 (yet to be validated for encryption enabled scenarios) | |
+| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS04, SPS05 Rev <= 53 (validated for encryption enabled scenarios as well) | |
| **HANA deployments** | SAP HANA on a single Azure VM - Scale up only. <br><br> For high availability deployments, both the nodes on the two different machines are treated as individual nodes with separate data chains. | Scale-out <br><br> In high availability deployments, backup doesn't failover to the secondary node automatically. Configuring backup should be done separately for each node. | | **HANA Instances** | A single SAP HANA instance on a single Azure VM – scale up only | Multiple SAP HANA instances on a single VM. You can protect only one of these multiple instances at a time. | | **HANA database types** | Single Database Container (SDC) on 1.x, Multi-Database Container (MDC) on 2.x | MDC in HANA 1.x |
-| **HANA database size** | HANA databases of size <= 2 TB (this isn't the memory size of the HANA system) | |
+| **HANA database size** | HANA databases of size <= 8 TB (this isn't the memory size of the HANA system) | |
| **Backup types** | Full, Differential, Incremental (Preview) and Log backups | Snapshots | | **Restore types** | Refer to the SAP HANA Note [1642148](https://launchpad.support.sap.com/#/notes/1642148) to learn about the supported restore types | |
-| **Backup limits** | Up to 2 TB of full backup size per SAP HANA instance (soft limit) | |
+| **Backup limits** | Up to 8 TB of full backup size per SAP HANA instance (soft limit) | |
| **Special configurations** | | SAP HANA + Dynamic Tiering <br> Cloning through LaMa |
backup https://docs.microsoft.com/en-us/azure/backup/tutorial-sap-hana-manage-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-sap-hana-manage-cli.md
@@ -90,7 +90,7 @@ Example:
az backup policy create --resource-group saphanaResourceGroup --vault-name saphanaVault --name sappolicy --backup-management-type AzureWorkload --policy sappolicy.json --workload-type SAPHana ```
-Sample JSON (sappolicy.json) output:
+Sample JSON (sappolicy.json):
```json "eTag": null,
batch https://docs.microsoft.com/en-us/azure/batch/batch-automatic-scaling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-automatic-scaling.md
@@ -123,6 +123,7 @@ You can get the value of these service-defined variables to make adjustments tha
| $PendingTasks |The sum of $ActiveTasks and $RunningTasks. | | $SucceededTasks |The number of tasks that finished successfully. | | $FailedTasks |The number of tasks that failed. |
+| $TaskSlotsPerNode |The number of task slots that can be used to run concurrent tasks on a single compute node in the pool. |
| $CurrentDedicatedNodes |The current number of dedicated compute nodes. | | $CurrentLowPriorityNodes |The current number of low-priority compute nodes, including any nodes that have been preempted. | | $PreemptedNodeCount | The number of nodes in the pool that are in a preempted state. |
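
As a hedged sketch of how the new `$TaskSlotsPerNode` variable might be used, the following applies an autoscale formula that sizes the pool to cover pending tasks divided by the slots each node provides. The pool ID, the cap of 10 nodes, and the `$context` variable are illustrative, and the formula syntax should be checked against the autoscale formula reference.

```powershell
# Illustrative autoscale formula: cover pending tasks, accounting for task slots per node,
# capped at 10 dedicated nodes.
$formula = @'
$tasks = max(0, $PendingTasks.GetSample(1));
$TargetDedicatedNodes = min(10, $tasks / $TaskSlotsPerNode);
$NodeDeallocationOption = taskcompletion;
'@

# $context is assumed to be an existing BatchAccountContext (for example, from Get-AzBatchAccountKey).
Enable-AzBatchAutoScale -Id "myPool" -AutoScaleFormula $formula -BatchContext $context
```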
batch https://docs.microsoft.com/en-us/azure/batch/batch-customer-managed-key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-customer-managed-key.md
@@ -3,7 +3,7 @@ Title: Configure customer-managed keys for your Azure Batch account with Azure K
description: Learn how to encrypt Batch data using customer-managed keys. Previously updated : 01/25/2021 Last updated : 02/11/2021
@@ -59,7 +59,7 @@ az batch account show \
``` > [!NOTE]
-> The system-assigned managed identity created in a Batch account is only used for retrieving customer-managed keys from the Key Vault. This identity is not available on Batch pools.
+> The system-assigned managed identity created in a Batch account is only used for retrieving customer-managed keys from the Key Vault. This identity is not available on Batch pools. To use a user-assigned managed identity in a pool, see [Configure managed identities in Batch pools](managed-identity-pools.md).
## Create a user-assigned managed identity
@@ -81,7 +81,7 @@ When [creating an Azure Key Vault instance](../key-vault/general/quick-create-po
In the Azure portal, after the Key Vault is created, go to **Access Policy** under **Settings** and add access for the Batch account using its managed identity. Under **Key Permissions**, select **Get**, **Wrap Key**, and **Unwrap Key**.
-![Screenshow showing the Add access policy screen.](./media/batch-customer-managed-key/key-permissions.png)
+![Screenshot showing the Add access policy screen.](./media/batch-customer-managed-key/key-permissions.png)
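
As an alternative to the portal steps above, the same access policy can be granted with Az PowerShell. This is a minimal sketch that assumes `$batchAccountPrincipalId` already holds the object (principal) ID of the Batch account's system-assigned managed identity; the vault name is a placeholder.

```powershell
# Grant the Batch account's managed identity the key permissions needed for customer-managed keys.
Set-AzKeyVaultAccessPolicy `
    -VaultName "mykeyvault" `
    -ObjectId $batchAccountPrincipalId `
    -PermissionsToKeys get, wrapKey, unwrapKey
```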
In the **Select** field under **Principal**, fill in one of the following:
@@ -193,7 +193,7 @@ az batch account set \
- **After I restore access how long will it take for the Batch account to work again?** It can take up to 10 minutes for the account to be accessible again once access is restored. - **While the Batch Account is unavailable what happens to my resources?** Any pools that are running when Batch access to customer-managed keys is lost will continue to run. However, the nodes will transition into an unavailable state, and tasks will stop running (and be requeued). Once access is restored, nodes will become available again and tasks will be restarted. - **Does this encryption mechanism apply to VM disks in a Batch pool?** No. For Cloud Service Configuration Pools, no encryption is applied for the OS and temporary disk. For Virtual Machine Configuration Pools, the OS and any specified data disks will be encrypted with a Microsoft platform managed key by default. Currently, you cannot specify your own key for these disks. To encrypt the temporary disk of VMs for a Batch pool with a Microsoft platform managed key, you must enable the [diskEncryptionConfiguration](/rest/api/batchservice/pool/add#diskencryptionconfiguration) property in your [Virtual Machine Configuration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) Pool. For highly sensitive environments, we recommend enabling temporary disk encryption and avoiding storing sensitive data on OS and data disks. For more information, see [Create a pool with disk encryption enabled](./disk-encryption.md)-- **Is the system-assigned managed identity on the Batch account available on the compute nodes?** No. The system-assigned managed identity is currently used only for accessing the Azure Key Vault for the customer-managed key.
+- **Is the system-assigned managed identity on the Batch account available on the compute nodes?** No. The system-assigned managed identity is currently used only for accessing the Azure Key Vault for the customer-managed key. To use a user-assigned managed identity on compute nodes, see [Configure managed identities in Batch pools](managed-identity-pools.md).
## Next steps
batch https://docs.microsoft.com/en-us/azure/batch/managed-identity-pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/managed-identity-pools.md
@@ -0,0 +1,97 @@
+
+ Title: Configure managed identities in Batch pools
+description: Learn how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes.
+ Last updated : 02/10/2021+++
+# Configure managed identities in Batch pools
+
+[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) eliminate the need for developers to manage credentials by providing an identity for the Azure resource in Azure Active Directory (Azure AD) and using it to obtain Azure AD tokens.
+
+This topic explains how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes.
+
+> [!IMPORTANT]
+> Support for Azure Batch pools with user-assigned managed identities is currently in public preview for the following regions: West US 2, South Central US, East US, US Gov Arizona and US Gov Virginia.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Create a user-assigned identity
+
+First, [create your user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity) in the same tenant as your Batch account. This managed identity does not need to be in the same resource group or even in the same subscription.
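
The link above covers the portal steps; as a hedged alternative sketch, the identity can also be created with Az PowerShell (the resource group, identity name, and region below are placeholders). Its resource ID is what the pool configuration later refers to.

```powershell
# Create a user-assigned managed identity and capture its resource ID for the pool definition.
$identity = New-AzUserAssignedIdentity -ResourceGroupName "myResourceGroup" -Name "myBatchPoolIdentity" -Location "westus2"
$identity.Id   # full ARM resource ID, used as the key in UserAssignedIdentities below
```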
+
+## Create a Batch pool with user-assigned managed identities
+
+After you've created one or more user-assigned managed identities, you can create a Batch pool that uses those identities by using the [Batch .NET management library](/dotnet/api/overview/azure/batch#management-library).
+
+> [!IMPORTANT]
+> Pools must be configured using [Virtual Machine Configuration](nodes-and-pools.md#virtual-machine-configuration) in order to use managed identities.
+
+```csharp
+var poolParameters = new Pool(name: "yourPoolName")
+ {
+ VmSize = "standard_d1_v2",
+ ScaleSettings = new ScaleSettings
+ {
+ FixedScale = new FixedScaleSettings
+ {
+ TargetDedicatedNodes = 1
+ }
+ },
+ DeploymentConfiguration = new DeploymentConfiguration
+ {
+ VirtualMachineConfiguration = new VirtualMachineConfiguration(
+ new ImageReference(
+ "Canonical",
+ "UbuntuServer",
+ "18.04-LTS",
+ "latest"),
+ "batch.node.ubuntu 18.04")
+        },
+        Identity = new BatchPoolIdentity
+        {
+            Type = PoolIdentityType.UserAssigned,
+            UserAssignedIdentities = new Dictionary<string, BatchPoolIdentityUserAssignedIdentitiesValue>
+            {
+                ["Your Identity Resource Id"] =
+                    new BatchPoolIdentityUserAssignedIdentitiesValue()
+            }
+        }
+    };
+
+var pool = await managementClient.Pool.CreateWithHttpMessagesAsync(
+ poolName:"yourPoolName",
+ resourceGroupName: "yourResourceGroupName",
+ accountName: "yourAccountName",
+ parameters: poolParameters,
+ cancellationToken: default(CancellationToken)).ConfigureAwait(false);
+```
+
+> [!NOTE]
+> Creating pools with managed identities is not currently supported with the [Batch .NET client library](/dotnet/api/overview/azure/batch#client-library).
+
+## Use user-assigned managed identities in Batch nodes
+
+After you've created your pools, you can connect to the pool nodes via Secure Shell (SSH) or Remote Desktop (RDP) and use the user-assigned managed identities from within them. You can also configure your tasks so that the managed identities can directly access [Azure resources that support managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md).
+
+Within the Batch nodes, you can get managed identity tokens from the [Azure Instance Metadata Service](../virtual-machines/windows/instance-metadata-service.md) and use them to authenticate against Azure AD-protected resources.
+
+For Windows, the PowerShell script to get an access token to authenticate is:
+
+```powershell
+$Response = Invoke-RestMethod -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource={Resource App Id Url}' -Method GET -Headers @{Metadata="true"}
+```
+
+For Linux, the Bash script is:
+
+```bash
+curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource={Resource App Id Url}' -H Metadata:true
+```
+
+For more information, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
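
Once a token has been acquired (for example with the PowerShell snippet above, requesting `https://management.azure.com/` as the resource), it can be passed as a bearer token. The subscription ID placeholder and API version below are illustrative, and the identity needs the corresponding RBAC access.

```powershell
# Use the access token from the metadata service response as a bearer token.
$headers = @{ Authorization = "Bearer $($Response.access_token)" }

# Example call to Azure Resource Manager: list resource groups in a subscription.
Invoke-RestMethod `
    -Uri "https://management.azure.com/subscriptions/{subscription-id}/resourceGroups?api-version=2020-06-01" `
    -Method GET `
    -Headers $headers
```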
+
+## Next steps
+
+- Learn more about [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+- Learn how to use [customer-managed keys with user-managed identities](batch-customer-managed-key.md).
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Ink-Recognizer/concepts/send-ink-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Ink-Recognizer/concepts/send-ink-data.md
@@ -1,92 +0,0 @@
- Title: Send ink data to the Ink Recognizer API -
-description: Learn about calling the Ink Analyzer API for different applications
------ Previously updated : 08/24/2020---
-# Send ink data to the Ink Recognizer API
--
-Digital inking refers to the technologies that enable digital representations of input such as handwriting and drawings. This is typically achieved using a digitizer that captures the movements of input devices, such as a stylus. As devices continue to enable rich digital inking experiences, artificial intelligence and machine learning enables the recognition of written shapes and text in any context. The Ink Recognizer API enables you to send ink strokes and get detailed information about them.
-
-## The Ink recognizer API vs. OCR services
-
-The Ink Recognizer API does not use Optical Character Recognition(OCR). OCR services process the pixel data from images to provide handwriting and text recognition. This is sometimes called offline recognition. Instead, The Ink Recognizer API requires digital ink stroke data that's captured as the input device is used. Processing digital ink data in this way can produce more accurate recognition results compared to OCR services.
-
-## Sending ink data
-
-The Ink Recognizer API requires the X and Y coordinates that represent the ink strokes created by an input device, from the moment it touches the detection surface to when it's lifted. The points of each stroke must be a string of comma separated values, and be formatted in JSON like the example below. In addition, each ink stroke must have a unique ID in each request. If the ID is repeated within the same request, the API will return an error. For the most accurate recognition results, have at least eight digits after the decimal point. The origin (0,0) of the canvas is assumed to be the top left corner of the inking canvas.
-
-> [!NOTE]
-> The following example isn't valid JSON. You can find a full Ink Recognizer JSON request on [GitHub](https://go.microsoft.com/fwlink/?linkid=2089909).
-
-```json
-{
- "language": "en-US",
- "strokes": [
- {
- "id": 43,
- "points":
- "5.1365, 12.3845,
- 4.9534, 12.1301,
- 4.8618, 12.1199,
- 4.7906, 12.2217,
- 4.7906, 12.5372,
- 4.8211, 12.9849,
- 4.9534, 13.6667,
- 5.0958, 14.4503,
- 5.3299, 15.2441,
- 5.6555, 16.0480,
- ..."
- },
- ...
- ]
-}
-```
-
-## Ink Recognizer response
-
-The Ink Recognizer API returns an analysis response about the objects it recognized from the ink content. The response contains recognition units that describe the relationships between different ink strokes. For example, strokes that create distinct, separate shapes will be contained in different units. Each unit contains detailed information about its ink strokes including the recognized object, its coordinates, and other drawing attributes.
-
-## Shapes recognized by the Ink Recognizer API
-
-The Ink Recognizer API can identify the most commonly used shapes in note taking. The below image shows some basic examples. For a full list of shapes and other ink content recognized by the API, see the [API reference article](/rest/api/cognitiveservices/inkrecognizer/inkrecognizer).
-
-![The list of shapes recognized by the Ink Recognizer API](../media/shapes.png)
-
-## Recommended calling patterns
-
-You can call the Ink Recognizer REST API in different patterns according to your application.
-
-### User initiated API calls
-
-If you're building an app that takes user input (for example, a note taking or annotation app), you may want to give them control of when and which ink gets sent to the Ink Recognizer API. This functionality is especially useful when text and shapes are both present on the canvas, and users want to perform different actions for each. Consider adding selection features (like a lasso or other geometric selection tool) that enable users to choose what gets sent to the API.
-
-### App initiated API calls
-
-You can also have your app call the Ink Recognizer API after a timeout. By sending the current ink strokes to the API routinely, you can store recognition results as they're created while improving the API's response time. For example, you can send a line of handwritten text to the API after detecting your user has completed it.
-
-Having the recognition results in advance gives you information about the characteristics of ink strokes as they relate to each other. For example, which strokes are grouped to form the same word, line, list, paragraph, or shape. This information can enhance your app's ink selection features by being able to select groups of strokes at once, for example.
-
-## Integrate the Ink Recognizer API with Windows Ink
-
-[Windows Ink](/windows/uwp/design/input/pen-and-stylus-interactions) provides tools and technologies to enable digital inking experiences on a diverse range of devices. You can combine the Windows Ink platform with the Ink Recognizer API to create applications that display and interpret digital ink strokes.
-
-## Next steps
-
-* [What is the Ink Recognizer API?](../overview.md)
-* [Ink Recognizer REST API reference](/rest/api/cognitiveservices/inkrecognizer/inkrecognizer)
-
-* Start sending digital ink stroke data using:
- * [C#](../quickstarts/csharp.md)
- * [Java](../quickstarts/java.md)
- * [JavaScript](../quickstarts/javascript.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Ink-Recognizer/includes/deprecation-note https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Ink-Recognizer/includes/deprecation-note.md
@@ -1,11 +0,0 @@
------ Previously updated : 08/24/2020--
-> [!NOTE]
-> The Ink Recognizer API has ended its preview on August 26th, 2020. If you have existing Ink Recognizer resources, you can continue using them until the service is fully retired on January 31st, 2021.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Ink-Recognizer/includes/setup-instructions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Ink-Recognizer/includes/setup-instructions.md
@@ -1,21 +0,0 @@
------ Previously updated : 06/20/2019--
->[!NOTE]
-> Endpoints for resources created after July 1, 2019 use the custom subdomain format shown below. For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../../cognitive-services-custom-subdomains.md).
-
-Azure Cognitive Services are represented by Azure resources that you subscribe to. Create a resource for Ink Recognizer using the [Azure portal](../../cognitive-services-apis-create-account.md).
-
-After creating a resource, get your endpoint and key by opening your resource on the [Azure portal](https://ms.portal.azure.com#blade/HubsExtension/BrowseResourceGroupBlade), and clicking **Quick start**.
-
-Create two [environment variables](../../cognitive-services-apis-create-account.md#get-the-keys-for-your-resource):
-
-* `INK_RECOGNITION_SUBSCRIPTION_KEY` - The subscription key for authenticating your requests.
-
-* `INK_RECOGNITION_ENDPOINT` - The endpoint for your resource. It will look like this: <br> `https://<your-custom-subdomain>.api.cognitive.microsoft.com`
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Ink-Recognizer/language-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Ink-Recognizer/language-support.md
@@ -1,92 +0,0 @@
- Title: Language and region support for the Ink Recognizer API-
-description: A list of natural languages supported by the Ink Recognizer API.
------ Previously updated : 08/24/2020---
-# Language and region support for the Ink Recognizer API
--
-This article explains which languages are supported for the Ink Recognizer API. Digital ink content written in the following languages can be interpreted and processed by the API.
-
-## Supported languages
-
-| Language | Language code |
-|:-|::|
-| Afrikaans | `af-ZA` |
-| Albanian | `sq-AL` |
-| Basque | `eu-ES` |
-| Bosnian (Latin) | `bs-Latn-BA` |
-| Catalan | `ca-ES` |
-| Chinese (Simplified, China) | `zh-CN` |
-| Chinese (Traditional, Taiwan) | `zh-TW` |
-| Croatian (Croatia) | `hr-HR` |
-| Czech | `cs-CZ` |
-| Danish | `da-DK` |
-| Dutch (Belgium) | `nl-BE` |
-| Dutch (Netherlands) | `nl-NL` |
-| English (Australia) | `en-AU` |
-| English (Canada) | `en-CA` |
-| English (India) | `en-IN` |
-| English (United Kingdom) | `en-GB` |
-| English (United States) | `en-US` |
-| Finnish | `fi-FI` |
-| French (France) | `fr-FR` |
-| Galician | `gl-ES` |
-| German (Switzerland) | `de-CH` |
-| German (Germany) | `de-DE` |
-| Greek | `el-GR` |
-| Hindi | `hi-IN` |
-| Indonesian | `id-ID` |
-| Irish | `ga-IE` |
-| Italian (Italy) | `it-IT` |
-| Japanese | `ja-JP` |
-| Kinyarwanda | `rw-RW` |
-| Kiswahili (Kenya) | `sw-KE` |
-| Korean | `ko-KR` |
-| Luxembourgish | `lb-LU` |
-| Malay (Brunei Darussalam) | `ms-BN` |
-| Malay (Malaysia) | `ms-MY` |
-| Maori | `mi-NZ` |
-| Norwegian (Bokmal) | `nb-NO` |
-| Norwegian (Nynorsk) | `nn-NO` |
-| Polish | `pl-PL` |
-| Portuguese (Brazil) | `pt-BR` |
-| Portuguese (Portugal) | `pt-PT` |
-| Romanian | `ro-RO` |
-| Romansh | `rm-CH` |
-| Russian | `ru-RU` |
-| Scottish Gaelic | `gd-GB` |
-| Serbian (Cyrillic, Bosnia and Herzegovina) | `sr-Cyrl-BA` |
-| Serbian (Cyrillic, Montenegro) | `sr-Cyrl-ME` |
-| Serbian (Cyrillic, Serbia) | `sr-Cyrl-RS` |
-| Serbian (Latin, Bosnia and Herzegovina) | `sr-Latn-BA` |
-| Serbian (Latin, Montenegro) | `sr-Latn-ME` |
-| Serbian (Latin, Serbia) | `sr-Latn-RS` |
-| Sesotho sa Leboa | `nso-ZA` |
-| Setswana (South Africa) | `tn-ZA` |
-| Slovak | `sk-SK` |
-| Slovenian | `sl-SI` |
-| Spanish (Argentina) | `es-AR` |
-| Spanish (Mexico) | `es-MX` |
-| Spanish (Spain) | `es-ES` |
-| Swedish (Sweden) | `sv-SE` |
-| Turkish | `tr-TR` |
-| Welsh | `cy-GB` |
-| Wolof | `wo-SN` |
-| Xhosa | `xh-ZA` |
-| Zulu | `zu-ZA` |
-
-## See also
-
-* [What is the Ink Recognizer API?](overview.md)
-* [Sending digital ink strokes to the Ink Recognizer API](concepts/send-ink-data.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Ink-Recognizer/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Ink-Recognizer/overview.md
@@ -1,59 +0,0 @@
- Title: What is Ink Recognizer? - Ink Recognizer API-
-description: Integrate the Ink Recognizer into your applications, websites, tools, and other solutions to allow ink stroke data to be identified and used as input.
------ Previously updated : 08/24/2020----
-# What is the Ink Recognizer API?
--
-The Ink Recognizer Cognitive Service provides a cloud-based REST API to analyze and recognize digital ink content. Unlike services that use Optical Character Recognition (OCR), the API requires digital ink stroke data as input. Digital ink strokes are time-ordered sets of 2D points (X,Y coordinates) that represent the motion of input tools such as digital pens or fingers. It then recognizes the shapes and handwritten content from the input and returns a JSON response containing all recognized entities.
-
-![A flowchart describing sending an ink stroke input to the API](media/ink-recognizer-pen-graph.svg)
-
-## Features
-
-With the Ink Recognizer API, you can easily recognize handwritten content in your applications.
-
-|Feature |Description |
-|||
-| Handwriting recognition | Recognize handwritten content in 63 core [languages and locales](language-support.md). |
-| Layout recognition | Get structural information about the digital ink content. Break the content into writing regions, paragraphs, lines, words, bulleted lists. Your applications can then use the layout information to build additional features like automatic list formatting, and shape alignment. |
-| Shape recognition | Recognize the most commonly used [geometric shapes](concepts/send-ink-data.md#shapes-recognized-by-the-ink-recognizer-api) when taking notes. |
-| Combined shapes and text recognition | Recognize which ink strokes belong to shapes or handwritten content, and separately classify them.|
-
-## Workflow
-
-The Ink Recognizer API is a RESTful web service, making it easy to call from any programming language that can make HTTP requests and parse JSON.
--
-After signing up:
-
-1. Take your ink stroke data and [format it](concepts/send-ink-data.md#sending-ink-data) into valid JSON. The API accepts up to 1500 ink strokes per request.
-1. Send a request to the Ink Recognizer API with your data.
-1. Process the API response by parsing the returned JSON message.
-
-## Next steps
-
-Try a quickstart in the following languages to begin making calls to the Ink Recognizer API.
-* [C#](quickstarts/csharp.md)
-* [Java](quickstarts/java.md)
-* [JavaScript](quickstarts/javascript.md)
-
-To see how the Ink Recognition API works in a digital inking app, take a look at the following sample applications on GitHub:
-* [C# and Universal Windows Platform(UWP)](https://go.microsoft.com/fwlink/?linkid=2089803)
-* [C# and Windows Presentation Foundation(WPF)](https://go.microsoft.com/fwlink/?linkid=2089804)
-* [Javascript web-browser app](https://go.microsoft.com/fwlink/?linkid=2089908)
-* [Java and Android mobile app](https://go.microsoft.com/fwlink/?linkid=2089906)
-* [Swift and iOS mobile app](https://go.microsoft.com/fwlink/?linkid=2089805)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Ink-Recognizer/quickstarts/csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Ink-Recognizer/quickstarts/csharp.md
@@ -1,105 +0,0 @@
- Title: "Quickstart: Recognize digital ink with the Ink Recognizer REST API and C#"-
-description: This quickstart shows how to use the Ink Recognizer API and C# to start recognizing digital ink strokes.
------ Previously updated : 08/24/2020----
-# Quickstart: Recognize digital ink with the Ink Recognizer REST API and C#
--
-Use this quickstart to begin sending digital ink strokes to the Ink Recognizer API. This C# application sends an API request containing JSON-formatted ink stroke data, and gets the response.
-
-While this application is written in C#, the API is a RESTful web service compatible with most programming languages.
-
-Typically you would call the API from a digital inking app. This quickstart sends ink stroke data for the following handwritten sample from a JSON file.
-
-![an image of handwritten text](../media/handwriting-sample.jpg)
-
-The source code for this quickstart can be found on [GitHub](https://go.microsoft.com/fwlink/?linkid=2089502).
-
-## Prerequisites
--- Any edition of [Visual Studio 2017](https://visualstudio.microsoft.com/downloads/).-- [Newtonsoft.Json](https://www.newtonsoft.com/json)
- - To install Newtonsoft.Json as a NuGet package in Visual studio:
- 1. Right click on the **Solution Manager**
- 2. Click **Manage NuGet Packages...**
- 3. Search for `Newtonsoft.Json` and install the package
-- If you are using Linux/MacOS, this application can be ran using [Mono](https://www.mono-project.com/).--- The example ink stroke data for this quickstart can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/dotnet/Vision/InkRecognition/quickstart/example-ink-strokes.json).-
-### Create an Ink Recognizer resource
--
-## Create a new application
-
-1. In Visual Studio, create a new console solution and add the following packages.
-
- [!code-csharp[import statements](~/cognitive-services-rest-samples/dotnet/Vision/InkRecognition/quickstart/recognizeInk.cs?name=imports)]
-
-2. Create variables for your subscription key and endpoint, and the example JSON file. The endpoint will later be combined with `inkRecognitionUrl` to access the API.
-
- [!code-csharp[endpoint file and key variables](~/cognitive-services-rest-samples/dotnet/Vision/InkRecognition/quickstart/recognizeInk.cs?name=vars)]
-
-## Create a function to send requests
-
-1. Create a new async function called `Request` that takes the variables created above.
-
-2. Set the client's security protocol and header information using an `HttpClient` object. Be sure to add your subscription key to the `Ocp-Apim-Subscription-Key` header. Then create a `StringContent` object for the request.
-
-3. Send the request with `PutAsync()`. If the request is successful, return the response.
-
- [!code-csharp[request example method](~/cognitive-services-rest-samples/dotnet/Vision/InkRecognition/quickstart/recognizeInk.cs?name=request)]
-
-## Send an ink recognition request
-
-1. Create a new function called `recognizeInk()`. Construct the request and send it by calling the `Request()` function with your endpoint, subscription key, the URL for the API, and the digital ink stroke data.
-
-2. Deserialize the JSON object, and write it to the console.
-
- [!code-csharp[request to recognize ink data](~/cognitive-services-rest-samples/dotnet/Vision/InkRecognition/quickstart/recognizeInk.cs?name=recognize)]
-
-## Load your digital ink data
-
-Create a function called `LoadJson()` to load the ink data JSON file. Use a `StreamReader` and `JsonTextReader` to create a `JObject` and return it.
-
-[!code-csharp[load the JSON file](~/cognitive-services-rest-samples/dotnet/Vision/InkRecognition/quickstart/recognizeInk.cs?name=loadJson)]
-
-## Send the API request
-
-1. In the main method of your application, load your JSON data with the function created above.
-
-2. Call the `recognizeInk()` function created above. Use `System.Console.ReadKey()` to keep the console window open after running the application.
-
- [!code-csharp[file main method](~/cognitive-services-rest-samples/dotnet/Vision/InkRecognition/quickstart/recognizeInk.cs?name=main)]
--
-## Run the application and view the response
-
-Run the application. A successful response is returned in JSON format. You can also find the JSON response on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/dotnet/Vision/InkRecognition/quickstart/example-response.json).
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [REST API reference](/rest/api/cognitiveservices/inkrecognizer/inkrecognizer)
--
-To see how the Ink Recognition API works in a digital inking app, take a look at the following sample applications on GitHub:
-* [C# and Universal Windows Platform(UWP)](https://go.microsoft.com/fwlink/?linkid=2089803)
-* [C# and Windows Presentation Foundation(WPF)](https://go.microsoft.com/fwlink/?linkid=2089804)
-* [Javascript web-browser app](https://go.microsoft.com/fwlink/?linkid=2089908)
-* [Java and Android mobile app](https://go.microsoft.com/fwlink/?linkid=2089906)
-* [Swift and iOS mobile app](https://go.microsoft.com/fwlink/?linkid=2089805)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Ink-Recognizer/quickstarts/java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Ink-Recognizer/quickstarts/java.md
@@ -1,100 +0,0 @@
- Title: "Quickstart: Recognize digital ink with the Ink Recognizer REST API and Java"-
-description: Use the Ink Recognizer API and Java to start recognizing digital ink strokes in this quickstart.
------ Previously updated : 08/24/2020----
-# Quickstart: Recognize digital ink with the Ink Recognizer REST API and Java
--
-Use this quickstart to begin using the Ink Recognizer API on digital ink strokes. This Java application sends an API request containing JSON-formatted ink stroke data, and gets the response.
-
-While this application is written in Java, the API is a RESTful web service compatible with most programming languages.
-
-Typically you would call the API from a digital inking app. This quickstart sends ink stroke data for the following handwritten sample from a JSON file.
-
-![an image of handwritten text](../media/handwriting-sample.jpg)
-
-The source code for this quickstart can be found on [GitHub](https://go.microsoft.com/fwlink/?linkid=2089904).
-
-## Prerequisites
--- The [Java&trade; Development Kit(JDK) 7](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) or later.--- Import these libraries from the Maven Repository
- - [JSON in Java](https://mvnrepository.com/artifact/org.json/json) package
- - [Apache HttpClient](https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient) package
--- The example ink stroke data for this quickstart can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/java/InkRecognition/quickstart/example-ink-strokes.json).-
-### Create an Ink Recognizer resource
--
-## Create a new application
-
-1. Create a new Java project in your favorite IDE or editor, and import the following libraries.
-
- [!code-java[import statements](~/cognitive-services-rest-samples/java/InkRecognition/quickstart/RecognizeInk.java?name=imports)]
-
-2. Create variables for your subscription key, endpoint and JSON file. The endpoint will later be appended to the Ink recognizer URI.
-
- [!code-java[initial vars](~/cognitive-services-rest-samples/java/InkRecognition/quickstart/RecognizeInk.java?name=vars)]
-
-## Create a function to send requests
-
-1. Create a new function called `sendRequest()` that takes the variables created above. Then perform the following steps.
-
-2. Create a `CloseableHttpClient` object that can send requests to the API. Send the request to an `HttpPut` request object by combining your endpoint, and the Ink Recognizer URL.
-
-3. Use the request's `setHeader()` function to set the `Content-Type` header to `application/json`, and add your subscription key to the `Ocp-Apim-Subscription-Key` header.
-
-4. Use the request's `setEntity()` function to the data to be sent.
-
-5. Use the client's `execute()` function to send the request, and save it to a `CloseableHttpResponse` object.
-
-6. Create an `HttpEntity` object to store the response content. Get the content with `getEntity()`. If the response isn't empty, return it.
-
- [!code-java[send a request](~/cognitive-services-rest-samples/java/InkRecognition/quickstart/RecognizeInk.java?name=sendRequest)]
-
-## Send an ink recognition request
-
-Create a method called `recognizeInk()` to recognize your ink stroke data. Call the `sendRequest()` method created above with your endpoint, url, subscription key, and json data. Get the result, and print it to the console.
-
-[!code-java[recognizeInk](~/cognitive-services-rest-samples/java/InkRecognition/quickstart/RecognizeInk.java?name=recognizeInk)]
-
-## Load your digital ink data and send the request
-
-1. In the main method of your application, read in the JSON file containing the data that will be added to the requests.
-
-2. Call the ink recognition function created above.
-
- [!code-java[main method](~/cognitive-services-rest-samples/java/InkRecognition/quickstart/RecognizeInk.java?name=main)]
--
-## Run the application and view the response
-
-Run the application. A successful response is returned in JSON format. You can also find the JSON response on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/java/InkRecognition/quickstart/example-response.json).
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [REST API reference](/rest/api/cognitiveservices/inkrecognizer/inkrecognizer)
--
-To see how the Ink Recognition API works in a digital inking app, take a look at the following sample applications on GitHub:
-* [C# and Universal Windows Platform(UWP)](https://go.microsoft.com/fwlink/?linkid=2089803)
-* [C# and Windows Presentation Foundation(WPF)](https://go.microsoft.com/fwlink/?linkid=2089804)
-* [Javascript web-browser app](https://go.microsoft.com/fwlink/?linkid=2089908)
-* [Java and Android mobile app](https://go.microsoft.com/fwlink/?linkid=2089906)
-* [Swift and iOS mobile app](https://go.microsoft.com/fwlink/?linkid=2089805)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Ink-Recognizer/quickstarts/javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Ink-Recognizer/quickstarts/javascript.md
@@ -1,172 +0,0 @@
- Title: "Quickstart: Recognize digital ink with the Ink Recognizer REST API and Node.js"-
-description: Use the Ink Recognizer API and JavaScript to start recognizing digital ink strokes in this quickstart.
------ Previously updated : 08/24/2020----
-# Quickstart: Recognize digital ink with the Ink Recognizer REST API and JavaScript
--
-Use this quickstart to begin using the Ink Recognizer API on digital ink strokes. This JavaScript application sends an API request containing JSON-formatted ink stroke data, and displays the response.
-
-While this application is written in Javascript and runs in your web browser, the API is a RESTful web service compatible with most programming languages.
-
-Typically you would call the API from a digital inking app. This quickstart sends ink stroke data for the following handwritten sample from a JSON file.
-
-![an image of handwritten text](../media/handwriting-sample.jpg)
-
-The source code for this quickstart can be found on [GitHub](https://go.microsoft.com/fwlink/?linkid=2089905).
-
-## Prerequisites
--- A web browser-- The example ink stroke data for this quickstart can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/javascript/InkRecognition/quickstart/example-ink-strokes.json).-
-### Create an Ink Recognizer resource
--
-## Create a new application
-
-1. In your favorite IDE or editor, create a new `.html` file. Then add basic HTML to it for the code we'll add later.
-
- ```html
- <!DOCTYPE html>
- <html>
-
- <head>
- <script type="text/javascript">
- </script>
- </head>
-
- <body>
- </body>
-
- </html>
- ```
-
-2. Within the `<body>` tag, add the following html:
- 1. Two text areas for displaying the JSON request and response.
- 2. A button for calling the `recognizeInk()` function that will be created later.
-
- ```HTML
- <!-- <body>-->
- <h2>Send a request to the Ink Recognition API</h2>
- <p>Request:</p>
- <textarea id="request" style="width:800px;height:300px"></textarea>
- <p>Response:</p>
- <textarea id="response" style="width:800px;height:300px"></textarea>
- <br>
- <button type="button" onclick="recognizeInk()">Recognize Ink</button>
- <!--</body>-->
- ```
-
-## Load the example JSON data
-
-1. Within the `<script>` tag, create a variable for the sampleJson. Then create a JavaScript function named `openFile()` that opens a file explorer so you can select your JSON file. When the `Recognize ink` button is clicked, it will call this function and begin reading the file.
-2. Use a `FileReader` object's `onload()` function to process the file asynchronously.
- 1. Replace any `\n` or `\r` characters in the file with an empty string.
- 2. Use `JSON.parse()` to convert the text to valid JSON
- 3. Update the `request` text box in the application. Use `JSON.stringify()` to format the JSON string.
-
- ```javascript
- var sampleJson = "";
- function openFile(event) {
- var input = event.target;
-
- var reader = new FileReader();
- reader.onload = function(){
- sampleJson = reader.result.replace(/(\\r\\n|\\n|\\r)/gm, "");
- sampleJson = JSON.parse(sampleJson);
- document.getElementById('request').innerHTML = JSON.stringify(sampleJson, null, 2);
- };
- reader.readAsText(input.files[0]);
- };
- ```
-
-## Send a request to the Ink Recognizer API
-
-1. Within the `<script>` tag, create a function called `recognizeInk()`. This function will later call the API and update the page with the response. Add the code from the following steps within this function.
-
- ```javascript
- function recognizeInk() {
- // add the code from the below steps here
- }
- ```
-
- 1. Create variables for your endpoint URL, subscription key, and the sample JSON. Then create an `XMLHttpRequest` object to send the API request.
-
- ```javascript
- // Replace the below URL with the correct one for your subscription.
- // Your endpoint can be found in the Azure portal. For example: "https://<your-custom-subdomain>.cognitiveservices.azure.com";
- var SERVER_ADDRESS = process.env["INK_RECOGNITION_ENDPOINT"];
- var ENDPOINT_URL = SERVER_ADDRESS + "/inkrecognizer/v1.0-preview/recognize";
- var SUBSCRIPTION_KEY = process.env["INK_RECOGNITION_SUBSCRIPTION_KEY"];
- var xhttp = new XMLHttpRequest();
- ```
- 2. Create the return function for the `XMLHttpRequest` object. This function will parse the API response from a successful request, and display it in the application.
-
- ```javascript
- function returnFunction(xhttp) {
- var response = JSON.parse(xhttp.responseText);
- console.log("Response: %s ", response);
- document.getElementById('response').innerHTML = JSON.stringify(response, null, 2);
- }
- ```
- 3. Create the error function for the request object. This function logs the error to the console.
-
- ```javascript
- function errorFunction() {
- console.log("Error: %s, Detail: %s", xhttp.status, xhttp.responseText);
- }
- ```
-
- 4. Create a function for the request object's `onreadystatechange` property. When the request object's readiness state changes, the above return and error functions will be applied.
-
- ```javascript
- xhttp.onreadystatechange = function () {
- if (this.readyState === 4) {
- if (this.status === 200) {
- returnFunction(xhttp);
- } else {
- errorFunction(xhttp);
- }
- }
- };
- ```
-
- 5. Send the API request. Add your subscription key to the `Ocp-Apim-Subscription-Key` header, and set the `content-type` to `application/json`
-
- ```javascript
- xhttp.open("PUT", ENDPOINT_URL, true);
- xhttp.setRequestHeader("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);
- xhttp.setRequestHeader("content-type", "application/json");
- xhttp.send(JSON.stringify(sampleJson));
- };
- ```
-
-## Run the application and view the response
-
-This application can be run within your web browser. A successful response is returned in JSON format. You can also find the JSON response on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/javascript/InkRecognition/quickstart/example-response.json):
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [REST API reference](/rest/api/cognitiveservices/inkrecognizer/inkrecognizer)
-
-To see how the Ink Recognition API works in a digital inking app, take a look at the following sample applications on GitHub:
-* [C# and Universal Windows Platform(UWP)](https://go.microsoft.com/fwlink/?linkid=2089803)
-* [C# and Windows Presentation Foundation(WPF)](https://go.microsoft.com/fwlink/?linkid=2089804)
-* [Javascript web-browser app](https://go.microsoft.com/fwlink/?linkid=2089908)
-* [Java and Android mobile app](https://go.microsoft.com/fwlink/?linkid=2089906)
-* [Swift and iOS mobile app](https://go.microsoft.com/fwlink/?linkid=2089805)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/faq-stt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-stt.md
@@ -135,7 +135,7 @@ See [Speech Services Quotas and Limits](speech-services-quotas-and-limits.md).
For faster results, use one of the [regions](custom-speech-overview.md#set-up-your-azure-account) where dedicated hardware is available for training. In general, the service processes approximately 10 hours of audio data per day in regions with such hardware. It can only process about 1 hour of audio data per day in other regions. You can copy the fully trained model to another region using the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription). Training with just text is much faster and typically finishes within minutes.
-Some base models cannot be customized with audio data. For them the service will just use the text of the transcription for training and ignore the audio data. Training will then be finished much faster and results will be the same as training with just text.
+Some base models cannot be customized with audio data. For them the service will just use the text of the transcription for training and ignore the audio data. Training will then be finished much faster and results will be the same as training with just text. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
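As a rough illustration of the copy-to-region REST call mentioned above, here is a minimal JavaScript sketch. The `/speechtotext/v3.0/models/{id}/copyto` route and the `targetSubscriptionKey` payload are assumptions; confirm them against the linked CopyModelToSubscription reference before relying on them.

```javascript
// Hedged sketch: copy a fully trained Custom Speech model to a subscription in another region.
// The route and body shape below are assumptions taken from the v3.0 REST reference linked above.
const fetch = require("node-fetch"); // any HTTP client works; node-fetch is just an example

async function copyModelToSubscription(sourceRegion, modelId, sourceKey, targetSubscriptionKey) {
  const url = `https://${sourceRegion}.api.cognitive.microsoft.com/speechtotext/v3.0/models/${modelId}/copyto`;
  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": sourceKey, // key of the subscription that owns the model
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ targetSubscriptionKey }) // key of the subscription in the target region
  });
  console.log(`Copy request returned HTTP ${response.status}`);
  return response;
}
```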
## Accuracy testing
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
@@ -114,7 +114,7 @@ Consider these details:
* It can take several days for a training operation to complete. To improve the speed of training, make sure to create your Speech service subscription in a [region with dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training. > [!NOTE]
-> Not all base models support training with audio. If a base model does not support it, the Speech service will only use the text from the transcripts and ignore the audio.
+> Not all base models support training with audio. If a base model does not support it, the Speech service will only use the text from the transcripts and ignore the audio. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
### Add new words with pronunciation
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-speech-human-labeled-transcriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-human-labeled-transcriptions.md
@@ -20,7 +20,7 @@ If you're looking to improve recognition accuracy, especially issues that are ca
A large sample of transcription data is required to improve recognition, we suggest providing between 10 and 20 hours of transcription data. On this page, we'll review guidelines designed to help you create high-quality transcriptions. This guide is broken up by locale, with sections for US English, Mandarin Chinese, and German. > [!NOTE]
-> Not all base models support customization with audio files. If a base model does not support it, training will just use the text of the transcriptions in the same way as related text is used.
+> Not all base models support customization with audio files. If a base model does not support it, training will just use the text of the transcriptions in the same way as related text is used. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
## US English (en-US)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
@@ -138,7 +138,7 @@ After you've gathered your audio files and corresponding transcriptions, package
See [Set up your Azure account](custom-speech-overview.md#set-up-your-azure-account) for a list of recommended regions for your Speech service subscriptions. Setting up the Speech subscriptions in one of these regions will reduce the time it takes to train the model. In these regions, training can process about 10 hours of audio per day compared to just 1 hour per day in other regions. If model training cannot be completed within a week, the model will be marked as failed.
-Not all base models support training with audio data. If the base model does not support it, the service will ignore the audio and just train with the text of the transcriptions. In this case, training will be the same as training with related text.
+Not all base models support training with audio data. If the base model does not support it, the service will ignore the audio and just train with the text of the transcriptions. In this case, training will be the same as training with related text. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
## Related text data for training
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/spx-setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/spx-setup.md
@@ -35,6 +35,16 @@ If you output to a file, a text editor like Notepad or a web browser like Micros
#### [Linux Install](#tab/linuxinstall)
+The following Linux distributions are supported for x64 architectures using the Speech CLI:
+
+* CentOS 7/8
+* Debian 9/10
+* Red Hat Enterprise Linux (RHEL) 7/8
+* Ubuntu 16.04/18.04/20.04
+
+> [!NOTE]
+> Additional architectures are supported by the Speech SDK (not the Speech CLI). For more information, see [About the Speech SDK](../speech-sdk.md).
+ Follow these steps to install the Speech CLI on Linux on an x64 CPU: 1. Install [.NET Core 3.1](/dotnet/core/install/linux).
@@ -47,7 +57,7 @@ Type `spx` to see help for the Speech CLI.
> [!NOTE] > As an alternative to NuGet, > you can download the binaries at [zip archive](https://aka.ms/speech/spx-zips.zip),
-> extract `spx-netcore-30-linux-x64` to a new `~/spx` directory, type `sudo chmod +r+x spx` on the binary,
+> extract `spx-netcore-30-linux-x64.zip` to a new `~/spx` directory, type `sudo chmod +r+x spx` on the binary,
> and add the `~/spx` path to your PATH system variable.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/language-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
@@ -21,7 +21,7 @@ Language support varies by Speech service functionality. The following tables su
Both the Microsoft Speech SDK and the REST API support the following languages (locales).
-To improve accuracy, customization is offered for a subset of the languages through uploading **Audio + Human-labeled Transcripts** or **Related Text: Sentences**. To learn more about customization, see [Get started with Custom Speech](./custom-speech-overview.md).
+To improve accuracy, customization is offered for a subset of the languages through uploading **Audio + Human-labeled Transcripts** or **Related Text: Sentences**. Support for customization of the acoustic model with **Audio + Human-labeled Transcripts** is limited to the specific base models listed below. Other base models and languages will only use the text of the transcripts to train custom models just like with **Related Text: Sentences**. To learn more about customization, see [Get started with Custom Speech](./custom-speech-overview.md).
<!-- To get the AM and ML bits:
@@ -48,48 +48,48 @@ https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Arabic (United Arab Emirates) | `ar-AE` | Language model | | | Bulgarian (Bulgaria) | `bg-BG` | Language model | | | Catalan (Spain) | `ca-ES` | Language model | Yes |
-| Chinese (Cantonese, Traditional) | `zh-HK` | Acoustic model<br>Language model | Yes |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Acoustic model<br>Language model | Yes |
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Acoustic model<br>Language model | Yes |
+| Chinese (Cantonese, Traditional) | `zh-HK` | Acoustic model (20201015)<br>Language model | Yes |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Acoustic model (20200910)<br>Language model | Yes |
+| Chinese (Taiwanese Mandarin) | `zh-TW` | Acoustic model (20190701, 20201015)<br>Language model | Yes |
| Croatian (Croatia) | `hr-HR` | Language model | | | Czech (Czech Republic) | `cs-CZ` | Language Model | | | Danish (Denmark) | `da-DK` | Language model | Yes |
-| Dutch (Netherlands) | `nl-NL` | Language model | Yes |
-| English (Australia) | `en-AU` | Acoustic model<br>Language model | Yes |
-| English (Canada) | `en-CA` | Acoustic model<br>Language model | Yes |
+| Dutch (Netherlands) | `nl-NL` | Acoustic model (20201015)<br>Language model | Yes |
+| English (Australia) | `en-AU` | Acoustic model (20201019)<br>Language model | Yes |
+| English (Canada) | `en-CA` | Acoustic model (20201019)<br>Language model | Yes |
| English (Hong Kong) | `en-HK` | Language Model | |
-| English (India) | `en-IN` | Acoustic model<br>Language model | Yes |
+| English (India) | `en-IN` | Acoustic model (20200923)<br>Language model | Yes |
| English (Ireland) | `en-IE` | Language Model | |
-| English (New Zealand) | `en-NZ` | Acoustic model<br>Language model | Yes |
+| English (New Zealand) | `en-NZ` | Acoustic model (20201019)<br>Language model | Yes |
| English (Nigeria) | `en-NG` | Language Model | | | English (Philippines) | `en-PH` | Language Model | | | English (Singapore) | `en-SG` | Language Model | | | English (South Africa) | `en-ZA` | Language Model | |
-| English (United Kingdom) | `en-GB` | Acoustic model<br>Language model<br>Pronunciation| Yes |
-| English (United States) | `en-US` | Acoustic model<br>Language model<br>Pronunciation| Yes |
+| English (United Kingdom) | `en-GB` | Acoustic model (20201019)<br>Language model<br>Pronunciation| Yes |
+| English (United States) | `en-US` | Acoustic model (20201019)<br>Language model<br>Pronunciation| Yes |
| Estonian(Estonia) | `et-EE` | Language Model | | | Finnish (Finland) | `fi-FI` | Language model | Yes |
-| French (Canada) | `fr-CA` | Acoustic model<br>Language model | Yes |
-| French (France) | `fr-FR` | Acoustic model<br>Language model<br>Pronunciation| Yes |
-| German (Germany) | `de-DE` | Acoustic model<br>Language model<br>Pronunciation| Yes |
+| French (Canada) | `fr-CA` | Acoustic model (20201015)<br>Language model | Yes |
+| French (France) | `fr-FR` | Acoustic model (20201015)<br>Language model<br>Pronunciation| Yes |
+| German (Germany) | `de-DE` | Acoustic model (20190701, 20200619, 20201127)<br>Language model<br>Pronunciation| Yes |
| Greek (Greece) | `el-GR` | Language model | | | Gujarati (Indian) | `gu-IN` | Language model | |
-| Hindi (India) | `hi-IN` | Acoustic model<br>Language model | Yes |
+| Hindi (India) | `hi-IN` | Acoustic model (20200701)<br>Language model | Yes |
| Hungarian (Hungary) | `hu-HU` | Language Model | | | Irish(Ireland) | `ga-IE` | Language model | |
-| Italian (Italy) | `it-IT` | Acoustic model<br>Language model<br>Pronunciation| Yes |
-| Japanese (Japan) | `ja-JP` | Acoustic model<br>Language model | Yes |
-| Korean (Korea) | `ko-KR` | Acoustic model<br>Language model | Yes |
+| Italian (Italy) | `it-IT` | Acoustic model (20201016)<br>Language model<br>Pronunciation| Yes |
+| Japanese (Japan) | `ja-JP` | Language model | Yes |
+| Korean (Korea) | `ko-KR` | Acoustic model (20201015)<br>Language model | Yes |
| Latvian (Latvia) | `lv-LV` | Language model | | | Lithuanian (Lithuania) | `lt-LT` | Language model | | | Maltese(Malta) | `mt-MT` | Language model | | | Marathi (India) | `mr-IN` | Language model | | | Norwegian (Bokmål, Norway) | `nb-NO` | Language model | Yes | | Polish (Poland) | `pl-PL` | Language model | Yes |
-| Portuguese (Brazil) | `pt-BR` | Acoustic model<br>Language model<br>Pronunciation| Yes |
+| Portuguese (Brazil) | `pt-BR` | Acoustic model (20190620, 20201015)<br>Language model<br>Pronunciation| Yes |
| Portuguese (Portugal) | `pt-PT` | Language model | Yes | | Romanian (Romania) | `ro-RO` | Language model | |
-| Russian (Russia) | `ru-RU` | Acoustic model<br>Language model | Yes |
+| Russian (Russia) | `ru-RU` | Acoustic model (20200907)<br>Language model | Yes |
| Slovak (Slovakia) | `sk-SK` | Language model | | | Slovenian (Slovenia) | `sl-SI` | Language model | | | Spanish (Argentina) | `es-AR` | Language Model | |
@@ -104,13 +104,13 @@ https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| Spanish (Equatorial Guinea) | `es-GQ` | Language Model | | | Spanish (Guatemala) | `es-GT` | Language Model | | | Spanish (Honduras) | `es-HN` | Language Model | |
-| Spanish (Mexico) | `es-MX` | Acoustic model<br>Language model | Yes |
+| Spanish (Mexico) | `es-MX` | Acoustic model (20200907)<br>Language model | Yes |
| Spanish (Nicaragua) | `es-NI` | Language Model | | | Spanish (Panama) | `es-PA` | Language Model | | | Spanish (Paraguay) | `es-PY` | Language Model | | | Spanish (Peru) | `es-PE` | Language Model | | | Spanish (Puerto Rico) | `es-PR` | Language Model | |
-| Spanish (Spain) | `es-ES` | Acoustic model<br>Language model | Yes |
+| Spanish (Spain) | `es-ES` | Acoustic model (20201015)<br>Language model | Yes |
| Spanish (Uruguay) | `es-UY` | Language Model | | | Spanish (USA) | `es-US` | Language Model | | | Spanish (Venezuela) | `es-VE` | Language Model | |
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/includes/quickstarts/java-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/includes/quickstarts/java-sdk.md
@@ -674,7 +674,7 @@ Recognized entity: Bill Gates, entity category: Person, entity subcategory: null
Recognized entity: Paul Allen, entity category: Person, entity subcategory: null, confidence score: 0.990000. ```
-You can also use the Analyze operation to detect PII and key phrase extraction. See the [Analyze sample](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/AnalyzeTasksAsync.java) on GitHub.
+You can also use the Analyze operation to detect PII and key phrase extraction. See the [Analyze sample](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro) on GitHub.
# [Version 3.0](#tab/version-3)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/includes/quickstarts/nodejs-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/includes/quickstarts/nodejs-sdk.md
@@ -876,7 +876,7 @@ analyze_example(textAnalyticsClient);
- Entity Paul Allen of type Person ```
-You can also use the Analyze operation to detect PII and key phrase extraction. See the Analyze samples for [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/textanalytics/ai-text-analytics/samples/javascript/beginAnalyze.js) and [TypeScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/textanalytics/ai-text-analytics/samples/typescript/src/beginAnalyze.ts) on GitHub.
+You can also use the Analyze operation to detect PII and key phrase extraction. See the Analyze samples for [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/textanalytics/ai-text-analytics/samples/javascript) and [TypeScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/textanalytics/ai-text-analytics/samples/typescript/src) on GitHub.
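As a rough illustration of what those samples cover, the sketch below uses the beta `beginAnalyze` long-running operation from `@azure/ai-text-analytics`. The task names and the result-iteration shape are assumptions from the beta-era API, so defer to the linked JavaScript/TypeScript samples for the authoritative version.

```javascript
// Hedged sketch (beta-era API; names may have changed): run PII detection and
// key phrase extraction as a single Analyze operation.
const { TextAnalyticsClient, AzureKeyCredential } = require("@azure/ai-text-analytics");

async function analyze() {
  const client = new TextAnalyticsClient("<your-endpoint>", new AzureKeyCredential("<your-key>"));
  const documents = ["I met Bill Gates and Paul Allen in Seattle last week."];

  // Task names below are assumptions based on the preview samples linked above.
  const poller = await client.beginAnalyze(documents, {
    entityRecognitionPiiTasks: [{ modelVersion: "latest" }],
    keyPhraseExtractionTasks: [{ modelVersion: "latest" }]
  });
  const results = await poller.pollUntilDone();

  // Result shape is also an assumption here; dump each page/result for inspection.
  for await (const result of results) {
    console.log(JSON.stringify(result, null, 2));
  }
}

analyze().catch((err) => console.error(err));
```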
# [Version 3.0](#tab/version-3)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/includes/quickstarts/python-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/includes/quickstarts/python-sdk.md
@@ -993,7 +993,7 @@ Entity: Paul Allen
```
-You can also use the Analyze operation to detect PII and key phrase extraction. See the [Analyze sample](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_async.py) on GitHub.
+You can also use the Analyze operation to detect PII and key phrase extraction. See the [Analyze sample](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples) on GitHub.
# [Version 3.0](#tab/version-3)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/chat/sdk-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
@@ -39,7 +39,7 @@ The following list presents the set of features which are currently available in
| | Send and receive typing notifications when a member is actively typing a message in a chat thread <br/> *Not available when there are more than 20 members in a chat thread* | ✔️ | ✔️ | ✔️ | ✔️ | | | Get all messages in a chat thread <br/> *Unicode emojis supported* | ✔️ | ✔️ | ✔️ | ✔️ | | | Send emojis as part of message content | ✔️ | ✔️ | ✔️ | ✔️ |
-|Real-time signaling (enabled by proprietary signalling package)| Get notified when a user receives a new message in a chat thread they're a member of | ✔️ | ❌ | ❌ | ❌ |
+|Real-time signaling (enabled by proprietary signalling package**)| Get notified when a user receives a new message in a chat thread they're a member of | ✔️ | ❌ | ❌ | ❌ |
| | Get notified when a message has been edited by another member in a chat thread they're a member of | ✔️ | ❌ | ❌ | ❌ | | | Get notified when a message has been deleted by another member in a chat thread they're a member of | ✔️ | ❌ | ❌ | ❌ | | | Get notified when another chat thread member is typing | ✔️ | ❌ | ❌ | ❌ |
@@ -49,6 +49,8 @@ The following list presents the set of features which are currently available in
| | Monitor the quality and status of API requests made by your app and configure alerts via the portal | ✔️ | ✔️ | ✔️ | ✔️ | |Additional features | Use [Cognitive Services APIs](../../../cognitive-services/index.yml) along with chat client library to enable intelligent features - *language translation & sentiment analysis of the incoming message on a client, speech to text conversion to compose a message while the member speaks, etc.* | ✔️ | ✔️ | ✔️ | ✔️ |
+**The proprietary signalling package is implemented using web sockets. It will fall back to long polling if web sockets are unsupported.

+ ## JavaScript chat client library support by OS and browser The following table represents the set of supported browsers and versions which are currently available.
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/get-started.md
@@ -10,7 +10,7 @@
Last updated 09/30/2020
-zone_pivot_groups: acs-js-csharp-java-python
+zone_pivot_groups: acs-js-csharp-java-python-swift
# Quickstart: Add Chat to your App
@@ -34,6 +34,10 @@ Get started with Azure Communication Services by using the Communication Service
[!INCLUDE [Chat with C# client library](./includes/chat-csharp.md)] ::: zone-end + ## Clean up resources If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/includes/chat-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/includes/chat-java.md
@@ -51,7 +51,7 @@ In your POM file, reference the `azure-communication-chat` package with the Chat
<dependency> <groupId>com.azure</groupId> <artifactId>azure-communication-chat</artifactId>
- <version>1.0.0-beta.3</version>
+ <version>1.0.0-beta.4</version>
</dependency> ```
@@ -61,9 +61,8 @@ For authentication, your client needs to reference the `azure-communication-comm
<dependency> <groupId>com.azure</groupId> <artifactId>azure-communication-common</artifactId>
- <version>1.0.0-beta.3</version>
+ <version>1.0.0-beta.4</version>
</dependency>- ``` ## Object model
@@ -78,7 +77,7 @@ The following classes and interfaces handle some of the major features of the Az
| ChatThreadAsyncClient | This class is needed for the asynchronous Chat Thread functionality. You obtain an instance via the ChatAsyncClient, and use it to send/receive/update/delete messages, add/remove/get users, send typing notifications and read receipts. | ## Create a chat client
-To create a chat client, you'll use the Communications Service endpoint and the access token that was generated as part of pre-requisite steps. User access tokens enable you to build client applications that directly authenticate to Azure Communication Services. Once you generate these tokens on your server, pass them back to a client device. You need to use the CommunicationUserCredential class from the Common client library to pass the token to your chat client.
+To create a chat client, you'll use the Communications Service endpoint and the access token that was generated as part of pre-requisite steps. User access tokens enable you to build client applications that directly authenticate to Azure Communication Services. Once you generate these tokens on your server, pass them back to a client device. You need to use the CommunicationTokenCredential class from the Common client library to pass the token to your chat client.
When adding the import statements, be sure to only add imports from the com.azure.communication.chat and com.azure.communication.chat.models namespaces, and not from the com.azure.communication.chat.implementation namespace. In the App.java file that was generated via Maven, you can use the following code to begin with:
@@ -107,8 +106,8 @@ public class App
// User access token fetched from your trusted service String userAccessToken = "<USER_ACCESS_TOKEN>";
- // Create a CommunicationUserCredential with the given access token, which is only valid until the token is valid
- CommunicationUserCredential userCredential = new CommunicationUserCredential(userAccessToken);
+ // Create a CommunicationTokenCredential with the given access token, which is only valid until the token is valid
+ CommunicationTokenCredential userCredential = new CommunicationTokenCredential(userAccessToken);
// Initialize the chat client final ChatClientBuilder builder = new ChatClientBuilder();
@@ -127,28 +126,28 @@ Use the `createChatThread` method to create a chat thread.
`createChatThreadOptions` is used to describe the thread request. - Use `topic` to give a topic to this chat; Topic can be updated after the chat thread is created using the `UpdateThread` function.-- Use `members` to list the thread members to be added to the thread. `ChatThreadMember` takes the user you created in the [User Access Token](../../access-tokens.md) quickstart.
+- Use `participants` to list the thread participants to be added to the thread. `ChatParticipant` takes the user you created in the [User Access Token](../../access-tokens.md) quickstart.
-The response `chatThreadClient` is used to perform operations on the created chat thread: adding members to the chat thread, sending a message, deleting a message, etc.
+The response `chatThreadClient` is used to perform operations on the created chat thread: adding participants to the chat thread, sending a message, deleting a message, etc.
It contains a `chatThreadId` property which is the unique ID of the chat thread. The property is accessible by the public method .getChatThreadId(). ```Java
-List<ChatThreadMember> members = new ArrayList<ChatThreadMember>();
+List<ChatParticipant> participants = new ArrayList<ChatParticipant>();
-ChatThreadMember firstThreadMember = new ChatThreadMember()
+ChatParticipant firstThreadParticipant = new ChatParticipant()
.setUser(firstUser)
- .setDisplayName("Member Display Name 1");
+ .setDisplayName("Participant Display Name 1");
-ChatThreadMember secondThreadMember = new ChatThreadMember()
+ChatParticipant secondThreadParticipant = new ChatParticipant()
.setUser(secondUser)
- .setDisplayName("Member Display Name 2");
+ .setDisplayName("Participant Display Name 2");
-members.add(firstThreadMember);
-members.add(secondThreadMember);
+participants.add(firstThreadParticipant);
+participants.add(secondThreadParticipant);
CreateChatThreadOptions createChatThreadOptions = new CreateChatThreadOptions() .setTopic("Topic")
- .setMembers(members);
+ .setParticipants(participants);
ChatThreadClient chatThreadClient = chatClient.createChatThread(createChatThreadOptions); String chatThreadId = chatThreadClient.getChatThreadId(); ```
@@ -159,7 +158,7 @@ Use the `sendMessage` method to send a message to the thread you just created, i
`sendChatMessageOptions` is used to describe the chat message request. - Use `content` to provide the chat message content.-- Use `priority` to specify the chat message priority level, such as 'Normal' or 'High'; this property can be used to have a UI indicator for the recipient user in your app, to bring attention to the message or execute custom business logic.
+- Use `type` to specify the chat message content type, TEXT or HTML.
- Use `senderDisplayName` to specify the display name of the sender. The response `sendChatMessageResult` contains an `id`, which is the unique ID of the message.
@@ -167,7 +166,7 @@ The response `sendChatMessageResult` contains an `id`, which is the unique ID of
```Java SendChatMessageOptions sendChatMessageOptions = new SendChatMessageOptions() .setContent("Message content")
- .setPriority(ChatMessagePriority.NORMAL)
+ .setType(ChatMessageType.TEXT)
.setSenderDisplayName("Sender Display Name"); SendChatMessageResult sendChatMessageResult = chatThreadClient.sendMessage(sendChatMessageOptions);
@@ -177,7 +176,7 @@ String chatMessageId = sendChatMessageResult.getId();
## Get a chat thread client
-The `getChatThreadClient` method returns a thread client for a thread that already exists. It can be used for performing operations on the created thread: add members, send message, etc.
+The `getChatThreadClient` method returns a thread client for a thread that already exists. It can be used for performing operations on the created thread: add participants, send message, etc.
`chatThreadId` is the unique ID of the existing chat thread. ```Java
@@ -203,7 +202,7 @@ chatThreadClient.listMessages().iterableByPage().forEach(resp -> {
`listMessages` returns different types of messages which can be identified by `chatMessage.getType()`. These types are: -- `Text`: Regular chat message sent by a thread member.
+- `Text`: Regular chat message sent by a thread participant.
- `ThreadActivity/TopicUpdate`: System message that indicates the topic has been updated.
@@ -213,44 +212,44 @@ chatThreadClient.listMessages().iterableByPage().forEach(resp -> {
For more details, see [Message Types](../../../concepts/chat/concepts.md#message-types).
-## Add a user as member to the chat thread
+## Add a user as participant to the chat thread
-Once a chat thread is created, you can then add and remove users from it. By adding users, you give them access to send messages to the chat thread, and add/remove other members. You'll need to start by getting a new access token and identity for that user. Before calling addMembers method, ensure that you have acquired a new access token and identity for that user. The user will need that access token in order to initialize their chat client.
+Once a chat thread is created, you can then add and remove users from it. By adding users, you give them access to send messages to the chat thread, and add/remove other participants. You'll need to start by getting a new access token and identity for that user. Before calling addParticipants method, ensure that you have acquired a new access token and identity for that user. The user will need that access token in order to initialize their chat client.
-Use `addMembers` method to add thread members to the thread identified by threadId.
+Use `addParticipants` method to add participants to the thread identified by threadId.
-- Use `members` to list the members to be added to the chat thread.-- `user`, required, is the CommunicationUser you've created by the CommunicationIdentityClient in the [User Access Token](../../access-tokens.md) quickstart.-- `display_name`, optional, is the display name for the thread member.-- `share_history_time`, optional, is the time from which the chat history is shared with the member. To share history since the inception of the chat thread, set this property to any date equal to, or less than the thread creation time. To share no history previous to when the member was added, set it to the current date. To share partial history, set it to the required date.
+- Use `participants` to list the participants to be added to the chat thread.
+- `user`, required, is the CommunicationUserIdentifier you've created by the CommunicationIdentityClient in the [User Access Token](../../access-tokens.md) quickstart.
+- `display_name`, optional, is the display name for the thread participant.
+- `share_history_time`, optional, is the time from which the chat history is shared with the participant. To share history since the inception of the chat thread, set this property to any date equal to, or less than the thread creation time. To share no history previous to when the participant was added, set it to the current date. To share partial history, set it to the required date.
```Java
-List<ChatThreadMember> members = new ArrayList<ChatThreadMember>();
+List<ChatParticipant> participants = new ArrayList<ChatParticipant>();
-ChatThreadMember firstThreadMember = new ChatThreadMember()
+ChatParticipant firstThreadParticipant = new ChatParticipant()
.setUser(user1) .setDisplayName("Display Name 1");
-ChatThreadMember secondThreadMember = new ChatThreadMember()
+ChatParticipant secondThreadParticipant = new ChatParticipant()
.setUser(user2) .setDisplayName("Display Name 2");
-members.add(firstThreadMember);
-members.add(secondThreadMember);
+participants.add(firstThreadParticipant);
+participants.add(secondThreadParticipant);
-AddChatThreadMembersOptions addChatThreadMembersOptions = new AddChatThreadMembersOptions()
- .setMembers(members);
-chatThreadClient.addMembers(addChatThreadMembersOptions);
+AddChatParticipantsOptions addChatParticipantsOptions = new AddChatParticipantsOptions()
+ .setParticipants(participants);
+chatThreadClient.addParticipants(addChatParticipantsOptions);
``` ## Remove user from a chat thread
-Similar to adding a user to a thread, you can remove users from a chat thread. To do that, you need to track the user identities of the members you have added.
+Similar to adding a user to a thread, you can remove users from a chat thread. To do that, you need to track the user identities of the participants you have added.
-Use `removeMember`, where `user` is the CommunicationUser you've created.
+Use `removeParticipant`, where `user` is the CommunicationUserIdentifier you've created.
```Java
-chatThreadClient.removeMember(user);
+chatThreadClient.removeParticipant(user);
``` ## Run the code
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/includes/chat-js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/includes/chat-js.md
@@ -17,8 +17,8 @@ Before you get started, make sure to:
- Create an Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Install [Node.js](https://nodejs.org/en/download/) Active LTS and Maintenance LTS versions (8.11.1 and 10.14.1 recommended).-- Create an Azure Communication Services resource. For details, see [Create an Azure Communication Resource](../../create-communication-resource.md). You'll need to record your resource **endpoint** for this quickstart.-- A [User Access Token](../../access-tokens.md). Be sure to set the scope to "chat", and note the token string as well as the userId string.
+- Create an Azure Communication Services resource. For details, see [Create an Azure Communication Resource](../../create-communication-resource.md). You'll need to **record your resource endpoint** for this quickstart.
+- Create *three* ACS Users and issue each of them a [User Access Token](../../access-tokens.md). Be sure to set the scope to **chat**, and **note the token string as well as the userId string**. The full demo creates a thread with two initial participants and then adds a third participant to the thread.
## Setting up
@@ -36,8 +36,6 @@ Run `npm init -y` to create a **package.json** file with default settings.
npm init -y ```
-Use a text editor to create a file called **start-chat.js** in the project root directory. You'll add all the source code for this quickstart to this file in the following sections.
- ### Install the packages Use the `npm install` command to install the below Communication Services client libraries for JavaScript.
@@ -65,8 +63,6 @@ npm install webpack webpack-cli webpack-dev-server --save-dev
Create an **https://docsupdatetracker.net/index.html** file in the root directory of your project. We'll use this file as a template to add chat capability using the Azure Communication Chat client library for JavaScript.
-Here is the code:
- ```html <!DOCTYPE html> <html>
@@ -80,13 +76,33 @@ Here is the code:
</body> </html> ```
-Create a file in the root directory of your project called **client.js** to contain the application logic for this quickstart.
+
+Create a file in the root directory of your project called **client.js** to contain the application logic for this quickstart.
### Create a chat client
-To create a chat client in your web app, you'll use the Communications Service endpoint and the access token that was generated as part of pre-requisite steps. User access tokens enable you to build client applications that directly authenticate to Azure Communication Services. Once you generate these tokens on your server, pass them back to a client device. You need to use the `AzureCommunicationUserCredential` class from the `Common client library` to pass the token to your chat client.
+To create a chat client in your web app, you'll use the Communications Service **endpoint** and the **access token** that was generated as part of pre-requisite steps.
+
+User access tokens enable you to build client applications that directly authenticate to Azure Communication Services.
+
+##### Server vs. client side
+
+We recommend generating access tokens using a server-side component that passes them to the client application. In this scenario the server side would be responsible for creating and managing users and issuing their tokens. The client side can then receive access tokens from the service and use them to authenticate the Azure Communication Services client libraries.
+
+Tokens can also be issued on the client side using the Azure Communication Administration library for JavaScript. In this scenario the client side would need to be aware of users in order to issue their tokens.
+
+For more detail, see [Client and Server Architecture](../../../concepts/client-and-server-architecture.md).
+
+In the diagram below the client side application receives an access token from a trusted service tier. The application then uses the token to authenticate Communication Services libraries. Once authenticated, the application can now use the Communication Services client side libraries to perform operations such as chatting with other users.
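As a minimal sketch of that flow (the service URL and response shape below are hypothetical, not part of this quickstart), the client could fetch a chat-scoped token and user ID from your own trusted endpoint before constructing the chat client:

```javascript
// Hypothetical trusted-service call: your own backend issues the chat-scoped
// access token and user ID, and the browser only consumes them.
async function getChatIdentityFromServer() {
  const response = await fetch('https://<YOUR_TRUSTED_SERVICE>/api/chat-identity'); // hypothetical endpoint
  if (!response.ok) {
    throw new Error(`Token service returned ${response.status}`);
  }
  const { token, userId } = await response.json(); // assumed response shape
  return { token, userId };
}
```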
-Create a **client.js** file in the root directory of your project. We'll use this file to add chat capability using the Azure Communication Chat client library for JavaScript.
+
+##### Instructions
+This demo does not cover creating a service tier for your chat application.
+
+If you have not generated users and their tokens, follow the instructions here to do so: [User Access Token](../../access-tokens.md). Remember to set the scope to "chat" and not "voip".
+
+Inside **client.js** use the endpoint and access token in the code below to add chat capability using the Azure Communication Chat client library for JavaScript.
```JavaScript
@@ -95,17 +111,18 @@ import { AzureCommunicationUserCredential } from '@azure/communication-common';
// Your unique Azure Communication service endpoint let endpointUrl = 'https://<RESOURCE_NAME>.communication.azure.com';
+// The user access token generated as part of the pre-requisites
let userAccessToken = '<USER_ACCESS_TOKEN>'; let chatClient = new ChatClient(endpointUrl, new AzureCommunicationUserCredential(userAccessToken)); console.log('Azure Communication Chat client created!'); ```
-Replace **ENDPOINT** with the one created before based on the [Create an Azure Communication Resource](../../create-communication-resource.md) documentation.
-Replace **USER_ACCESS_TOKEN** with a token issued based on the [User Access Token](../../access-tokens.md) documentation.
-Add this code to **client.js** file
+- Replace **endpointUrl** with the Communication Services resource endpoint, see [Create an Azure Communication Resource](../../create-communication-resource.md) if you have not already done so.
+- Replace **userAccessToken** with the token that you issued.
### Run the code+ Use the `webpack-dev-server` to build and run your app. Run the following command to bundle and host the application on a local webserver: ```console npx webpack-dev-server --entry ./client.js --output bundle.js --debug --devtool inline-source-map
@@ -133,55 +150,54 @@ Use the `createThread` method to create a chat thread.
`createThreadRequest` is used to describe the thread request: - Use `topic` to give a topic to this chat; Topic can be updated after the chat thread is created using the `UpdateThread` function. -- Use `members` to list the members to be added to the chat thread;
+- Use `participants` to list the participants to be added to the chat thread.
-When resolved, `createChatThread` method returns `threadId` which is used to perform operations on the newly created chat thread like adding members to the chat thread, sending messages, deleting message, etc.
+When resolved, `createChatThread` method returns a `CreateChatThreadResponse`. This model contains a `chatThread` property where you can access the `id` of the newly created thread. You can then use the `id` to get an instance of a `ChatThreadClient`. The `ChatThreadClient` can then be used to perform operation within the thread such as sending messages or listing participants.
-```Javascript
+```JavaScript
async function createChatThread() {
- let createThreadRequest = {
- topic: 'Preparation for London conference',
- members: [{
- user: { communicationUserId: '<USER_ID_FOR_JACK>' },
- displayName: 'Jack'
- }, {
- user: { communicationUserId: '<USER_ID_FOR_GEETA>' },
- displayName: 'Geeta'
- }]
- };
- let chatThreadClient= await chatClient.createChatThread(createThreadRequest);
- let threadId = chatThreadClient.threadId;
- return threadId;
-}
+ let createThreadRequest = {
+ topic: 'Preparation for London conference',
+ participants: [{
+ user: { communicationUserId: '<USER_ID_FOR_JACK>' },
+ displayName: 'Jack'
+ }, {
+ user: { communicationUserId: '<USER_ID_FOR_GEETA>' },
+ displayName: 'Geeta'
+ }]
+ };
+ let createThreadResponse = await chatClient.createChatThread(createThreadRequest);
+ let threadId = createThreadResponse.chatThread.id;
+ return threadId;
+ }
createChatThread().then(async threadId => {
- console.log(`Thread created:${threadId}`);
- // PLACEHOLDERS
- // <CREATE CHAT THREAD CLIENT>
- // <RECEIVE A CHAT MESSAGE FROM A CHAT THREAD>
- // <SEND MESSAGE TO A CHAT THREAD>
- // <LIST MESSAGES IN A CHAT THREAD>
- // <ADD NEW MEMBER TO THREAD>
- // <LIST MEMBERS IN A THREAD>
- // <REMOVE MEMBER FROM THREAD>
-});
+ console.log(`Thread created:${threadId}`);
+ // PLACEHOLDERS
+ // <CREATE CHAT THREAD CLIENT>
+ // <RECEIVE A CHAT MESSAGE FROM A CHAT THREAD>
+ // <SEND MESSAGE TO A CHAT THREAD>
+ // <LIST MESSAGES IN A CHAT THREAD>
+ // <ADD NEW PARTICIPANT TO THREAD>
+ // <LIST PARTICIPANTS IN A THREAD>
+ // <REMOVE PARTICIPANT FROM THREAD>
+ });
```
-Replace **USER_ID_FOR_JACK** and **USER_ID_FOR_GEETA** with the user ids obtained from the previous step ( Create users and issue [User Access Tokens](../../access-tokens.md))
+Replace **USER_ID_FOR_JACK** and **USER_ID_FOR_GEETA** with the user IDs obtained from creating users and tokens ([User Access Tokens](../../access-tokens.md))
-When you refresh your browser tab you should see the following in the console
+When you refresh your browser tab you should see the following in the console:
```console
-Thread created: <threadId>
+Thread created: <thread_id>
``` ## Get a chat thread client
-The `getChatThreadClient` method returns a `chatThreadClient` for a thread that already exists. It can be used for performing operations on the created thread: add members, send message, etc. threadId is the unique ID of the existing chat thread.
+The `getChatThreadClient` method returns a `chatThreadClient` for a thread that already exists. It can be used for performing operations on the created thread: add participants, send message, etc. threadId is the unique ID of the existing chat thread.
```JavaScript- let chatThreadClient = await chatClient.getChatThreadClient(threadId);
-console.log(`Chat Thread client for threadId:${chatThreadClient.threadId}`);
+console.log(`Chat Thread client for threadId:${threadId}`);
``` Add this code in place of the `<CREATE CHAT THREAD CLIENT>` comment in **client.js**, refresh your browser tab and check the console, you should see:
@@ -202,7 +218,7 @@ Use `sendMessage` method to send a chat message to the thread you just created,
- Use `priority` to specify the chat message priority level, such as 'Normal' or 'High'; this property can be used to have UI indicator for the recipient user in your app to bring attention to the message or execute custom business logic. - Use `senderDisplayName` to specify the display name of the sender;
-The response `sendChatMessageResult` contains an "id", which is the unique ID of that message.
+The response `sendChatMessageResult` contains an ID, which is the unique ID of that message.
```JavaScript
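// A minimal sketch of the send-message call (the request/options shape below is an
// assumption for the beta client library described above; check the SDK reference
// for the exact signature). This would go in place of the
// <SEND MESSAGE TO A CHAT THREAD> placeholder in client.js.
let sendMessageRequest = { content: 'Hello Geeta! Can you share the deck for the conference?' };
let sendMessageOptions = { senderDisplayName: 'Jack' };
let sendChatMessageResult = await chatThreadClient.sendMessage(sendMessageRequest, sendMessageOptions);
console.log(`Message sent!, message id: ${sendChatMessageResult.id}`);
```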
@@ -248,16 +264,16 @@ Alternatively you can retrieve chat messages by polling the `listMessages` metho
let pagedAsyncIterableIterator = await chatThreadClient.listMessages(); let nextMessage = await pagedAsyncIterableIterator.next();
- while (!nextMessage.done) {
- let chatMessage = nextMessage.value;
- console.log(`Message :${chatMessage.content}`);
- // your code here
- nextMessage = await pagedAsyncIterableIterator.next();
- }
+ while (!nextMessage.done) {
+ let chatMessage = nextMessage.value;
+ console.log(`Message :${chatMessage.content}`);
+ // your code here
+ nextMessage = await pagedAsyncIterableIterator.next();
+ }
``` Add this code in place of the `<LIST MESSAGES IN A CHAT THREAD>` comment in **client.js**.
-Refresh your tab, in the console you should find list of messages sent in this chat thread.
+Refresh your tab, in the console you should find the list of messages sent in this chat thread.
`listMessages` returns the latest version of the message, including any edits or deletes that happened to the message using `updateMessage` and `deleteMessage`.
@@ -265,48 +281,49 @@ For deleted messages `chatMessage.deletedOn` returns a datetime value indicating
`listMessages` returns different types of messages which can be identified by `chatMessage.type`. These types are: -- `Text`: Regular chat message sent by a thread member.
+- `Text`: Regular chat message sent by a thread participant.
- `ThreadActivity/TopicUpdate`: System message that indicates the topic has been updated. -- `ThreadActivity/AddMember`: System message that indicates one or more members have been added to the chat thread.
+- `ThreadActivity/AddParticipant`: System message that indicates one or more participants have been added to the chat thread.
-- `ThreadActivity/RemoveMember`: System message that indicates a member has been removed from the chat thread.
+- `ThreadActivity/RemoveParticipant`: System message that indicates a participant has been removed from the chat thread.
For more details, see [Message Types](../../../concepts/chat/concepts.md#message-types).
-## Add a user as member to the chat thread
+## Add a user as a participant to the chat thread
+
+Once a chat thread is created, you can then add and remove users from it. By adding users, you give them access to send messages to the chat thread, and add/remove other participants.
-Once a chat thread is created, you can then add and remove users from it. By adding users, you give them access to send messages to the chat thread, and add/remove other members.
-Before calling `addMembers` method, ensure that you have acquired a new access token and identity for that user. The user will need that access token in order to initialize their chat client.
+Before calling the `addParticipants` method, ensure that you have acquired a new access token and identity for that user. The user will need that access token in order to initialize their chat client.
-`addMembersRequest` describes the request object wherein `members` lists the members to be added to the chat thread;
+`addParticipantsRequest` describes the request object wherein `participants` lists the participants to be added to the chat thread;
- `user`, required, is the communication user to be added to the chat thread.-- `displayName`, optional, is the display name for the thread member.-- `shareHistoryTime`, optional, is the time from which the chat history is shared with the member. To share history since the inception of the chat thread, set this property to any date equal to, or less than the thread creation time. To share no history previous to when the member was added, set it to the current date. To share partial history, set it to the date of your choice.
+- `displayName`, optional, is the display name for the thread participant.
+- `shareHistoryTime`, optional, is the time from which the chat history is shared with the participant. To share history since the inception of the chat thread, set this property to any date equal to, or less than the thread creation time. To share no history previous to when the participant was added, set it to the current date. To share partial history, set it to the date of your choice.
```JavaScript
-let addMembersRequest =
+let addParticipantsRequest =
{
- members: [
+ participants: [
{
- user: { communicationUserId: '<NEW_MEMBER_USER_ID>' },
+ user: { communicationUserId: '<NEW_PARTICIPANT_USER_ID>' },
displayName: 'Jane' } ] };
-await chatThreadClient.addMembers(addMembersRequest);
+await chatThreadClient.addParticipants(addParticipantsRequest);
```
-Replace **NEW_MEMBER_USER_ID** with a [new user Id](../../access-tokens.md)
-Add this code in place of the `<ADD NEW MEMBER TO THREAD>` comment in **client.js**
+Replace **NEW_PARTICIPANT_USER_ID** with a [new user ID](../../access-tokens.md)
+Add this code in place of the `<ADD NEW PARTICIPANT TO THREAD>` comment in **client.js**
## List users in a chat thread ```JavaScript
-async function listThreadMembers() {
- let pagedAsyncIterableIterator = await chatThreadClient.listMembers();
+async function listParticipants() {
+ let pagedAsyncIterableIterator = await chatThreadClient.listParticipants();
let next = await pagedAsyncIterableIterator.next(); while (!next.done) { let user = next.value;
@@ -314,20 +331,20 @@ async function listThreadMembers() {
next = await pagedAsyncIterableIterator.next(); } }
-await listThreadMembers();
+await listParticipants();
```
-Add this code in place of the `<LIST MEMBERS IN A THREAD>` comment in **client.js**, refresh your browser tab and check the console, you should see information about users in a thread.
+Add this code in place of the `<LIST PARTICIPANTS IN A THREAD>` comment in **client.js**, refresh your browser tab and check the console, you should see information about users in a thread.
## Remove user from a chat thread
-Similar to adding a member, you can remove members from a chat thread. In order to remove, you'll need to track the ids of the members you have added.
+Similar to adding a participant, you can remove participants from a chat thread. In order to remove, you'll need to track the IDs of the participants you have added.
-Use `removeMember` method where `member` is the communication user to be removed from the thread.
+Use `removeParticipant` method where `participant` is the communication user to be removed from the thread.
```JavaScript
-await chatThreadClient.removeMember({ communicationUserId: <MEMBER_ID> });
-await listThreadMembers();
+await chatThreadClient.removeParticipant({ communicationUserId: <PARTICIPANT_ID> });
+await listParticipants();
```
-Replace **MEMBER_ID** with a User ID used in the previous step (<NEW_MEMBER_USER_ID>).
-Add this code in place of the `<REMOVE MEMBER FROM THREAD>` comment in **client.js**,
+Replace **PARTICIPANT_ID** with a User ID used in the previous step (<NEW_PARTICIPANT_USER_ID>).
+Add this code in place of the `<REMOVE PARTICIPANT FROM THREAD>` comment in **client.js**,
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/includes/chat-swift https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/includes/chat-swift.md
@@ -0,0 +1,270 @@
+
+ Title: include file
+description: include file
+++++ Last updated : 2/11/2020+++++
+## Prerequisites
+Before you get started, make sure to:
+
+- Create an Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Install [Xcode](https://developer.apple.com/xcode/) and [Cocoapods](https://cocoapods.org/). We will use Xcode to create an iOS application for the quickstart and Cocoapods to install dependencies.
+- Create an Azure Communication Services resource. For details, see [Create an Azure Communication Resource](../../create-communication-resource.md). You'll need to **record your resource endpoint** for this quickstart.
+- Create **two** ACS Users and issue each of them a [User Access Token](../../access-tokens.md). Be sure to set the scope to **chat**, and **note the token string as well as the userId string**. In this quickstart we will create a thread with an initial participant and then add a second participant to the thread.
+
+## Setting up
+
+### Create a new iOS application
+
+Open Xcode and select `Create a new Xcode project`.
+
+On the next window, select `iOS` as the platform and `App` for the template.
+
+When choosing options enter `ChatQuickstart` as the project name.
+Select `Storyboard` as the interface, `UIKit App Delegate` as the life cycle, and `Swift` as the language.
+
+Click next and choose the directory where you want the project to be created.
+
+### Install the libraries
+
+We'll use Cocoapods to install the necessary Communication Services dependencies.
+
+From the command line navigate inside the root directory of the `ChatQuickstart` iOS project.
+
+Create a Podfile:
+`pod init`
+
+Open the Podfile and add the following dependencies to the `ChatQuickstart` target:
+```
+pod 'AzureCommunication', '~> 1.0.0-beta.8'
+pod 'AzureCommunicationChat', '~> 1.0.0-beta.8'
+```
+
+Install the dependencies; this will also create an Xcode workspace:
+`pod install`
+
+### Set up the placeholders
+
+Open the workspace file `ChatQuickstart.xcworkspace` in Xcode and then open `ViewController.swift`.
+
+In this quickstart, we will add our code to `ViewController` and view the output in the Xcode console. This quickstart does not address building a UI in iOS.
+
+At the top of `ViewController.swift`, import the `AzureCommunication` and `AzureCommunicationChat` libraries:
+
+```
+import AzureCommunication
+import AzureCommunicationChat
+```
+
+Copy the following code into the `viewDidLoad()` method of `ViewController`:
+
+```
+override func viewDidLoad() {
+ super.viewDidLoad()
+ // Do any additional setup after loading the view.
+
+ let semaphore = DispatchSemaphore(value: 0)
+ DispatchQueue.global(qos: .background).async {
+ do {
+ // <CREATE A CHAT CLIENT>
+
+ // <CREATE A CHAT THREAD>
+
+ // <CREATE A CHAT THREAD CLIENT>
+
+ // <SEND A MESSAGE>
+
+ // <ADD A USER>
+
+ // <LIST USERS>
+
+ // <REMOVE A USER>
+ } catch {
+ print("Quickstart failed: \(error.localizedDescription)")
+ }
+ }
+}
+```
+
+We'll use a semaphore to synchronize our code for demonstration purposes. In following steps, we'll replace the placeholders with sample code using the Azure Communication Services Chat library.
++
+### Create a chat client
+
+Replace the comment `<CREATE A CHAT CLIENT>` with the following code:
+
+```
+let endpoint = "<ACS_RESOURCE_ENDPOINT>"
+let credential =
+    try CommunicationTokenCredential(
+        token: "<ACCESS_TOKEN>"
+    )
+let options = AzureCommunicationChatClientOptions()
+
+let chatClient = try ChatClient(
+    endpoint: endpoint,
+    credential: credential,
+    withOptions: options
+)
+```
+
+Replace `<ACS_RESOURCE_ENDPOINT>` with the endpoint of your ACS Resource.
+Replace `<ACCESS_TOKEN>` with a valid ACS access token.
+
+## Object model
+The following classes and interfaces handle some of the major features of the Azure Communication Services Chat client library for JavaScript.
+
+| Name | Description |
+| -- | - |
+| ChatClient | This class is needed for the Chat functionality. You instantiate it with your subscription information, and use it to create, get and delete threads. |
+| ChatThreadClient | This class is needed for the Chat Thread functionality. You obtain an instance via the ChatClient, and use it to send/receive/update/delete messages, add/remove/get users, send typing notifications and read receipts, subscribe chat events. |
+
+## Start a chat thread
+
+Now we will use our `ChatClient` to create a new thread with an initial user.
+
+Replace the comment `<CREATE A CHAT THREAD>` with the following code:
+
+```
+let request = CreateThreadRequest(
+ topic: "Quickstart",
+ participants: [
+ Participant(
+ id: "<USER_ID>",
+ displayName: "Jack"
+ )
+ ]
+)
+
+var threadId: String?
+chatClient.create(thread: request) { result, _ in
+ switch result {
+ case let .success(result):
+ threadId = result.thread?.id
+
+ case .failure:
+ fatalError("Failed to create thread.")
+ }
+ semaphore.signal()
+}
+semaphore.wait()
+```
+
+Replace `<USER_ID>` with a valid Communication Services user ID.
+
+We're using a semaphore here to wait for the completion handler before continuing. We will use the `threadId` from the response returned to the completion handler in later steps.
+
+## Get a chat thread client
+
+Now that we have created a Chat thread we'll obtain a `ChatThreadClient` to perform operations within the thread.
+
+Replace the comment `<CREATE A CHAT THREAD CLIENT>` with the following code:
+
+```
+let chatThreadClient = try chatClient.createClient(forThread: threadId!)
+```
+
+## Send a message to a chat thread
+
+Replace the comment `<SEND A MESSAGE>` with the following code:
+
+```
+let message = SendChatMessageRequest(
+ content: "Hello!",
+ senderDisplayName: "Jack"
+)
+
+chatThreadClient.send(message: message) { result, _ in
+ switch result {
+ case let .success(result):
+ print("Message sent, message id: \(result.id)")
+ case .failure:
+ print("Failed to send message")
+ }
+ semaphore.signal()
+}
+semaphore.wait()
+```
+
+First, we construct the `SendChatMessageRequest`, which contains the message content and the sender's display name (and can optionally contain the share history time). The response returned to the completion handler contains the ID of the message that was sent.
+
+## Add a user as a participant to the chat thread
+
+Replace the comment `<ADD A USER>` with the following code:
+
+```
+let user = Participant(
+ id: "<USER_ID>",
+ displayName: "Jane"
+)
+
+chatThreadClient.add(participants: [user]) { result, _ in
+ switch result {
+ case let .success(result):
+        (result.errors == nil) ? print("Added participant") : print("Error adding participant")
+ case .failure:
+ print("Failed to list participants")
+ }
+ semaphore.signal()
+}
+semaphore.wait()
+```
+
+Replace `<USER_ID>` with the ACS user ID of the user to be added.
+
+When adding a participant to a thread, the response returned to the completion handler may contain errors. These errors represent a failure to add particular participants.
+
+## List users in a thread
+
+Replace the `<LIST USERS>` comment with the following code:
+
+```
+chatThreadClient.listParticipants { result, _ in
+ switch result {
+ case let .success(participants):
+ var iterator = participants.syncIterator
+ while let participant = iterator.next() {
+ print(participant.user.identifier)
+ }
+ case .failure:
+ print("Failed to list participants")
+ }
+ semaphore.signal()
+}
+semaphore.wait()
+```
+
+## Remove user from a chat thread
+
+Replace the `<REMOVE A USER>` comment with the following code:
+
+```swift
+chatThreadClient
+ .remove(
+ participant: "<USER_ID>"
+ ) { result, _ in
+ switch result {
+ case .success:
+ print("Removed user from the thread.")
+ case .failure:
+ print("Failed to remove user from the thread.")
+ }
+ }
+```
+
+Replace `<USER_ID>` with the Communication Services user ID of the participant being removed.
+
+## Run the code
+
+In Xcode, select the Run button to build and run the project. In the console, you can view the output from the code and the logger output from the `ChatClient`.
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/managed-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/managed-identity.md
@@ -0,0 +1,125 @@
+
+ Title: Use managed identities in Communication Services (.NET)
+
+description: Managed identities let you authorize Azure Communication Services access from applications running in Azure VMs, function apps, and other resources.
+Last updated : 12/04/2020
+# Use managed identities (.NET)
+
+Get started with Azure Communication Services by using managed identities in .NET. The Communication Services Administration and SMS client libraries support Azure Active Directory (Azure AD) authentication with [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+
+This quickstart shows you how to authorize access to the Administration and SMS client libraries from an Azure environment that supports managed identities. It also describes how to test your code in a development environment.
+
+## Prerequisites
+
+ - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
+ - An active Communication Services resource and connection string. [Create a Communication Services resource](https://docs.microsoft.com/azure/communication-services/quickstarts/create-communication-resource?tabs=windows&pivots=platform-azp).
+
+## Setting up
+
+### Enable managed identities on a virtual machine or App service
+
+Managed identities must be enabled on the Azure resources from which you want to authorize access. To learn how to enable managed identities for Azure resources, see one of these articles:
+
+- [Azure portal](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)
+- [Azure PowerShell](../../active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md)
+- [Azure CLI](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md)
+- [Azure Resource Manager template](../../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md)
+- [Azure Resource Manager client libraries](../../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md)
+- [App services](../../app-service/overview-managed-identity.md)
+
+#### Assign Azure roles with the Azure portal
+
+1. Navigate to the Azure portal.
+1. Navigate to the Azure Communication Service resource.
+1. Navigate to Access Control (IAM) menu -> + Add -> Add role assignment.
+1. Select the role "Contributor" (this is the only supported role).
+1. Select "User assigned managed identity" (or a "System assigned managed identity") then select the desired identity. Save your selection.
+
+![Managed identity role](media/managed-identity-assign-role.png)
+
+#### Assign Azure roles with PowerShell
+
+To assign roles and permissions using PowerShell, see [Add or remove Azure role assignments using Azure PowerShell](../../../articles/role-based-access-control/role-assignments-powershell.md)
+
+## Add managed identity to your Communication Services solution
+
+### Install the client library packages
+
+```console
+dotnet add package Azure.Communication.Identity
+dotnet add package Azure.Communication.Configuration
+dotnet add package Azure.Communication.Sms
+dotnet add package Azure.Identity
+```
+
+### Use the client library packages
+
+Add the following `using` directives to your code to use the Azure Identity and Azure Communication Services client libraries.
+
+```csharp
+using Azure.Identity;
+using Azure.Communication.Identity;
+using Azure.Communication.Configuration;
+using Azure.Communication.Sms;
+```
+
+The examples below use [DefaultAzureCredential](https://docs.microsoft.com/dotnet/api/azure.identity.defaultazurecredential). This credential type is suitable for both production and development environments.
+
+### Create an identity and issue a token
+
+The following code example shows how to create a service client object with Azure Active Directory tokens, then use the client to issue a token for a new user:
+
+```csharp
+    public async Task<Response<CommunicationUserToken>> CreateIdentityAndIssueTokenAsync(Uri resourceEndpoint)
+    {
+        TokenCredential credential = new DefaultAzureCredential();
+
+        var client = new CommunicationIdentityClient(resourceEndpoint, credential);
+        var identityResponse = await client.CreateUserAsync();
+
+        // Use the newly created user to issue a VoIP access token.
+        var identity = identityResponse.Value;
+        var tokenResponse = await client.IssueTokenAsync(identity, scopes: new [] { CommunicationTokenScope.VoIP });
+
+        return tokenResponse;
+    }
+```
+
+### Send an SMS with Azure Active Directory tokens
+
+The following code example shows how to create a service client object with Azure Active Directory tokens, then use the client to send an SMS message:
+
+```csharp
+
+    public async Task SendSmsAsync(Uri resourceEndpoint, PhoneNumber from, PhoneNumber to, string message)
+    {
+        TokenCredential credential = new DefaultAzureCredential();
+
+        SmsClient smsClient = new SmsClient(resourceEndpoint, credential);
+        await smsClient.SendAsync(
+            from: from,
+            to: to,
+            message: message,
+            new SendSmsOptions { EnableDeliveryReport = true } // optional
+        );
+    }
+```
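+
+The quickstart doesn't show how these methods are invoked, so here is a rough usage sketch rather than part of the official sample. It assumes the two methods above are made `static` members of the same class as `Main`, that `using System;` and `using System.Threading.Tasks;` are also present, and that the endpoint URL and phone numbers below are placeholders you replace with your own values:
+
+```csharp
+    public static async Task Main()
+    {
+        // Placeholder values -- replace with your Communication Services endpoint and phone numbers.
+        var resourceEndpoint = new Uri("https://<your-resource-name>.communication.azure.com");
+        var from = new PhoneNumber("+15555550100");
+        var to = new PhoneNumber("+15555550101");
+
+        // Create a Communication Services user and issue a VoIP access token for it.
+        var tokenResponse = await CreateIdentityAndIssueTokenAsync(resourceEndpoint);
+        Console.WriteLine("Created a Communication Services user and issued a VoIP token.");
+
+        // Send an SMS message with the same managed identity credential.
+        await SendSmsAsync(resourceEndpoint, from, to, "Hello from a managed identity!");
+    }
+```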
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about authentication](../concepts/authentication.md)
+
+You may also want to:
+
+- [Learn more about Azure role-based access control](../../../articles/role-based-access-control/index.yml)
+- [Learn more about Azure identity library for .NET](https://docs.microsoft.com/dotnet/api/overview/azure/identity-readme)
+- [Creating user access tokens](../quickstarts/access-tokens.md)
+- [Send an SMS message](../quickstarts/telephony-sms/send.md)
+- [Learn more about SMS](../concepts/telephony-sms/concepts.md)
connectors https://docs.microsoft.com/en-us/azure/connectors/connect-common-data-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connect-common-data-service.md
@@ -5,7 +5,7 @@
ms.suite: integration Previously updated : 12/11/2020 Last updated : 02/11/2021 tags: connectors
@@ -14,7 +14,7 @@ tags: connectors
> [!NOTE] > In November 2020, Common Data Service was renamed to Microsoft Dataverse.
-With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Common Data Service connector](/connectors/commondataservice/), you can build automated workflows that manage records in your [Common Data Service, now Microsoft Dataverse](/powerapps/maker/common-data-service/data-platform-intro) database. These workflows can create records, update records, and perform other operations. You can also get information from your Common Data Service database and make the output available for other actions to use in your logic app. For example, when a record is updated in your Common Data Service database, you can send an email by using the Office 365 Outlook connector.
+With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Common Data Service connector](/connectors/commondataservice/), you can build automated workflows that manage records in your [Common Data Service, now Microsoft Dataverse](/powerapps/maker/common-data-service/data-platform-intro) database. These workflows can create records, update records, and perform other operations. You can also get information from your Dataverse database and make the output available for other actions to use in your logic app. For example, when a record is updated in your Dataverse database, you can send an email by using the Office 365 Outlook connector.
This article shows how you can build a logic app that creates a task record whenever a new lead record is created.
@@ -27,7 +27,7 @@ This article shows how you can build a logic app that creates a task record when
* [Learn: Get started with Common Data Service](/learn/modules/get-started-with-powerapps-common-data-service/) * [Power Platform - Environments overview](/power-platform/admin/environments-overview)
-* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md) and the logic app from where you want to access the records in your Common Data Service database. To start your logic app with a Common Data Service trigger, you need a blank logic app. If you're new to Azure Logic Apps, review [Quickstart: Create your first workflow by using Azure Logic Apps](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md) and the logic app from where you want to access the records in your Dataverse database. To start your logic app with a Common Data Service trigger, you need a blank logic app. If you're new to Azure Logic Apps, review [Quickstart: Create your first workflow by using Azure Logic Apps](../logic-apps/quickstart-create-first-logic-app-workflow.md).
## Add Common Data Service trigger
@@ -166,6 +166,65 @@ This example shows how the **Create a new record** action creates a new "Tasks"
For technical information based on the connector's Swagger description, such as triggers, actions, limits, and other details, see the [connector's reference page](/connectors/commondataservice/).
+## Troubleshooting problems
+
+### Calls from multiple environments
+
+Both connectors, Common Data Service and Common Data Service (current environment), store information about the logic app workflows that need and get notifications about entity changes by using the `callbackregistrations` entity in your Microsoft Dataverse. If you copy a Dataverse organization, any webhooks are copied too. If you copy your organization before you disable workflows that are mapped to your organization, any copied webhooks also point at the same logic apps, which then get notifications from multiple organizations.
+
+To stop unwanted notifications, delete the callback registration from the organization that sends those notifications by following these steps:
+
+1. Identify the Dataverse organization from where you want to remove notifications, and sign in to that organization.
+
+1. In the Chrome browser, find the callback registration that you want to delete by following these steps:
+
+ 1. Review the generic list for all the callback registrations at the following OData URI so that you can view the data inside the `callbackregistrations` entity:
+
+ `https://{organization-name}.crm{instance-number}.dynamics.com/api/data/v9.0/callbackregistrations`:
+
+ > [!NOTE]
+ > If no values are returned, you might not have permissions to view this entity type,
+ > or you might not be signed in to the correct organization.
+
+ 1. Filter on the triggering entity's logical name `entityname` and the notification event that matches your logic app workflow (message). Each event type is mapped to the message integer as follows:
+
+ | Event type | Message integer |
+ ||--|
+ | Create | 1 |
+ | Delete | 2 |
+ | Update | 3 |
+ | CreateOrUpdate | 4 |
+ | CreateOrDelete | 5 |
+ | UpdateOrDelete | 6 |
+ | CreateOrUpdateOrDelete | 7 |
+ |||
+
+ This example shows how you can filter for `Create` notifications on an entity named `nov_validation` by using the following OData URI for a sample organization:
+
+ `https://fabrikam-preprod.crm1.dynamics.com/api/data/v9.0/callbackregistrations?$filter=entityname eq 'nov_validation' and message eq 1`
+
+ ![Screenshot that shows browser window and OData URI in the address bar.](./media/connect-common-data-service/find-callback-registrations.png)
+
+ > [!TIP]
+ > If multiple triggers exist for the same entity or event, you can filter the list by using additional filters such as
+ > the `createdon` and `_owninguser_value` attributes. The owner user's name appears under `/api/data/v9.0/systemusers({id})`.
+
+ 1. After you find the ID for the callback registration that you want to delete, follow these steps:
+
+ 1. In your Chrome browser, open the Chrome Developer Tools (Keyboard: F12).
+
+ 1. In the window, at the top, select the **Console** tab.
+
+ 1. On the command-line prompt, enter this command, which sends a request to delete the specified callback registration:
+
+ `fetch('http://{organization-name}.crm{instance-number}.dynamics.com/api/data/v9.0/callbackregistrations({ID-to-delete})', { method: 'DELETE'})`
+
+ > [!IMPORTANT]
+ > Make sure that you make the request from a non-Unified Client Interface (UCI) page, for example, from the
+ > OData or API response page itself. Otherwise, logic in the app.js file might interfere with this operation.
+
+ 1. To confirm that the callback registration no longer exists, check the callback registrations list.
+ ## Next steps * Learn about other [connectors for Azure Logic Apps](../connectors/apis-list.md)
connectors https://docs.microsoft.com/en-us/azure/connectors/connectors-native-recurrence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-native-recurrence.md
@@ -55,15 +55,16 @@ For differences between this trigger and the Sliding Window trigger or for more
|||||| > [!IMPORTANT]
- > When recurrences don't specify advanced scheduling options, future recurrences are based on the last run time.
- > The start times for these recurrences might drift due to factors such as latency during storage calls.
- > To make sure that your logic app doesn't miss a recurrence, especially when the frequency is in days or longer,
- > use one of these options:
+ > If a recurrence doesn't specify a specific [start date and time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time),
+ > the first recurrence runs immediately when you save or deploy the logic app, despite your trigger's recurrence setup. To avoid this behavior,
+ > provide a start date and time for when you want the first recurrence to run.
+ >
+ > If a recurrence doesn't specify any other advanced scheduling options such as specific times to run future recurrences, those recurrences are
+ > based on the last run time. As a result, the start times for those recurrences might drift due to factors such as latency during storage calls.
+ > To make sure that your logic app doesn't miss a recurrence, especially when the frequency is in days or longer, try these options:
>
- > * Provide a start time for the recurrence.
- >
- > * Specify the hours and minutes for when to run the recurrence by using the properties named
- > **At these hours** and **At these minutes**.
+ > * Provide a start date and time for the recurrence plus the specific times when to run subsequent recurrences by using the properties
+ > named **At these hours** and **At these minutes**, which are available only for the **Day** and **Week** frequencies.
> > * Use the [Sliding Window trigger](../connectors/connectors-native-sliding-window.md), > rather than the Recurrence trigger.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-command-line https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-command-line.md
@@ -44,7 +44,7 @@ This article describes how to provision an account with continuous backup and re
## <a id="provision-sql-api"></a>Provision a SQL API account with continuous backup
-To provision a SQL API account with continuous backup, an extra argument `--backup-policy-type Continuous` should be passed along with the regular provisioning command. The following command is an example of a single region write account named `pitracct2` with continuous backup policy created in the "West US" region under "myrg" resource group:
+To provision a SQL API account with continuous backup, an extra argument `--backup-policy-type Continuous` should be passed along with the regular provisioning command. The following command is an example of a single region write account named `pitracct2` with continuous backup policy created in the *West US* region under *myrg* resource group:
```azurecli-interactive
@@ -59,7 +59,7 @@ az cosmosdb create \
## <a id="provision-mongo-api"></a>Provision an Azure Cosmos DB API for MongoDB account with continuous backup
-The following command shows an example of a single region write account named `pitracct3` with continuous backup policy created the "West US" region under "myrg" resource group:
+The following command shows an example of a single region write account named `pitracct3` with continuous backup policy created the *West US* region under *myrg* resource group:
```azurecli-interactive
@@ -143,13 +143,13 @@ The response includes all the database accounts (both live and deleted) that can
} ```
-Just like the "CreationTime" or "DeletionTime" for the account, there is a "CreationTime" or "DeletionTime" for the region too. These times allow you to choose the right region and a valid time range to restore into that region.
+Just like the `CreationTime` or `DeletionTime` for the account, there is a `CreationTime` or `DeletionTime` for the region too. These times allow you to choose the right region and a valid time range to restore into that region.
**List all the versions of databases in a live database account** Listing all the versions of databases allows you to choose the right database in a scenario where the actual time of existence of database is unknown.
-Run the following CLI command to list all the versions of databases. This command only works with live accounts. The "instanceId" and the "location" parameters are obtained from the "name" and "location" properties in the response of `az cosmosdb restorable-database-account list` command. The instanceId attribute is also a property of source database account that is being restored:
+Run the following CLI command to list all the versions of databases. This command only works with live accounts. The `instanceId` and the `location` parameters are obtained from the `name` and `location` properties in the response of `az cosmosdb restorable-database-account list` command. The instanceId attribute is also a property of source database account that is being restored:
```azurecli-interactive az cosmosdb sql restorable-database list \
@@ -196,7 +196,7 @@ This command output now shows when a database was created and deleted.
**List all the versions of SQL containers of a database in a live database account**
-Use the following command to list all the versions of SQL containers. This command only works with live accounts. The "databaseRid" parameter is the "ResourceId" of the database you want to restore. It is the value of "ownerResourceid" attribute found in the response of `az cosmosdb sql restorable-database list` command.
+Use the following command to list all the versions of SQL containers. This command only works with live accounts. The `databaseRid` parameter is the `ResourceId` of the database you want to restore. It is the value of `ownerResourceid` attribute found in the response of `az cosmosdb sql restorable-database list` command.
```azurecli-interactive az cosmosdb sql restorable-container list \
@@ -263,7 +263,7 @@ az cosmosdb sql restorable-resource list \
## <a id="enumerate-mongodb-api"></a>Enumerate restorable resources for MongoDB API account
-The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and container resources. Like with SQL API, you can use the `az cosmosdb` command but with "mongodb" as parameter instead of "sql". These commands only work for live accounts.
+The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and container resources. Like with SQL API, you can use the `az cosmosdb` command but with `mongodb` as parameter instead of `sql`. These commands only work for live accounts.
**List all the versions of mongodb databases in a live database account**
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-frequently-asked-questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-frequently-asked-questions.md
@@ -27,7 +27,7 @@ The restore duration depends on the size of your data.
The restore may not happen depending on whether the key resources like databases or containers existed at that time. You can verify by entering the time and looking at the selected database or container for a given time. If you see no resources exist to restore, then the restore process doesn't work. ### How can I track if an account is being restored?
-After you submit the restore command, and wait on the same page, after the operation is complete, the status bar shows successfully restored account message. You can also search for the restored account and [track the status of account being restored](continuous-backup-restore-portal.md#track-restore-status). While restore is in progress, the status of the account will be "Creating", after the restore operation completes, the account status will change to "Online".
+After you submit the restore command and wait on the same page, the status bar shows a message that the account was successfully restored when the operation is complete. You can also search for the restored account and [track the status of account being restored](continuous-backup-restore-portal.md#track-restore-status). While the restore is in progress, the status of the account is *Creating*; after the restore operation completes, the account status changes to *Online*.
Similarly for PowerShell and CLI, you can track the progress of restore operation by executing `az cosmosdb show` command as follows:
@@ -35,7 +35,7 @@ Similarly for PowerShell and CLI, you can track the progress of restore operatio
az cosmosdb show --name "accountName" --resource-group "resourceGroup" ```
-The provisioningState shows "Succeeded" when the account is online.
+The provisioningState shows *Succeeded* when the account is online.
```json {
@@ -56,7 +56,7 @@ The provisioningState shows "Succeeded" when the account is online.
### How can I find out whether an account was restored from another account? Run the `az cosmosdb show` command, in the output, you can see that the value of `createMode` property. If the value is set to **Restore**. it indicates that the account was restored from another account. The `restoreParameters` property has further details such as `restoreSource`, which has the source account ID. The last GUID in the `restoreSource` parameter is the instanceId of the source account.
-For example, in the following output, the source account's instance ID is "7b4bb-f6a0-430e-ade1-638d781830cc"
+For example, in the following output, the source account's instance ID is *7b4bb-f6a0-430e-ade1-638d781830cc*
```json "restoreParameters": {
@@ -71,9 +71,9 @@ For example, in the following output, the source account's instance ID is "7b4bb
The entire shared throughput database is restored. You cannot choose a subset of containers in a shared throughput database for restore. ### What is the use of InstanceID in the account definition?
-At any given point in time, Azure Cosmos DB account's "accountName" property is globally unique while it is alive. However, after the account is deleted, it is possible to create another account with the same name and hence the "accountName" is no longer enough to identify an instance of an account.
+At any given point in time, Azure Cosmos DB account's `accountName` property is globally unique while it is alive. However, after the account is deleted, it is possible to create another account with the same name and hence the `accountName` is no longer enough to identify an instance of an account.
-ID or the "instanceId" is a property of an instance of an account and it is used to disambiguate across multiple accounts (live and deleted) if they have same name for restore. You can get the instance ID by running the `Get-AzCosmosDBRestorableDatabaseAccount` or `az cosmosdb restorable-database-account` commands. The name attribute value denotes the "InstanceID".
+ID or the `instanceId` is a property of an instance of an account and it is used to disambiguate across multiple accounts (live and deleted) if they have the same name for restore. You can get the instance ID by running the `Get-AzCosmosDBRestorableDatabaseAccount` or `az cosmosdb restorable-database-account` commands. The name attribute value denotes the `instanceId`.
## Next steps
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-introduction.md
@@ -29,7 +29,7 @@ Azure Cosmos DB performs data backup in the background without consuming any ext
:::image type="content" source="./media/continuous-backup-restore-introduction/continuous-backup-restore-blob-storage.png" alt-text="Azure Cosmos DB data backup to the Azure Blob Storage." lightbox="./media/continuous-backup-restore-introduction/continuous-backup-restore-blob-storage.png" border="false":::
-The available time window for restore (also known as retention period) is the lower value of the following two: "30 days back in past from now" or "up to the resource creation time". The point in time for restore can be any timestamp within the retention period.
+The available time window for restore (also known as retention period) is the lower value of the following two: *30 days back in past from now* or *up to the resource creation time*. The point in time for restore can be any timestamp within the retention period.
In public preview, you can restore the Azure Cosmos DB account for SQL API or MongoDB contents point in time to another account using [Azure portal](continuous-backup-restore-portal.md), [Azure Command Line Interface](continuous-backup-restore-command-line.md) (az CLI), [Azure PowerShell](continuous-backup-restore-powershell.md), or the [Azure Resource Manager](continuous-backup-restore-template.md).
@@ -55,18 +55,18 @@ You can add these configurations to the restored account after the restore is co
## Restore scenarios
-The following are some of the key scenarios that are addressed by the point-in-time-restore feature. Scenarios [a] through [c] demonstrate how to trigger a restore if the restore timestamp is known beforehand.
+The following are some of the key scenarios that are addressed by the point-in-time-restore feature. Scenarios [a] through [c] demonstrate how to trigger a restore if the restore timestamp is known beforehand.
However, there could be scenarios where you don't know the exact time of accidental deletion or corruption. Scenarios [d] and [e] demonstrate how to _discover_ the restore timestamp using the new event feed APIs on the restorable database or containers. :::image type="content" source="./media/continuous-backup-restore-introduction/restorable-account-scenario.png" alt-text="Life-cycle events with timestamps for a restorable account." lightbox="./media/continuous-backup-restore-introduction/restorable-account-scenario.png" border="false":::
-a. **Restore deleted account** - All the deleted accounts that you can restore are visible from the **Restore** pane. For example, if "Account A" is deleted at timestamp T3. In this case the timestamp just before T3, location, target account name, resource group, and target account name is sufficient to restore from [Azure portal](continuous-backup-restore-portal.md#restore-deleted-account), [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), or [CLI](continuous-backup-restore-command-line.md#trigger-restore).
+a. **Restore deleted account** - All the deleted accounts that you can restore are visible from the **Restore** pane. For example, suppose *Account A* is deleted at timestamp T3. In this case, the timestamp just before T3, the location, the resource group, and the target account name are sufficient to restore from [Azure portal](continuous-backup-restore-portal.md#restore-deleted-account), [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), or [CLI](continuous-backup-restore-command-line.md#trigger-restore).
:::image type="content" source="./media/continuous-backup-restore-introduction/restorable-container-database-scenario.png" alt-text="Life-cycle events with timestamps for a restorable database and container." lightbox="./media/continuous-backup-restore-introduction/restorable-container-database-scenario.png" border="false":::
-b. **Restore data of an account in a particular region** - For example, if "Account A" exists in two regions "East US" and "West US" at timestamp T3. If you need a copy of account A in "West US", you can do a point in time restore from [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), or [CLI](continuous-backup-restore-command-line.md#trigger-restore) with West US as the target location.
+b. **Restore data of an account in a particular region** - For example, if *Account A* exists in two regions *East US* and *West US* at timestamp T3. If you need a copy of account A in *West US*, you can do a point in time restore from [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), or [CLI](continuous-backup-restore-command-line.md#trigger-restore) with West US as the target location.
-c. **Recover from an accidental write or delete operation within a container with a known restore timestamp** - For example, if you **know** that the contents of "Container 1" within "Database 1" were modified accidentally at timestamp T3. You can do a point in time restore from [Azure portal](continuous-backup-restore-portal.md#restore-live-account), [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), or [CLI](continuous-backup-restore-command-line.md#trigger-restore) into another account at timestamp T3 to recover the desired state of container.
+c. **Recover from an accidental write or delete operation within a container with a known restore timestamp** - For example, if you **know** that the contents of *Container 1* within *Database 1* were modified accidentally at timestamp T3. You can do a point in time restore from [Azure portal](continuous-backup-restore-portal.md#restore-live-account), [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), or [CLI](continuous-backup-restore-command-line.md#trigger-restore) into another account at timestamp T3 to recover the desired state of container.
d. **Restore an account to a previous point in time before the accidental delete of the database** - In the [Azure portal](continuous-backup-restore-portal.md#restore-live-account), you can use the event feed pane to determine when a database was deleted and find the restore time. Similarly, with [Azure CLI](continuous-backup-restore-command-line.md#trigger-restore) and [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), you can discover the database deletion event by enumerating the database events feed and then trigger the restore command with the required parameters.
@@ -78,7 +78,7 @@ Azure Cosmos DB allows you to isolate and restrict the restore permissions for c
## <a id="continuous-backup-pricing"></a>Pricing
-Azure Cosmos DB accounts that have continuous backup enabled will incur an additional monthly charge to "store the backup" and to "restore your data". The restore cost is added every time the restore operation is initiated. If you configure an account with continuous backup but don't restore the data, only backup storage cost is included in your bill.
+Azure Cosmos DB accounts that have continuous backup enabled will incur an additional monthly charge to *store the backup* and to *restore your data*. The restore cost is added every time the restore operation is initiated. If you configure an account with continuous backup but don't restore the data, only backup storage cost is included in your bill.
The following example is based on the price for an Azure Cosmos account deployed in a non-government region in the US. The pricing and calculation can vary depending on the region you are using, see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for latest pricing information.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-permissions.md
@@ -25,7 +25,7 @@ Scope is a set of resources that have access, to learn more on scopes, see the [
## Assign roles for restore using the Azure portal
-To perform a restore, a user or a principal need the permission to restore (that is "restore/action" permission), and permission to provision a new account (that is "write" permission). To grant these permissions, the owner can assign the "CosmosRestoreOperator" and "Cosmos DB Operator" built in roles to a principal.
+To perform a restore, a user or a principal needs the permission to restore (that is, the *restore/action* permission) and the permission to provision a new account (that is, the *write* permission). To grant these permissions, the owner can assign the `CosmosRestoreOperator` and `Cosmos DB Operator` built-in roles to a principal.
1. Sign into the [Azure portal](https://portal.azure.com/)
@@ -35,7 +35,7 @@ To perform a restore, a user or a principal need the permission to restore (that
:::image type="content" source="./media/continuous-backup-restore-permissions/assign-restore-operator-roles.png" alt-text="Assign CosmosRestoreOperator and Cosmos DB Operator roles." border="true":::
-1. Select **Save** to grant the "restore/action permission".
+1. Select **Save** to grant the *restore/action* permission.
1. Repeat Step 3 with **Cosmos DB Operator** role to grant the write permission. When assigning this role from the Azure portal, it grants the restore permission to the whole subscription.
@@ -47,7 +47,7 @@ To perform a restore, a user or a principal need the permission to restore (that
|Resource group | /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Example-cosmosdb-rg | |CosmosDB restorable account resource | /subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/23e99a35-cd36-4df4-9614-f767a03b9995|
-The restorable account resource can be extracted from the output of the `az cosmosdb restorable-database-account list --name <accountname>` command in CLI or `Get-AzCosmosDBRestorableDatabaseAccount -DatabaseAccountName <accountname>` cmdlet in PowerShell. The name attribute in the output represents the "instanceID" of the restorable account. To learn more, see the [PowerShell](continuous-backup-restore-powershell.md) or [CLI](continuous-backup-restore-command-line.md) article.
+The restorable account resource can be extracted from the output of the `az cosmosdb restorable-database-account list --name <accountname>` command in CLI or `Get-AzCosmosDBRestorableDatabaseAccount -DatabaseAccountName <accountname>` cmdlet in PowerShell. The name attribute in the output represents the `instanceID` of the restorable account. To learn more, see the [PowerShell](continuous-backup-restore-powershell.md) or [CLI](continuous-backup-restore-command-line.md) article.
## Permissions
@@ -55,11 +55,11 @@ Following permissions are required to perform the different activities pertainin
|Permission |Impact |Minimum scope |Maximum scope | |||||
-|Microsoft.Resources/deployments/validate/action, Microsoft.Resources/deployments/write | These permissions are required for the ARM template deployment to create the restored account. See the sample permission [RestorableAction]() below for how to set this role. | Not applicable | Not applicable |
+|`Microsoft.Resources/deployments/validate/action`, `Microsoft.Resources/deployments/write` | These permissions are required for the ARM template deployment to create the restored account. See the sample permission [RestorableAction](#custom-restorable-action) below for how to set this role. | Not applicable | Not applicable |
|Microsoft.DocumentDB/databaseAccounts/write | This permission is required to restore an account into a resource group | Resource group under which the restored account is created. | Subscription under which the restored account is created |
-|Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action |This permission is required on the source restorable database account scope to allow restore actions to be performed on it. | The "RestorableDatabaseAccount" resource belonging to the source account being restored. This value is also given by the "ID" property of the restorable database account resource. An example of restorable account is `/subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/<guid-instanceid>` | The subscription containing the restorable database account. The resource group cannot be chosen as scope. |
-|Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read |This permission is required on the source restorable database account scope to list the database accounts that can be restored. | The "RestorableDatabaseAccount" resource belonging to the source account being restored. This value is also given by the "ID" property of the restorable database account resource. An example of restorable account is `/subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/<guid-instanceid>`| The subscription containing the restorable database account. The resource group cannot be chosen as scope. |
-|Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read | This permission is required on the source restorable account scope to allow reading of restorable resources such as list of databases and containers for a restorable account. | The "RestorableDatabaseAccount" resource belonging to the source account being restored. This value is also given by the "ID" property of the restorable database account resource. An example of restorable account is `/subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/<guid-instanceid>`| The subscription containing the restorable database account. The resource group cannot be chosen as scope. |
+|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action` |This permission is required on the source restorable database account scope to allow restore actions to be performed on it. | The *RestorableDatabaseAccount* resource belonging to the source account being restored. This value is also given by the `ID` property of the restorable database account resource. An example of restorable account is */subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/<guid-instanceid>* | The subscription containing the restorable database account. The resource group cannot be chosen as scope. |
+|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read` |This permission is required on the source restorable database account scope to list the database accounts that can be restored. | The *RestorableDatabaseAccount* resource belonging to the source account being restored. This value is also given by the `ID` property of the restorable database account resource. An example of restorable account is */subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/<guid-instanceid>*| The subscription containing the restorable database account. The resource group cannot be chosen as scope. |
+|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` | This permission is required on the source restorable account scope to allow reading of restorable resources such as list of databases and containers for a restorable account. | The *RestorableDatabaseAccount* resource belonging to the source account being restored. This value is also given by the `ID` property of the restorable database account resource. An example of restorable account is */subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/<guid-instanceid>*| The subscription containing the restorable database account. The resource group cannot be chosen as scope. |
## Azure CLI role assignment scenarios to restore at different scopes
@@ -77,7 +77,7 @@ az role assignment create --role "CosmosRestoreOperator" --assignee <email> ΓÇôs
* Assign a user write action on the specific resource group. This action is required to create a new account in the resource group.
-* Assign the "CosmosRestoreOperator" built-in role to the specific restorable database account that needs to be restored. In the following command, the scope for the "RestorableDatabaseAccount" is retrieved from the "ID" property in the output of `az cosmosdb restorable-database-account` (if using CLI) or `Get-AzCosmosDBRestorableDatabaseAccount` (if using PowerShell).
+* Assign the *CosmosRestoreOperator* built-in role to the specific restorable database account that needs to be restored. In the following command, the scope for the *RestorableDatabaseAccount* is retrieved from the `ID` property in the output of `az cosmosdb restorable-database-account` (if using CLI) or `Get-AzCosmosDBRestorableDatabaseAccount` (if using PowerShell).
```azurecli-interactive az role assignment create --role "CosmosRestoreOperator" --assignee <email> ΓÇôscope <RestorableDatabaseAccount>
@@ -86,11 +86,11 @@ az role assignment create --role "CosmosRestoreOperator" --assignee <email> ΓÇôs
### Assign capability to restore from any source account in a resource group. This operation is currently not supported.
-## Custom role creation for restore action with CLI
+## <a id="custom-restorable-action"></a>Custom role creation for restore action with CLI
-The subscription owner can provide the permission to restore to any other Azure AD identity. The restore permission is based on the action: "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action", and it should be included in their restore permission. There is a built-in role called "CosmosRestoreOperator" that has this role included. You can either assign the permission using this built-in role or create a custom role.
+The subscription owner can provide the permission to restore to any other Azure AD identity. The restore permission is based on the action `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action`, which must be included in the identity's restore permission. There is a built-in role called *CosmosRestoreOperator* that includes this action. You can either assign the permission using this built-in role or create a custom role.
-The RestorableAction below represents a custom role. You have to explicitly create this role. The following JSON template creates a custom role "RestorableAction" with restore permission:
+The RestorableAction below represents a custom role. You have to explicitly create this role. The following JSON template creates a custom role *RestorableAction* with restore permission:
```json {
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-portal.md
@@ -70,7 +70,7 @@ For example, if you want to restore to the point before a certain container was
You can use Azure portal to completely restore a deleted account within 30 days of its deletion. Use the following steps to restore a deleted account: 1. Sign into the [Azure portal](https://portal.azure.com/)
-1. Search for "Azure Cosmos DB" resources in the global search bar. It lists all your existing accounts.
+1. Search for *Azure Cosmos DB* resources in the global search bar. It lists all your existing accounts.
1. Next select the **Restore** button. The Restore pane displays a list of deleted accounts that can be restored within the retention period, which is 30 days from deletion time. 1. Choose the account that you want to restore.
@@ -91,7 +91,7 @@ You can use Azure portal to completely restore a deleted account within 30 days
## <a id="track-restore-status"></a>Track the status of restore operation
-After initiating a restore operation, select the **Notification** bell icon at top-right corner of portal. It gives a link displaying the status of the account being restored. While restore is in progress, the status of the account will be "Creating", after the restore operation completes, the account status will change to "Online".
+After initiating a restore operation, select the **Notification** bell icon at the top-right corner of the portal. It gives a link displaying the status of the account being restored. While the restore is in progress, the status of the account is *Creating*; after the restore operation completes, the account status changes to *Online*.
:::image type="content" source="./media/continuous-backup-restore-portal/track-restore-operation-status.png" alt-text="The status of restored account changes from creating to online when the operation is complete." border="true":::
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-powershell.md
@@ -46,7 +46,7 @@ This article describes how to provision an account with continuous backup and re
To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
-The following cmdlet is an example of a single region write account `pitracct2` with continuous backup policy created in "West US" region under "myrg" resource group:
+The following cmdlet is an example of a single region write account `pitracct2` with continuous backup policy created in *West US* region under *myrg* resource group:
```azurepowershell
@@ -61,7 +61,7 @@ New-AzCosmosDBAccount `
## <a id="provision-mongodb-api"></a>Provision a MongoDB API account with continuous backup
-The following cmdlet is an example of continuous backup account "pitracct2" created in "West US" region under "myrg" resource group:
+The following cmdlet is an example of continuous backup account *pitracct2* created in *West US* region under *myrg* resource group:
```azurepowershell
@@ -158,13 +158,13 @@ The response includes all the database accounts (both live and deleted) that can
}, ```
-Just like the "CreationTime" or "DeletionTime" for the account, there is a "CreationTime" or "DeletionTime" for the region too. These times allow you to choose the right region and a valid time range to restore into that region.
+Just like the `CreationTime` or `DeletionTime` for the account, there is a `CreationTime` or `DeletionTime` for the region too. These times allow you to choose the right region and a valid time range to restore into that region.
**List all the versions of SQL databases in a live database account** Listing all the versions of databases allows you to choose the right database in a scenario where the actual time of existence of database is unknown.
-Run the following PowerShell command to list all the versions of databases. This command only works with live accounts. The "DatabaseAccountInstanceId" and the "LocationName" parameters are obtained from the "name" and "location" properties in the response of `Get-AzCosmosDBRestorableDatabaseAccount` cmdlet. The "DatabaseAccountInstanceId" attribute refers to "instanceId" property of source database account being restored:
+Run the following PowerShell command to list all the versions of databases. This command only works with live accounts. The `DatabaseAccountInstanceId` and the `LocationName` parameters are obtained from the `name` and `location` properties in the response of `Get-AzCosmosDBRestorableDatabaseAccount` cmdlet. The `DatabaseAccountInstanceId` attribute refers to `instanceId` property of source database account being restored:
```azurepowershell
@@ -177,7 +177,7 @@ Get-AzCosmosdbSqlRestorableDatabase `
**List all the versions of SQL containers of a database in a live database account.**
-Use the following command to list all the versions of SQL containers. This command only works with live accounts. The "DatabaseRid" parameter is the "ResourceId" of the database you want to restore. It is the value of "ownerResourceid" attribute found in the response of `Get-AzCosmosdbSqlRestorableDatabase` cmdlet. The response also includes a list of operations performed on all the containers inside this database.
+Use the following command to list all the versions of SQL containers. This command only works with live accounts. The `DatabaseRid` parameter is the `ResourceId` of the database you want to restore. It is the value of `ownerResourceid` attribute found in the response of `Get-AzCosmosdbSqlRestorableDatabase` cmdlet. The response also includes a list of operations performed on all the containers inside this database.
```azurepowershell
@@ -204,7 +204,7 @@ Get-AzCosmosdbSqlRestorableResource `
## <a id="enumerate-mongodb-api"></a>Enumerate restorable resources for MongoDB
-The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and container resources. These commands only work for live accounts and they are similar to SQL API commands but with "MongoDB" in the command name instead of "sql".
+The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and container resources. These commands only work for live accounts and they are similar to SQL API commands but with `MongoDB` in the command name instead of `sql`.
**List all the versions of MongoDB databases in a live database account**
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-resource-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-resource-model.md
@@ -26,14 +26,14 @@ The database account's resource model is updated with a few extra properties to
### BackupPolicy
-A new property in the account level backup policy named "Type" under "backuppolicy" parameter enables continuous backup and point-in-time restore functionalities. This mode is called **continuous backup**. In the public preview, you can only set this mode when creating the account. After it's enabled, all the containers and databases created within this account will have continuous backup and point-in-time restore functionalities enabled by default.
+A new property in the account level backup policy named `Type` under `backuppolicy` parameter enables continuous backup and point-in-time restore functionalities. This mode is called **continuous backup**. In the public preview, you can only set this mode when creating the account. After it's enabled, all the containers and databases created within this account will have continuous backup and point-in-time restore functionalities enabled by default.
> [!NOTE] > Currently the point-in-time restore feature is in public preview and it's available for Azure Cosmos DB API for MongoDB, and SQL accounts. After you create an account with continuous mode you can't switch it to a periodic mode. ### CreateMode
-This property indicates how the account was created. The possible values are "Default" and "Restore". To perform a restore, set this value to "Restore" and provide the appropriate values in the `RestoreParameters` property.
+This property indicates how the account was created. The possible values are *Default* and *Restore*. To perform a restore, set this value to *Restore* and provide the appropriate values in the `RestoreParameters` property.
### RestoreParameters
@@ -41,7 +41,7 @@ The `RestoreParameters` resource contains the restore operation details includin
|Property Name |Description | |||
-|restoreMode | The restore mode should be "PointInTime" |
+|restoreMode | The restore mode should be *PointInTime* |
|restoreSource | The instanceId of the source account from which the restore will be initiated. | |restoreTimestampInUtc | Point in time in UTC to which the account should be restored to. | |databasesToRestore | List of `DatabaseRestoreSource` objects to specify which databases and containers should be restored. If this value is empty, then the entire account is restored. |
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-template.md
@@ -24,7 +24,7 @@ This article describes how to provision an account with continuous backup and re
## <a id="provision"></a>Provision an account with continuous backup
-You can use Azure Resource Manager templates to deploy an Azure Cosmos DB account with continuous mode. When defining the template to provision an account, include the "backupPolicy" parameter as shown in the following example:
+You can use Azure Resource Manager templates to deploy an Azure Cosmos DB account with continuous mode. When defining the template to provision an account, include the `backupPolicy` parameter as shown in the following example:
```json {
@@ -62,9 +62,9 @@ az group deployment create -g <ResourceGroup> --template-file <ProvisionTemplate
You can also restore an account using Resource Manager template. When defining the template include the following parameters:
-* Set the "createMode" parameter to "Restore"
-* Define the "restoreParameters", notice that the "restoreSource" value is extracted from the output of the `az cosmosdb restorable-database-account list` command for your source account. The Instance ID attribute for your account name is used to do the restore.
-* Set the "restoreMode" parameter to "PointInTime" and configure the "restoreTimestampInUtc" value.
+* Set the `createMode` parameter to *Restore*
+* Define the `restoreParameters`. Notice that the `restoreSource` value is extracted from the output of the `az cosmosdb restorable-database-account list` command for your source account. The Instance ID attribute for your account name is used to do the restore.
+* Set the `restoreMode` parameter to *PointInTime* and configure the `restoreTimestampInUtc` value.
```json {
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/sql-api-java-application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-java-application.md
@@ -6,7 +6,7 @@
ms.devlang: java Previously updated : 05/12/2020 Last updated : 02/10/2021
@@ -32,7 +32,7 @@ This Java application tutorial shows you how to create a web-based task-manageme
:::image type="content" source="./media/sql-api-java-application/image1.png" alt-text="My ToDo List Java application"::: > [!TIP]
-> This application development tutorial assumes that you have prior experience using Java. If you are new to Java or the [prerequisite tools](#Prerequisites), we recommend downloading the complete [todo](https://github.com/Azure-Samples/documentdb-java-todo-app) project from GitHub and building it using [the instructions at the end of this article](#GetProject). Once you have it built, you can review the article to gain insight on the code in the context of the project.
+> This application development tutorial assumes that you have prior experience using Java. If you are new to Java or the [prerequisite tools](#Prerequisites), we recommend downloading the complete [todo](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app) project from GitHub and building it using [the instructions at the end of this article](#GetProject). Once you have it built, you can review the article to gain insight on the code in the context of the project.
> ## <a id="Prerequisites"></a>Prerequisites for this Java web application tutorial
@@ -105,15 +105,15 @@ The easiest way to pull in the SQL Java SDK and its dependencies is through [Apa
* In the **Group Id** box, enter `com.azure`. * In the **Artifact Id** box, enter `azure-cosmos`.
- * In the **Version** box, enter `4.0.1-beta.1`.
+ * In the **Version** box, enter `4.11.0`.
Or, you can add the dependency XML for Group ID and Artifact ID directly to the *pom.xml* file: ```xml <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-cosmos</artifactId>
- <version>4.0.1-beta.1</version>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-cosmos</artifactId>
+ <version>4.11.0</version>
</dependency> ```
@@ -127,7 +127,7 @@ Now let's add the models, the views, and the controllers to your web application
First, let's define a model within a new file *TodoItem.java*. The `TodoItem` class defines the schema of an item along with the getter and setter methods: ### Add the Data Access Object (DAO) classes
@@ -135,37 +135,37 @@ Create a Data Access Object (DAO) to abstract persisting the ToDo items to Azure
1. To invoke the Azure Cosmos DB service, you must instantiate a new `cosmosClient` object. In general, it is best to reuse the `cosmosClient` object rather than constructing a new client for each subsequent request. You can reuse the client by defining it within the `cosmosClientFactory` class. Update the HOST and MASTER_KEY values that you saved in [step 1](#CreateDB). Replace the HOST variable with your URI and replace the MASTER_KEY with your PRIMARY KEY. Use the following code to create the `CosmosClientFactory` class within the *CosmosClientFactory.java* file:
- :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/documentdb/sample/dao/CosmosClientFactory.java":::
+ :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/CosmosClientFactory.java":::
1. Create a new *TodoDao.java* file and add the `TodoDao` class to create, update, read, and delete the todo items:
- :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/documentdb/sample/dao/TodoDao.java":::
+ :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/TodoDao.java":::
1. Create a new *MockDao.java* file and add the `MockDao` class. This class implements the `TodoDao` class to perform CRUD operations on the items:
- :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/documentdb/sample/dao/MockDao.java":::
+ :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/MockDao.java":::
1. Create a new *DocDbDao.java* file and add the `DocDbDao` class. This class defines the code to persist the TodoItems into the container, and it retrieves your database and collection if they exist or creates new ones if they don't. This example uses [Gson](https://code.google.com/p/google-gson/) to serialize and de-serialize the TodoItem Plain Old Java Objects (POJOs) to JSON documents. In order to save ToDo items to a collection, the client needs to know which database and collection to persist to (as referenced by self-links). This class also defines a helper function to retrieve the documents by another attribute (for example, "ID") rather than by self-link. You can use the helper method to retrieve a TodoItem JSON document by ID and then deserialize it to a POJO. You can also use the `cosmosClient` client object to get a collection or list of TodoItems using a SQL query. Finally, you define the delete method to delete a TodoItem from your list. The following code shows the contents of the `DocDbDao` class:
- :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/documentdb/sample/dao/DocDbDao.java":::
+ :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/DocDbDao.java":::
1. Next, create a new *TodoDaoFactory.java* file and add the `TodoDaoFactory` class that creates a new DocDbDao object:
- :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/documentdb/sample/dao/TodoDaoFactory.java":::
+ :::code language="java" source="~/samples-cosmosdb-java-v4-web-app/src/com/microsoft/azure/cosmos/sample/dao/TodoDaoFactory.java":::
### Add a controller Add the *TodoItemController* controller to your application. In this project, you are using [Project Lombok](https://projectlombok.org/) to generate the constructor, getters, setters, and a builder. Alternatively, you can write this code manually or have the IDE generate it: ### Create a servlet Next, create a servlet to route HTTP requests to the controller. Create the *ApiServlet.java* file and define the following code under it: ## <a id="Wire"></a>Wire the rest of the Java app together
@@ -189,17 +189,17 @@ Azure Web Sites makes deploying Java applications as simple as exporting your ap
1. In the **WAR Export** window, do the following:
- * In the Web project box, enter azure-documentdb-java-sample.
+ * In the Web project box, enter azure-cosmos-java-sample.
* In the Destination box, choose a destination to save the WAR file. * Click **Finish**. 1. Now that you have a WAR file in hand, you can simply upload it to your Azure Web Site's **webapps** directory. For instructions on uploading the file, see [Add a Java application to Azure App Service Web Apps](../app-service/quickstart-java.md). After the WAR file is uploaded to the webapps directory, the runtime environment will detect that you've added it and will automatically load it.
-1. To view your finished product, navigate to `http://YOUR\_SITE\_NAME.azurewebsites.net/azure-java-sample/` and start adding your tasks!
+1. To view your finished product, navigate to `http://YOUR_SITE_NAME.azurewebsites.net/azure-cosmos-java-sample/` and start adding your tasks!
## <a id="GetProject"></a>Get the project from GitHub
-All the samples in this tutorial are included in the [todo](https://github.com/Azure-Samples/documentdb-java-todo-app) project on GitHub. To import the todo project into Eclipse, ensure you have the software and resources listed in the [Prerequisites](#Prerequisites) section, then do the following:
+All the samples in this tutorial are included in the [todo](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app) project on GitHub. To import the todo project into Eclipse, ensure you have the software and resources listed in the [Prerequisites](#Prerequisites) section, then do the following:
1. Install [Project Lombok](https://projectlombok.org/). Lombok is used to generate constructors, getters, and setters in the project. Once you have downloaded the lombok.jar file, double-click it to install it or install it from the command line.
@@ -211,7 +211,7 @@ All the samples in this tutorial are included in the [todo](https://github.com/A
1. On the **Select Repository Source** screen, click **Clone URI**.
-1. On the **Source Git Repository** screen, in the **URI** box, enter https://github.com/Azure-Samples/documentdb-java-todo-app.git, and then click **Next**.
+1. On the **Source Git Repository** screen, in the **URI** box, enter https://github.com/Azure-Samples/azure-cosmos-java-sql-api-todo-app, and then click **Next**.
1. On the **Branch Selection** screen, ensure that **main** is selected, and then click **Next**.
@@ -221,9 +221,9 @@ All the samples in this tutorial are included in the [todo](https://github.com/A
1. On the **Import Projects** screen, unselect the **DocumentDB** project, and then click **Finish**. The DocumentDB project contains the Azure Cosmos DB Java SDK, which we will add as a dependency instead.
-1. In **Project Explorer**, navigate to azure-documentdb-java-sample\src\com.microsoft.azure.documentdb.sample.dao\DocumentClientFactory.java and replace the HOST and MASTER_KEY values with the URI and PRIMARY KEY for your Azure Cosmos DB account, and then save the file. For more information, see [Step 1. Create an Azure Cosmos database account](#CreateDB).
+1. In **Project Explorer**, navigate to azure-cosmos-java-sample\src\com.microsoft.azure.cosmos.sample.dao\DocumentClientFactory.java and replace the HOST and MASTER_KEY values with the URI and PRIMARY KEY for your Azure Cosmos DB account, and then save the file. For more information, see [Step 1. Create an Azure Cosmos database account](#CreateDB).
-1. In **Project Explorer**, right-click the **azure-documentdb-java-sample**, click **Build Path**, and then click **Configure Build Path**.
+1. In **Project Explorer**, right-click the **azure-cosmos-java-sample**, click **Build Path**, and then click **Configure Build Path**.
1. On the **Java Build Path** screen, in the right pane, select the **Libraries** tab, and then click **Add External JARs**. Navigate to the location of the lombok.jar file, and click **Open**, and then click **OK**.
@@ -237,11 +237,11 @@ All the samples in this tutorial are included in the [todo](https://github.com/A
1. On the **Servers** tab at the bottom of the screen, right-click **Tomcat v7.0 Server at localhost** and then click **Add and Remove**.
-1. On the **Add and Remove** window, move **azure-documentdb-java-sample** to the **Configured** box, and then click **Finish**.
+1. On the **Add and Remove** window, move **azure-cosmos-java-sample** to the **Configured** box, and then click **Finish**.
1. In the **Servers** tab, right-click **Tomcat v7.0 Server at localhost**, and then click **Restart**.
-1. In a browser, navigate to `http://localhost:8080/azure-documentdb-java-sample/` and start adding to your task list. Note that if you changed your default port values, change 8080 to the value you selected.
+1. In a browser, navigate to `http://localhost:8080/azure-cosmos-java-sample/` and start adding to your task list. Note that if you changed your default port values, change 8080 to the value you selected.
1. To deploy your project to an Azure web site, see [Step 6. Deploy your application to Azure Web Sites](#Deploy).
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/transfer-subscriptions-subscribers-csp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/transfer-subscriptions-subscribers-csp.md
@@ -6,7 +6,7 @@
Previously updated : 11/17/2020 Last updated : 02/11/2021
@@ -31,6 +31,7 @@ When the request is approved, the CSP can then provide a combined invoice to the
To transfer any other Azure subscriptions to a CSP partner, the subscriber needs to move resources from source subscriptions to CSP subscriptions. Use the following guidance to move resources between subscriptions.
+1. Establish a [reseller relationship](/partner-center/request-a-relationship-with-a-customer) with the customer. Review the [CSP Regional Authorization Overview](/partner-center/regional-authorization-overview) to ensure that both the customer and partner tenants are within the same authorized regions.
1. Work with your CSP partner to create target Azure CSP subscriptions. 1. Ensure that the source and target CSP subscriptions are in the same Azure Active Directory (Azure AD) tenant. You can't change the Azure AD tenant for an Azure CSP subscription. Instead, you must add or associate the source subscription to the CSP Azure AD tenant. For more information, see [Associate or add an Azure subscription to your Azure Active Directory tenant](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md).
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/charge-back-usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/charge-back-usage.md
@@ -5,7 +5,7 @@
Previously updated : 07/24/2020 Last updated : 02/10/2021
@@ -15,15 +15,16 @@ Enterprise Agreement and Microsoft Customer Agreement billing readers can view a
Users with an individual subscription can get the amortized cost data from their usage file. When a resource gets a reservation discount, the *AdditionalInfo* section in the usage file contains the reservation details. For more information, see [Download usage from the Azure portal](../understand/download-azure-daily-usage.md#download-usage-from-the-azure-portal-csv).
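If you want to inspect that usage file programmatically, a rough Python sketch such as the following can pick out the reservation-discounted rows; the `AdditionalInfo` and `ReservationId` column names are assumptions about the usage-file schema, not taken from the article.

```python
# Hypothetical sketch: list usage rows that received a reservation discount in a
# downloaded usage CSV. Column names (AdditionalInfo, ReservationId) are assumed.
import csv

def reservation_rows(path: str):
    with open(path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            info = row.get("AdditionalInfo") or ""
            # Discounted rows carry the reservation details in AdditionalInfo
            if "ReservationId" in info or row.get("ReservationId"):
                yield row

for r in reservation_rows("usage-details.csv"):
    print(r.get("ResourceName"), r.get("ReservationId"))
```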
-## Get reservation charge back data for chargeback
+## See reservation usage data for showback and chargeback
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Cost Management + Billing**.
-1. Under **Actual Cost**, select the **Amortized Cost** metric.
-1. To see which resources were used by a reservation, apply a filter for **Reservation** and then select reservations.
-1. Set the **Granularity** to **Monthly** or **Daily**.
-1. Set the chart type to **Table**.
-1. Set the **Group by** option to **Resource**.
+2. Navigate to **Cost Management + Billing**.
+3. Select **Cost analysis** from the left navigation.
+4. Under **Actual Cost**, select the **Amortized Cost** metric.
+5. To see which resources were used by a reservation, apply a filter for **Reservation** and then select reservations.
+6. Set the **Granularity** to **Monthly** or **Daily**.
+7. Set the chart type to **Table**.
+8. Set the **Group by** option to **Resource**.
[![Example showing reservation resource costs that you can use for chargeback](./media/charge-back-usage/amortized-reservation-costs.png)](./media/charge-back-usage/amortized-reservation-costs.png#lightbox)
@@ -31,13 +32,60 @@ Here's a video showing how to view reservation utilization costs in the Azure po
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4sQOw]
+## Get the data for showback and chargeback
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Navigate to **Cost Management + Billing**.
+3. Select **Export** from the left navigation.
+4. Click the **Add** button.
+5. Select **Amortized cost** as the metric and set up your export.
+
+The *EffectivePrice* for usage that receives a reservation discount is the prorated cost of the reservation (instead of zero). This helps you know the monetary value of reservation consumption by a subscription, resource group, or resource, and it can help you charge back for reservation utilization internally. The dataset also includes unused reservation hours.
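To turn the exported amortized data into a simple chargeback view, a sketch along these lines can total cost per subscription for reservation-discounted rows; the `SubscriptionName`, `ReservationId`, and `CostInBillingCurrency` column names are assumptions about the export schema.

```python
# Illustrative only: aggregate amortized cost per subscription for rows that
# carry a reservation discount. Column names (SubscriptionName, ReservationId,
# CostInBillingCurrency) are assumptions about the exported CSV schema.
import csv
from collections import defaultdict

def chargeback_by_subscription(path: str) -> dict:
    totals = defaultdict(float)
    with open(path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            if row.get("ReservationId"):                      # reservation-discounted usage only
                cost = float(row.get("CostInBillingCurrency") or 0)
                totals[row.get("SubscriptionName", "unknown")] += cost
    return dict(totals)

print(chargeback_by_subscription("amortized-cost-export.csv"))
```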
+
+## Get Azure consumption and reservation usage data using API
+
+You can get the data using the API or download it from the Azure portal.
+
+You call the [Usage Details API](/rest/api/consumption/usagedetails/list) to get the new data. For details about terminology, see [usage terms](../understand/understand-usage.md).
+
+Here's an example call to the Usage Details API:
+
+```
+https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{enrollmentId}/providers/Microsoft.Billing/billingPeriods/{billingPeriodId}/providers/Microsoft.Consumption/usagedetails?metric={metric}&api-version=2019-05-01&$filter={filter}
+```
+
+For more information about {enrollmentId} and {billingPeriodId}, see the [Usage Details – List](/rest/api/consumption/usagedetails/list) API article.
+
+The metric and filter values in the following table can help you solve common reservation problems.
+
+| **Type of API data** | API call action |
+| --- | --- |
+| **All Charges (usage and purchases)** | Replace {metric} with ActualCost |
+| **Usage that got reservation discount** | Replace {metric} with ActualCost<br><br>Replace {filter} with: properties/reservationId%20ne%20 |
+| **Usage that didn't get reservation discount** | Replace {metric} with ActualCost<br><br>Replace {filter} with: properties/reservationId%20eq%20 |
+| **Amortized charges (usage and purchases)** | Replace {metric} with AmortizedCost |
+| **Unused reservation report** | Replace {metric} with AmortizedCost<br><br>Replace {filter} with: properties/ChargeType%20eq%20'UnusedReservation' |
+| **Reservation purchases** | Replace {metric} with ActualCost<br><br>Replace {filter} with: properties/ChargeType%20eq%20'Purchase' |
+| **Refunds** | Replace {metric} with ActualCost<br><br>Replace {filter} with: properties/ChargeType%20eq%20'Refund' |
+
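As a hedged illustration of one of these calls, the following Python snippet requests an unused-reservation report with the `AmortizedCost` metric and the filter from the table; the token handling and placeholder IDs are assumptions, not part of the article.

```python
# Illustrative call to the Usage Details API for an unused-reservation report.
# Assumes a valid Azure AD bearer token and an EA enrollment/billing period.
import requests

enrollment_id = "<enrollmentId>"         # placeholder
billing_period_id = "<billingPeriodId>"  # placeholder
token = "<bearer-token>"                 # placeholder

url = (
    "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/"
    f"{enrollment_id}/providers/Microsoft.Billing/billingPeriods/{billing_period_id}"
    "/providers/Microsoft.Consumption/usagedetails"
)
params = {
    "metric": "AmortizedCost",
    "api-version": "2019-05-01",
    "$filter": "properties/ChargeType eq 'UnusedReservation'",
}

resp = requests.get(url, params=params, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
for item in resp.json().get("value", []):
    print(item["name"])
```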
+## Download the usage CSV file with new data
+
+If you're an EA admin, you can download the CSV file that contains new usage data from the Azure portal. This data isn't available from the EA portal (ea.azure.com); you must download the usage file from the Azure portal (portal.azure.com) to see the new data.
+
+In the Azure portal, navigate to [Cost management + billing](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/BillingAccounts).
+
+1. Select the billing account.
+2. Click **Usage + charges**.
+3. Click **Download**.
+![Example showing where to Download the CSV usage data file in the Azure portal](./media/understand-reserved-instance-usage-ea/portal-download-csv.png)
+4. In **Usage Details**, select **Amortized usage data**.
+
+The CSV files that you download contain actual costs and amortized costs.
+ ## Need help? Contact us. If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). ## Next steps--- To learn how to manage a reservation, see [Manage Azure Reservations](manage-reserved-vm-instance.md).-- To learn more about Azure Reservations, see the following articles:
- - [What are Azure Reservations?](save-compute-costs-reservations.md)
- - [Manage Reservations in Azure](manage-reserved-vm-instance.md)
+- To learn more about Azure Reservations usage data, see the following articles:
+ - [Enterprise Agreement and Microsoft Customer Agreement reservation costs and usage](understand-reserved-instance-usage-ea.md)
+
data-factory https://docs.microsoft.com/en-us/azure/data-factory/author-global-parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-global-parameters.md
@@ -1,9 +1,7 @@
Title: Global parameters description: Set global parameters for each of your Azure Data Factory environments- -
data-factory https://docs.microsoft.com/en-us/azure/data-factory/author-management-hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-management-hub.md
@@ -1,13 +1,10 @@
Title: Management hub description: Manage your connections, source control configuration and global authoring properties in the Azure Data Factory management hub- - - Last updated 02/01/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/author-visually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-visually.md
@@ -1,14 +1,10 @@
Title: Visual authoring description: Learn how to use visual authoring in Azure Data Factory- - -- Last updated 09/08/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/azure-integration-runtime-ip-addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/azure-integration-runtime-ip-addresses.md
@@ -1,13 +1,9 @@
Title: Azure Integration Runtime IP addresses description: Learn which IP addresses you must allow inbound traffic from, in order to properly configure firewalls for securing network access to data stores.- -- - Last updated 01/06/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/azure-ssis-integration-runtime-package-store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/azure-ssis-integration-runtime-package-store.md
@@ -1,15 +1,10 @@
Title: Manage packages with Azure-SSIS Integration Runtime package store description: Learn how to manage packages with Azure-SSIS Integration Runtime package store. - - -- Last updated 09/29/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/built-in-preinstalled-components-ssis-integration-runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/built-in-preinstalled-components-ssis-integration-runtime.md
@@ -1,15 +1,10 @@
Title: Built-in and preinstalled components on Azure-SSIS Integration Runtime description: List all built-in and preinstalled components, such as clients, drivers, providers, connection managers, data sources/destinations/transformations, and tasks on Azure-SSIS Integration Runtime. - - -- Last updated 05/14/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/ci-cd-github-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
@@ -5,7 +5,6 @@
- Last updated 12/03/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/compare-versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compare-versions.md
@@ -1,19 +1,11 @@
Title: Compare Azure Data Factory with Data Factory version 1 description: This article compares Azure Data Factory with Azure Data Factory version 1.- --+ --- Last updated 04/09/2018-- # Compare Azure Data Factory with Data Factory version 1
@@ -28,7 +20,7 @@ The following table compares the features of Data Factory with the features of D
| - | | | | Datasets | A named view of data that references the data that you want to use in your activities as inputs and outputs. Datasets identify data within different data stores, such as tables, files, folders, and documents. For example, an Azure Blob dataset specifies the blob container and folder in Azure Blob storage from which the activity should read the data.<br/><br/>**Availability** defines the processing window slicing model for the dataset (for example, hourly, daily, and so on). | Datasets are the same in the current version. However, you do not need to define **availability** schedules for datasets. You can define a trigger resource that can schedule pipelines from a clock scheduler paradigm. For more information, see [Triggers](concepts-pipeline-execution-triggers.md#trigger-execution) and [Datasets](concepts-datasets-linked-services.md). | | Linked services | Linked services are much like connection strings, which define the connection information that's necessary for Data Factory to connect to external resources. | Linked services are the same as in Data Factory V1, but with a new **connectVia** property to utilize the Integration Runtime compute environment of the current version of Data Factory. For more information, see [Integration runtime in Azure Data Factory](concepts-integration-runtime.md) and [Linked service properties for Azure Blob storage](connector-azure-blob-storage.md#linked-service-properties). |
-| Pipelines | A data factory can have one or more pipelines. A pipeline is a logical grouping of activities that together perform a task. You use startTime, endTime, and isPaused to schedule and run pipelines. | Pipelines are groups of activities that are performed on data. However, the scheduling of activities in the pipeline has been separated into new trigger resources. You can think of pipelines in the current version of Data Factory more as "workflow units" that you schedule separately via triggers. <br/><br/>Pipelines do not have "windows" of time execution in the current version of Data Factory. The Data Factory V1 concepts of startTime, endTime, and isPaused are no longer present in the current version of Data Factory. For more information, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md) and [Pipelines and activities](concepts-pipelines-activities.md). |
+| Pipelines | A data factory can have one or more pipelines. A pipeline is a logical grouping of activities that together perform a task. You use startTime, endTime, and isPaused to schedule and run pipelines. | Pipelines are groups of activities that are performed on data. However, the scheduling of activities in the pipeline has been separated into new trigger resources. You can think of pipelines in the current version of Data Factory more as "workflow units" that you schedule separately via triggers. <br/><br/>Pipelines do not have "windows" of time execution in the current version of Data Factory. The Data Factory V1 concepts of startTime, endTime, and isPaused are no longer present in the current version of Data Factory. For more information, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md) and [Pipelines and activities](concepts-pipelines-activities.md). |
| Activities | Activities define actions to perform on your data within a pipeline. Data movement (copy activity) and data transformation activities (such as Hive, Pig, and MapReduce) are supported. | In the current version of Data Factory, activities still are defined actions within a pipeline. The current version of Data Factory introduces new [control flow activities](concepts-pipelines-activities.md#control-flow-activities). You use these activities in a control flow (looping and branching). Data movement and data transformation activities that were supported in V1 are supported in the current version. You can define transformation activities without using datasets in the current version. | | Hybrid data movement and activity dispatch | Now called Integration Runtime, [Data Management Gateway](v1/data-factory-data-management-gateway.md) supported moving data between on-premises and cloud.| Data Management Gateway is now called Self-Hosted Integration Runtime. It provides the same capability as it did in V1. <br/><br/> The Azure-SSIS Integration Runtime in the current version of Data Factory also supports deploying and running SQL Server Integration Services (SSIS) packages in the cloud. For more information, see [Integration runtime in Azure Data Factory](concepts-integration-runtime.md).| | Parameters | NA | Parameters are key-value pairs of read-only configuration settings that are defined in pipelines. You can pass arguments for the parameters when you are manually running the pipeline. If you are using a scheduler trigger, the trigger can pass values for the parameters too. Activities within the pipeline consume the parameter values. |
@@ -67,7 +59,7 @@ Pipelines can be triggered by on-demand (event-based, i.e. blob post) or wall-cl
The [Execute Pipeline activity](control-flow-execute-pipeline-activity.md) allows a Data Factory pipeline to invoke another pipeline. ### Delta flows
-A key use case in ETL patterns is "delta loads," in which only data that has changed since the last iteration of a pipeline is loaded. New capabilities in the current version, such as [lookup activity](control-flow-lookup-activity.md), flexible scheduling, and control flow, enable this use case in a natural way. For a tutorial with step-by-step instructions, see [Tutorial: Incremental copy](tutorial-incremental-copy-powershell.md).
+A key use case in ETL patterns is "delta loads," in which only data that has changed since the last iteration of a pipeline is loaded. New capabilities in the current version, such as [lookup activity](control-flow-lookup-activity.md), flexible scheduling, and control flow, enable this use case in a natural way. For a tutorial with step-by-step instructions, see [Tutorial: Incremental copy](tutorial-incremental-copy-powershell.md).
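The watermark pattern behind a delta load can be sketched in a few lines of Python; this is purely illustrative, with made-up rows and column names, and it only mirrors what a lookup activity, copy activity, and watermark update would do inside a pipeline.

```python
# Purely illustrative watermark pattern behind a "delta load": copy only rows
# modified since the last recorded watermark, then advance the watermark.
# Table contents and column names are made up for the example.
from datetime import datetime

source_rows = [
    {"id": 1, "value": "a", "last_modified": datetime(2021, 2, 1)},
    {"id": 2, "value": "b", "last_modified": datetime(2021, 2, 9)},
]
watermark = datetime(2021, 2, 5)  # the value a Lookup activity would read from a watermark table

delta = [r for r in source_rows if r["last_modified"] > watermark]  # incremental slice
print(f"copying {len(delta)} changed row(s)")                       # copy activity equivalent

if delta:
    watermark = max(r["last_modified"] for r in delta)              # watermark update step
print("new watermark:", watermark)
```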
### Other control flow activities Following are a few more control flow activities that are supported by the current version of Data Factory.
@@ -90,7 +82,7 @@ For example, you can use SQL Server Data Tools or SQL Server Management Studio t
## Flexible scheduling In the current version of Data Factory, you do not need to define dataset availability schedules. You can define a trigger resource that can schedule pipelines from a clock scheduler paradigm. You can also pass parameters to pipelines from a trigger for a flexible scheduling and execution model.
-Pipelines do not have "windows" of time execution in the current version of Data Factory. The Data Factory V1 concepts of startTime, endTime, and isPaused don't exist in the current version of Data Factory. For more information about how to build and then schedule a pipeline in the current version of Data Factory, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md).
+Pipelines do not have "windows" of time execution in the current version of Data Factory. The Data Factory V1 concepts of startTime, endTime, and isPaused don't exist in the current version of Data Factory. For more information about how to build and then schedule a pipeline in the current version of Data Factory, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md).
## Support for more data stores The current version supports the copying of data to and from more data stores than V1. For a list of supported data stores, see the following articles:
data-factory https://docs.microsoft.com/en-us/azure/data-factory/compute-linked-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compute-linked-services.md
@@ -1,14 +1,10 @@
Title: Compute environments supported by Azure Data Factory description: Compute environments that can be used with Azure Data Factory pipelines (such as Azure HDInsight) to transform or process data.- - - Last updated 05/08/2019
@@ -125,8 +121,8 @@ The following JSON defines a Linux-based on-demand HDInsight linked service. The
| connectVia | The Integration Runtime to be used to dispatch the activities to this HDInsight linked service. For on-demand HDInsight linked service, it only supports Azure Integration Runtime. If not specified, it uses the default Azure Integration Runtime. | No | | clusterUserName | The username to access the cluster. | No | | clusterPassword | The password in type of secure string to access the cluster. | No |
-| clusterSshUserName | The username to SSH remotely connects to cluster's node (for Linux). | No |
-| clusterSshPassword | The password in type of secure string to SSH remotely connect cluster's node (for Linux). | No |
+| clusterSshUserName | The username to SSH remotely connects to cluster's node (for Linux). | No |
+| clusterSshPassword | The password in type of secure string to SSH remotely connect cluster's node (for Linux). | No |
| scriptActions | Specify script for [HDInsight cluster customizations](../hdinsight/hdinsight-hadoop-customize-cluster-linux.md) during on-demand cluster creation. <br />Currently, Azure Data Factory's User Interface authoring tool supports specifying only 1 script action, but you can get through this limitation in the JSON (specify multiple script actions in the JSON). | No |
@@ -393,7 +389,7 @@ You create an Azure Machine Learning Studio (classic) linked service to register
| - | - | - | | Type | The type property should be set to: **AzureML**. | Yes | | mlEndpoint | The batch scoring URL. | Yes |
-| apiKey | The published workspace model's API. | Yes |
+| apiKey | The published workspace model's API. | Yes |
| updateResourceEndpoint | The Update Resource URL for an Azure Machine Learning Studio (classic) Web Service endpoint used to update the predictive Web Service with trained model file | No | | servicePrincipalId | Specify the application's client ID. | Required if updateResourceEndpoint is specified | | servicePrincipalKey | Specify the application's key. | Required if updateResourceEndpoint is specified |
@@ -543,12 +539,12 @@ You can create **Azure Databricks linked service** to register Databricks worksp
| name | Name of the Linked Service | Yes | | type | The type property should be set to: **Azure Databricks**. | Yes | | domain | Specify the Azure Region accordingly based on the region of the Databricks workspace. Example: https://eastus.azuredatabricks.net | Yes |
-| accessToken | Access token is required for Data Factory to authenticate to Azure Databricks. Access token needs to be generated from the databricks workspace. More detailed steps to find the access token can be found [here](https://docs.azuredatabricks.net/api/latest/authentication.html#generate-token) | No |
+| accessToken | Access token is required for Data Factory to authenticate to Azure Databricks. Access token needs to be generated from the databricks workspace. More detailed steps to find the access token can be found [here](/azure/databricks/dev-tools/api/latest/authentication#generate-token) | No |
| MSI | Use Data Factory's managed identity (system-assigned) to authenticate to Azure Databricks. You do not need Access Token when using 'MSI' authentication | No | | existingClusterId | Cluster ID of an existing cluster to run all jobs on this. This should be an already created Interactive Cluster. You may need to manually restart the cluster if it stops responding. Databricks suggest running jobs on new clusters for greater reliability. You can find the Cluster ID of an Interactive Cluster on Databricks workspace -> Clusters -> Interactive Cluster Name -> Configuration -> Tags. [More details](https://docs.databricks.com/user-guide/clusters/tags.html) | No | instancePoolId | Instance Pool ID of an existing pool in databricks workspace. | No | | newClusterVersion | The Spark version of the cluster. It creates a job cluster in databricks. | No |
-| newClusterNumOfWorker| Number of worker nodes that this cluster should have. A cluster has one Spark Driver and num_workers Executors for a total of num_workers + 1 Spark nodes. A string formatted Int32, like "1" means numOfWorker is 1 or "1:10" means autoscale from 1 as min and 10 as max. | No |
+| newClusterNumOfWorker| Number of worker nodes that this cluster should have. A cluster has one Spark Driver and num_workers Executors for a total of num_workers + 1 Spark nodes. A string formatted Int32, like "1" means numOfWorker is 1 or "1:10" means autoscale from 1 as min and 10 as max. | No |
| newClusterNodeType | This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster. For example, the Spark nodes can be provisioned and optimized for memory or compute intensive workloads. This field is required for new cluster | No | | newClusterSparkConf | a set of optional, user-specified Spark configuration key-value pairs. Users can also pass in a string of extra JVM options to the driver and the executors via spark.driver.extraJavaOptions and spark.executor.extraJavaOptions respectively. | No | | newClusterInitScripts| a set of optional, user-defined initialization scripts for the new cluster. Specifying the DBFS path to the init scripts. | No |
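To show how a few of these properties combine, here is a hypothetical Azure Databricks linked service written as a Python dictionary so it can carry comments; the workspace URL, Key Vault names, node type, and cluster sizes are placeholders.

```python
# Hypothetical Azure Databricks linked service, shown as a Python dict so the
# properties from the table above can be annotated; all values are placeholders.
linked_service = {
    "name": "AzureDatabricksLinkedService",
    "properties": {
        "type": "AzureDatabricks",
        "typeProperties": {
            "domain": "https://eastus.azuredatabricks.net",
            "accessToken": {                      # token generated in the Databricks workspace
                "type": "AzureKeyVaultSecret",
                "store": {"referenceName": "MyKeyVault", "type": "LinkedServiceReference"},
                "secretName": "databricks-access-token",
            },
            "newClusterVersion": "7.3.x-scala2.12",
            "newClusterNumOfWorker": "1:10",      # autoscale from 1 to 10 workers
            "newClusterNodeType": "Standard_D3_v2",
        },
    },
}
```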
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-data-flow-debug-mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-debug-mode.md
@@ -3,7 +3,6 @@ Title: Mapping data flow Debug Mode
description: Start an interactive debug session when building data flows -
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-data-redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-redundancy.md
@@ -1,16 +1,9 @@
Title: Data redundancy in Azure Data Factory | Microsoft Docs description: 'Learn about meta-data redundancy mechanisms in Azure Data Factory'- - - -- Last updated 11/05/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-datasets-linked-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-datasets-linked-services.md
@@ -1,14 +1,10 @@
Title: Datasets description: 'Learn about datasets in Data Factory. Datasets represent input/output data.'- - - Last updated 08/24/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-integration-runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-integration-runtime.md
@@ -1,13 +1,9 @@
Title: Integration runtime description: 'Learn about integration runtime in Azure Data Factory.'- -- - Last updated 07/14/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-linked-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-linked-services.md
@@ -1,14 +1,10 @@
Title: Linked services in Azure Data Factory description: 'Learn about linked services in Data Factory. Linked services link compute/data stores to data factory.'- - - Last updated 08/21/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-pipeline-execution-triggers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipeline-execution-triggers.md
@@ -1,14 +1,10 @@
Title: Pipeline execution and triggers in Azure Data Factory description: This article provides information about how to execute a pipeline in Azure Data Factory, either on-demand or by creating a trigger.- - - Last updated 07/05/2018
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-pipelines-activities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipelines-activities.md
@@ -1,12 +1,9 @@
Title: Pipelines and activities in Azure Data Factory description: 'Learn about pipelines and activities in Azure Data Factory.'- - Last updated 11/19/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-roles-permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-roles-permissions.md
@@ -4,12 +4,8 @@ description: Describes the roles and permissions required to create Data Factori
Last updated 11/5/2018 -- - # Roles and permissions for Azure Data Factory
data-factory https://docs.microsoft.com/en-us/azure/data-factory/configure-azure-ssis-integration-runtime-performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/configure-azure-ssis-integration-runtime-performance.md
@@ -1,15 +1,11 @@
Title: Configure performance for the Azure-SSIS Integration Runtime description: Learn how to configure the properties of the Azure-SSIS Integration Runtime for high performance- Last updated 01/10/2018 - -- # Configure the Azure-SSIS Integration Runtime for high performance
data-factory https://docs.microsoft.com/en-us/azure/data-factory/configure-bcdr-azure-ssis-integration-runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/configure-bcdr-azure-ssis-integration-runtime.md
@@ -1,14 +1,10 @@
Title: Configure Azure-SSIS integration runtime for SQL Database failover description: This article describes how to configure the Azure-SSIS integration runtime with Azure SQL Database geo-replication and failover for the SSISDB database- - ms.devlang: powershell -- Last updated 11/06/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connect-data-factory-to-azure-purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connect-data-factory-to-azure-purview.md
@@ -2,13 +2,9 @@
Title: Connect a Data Factory to Azure Purview description: Learn about how to connect a Data Factory to Azure Purview - -- - Last updated 12/3/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-amazon-marketplace-web-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-marketplace-web-service.md
@@ -1,14 +1,10 @@
Title: Copy data from AWS Marketplace description: Learn how to copy data from Amazon Marketplace Web Service to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- - -- Last updated 08/01/2018
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-amazon-redshift https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-redshift.md
@@ -1,13 +1,9 @@
Title: Copy data from Amazon Redshift description: Learn about how to copy data from Amazon Redshift to supported sink data stores by using Azure Data Factory.- -- - Last updated 12/09/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-amazon-simple-storage-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-simple-storage-service.md
@@ -1,13 +1,9 @@
Title: Copy data from Amazon Simple Storage Service (S3) description: Learn about how to copy data from Amazon Simple Storage Service (S3) to supported sink data stores by using Azure Data Factory.- -- - Last updated 01/14/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-blob-storage.md
@@ -3,10 +3,7 @@ Title: Copy and transform data in Azure Blob storage
description: Learn how to copy data to and from Blob storage, and transform data in Blob storage by using Data Factory. -- - Last updated 12/08/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db-mongodb-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db-mongodb-api.md
@@ -1,13 +1,9 @@
Title: Copy data from Azure Cosmos DB's API for MongoDB description: Learn how to copy data from supported source data stores to or from Azure Cosmos DB's API for MongoDB to supported sink stores by using Data Factory.- ----+ Last updated 11/20/2019
@@ -112,7 +108,7 @@ The following properties are supported in the Copy Activity **source** section:
| filter | Specifies selection filter using query operators. To return all documents in a collection, omit this parameter or pass an empty document ({}). | No | | cursorMethods.project | Specifies the fields to return in the documents for projection. To return all fields in the matching documents, omit this parameter. | No | | cursorMethods.sort | Specifies the order in which the query returns matching documents. Refer to [cursor.sort()](https://docs.mongodb.com/manual/reference/method/cursor.sort/#cursor.sort). | No |
-| cursorMethods.limit | Specifies the maximum number of documents the server returns. Refer to [cursor.limit()](https://docs.mongodb.com/manual/reference/method/cursor.limit/#cursor.limit). | No |
+| cursorMethods.limit | Specifies the maximum number of documents the server returns. Refer to [cursor.limit()](https://docs.mongodb.com/manual/reference/method/cursor.limit/#cursor.limit). | No |
| cursorMethods.skip | Specifies the number of documents to skip and from where MongoDB begins to return results. Refer to [cursor.skip()](https://docs.mongodb.com/manual/reference/method/cursor.skip/#cursor.skip). | No | | batchSize | Specifies the number of documents to return in each batch of the response from the MongoDB instance. In most cases, modifying the batch size will not affect the user or the application. Cosmos DB limits each batch to no more than 40 MB in size (the sum of the sizes of the batchSize number of documents), so decrease this value if your documents are large. | No<br/>(the default is **100**) |
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db.md
@@ -1,13 +1,9 @@
Title: Copy and transform data in Azure Cosmos DB (SQL API) description: Learn how to copy data to and from Azure Cosmos DB (SQL API), and transform data in Azure Cosmos DB (SQL API) by using Data Factory.- ----+ Last updated 01/29/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-data-explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-explorer.md
@@ -1,14 +1,9 @@
Title: Copy data to or from Azure Data Explorer description: Learn how to copy data to or from Azure Data Explorer by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 02/18/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-data-lake-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-storage.md
@@ -1,13 +1,9 @@
Title: Copy and transform data in Azure Data Lake Storage Gen2 description: Learn how to copy data to and from Azure Data Lake Storage Gen2, and transform data in Azure Data Lake Storage Gen2 by using Azure Data Factory.- -- - Last updated 10/28/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-data-lake-store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-store.md
@@ -1,13 +1,9 @@
Title: Copy data to or from Azure Data Lake Storage Gen1 description: Learn how to copy data from supported source data stores to Azure Data Lake Store, or from Data Lake Store to supported sink stores, by using Data Factory.- -- - Last updated 08/31/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-database-for-mariadb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-mariadb.md
@@ -1,13 +1,9 @@
Title: Copy data from Azure Database for MariaDB description: Learn how to copy data from Azure Database for MariaDB to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 09/04/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-database-for-mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-mysql.md
@@ -1,13 +1,9 @@
Title: Copy data to and from Azure Database for MySQL description: Learn how to copy data to and from Azure Database for MySQL by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 08/25/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-database-for-postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-postgresql.md
@@ -1,13 +1,9 @@
Title: Copy and transform data in Azure Database for PostgreSQL description: Learn how to copy and transform data in Azure Database for PostgreSQL by using Azure Data Factory.- -- - Last updated 02/01/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-databricks-delta-lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-databricks-delta-lake.md
@@ -1,13 +1,9 @@
Title: Copy data to and from Azure Databricks Delta Lake description: Learn how to copy data to and from Azure Databricks Delta Lake by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 11/24/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-file-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-file-storage.md
@@ -1,13 +1,9 @@
Title: Copy data from/to Azure File Storage description: Learn how to copy data from Azure File Storage to supported sink data stores (or) from supported source data stores to Azure File Storage by using Azure Data Factory.- -- - Last updated 08/31/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-search.md
@@ -1,13 +1,9 @@
Title: Copy data to Search index description: Learn about how to push or copy data to an Azure search index by using the Copy Activity in an Azure Data Factory pipeline.- -- - Last updated 09/13/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-data-warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-data-warehouse.md
@@ -1,16 +1,11 @@
Title: Copy and transform data in Azure Synapse Analytics description: Learn how to copy data to and from Azure Synapse Analytics, and transform data in Azure Synapse Analytics by using Data Factory.- -- - - Previously updated : 01/29/2021 Last updated : 02/10/2021 # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory
@@ -63,7 +58,7 @@ The following properties are supported for an Azure Synapse Analytics linked ser
| servicePrincipalId | Specify the application's client ID. | Yes, when you use Azure AD authentication with a service principal. | | servicePrincipalKey | Specify the application's key. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes, when you use Azure AD authentication with a service principal. | | tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. You can retrieve it by hovering the mouse in the top-right corner of the Azure portal. | Yes, when you use Azure AD authentication with a service principal. |
-| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory's cloud environment is used. | No |
+| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are `AzurePublic`, `AzureChina`, `AzureUsGovernment`, and `AzureGermany`. By default, the data factory's cloud environment is used. | No |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, it uses the default Azure Integration Runtime. | No | For different authentication types, refer to the following sections on prerequisites and JSON samples, respectively:
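As a quick orientation before those samples, here is a hypothetical service principal linked service expressed as a Python dictionary so the properties above can be annotated; the connection string, IDs, and Key Vault names are placeholders, and the fragment is a sketch rather than the article's own sample.

```python
# Hypothetical Azure Synapse Analytics linked service using service principal
# authentication; shown as a Python dict for inline comments, values are placeholders.
linked_service = {
    "name": "AzureSynapseLinkedService",
    "properties": {
        "type": "AzureSqlDW",
        "typeProperties": {
            "connectionString": "Server=tcp:<server>.sql.azuresynapse.net,1433;Database=<db>;",
            "servicePrincipalId": "<application-client-id>",
            "servicePrincipalKey": {              # keep the key in Key Vault rather than inline
                "type": "AzureKeyVaultSecret",
                "store": {"referenceName": "MyKeyVault", "type": "LinkedServiceReference"},
                "secretName": "synapse-sp-key",
            },
            "tenant": "<tenant-id-or-domain>",
            "azureCloudType": "AzurePublic",      # one of the allowed cloud values above
        },
        "connectVia": {"referenceName": "AutoResolveIntegrationRuntime", "type": "IntegrationRuntimeReference"},
    },
}
```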
@@ -265,11 +260,11 @@ To copy data from Azure Synapse Analytics, set the **type** property in the Copy
| sqlReaderQuery | Use the custom SQL query to read data. Example: `select * from MyTable`. | No | | sqlReaderStoredProcedureName | The name of the stored procedure that reads data from the source table. The last SQL statement must be a SELECT statement in the stored procedure. | No | | storedProcedureParameters | Parameters for the stored procedure.<br/>Allowed values are name or value pairs. Names and casing of parameters must match the names and casing of the stored procedure parameters. | No |
-| isolationLevel | Specifies the transaction locking behavior for the SQL source. The allowed values are: **ReadCommitted**, **ReadUncommitted**, **RepeatableRead**, **Serializable**, **Snapshot**. If not specified, the database's default isolation level is used. Refer to [this doc](/dotnet/api/system.data.isolationlevel) for more details. | No |
+| isolationLevel | Specifies the transaction locking behavior for the SQL source. The allowed values are: **ReadCommitted**, **ReadUncommitted**, **RepeatableRead**, **Serializable**, **Snapshot**. If not specified, the database's default isolation level is used. For more information, see [system.data.isolationlevel](/dotnet/api/system.data.isolationlevel). | No |
| partitionOptions | Specifies the data partitioning options used to load data from Azure Synapse Analytics. <br>Allowed values are: **None** (default), **PhysicalPartitionsOfTable**, and **DynamicRange**.<br>When a partition option is enabled (that is, not `None`), the degree of parallelism to concurrently load data from an Azure Synapse Analytics is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. | No | | partitionSettings | Specify the group of the settings for data partitioning. <br>Apply when the partition option isn't `None`. | No | | ***Under `partitionSettings`:*** | | |
-| partitionColumnName | Specify the name of the source column **in integer or date/datetime type** (`int`, `smallint`, `bigint`, `date`, `smalldatetime`, `datetime`, `datetime2`, or `datetimeoffset`) that will be used by range partitioning for parallel copy. If not specified, the index or the primary key of the table is auto-detected and used as the partition column.<br>Apply when the partition option is `DynamicRange`. If you use a query to retrieve the source data, hook `?AdfDynamicRangePartitionCondition ` in the WHERE clause. For an example, see the [Parallel copy from SQL database](#parallel-copy-from-azure-synapse-analytics) section. | No |
+| partitionColumnName | Specify the name of the source column **in integer or date/datetime type** (`int`, `smallint`, `bigint`, `date`, `smalldatetime`, `datetime`, `datetime2`, or `datetimeoffset`) that will be used by range partitioning for parallel copy. If not specified, the index or the primary key of the table is detected automatically and used as the partition column.<br>Apply when the partition option is `DynamicRange`. If you use a query to retrieve the source data, hook `?AdfDynamicRangePartitionCondition ` in the WHERE clause. For an example, see the [Parallel copy from SQL database](#parallel-copy-from-azure-synapse-analytics) section. | No |
| partitionUpperBound | The maximum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in the table. All rows in the table or query result will be partitioned and copied. If not specified, the copy activity automatically detects the value. <br>Apply when the partition option is `DynamicRange`. For an example, see the [Parallel copy from SQL database](#parallel-copy-from-azure-synapse-analytics) section. | No | | partitionLowerBound | The minimum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in the table. All rows in the table or query result will be partitioned and copied. If not specified, the copy activity automatically detects the value.<br>Apply when the partition option is `DynamicRange`. For an example, see the [Parallel copy from SQL database](#parallel-copy-from-azure-synapse-analytics) section. | No |
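For orientation, here is an illustrative source configuration, written as a Python dictionary for the sake of comments, that combines a custom query with dynamic range partitioning; the table, column, and bound values are placeholders.

```python
# Illustrative copy activity source combining a custom query with dynamic range
# partitioning; table, column, and bound values are placeholders, and the
# ?AdfDynamicRangePartitionCondition hook comes from the description above.
synapse_source = {
    "type": "SqlDWSource",
    "query": "SELECT * FROM dbo.Orders WHERE ?AdfDynamicRangePartitionCondition AND OrderDate >= '2021-01-01'",
    "partitionOption": "DynamicRange",
    "partitionSettings": {
        "partitionColumnName": "OrderId",   # integer or date/datetime column
        "partitionUpperBound": "1000000",   # optional; detected automatically if omitted
        "partitionLowerBound": "1",
    },
}
```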
@@ -277,7 +272,7 @@ To copy data from Azure Synapse Analytics, set the **type** property in the Copy
- When using a stored procedure in the source to retrieve data, note that if your stored procedure is designed to return a different schema when a different parameter value is passed in, you may encounter a failure or see an unexpected result when importing the schema from the UI or when copying data to the SQL database with auto table creation.
-**Example: using SQL query**
+#### Example: using SQL query
```json "activities":[
@@ -309,7 +304,7 @@ To copy data from Azure Synapse Analytics, set the **type** property in the Copy
] ```
-**Example: using stored procedure**
+#### Example: using stored procedure
```json "activities":[
@@ -345,7 +340,7 @@ To copy data from Azure Synapse Analytics, set the **type** property in the Copy
] ```
-**Sample stored procedure:**
+#### Sample stored procedure:
```sql CREATE PROCEDURE CopyTestSrcStoredProcedureWithParameters
@@ -524,7 +519,7 @@ If the requirements aren't met, Azure Data Factory checks the settings and autom
3. If your source is a folder, `recursive` in copy activity must be set to true.
-4. `wildcardFolderPath` , `wildcardFilename`, `modifiedDateTimeStart`, `modifiedDateTimeEnd`, `prefix`, `enablePartitionDiscovery` and `additionalColumns` are not specified.
+4. `wildcardFolderPath` , `wildcardFilename`, `modifiedDateTimeStart`, `modifiedDateTimeEnd`, `prefix`, `enablePartitionDiscovery`, and `additionalColumns` are not specified.
>[!NOTE] >If your source is a folder, note PolyBase retrieves files from the folder and all of its subfolders, and it doesn't retrieve data from files for which the file name begins with an underline (_) or a period (.), as documented [here - LOCATION argument](/sql/t-sql/statements/create-external-table-transact-sql#arguments-2).
@@ -573,6 +568,9 @@ To use this feature, create an [Azure Blob Storage linked service](connector-azu
>- When you use managed identity authentication for your staging linked service, learn the needed configurations for [Azure Blob](connector-azure-blob-storage.md#managed-identity) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#managed-identity) respectively. >- If your staging Azure Storage is configured with VNet service endpoint, you must use managed identity authentication with "allow trusted Microsoft service" enabled on storage account, refer to [Impact of using VNet Service Endpoints with Azure storage](../azure-sql/database/vnet-service-endpoint-rule-overview.md#impact-of-using-virtual-network-service-endpoints-with-azure-storage).
+>[!IMPORTANT]
+>If your staging Azure Storage is configured with a managed private endpoint and has the storage firewall enabled, you must use managed identity authentication and grant Storage Blob Data Reader permissions to the Synapse SQL Server to ensure that it can access the staged files during the PolyBase load.
+ ```json "activities":[ {
@@ -612,7 +610,7 @@ To use this feature, create an [Azure Blob Storage linked service](connector-azu
### Best practices for using PolyBase
-The following sections provide best practices in addition to those mentioned in [Best practices for Azure Synapse Analytics](../synapse-analytics/sql/best-practices-sql-pool.md).
+The following sections provide best practices in addition to the practices mentioned in [Best practices for Azure Synapse Analytics](../synapse-analytics/sql/best-practices-sql-pool.md).
#### Required database permission
@@ -632,17 +630,17 @@ To achieve the best possible throughput, assign a larger resource class to the u
#### PolyBase troubleshooting
-**Loading to Decimal column**
+#### Loading to Decimal column
-If your source data is in text format or other non-PolyBase compatible stores (using staged copy and PolyBase), and it contains empty value to be loaded into Azure Synapse Analytics Decimal column, you may hit the following error:
+If your source data is in text format or in other non-PolyBase-compatible stores (using staged copy and PolyBase), and it contains an empty value to be loaded into an Azure Synapse Analytics Decimal column, you may get the following error:
-```
+```output
ErrorCode=FailedDbOperation, ......HadoopSqlException: Error converting data type VARCHAR to DECIMAL.....Detailed Message=Empty string can't be converted to DECIMAL..... ``` The solution is to unselect the "**Use type default**" option (as false) in the copy activity sink -> PolyBase settings. "[USE_TYPE_DEFAULT](/sql/t-sql/statements/create-external-file-format-transact-sql#arguments)" is a PolyBase native configuration that specifies how to handle missing values in delimited text files when PolyBase retrieves data from the text file.
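If you configure this in JSON rather than in the UI, a hedged sketch of where the flag lives follows: `useTypeDefault` sits under `polyBaseSettings` on the `SqlDWSink`, and the reject settings shown are illustrative values, not recommendations.

```json
"sink": {
    "type": "SqlDWSink",
    "allowPolyBase": true,
    "polyBaseSettings": {
        "rejectType": "percentage",
        "rejectValue": 10.0,
        "rejectSampleValue": 100,
        "useTypeDefault": false
    }
}
```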
-**`tableName` in Azure Synapse Analytics**
+#### Check the tableName property in Azure Synapse Analytics
The following table gives examples of how to specify the **tableName** property in the JSON dataset. It shows several combinations of schema and table names.
@@ -655,20 +653,30 @@ The following table gives examples of how to specify the **tableName** property
If you see the following error, the problem might be the value you specified for the **tableName** property. See the preceding table for the correct way to specify values for the **tableName** JSON property.
-```
+```output
Type=System.Data.SqlClient.SqlException,Message=Invalid object name 'stg.Account_test'.,Source=.Net SqlClient Data Provider ```
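As one hedged illustration of the pattern described above, a bracketed, schema-qualified name can be supplied in the dataset's type properties like this (`dbo` and `MyTable` are placeholder names; consult the table above for the exact combinations of schema and table names):

```json
"typeProperties": {
    "tableName": "[dbo].[MyTable]"
}
```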
-**Columns with default values**
+#### Columns with default values
Currently, the PolyBase feature in Data Factory accepts only the same number of columns as in the target table. An example is a table with four columns where one of them is defined with a default value. The input data still needs to have four columns. A three-column input dataset yields an error similar to the following message:
-```
+```output
All columns of the table must be specified in the INSERT BULK statement. ``` The NULL value is a special form of the default value. If the column is nullable, the input data in the blob for that column might be empty. But it can't be missing from the input dataset. PolyBase inserts NULL for missing values in Azure Synapse Analytics.
+#### External file access failed
+
+If you receive the following error, ensure that you are using managed identity authentication and have granted Storage Blob Data Reader permissions to the Azure Synapse workspace's managed identity.
+
+```output
+Job failed due to reason: at Sink '[SinkName]': shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: External file access failed due to internal error: 'Error occurred while accessing HDFS: Java exception raised on call to HdfsBridge_IsDirExist. Java exception message:\r\nHdfsBridge::isDirExist
+```
+
+For more information, see [Grant permissions to managed identity after workspace creation](../synapse-analytics/security/how-to-grant-workspace-managed-identity-permissions.md#grant-permissions-to-managed-identity-after-workspace-creation).
+ ## <a name="use-copy-statement"></a> Use COPY statement to load data into Azure Synapse Analytics Azure Synapse Analytics [COPY statement](/sql/t-sql/statements/copy-into-transact-sql) directly supports loading data from **Azure Blob and Azure Data Lake Storage Gen2**. If your source data meets the criteria described in this section, you can choose to use the COPY statement in ADF to load data into Azure Synapse Analytics. Azure Data Factory checks the settings and fails the copy activity run if the criteria are not met.
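As a sketch, the COPY-statement path is chosen on the sink in much the same way as PolyBase, assuming the connector's `allowCopyCommand` switch and the optional `copyCommandSettings` block; the COPY options shown (`MAXERRORS`, `DATEFORMAT`) are illustrative only.

```json
"sink": {
    "type": "SqlDWSink",
    "allowCopyCommand": true,
    "copyCommandSettings": {
        "additionalOptions": {
            "MAXERRORS": "10000",
            "DATEFORMAT": "'ymd'"
        }
    }
}
```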
@@ -866,4 +874,4 @@ When you copy data from or to Azure Synapse Analytics, the following mappings ar
## Next steps
-For a list of data stores supported as sources and sinks by Copy Activity in Azure Data Factory, see [supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores supported as sources and sinks by Copy Activity in Azure Data Factory, see [supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-database.md
@@ -1,13 +1,9 @@
Title: Copy and transform data in Azure SQL Database description: Learn how to copy data to and from Azure SQL Database, and transform data in Azure SQL Database by using Azure Data Factory.- -- - Last updated 01/11/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-managed-instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-managed-instance.md
@@ -1,14 +1,10 @@
Title: Copy and transform data in Azure SQL Managed Instance description: Learn how to copy and transform data in Azure SQL Managed Instance by using Azure Data Factory.- - -- Last updated 12/18/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-table-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-table-storage.md
@@ -1,13 +1,9 @@
Title: Copy data to and from Azure Table storage description: Learn how to copy data from supported source stores to Azure Table storage, or from Table storage to supported sink stores, by using Data Factory.- -- - Last updated 10/20/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-cassandra.md
@@ -1,20 +1,11 @@
Title: Copy data from Cassandra using Azure Data Factory description: Learn how to copy data from Cassandra to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 08/12/2019 - # Copy data from Cassandra using Azure Data Factory > [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-concur https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-concur.md
@@ -1,18 +1,12 @@
Title: Copy data from Concur using Azure Data Factory (Preview) description: Learn how to copy data from Concur to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 11/25/2020 - # Copy data from Concur using Azure Data Factory (Preview)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-couchbase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-couchbase.md
@@ -1,20 +1,11 @@
Title: Copy data from Couchbase using Azure Data Factory (Preview) description: Learn how to copy data from Couchbase to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 08/12/2019 - # Copy data from Couchbase using Azure Data Factory (Preview) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-db2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-db2.md
@@ -1,21 +1,11 @@
Title: Copy data from DB2 using Azure Data Factory description: Learn how to copy data from DB2 to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 05/26/2020- - # Copy data from DB2 by using Azure Data Factory > [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
@@ -81,7 +71,7 @@ Typical properties inside the connection string:
| authenticationType |Type of authentication used to connect to the DB2 database.<br/>Allowed value is: **Basic**. |Yes |
| username |Specify user name to connect to the DB2 database. |Yes |
| password |Specify password for the user account you specified for the username. Mark this field as a SecureString to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
-| packageCollection | Specify under where the needed packages are auto created by ADF when querying the database. If this is not set, Data Factory uses the {username} as the default value. | No |
+| packageCollection | Specify the collection under which the needed packages are auto-created by ADF when querying the database. If this is not set, Data Factory uses {username} as the default value. | No |
| certificateCommonName | When you use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) encryption, you must enter a value for Certificate common name. | No | > [!TIP]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-drill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-drill.md
@@ -1,20 +1,11 @@
Title: Copy data from Drill using Azure Data Factory description: Learn how to copy data from Drill to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 10/25/2019 - # Copy data from Drill using Azure Data Factory
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-dynamics-ax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-ax.md
@@ -1,14 +1,9 @@
Title: Copy data from Dynamics AX description: Learn how to copy data from Dynamics AX to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 06/12/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-dynamics-crm-office-365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
@@ -1,15 +1,10 @@
Title: Copy data in Dynamics (Common Data Service) description: Learn how to copy data from Microsoft Dynamics CRM or Microsoft Dynamics 365 (Common Data Service) to supported sink data stores or from supported source data stores to Dynamics CRM or Dynamics 365 by using a copy activity in a data factory pipeline.- - -- Last updated 02/02/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-file-system https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-file-system.md
@@ -1,13 +1,8 @@
Title: Copy data from/to a file system by using Azure Data Factory description: Learn how to copy data from file system to supported sink data stores (or) from supported source data stores to file system by using Azure Data Factory.- -- - Last updated 08/31/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-ftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-ftp.md
@@ -1,13 +1,8 @@
Title: Copy data from an FTP server by using Azure Data Factory description: Learn how to copy data from an FTP server to a supported sink data store by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 12/18/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-github.md
@@ -3,7 +3,6 @@ Title: Connect to GitHub
description: Use GitHub to specify your Common Data Model entity references - Last updated 06/03/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-google-adwords https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-adwords.md
@@ -1,13 +1,9 @@
Title: Copy data from Google AdWords description: Learn how to copy data from Google AdWords to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 10/25/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-google-bigquery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-bigquery.md
@@ -1,14 +1,9 @@
Title: Copy data from Google BigQuery by using Azure Data Factory description: Learn how to copy data from Google BigQuery to supported sink data stores by using a copy activity in a data factory pipeline.- -- - Last updated 09/04/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-google-cloud-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-cloud-storage.md
@@ -1,12 +1,8 @@
Title: Copy data from Google Cloud Storage by using Azure Data Factory description: Learn about how to copy data from Google Cloud Storage to supported sink data stores by using Azure Data Factory.- -- - Last updated 10/14/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-greenplum https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-greenplum.md
@@ -1,20 +1,11 @@
Title: Copy data from Greenplum using Azure Data Factory description: Learn how to copy data from Greenplum to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 09/04/2019 - # Copy data from Greenplum using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-hbase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hbase.md
@@ -1,20 +1,11 @@
Title: Copy data from HBase using Azure Data Factory description: Learn how to copy data from HBase to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 08/12/2019 - # Copy data from HBase using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-hdfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hdfs.md
@@ -1,13 +1,8 @@
Title: Copy data from HDFS by using Azure Data Factory description: Learn how to copy data from a cloud or on-premises HDFS source to supported sink data stores by using Copy activity in an Azure Data Factory pipeline.- -- - Last updated 12/18/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-hive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hive.md
@@ -1,20 +1,11 @@
Title: Copy data from Hive using Azure Data Factory description: Learn how to copy data from Hive to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 11/17/2020 - # Copy and transform data from Hive using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
@@ -194,17 +185,17 @@ These settings translate into the following data flow script:
``` source(
- allowSchemaDrift: true,
- validateSchema: false,
- ignoreNoFilesFound: false,
- format: 'table',
- store: 'hive',
- schemaName: 'default',
- tableName: 'hivesampletable',
- staged: true,
- storageContainer: 'khive',
- storageFolderPath: '',
- stagingDatabaseName: 'default') ~> hivesource
+ allowSchemaDrift: true,
+ validateSchema: false,
+ ignoreNoFilesFound: false,
+ format: 'table',
+ store: 'hive',
+ schemaName: 'default',
+ tableName: 'hivesampletable',
+ staged: true,
+ storageContainer: 'khive',
+ storageFolderPath: '',
+ stagingDatabaseName: 'default') ~> hivesource
``` ### Known limitations
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-http.md
@@ -1,20 +1,11 @@
Title: Copy data from an HTTP source by using Azure Data Factory description: Learn how to copy data from a cloud or on-premises HTTP source to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 12/10/2019 - # Copy data from an HTTP endpoint by using Azure Data Factory
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-hubspot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hubspot.md
@@ -1,20 +1,11 @@
Title: Copy data from HubSpot using Azure Data Factory description: Learn how to copy data from HubSpot to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 12/18/2020 - # Copy data from HubSpot using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-impala https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-impala.md
@@ -1,20 +1,11 @@
Title: Copy data from Impala by using Azure Data Factory description: Learn how to copy data from Impala to supported sink data stores by using a copy activity in a data factory pipeline.- --- --- Last updated 09/04/2019 - # Copy data from Impala by using Azure Data Factory
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-informix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-informix.md
@@ -1,20 +1,11 @@
Title: Copy data from and to IBM Informix using Azure Data Factory description: Learn how to copy data from and to IBM Informix by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 06/28/2020 - # Copy data from and to IBM Informix using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
@@ -162,7 +153,7 @@ To copy data to Informix, the following properties are supported in the copy act
| Property | Description | Required |
|: |: |: |
| type | The type property of the copy activity sink must be set to: **InformixSink** | Yes |
-| writeBatchTimeout |Wait time for the batch insert operation to complete before it times out.<br/>Allowed values are: timespan. Example: ΓÇ£00:30:00ΓÇ¥ (30 minutes). |No |
+| writeBatchTimeout |Wait time for the batch insert operation to complete before it times out.<br/>Allowed values are: timespan. Example: "00:30:00" (30 minutes). |No |
| writeBatchSize |Inserts data into the SQL table when the buffer size reaches writeBatchSize.<br/>Allowed values are: integer (number of rows). |No (default is 0 - auto detected) |
| preCopyScript |Specify a SQL query for Copy Activity to execute before writing data into data store in each run. You can use this property to clean up the pre-loaded data. |No |
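For instance, here is a minimal sketch of an Informix sink that sets all three optional properties from the table above; the target table in the pre-copy script is a placeholder.

```json
"sink": {
    "type": "InformixSink",
    "writeBatchSize": 10000,
    "writeBatchTimeout": "00:30:00",
    "preCopyScript": "DELETE FROM <TargetTable>"
}
```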
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-jira https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-jira.md
@@ -1,20 +1,11 @@
Title: Copy data from Jira using Azure Data Factory description: Learn how to copy data from Jira to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 10/25/2019 - # Copy data from Jira using Azure Data Factory
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-magento https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-magento.md
@@ -1,20 +1,11 @@
Title: Copy data from Magento using Azure Data Factory (Preview) description: Learn how to copy data from Magento to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 08/01/2019 - # Copy data from Magento using Azure Data Factory (Preview) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-mariadb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mariadb.md
@@ -1,20 +1,11 @@
Title: Copy data from MariaDB using Azure Data Factory description: Learn how to copy data from MariaDB to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 08/12/2019 - # Copy data from MariaDB using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-marketo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-marketo.md
@@ -1,20 +1,11 @@
Title: Copy data from Marketo using Azure Data Factory (Preview) description: Learn how to copy data from Marketo to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 06/04/2020 - # Copy data from Marketo using Azure Data Factory (Preview) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-microsoft-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-microsoft-access.md
@@ -1,13 +1,9 @@
Title: Copy data from and to Microsoft Access description: Learn how to copy data from and to Microsoft Access by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 06/28/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-mongodb-atlas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb-atlas.md
@@ -1,14 +1,9 @@
Title: Copy data from MongoDB Atlas description: Learn how to copy data from MongoDB Atlas to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 09/28/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-mongodb-legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb-legacy.md
@@ -1,13 +1,9 @@
Title: Copy data from MongoDB using legacy description: Learn how to copy data from Mongo DB to supported sink data stores by using a copy activity in a legacy Azure Data Factory pipeline.- -- - Last updated 08/12/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb.md
@@ -1,14 +1,9 @@
Title: Copy data from MongoDB description: Learn how to copy data from Mongo DB to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 01/08/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mysql.md
@@ -1,13 +1,8 @@
Title: Copy data from MySQL using Azure Data Factory description: Learn about MySQL connector in Azure Data Factory that lets you copy data from a MySQL database to a data store supported as a sink.- -- - Last updated 09/09/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-netezza https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-netezza.md
@@ -1,20 +1,11 @@
Title: Copy data from Netezza by using Azure Data Factory description: Learn how to copy data from Netezza to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 05/28/2020 - # Copy data from Netezza by using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-odata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-odata.md
@@ -1,17 +1,11 @@
Title: Copy data from OData sources by using Azure Data Factory description: Learn how to copy data from OData sources to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 10/14/2020 - # Copy data from an OData source by using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-odbc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-odbc.md
@@ -1,20 +1,11 @@
Title: Copy data from and to ODBC data stores using Azure Data Factory description: Learn how to copy data from and to ODBC data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 04/22/2020 - # Copy data from and to ODBC data stores using Azure Data Factory > [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
@@ -195,12 +186,12 @@ To copy data to ODBC-compatible data store, set the sink type in the copy activi
| Property | Description | Required |
|: |: |: |
| type | The type property of the copy activity sink must be set to: **OdbcSink** | Yes |
-| writeBatchTimeout |Wait time for the batch insert operation to complete before it times out.<br/>Allowed values are: timespan. Example: ΓÇ£00:30:00ΓÇ¥ (30 minutes). |No |
+| writeBatchTimeout |Wait time for the batch insert operation to complete before it times out.<br/>Allowed values are: timespan. Example: "00:30:00" (30 minutes). |No |
| writeBatchSize |Inserts data into the SQL table when the buffer size reaches writeBatchSize.<br/>Allowed values are: integer (number of rows). |No (default is 0 - auto detected) |
| preCopyScript |Specify a SQL query for Copy Activity to execute before writing data into data store in each run. You can use this property to clean up the pre-loaded data. |No |
> [!NOTE]
-> For "writeBatchSize", if it's not set (auto-detected), copy activity first detects whether the driver supports batch operations, and set it to 10000 if it does, or set it to 1 if it doesnΓÇÖt. If you explicitly set the value other than 0, copy activity honors the value and fails at runtime if the driver doesnΓÇÖt support batch operations.
+> For "writeBatchSize", if it's not set (auto-detected), copy activity first detects whether the driver supports batch operations, and set it to 10000 if it does, or set it to 1 if it doesn't. If you explicitly set the value other than 0, copy activity honors the value and fails at runtime if the driver doesn't support batch operations.
**Example:**
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-office-365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-office-365.md
@@ -1,20 +1,11 @@
Title: Copy data from Office 365 using Azure Data Factory description: Learn how to copy data from Office 365 to supported sink data stores by using copy activity in an Azure Data Factory pipeline.- --- --- Last updated 10/20/2019 - # Copy data from Office 365 into Azure using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
@@ -47,7 +38,7 @@ To copy data from Office 365 into Azure, you need to complete the following prer
## Approving new data access requests
-If this is the first time you are requesting data for this context (a combination of which data table is being access, which destination account is the data being loaded into, and which user identity is making the data access request), you will see the copy activity status as "In Progress", and only when you click into ["Details" link under Actions](copy-activity-overview.md#monitoring) will you see the status as ΓÇ£RequestingConsentΓÇ¥. A member of the data access approver group needs to approve the request in the Privileged Access Management before the data extraction can proceed.
+If this is the first time you are requesting data for this context (a combination of which data table is being accessed, which destination account the data is being loaded into, and which user identity is making the data access request), you will see the copy activity status as "In Progress", and only when you click the ["Details" link under Actions](copy-activity-overview.md#monitoring) will you see the status as "RequestingConsent". A member of the data access approver group needs to approve the request in Privileged Access Management before the data extraction can proceed.
See [here](/graph/data-connect-tips#approve-pam-requests-via-office-365-admin-portal) for how the approver can approve the data access request, and [here](/graph/data-connect-pam) for an explanation of the overall integration with Privileged Access Management, including how to set up the data access approver group.
@@ -87,7 +78,7 @@ The following properties are supported for Office 365 linked service:
>[!NOTE] > The difference between **office365TenantId** and **servicePrincipalTenantId** and the corresponding value to provide: >- If you are an enterprise developer developing an application against Office 365 data for your own organization's usage, then you should supply the same tenant ID for both properties, which is your organization's AAD tenant ID.
->- If you are an ISV developer developing an application for your customers, then office365TenantId will be your customerΓÇÖs (application installer) AAD tenant ID and servicePrincipalTenantId will be your companyΓÇÖs AAD tenant ID.
+>- If you are an ISV developer developing an application for your customers, then office365TenantId will be your customer's (application installer) AAD tenant ID and servicePrincipalTenantId will be your company's AAD tenant ID.
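As an illustrative sketch of the ISV case, with placeholder values throughout (the property layout assumed here is the standard Office 365 linked service shape with service principal credentials):

```json
{
    "name": "Office365LinkedService",
    "properties": {
        "type": "Office365",
        "typeProperties": {
            "office365TenantId": "<customer (application installer) AAD tenant ID>",
            "servicePrincipalTenantId": "<your company's AAD tenant ID>",
            "servicePrincipalId": "<application (client) ID>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<application secret>"
            }
        }
    }
}
```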
**Example:**
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-oracle-eloqua https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle-eloqua.md
@@ -1,13 +1,9 @@
Title: Copy data from Oracle Eloqua (Preview) description: Learn how to copy data from Oracle Eloqua to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 08/01/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-oracle-responsys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle-responsys.md
@@ -1,13 +1,9 @@
Title: Copy data from Oracle Responsys (Preview) description: Learn how to copy data from Oracle Responsys to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 08/01/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-oracle-service-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle-service-cloud.md
@@ -1,13 +1,9 @@
Title: Copy data from Oracle Service Cloud (Preview) description: Learn how to copy data from Oracle Service Cloud to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 08/01/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-oracle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle.md
@@ -1,13 +1,8 @@
Title: Copy data to and from Oracle by using Azure Data Factory description: Learn how to copy data from supported source stores to an Oracle database, or from Oracle to supported sink stores, by using Data Factory.- -- - Last updated 09/28/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-overview.md
@@ -1,14 +1,11 @@
Title: Azure Data Factory connector overview description: Learn the supported connectors in Data Factory.- - Last updated 09/28/2020 - # Azure Data Factory connector overview
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-paypal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-paypal.md
@@ -1,20 +1,11 @@
Title: Copy data from PayPal using Azure Data Factory (Preview) description: Learn how to copy data from PayPal to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 08/01/2019 - # Copy data from PayPal using Azure Data Factory (Preview) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-phoenix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-phoenix.md
@@ -1,20 +1,11 @@
Title: Copy data from Phoenix using Azure Data Factory description: Learn how to copy data from Phoenix to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 09/04/2019 - # Copy data from Phoenix using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-postgresql.md
@@ -1,20 +1,12 @@
Title: Copy data From PostgreSQL using Azure Data Factory description: Learn how to copy data from PostgreSQL to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- --- Last updated 02/19/2020 - # Copy data from PostgreSQL by using Azure Data Factory > [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-presto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-presto.md
@@ -1,20 +1,11 @@
Title: Copy data from Presto using Azure Data Factory description: Learn how to copy data from Presto to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 12/18/2020 - # Copy data from Presto using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-quickbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-quickbooks.md
@@ -1,14 +1,9 @@
Title: Copy data from QuickBooks Online using Azure Data Factory (Preview) description: Learn how to copy data from QuickBooks Online to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 01/15/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-rest.md
@@ -1,13 +1,8 @@
Title: Copy data from and to a REST endpoint by using Azure Data Factory description: Learn how to copy data from a cloud or on-premises REST source to supported sink data stores, or from supported source data store to a REST sink by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 12/08/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-salesforce-marketing-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce-marketing-cloud.md
@@ -1,14 +1,9 @@
Title: Copy data from Salesforce Marketing Cloud description: Learn how to copy data from Salesforce Marketing Cloud to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 07/17/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-salesforce-service-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce-service-cloud.md
@@ -1,13 +1,9 @@
Title: Copy data from and to Salesforce Service Cloud description: Learn how to copy data from Salesforce Service Cloud to supported sink data stores or from supported source data stores to Salesforce Service Cloud by using a copy activity in a data factory pipeline.- -- - Last updated 02/02/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce.md
@@ -1,13 +1,9 @@
Title: Copy data from and to Salesforce description: Learn how to copy data from Salesforce to supported sink data stores or from supported source data stores to Salesforce by using a copy activity in a data factory pipeline.- -- - Last updated 02/02/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sap-business-warehouse-open-hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-business-warehouse-open-hub.md
@@ -1,14 +1,9 @@
Title: Copy data from SAP Business Warehouse via Open Hub description: Learn how to copy data from SAP Business Warehouse (BW) via Open Hub to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 02/02/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sap-business-warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-business-warehouse.md
@@ -1,14 +1,9 @@
Title: Copy data from SAP BW description: Learn how to copy data from SAP Business Warehouse to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 09/04/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sap-cloud-for-customer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-cloud-for-customer.md
@@ -1,14 +1,9 @@
Title: Copy data from/to SAP Cloud for Customer description: Learn how to copy data from SAP Cloud for Customer to supported sink data stores (or) from supported source data stores to SAP Cloud for Customer by using Data Factory.- -- - Last updated 02/02/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sap-ecc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-ecc.md
@@ -1,13 +1,9 @@
Title: Copy data from SAP ECC description: Learn how to copy data from SAP ECC to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 10/28/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sap-hana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-hana.md
@@ -1,13 +1,9 @@
Title: Copy data from SAP HANA description: Learn how to copy data from SAP HANA to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 04/22/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sap-table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-table.md
@@ -1,13 +1,9 @@
Title: Copy data from an SAP table description: Learn how to copy data from an SAP table to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 02/01/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-servicenow.md
@@ -1,13 +1,9 @@
Title: Copy data from ServiceNow description: Learn how to copy data from ServiceNow to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 08/01/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sftp.md
@@ -1,14 +1,9 @@
Title: Copy data from and to SFTP server description: Learn how to copy data from and to SFTP server by using Azure Data Factory.- -- - Last updated 08/28/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sharepoint-online-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sharepoint-online-list.md
@@ -1,18 +1,11 @@
Title: Copy data from SharePoint Online List by using Azure Data Factory description: Learn how to copy data from SharePoint Online List to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- - Last updated 05/19/2020 - # Copy data from SharePoint Online List by using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
@@ -192,7 +185,7 @@ When you copy data from SharePoint Online List, the following mappings are used
| Multiple lines of text | Edm.String | String |
| Choice (menu to choose from) | Edm.String | String |
| Number (1, 1.0, 100) | Edm.Double | Double |
-| Currency ($, ¥, €) | Edm.Double | Double |
+| Currency ($, ¥, &euro;) | Edm.Double | Double |
| Date and Time | Edm.DateTime | DateTime |
| Lookup (information already on this site) | Edm.Int32 | Int32 |
| Yes/No (check box) | Edm.Boolean | Boolean |
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-shopify https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-shopify.md
@@ -1,13 +1,9 @@
Title: Copy data from Shopify (Preview) description: Learn how to copy data from Shopify to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 08/01/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-snowflake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-snowflake.md
@@ -1,13 +1,9 @@
Title: Copy and transform data in Snowflake description: Learn how to copy and transform data in Snowflake by using Data Factory.- -- - Last updated 12/08/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-spark.md
@@ -1,13 +1,9 @@
Title: Copy data from Spark description: Learn how to copy data from Spark to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 09/04/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sql-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sql-server.md
@@ -1,14 +1,9 @@
Title: Copy data to and from SQL Server description: Learn about how to move data to and from SQL Server database that is on-premises or in an Azure VM by using Azure Data Factory.- -- - Last updated 12/18/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-square https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-square.md
@@ -1,14 +1,9 @@
Title: Copy data from Square (Preview) description: Learn how to copy data from Square to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 08/03/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-sybase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sybase.md
@@ -1,20 +1,11 @@
Title: Copy data from Sybase using Azure Data Factory description: Learn how to copy data from Sybase to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 06/10/2020 - # Copy data from Sybase using Azure Data Factory > [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-teradata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-teradata.md
@@ -1,13 +1,8 @@
Title: Copy data from Teradata Vantage by using Azure Data Factory description: The Teradata Connector of the Data Factory service lets you copy data from a Teradata Vantage to data stores supported by Data Factory as sinks. - -- - Last updated 01/22/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
@@ -1,13 +1,11 @@
Title: Troubleshoot Azure Data Factory connectors description: Learn how to troubleshoot connector issues in Azure Data Factory. - Last updated 02/08/2021 -
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-vertica https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-vertica.md
@@ -1,20 +1,11 @@
Title: Copy data from Vertica using Azure Data Factory description: Learn how to copy data from Vertica to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- --- Last updated 09/04/2019 - # Copy data from Vertica using Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-web-table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-web-table.md
@@ -1,20 +1,11 @@
Title: Copy data from Web Table using Azure Data Factory description: Learn about Web Table Connector of Azure Data Factory that lets you copy data from a web table to data stores supported by Data Factory as sinks. - --- --- Last updated 08/01/2019 - # Copy data from Web table by using Azure Data Factory > [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-xero https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-xero.md
@@ -1,17 +1,11 @@
Title: Copy data from Xero using Azure Data Factory description: Learn how to copy data from Xero to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- -- - Last updated 01/26/2021 - # Copy data from Xero using Azure Data Factory
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-zoho https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-zoho.md
@@ -1,18 +1,11 @@
Title: Copy data from Zoho using Azure Data Factory (Preview) description: Learn how to copy data from Zoho to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.- --- - Last updated 08/03/2020 - # Copy data from Zoho using Azure Data Factory (Preview) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment-improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment-improvements.md
@@ -1,14 +1,10 @@
Title: Automated publishing for continuous integration and delivery description: Learn how to publish for continuous integration and delivery automatically.- - - Last updated 02/02/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment.md
@@ -1,14 +1,10 @@
Title: Continuous integration and delivery in Azure Data Factory description: Learn how to use continuous integration and delivery to move Data Factory pipelines from one environment (development, test, production) to another.- - - Last updated 12/17/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-append-variable-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-append-variable-activity.md
@@ -1,14 +1,10 @@
Title: Append Variable Activity in Azure Data Factory description: Learn how to set the Append Variable activity to add a value to an existing array variable defined in a Data Factory pipeline- - - Last updated 10/09/2018
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-azure-function-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-azure-function-activity.md
@@ -1,14 +1,10 @@
Title: Azure Function Activity in Azure Data Factory description: Learn how to use the Azure Function activity to run an Azure Function in a Data Factory pipeline- - - Last updated 01/09/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-execute-data-flow-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-execute-data-flow-activity.md
@@ -1,11 +1,8 @@
Title: Data Flow activity description: How to execute data flows from inside a data factory pipeline. - - Last updated 01/03/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-execute-pipeline-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-execute-pipeline-activity.md
@@ -1,14 +1,10 @@
Title: Execute Pipeline Activity in Azure Data Factory description: Learn how you can use the Execute Pipeline Activity to invoke one Data Factory pipeline from another Data Factory pipeline.- - - Last updated 01/10/2018
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-expression-language-functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-expression-language-functions.md
@@ -1,13 +1,10 @@
Title: Expression and functions in Azure Data Factory description: This article provides information about expressions and functions that you can use in creating data factory entities.- - Last updated 11/25/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-filter-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-filter-activity.md
@@ -1,14 +1,10 @@
Title: Filter activity in Azure Data Factory description: The Filter activity filters the inputs. - - - Last updated 05/04/2018
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-for-each-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-for-each-activity.md
@@ -1,14 +1,10 @@
Title: ForEach activity in Azure Data Factory description: The For Each Activity defines a repeating control flow in your pipeline. It is used for iterating over a collection and execute specified activities.- - - Last updated 01/23/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-get-metadata-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-get-metadata-activity.md
@@ -1,14 +1,8 @@
Title: Get Metadata activity in Azure Data Factory description: Learn how to use the Get Metadata activity in a Data Factory pipeline.- -- - Last updated 09/23/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-if-condition-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-if-condition-activity.md
@@ -1,14 +1,10 @@
Title: If Condition activity in Azure Data Factory description: The If Condition activity allows you to control the processing flow based on a condition.- - - Last updated 01/10/2018
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-lookup-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-lookup-activity.md
@@ -1,14 +1,9 @@
Title: Lookup activity in Azure Data Factory description: Learn how to use Lookup activity to look up a value from an external source. This output can be further referenced by succeeding activities. - -- - Last updated 10/14/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-power-query-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-power-query-activity.md
@@ -1,11 +1,9 @@
Title: Power Query activity in Azure Data Factory description: Learn how to use the Power Query activity for data wrangling features in a Data Factory pipeline- - Last updated 01/18/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-set-variable-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-set-variable-activity.md
@@ -1,15 +1,11 @@
Title: Set Variable Activity in Azure Data Factory description: Learn how to use the Set Variable activity to set the value of an existing variable defined in a Data Factory pipeline- - Last updated 04/07/2020 - # Set Variable Activity in Azure Data Factory
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-switch-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-switch-activity.md
@@ -1,12 +1,10 @@
Title: Switch activity in Azure Data Factory description: The Switch activity allows you to control the processing flow based on a condition.- - Last updated 10/08/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-system-variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-system-variables.md
@@ -1,14 +1,10 @@
Title: System variables in Azure Data Factory description: This article describes system variables supported by Azure Data Factory. You can use these variables in expressions when defining Data Factory entities.- - - Last updated 06/12/2018
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-until-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-until-activity.md
@@ -1,14 +1,10 @@
Title: Until activity in Azure Data Factory description: The Until activity executes a set of activities in a loop until the condition associated with the activity evaluates to true or it times out. - - - Last updated 01/10/2018
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-validation-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-validation-activity.md
@@ -1,14 +1,10 @@
Title: Validation activity in Azure Data Factory description: The Validation activity does not continue execution of the pipeline until it validates the attached dataset with certain criteria the user specifies.- - - Last updated 03/25/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-wait-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-wait-activity.md
@@ -1,14 +1,9 @@
Title: Wait activity in Azure Data Factory description: The Wait activity pauses the execution of the pipeline for the specified period. - -- - Last updated 01/12/2018
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-web-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-web-activity.md
@@ -1,14 +1,9 @@
Title: Web Activity in Azure Data Factory description: Learn how you can use Web Activity, one of the control flow activities supported by Data Factory, to invoke a REST endpoint from a pipeline.- -- - Last updated 12/19/2018
data-factory https://docs.microsoft.com/en-us/azure/data-factory/control-flow-webhook-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-webhook-activity.md
@@ -1,14 +1,10 @@
Title: Webhook activity in Azure Data Factory description: The webhook activity doesn't continue execution of the pipeline until it validates the attached dataset with certain criteria the user specifies.- - - Last updated 03/25/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-data-consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-data-consistency.md
@@ -1,20 +1,11 @@
Title: Data consistency verification in copy activity description: 'Learn about how to enable data consistency verification in copy activity in Azure Data Factory.'- --- --- Last updated 3/27/2020 - # Data consistency verification in copy activity
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-fault-tolerance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-fault-tolerance.md
@@ -1,20 +1,11 @@
Title: Fault tolerance of copy activity in Azure Data Factory description: 'Learn about how to add fault tolerance to copy activity in Azure Data Factory by skipping the incompatible data.'- --- --- Last updated 06/22/2020 - # Fault tolerance of copy activity in Azure Data Factory > [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
@@ -57,13 +48,13 @@ When you copy binary files between storage stores, you can enable fault toleranc
"fileMissing": true, "fileForbidden": true, "dataInconsistency": true,
- "invalidFileName": true
+ "invalidFileName": true
}, "validateDataConsistency": true, "logSettings": { "enableCopyActivityLog": true, "copyActivityLogSettings": {
- "logLevel": "Warning",
+ "logLevel": "Warning",
"enableReliableLogging": false }, "logLocationSettings": {
@@ -71,7 +62,7 @@ When you copy binary files between storage stores, you can enable fault toleranc
"referenceName": "ADLSGen2", "type": "LinkedServiceReference" },
- "path": "sessionlog/"
+ "path": "sessionlog/"
} } }
@@ -116,7 +107,7 @@ You can get the number of files being read, written, and skipped via the output
"filesSkipped": 2, "throughput": 297, "logFilePath": "myfolder/a84bf8d4-233f-4216-8cb5-45962831cd1b/",
- "dataConsistencyVerification":
+ "dataConsistencyVerification":
{ "VerificationResult": "Verified", "InconsistentData": "Skipped"
@@ -185,7 +176,7 @@ The following example provides a JSON definition to configure skipping the incom
"logSettings": { "enableCopyActivityLog": true, "copyActivityLogSettings": {
- "logLevel": "Warning",
+ "logLevel": "Warning",
"enableReliableLogging": false }, "logLocationSettings": {
@@ -193,7 +184,7 @@ The following example provides a JSON definition to configure skipping the incom
"referenceName": "ADLSGen2", "type": "LinkedServiceReference" },
- "path": "sessionlog/"
+ "path": "sessionlog/"
} } },
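Taken together, the fragments above correspond to the fault-tolerance portion of a copy activity's `typeProperties`. The following is a minimal sketch that reassembles them into one block; the `skipErrorFile` wrapper around the four skip flags and the `BinarySource`/`BinarySink` types are assumptions based on the public copy activity schema rather than text quoted above, and the source and sink store settings are omitted for brevity.

```json
"typeProperties": {
    "source": { "type": "BinarySource" },
    "sink": { "type": "BinarySink" },
    "skipErrorFile": {
        "fileMissing": true,
        "fileForbidden": true,
        "dataInconsistency": true,
        "invalidFileName": true
    },
    "validateDataConsistency": true,
    "logSettings": {
        "enableCopyActivityLog": true,
        "copyActivityLogSettings": {
            "logLevel": "Warning",
            "enableReliableLogging": false
        },
        "logLocationSettings": {
            "linkedServiceName": {
                "referenceName": "ADLSGen2",
                "type": "LinkedServiceReference"
            },
            "path": "sessionlog/"
        }
    }
}
```

With `logLevel` set to `Warning`, only skipped files and rows are expected to be logged; `Info` should also record successful reads and writes, consistent with the session-log schema quoted further down.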
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-log.md
@@ -1,20 +1,11 @@
Title: Session log in copy activity description: 'Learn about how to enable session log in copy activity in Azure Data Factory.'- --- --- Last updated 11/11/2020 - # Session log in copy activity
@@ -61,7 +52,7 @@ The following example provides a JSON definition to enable session log in Copy A
"referenceName": "ADLSGen2", "type": "LinkedServiceReference" },
- "path": "sessionlog/"
+ "path": "sessionlog/"
} } }
@@ -91,7 +82,7 @@ After the copy activity runs completely, you can see the path of log files from
"filesSkipped": 2, "throughput": 297, "logFilePath": "myfolder/a84bf8d4-233f-4216-8cb5-45962831cd1b/",
- "dataConsistencyVerification":
+ "dataConsistencyVerification":
{ "VerificationResult": "Verified", "InconsistentData": "Skipped"
@@ -100,6 +91,9 @@ After the copy activity runs completely, you can see the path of log files from
```
+> [!NOTE]
+> When the `enableCopyActivityLog` property is set to `Enabled`, the log file names are system generated.
+ ### The schema of the log file The following is the schema of a log file.
@@ -107,8 +101,8 @@ The following is the schema of a log file.
Column | Description -- | -- Timestamp | The timestamp when ADF reads, writes, or skips the object.
-Level | The log level of this item. It can be 'Warning' or “Info”.
-OperationName | ADF copy activity operational behavior on each object. It can be ‘FileRead’,’ FileWrite’, 'FileSkip', or ‘TabularRowSkip’.
+Level | The log level of this item. It can be 'Warning' or "Info".
+OperationName | ADF copy activity operational behavior on each object. It can be 'FileRead',' FileWrite', 'FileSkip', or 'TabularRowSkip'.
OperationItem | The file names or skipped rows. Message | More information to show if the file has been read from source store, or written to the destination store. It can also be why the file or rows has being skipped.
@@ -156,7 +150,7 @@ select OperationItem from SessionLogDemo where OperationName='FileSkip'
select TIMESTAMP, OperationItem, Message from SessionLogDemo where OperationName='FileSkip' ``` -- Give me the list of files skipped due to the same reason: “blob file does not exist”.
+- Give me the list of files skipped due to the same reason: "blob file does not exist".
```sql select TIMESTAMP, OperationItem, Message from SessionLogDemo where OperationName='FileSkip' and Message like '%UserErrorSourceBlobNotExist%' ```
@@ -172,4 +166,4 @@ See the other Copy Activity articles:
- [Copy activity overview](copy-activity-overview.md) - [Copy activity fault tolerance](copy-activity-fault-tolerance.md)-- [Copy activity data consistency](copy-activity-data-consistency.md)
+- [Copy activity data consistency](copy-activity-data-consistency.md)
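The run-output fragments above assemble into a small JSON object along these lines; only the fields quoted above are shown. The full log location is the `path` configured under `logLocationSettings` combined with the reported `logFilePath`, which is an assumption consistent with the fragments rather than something stated verbatim.

```json
{
    "filesSkipped": 2,
    "throughput": 297,
    "logFilePath": "myfolder/a84bf8d4-233f-4216-8cb5-45962831cd1b/",
    "dataConsistencyVerification": {
        "VerificationResult": "Verified",
        "InconsistentData": "Skipped"
    }
}
```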
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-monitoring.md
@@ -1,17 +1,11 @@
Title: Monitor copy activity description: Learn about how to monitor the copy activity execution in Azure Data Factory. - -- - Last updated 08/06/2020 - # Monitor copy activity
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-overview.md
@@ -1,13 +1,8 @@
Title: Copy activity in Azure Data Factory description: Learn about the Copy activity in Azure Data Factory. You can use it to copy data from a supported source data store to a supported sink data store.- -- - Last updated 10/12/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-performance-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance-features.md
@@ -1,14 +1,9 @@
Title: Copy activity performance optimization features description: Learn about the key features that help you optimize the copy activity performance in Azure Data Factory。- -- - Last updated 09/24/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-performance-troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance-troubleshooting.md
@@ -1,14 +1,9 @@
Title: Troubleshoot copy activity performance description: Learn about how to troubleshoot copy activity performance in Azure Data Factory.- -- - Last updated 01/07/2021
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance.md
@@ -1,14 +1,9 @@
Title: Copy activity performance and scalability guide description: Learn about key factors that affect the performance of data movement in Azure Data Factory when you use the copy activity.- -- - Last updated 09/15/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-preserve-metadata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-preserve-metadata.md
@@ -1,13 +1,8 @@
Title: Preserve metadata and ACLs using copy activity in Azure Data Factory description: 'Learn about how to preserve metadata and ACLs during copy using copy activity in Azure Data Factory.'- -- - Last updated 09/23/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-schema-and-type-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-schema-and-type-mapping.md
@@ -1,18 +1,12 @@
Title: Schema and data type mapping in copy activity description: Learn about how copy activity in Azure Data Factory maps schemas and data types from source data to sink data.- -- - Last updated 06/22/2020 - # Schema and data type mapping in copy activity [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-clone-data-factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-clone-data-factory.md
@@ -1,13 +1,9 @@
Title: Copy or clone a data factory in Azure Data Factory description: Learn how to copy or clone a data factory in Azure Data Factory- - - Last updated 06/30/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/copy-data-tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-data-tool.md
@@ -1,18 +1,12 @@
Title: Copy Data tool Azure Data Factory description: 'Provides information about the Copy Data tool in Azure Data Factory UI'- -- - Last updated 06/17/2020 - # Copy Data tool in Azure Data Factory [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/create-azure-integration-runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-integration-runtime.md
@@ -1,15 +1,11 @@
Title: Create Azure integration runtime in Azure Data Factory description: Learn how to create Azure integration runtime in Azure Data Factory, which is used to copy data and dispatch transform activities. - - Last updated 06/09/2020 - # How to create and configure Azure Integration Runtime [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory https://docs.microsoft.com/en-us/azure/data-factory/create-azure-ssis-integration-runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-ssis-integration-runtime.md
@@ -1,16 +1,11 @@
Title: Create an Azure-SSIS integration runtime in Azure Data Factory description: Learn how to create an Azure-SSIS integration runtime in Azure Data Factory so you can deploy and run SSIS packages in Azure.- - Last updated 10/13/2020 -- # Create an Azure-SSIS integration runtime in Azure Data Factory
data-factory https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
@@ -1,16 +1,11 @@
Title: Create a self-hosted integration runtime description: Learn how to create a self-hosted integration runtime in Azure Data Factory, which lets data factories access data stores in a private network.- - -- Previously updated : 12/25/2020 Last updated : 02/10/2021 # Create and configure a self-hosted integration runtime
@@ -25,7 +20,6 @@ This article describes how you can create and configure a self-hosted IR.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] - ## Considerations for using a self-hosted IR - You can use a single self-hosted integration runtime for multiple on-premises data sources. You can also share it with another data factory within the same Azure Active Directory (Azure AD) tenant. For more information, see [Sharing a self-hosted integration runtime](./create-shared-self-hosted-integration-runtime-powershell.md).
@@ -37,7 +31,6 @@ This article describes how you can create and configure a self-hosted IR.
- Use the self-hosted integration runtime even if the data store is in the cloud on an Azure Infrastructure as a Service (IaaS) virtual machine. - Tasks might fail in a self-hosted integration runtime that you installed on a Windows server for which FIPS-compliant encryption is enabled. To work around this problem, you have two options: store credentials/secret values in an Azure Key Vault or disable FIPS-compliant encryption on the server. To disable FIPS-compliant encryption, change the following registry subkey's value from 1 (enabled) to 0 (disabled): `HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled`. If you use the [self-hosted integration runtime as a proxy for SSIS integration runtime](./self-hosted-integration-runtime-proxy-ssis.md), FIPS-compliant encryption can be enabled and will be used when moving data from on premises to Azure Blob Storage as a staging area. - ## Command flow and data flow When you move data between on-premises and the cloud, the activity uses a self-hosted integration runtime to transfer the data between an on-premises data source and the cloud.
@@ -46,32 +39,36 @@ Here is a high-level summary of the data-flow steps for copying with a self-host
![The high-level overview of data flow](media/create-self-hosted-integration-runtime/high-level-overview.png)
-1. A data developer creates a self-hosted integration runtime within an Azure data factory by using a PowerShell cmdlet. Currently, the Azure portal doesn't support this feature.
+1. A data developer creates a self-hosted integration runtime within an Azure data factory by using the Azure portal or the PowerShell cmdlet.
+ 2. The data developer creates a linked service for an on-premises data store. The developer does so by specifying the self-hosted integration runtime instance that the service should use to connect to data stores.+ 3. The self-hosted integration runtime node encrypts the credentials by using Windows Data Protection Application Programming Interface (DPAPI) and saves the credentials locally. If multiple nodes are set for high availability, the credentials are further synchronized across other nodes. Each node encrypts the credentials by using DPAPI and stores them locally. Credential synchronization is transparent to the data developer and is handled by the self-hosted IR.+ 4. Azure Data Factory communicates with the self-hosted integration runtime to schedule and manage jobs. Communication is via a control channel that uses a shared [Azure Relay](../azure-relay/relay-what-is-it.md#wcf-relay) connection. When an activity job needs to be run, Data Factory queues the request along with any credential information. It does so in case credentials aren't already stored on the self-hosted integration runtime. The self-hosted integration runtime starts the job after it polls the queue.
-5. The self-hosted integration runtime copies data between an on-premises store and cloud storage. The direction of the copy depends on how the copy activity is configured in the data pipeline. For this step, the self-hosted integration runtime directly communicates with cloud-based storage services like Azure Blob storage over a secure HTTPS channel.
+5. The self-hosted integration runtime copies data between an on-premises store and cloud storage. The direction of the copy depends on how the copy activity is configured in the data pipeline. For this step, the self-hosted integration runtime directly communicates with cloud-based storage services like Azure Blob storage over a secure HTTPS channel.
## Prerequisites - The supported versions of Windows are:
- + Windows 8.1
- + Windows 10
- + Windows Server 2012
- + Windows Server 2012 R2
- + Windows Server 2016
- + Windows Server 2019
-
+ - Windows 8.1
+ - Windows 10
+ - Windows Server 2012
+ - Windows Server 2012 R2
+ - Windows Server 2016
+ - Windows Server 2019
+ Installation of the self-hosted integration runtime on a domain controller isn't supported.-- Self-hosted integration runtime requires a 64-bit Operating System with .NET Framework 4.7.2 or above See [.NET Framework System Requirements](/dotnet/framework/get-started/system-requirements) for details.+
+- Self-hosted integration runtime requires a 64-bit Operating System with .NET Framework 4.7.2 or above. See [.NET Framework System Requirements](/dotnet/framework/get-started/system-requirements) for details.
- The recommended minimum configuration for the self-hosted integration runtime machine is a 2-GHz processor with 4 cores, 8 GB of RAM, and 80 GB of available hard drive space. For the details of system requirements, see [Download](https://www.microsoft.com/download/details.aspx?id=39717). - If the host machine hibernates, the self-hosted integration runtime doesn't respond to data requests. Configure an appropriate power plan on the computer before you install the self-hosted integration runtime. If the machine is configured to hibernate, the self-hosted integration runtime installer prompts with a message. - You must be an administrator on the machine to successfully install and configure the self-hosted integration runtime. - Copy-activity runs happen with a specific frequency. Processor and RAM usage on the machine follows the same pattern with peak and idle times. Resource usage also depends heavily on the amount of data that is moved. When multiple copy jobs are in progress, you see resource usage go up during peak times. - Tasks might fail during extraction of data in Parquet, ORC, or Avro formats. For more on Parquet, see [Parquet format in Azure Data Factory](./format-parquet.md#using-self-hosted-integration-runtime). File creation runs on the self-hosted integration machine. To work as expected, file creation requires the following prerequisites:
- - [Visual C++ 2010 Redistributable](https://download.microsoft.com/download/3/2/2/3224B87F-CFA0-4E70-BDA3-3DE650EFEBA5/vcredist_x64.exe) Package (x64)
- - Java Runtime (JRE) version 8 from a JRE provider such as [Adopt OpenJDK](https://adoptopenjdk.net/). Ensure that the `JAVA_HOME` environment variable is set.
+ - [Visual C++ 2010 Redistributable](https://download.microsoft.com/download/3/2/2/3224B87F-CFA0-4E70-BDA3-3DE650EFEBA5/vcredist_x64.exe) Package (x64)
+ - Java Runtime (JRE) version 8 from a JRE provider such as [Adopt OpenJDK](https://adoptopenjdk.net/). Ensure that the `JAVA_HOME` environment variable is set to the JRE folder (and not just the JDK folder).
## Setting up a self-hosted integration runtime
@@ -107,7 +104,7 @@ Use the following steps to create a self-hosted IR using Azure Data Factory UI.
![Create an integration runtime](media/doc-common-process/manage-new-integration-runtime.png)
-1. On the **Integration runtime setup** page, select **Azure, Self-Hosted**, and then select **Continue**.
+1. On the **Integration runtime setup** page, select **Azure, Self-Hosted**, and then select **Continue**.
1. On the following page, select **Self-Hosted** to create a Self-Hosted IR, and then select **Continue**. ![Create a selfhosted IR](media/create-self-hosted-integration-runtime/new-selfhosted-integration-runtime.png)
@@ -123,7 +120,7 @@ Use the following steps to create a self-hosted IR using Azure Data Factory UI.
1. Download the self-hosted integration runtime on a local Windows machine. Run the installer. 1. On the **Register Integration Runtime (Self-hosted)** page, paste the key you saved earlier, and select **Register**.
-
+ ![Register the integration runtime](media/create-self-hosted-integration-runtime/register-integration-runtime.png) 1. On the **New Integration Runtime (Self-hosted) Node** page, select **Finish**.
@@ -168,7 +165,6 @@ Here are details of the application's actions and arguments:
|`-toffau`,<br/>`-TurnOffAutoUpdate`||Turn off the self-hosted integration runtime auto-update.| |`-ssa`,<br/>`-SwitchServiceAccount`|"`<domain\user>`" ["`<password>`"]|Set DIAHostService to run as a new account. Use the empty password "" for system accounts and virtual accounts.| - ## Install and register a self-hosted IR from Microsoft Download Center 1. Go to the [Microsoft integration runtime download page](https://www.microsoft.com/download/details.aspx?id=39717).
@@ -194,6 +190,7 @@ Here are details of the application's actions and arguments:
3. Select **Register**. ## Service account for Self-hosted integration runtime+ The default log on service account of Self-hosted integration runtime is **NT SERVICE\DIAHostService**. You can see it in **Services -> Integration Runtime Service -> Properties -> Log on**. ![Service account for Self-hosted integration runtime](media/create-self-hosted-integration-runtime/shir-service-account.png)
@@ -204,21 +201,18 @@ Make sure the account has the permission of Log on as a service. Otherwise self-
![Screenshot of Log on as a service user rights assignment](media/create-self-hosted-integration-runtime/shir-service-account-permission-2.png) - ## Notification area icons and notifications If you move your cursor over the icon or message in the notification area, you can see details about the state of the self-hosted integration runtime. ![Notifications in the notification area](media/create-self-hosted-integration-runtime/system-tray-notifications.png) -- ## High availability and scalability You can associate a self-hosted integration runtime with multiple on-premises machines or virtual machines in Azure. These machines are called nodes. You can have up to four nodes associated with a self-hosted integration runtime. The benefits of having multiple nodes on on-premises machines that have a gateway installed for a logical gateway are:
-* Higher availability of the self-hosted integration runtime so that it's no longer the single point of failure in your big data solution or cloud data integration with Data Factory. This availability helps ensure continuity when you use up to four nodes.
-* Improved performance and throughput during data movement between on-premises and cloud data stores. Get more information on [performance comparisons](copy-activity-performance.md).
+- Higher availability of the self-hosted integration runtime so that it's no longer the single point of failure in your big data solution or cloud data integration with Data Factory. This availability helps ensure continuity when you use up to four nodes.
+- Improved performance and throughput during data movement between on-premises and cloud data stores. Get more information on [performance comparisons](copy-activity-performance.md).
You can associate multiple nodes by installing the self-hosted integration runtime software from [Download Center](https://www.microsoft.com/download/details.aspx?id=39717). Then, register it by using either of the authentication keys that were obtained from the **New-AzDataFactoryV2IntegrationRuntimeKey** cmdlet, as described in the [tutorial](tutorial-hybrid-copy-powershell.md).
@@ -261,7 +255,6 @@ Here are the requirements for the TLS/SSL certificate that you use to secure com
> > Data movement in transit from a self-hosted IR to other data stores always happens within an encrypted channel, regardless of whether or not this certificate is set. - ## Proxy server considerations If your corporate network environment uses a proxy server to access the internet, configure the self-hosted integration runtime to use appropriate proxy settings. You can set the proxy during the initial registration phase.
@@ -308,6 +301,7 @@ If you select the **Use system proxy** option for the HTTP proxy, the self-hoste
<defaultProxy useDefaultCredentials="true" /> </system.net> ```+ You can then add proxy server details as shown in the following example: ```xml
@@ -323,6 +317,7 @@ If you select the **Use system proxy** option for the HTTP proxy, the self-hoste
```xml <proxy autoDetect="true|false|unspecified" bypassonlocal="true|false|unspecified" proxyaddress="uriString" scriptLocation="uriString" usesystemdefault="true|false|unspecified "/> ```+ 1. Save the configuration file in its original location. Then restart the self-hosted integration runtime host service, which picks up the changes. To restart the service, use the services applet from Control Panel. Or from Integration Runtime Configuration Manager, select the **Stop Service** button, and then select **Start Service**.
@@ -338,13 +333,13 @@ You also need to make sure that Microsoft Azure is in your company's allow list.
If you see error messages like the following ones, the likely reason is improper configuration of the firewall or proxy server. Such configuration prevents the self-hosted integration runtime from connecting to Data Factory to authenticate itself. To ensure that your firewall and proxy server are properly configured, refer to the previous section.
-* When you try to register the self-hosted integration runtime, you receive the following error message: "Failed to register this Integration Runtime node! Confirm that the Authentication key is valid and the integration service host service is running on this machine."
-* When you open Integration Runtime Configuration Manager, you see a status of **Disconnected** or **Connecting**. When you view Windows event logs, under **Event Viewer** > **Application and Services Logs** > **Microsoft Integration Runtime**, you see error messages like this one:
+- When you try to register the self-hosted integration runtime, you receive the following error message: "Failed to register this Integration Runtime node! Confirm that the Authentication key is valid and the integration service host service is running on this machine."
+- When you open Integration Runtime Configuration Manager, you see a status of **Disconnected** or **Connecting**. When you view Windows event logs, under **Event Viewer** > **Application and Services Logs** > **Microsoft Integration Runtime**, you see error messages like this one:
- ```
- Unable to connect to the remote server
- A component of Integration Runtime has become unresponsive and restarts automatically. Component name: Integration Runtime (Self-hosted).
- ```
+ ```output
+ Unable to connect to the remote server
+ A component of Integration Runtime has become unresponsive and restarts automatically. Component name: Integration Runtime (Self-hosted).
+ ```
### Enable remote access from an intranet
@@ -356,13 +351,12 @@ When you run the self-hosted integration runtime setup version 3.3 or later, by
When you use a firewall from a partner or others, you can manually open port 8060 or the user-configured port. If you have a firewall problem while setting up the self-hosted integration runtime, use the following command to install the self-hosted integration runtime without configuring the firewall:
-```
+```cmd
msiexec /q /i IntegrationRuntime.msi NOFIREWALL=1 ``` If you choose not to open port 8060 on the self-hosted integration runtime machine, use mechanisms other than the Setting Credentials application to configure data-store credentials. For example, you can use the **New-AzDataFactoryV2LinkedServiceEncryptCredential** PowerShell cmdlet. - ## Ports and firewalls There are two firewalls to consider:
@@ -376,7 +370,6 @@ At the corporate firewall level, you need to configure the following domains and
[!INCLUDE [domain-and-outbound-port-requirements](./includes/domain-and-outbound-port-requirements-internal.md)] - At the Windows firewall level or machine level, these outbound ports are normally enabled. If they aren't, you can configure the domains and ports on a self-hosted integration runtime machine. > [!NOTE]
@@ -390,12 +383,14 @@ Based on your source and sinks, you might need to allow additional domains and o
For some cloud databases, such as Azure SQL Database and Azure Data Lake, you might need to allow IP addresses of self-hosted integration runtime machines on their firewall configuration. ### Get URL of Azure Relay
-One required domain and port that need to be put in the allow list of your firewall is for the communication to Azure Relay. Self-hosted integration runtime use it for interactive authoring such as test connection, browse folder list and table list, get schema, and preview data. If you don't want to allow **.servicebus.windows.net** and would like to have more specific URLs, then you can get all the FQDNs which is required by your self-hosted integration runtime from ADF portal.
+
+One required domain and port that needs to be put in the allow list of your firewall is for communication with Azure Relay. The self-hosted integration runtime uses it for interactive authoring such as test connection, browse folder list and table list, get schema, and preview data. If you don't want to allow **.servicebus.windows.net** and would like to have more specific URLs, then you can see all the FQDNs that are required by your self-hosted integration runtime in the ADF portal. Follow these steps:
+ 1. Go to ADF portal and select your self-hosted integration runtime. 2. In Edit page, select **Nodes**.
-3. Click **View Service URLs** to get all FQDNs.
+3. Select **View Service URLs** to get all FQDNs.
-![Azure Relay URLs](media/create-self-hosted-integration-runtime/Azure-relay-url.png)
+ ![Azure Relay URLs](media/create-self-hosted-integration-runtime/Azure-relay-url.png)
4. You can add these FQDNs in the allow list of firewall rules.
@@ -411,16 +406,13 @@ For example, to copy from an on-premises data store to a SQL Database sink or an
> [!NOTE] > If your firewall doesn't allow outbound port 1433, the self-hosted integration runtime can't access the SQL database directly. In this case, you can use a [staged copy](copy-activity-performance.md) to SQL Database and Azure Synapse Analytics. In this scenario, you require only HTTPS (port 443) for the data movement. - ## Installation best practices You can install the self-hosted integration runtime by downloading a Managed Identity setup package from [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=39717). See the article [Move data between on-premises and cloud](tutorial-hybrid-copy-powershell.md) for step-by-step instructions. - Configure a power plan on the host machine for the self-hosted integration runtime so that the machine doesn't hibernate. If the host machine hibernates, the self-hosted integration runtime goes offline. - Regularly back up the credentials associated with the self-hosted integration runtime.-- To automate self-hosted IR setup operations, please refer to [Set up an existing self hosted IR via PowerShell](#setting-up-a-self-hosted-integration-runtime). --
+- To automate self-hosted IR setup operations, refer to [Set up an existing self hosted IR via PowerShell](#setting-up-a-self-hosted-integration-runtime).
## Next steps
data-factory https://docs.microsoft.com/en-us/azure/data-factory/create-shared-self-hosted-integration-runtime-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
@@ -1,15 +1,10 @@
Title: Create a shared self-hosted integration runtime with PowerShell description: Learn how to create a shared self-hosted integration runtime in Azure Data Factory, so multiple data factories can access the integration runtime.- -- - Last updated 06/10/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-access-strategies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-access-strategies.md
@@ -1,11 +1,9 @@
Title: Data access strategies description: Azure Data Factory now supports Static IP address ranges.- - Last updated 05/28/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-factory-private-link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-private-link.md
@@ -2,13 +2,9 @@
Title: Azure Private Link for Azure Data Factory description: Learn about how Azure Private Link works in Azure Data Factory. - -- - Last updated 09/01/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-factory-service-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-service-identity.md
@@ -1,14 +1,8 @@
Title: Managed identity for Data Factory description: Learn about managed identity for Azure Data Factory. - -- -- Last updated 07/06/2020
@@ -126,8 +120,8 @@ PATCH https://management.azure.com/subscriptions/<subsID>/resourceGroups/<resour
"type": "Microsoft.DataFactory/factories", "location": "<region>", "identity": {
- "type": "SystemAssigned"
- }
+ "type": "SystemAssigned"
+ }
}] } ```
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-factory-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-troubleshoot-guide.md
@@ -1,13 +1,11 @@
Title: Troubleshoot Azure Data Factory | Microsoft Docs description: Learn how to troubleshoot external control activities in Azure Data Factory.- Last updated 12/30/2020 - # Troubleshoot Azure Data Factory
@@ -1016,7 +1014,7 @@ For more information, see [Getting started with Fiddler](https://docs.telerik.co
### Activity stuck issue
-When you observe that the activity is running much longer than your normal runs with barely no progress, it may happen to be stuck. You can try canceling it and retry to see if it helps. If it’s a copy activity, you can learn about the performance monitoring and troubleshooting from [Troubleshoot copy activity performance](copy-activity-performance-troubleshooting.md); if it’s a data flow, learn from [Mapping data flows performance](concepts-data-flow-performance.md) and tuning guide.
+When you observe that the activity is running much longer than your normal runs with barely any progress, it might be stuck. You can try canceling it and retrying to see if that helps. If it's a copy activity, you can learn about performance monitoring and troubleshooting from [Troubleshoot copy activity performance](copy-activity-performance-troubleshooting.md); if it's a data flow, learn from [Mapping data flows performance](concepts-data-flow-performance.md) and the tuning guide.
### Payload is too large
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-factory-ux-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-ux-troubleshoot-guide.md
@@ -1,7 +1,6 @@
Title: Troubleshoot Azure Data Factory UX description: Learn how to troubleshoot Azure Data Factory UX issues.-
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-new-branch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-new-branch.md
@@ -3,7 +3,6 @@ Title: Multiple branches in mapping data flow
description: Replicating data streams in mapping data flow with multiple branches -
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-source.md
@@ -3,7 +3,6 @@ Title: Source transformation in mapping data flow
description: Learn how to set up a source transformation in mapping data flow. -
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-transformation-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-transformation-overview.md
@@ -3,7 +3,6 @@ Title: Mapping data flow transformation overview
description: An overview of the different transformations available in mapping data flow - Last updated 10/27/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
@@ -1,7 +1,6 @@
Title: Troubleshoot mapping data flows description: Learn how to troubleshoot data flow issues in Azure Data Factory.-
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-union https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-union.md
@@ -3,7 +3,6 @@ Title: Union transformation in mapping data flow
description: Azure Data Factory mapping data flow New Branch Transformation -
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-flow-window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-window.md
@@ -3,7 +3,6 @@ Title: Window transformation in mapping data flow
description: Azure Data Factory mapping data flow Window Transformation -
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-migration-guidance-hdfs-azure-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-migration-guidance-hdfs-azure-storage.md
@@ -1,13 +1,9 @@
Title: Migrate data from an on-premises Hadoop cluster to Azure Storage description: Learn how to use Azure Data Factory to migrate data from on-premises Hadoop cluster to Azure Storage.- -- - Last updated 8/30/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-migration-guidance-netezza-azure-sqldw https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-migration-guidance-netezza-azure-sqldw.md
@@ -1,13 +1,9 @@
Title: Migrate data from an on-premises Netezza server to Azure description: Use Azure Data Factory to migrate data from an on-premises Netezza server to Azure.- -- - Last updated 12/09/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-migration-guidance-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-migration-guidance-overview.md
@@ -1,13 +1,9 @@
Title: Migrate data from data lake and data warehouse to Azure description: Use Azure Data Factory to migrate data from your data lake and data warehouse to Azure.- -- - Last updated 7/30/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-migration-guidance-s3-azure-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-migration-guidance-s3-azure-storage.md
@@ -1,13 +1,9 @@
Title: Migrate data from Amazon S3 to Azure Storage description: Use Azure Data Factory to migrate data from Amazon S3 to Azure Storage.- -- - Last updated 8/04/2019
data-factory https://docs.microsoft.com/en-us/azure/data-factory/data-movement-security-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-movement-security-considerations.md
@@ -1,13 +1,9 @@
Title: Security considerations description: Describes basic security infrastructure that data movement services in Azure Data Factory use to help secure your data. - -- - Last updated 05/26/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/delete-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/delete-activity.md
@@ -1,15 +1,9 @@
Title: Delete Activity in Azure Data Factory description: Learn how to delete files in various file stores with the Delete Activity in Azure Data Factory.- -- - Last updated 08/12/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/enable-aad-authentication-azure-ssis-ir https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/enable-aad-authentication-azure-ssis-ir.md
@@ -1,14 +1,11 @@
Title: Enable AAD for Azure SSIS Integration Runtime description: This article describes how to enable Azure Active Directory authentication with the managed identity for Azure Data Factory to create Azure-SSIS Integration Runtime.- - ms.devlang: powershell - Last updated 07/09/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/enable-customer-managed-key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/enable-customer-managed-key.md
@@ -1,9 +1,6 @@
Title: Encrypt Azure Data Factory with customer-managed key description: Enhance Data Factory security with Bring Your Own Key (BYOK)--
data-factory https://docs.microsoft.com/en-us/azure/data-factory/encrypt-credentials-self-hosted-integration-runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/encrypt-credentials-self-hosted-integration-runtime.md
@@ -1,20 +1,11 @@
Title: Encrypt credentials in Azure Data Factory description: Learn how to encrypt and store credentials for your on-premises data stores on a machine with self-hosted integration runtime. - --- --- Last updated 01/15/2018 - # Encrypt credentials for on-premises data stores in Azure Data Factory
@@ -34,17 +25,17 @@ Replace `<servername>`, `<databasename>`, `<username>`, and `<password>` with va
```json {
- "properties": {
- "type": "SqlServer",
- "typeProperties": {
- "connectionString": "Server=<servername>;Database=<databasename>;User ID=<username>;Password=<password>;Timeout=60"
- },
- "connectVia": {
- "type": "integrationRuntimeReference",
- "referenceName": "<integration runtime name>"
- },
- "name": "SqlServerLinkedService"
- }
+ "properties": {
+ "type": "SqlServer",
+ "typeProperties": {
+ "connectionString": "Server=<servername>;Database=<databasename>;User ID=<username>;Password=<password>;Timeout=60"
+ },
+ "connectVia": {
+ "type": "integrationRuntimeReference",
+ "referenceName": "<integration runtime name>"
+ },
+ "name": "SqlServerLinkedService"
+ }
} ```
data-factory https://docs.microsoft.com/en-us/azure/data-factory/format-avro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-avro.md
@@ -2,10 +2,7 @@
Title: Avro format in Azure Data Factory description: 'This topic describes how to deal with Avro format in Azure Data Factory.' -- - Last updated 09/15/2020
data-factory https://docs.microsoft.com/en-us/azure/data-factory/format-binary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-binary.md
@@ -2,10 +2,7 @@
Title: Binary format in Azure Data Factory description: 'This topic describes how to deal with Binary format in Azure Data Factory.' -- - Last updated 10/29/2020
@@ -82,8 +79,8 @@ Supported **binary read settings** under `formatSettings`:
| - | | -- | | type | The type of formatSettings must be set to **BinaryReadSettings**. | Yes | | compressionProperties | A group of properties on how to decompress data for a given compression codec. | No |
-| preserveZipFileNameAsFolder<br>(*under `compressionProperties`->`type` as `ZipDeflateReadSettings`*) | Applies when input dataset is configured with **ZipDeflate** compression. Indicates whether to preserve the source zip file name as folder structure during copy.<br>- When set to **true (default)**, Data Factory writes unzipped files to `<path specified in dataset>/<folder named as source zip file>/`.<br>- When set to **false**, Data Factory writes unzipped files directly to `<path specified in dataset>`. Make sure you don’t have duplicated file names in different source zip files to avoid racing or unexpected behavior. | No |
-| preserveCompressionFileNameAsFolder<br>(*under `compressionProperties`->`type` as `TarGZipReadSettings` or `TarReadSettings`*) | Applies when input dataset is configured with **TarGzip**/**Tar** compression. Indicates whether to preserve the source compressed file name as folder structure during copy.<br>- When set to **true (default)**, Data Factory writes decompressed files to `<path specified in dataset>/<folder named as source compressed file>/`. <br>- When set to **false**, Data Factory writes decompressed files directly to `<path specified in dataset>`. Make sure you don’t have duplicated file names in different source files to avoid racing or unexpected behavior. | No |
+| preserveZipFileNameAsFolder<br>(*under `compressionProperties`->`type` as `ZipDeflateReadSettings`*) | Applies when input dataset is configured with **ZipDeflate** compression. Indicates whether to preserve the source zip file name as folder structure during copy.<br>- When set to **true (default)**, Data Factory writes unzipped files to `<path specified in dataset>/<folder named as source zip file>/`.<br>- When set to **false**, Data Factory writes unzipped files directly to `<path specified in dataset>`. Make sure you don't have duplicated file names in different source zip files to avoid racing or unexpected behavior. | No |
+| preserveCompressionFileNameAsFolder<br>(*under `compressionProperties`->`type` as `TarGZipReadSettings` or `TarReadSettings`*) | Applies when input dataset is configured with **TarGzip**/**Tar** compression. Indicates whether to preserve the source compressed file name as folder structure during copy.<br>- When set to **true (default)**, Data Factory writes decompressed files to `<path specified in dataset>/<folder named as source compressed file>/`. <br>- When set to **false**, Data Factory writes decompressed files directly to `<path specified in dataset>`. Make sure you don't have duplicated file names in different source files to avoid racing or unexpected behavior. | No |
```json "activities": [
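The table above describes settings that sit under `formatSettings` on a Binary copy source. A minimal, hypothetical source sketch that puts them together is shown below; the `ZipDeflateReadSettings` choice assumes the input dataset uses ZipDeflate compression, and the store settings are omitted for brevity.

```json
"source": {
    "type": "BinarySource",
    "formatSettings": {
        "type": "BinaryReadSettings",
        "compressionProperties": {
            "type": "ZipDeflateReadSettings",
            "preserveZipFileNameAsFolder": false
        }
    }
}
```

Setting `preserveZipFileNameAsFolder` to `false` writes the unzipped files directly to the path specified in the dataset, as described in the table.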
data-factory https://docs.microsoft.com/en-us/azure/data-factory/format-common-data-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-common-data-model.md
@@ -3,7 +3,6 @@ Title: Common Data Model format
description: Transform data using the Common Data Model metadata system - Last updated 02/04/2021
@@ -86,24 +85,24 @@ When mapping data flow columns to entity properties in the Sink transformation,
``` source(output(
- ProductSizeId as integer,
- ProductColor as integer,
- CustomerId as string,
- Note as string,
- LastModifiedDate as timestamp
- ),
- allowSchemaDrift: true,
- validateSchema: false,
- entity: 'Product.cdm.json/Product',
- format: 'cdm',
- manifestType: 'manifest',
- manifestName: 'ProductManifest',
- entityPath: 'Product',
- corpusPath: 'Products',
- corpusStore: 'adlsgen2',
- adlsgen2_fileSystem: 'models',
- folderPath: 'ProductData',
- fileSystem: 'data') ~> CDMSource
+ ProductSizeId as integer,
+ ProductColor as integer,
+ CustomerId as string,
+ Note as string,
+ LastModifiedDate as timestamp
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ entity: 'Product.cdm.json/Product',
+ format: 'cdm',
+ manifestType: 'manifest',
+ manifestName: 'ProductManifest',
+ entityPath: 'Product',
+ corpusPath: 'Products',
+ corpusStore: 'adlsgen2',
+ adlsgen2_fileSystem: 'models',
+ folderPath: 'ProductData',
+ fileSystem: 'data') ~> CDMSource
``` ### Sink properties
@@ -135,21 +134,21 @@ The associated data flow script is:
``` CDMSource sink(allowSchemaDrift: true,
- validateSchema: false,
- entity: 'Product.cdm.json/Product',
- format: 'cdm',
- entityPath: 'ProductSize',
- manifestName: 'ProductSizeManifest',
- corpusPath: 'Products',
- partitionPath: 'adf',
- folderPath: 'ProductSizeData',
- fileSystem: 'cdm',
- subformat: 'parquet',
- corpusStore: 'adlsgen2',
- adlsgen2_fileSystem: 'models',
- truncate: true,
- skipDuplicateMapInputs: true,
- skipDuplicateMapOutputs: true) ~> CDMSink
+ validateSchema: false,
+ entity: 'Product.cdm.json/Product',
+ format: 'cdm',
+ entityPath: 'ProductSize',
+ manifestName: 'ProductSizeManifest',
+ corpusPath: 'Products',
+ partitionPath: 'adf',
+ folderPath: 'ProductSizeData',
+ fileSystem: 'cdm',
+ subformat: 'parquet',
+ corpusStore: 'adlsgen2',
+ adlsgen2_fileSystem: 'models',
+ truncate: true,
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> CDMSink
```
data-factory https://docs.microsoft.com/en-us/azure/data-factory/format-delimited-text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-delimited-text.md
@@ -2,10 +2,7 @@
Title: Delimited text format in Azure Data Factory description: 'This topic describes how to deal with delimited text format in Azure Data Factory.' -- - Last updated 12/07/2020
@@ -28,13 +25,13 @@ For a full list of sections and properties available for defining datasets, see
| type | The type property of the dataset must be set to **DelimitedText**. | Yes | | location | Location settings of the file(s). Each file-based connector has its own location type and supported properties under `location`. | Yes | | columnDelimiter | The character(s) used to separate columns in a file. <br>The default value is **comma `,`**. When the column delimiter is defined as empty string, which means no delimiter, the whole line is taken as a single column.<br>Currently, column delimiter as empty string or multi-char is only supported for mapping data flow but not Copy activity. | No |
-| rowDelimiter | The single character or "\r\n" used to separate rows in a file. <br>The default value is any of the following values **on read: ["\r\n", "\r", "\n"]**, and **"\n" or “\r\n” on write** by mapping data flow and Copy activity respectively. <br>When the row delimiter is set to no delimiter (empty string), the column delimiter must be set as no delimiter (empty string) as well, which means to treat the entire content as a single value.<br>Currently, row delimiter as empty string is only supported for mapping data flow but not Copy activity. | No |
+| rowDelimiter | The single character or "\r\n" used to separate rows in a file. <br>The default value is any of the following values **on read: ["\r\n", "\r", "\n"]**, and **"\n" or "\r\n" on write** by mapping data flow and Copy activity respectively. <br>When the row delimiter is set to no delimiter (empty string), the column delimiter must be set as no delimiter (empty string) as well, which means to treat the entire content as a single value.<br>Currently, row delimiter as empty string is only supported for mapping data flow but not Copy activity. | No |
| quoteChar | The single character to quote column values if it contains column delimiter. <br>The default value is **double quotes** `"`. <br>When `quoteChar` is defined as empty string, it means there is no quote char and column value is not quoted, and `escapeChar` is used to escape the column delimiter and itself. | No |
-| escapeChar | The single character to escape quotes inside a quoted value.<br>The default value is **backslash `\`**. <br>When `escapeChar` is defined as empty string, the `quoteChar` must be set as empty string as well, in which case make sure all column values don’t contain delimiters. | No |
+| escapeChar | The single character to escape quotes inside a quoted value.<br>The default value is **backslash `\`**. <br>When `escapeChar` is defined as empty string, the `quoteChar` must be set as empty string as well, in which case make sure all column values don't contain delimiters. | No |
| firstRowAsHeader | Specifies whether to treat/make the first row as a header line with names of columns.<br>Allowed values are **true** and **false** (default).<br>When first row as header is false, note UI data preview and lookup activity output auto generate column names as Prop_{n} (starting from 0), copy activity requires [explicit mapping](copy-activity-schema-and-type-mapping.md#explicit-mapping) from source to sink and locates columns by ordinal (starting from 1), and mapping data flow lists and locates columns with name as Column_{n} (starting from 1). | No | | nullValue | Specifies the string representation of null value. <br>The default value is **empty string**. | No |
-| encodingName | The encoding type used to read/write test files. <br>Allowed values are as follows: "UTF-8", "UTF-16", "UTF-16BE", "UTF-32", "UTF-32BE", "US-ASCII", “UTF-7”, "BIG5", "EUC-JP", "EUC-KR", "GB2312", "GB18030", "JOHAB", "SHIFT-JIS", "CP875", "CP866", "IBM00858", "IBM037", "IBM273", "IBM437", "IBM500", "IBM737", "IBM775", "IBM850", "IBM852", "IBM855", "IBM857", "IBM860", "IBM861", "IBM863", "IBM864", "IBM865", "IBM869", "IBM870", "IBM01140", "IBM01141", "IBM01142", "IBM01143", "IBM01144", "IBM01145", "IBM01146", "IBM01147", "IBM01148", "IBM01149", "ISO-2022-JP", "ISO-2022-KR", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-13", "ISO-8859-15", "WINDOWS-874", "WINDOWS-1250", "WINDOWS-1251", "WINDOWS-1252", "WINDOWS-1253", "WINDOWS-1254", "WINDOWS-1255", "WINDOWS-1256", "WINDOWS-1257", "WINDOWS-1258”.<br>Note mapping data flow doesn’t support UTF-7 encoding. | No |
-| compressionCodec | The compression codec used to read/write text files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**, **TarGzip**, **Tar**, **snappy**, or **lz4**. Default is not compressed. <br>**Note** currently Copy activity doesn’t support "snappy" & "lz4", and mapping data flow doesn’t support "ZipDeflate", "TarGzip" and "Tar". <br>**Note** when using copy activity to decompress **ZipDeflate**/**TarGzip**/**Tar** file(s) and write to file-based sink data store, by default files are extracted to the folder:`<path specified in dataset>/<folder named as source compressed file>/`, use `preserveZipFileNameAsFolder`/`preserveCompressionFileNameAsFolder` on [copy activity source](#delimited-text-as-source) to control whether to preserve the name of the compressed file(s) as folder structure. | No |
+| encodingName | The encoding type used to read/write test files. <br>Allowed values are as follows: "UTF-8", "UTF-16", "UTF-16BE", "UTF-32", "UTF-32BE", "US-ASCII", "UTF-7", "BIG5", "EUC-JP", "EUC-KR", "GB2312", "GB18030", "JOHAB", "SHIFT-JIS", "CP875", "CP866", "IBM00858", "IBM037", "IBM273", "IBM437", "IBM500", "IBM737", "IBM775", "IBM850", "IBM852", "IBM855", "IBM857", "IBM860", "IBM861", "IBM863", "IBM864", "IBM865", "IBM869", "IBM870", "IBM01140", "IBM01141", "IBM01142", "IBM01143", "IBM01144", "IBM01145", "IBM01146", "IBM01147", "IBM01148", "IBM01149", "ISO-2022-JP", "ISO-2022-KR", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-13", "ISO-8859-15", "WINDOWS-874", "WINDOWS-1250", "WINDOWS-1251", "WINDOWS-1252", "WINDOWS-1253", "WINDOWS-1254", "WINDOWS-1255", "WINDOWS-1256", "WINDOWS-1257", "WINDOWS-1258".<br>Note mapping data flow doesn't support UTF-7 encoding. | No |
+| compressionCodec | The compression codec used to read/write text files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**, **TarGzip**, **Tar**, **snappy**, or **lz4**. Default is not compressed. <br>**Note** currently Copy activity doesn't support "snappy" & "lz4", and mapping data flow doesn't support "ZipDeflate", "TarGzip" and "Tar". <br>**Note** when using copy activity to decompress **ZipDeflate**/**TarGzip**/**Tar** file(s) and write to file-based sink data store, by default files are extracted to the folder:`<path specified in dataset>/<folder named as source compressed file>/`, use `preserveZipFileNameAsFolder`/`preserveCompressionFileNameAsFolder` on [copy activity source](#delimited-text-as-source) to control whether to preserve the name of the compressed file(s) as folder structure. | No |
| compressionLevel | The compression ratio. <br>Allowed values are **Optimal** or **Fastest**.<br>- **Fastest:** The compression operation should complete as quickly as possible, even if the resulting file is not optimally compressed.<br>- **Optimal**: The compression operation should be optimally compressed, even if the operation takes a longer time to complete. For more information, see [Compression Level](/dotnet/api/system.io.compression.compressionlevel) topic. | No | Below is an example of delimited text dataset on Azure Blob Storage:
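A rough sketch of such a dataset, built only from the properties in the table above, might look like the following; the linked service name, the `AzureBlobStorageLocation` location block, and the file path are illustrative placeholders rather than values quoted above.

```json
{
    "name": "DelimitedTextDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "<Azure Blob Storage linked service name>",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "<container>",
                "folderPath": "<folder path>",
                "fileName": "<file name>"
            },
            "columnDelimiter": ",",
            "quoteChar": "\"",
            "escapeChar": "\\",
            "firstRowAsHeader": true,
            "compressionCodec": "gzip"
        }
    }
}
```

Properties left out of the sketch (such as `rowDelimiter`, `encodingName`, and `nullValue`) fall back to the defaults listed in the table.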
@@ -86,8 +83,8 @@ Supported **delimited text read settings** under `formatSettings`:
| type | The type of formatSettings must be set to **DelimitedTextReadSettings**. | Yes | | skipLineCount | Indicates the number of **non-empty** rows to skip when reading data from input files. <br>If both skipLineCount and firstRowAsHeader are specified, the lines are skipped first and then the header information is read from the input file. | No | | compressionProperties | A group of properties on how to decompress data for a given compression codec. | No |
-| preserveZipFileNameAsFolder<br>(*under `compressionProperties`->`type` as `ZipDeflateReadSettings`*) | Applies when input dataset is configured with **ZipDeflate** compression. Indicates whether to preserve the source zip file name as folder structure during copy.<br>- When set to **true (default)**, Data Factory writes unzipped files to `<path specified in dataset>/<folder named as source zip file>/`.<br>- When set to **false**, Data Factory writes unzipped files directly to `<path specified in dataset>`. Make sure you don’t have duplicated file names in different source zip files to avoid racing or unexpected behavior. | No |
-| preserveCompressionFileNameAsFolder<br>(*under `compressionProperties`->`type` as `TarGZipReadSettings` or `TarReadSettings`*) | Applies when input dataset is configured with **TarGzip**/**Tar** compression. Indicates whether to preserve the source compressed file name as folder structure during copy.<br>- When set to **true (default)**, Data Factory writes decompressed files to `<path specified in dataset>/<folder named as source compressed file>/`. <br>- When set to **false**, Data Factory writes decompressed files directly to `<path specified in dataset>`. Make sure you don’t have duplicated file names in different source files to avoid racing or unexpected behavior. | No |
+| preserveZipFileNameAsFolder<br>(*under `compressionProperties`->`type` as `ZipDeflateReadSettings`*) | Applies when input dataset is configured with **ZipDeflate** compression. Indicates whether to preserve the source zip file name as folder structure during copy.<br>- When set to **true (default)**, Data Factory writes unzipped files to `<path specified in dataset>/<folder named as source zip file>/`.<br>- When set to **false**, Data Factory writes unzipped files directly to `<path specified in dataset>`. Make sure you don't have duplicated file names in different source zip files to avoid racing or unexpected behavior. | No |
+| preserveCompressionFileNameAsFolder<br>(*under `compressionProperties`->`type` as `TarGZipReadSettings` or `TarReadSettings`*) | Applies when input dataset is configured with **TarGzip**/**Tar** compression. Indicates whether to preserve the source compressed file name as folder structure during copy.<br>- When set to **true (default)**, Data Factory writes decompressed files to `<path specified in dataset>/<folder named as source compressed file>/`. <br>- When set to **false**, Data Factory writes decompressed files directly to `<path specified in dataset>`. Make sure you don't have duplicated file names in different source files to avoid racing or unexpected behavior. | No |
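For illustration, a copy activity that uses these read settings to ingest **ZipDeflate**-compressed delimited text might look like the following sketch; the activity name, dataset references, and Azure Blob Storage store settings are assumptions, and the sink is kept minimal:

```json
"activities": [
    {
        "name": "CopyFromCompressedDelimitedText",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<delimited text input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<delimited text output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "DelimitedTextSource",
                "storeSettings": {
                    "type": "AzureBlobStorageReadSettings",
                    "recursive": true
                },
                "formatSettings": {
                    "type": "DelimitedTextReadSettings",
                    "skipLineCount": 3,
                    "compressionProperties": {
                        "type": "ZipDeflateReadSettings",
                        "preserveZipFileNameAsFolder": false
                    }
                }
            },
            "sink": {
                "type": "DelimitedTextSink",
                "storeSettings": {
                    "type": "AzureBlobStorageWriteSettings"
                },
                "formatSettings": {
                    "type": "DelimitedTextWriteSettings",
                    "fileExtension": ".txt"
                }
            }
        }
    }
]
```

With `preserveZipFileNameAsFolder` set to `false`, the unzipped files are written directly to the path specified in the dataset rather than to a per-zip subfolder.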
```json "activities": [
@@ -165,10 +162,10 @@ The associated data flow script is:
``` source(
- allowSchemaDrift: true,
- validateSchema: false,
- multiLineRow: true,
- wildcardPaths:['*.csv']) ~> CSVSource
+ allowSchemaDrift: true,
+ validateSchema: false,
+ multiLineRow: true,
+ wildcardPaths:['*.csv']) ~> CSVSource
``` > [!NOTE]
@@ -194,10 +191,10 @@ The associated data flow script is:
``` CSVSource sink(allowSchemaDrift: true,
- validateSchema: false,
- truncate: true,
- skipDuplicateMapInputs: true,
- skipDuplicateMapOutputs: true) ~> CSVSink
+ validateSchema: false,
+ truncate: true,
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> CSVSink
``` ## Next steps
data-factory https://docs.microsoft.com/en-us/azure/data-factory/format-delta https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-delta.md
@@ -3,11 +3,9 @@ Title: Delta format in Azure Data Factory
description: Transform and move data from a delta lake using the delta format - Last updated 12/07/2020 - # Delta format in Azure Data Factory
data-factory https://docs.microsoft.com/en-us/azure/data-factory/format-excel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-excel.md
@@ -2,10 +2,7 @@
Title: Excel format in Azure Data Factory description: 'This topic describes how to deal with Excel format in Azure Data Factory.' -- - Last updated 12/08/2020
@@ -34,7 +31,7 @@ For a full list of sections and properties available for defining datasets, see
| firstRowAsHeader | Specifies whether to treat the first row in the given worksheet/range as a header line with names of columns.<br>Allowed values are **true** and **false** (default). | No | | nullValue | Specifies the string representation of null value. <br>The default value is **empty string**. | No | | compression | Group of properties to configure file compression. Configure this section when you want to do compression/decompression during activity execution. | No |
-| type<br/>(*under `compression`*) | The compression codec used to read/write JSON files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**, **TarGzip**, **Tar**, **snappy**, or **lz4**. Default is not compressed.<br>**Note** currently Copy activity doesnΓÇÖt support "snappy" & "lz4", and mapping data flow doesnΓÇÖt support "ZipDeflate", "TarGzip" and "Tar".<br>**Note** when using copy activity to decompress **ZipDeflate** file(s) and write to file-based sink data store, files are extracted to the folder: `<path specified in dataset>/<folder named as source zip file>/`. | No. |
+| type<br/>(*under `compression`*) | The compression codec used to read/write Excel files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**, **TarGzip**, **Tar**, **snappy**, or **lz4**. Default is not compressed.<br>**Note** currently Copy activity doesn't support "snappy" & "lz4", and mapping data flow doesn't support "ZipDeflate", "TarGzip" and "Tar".<br>**Note** when using copy activity to decompress **ZipDeflate** file(s) and write to file-based sink data store, files are extracted to the folder: `<path specified in dataset>/<folder named as source zip file>/`. | No |
| level<br/>(*under `compression`*) | The compression ratio. <br>Allowed values are **Optimal** or **Fastest**.<br>- **Fastest:** The compression operation should complete as quickly as possible, even if the resulting file is not optimally compressed.<br>- **Optimal**: The compression operation should be optimally compressed, even if the operation takes a longer time to complete. For more information, see the [Compression Level](/dotnet/api/system.io.compression.compressionlevel) topic. | No | Below is an example of an Excel dataset on Azure Blob Storage:
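A minimal sketch follows; the dataset name, linked service reference, container, file name, and worksheet name are placeholders:

```json
{
    "name": "ExcelDataset",
    "properties": {
        "type": "Excel",
        "linkedServiceName": {
            "referenceName": "<Azure Blob Storage linked service name>",
            "type": "LinkedServiceReference"
        },
        "schema": [],
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "<container name>",
                "folderPath": "<folder path>",
                "fileName": "<file name>.xlsx"
            },
            "sheetName": "<worksheet name>",
            "range": "A3:H5",
            "firstRowAsHeader": true
        }
    }
}
```

The `range` value is optional; when it is omitted, the whole worksheet is read.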
@@ -124,8 +121,8 @@ The associated data flow script is:
``` source(allowSchemaDrift: true,
- validateSchema: false,
- wildcardPaths:['*.xls']) ~> ExcelSource
+ validateSchema: false,
+ wildcardPaths:['*.xls']) ~> ExcelSource
``` If you use inline dataset, you see the following source options in mapping data flow.
@@ -136,13 +133,13 @@ The associated data flow script is:
``` source(allowSchemaDrift: true,
- validateSchema: false,
- format: 'excel',
- fileSystem: 'container',
- folderPath: 'path',
- fileName: 'sample.xls',
- sheetName: 'worksheet',
- firstRowAsHeader: true) ~> ExcelSourceInlineDataset
+ validateSchema: false,
+ format: 'excel',
+ fileSystem: 'container',
+ folderPath: 'path',
+ fileName: 'sample.xls',
+ sheetName: 'worksheet',
+ firstRowAsHeader: true) ~> ExcelSourceInlineDataset
``` ## Next steps
data-factory https://docs.microsoft.com/en-us/azure/data-factory/format-json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-json.md
@@ -2,10 +2,7 @@
Title: JSON format in Azure Data Factory description: 'This topic describes how to deal with JSON format in Azure Data Factory.' -- - Last updated 10/29/2020
@@ -29,7 +26,7 @@ For a full list of sections and properties available for defining datasets, see
| location | Location settings of the file(s). Each file-based connector has its own location type and supported properties under `location`. **See details in connector article -> Dataset properties section**. | Yes | | encodingName | The encoding type used to read/write text files. <br>Allowed values are as follows: "UTF-8", "UTF-16", "UTF-16BE", "UTF-32", "UTF-32BE", "US-ASCII", "UTF-7", "BIG5", "EUC-JP", "EUC-KR", "GB2312", "GB18030", "JOHAB", "SHIFT-JIS", "CP875", "CP866", "IBM00858", "IBM037", "IBM273", "IBM437", "IBM500", "IBM737", "IBM775", "IBM850", "IBM852", "IBM855", "IBM857", "IBM860", "IBM861", "IBM863", "IBM864", "IBM865", "IBM869", "IBM870", "IBM01140", "IBM01141", "IBM01142", "IBM01143", "IBM01144", "IBM01145", "IBM01146", "IBM01147", "IBM01148", "IBM01149", "ISO-2022-JP", "ISO-2022-KR", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-13", "ISO-8859-15", "WINDOWS-874", "WINDOWS-1250", "WINDOWS-1251", "WINDOWS-1252", "WINDOWS-1253", "WINDOWS-1254", "WINDOWS-1255", "WINDOWS-1256", "WINDOWS-1257", "WINDOWS-1258".| No | | compression | Group of properties to configure file compression. Configure this section when you want to do compression/decompression during activity execution. | No |
-| type<br/>(*under `compression`*) | The compression codec used to read/write JSON files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**, **TarGzip**, **Tar**, **snappy**, or **lz4**. Default is not compressed.<br>**Note** currently Copy activity doesnΓÇÖt support "snappy" & "lz4", and mapping data flow doesnΓÇÖt support "ZipDeflate"", "TarGzip" and "Tar".<br>**Note** when using copy activity to decompress **ZipDeflate**/**TarGzip**/**Tar** file(s) and write to file-based sink data store, by default files are extracted to the folder:`<path specified in dataset>/<folder named as source compressed file>/`, use `preserveZipFileNameAsFolder`/`preserveCompressionFileNameAsFolder` on [copy activity source](#json-as-source) to control whether to preserve the name of the compressed file(s) as folder structure.| No. |
+| type<br/>(*under `compression`*) | The compression codec used to read/write JSON files. <br>Allowed values are **bzip2**, **gzip**, **deflate**, **ZipDeflate**, **TarGzip**, **Tar**, **snappy**, or **lz4**. Default is not compressed.<br>**Note** currently Copy activity doesn't support "snappy" & "lz4", and mapping data flow doesn't support "ZipDeflate", "TarGzip" and "Tar".<br>**Note** when using copy activity to decompress **ZipDeflate**/**TarGzip**/**Tar** file(s) and write to file-based sink data store, by default files are extracted to the folder:`<path specified in dataset>/<folder named as source compressed file>/`, use `preserveZipFileNameAsFolder`/`preserveCompressionFileNameAsFolder` on [copy activity source](#json-as-source) to control whether to preserve the name of the compressed file(s) as folder structure.| No |
| level<br/>(*under `compression`*) | The compression ratio. <br>Allowed values are **Optimal** or **Fastest**.<br>- **Fastest:** The compression operation should complete as quickly as possible, even if the resulting file is not optimally compressed.<br>- **Optimal**: The compression operation should be optimally compressed, even if the operation takes a longer time to complete. For more information, see the [Compression Level](/dotnet/api/system.io.compression.compressionlevel) topic. | No | Below is an example of a JSON dataset on Azure Blob Storage:
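A minimal sketch follows; the dataset name, linked service reference, container, and folder path are placeholders, and gzip decompression is enabled through the `compression` group described above:

```json
{
    "name": "JSONDataset",
    "properties": {
        "type": "Json",
        "linkedServiceName": {
            "referenceName": "<Azure Blob Storage linked service name>",
            "type": "LinkedServiceReference"
        },
        "schema": {},
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "<container name>",
                "folderPath": "<folder path>"
            },
            "compression": {
                "type": "gzip"
            }
        }
    }
}
```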
@@ -80,8 +77,8 @@ Supported **JSON read settings** under `formatSettings`:
| - | | -- | | type | The type of formatSettings must be set to **JsonReadSettings**. | Yes | | compressionProperties | A group of properties on how to decompress data for a given compression codec. | No |
-| preserveZipFileNameAsFolder<br>(*under `compressionProperties`->`type` as `ZipDeflateReadSettings`*) | Applies when input dataset is configured with **ZipDeflate** compression. Indicates whether to preserve the source zip file name as folder structure during copy.<br>- When set to **true (default)**, Data Factory writes unzipped files to `<path specified in dataset>/<folder named as source zip file>/`.<br>- When set to **false**, Data Factory writes unzipped files directly to `<path specified in dataset>`. Make sure you donΓÇÖt have duplicated file names in different source zip files to avoid racing or unexpected behavior. | No |
-| preserveCompressionFileNameAsFolder<br>(*under `compressionProperties`->`type` as `TarGZipReadSettings` or `TarReadSettings`*) | Applies when input dataset is configured with **TarGzip**/**Tar** compression. Indicates whether to preserve the source compressed file name as folder structure during copy.<br>- When set to **true (default)**, Data Factory writes decompressed files to `<path specified in dataset>/<folder named as source compressed file>/`. <br>- When set to **false**, Data Factory writes decompressed files directly to `<path specified in dataset>`. Make sure you donΓÇÖt have duplicated file names in different source files to avoid racing or unexpected behavior. | No |
+| preserveZipFileNameAsFolder<br>(*under `compressionProperties`->`type` as `ZipDeflateReadSettings`*) | Applies when input dataset is configured with **ZipDeflate** compression. Indicates whether to preserve the source zip file name as folder structure during copy.<br>- When set to **true (default)**, Data Factory writes unzipped files to `<path specified in dataset>/<folder named as source zip file>/`.<br>- When set to **false**, Data Factory writes unzipped files directly to `<path specified in dataset>`. Make sure you don't have duplicated file names in different source zip files to avoid racing or unexpected behavior. | No |
+| preserveCompressionFileNameAsFolder<br>(*under `compressionProperties`->`type` as `TarGZipReadSettings` or `TarReadSettings`*) | Applies when input dataset is configured with **TarGzip**/**Tar** compression. Indicates whether to preserve the source compressed file name as folder structure during copy.<br>- When set to **true (default)**, Data Factory writes decompressed files to `<path specified in dataset>/<folder named as source compressed file>/`. <br>- When set to **false**, Data Factory writes decompressed files directly to `<path specified in dataset>`. Make sure you don't have duplicated file names in different source files to avoid racing or unexpected behavior. | No |
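For illustration, a copy activity source fragment that combines these read settings with **TarGzip** decompression might look like the following sketch; the Azure Blob Storage store settings are an assumption, and the surrounding activity definition is omitted:

```json
"source": {
    "type": "JsonSource",
    "storeSettings": {
        "type": "AzureBlobStorageReadSettings",
        "recursive": true
    },
    "formatSettings": {
        "type": "JsonReadSettings",
        "compressionProperties": {
            "type": "TarGZipReadSettings",
            "preserveCompressionFileNameAsFolder": false
        }
    }
}
```

With `preserveCompressionFileNameAsFolder` set to `false`, the decompressed files are written directly to the path specified in the dataset rather than to a per-archive subfolder.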
### JSON as sink
@@ -320,10 +317,10 @@ To manually add a JSON structure, add a new column and enter the expression in t
``` @(
- field1=0,
- field2=@(
- field1=0
- )
+ field1=0,
+ field2=@(
+ field1=0
+ )
) ```
@@ -331,38 +328,38 @@ If this expression were entered for a column named "complexColumn", then it woul
``` {
- "complexColumn": {
- "field1": 0,
- "field2": {
- "field1": 0
- }
- }
+ "complexColumn": {
+ "field1": 0,
+ "field2": {
+ "field1": 0
+ }
+ }
} ``` #### Sample manual script for complete hierarchical definition ``` @(
- title=Title,
- firstName=FirstName,
- middleName=MiddleName,
- lastName=LastName,
- suffix=Suffix,
- contactDetails=@(
- email=EmailAddress,
- phone=Phone
- ),
- address=@(
- line1=AddressLine1,
- line2=AddressLine2,
- city=City,
- state=StateProvince,
- country=CountryRegion,
- postCode=PostalCode
- ),
- ids=[
- toString(CustomerID), toString(AddressID), rowguid
- ]
+ title=Title,
+ firstName=FirstName,
+ middleName=MiddleName,
+ lastName=LastName,
+ suffix=Suffix,
+ contactDetails=@(
+ email=EmailAddress,
+ phone=Phone
+ ),
+ address=@(
+ line1=AddressLine1,
+ line2=AddressLine2,
+ city=City,
+ state=StateProvince,
+ country=CountryRegion,
+ postCode=PostalCode
+ ),
+ ids=[
+ toString(CustomerID), toString(AddressID), rowguid
+ ]
) ```
data-factory https://docs.microsoft.com/en-us/azure/data-factory/format-orc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-orc.md
@@ -2,10 +2,7 @@
Title: ORC format in Azure Data Factory description: 'This topic describes how to deal with ORC format in Azure Data Factory.' -- - Last updated 09/28/2020
@@ -117,9 +114,9 @@ The associated data flow script of an ORC source configuration is:
``` source(allowSchemaDrift: true,
- validateSchema: false,
- rowUrlColumn: 'fileName',
- format: 'orc') ~> OrcSource
+ validateSchema: false,
+ rowUrlColumn: 'fileName',
+ format: 'orc') ~> OrcSource
``` ### Sink properties
@@ -140,13 +137,13 @@ The associated data flow script of an ORC sink configuration is:
``` OrcSource sink(
- format: 'orc',
- filePattern:'output[n].orc',
- truncate: true,
+ format: 'orc',
+ filePattern:'output[n].orc',
+ truncate: true,
allowSchemaDrift: true,
- validateSchema: false,
- skipDuplicateMapInputs: true,
- skipDuplicateMapOutputs: true) ~> OrcSink
+ validateSchema: false,
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> OrcSink
``` ## Using Self-hosted Integration Runtime
data-factory https://docs.microsoft.com/en-us/azure/data-factory/format-parquet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-parquet.md
@@ -2,10 +2,7 @@
Title: Parquet format in Azure Data Factory description: 'This topic describes how to deal with Parquet format in Azure Data Factory.' -- - Last updated 09/27/2020
@@ -26,7 +23,7 @@ For a full list of sections and properties available for defining datasets, see
| - | | -- | | type | The type property of the dataset must be set to **Parquet**. | Yes | | location | Location settings of the file(s). Each file-based connector has its own location type and supported properties under `location`. **See details in connector article -> Dataset properties section**. | Yes |
-| compressionCodec | The compression codec to use when writing to Parquet files. When reading from Parquet files, Data Factory automatically determines the compression codec based on the file metadata.<br>Supported types are "**none**", "**gzip**", "**snappy**" (default), and "**lzo**". Note currently Copy