Updates from: 05/05/2021 03:09:31
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/authorization-code-flow.md
Previously updated : 03/10/2021 Last updated : 05/04/2021
grant_type=authorization_code&client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6&sco
| client_id |Required |The application ID assigned to your app in the [Azure portal](https://portal.azure.com).| | client_secret | Yes, in Web Apps | The application secret that was generated in the [Azure portal](https://portal.azure.com/). Client secrets are used in this flow for Web App scenarios, where the client can securely store a client secret. For Native App (public client) scenarios, client secrets cannot be securely stored, and therefore are not used in this call. If you use a client secret, rotate it periodically. | | grant_type |Required |The type of grant. For the authorization code flow, the grant type must be `authorization_code`. |
-| scope |Recommended |A space-separated list of scopes. A single scope value indicates to Azure AD both of the permissions that are being requested. Using the client ID as the scope indicates that your app needs an access token that can be used against your own service or web API, represented by the same client ID. The `offline_access` scope indicates that your app needs a refresh token for long-lived access to resources. You also can use the `openid` scope to request an ID token from Azure AD B2C. |
+| scope |Required |A space-separated list of scopes. A single scope value indicates to Azure AD both of the permissions that are being requested. Using the client ID as the scope indicates that your app needs an access token that can be used against your own service or web API, represented by the same client ID. The `offline_access` scope indicates that your app needs a refresh token for long-lived access to resources. You also can use the `openid` scope to request an ID token from Azure AD B2C. |
| code |Required |The authorization code that you acquired in the first leg of the flow. | | redirect_uri |Required |The redirect URI of the application where you received the authorization code. | | code_verifier | Recommended | The same code_verifier that was used to obtain the authorization_code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
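Taken together, the parameters above form the POST body of the token request. A minimal Python sketch of how such a body could be assembled (the client ID is the sample value used in this article; the code and redirect URI are placeholders):

```python
from urllib.parse import urlencode

# Placeholder values for illustration; substitute your own app registration details.
params = {
    "grant_type": "authorization_code",
    "client_id": "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6",
    "scope": "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 offline_access openid",
    "code": "AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHS",  # from the first leg
    "redirect_uri": "https://contoso.com/auth/callback",
}
body = urlencode(params)  # form-urlencoded body for the POST to the /token endpoint
```

The resulting string matches the shape of the `grant_type=authorization_code&client_id=...` request shown earlier.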
A successful token response looks like this:
"refresh_token": "AAQfQmvuDy8WtUv-sd0TBwWVQs1rC-Lfxa_NDkLqpg50Cxp5Dxj0VPF1mx2Z...", } ```+ | Parameter | Description | | | | | not_before |The time at which the token is considered valid, in epoch time. |
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-developer-notes.md
Previously updated : 04/30/2021 Last updated : 05/04/2021
Azure Active Directory B2C [user flows and custom policies](user-flow-overview.m
| [Profile editing flow](add-profile-editing-policy.md) | GA | GA | | | [Self-Service password reset](add-password-reset-policy.md) | GA| GA| | | [Force password reset](force-password-reset.md) | Preview | NA | |
-| [phone sign-up and sign-in](phone-authentication-user-flows.md) | GA | GA | |
+| [Phone sign-up and sign-in](phone-authentication-user-flows.md) | GA | GA | |
+
+## OAuth 2.0 application authorization flows
+
+The following table summarizes the OAuth 2.0 and OpenID Connect application authentication flows that can be integrated with Azure AD B2C.
+
+|Feature |User flow |Custom policy |Notes |
+||::|::||
+[Authorization code](authorization-code-flow.md) | GA | GA | Allows users to sign in to web applications. The web application receives an authorization code. The authorization code is redeemed to acquire a token to call web APIs.|
+[Authorization code with PKCE](authorization-code-flow.md)| GA | GA | Allows users to sign in to mobile and single-page applications. The application receives an authorization code using proof key for code exchange (PKCE). The authorization code is redeemed to acquire a token to call web APIs. |
+[Client credentials grant](https://tools.ietf.org/html/rfc6749#section-4.4)| GA | GA | Allows access to web-hosted resources by using the identity of an application. Commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user. <br /> <br /> To use this feature in an Azure AD B2C tenant, use the Azure AD endpoint of your Azure AD B2C tenant. For more information, see [OAuth 2.0 client credentials flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). This flow doesn't use your Azure AD B2C [user flow or custom policy](user-flow-overview.md) settings. |
+[Device authorization grant](https://tools.ietf.org/html/rfc8628)| NA | NA | Allows users to sign in to input-constrained devices such as a smart TV, IoT device, or printer. |
+[Implicit flow](implicit-flow-single-page-application.md) | GA | GA | Allows users to sign in to single-page applications. The app gets tokens directly without performing a back-end server credential exchange.|
+[On-behalf-of](../active-directory/develop/v2-oauth2-on-behalf-of-flow.md)| NA | NA | An application invokes a service or web API, which in turn needs to call another service or web API. <br /> <br /> For the middle-tier service to make authenticated requests to the downstream service, pass a *client credential* token in the authorization header. Optionally, you can include a custom header with the Azure AD B2C user's token. |
+[OpenID Connect](openid-connect.md) | GA | GA | OpenID Connect introduces the concept of an ID token, which is a security token that allows the client to verify the identity of the user. |
+[OpenID Connect hybrid flow](openid-connect.md) | GA | GA | Allows a web application to retrieve an ID token on the authorize request along with an authorization code. |
+[Resource owner password credentials (ROPC)](add-ropc-policy.md) | Preview | Preview | Allows a mobile application to sign in the user by directly handling their password. |
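The PKCE variant referenced in the table pairs a random `code_verifier` with a derived `code_challenge`, as defined in RFC 7636. A minimal Python sketch of the S256 derivation (illustrative only):

```python
import base64
import hashlib
import secrets

# Generate a high-entropy code_verifier (43 base64url characters from 32 random bytes).
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

# Derive the S256 code_challenge: base64url(SHA-256(code_verifier)), unpadded.
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# code_challenge (with code_challenge_method=S256) goes on the /authorize request;
# the original code_verifier is sent later when redeeming the code at /token.
```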
+
+### OAuth 2.0 options
+
+|Feature |User flow |Custom policy |Notes |
+||::|::||
+| [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider) | GA | GA | Query string parameter `domain_hint`. |
+| [Prepopulate the sign-in name](direct-signin.md#prepopulate-the-sign-in-name) | GA | GA | Query string parameter `login_hint`. |
+| Insert JSON into user journey via `client_assertion`| NA| Deprecated | |
+| Insert JSON into user journey as [id_token_hint](id-token-hint.md) | NA | GA | |
+| [Pass identity provider token to the application](idp-pass-through-user-flow.md)| Preview| Preview| For example, from Facebook to app. |
+
+## SAML2 application authentication flows
+
+The following table summarizes the Security Assertion Markup Language (SAML) application authentication flows that can be integrated with Azure AD B2C.
+
+|Feature |User flow |Custom policy |Notes |
+||::|::||
+[SP initiated](saml-service-provider.md) | NA | GA | POST and Redirect bindings. |
+[IDP initiated](saml-service-provider-options.md#identity-provider-initiated-flow) | NA | GA | Where the initiating identity provider is Azure AD B2C. |
## User experience customization
Azure Active Directory B2C [user flows and custom policies](user-flow-overview.m
| [Disable email verification](disable-email-verification.md) | GA| GA| Not recommended for production environments. Disabling email verification in the sign-up process may lead to spam. |
-## Protocols and authorization flows
-
-|Feature |User flow |Custom policy |Notes |
-||::|::||
-|[OAuth2 authorization code](authorization-code-flow.md) | GA | GA |
-|[OAuth2 authorization code with PKCE](authorization-code-flow.md)| GA | GA | Public clients and single-page applications. |
-|[OAuth2 implicit flow](implicit-flow-single-page-application.md) | GA | GA | |
-|[OAuth2 resource owner password credentials](add-ropc-policy.md) | Preview | Preview | |
-|OAuth1 | NA | NA | Not supported. |
-|[OpenId Connect](openid-connect.md) | GA | GA | |
-|[SAML2](saml-service-provider.md) | NA | GA | POST and Redirect bindings. |
-| WSFED | NA | NA | Not supported. |
## Identity providers
Azure Active Directory B2C [user flows and custom policies](user-flow-overview.m
|[Secure with OAuth2 bearer authentication](secure-rest-api.md#oauth2-bearer-authentication) | NA | GA | | |[Secure API key authentication](secure-rest-api.md#api-key-authentication) | NA | GA | |
-### Application and Azure AD B2C integration
-
-|Feature |User flow |Custom policy |Notes |
-||::|::||
-| [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider) | GA | GA | Query string parameter `domain_hint`. |
-| [Prepopulate the sign-in name](direct-signin.md#prepopulate-the-sign-in-name) | GA | GA | Query string parameter `login_hint`. |
-| Insert JSON into user journey via `client_assertion`| NA| Deprecated | |
-| Insert JSON into user journey as [id_token_hint](id-token-hint.md) | NA | GA | |
-| [Pass identity provider token to the application](idp-pass-through-user-flow.md)| Preview| Preview| For example, from Facebook to app. |
- ## Custom policy features
active-directory-b2c String Transformations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/string-transformations.md
Compare two claims, and throw an exception if they are not equal according to th
| InputClaim | inputClaim2 | string | Second claim's type, which is to be compared. | | InputParameter | stringComparison | string | string comparison, one of the values: Ordinal, OrdinalIgnoreCase. |
-The **AssertStringClaimsAreEqual** claims transformation is always executed from a [validation technical profile](validation-technical-profile.md) that is called by a [self-asserted technical profile](self-asserted-technical-profile.md), or a [DisplayConrtol](display-controls.md). The `UserMessageIfClaimsTransformationStringsAreNotEqual` metadata of a self-asserted technical profile controls the error message that is presented to the user. The error messages can be [localized](localization-string-ids.md#claims-transformations-error-messages).
+The **AssertStringClaimsAreEqual** claims transformation is always executed from a [validation technical profile](validation-technical-profile.md) that is called by a [self-asserted technical profile](self-asserted-technical-profile.md), or a [DisplayControl](display-controls.md). The `UserMessageIfClaimsTransformationStringsAreNotEqual` metadata of a self-asserted technical profile controls the error message that is presented to the user. The error messages can be [localized](localization-string-ids.md#claims-transformations-error-messages).
![AssertStringClaimsAreEqual execution](./media/string-transformations/assert-execution.png)
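The two `stringComparison` modes behave like ordinal and case-insensitive equality. A hypothetical Python sketch of the transformation's semantics (not the actual policy engine):

```python
def assert_strings_equal(claim1: str, claim2: str, string_comparison: str) -> None:
    """Illustrates AssertStringClaimsAreEqual: raise if the claims are not equal."""
    if string_comparison == "Ordinal":
        equal = claim1 == claim2
    elif string_comparison == "OrdinalIgnoreCase":
        equal = claim1.casefold() == claim2.casefold()
    else:
        raise ValueError("stringComparison must be Ordinal or OrdinalIgnoreCase")
    if not equal:
        # In a real policy, the UserMessageIfClaimsTransformationStringsAreNotEqual
        # metadata controls the message shown to the user.
        raise AssertionError("Claims are not equal")
```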
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 04/05/2021 Last updated : 05/04/2021
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
+## April 2021
+
+### New articles
+
+- [Set up sign-up and sign-in with an eBay account using Azure Active Directory B2C](identity-provider-ebay.md)
+- [Clean up resources and delete the tenant](tutorial-delete-tenant.md)
+- [Define a Conditional Access technical profile in an Azure Active Directory B2C custom policy](conditional-access-technical-profile.md)
+- [Manage your Azure Active Directory B2C tenant](tenant-management.md)
+
+### Updated articles
+
+- [Developer notes for Azure Active Directory B2C](custom-policy-developer-notes.md)
+- [Add an API connector to a sign-up user flow](add-api-connector.md)
+- [Walkthrough: Add REST API claims exchanges to custom policies in Azure Active Directory B2C](custom-policy-rest-api-claims-exchange.md)
+- [Secure your API Connector](secure-rest-api.md)
+- [Use API connectors to customize and extend sign-up user flows](api-connectors-overview.md)
+- [Technical and feature overview of Azure Active Directory B2C](technical-overview.md)
+- [Overview of policy keys in Azure Active Directory B2C](policy-keys-overview.md)
+- [Custom email verification with Mailjet](custom-email-mailjet.md)
+- [Custom email verification with SendGrid](custom-email-sendgrid.md)
+- [Tutorial: Create user flows in Azure Active Directory B2C](tutorial-create-user-flows.md)
+- [Azure AD B2C custom policy overview](custom-policy-overview.md)
+- [User flows and custom policies overview](user-flow-overview.md)
+- [Tutorial: Enable authentication in a single-page application with Azure AD B2C](tutorial-single-page-app.md)
+- [Set up phone sign-up and sign-in for user flows](phone-authentication-user-flows.md)
+- [Enable multi-factor authentication in Azure Active Directory B2C](multi-factor-authentication.md)
+- [User flow versions in Azure Active Directory B2C](user-flow-versions.md)
## March 2021 ### New articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Azure Active Directory B2C code samples](code-samples.md) - [Track user behavior in Azure AD B2C by using Application Insights](analytics-with-application-insights.md) - [Configure session behavior in Azure Active Directory B2C](session-behavior.md)-
-## January 2021
-
-### New articles
--- [Customize the user interface in Azure Active Directory B2C](customize-ui.md)-- [Azure Active Directory B2C service limits and restrictions](service-limits.md)-- [Set up sign-up and sign-in with an Azure AD B2C account from another Azure AD B2C tenant](identity-provider-azure-ad-b2c.md)-- [Set up the local account identity provider](identity-provider-local.md)-- [Set up a sign-in flow in Azure Active Directory B2C](add-sign-in-policy.md)-
-### Updated articles
--- [Track user behavior in Azure Active Directory B2C using Application Insights](analytics-with-application-insights.md)-- [TechnicalProfiles](technicalprofiles.md)-- [Customize the user interface with HTML templates in Azure Active Directory B2C](customize-ui-with-html.md)-- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md)-- [Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs.md)-- [Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C](identity-provider-salesforce-saml.md)-- [Tutorial: Register a web application in Azure Active Directory B2C](tutorial-register-applications.md)-- [Set up sign-up and sign-in with an Amazon account using Azure Active Directory B2C](identity-provider-amazon.md)-- [Set up sign-up and sign-in with an Azure AD B2C account from another Azure AD B2C tenant](identity-provider-azure-ad-b2c.md)-- [Set up sign-in for multi-tenant Azure Active Directory using custom policies in Azure Active Directory B2C](identity-provider-azure-ad-multi-tenant.md)-- [Set up sign-in for a specific Azure Active Directory organization in Azure Active Directory B2C](identity-provider-azure-ad-single-tenant.md)-- [Set up sign-up and sign-in with a Facebook account using Azure Active Directory B2C](identity-provider-facebook.md)-- [Set up sign-up and sign-in with a GitHub account using Azure Active Directory B2C](identity-provider-github.md)-- [Set up sign-up and sign-in with a Google account using Azure Active Directory B2C](identity-provider-google.md)-- [Set up sign-up and sign-in with a ID.me account using Azure Active Directory B2C](identity-provider-id-me.md)-- [Set up sign-up and sign-in with a LinkedIn account using Azure Active Directory B2C](identity-provider-linkedin.md)-- [Set up sign-up and sign-in with a Microsoft account using Azure Active Directory B2C](identity-provider-microsoft-account.md)-- [Set up sign-up and 
sign-in with a QQ account using Azure Active Directory B2C](identity-provider-qq.md)-- [Set up sign-up and sign-in with a Salesforce account using Azure Active Directory B2C](identity-provider-salesforce.md)-- [Set up sign-up and sign-in with a Twitter account using Azure Active Directory B2C](identity-provider-twitter.md)-- [Set up sign-up and sign-in with a WeChat account using Azure Active Directory B2C](identity-provider-wechat.md)-- [Set up sign-up and sign-in with a Weibo account using Azure Active Directory B2C](identity-provider-weibo.md)-- [Azure AD B2C custom policy overview](custom-policy-overview.md)--
-## December 2020
-
-### New articles
--- [Create a user flow in Azure Active Directory B2C](add-sign-up-and-sign-in-policy.md)-- [Set up phone sign-up and sign-in for user flows (preview)](phone-authentication-user-flows.md)-
-### Updated articles
--- [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)-- [Azure Active Directory B2C code samples](code-samples.md)-- [Page layout versions](page-layout.md)-
-## November 2020
-
-### Updated articles
-- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md)-- [Tutorial: Enable authentication in a single-page application with Azure AD B2C](tutorial-single-page-app.md)--
-## October 2020
-
-### New articles
-- [Add an API connector to a sign-up user flow (preview)](add-api-connector.md)-- [Tutorial: Register a single-page application (SPA) in Azure Active Directory B2C](tutorial-register-spa.md)-- [Use API connectors to customize and extend sign-up user flows](api-connectors-overview.md)-- [SubJourneys](subjourneys.md)-
-### Updated articles
-- [Define a SAML identity provider technical profile in an Azure Active Directory B2C custom policy](saml-identity-provider-technical-profile.md)-- [Add an API connector to a sign-up user flow (preview)](add-api-connector.md)-- [Azure Active Directory B2C code samples](code-samples.md)-- [Application types that can be used in Active Directory B2C](application-types.md)-- [OAuth 2.0 authorization code flow in Azure Active Directory B2C](authorization-code-flow.md)-- [Tutorial: Register a web application in Azure Active Directory B2C](tutorial-register-applications.md)-
-## September 2020
-
-### New articles
-- [Overview of policy keys in Azure Active Directory B2C](policy-keys-overview.md)--
-### Updated articles
-- [Set redirect URLs to b2clogin.com for Azure Active Directory B2C](b2clogin.md)-- [Define an OpenID Connect technical profile in an Azure Active Directory B2C custom policy](openid-connect-technical-profile.md)-- [Set up phone sign-up and sign-in with custom policies in Azure AD B2C](phone-authentication-user-flows.md)--
-## August 2020
-
-### Updated articles
-- [Page layout versions](page-layout.md)-- [Billing model for Azure Active Directory B2C](billing.md)
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/secure-your-domain.md
-# Disable weak ciphers and password hash synchronization to secure an Azure Active Directory Domain Services managed domain
+# Harden an Azure Active Directory Domain Services managed domain
By default, Azure Active Directory Domain Services (Azure AD DS) enables the use of ciphers and protocols such as NTLM v1 and TLS v1. These may be required for some legacy applications, but are considered weak and can be disabled if you don't need them. If you have on-premises hybrid connectivity using Azure AD Connect, you can also disable the synchronization of NTLM password hashes.
-This article shows you how to disable NTLM v1 and TLS v1 ciphers and disable NTLM password hash synchronization.
+This article shows you how to harden a managed domain by using settings such as:
+
+- Disable NTLM v1 and TLS v1 ciphers
+- Disable NTLM password hash synchronization
+- Disable the ability to change passwords with RC4 encryption
+- Enable Kerberos armoring
## Prerequisites
To complete this article, you need the following resources:
* An Azure Active Directory Domain Services managed domain enabled and configured in your Azure AD tenant. * If needed, [create and configure an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance].
-## Use Security settings to disable weak ciphers and NTLM password hash sync
+## Use Security settings to harden your domain
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select **Azure AD Domain Services**.
To complete this article, you need the following resources:
- **TLS 1.2 only mode** - **NTLM authentication** - **NTLM password synchronization from on-premises**
+ - **RC4 encryption**
+ - **Kerberos armoring**
![Screenshot of Security settings to disable weak ciphers and NTLM password hash sync](media/secure-your-domain/security-settings.png)
-## Use PowerShell to disable weak ciphers and NTLM password hash sync
+## Use PowerShell to harden your domain
If needed, [install and configure Azure PowerShell](/powershell/azure/install-az-ps). Make sure that you sign in to your Azure subscription using the [Connect-AzAccount][Connect-AzAccount] cmdlet.
Next, define *DomainSecuritySettings* to configure the following security option
> Users and service accounts can't perform LDAP simple binds if you disable NTLM password hash synchronization in the Azure AD DS managed domain. If you need to perform LDAP simple binds, don't set the *"SyncNtlmPasswords"="Disabled";* security configuration option in the following command. ```powershell
-$securitySettings = @{"DomainSecuritySettings"=@{"NtlmV1"="Disabled";"SyncNtlmPasswords"="Disabled";"TlsV1"="Disabled"}}
+$securitySettings = @{"DomainSecuritySettings"=@{"NtlmV1"="Disabled";"SyncNtlmPasswords"="Disabled";"TlsV1"="Disabled";"KerberosRc4Encryption"="Disabled";"KerberosArmoring"="Enabled"}}
``` Finally, apply the defined security settings to the managed domain using the [Set-AzResource][Set-AzResource] cmdlet. Specify the Azure AD DS resource from the first step, and the security settings from the previous step. ```powershell
-Set-AzResource -Id $DomainServicesResource.ResourceId -Properties $securitySettings -Verbose -Force
+Set-AzResource -Id $DomainServicesResource.ResourceId -Properties $securitySettings -ApiVersion "2021-03-01" -Verbose -Force
``` It takes a few moments for the security settings to be applied to the managed domain.
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/customize-application-attributes.md
Along with this property, attribute-mappings also support the following attribut
- **Only during creation** - Apply this mapping only on user creation actions. ## Matching users in the source and target systems
-The Azure AD provisioning service can be deployed in both "green field" scenarios (where users do not exit in the target system) and "brownfield" scenarios (where users already exist in the target system). To support both scenarios, the provisioning service uses the concept of matching attributes. Matching attributes allow you to determine how to uniquely identify a user in the source and match the user in the target. As part of planning your deployment, identify the attribute that can be used to uniquely identify a user in the source and target systems. Things to note:
+The Azure AD provisioning service can be deployed in both "green field" scenarios (where users do not exist in the target system) and "brownfield" scenarios (where users already exist in the target system). To support both scenarios, the provisioning service uses the concept of matching attributes. Matching attributes allow you to determine how to uniquely identify a user in the source and match the user in the target. As part of planning your deployment, identify the attribute that can be used to uniquely identify a user in the source and target systems. Things to note:
- **Matching attributes should be unique:** Customers often use attributes such as userPrincipalName, mail, or object ID as the matching attribute. - **Multiple attributes can be used as matching attributes:** You can define multiple attributes to be evaluated when matching users and the order in which they are evaluated (defined as matching precedence in the UI). If, for example, you define three attributes as matching attributes, and a user is uniquely matched after evaluating the first two attributes, the service will not evaluate the third attribute. The service will evaluate matching attributes in the order specified and stop evaluating when a match is found.
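The ordered evaluation described above, stopping at the first unique match, can be sketched as follows (hypothetical helper; not the provisioning service's actual implementation):

```python
def match_user(source_user: dict, target_users: list, matching_attributes: list):
    """Evaluate matching attributes in precedence order; stop at the first unique match."""
    for attr in matching_attributes:
        value = source_user.get(attr)
        if value is None:
            continue
        matches = [u for u in target_users if u.get(attr) == value]
        if len(matches) == 1:
            return matches[0]  # uniquely matched; later attributes are not evaluated
    return None  # no unique match found in the target system
```

For example, with `["userPrincipalName", "mail"]` as the precedence list, a user uniquely matched on `userPrincipalName` is never compared on `mail`.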
Selecting this option will effectively force a resynchronization of all users wh
- [Writing Expressions for Attribute-Mappings](functions-for-customizing-application-data.md) - [Scoping Filters for User Provisioning](define-conditional-rules-for-provisioning-user-accounts.md) - [Using SCIM to enable automatic provisioning of users and groups from Azure Active Directory to applications](use-scim-to-provision-users-and-groups.md)-- [List of Tutorials on How to Integrate SaaS Apps](../saas-apps/tutorial-list.md)
+- [List of Tutorials on How to Integrate SaaS Apps](../saas-apps/tutorial-list.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/whats-new-docs.md
Title: "What's new in Azure Active Directory application provisioning" description: "New and updated documentation for the Azure Active Directory application provisioning." Previously updated : 04/05/2021 Last updated : 05/04/2021
Welcome to what's new in Azure Active Directory application provisioning documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the provisioning service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## April 2021
+
+### Updated articles
+
+- [Syncing extension attributes for app provisioning](user-provisioning-sync-attributes-for-mapping.md)
## March 2021 ### Updated articles
Welcome to what's new in Azure Active Directory application provisioning documen
- [Tutorial: Develop a sample SCIM endpoint](use-scim-to-build-users-and-groups-endpoints.md) - [Tutorial: Develop and plan provisioning for a SCIM endpoint](use-scim-to-provision-users-and-groups.md) - [How provisioning works](how-provisioning-works.md)-
-## January 2021
-
-### New articles
-- [How Azure Active Directory provisioning integrates with Workday](workday-integration-reference.md)-
-### Updated articles
-- [Tutorial: Develop a sample SCIM endpoint](use-scim-to-build-users-and-groups-endpoints.md)-- [Tutorial - Customize user provisioning attribute-mappings for SaaS applications in Azure Active Directory](customize-application-attributes.md)-- [How Azure Active Directory provisioning integrates with SAP SuccessFactors](sap-successfactors-integration-reference.md)-- [Application provisioning in quarantine status](application-provisioning-quarantine-status.md)--
-## December 2020
-
-### Updated articles
-- [Known issues: Application provisioning](known-issues.md)-- [What is automated SaaS app user provisioning in Azure AD?](user-provisioning.md)-- [Tutorial - Customize user provisioning attribute-mappings for SaaS applications in Azure Active Directory](customize-application-attributes.md)--
-## November 2020
-
-### Updated articles
-- [Tutorial - Customize user provisioning attribute-mappings for SaaS applications in Azure Active Directory](customize-application-attributes.md)-- [How provisioning works](how-provisioning-works.md)-- [Tutorial - Build a SCIM endpoint and configure user provisioning with Azure AD](use-scim-to-provision-users-and-groups.md)--
-## October 2020
-
-### New articles
--- [Understand how provisioning integrates with Azure Monitor logs](application-provisioning-log-analytics.md)-
-### Updated articles
--- [How provisioning works](how-provisioning-works.md)-- [Understand how provisioning integrates with Azure Monitor logs](application-provisioning-log-analytics.md)-- [Customizing user provisioning attribute-mappings for SaaS applications in Azure Active Directory](customize-application-attributes.md)-- [Reference for writing expressions for attribute mappings in Azure AD](functions-for-customizing-application-data.md)-- [Tutorial - Build a SCIM endpoint and configure user provisioning with Azure AD](use-scim-to-provision-users-and-groups.md)-- [Enable automatic user provisioning for your multi-tenant application](isv-automatic-provisioning-multi-tenant-apps.md)-- [Known issues: Application provisioning](known-issues.md)-- [Plan an automatic user provisioning deployment](plan-auto-user-provisioning.md)-- [Plan cloud HR application to Azure Active Directory user provisioning](plan-cloud-hr-provision.md)-- [On-demand provisioning](provision-on-demand.md)--
-## September 2020
-
-### New articles
--- [What's new in docs?](whats-new-docs.md)-
-### Updated articles
-- [Application provisioning in quarantine status](application-provisioning-quarantine-status.md)-- [Customizing user provisioning attribute-mappings for SaaS applications in Azure Active Directory](customize-application-attributes.md)-- [Build a SCIM endpoint and configure user provisioning with Azure AD](use-scim-to-provision-users-and-groups.md)-- [Workday attribute reference](workday-attribute-reference.md)-
-## August 2020
-
-### New articles
-- [Known issues: Application provisioning](known-issues.md)--
-### Updated articles
-- [Configure provisioning using Microsoft Graph APIs](/graph/application-provisioning-configure-api)-- [Known issues and resolutions with SCIM 2.0 protocol compliance of the Azure AD User Provisioning service](application-provisioning-config-problem-scim-compatibility.md)-
-## July 2020
-
-### New articles
-- [How Azure Active Directory provisioning integrates with SAP SuccessFactors](sap-successfactors-integration-reference.md)--
-### Updated articles
-- [On-demand provisioning](provision-on-demand.md)
active-directory Application Proxy Integrate With Sharepoint Server Saml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-integrate-with-sharepoint-server-saml.md
Title: Publish on premises SharePoint with Azure Active Directory Application Proxy
-description: Covers the basics about how to integrate an on-premises SharePoint server with Azure Active Directory Application Proxy for SAML.
+ Title: Publish an on-premises SharePoint farm with Azure Active Directory Application Proxy
+description: Covers the basics about how to integrate an on-premises SharePoint farm with Azure Active Directory Application Proxy for SAML.
# Integrate Azure Active Directory Application Proxy with SharePoint (SAML)
-This step-by-step guide explains how to secure the access to the [Azure Active Directory integrated on-premises Sharepoint (SAML)](../saas-apps/sharepoint-on-premises-tutorial.md) using Azure AD Application Proxy, where users in your organization (Azure AD, B2B) connect to Sharepoint through the Internet.
+This step-by-step guide explains how to secure the access to the [Azure Active Directory integrated on-premises SharePoint (SAML)](../saas-apps/sharepoint-on-premises-tutorial.md) using Azure AD Application Proxy, where users in your organization (Azure AD, B2B) connect to SharePoint through the Internet.
-> [!NOTE]
+> [!NOTE]
> If you're new to Azure AD Application Proxy and want to learn more, see [Remote access to on-premises applications through Azure AD Application Proxy](./application-proxy.md). There are three primary advantages of this setup: -- Azure AD Application Proxy ensures that authenticated traffic can reach your internal network and the Sharepoint server.-- Your users can access the Sharepoint sites as usual without using VPN.-- You can control the access by user assignment on Azure AD Application Proxy level and you can increase the security with Azure AD features like Conditional Access and Multi-Factor Authentication (MFA).
+- Azure AD Application Proxy ensures that authenticated traffic can reach your internal network and SharePoint.
+- Your users can access SharePoint sites as usual without using VPN.
+- You can control the access by user assignment on the Azure AD Application Proxy level and you can increase the security with Azure AD features like Conditional Access and Multi-Factor Authentication (MFA).
This process requires two Enterprise Applications. One is a SharePoint on-premises instance that you publish from the gallery to your list of managed SaaS apps. The second is an on-premises application (non-gallery application) you'll use to publish the first Enterprise Gallery Application. ## Prerequisites To complete this configuration, you need the following resources:
+ - A SharePoint 2013 farm or newer. The SharePoint farm must be [integrated with Azure AD](../saas-apps/sharepoint-on-premises-tutorial.md).
- An Azure AD tenant with a plan that includes Application Proxy. Learn more about [Azure AD plans and pricing](https://azure.microsoft.com/pricing/details/active-directory/). - A [custom, verified domain](../fundamentals/add-custom-domain.md) in the Azure AD tenant. The verified domain must match the SharePoint URL suffix. - An SSL certificate is required. See the details in [custom domain publishing](./application-proxy-configure-custom-domain.md). - On-premises Active Directory users must be synchronized with Azure AD Connect, and must be configured to [sign in to Azure](../hybrid/plan-connect-user-signin.md).
+ - For cloud-only and B2B guest users, you need to [grant access to a guest account to SharePoint on-premises in the Azure portal](../saas-apps/sharepoint-on-premises-tutorial.md#manage-guest-users-access).
- An Application Proxy connector installed and running on a machine within the corporate domain.
-## Step 1: Integrate SharePoint on-premises with Azure AD
+## Step 1: Integrate SharePoint on-premises with Azure AD
1. Configure the SharePoint on-premises app. For more information, see [Tutorial: Azure Active Directory single sign-on integration with SharePoint on-premises](../saas-apps/sharepoint-on-premises-tutorial.md).
-2. Validate the configuration before moving to the next step. To validate, try to access the SharePoint on-premises from the internal network and confirm it's accessible internally.
+2. Validate the configuration before moving to the next step. To validate, try to access the SharePoint on-premises from the internal network and confirm it's accessible internally.
-## Step 2: Publish the Sharepoint on-premises application with Application Proxy
+## Step 2: Publish the SharePoint on-premises application with Application Proxy
In this step, you create an application in your Azure AD tenant that uses Application Proxy. You set the external URL and specify the internal URL, both of which are used later in SharePoint.
-> [!NOTE]
+> [!NOTE]
> The Internal and External URLs must match the **Sign on URL** in the SAML Based Application configuration in Step 1. ![Screenshot that shows the Sign on URL value.](./media/application-proxy-integrate-with-sharepoint-server/sso-url-saml.png)
In this step, you create an application in your Azure AD tenant that uses Applic
![Screenshot that shows the options you use to create the app.](./media/application-proxy-integrate-with-sharepoint-server/create-application-azure-active-directory.png)
-2. Assign the [same groups](../saas-apps/sharepoint-on-premises-tutorial.md#create-an-azure-ad-security-group-in-the-azure-portal) you assigned to the on-premises SharePoint Gallery Application.
+2. Assign the [same groups](../saas-apps/sharepoint-on-premises-tutorial.md#grant-permissions-to-a-security-group) you assigned to the on-premises SharePoint Gallery Application.
3. Finally, go to the **Properties** section and set **Visible to users?** to **No**. This option ensures that only the icon of the first application appears on the My Apps Portal (https://myapplications.microsoft.com).
In this step, you create an application in your Azure AD tenant that uses Applic
## Step 3: Test your application
-Using a browser from a computer on an external network, navigate to the link that you configured during the publish step. Make sure you can sign in with the test account that you set up.
+Using a browser from a computer on an external network, navigate to the link that you configured during the publish step. Make sure you can sign in with the test account that you set up.
active-directory Application Proxy Integrate With Sharepoint Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-integrate-with-sharepoint-server.md
Title: Enable remote access to SharePoint - Azure Active Directory Application Proxy
-description: Covers the basics about how to integrate an on-premises SharePoint server with Azure Active Directory Application Proxy.
+description: Covers the basics about how to integrate on-premises SharePoint Server with Azure Active Directory Application Proxy.
Configure the KCD so that the Azure AD Application Proxy service can delegate us
To configure the KCD, follow these steps for each connector machine: 1. Sign in to a domain controller as a domain administrator, and then open Active Directory Users and Computers.
-1. Find the computer running the Azure AD Proxy connector. In this example, it's the SharePoint server itself.
+1. Find the computer running the Azure AD Proxy connector. In this example, it's the computer that's running SharePoint Server.
1. Double-click the computer, and then select the **Delegation** tab. 1. Make sure the delegation options are set to **Trust this computer for delegation to the specified services only**. Then, select **Use any authentication protocol**. 1. Select the **Add** button, select **Users or Computers**, and locate the SharePoint application pool account. For example: `Contoso\spapppool`.
If sign-in to the site isn't working, you can get more information about the iss
## Next steps * [Working with custom domains in Azure AD Application Proxy](application-proxy-configure-custom-domain.md)
-* [Understand Azure AD Application Proxy connectors](application-proxy-connectors.md)
+* [Understand Azure AD Application Proxy connectors](application-proxy-connectors.md)
active-directory Application Proxy Release Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-release-version-history.md
# Azure AD Application Proxy: Version release history
-This article lists the versions and features of Azure Active Directory (Azure AD) Application Proxy that have been released. The Azure AD team regularly updates Application Proxy with new features and functionality. Application Proxy connectors are updated automatically when a new version is released.
+This article lists the versions and features of Azure Active Directory (Azure AD) Application Proxy that have been released. The Azure AD team regularly updates Application Proxy with new features and functionality. Application Proxy connectors are [updated automatically when a new major version is released](application-proxy-faq.yml#why-is-my-connector-still-using-an-older-version-and-not-auto-upgraded-to-latest-version-).
-We recommend making sure that auto-updates are enabled for your connectors to ensure you have the latest features and bug fixes. Microsoft provides direct support for the latest connector version and one version before.
+We recommend making sure that auto-updates are enabled for your connectors to ensure you have the latest features and bug fixes. Microsoft Support might ask you to install the latest connector version to resolve a problem.
Here is a list of related resources:
Here is a list of related resources:
### Release status July 22, 2020: Released for download
-This version is only available for install via the download page. An auto-upgrade release of this version will be released at a later time.
+This version is only available for install via the download page.
### New features and improvements - Improved support for Azure Government cloud environments. For steps on how to properly install the connector for Azure Government cloud review the [pre-requisites](../hybrid/reference-connect-government-cloud.md#allow-access-to-urls) and [installation steps](../hybrid/reference-connect-government-cloud.md#install-the-agent-for-the-azure-government-cloud).
This version is only available for install via the download page. An auto-upgrad
### Release status July 17, 2020: Released for download.
-This version is only available for install via the download page. An auto-upgrade release of this version will be released at a later time.
+This version is only available for install via the download page.
### Fixed issues - Resolved memory leak issue present in previous version
This version is only available for install via the download page. An auto-upgrad
### Release status April 07, 2020: Released for download
-This version is only available for install via the download page. An auto-upgrade release of this version will be released at a later time.
+This version is only available for install via the download page.
### New features and improvements - Connectors only use TLS 1.2 for all connections. See [Connector pre-requisites](application-proxy-add-on-premises-application.md#prerequisites) for more details.
active-directory Concept Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-sspr-writeback.md
Passwords aren't written back in any of the following situations:
* **Unsupported end-user operations** * Any end user resetting their own password by using PowerShell version 1, version 2, or the Microsoft Graph API. * **Unsupported administrator operations**
- * Any administrator-initiated end-user password reset from PowerShell version 1, version 2, or the Microsoft Graph API (the [Microsoft Graph API](/graph/api/passwordauthenticationmethod-resetpassword?tabs=http) is supported).
+ * Any administrator-initiated end-user password reset from PowerShell version 1, or version 2.
* Any administrator-initiated end-user password reset from the [Microsoft 365 admin center](https://admin.microsoft.com). * Administrators can't use the password reset tool to reset their own password when password writeback is enabled.
Passwords aren't written back in any of the following situations:
To get started with SSPR writeback, complete the following tutorial: > [!div class="nextstepaction"]
-> [Tutorial: Enable self-service password reset (SSPR) writeback](./tutorial-enable-sspr-writeback.md)
+> [Tutorial: Enable self-service password reset (SSPR) writeback](./tutorial-enable-sspr-writeback.md)
active-directory Howto Authentication Passwordless Security Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md
Previously updated : 04/21/2021 Last updated : 05/04/2021
Registration features for passwordless authentication methods rely on the combin
1. **Target** - All users or Select users 1. **Save** the configuration. +
+### FIDO Security Key optional settings
+
+There are some optional settings for managing security keys per tenant.
+
+![Screenshot of FIDO2 security key options](media/howto-authentication-passwordless-security-key/optional-settings.png)
+
+**General**
+
+- **Allow self-service set up** should remain set to **Yes**. If set to **No**, your users will not be able to register a FIDO key through the MySecurityInfo portal, even if it's enabled by the Authentication Methods policy.
+- Setting **Enforce attestation** to **Yes** requires the FIDO security key metadata to be published and verified with the FIDO Alliance Metadata Service, and also to pass Microsoft's additional set of validation testing. For more information, see [What is a Microsoft-compatible security key?](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/microsoft-compatible-security-key)
+
+**Key Restriction Policy**
+
+- **Enforce key restrictions** should be set to **Yes** only if your organization wants to allow or disallow only certain FIDO security keys, which are identified by their AAGUIDs. You can work with your security key provider to determine the AAGUIDs of their devices. If the key is already registered, the AAGUID can also be found by viewing the authentication method details of the key per user.
+ ## User registration and management of FIDO2 security keys 1. Browse to [https://myprofile.microsoft.com](https://myprofile.microsoft.com).
active-directory Howto Password Ban Bad On Premises Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
The proxy service doesn't support the use of specific credentials for connecting
### Configure the proxy service to listen on a specific port
-The Azure AD Password Protection DC agent software uses RPC over TCP to communicate with the proxy service. By default, the Azure AD Password Protection proxy service listens on any available dynamic RPC endpoint. You can configure the service to listen on a specific TCP port, if necessary due to networking topology or firewall requirements in your environment.
+The Azure AD Password Protection DC agent software uses RPC over TCP to communicate with the proxy service. By default, the Azure AD Password Protection proxy service listens on any available dynamic RPC endpoint. You can configure the service to listen on a specific TCP port, if necessary due to networking topology or firewall requirements in your environment. When you configure a static port, you must open port 135 and the static port of your choice.
<a id="static" /></a>To configure the service to run under a static port, use the `Set-AzureADPasswordProtectionProxyConfiguration` cmdlet as follows:
active-directory How To Attribute Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-attribute-mapping.md
Title: 'Attribute mapping in Azure AD Connect cloud sync'
description: This article describes how to use the cloud sync feature of Azure AD Connect to map attributes. -+ Previously updated : 01/21/2021 Last updated : 04/30/2021
You can use the cloud sync feature of Azure Active Directory (Azure AD) Connect
You can customize (change, delete, or create) the default attribute mappings according to your business needs. For a list of attributes that are synchronized, see [Attributes synchronized to Azure Active Directory](../hybrid/reference-connect-sync-attributes-synchronized.md?context=azure%2factive-directory%2fcloud-provisioning%2fcontext%2fcp-context/hybrid/reference-connect-sync-attributes-synchronized.md).
+> [!NOTE]
+> This article describes how to use the Azure portal to map attributes. For information on using Microsoft Graph, see [Transformations](how-to-transformation.md).
+ ## Understand types of attribute mapping With attribute mapping, you control how attributes are populated in Azure AD. Azure AD supports four mapping types: -- **Direct**: The target attribute is populated with the value of an attribute of the linked object in Active Directory.-- **Constant**: The target attribute is populated with a specific string that you specify.-- **Expression**: The target attribute is populated based on the result of a script-like expression. For more information, see [Writing expressions for attribute mappings in Azure Active Directory](reference-expressions.md).-- **None**: The target attribute is left unmodified. However, if the target attribute is ever empty, it's populated with the default value that you specify.
+|Mapping Type|Description|
+|--|--|
+|**Direct**|The target attribute is populated with the value of an attribute of the linked object in Active Directory.|
+|**Constant**|The target attribute is populated with a specific string that you specify.|
+|**Expression**|The target attribute is populated based on the result of a script-like expression. For more information, see [Expression Builder](how-to-expression-builder.md) and [Writing expressions for attribute mappings in Azure Active Directory](reference-expressions.md).|
+|**None**|The target attribute is left unmodified. However, if the target attribute is ever empty, it's populated with the default value that you specify.|
Along with these basic types, custom attribute mappings support the concept of an optional *default* value assignment. The default value assignment ensures that a target attribute is populated with a value if Azure AD or the target object doesn't have a value. The most common configuration is to leave this blank.
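The four mapping types above can be sketched as a single resolution step. The following Python snippet is an illustrative approximation only (not cloud sync code; the function and parameter names are hypothetical), showing how each type produces a target value and how the optional default value assignment fills in when no value is produced:

```python
# Hypothetical sketch of the four mapping types; names are illustrative,
# not part of the cloud sync product or its APIs.
def resolve_mapping(mapping_type, source_value=None, constant=None,
                    expression=None, current_target=None, default=None):
    if mapping_type == "Direct":
        result = source_value          # copy the linked AD attribute
    elif mapping_type == "Constant":
        result = constant              # a fixed string you specify
    elif mapping_type == "Expression":
        result = expression(source_value)  # script-like transformation
    elif mapping_type == "None":
        result = current_target        # leave the target unmodified
    else:
        raise ValueError(f"Unknown mapping type: {mapping_type}")
    # Optional default value assignment: used when no value is produced.
    return result if result not in (None, "") else default

print(resolve_mapping("Direct", source_value="Jane"))       # Jane
print(resolve_mapping("Constant", constant="Contoso"))      # Contoso
print(resolve_mapping("Expression", source_value="jane@fabrikam.com",
                      expression=lambda v: v.split("@")[0]))  # jane
print(resolve_mapping("None", current_target=None, default="Member"))  # Member
```

The sketch makes the "most common configuration is to leave this blank" advice concrete: the default only matters when the mapping would otherwise yield nothing.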
-## Understand properties of attribute mapping
+## Schema updates and mappings
+Cloud sync occasionally updates the schema and the list of default attributes that are [synchronized](https://docs.microsoft.com/azure/active-directory/hybrid/reference-connect-sync-attributes-synchronized?context=/azure/active-directory/cloud-provisioning/context/cp-context). These default attribute mappings are available for new installations but aren't automatically added to existing installations. To add these mappings, follow these steps:
++
+ 1. Select **Add attribute mapping**.
+ 2. Select the **Target attribute** dropdown.
+ 3. The new attributes are available in the list.
+
+The following is a list of new mappings that were added.
+
|Attribute Added|Mapping Type|Added with Agent Version|
|--|--|--|
|preferredDataLocation|Direct|1.1.359.0|
+|EmployeeNumber|Direct|1.1.359.0|
+|UserType|Direct|1.1.359.0|
+
+For more information on how to map UserType, see [Map UserType with cloud sync](how-to-map-usertype.md).
+
+## Understand properties of attribute mappings
-Along with the type property, attribute mappings support the following attributes:
+Along with the type property, attribute mappings support certain attributes. These attributes depend on the type of mapping you've selected. The following sections describe the attributes supported for each of the individual mapping types.
+
+### Direct mapping attributes
+The following are the attributes supported by a direct mapping:
- **Source attribute**: The user attribute from the source system (example: Active Directory). - **Target attribute**: The user attribute in the target system (example: Azure Active Directory).
Along with the type property, attribute mappings support the following attribute
- **Always**: Apply this mapping on both user-creation and update actions. - **Only during creation**: Apply this mapping only on user-creation actions.
-> [!NOTE]
-> This article describes how to use the Azure portal to map attributes. For information on using Microsoft Graph, see [Transformations](how-to-transformation.md).
+ ![Screenshot for direct](media/how-to-attribute-mapping/mapping-7.png)
+
+### Constant mapping attributes
+The following are the attributes supported by a constant mapping:
+
+- **Constant value**: The value that you want to apply to the target attribute.
+- **Target attribute**: The user attribute in the target system (example: Azure Active Directory).
+- **Apply this mapping**:
+ - **Always**: Apply this mapping on both user-creation and update actions.
+ - **Only during creation**: Apply this mapping only on user-creation actions.
+
+ ![Screenshot for constant](media/how-to-attribute-mapping/mapping-9.png)
+
+### Expression mapping attributes
+The following are the attributes supported by an expression mapping:
+
+- **Expression**: This is the expression that is going to be applied to the target attribute. For more information, see [Expression Builder](how-to-expression-builder.md) and [Writing expressions for attribute mappings in Azure Active Directory](reference-expressions.md).
+- **Default value if null (optional)**: The value that will be passed to the target system if the source attribute is null. This value will be provisioned only when a user is created. It won't be provisioned when you're updating an existing user.
+- **Target attribute**: The user attribute in the target system (example: Azure Active Directory).
+
+- **Apply this mapping**:
+ - **Always**: Apply this mapping on both user-creation and update actions.
+ - **Only during creation**: Apply this mapping only on user-creation actions.
+
+ ![Screenshot for expression](media/how-to-attribute-mapping/mapping-10.png)
## Add an attribute mapping
To use the new capability, follow these steps:
![Screenshot that shows the button for adding an attribute, along with lists of attributes and mapping types.](media/how-to-attribute-mapping/mapping-1.png)
-7. Select the mapping type. For this example, we're using **Expression**.
-8. Enter the expression in the box. For this example, we're using `Replace([mail], "@contoso.com", , ,"", ,)`.
-9. Enter the target attribute. For this example, we're using **ExtensionAttribute15**.
-10. Select when to apply this mapping, and then select **Apply**.
-
- ![Screenshot that shows the filled-in boxes for creating an attribute mapping.](media/how-to-attribute-mapping/mapping-2a.png)
-
+7. Select the mapping type. This can be one of the following:
+ - **Direct**: The target attribute is populated with the value of an attribute of the linked object in Active Directory.
+ - **Constant**: The target attribute is populated with a specific string that you specify.
+ - **Expression**: The target attribute is populated based on the result of a script-like expression.
+ - **None**: The target attribute is left unmodified.
+
+ For more information, see [Understanding attribute types](#understand-types-of-attribute-mapping) above.
+8. Depending on what you selected in the previous step, different options are available to fill in. See the [Understand properties of attribute mappings](#understand-properties-of-attribute-mappings) sections above for information on these attributes.
+9. Select when to apply this mapping, and then select **Apply**.
11. Back on the **Attribute mappings** screen, you should see your new attribute mapping. 12. Select **Save schema**.
To test your attribute mapping, you can use [on-demand provisioning](how-to-on-d
![Screenshot that shows success and export details.](media/how-to-attribute-mapping/mapping-5.png) +++++ ## Next steps - [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md) - [Writing expressions for attribute mappings](reference-expressions.md)
+- [How to use expression builder with cloud sync](how-to-expression-builder.md)
- [Attributes synchronized to Azure Active Directory](../hybrid/reference-connect-sync-attributes-synchronized.md?context=azure%2factive-directory%2fcloud-provisioning%2fcontext%2fcp-context/hybrid/reference-connect-sync-attributes-synchronized.md)
active-directory How To Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-expression-builder.md
+
+ Title: 'How to use expression builder with Azure AD Connect cloud sync'
+description: This article describes how to use the expression builder with cloud sync.
++++++ Last updated : 04/19/2021+++++
+# Expression builder with cloud sync
+The expression builder is a new blade in Azure located under cloud sync. It helps you build complex expressions and lets you test them before you apply them to your cloud sync environment.
+
+## Use the expression builder
+To access the expression builder, use the following steps.
+
+ 1. In the Azure portal, select **Azure Active Directory**
+ 2. Select **Azure AD Connect**.
+ 3. Select **Manage cloud sync**.
+ 4. Under **Configuration**, select your configuration.
+ 5. Under **Manage attributes**, select **Click to edit mappings**.
+ 6. On the **Edit attribute mappings** blade, click **Add attribute mapping**.
+ 7. Under **Mapping type**, select **Expression**.
+ 8. Select **Try the expression builder (Preview)**.
+ ![Use expression builder](media/how-to-expression-builder/expression-1.png)
+
+## Build an expression
+This section lets you use the drop-down to select from a list of supported functions. It then provides additional fields for you to fill in, depending on the function selected. Once you select **Apply expression**, the syntax appears in the **Expression input** box.
+
+For example, by selecting **Replace** from the drop-down, additional boxes are provided. The syntax for the function is displayed in the light blue box. The boxes that are displayed correspond to the syntax of the function you selected. Replace works differently depending on the parameters provided. For our example we will use:
+
+- When oldValue and replacementValue are provided:
+ - Replaces all occurrences of oldValue in the source with replacementValue
+
+For more information, see [Replace](reference-expressions.md#replace)
+
+The first thing we need to do is select the attribute that is the source for the replace function. In our example, we selected the **mail** attribute.
+
+Next, we fill in the value for oldValue. This oldValue will be **@fabrikam.com**. Finally, in the box for replacementValue, we will fill in the value **@contoso.com**.
+
+So our expression basically says: replace the @fabrikam.com value of the mail attribute on user objects with @contoso.com. By selecting the **Add expression** button, we can see the syntax in the **Expression input** box.
++
+>[!NOTE]
+>Be sure to place the values in the boxes that correspond to oldValue and replacementValue, based on the syntax shown when you selected Replace.
+
+For more information on supported expressions, see [Writing expressions for attribute mappings in Azure Active Directory](reference-expressions.md)
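As a rough illustration of the Replace behavior described above (a Python approximation, not the actual expression engine), replacing every occurrence of oldValue in the source with replacementValue looks like:

```python
# Python approximation of Replace when oldValue and replacementValue are
# provided: all occurrences of oldValue in the source are replaced.
def replace_expression(source, old_value, replacement_value):
    return source.replace(old_value, replacement_value)

mail = "jane.doe@fabrikam.com"
print(replace_expression(mail, "@fabrikam.com", "@contoso.com"))
# jane.doe@contoso.com
```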
+
+### Information on expression builder input boxes
+Depending on which function you've selected, the boxes provided by the expression builder accept multiple values. For example, the Join function accepts strings or the value associated with a given attribute. We can take the value contained in the [givenName] attribute and join it with a string value of "@contoso.com" to create an email address.
+
+ ![Input box values](media/how-to-expression-builder/expression-8.png)
+
+For more information on acceptable values and how to write expressions, see [Writing expressions for attribute mappings in Azure Active Directory](reference-expressions.md).
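As an illustration of the Join behavior described above (a Python approximation; the real expression syntax is documented in the expressions reference), joining the value of [givenName] with the string "@contoso.com":

```python
# Python approximation of Join: concatenate the given values with a separator.
def join_expression(separator, *values):
    return separator.join(values)

given_name = "jane"  # stands in for the [givenName] attribute value
print(join_expression("", given_name, "@contoso.com"))  # jane@contoso.com
```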
+
+## Test an expression
+In this section, you can test your expressions. From the drop-down, select the **mail** attribute. Fill in the value with **@fabrikam.com** and now click **Test expression**.
+
+You will see the value of **@contoso.com** displayed in the **View expression output** box.
+
+ ![Test your expression](media/how-to-expression-builder/expression-4.png)
+
+## Deploy the expression
+Once you're satisfied with the expression, select the **Apply expression** button.
+![Add your expression](media/how-to-expression-builder/expression-5.png)
+
+This will add the expression to the agent configuration.
+![Agent configuration](media/how-to-expression-builder/expression-6.png)
+
+## Setting a NULL value on an expression
+To set an attribute's value to NULL, use an expression with the value of `""`. This flows a NULL value to the target attribute.
+
+![NULL value](media/how-to-expression-builder/expression-7.png)
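A minimal sketch of this convention (an assumption expressed in Python, not product code): an expression result of `""` flows NULL to the target attribute, while any other value flows through unchanged.

```python
# Illustrative only: "" from the expression maps to NULL on the target.
def to_target_value(expression_result):
    return None if expression_result == "" else expression_result

print(to_target_value(""))          # None (attribute cleared)
print(to_target_value("engineer"))  # engineer
```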
+++
+## Next steps
+
+- [Writing expressions for attribute mappings in Azure Active Directory](reference-expressions.md)
+- [Cloud sync configuration](how-to-configure.md)
active-directory How To Install Pshell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-install-pshell.md
Title: 'Install the Azure AD Connect cloud provisioning agent using powershell'
-description: Learn how to install the Azure AD Connect cloud provisioning agent using powershell cmdlets.
+ Title: 'Install the Azure AD Connect cloud provisioning agent using a command-line interface (CLI) and PowerShell'
+description: Learn how to install the Azure AD Connect cloud provisioning agent using PowerShell cmdlets.
-# Install the Azure AD Connect provisioning agent using powershell cmdlets
+# Install the Azure AD Connect provisioning agent using a command-line interface (CLI) and PowerShell
This document shows you how to install the Azure AD Connect provisioning agent by using PowerShell cmdlets.
+>[!NOTE]
+>This document deals with installing the provisioning agent using the command-line interface. For information on installing the Azure AD Connect provisioning agent using the wizard, see [Install the Azure AD Connect provisioning agent](how-to-install.md).
## Prerequisite:
The following document will guide show you how to install the Azure AD Connect p
>[!IMPORTANT] >The following installation instructions assume that all of the [Prerequisites](how-to-prerequisites.md) have been met. >
-> The windows server needs to have TLS 1.2 enabled before you install the Azure AD Connect provisioning agent using powershell cmdlets. To enable TLS 1.2 you can use the steps found [here](how-to-prerequisites.md#tls-requirements).
+> The Windows server needs to have TLS 1.2 enabled before you install the Azure AD Connect provisioning agent using PowerShell cmdlets. To enable TLS 1.2, you can use the steps found [here](how-to-prerequisites.md#tls-requirements).
-## Install the Azure AD Connect provisioning agent using powershell cmdlets
+## Install the Azure AD Connect provisioning agent using PowerShell cmdlets
1. Sign in to the Azure portal, and then go to **Azure Active Directory**.
The following document will guide show you how to install the Azure AD Connect p
7. Import Provisioning Agent PS module ```
- Import-Module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.Powershell.dll"
+ Import-Module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.PowerShell.dll"
``` 8. Connect to Azure AD using global administrator credentials. You can customize this section to fetch the password from a secure store.
The following document will guide show you how to install the Azure AD Connect p
``` Restart-Service -Name AADConnectProvisioningAgent ```
- 15. Go to the azure portal to create the cloud sync configuration.
+ 15. Go to the Azure portal to create the cloud sync configuration.
## Provisioning agent gMSA PowerShell cmdlets Now that you have installed the agent, you can apply more granular permissions to the gMSA. See [Azure AD Connect cloud provisioning agent gMSA PowerShell cmdlets](how-to-gmsa-cmdlets.md) for information and step-by-step instructions on configuring the permissions.
active-directory How To Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-install.md
Installing and configuring the Azure AD Connect cloud sync is accomplished in th
- [Install the agent](#install-the-agent) - [Verify agent installation](#verify-agent-installation)
+>[!NOTE]
+>This document deals with installing the provisioning agent using the wizard. For information on installing the Azure AD Connect provisioning agent using a command-line interface (CLI), see [Install the Azure AD Connect provisioning agent using a command-line interface (CLI) and PowerShell](how-to-install-pshell.md).
## Group Managed Service Accounts A group Managed Service Account is a managed domain account that provides automatic password management, simplified service principal name (SPN) management, and the ability to delegate the management to other administrators, and extends this functionality over multiple servers. Azure AD Connect Cloud Sync supports and recommends the use of a group Managed Service Account for running the agent. For more information on a gMSA, see [Group Managed Service Accounts](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview)
To upgrade an existing agent to use the gMSA account created during installation
++ ## Install the agent To install the agent, follow these steps.
active-directory How To Map Usertype https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-map-usertype.md
+
+ Title: 'How to map UserType with Azure AD Connect cloud sync'
+description: This article describes how to map the UserType attribute with cloud sync.
++++++ Last updated : 05/04/2021+++++
+# Map UserType with cloud sync
+
+Cloud sync supports synchronization of the UserType attribute for User objects.
+
+By default, the UserType attribute is not enabled for synchronization because there is no corresponding UserType attribute in on-premises Active Directory. You must manually add this mapping for synchronization. Before doing this, you must take note of the following behavior enforced by Azure AD:
+
+- Azure AD only accepts two values for the UserType attribute: Member and Guest.
+- If the UserType attribute is not mapped in cloud sync, Azure AD users created through directory synchronization would have the UserType attribute set to Member.
+
+Before adding a mapping for the UserType attribute, you must first decide how the attribute is derived from on-premises Active Directory. The following are the most common approaches:
+
+ - Designate an unused on-premises AD attribute (such as extensionAttribute1) to be used as the source attribute. The designated on-premises AD attribute should be of the type string, be single-valued, and contain the value Member or Guest.
+ - If you choose this approach, you must ensure that the designated attribute is populated with the correct value for all existing user objects in on-premises Active Directory that are synchronized to Azure AD before enabling synchronization of the UserType attribute.
+
+## To add the UserType mapping
+To add the UserType mapping, use the following steps.
+
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. Select **Azure AD Connect**.
+ 3. Select **Manage cloud sync**.
+ 4. Under **Configuration**, select your configuration.
+ 5. Under **Manage attributes**, select **Click to edit mappings**.
+ ![Edit the attribute mappings](media/how-to-map-usertype/usertype-1.png)
+
+ 6. Click **Add attribute mapping**.
+ ![Add a new attribute mapping](media/how-to-map-usertype/usertype-2.png)
+7. Select the mapping type. You can do the mapping in one of three ways:
+ - a direct mapping (that is, from an AD attribute)
+ - an expression (for example, IIF(InStr([userPrincipalName], "@partners") > 0,"Guest","Member"))
+ - a constant (for example, mark all user objects as Guest).
+ ![Add usertype](media/how-to-map-usertype/usertype-3.png)
+8. In the Target attribute dropdown, select UserType.
+9. Click the **Apply** button at the bottom of the page. This will create a mapping for the Azure AD UserType attribute.
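The expression option above can be sanity-checked outside the portal. The following is a hedged sketch (not part of cloud sync) that simply mirrors the example expression `IIF(InStr([userPrincipalName], "@partners") > 0,"Guest","Member")` so you can predict which value a given userPrincipalName would produce:

```python
def derive_user_type(user_principal_name: str) -> str:
    """Mirror IIF(InStr([userPrincipalName], "@partners") > 0, "Guest", "Member")."""
    # InStr(...) > 0 means the substring occurs anywhere in the UPN.
    return "Guest" if "@partners" in user_principal_name else "Member"

print(derive_user_type("dana@partners.fabrikam.com"))  # Guest
print(derive_user_type("dana@contoso.com"))            # Member
```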
+
+## Next steps
+
+- [Writing expressions for attribute mappings in Azure Active Directory](reference-expressions.md)
+- [Cloud sync configuration](how-to-configure.md)
active-directory How To Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-prerequisites.md
The following are known limitations:
### Delta Synchronization -- Group scope filtering for delta sync does not support more than 1500 members.
+- Group scope filtering for delta sync does not support more than 50,000 members.
- When you delete a group that's used as part of a group scoping filter, users who are members of the group don't get deleted. - When you rename the OU or group that's in scope, delta sync will not remove the users.
The following are known limitations:
### Group renaming or OU renaming - If you rename a group or OU in AD that's in scope for a given configuration, the cloud sync job will not be able to recognize the name change in AD. The job won't go into quarantine and will remain healthy.
+### Scoping filter
+When using the OU scoping filter:
+- You can only sync up to 59 separate OUs for a given configuration.
+- Nested OUs are supported (that is, you **can** sync an OU that has 130 nested OUs, but you **cannot** sync 60 separate OUs in the same configuration).
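If you build configurations programmatically, the limit above can be checked up front. A minimal sketch, assuming only the 59-OU figure from the list above (the helper and its name are hypothetical):

```python
MAX_SEPARATE_OUS = 59  # per-configuration limit noted above

def validate_ou_scope(selected_ous):
    """Reject a configuration that selects more separate OUs than cloud sync supports."""
    if len(selected_ous) > MAX_SEPARATE_OUS:
        raise ValueError(
            f"{len(selected_ous)} OUs selected; a single configuration supports at most {MAX_SEPARATE_OUS}"
        )
    return True

# Nested OUs don't count against the limit: selecting one parent OU is a single
# entry, no matter how many child OUs it contains.
validate_ou_scope(["OU=Sales,DC=contoso,DC=com"])
```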
+ ## Next steps
active-directory Scenario Desktop App Registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-app-registration.md
The redirect URIs to use in a desktop application depend on the flow you want to
Specify the redirect URI for your app by [configuring the platform settings](quickstart-register-app.md#add-a-redirect-uri) for the app in **App registrations** in the Azure portal. - For apps that use interactive authentication:+ - Apps that use embedded browsers: `https://login.microsoftonline.com/common/oauth2/nativeclient`
+ (Note: If your app opens a pop-up window that typically has no address bar, it's using the "embedded browser".)
- Apps that use system browsers: `http://localhost`-
+ (Note: If your app launches the system's default browser (such as Edge, Chrome, or Firefox) to open the Microsoft sign-in page, it's using the "system browser".)
+
> [!IMPORTANT] > As a security best practice, we recommend explicitly setting `https://login.microsoftonline.com/common/oauth2/nativeclient` or `http://localhost` as the redirect URI. Some authentication libraries like MSAL.NET use a default value of `urn:ietf:wg:oauth:2.0:oob` when no other redirect URI is specified, which is not recommended. This default will be updated as a breaking change in the next major release.
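To make the redirect URI choice concrete, here is a hedged sketch of composing an authorization-code request URL for a desktop app that uses the system browser. The client ID is a placeholder; only the parameter names follow the standard authorization code flow:

```python
from urllib.parse import urlencode

def build_authorize_url(tenant, client_id, redirect_uri, scopes):
    """Compose a v2.0 authorization-code request URL for a desktop app."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,  # http://localhost for system-browser apps
        "scope": " ".join(scopes),
    }
    return f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?{urlencode(params)}"

url = build_authorize_url(
    "common",
    "00000000-0000-0000-0000-000000000000",  # placeholder client ID
    "http://localhost",
    ["openid", "offline_access"],
)
```

With an embedded browser you would instead pass `https://login.microsoftonline.com/common/oauth2/nativeclient` as the `redirect_uri`.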
active-directory V2 Saml Bearer Assertion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-saml-bearer-assertion.md
+
+ Title: Microsoft identity platform & SAML bearer assertion flow | Azure
+description: Learn how to fetch data from Microsoft Graph without prompting the user for credentials using the SAML bearer assertion flow.
+++++++ Last updated : 08/05/2019++++
+# Microsoft identity platform and OAuth 2.0 SAML bearer assertion flow
+The OAuth 2.0 SAML bearer assertion flow allows you to request an OAuth access token using a SAML assertion when a client needs to use an existing trust relationship. The signature applied to the SAML assertion provides authentication of the authorized app. A SAML assertion is an XML security token issued by an identity provider and consumed by a service provider. The service provider relies on its content to identify the assertion's subject for security-related purposes.
+
+The SAML assertion is posted to the OAuth token endpoint. The endpoint processes the assertion and issues an access token based on prior approval of the app. The client isn't required to have or store a refresh token, nor is the client secret required to be passed to the token endpoint.
+
+The SAML bearer assertion flow is useful when you need to fetch data from Microsoft Graph APIs that support only delegated permissions, without prompting the user for credentials. In this scenario the client credentials grant, which is preferred for background processes, does not work.
+
+For applications that do interactive browser-based sign-in to get a SAML assertion and then want to add access to an OAuth protected API (such as Microsoft Graph), you can make an OAuth request to get an access token for the API. When the browser is redirected to Azure AD to authenticate the user, the browser will pick up the session from the SAML sign-in and the user doesn't need to enter their credentials.
+
+The OAuth SAML Bearer Assertion flow is also supported for users authenticating with identity providers such as Active Directory Federation Services (ADFS) federated to Azure Active Directory. The SAML assertion obtained from ADFS can be used in an OAuth flow to authenticate the user.
+
+![OAuth flow](./media/v2-saml-bearer-assertion/1.png)
+
+## Call Graph using SAML bearer assertion
+This section shows how to fetch a SAML assertion programmatically. The approach was tested with ADFS, but it works with any identity provider that supports returning SAML assertions programmatically. The basic process is: get a SAML assertion, get an access token, and access Microsoft Graph.
+
+### Prerequisites
+
+Establish a trust relationship between the authorization server/environment (Microsoft 365) and the identity provider, or issuer of the SAML 2.0 bearer assertion (ADFS). To configure ADFS for single sign-on and as an identity provider you may refer to [this article](/archive/blogs/canitpro/step-by-step-setting-up-ad-fs-and-enabling-single-sign-on-to-office-365).
+
+Register the application in the [portal](https://ms.portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade):
+1. Sign in to the [app registration blade of the portal](https://ms.portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade). (Because this flow uses the v2.0 endpoints for the Graph API, the application must be registered in this portal; otherwise, the registrations in Azure Active Directory could have been used.)
+1. Select **New registration**.
+1. When the **Register an application** page appears, enter your application's registration information:
+ 1. **Name** - Enter a meaningful application name that will be displayed to users of the app.
+ 1. **Supported account types** - Select which accounts you would like your application to support.
+ 1. **Redirect URI (optional)** - Select the type of app you're building, Web, or Public client (mobile & desktop), and then enter the redirect URI (or reply URL) for your application.
+ 1. When finished, select **Register**.
+1. Make a note of the application (client) ID.
+1. In the left pane, select **Certificates & secrets**. Click **New client secret** in the **Client secrets** section. Copy the new client secret; you won't be able to retrieve it after you leave the blade.
+1. In the left pane, select **API permissions** and then **Add a permission**. Select **Microsoft Graph**, then **Delegated permissions**, and then select **Tasks.Read**, since this example uses the Outlook Graph API.
+
+Install [Postman](https://www.getpostman.com/), a tool required to test the sample requests. Later, you can convert the requests to code.
+
+### Get the SAML assertion from ADFS
+Create a POST request to the ADFS endpoint using a SOAP envelope to fetch the SAML assertion:
+
+![Get SAML assertion](./media/v2-saml-bearer-assertion/2.png)
+
+Header values:
+
+![Header values](./media/v2-saml-bearer-assertion/3.png)
+
+ADFS request body:
+
+![ADFS request body](./media/v2-saml-bearer-assertion/4.png)
+
+Once this request is posted successfully, you should receive a SAML assertion from ADFS. Only the **SAML:Assertion** tag data is required; convert it to base64 encoding to use in further requests.
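The encoding step can be sketched as follows, assuming you have extracted the **SAML:Assertion** element into a string:

```python
import base64

def encode_assertion(assertion_xml: str) -> str:
    """Base64-encode the extracted SAML:Assertion element for the token request."""
    return base64.b64encode(assertion_xml.encode("utf-8")).decode("ascii")

# Placeholder XML; in practice this is the SAML:Assertion element from the ADFS response.
encoded = encode_assertion("<saml:Assertion>example</saml:Assertion>")
```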
+
+### Get the OAuth2 token using the SAML assertion
+In this step, fetch an OAuth2 token using the ADFS assertion response.
+
+1. Create a POST request as shown below with the header values:
+
+ ![POST request](./media/v2-saml-bearer-assertion/5.png)
+1. In the body of the request, replace **client_id**, **client_secret**, and **assertion** (the base64-encoded SAML assertion obtained in the previous step):
+
+ ![Request body](./media/v2-saml-bearer-assertion/6.png)
+1. Upon successful request, you will receive an access token from Azure Active Directory.
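The form-encoded body of this token request can be sketched as below. All values are placeholders, and the grant type shown, `urn:ietf:params:oauth:grant-type:saml1_1-bearer`, is the OAuth assertion grant for SAML 1.1 tokens such as those issued by ADFS; confirm the exact value against the request body shown above:

```python
from urllib.parse import urlencode

def build_token_request_body(client_id, client_secret, assertion_b64, scope):
    """Form-encoded body for the POST to the token endpoint (hedged sketch)."""
    return urlencode({
        # Assumed grant type for a SAML 1.1 assertion issued by ADFS.
        "grant_type": "urn:ietf:params:oauth:grant-type:saml1_1-bearer",
        "client_id": client_id,          # application (client) ID from the registration
        "client_secret": client_secret,  # secret created under Certificates & secrets
        "assertion": assertion_b64,      # base64-encoded SAML:Assertion
        "scope": "https://graph.microsoft.com/" + scope,
    })

body = build_token_request_body(
    "00000000-0000-0000-0000-000000000000",  # placeholder client ID
    "placeholder-secret",
    "base64-assertion-goes-here",
    "Tasks.Read",
)
```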
+
+### Get the data with the OAuth token
+
+After receiving the access token, call the Graph APIs (Outlook tasks in this example).
+
+1. Create a GET request with the access token fetched in the previous step:
+
+ ![GET request](./media/v2-saml-bearer-assertion/7.png)
+
+1. Upon successful request, you will receive a JSON response.
+
+## Next steps
+
+Learn about the different [authentication flows and application scenarios](authentication-flows-app-scenarios.md).
active-directory Users Restrict Guest Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
When guest access is restricted, guests can view only their own user profile. Pe
## Permissions and licenses
-You must be in the Global Administrator role to configure the external collaboration settings. There are no additional licensing requirements to restrict guest access.
+You must be in the Global Administrator or Privileged Role Administrator role to configure guest user access. There are no additional licensing requirements to restrict guest access.
## Update in the Azure portal
active-directory B2b Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/b2b-fundamentals.md
This article contains recommendations and best practices for business-to-busines
## B2B recommendations | Recommendation | Comments | | | |
-| For an optimal sign-in experience, federate with identity providers | Whenever possible, federate directly with identity providers to allow invited users to sign in to your shared apps and resources without having to create Microsoft Accounts (MSAs) or Azure AD accounts. You can use the [Google federation feature](google-federation.md) to allow B2B guest users to sign in with their Google accounts. Or, you can use the [Direct federation (preview) feature](direct-federation.md) to set up direct federation with any organization whose identity provider (IdP) supports the SAML 2.0 or WS-Fed protocol. |
+| For an optimal sign-in experience, federate with identity providers | Whenever possible, federate directly with identity providers to allow invited users to sign in to your shared apps and resources without having to create Microsoft Accounts (MSAs) or Azure AD accounts. You can use the [Google federation feature](google-federation.md) to allow B2B guest users to sign in with their Google accounts. Or, you can use the [SAML/WS-Fed identity provider (preview) feature](direct-federation.md) to set up federation with any organization whose identity provider (IdP) supports the SAML 2.0 or WS-Fed protocol. |
| Use the Email one-time passcode feature for B2B guests who can't authenticate by other means | The [Email one-time passcode](one-time-passcode.md) feature authenticates B2B guest users when they can't be authenticated through other means like Azure AD, a Microsoft account (MSA), or Google federation. When the guest user redeems an invitation or accesses a shared resource, they can request a temporary code, which is sent to their email address. Then they enter this code to continue signing in. | | Add company branding to your sign-in page | You can customize your sign-in page so it's more intuitive for your B2B guest users. See how to [add company branding to sign in and Access Panel pages](../fundamentals/customize-branding.md). | | Add your privacy statement to the B2B guest user redemption experience | You can add the URL of your organization's privacy statement to the first time invitation redemption process so that an invited user must consent to your privacy terms to continue. See [How-to: Add your organization's privacy info in Azure Active Directory](../fundamentals/active-directory-properties-area.md). |
active-directory Compare With B2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/compare-with-b2c.md
The following table gives a detailed comparison of the scenarios you can enable
| - | | | | **Primary scenario** | Collaboration using Microsoft applications (Microsoft 365, Teams, etc.) or your own applications (SaaS apps, custom-developed apps, etc.). | Identity and access management for modern SaaS or custom-developed applications (not first-party Microsoft apps). | | **Intended for** | Collaborating with business partners from external organizations like suppliers, partners, vendors. Users appear as guest users in your directory. These users may or may not have managed IT. | Customers of your product. These users are managed in a separate Azure AD directory. |
-| **Identity providers supported** | External users can collaborate using work accounts, school accounts, any email address, SAML and WS-Fed based identity providers, Gmail, and Facebook. | Consumer users with local application accounts (any email address or user name), various supported social identities, and users with corporate and government-issued identities via direct federation. |
+| **Identity providers supported** | External users can collaborate using work accounts, school accounts, any email address, SAML and WS-Fed based identity providers, Gmail, and Facebook. | Consumer users with local application accounts (any email address or user name), various supported social identities, and users with corporate and government-issued identities via SAML/WS-Fed based identity provider federation. |
| **External user management** | External users are managed in the same directory as employees, but are typically annotated as guest users. Guest users can be managed the same way as employees, added to the same groups, and so on. | External users are managed in the Azure AD B2C directory. They're managed separately from the organization's employee and partner directory (if any). | | **Single sign-on (SSO)** | SSO to all Azure AD-connected apps is supported. For example, you can provide access to Microsoft 365 or on-premises apps, and to other SaaS apps such as Salesforce or Workday. | SSO to customer owned apps within the Azure AD B2C tenants is supported. SSO to Microsoft 365 or to other Microsoft SaaS apps isn't supported. | | **Security policy and compliance** | Managed by the host/inviting organization (for example, with [Conditional Access policies](conditional-access.md)). | Managed by the organization via Conditional Access and Identity Protection. |
active-directory Current Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/current-limitations.md
Azure AD B2B is subject to Azure AD service directory limits. For details about
[National clouds](../develop/authentication-national-cloud.md) are physically isolated instances of Azure. B2B collaboration is not supported across national cloud boundaries. For example, if your Azure tenant is in the public, global cloud, you can't invite a user whose account is in a national cloud. To collaborate with the user, ask them for another email address or create a member user account for them in your directory. ## Azure US Government clouds
-Within the Azure US Government cloud, B2B collaboration is supported between tenants that are both within Azure US Government cloud and that both support B2B collaboration. Azure US Government tenants that support B2B collaboration can also collaborate with social users using Microsoft or Google accounts. If you invite a user outside of these groups (for example, if the user is in a tenant that isn't part of the Azure US Government cloud or doesn't yet support B2B collaboration), the invitation will fail or the user won't be able to redeem the invitation. For details about other limitations, see [Azure Active Directory Premium P1 and P2 Variations](../../azure-government/compare-azure-government-global-azure.md#azure-active-directory-premium-p1-and-p2).
+Within the Azure US Government cloud, B2B collaboration is supported between tenants that are both within Azure US Government cloud and that both support B2B collaboration. Azure US Government tenants that support B2B collaboration can also collaborate with social users using Microsoft accounts, Google accounts, or email one-time passcode accounts. If you invite a user outside of these groups (for example, if the user is in a tenant that isn't part of the Azure US Government cloud or doesn't yet support B2B collaboration), the invitation will fail or the user won't be able to redeem the invitation. For details about other limitations, see [Azure Active Directory Premium P1 and P2 Variations](../../azure-government/compare-azure-government-global-azure.md#azure-active-directory-premium-p1-and-p2).
### How can I tell if B2B collaboration is available in my Azure US Government tenant? To find out if your Azure US Government cloud tenant supports B2B collaboration, do the following:
active-directory Direct Federation Adfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/direct-federation-adfs.md
Title: Set up direct federation with an AD FS for B2B - Azure AD
-description: Learn how to set up AD FS as an identity provider for direct federation so guests can sign in to your Azure AD apps
+ Title: Set up SAML/WS-Fed IdP federation with an AD FS for B2B - Azure AD
+description: Learn how to set up AD FS as an identity provider (IdP) for SAML/WS-Fed IdP federation so guests can sign in to your Azure AD apps
-# Example: Direct federation with Active Directory Federation Services (AD FS) (preview)
+# Example: Configure SAML/WS-Fed based identity provider federation with AD FS (preview)
-> [!NOTE]
-> Direct federation is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>[!NOTE]
+>- *Direct federation* in Azure Active Directory is now referred to as *SAML/WS-Fed identity provider (IdP) federation*.
+>- SAML/WS-Fed IdP federation is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-This article describes how to set up [direct federation](direct-federation.md) using Active Directory Federation Services (AD FS) as either a SAML 2.0 or WS-Fed identity provider. To support direct federation, certain attributes and claims must be configured at the identity provider. To illustrate how to configure an identity provider for direct federation, we'll use Active Directory Federation Services (AD FS) as an example. We'll show how to set up AD FS both as a SAML identity provider and as a WS-Fed identity provider.
+This article describes how to set up [SAML/WS-Fed IdP federation](direct-federation.md) using Active Directory Federation Services (AD FS) as either a SAML 2.0 or WS-Fed IdP. To support federation, certain attributes and claims must be configured at the IdP. To illustrate how to configure an IdP for federation, we'll use Active Directory Federation Services (AD FS) as an example. We'll show how to set up AD FS both as a SAML IdP and as a WS-Fed IdP.
> [!NOTE]
-> This article describes how to set up AD FS for both SAML and WS-Fed for illustration purposes. For direct federation integrations where the identity provider is AD FS, we recommend using WS-Fed as the protocol.
+> This article describes how to set up AD FS for both SAML and WS-Fed for illustration purposes. For federation integrations where the IdP is AD FS, we recommend using WS-Fed as the protocol.
+
+## Configure AD FS for SAML 2.0 federation
-## Configure AD FS for SAML 2.0 direct federation
-Azure AD B2B can be configured to federate with identity providers that use the SAML protocol with specific requirements listed below. To illustrate the SAML configuration steps, this section shows how to set up AD FS for SAML 2.0.
+Azure AD B2B can be configured to federate with IdPs that use the SAML protocol with specific requirements listed below. To illustrate the SAML configuration steps, this section shows how to set up AD FS for SAML 2.0.
-To set up direct federation, the following attributes must be received in the SAML 2.0 response from the identity provider. These attributes can be configured by linking to the online security token service XML file or by entering them manually. Step 12 in [Create a test AD FS instance](https://medium.com/in-the-weeds/create-a-test-active-directory-federation-services-3-0-instance-on-an-azure-virtual-machine-9071d978e8ed) describes how to find the AD FS endpoints or how to generate your metadata URL, for example `https://fs.iga.azure-test.net/federationmetadata/2007-06/federationmetadata.xml`.
+To set up federation, the following attributes must be received in the SAML 2.0 response from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually. Step 12 in [Create a test AD FS instance](https://medium.com/in-the-weeds/create-a-test-active-directory-federation-services-3-0-instance-on-an-azure-virtual-machine-9071d978e8ed) describes how to find the AD FS endpoints or how to generate your metadata URL, for example `https://fs.iga.azure-test.net/federationmetadata/2007-06/federationmetadata.xml`.
|Attribute |Value | |||
To set up direct federation, the following attributes must be received in the SA
|Audience |`urn:federation:MicrosoftOnline` | |Issuer |The issuer URI of the partner IdP, for example `http://www.example.com/exk10l6w90DHM0yi...` |
-The following claims need to be configured in the SAML 2.0 token issued by the identity provider:
+The following claims need to be configured in the SAML 2.0 token issued by the IdP:
|Attribute |Value |
The following claims need to be configured in the SAML 2.0 token issued by the i
|emailaddress |`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` |
-The next section illustrates how to configure the required attributes and claims using AD FS as an example of a SAML 2.0 identity provider.
+The next section illustrates how to configure the required attributes and claims using AD FS as an example of a SAML 2.0 IdP.
### Before you begin
An AD FS server must already be set up and functioning before you begin this pro
3. Click **Finish**. 4. The **Edit Claim Rules** window will show the new rules. Click **Apply**.
-5. Click **OK**. The AD FS server is now configured for direct federation using the SAML 2.0 protocol.
+5. Click **OK**. The AD FS server is now configured for federation using the SAML 2.0 protocol.
-## Configure AD FS for WS-Fed direct federation
-Azure AD B2B can be configured to federate with identity providers that use the WS-Fed protocol with the specific requirements listed below. Currently, the two WS-Fed providers have been tested for compatibility with Azure AD include AD FS and Shibboleth. Here, we'll use Active Directory Federation Services (AD FS) as an example of the WS-Fed identity provider. For more information about establishing a relying party trust between a WS-Fed compliant provider with Azure AD, download the Azure AD Identity Provider Compatibility Docs.
+## Configure AD FS for WS-Fed federation
+Azure AD B2B can be configured to federate with IdPs that use the WS-Fed protocol with the specific requirements listed below. Currently, the two WS-Fed providers that have been tested for compatibility with Azure AD are AD FS and Shibboleth. Here, we'll use Active Directory Federation Services (AD FS) as an example of the WS-Fed IdP. For more information about establishing a relying party trust between a WS-Fed compliant provider and Azure AD, download the Azure AD Identity Provider Compatibility Docs.
-To set up direct federation, the following attributes must be received in the WS-Fed message from the identity provider. These attributes can be configured by linking to the online security token service XML file or by entering them manually. Step 12 in [Create a test AD FS instance](https://medium.com/in-the-weeds/create-a-test-active-directory-federation-services-3-0-instance-on-an-azure-virtual-machine-9071d978e8ed) describes how to find the AD FS endpoints or how to generate your metadata URL, for example `https://fs.iga.azure-test.net/federationmetadata/2007-06/federationmetadata.xml`.
+To set up federation, the following attributes must be received in the WS-Fed message from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually. Step 12 in [Create a test AD FS instance](https://medium.com/in-the-weeds/create-a-test-active-directory-federation-services-3-0-instance-on-an-azure-virtual-machine-9071d978e8ed) describes how to find the AD FS endpoints or how to generate your metadata URL, for example `https://fs.iga.azure-test.net/federationmetadata/2007-06/federationmetadata.xml`.
|Attribute |Value | |||
Required claims for the WS-Fed token issued by the IdP:
|ImmutableID |`http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID` | |emailaddress |`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` |
-The next section illustrates how to configure the required attributes and claims using AD FS as an example of a WS-Fed identity provider.
+The next section illustrates how to configure the required attributes and claims using AD FS as an example of a WS-Fed IdP.
### Before you begin An AD FS server must already be set up and functioning before you begin this procedure. For help with setting up an AD FS server, see [Create a test AD FS 3.0 instance on an Azure virtual machine](https://medium.com/in-the-weeds/create-a-test-active-directory-federation-services-3-0-instance-on-an-azure-virtual-machine-9071d978e8ed).
An AD FS server must already be set up and functioning before you begin this pro
1. Select **Finish**. 1. The **Edit Claim Rules** window will show the new rule. Click **Apply**.
-1. Click **OK**. The AD FS server is now configured for direct federation using WS-Fed.
+1. Click **OK**. The AD FS server is now configured for federation using WS-Fed.
## Next steps
-Next, you'll [configure direct federation in Azure AD](direct-federation.md#step-3-configure-direct-federation-in-azure-ad) either in the Azure AD portal or by using PowerShell.
+Next, you'll [configure SAML/WS-Fed IdP federation in Azure AD](direct-federation.md#step-3-configure-samlws-fed-idp-federation-in-azure-ad) either in the Azure AD portal or by using PowerShell.
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/direct-federation.md
Title: Direct federation with an identity provider for B2B - Azure AD
+ Title: Federation with a SAML/WS-Fed identity provider (IdP) for B2B - Azure AD
description: Directly federate with a SAML or WS-Fed identity provider so guests can sign in to your Azure AD apps
-# Direct federation with AD FS and third-party providers for guest users (preview)
+# Federation with SAML/WS-Fed identity providers for guest users (preview)
> [!NOTE]
-> Direct federation is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>- *Direct federation* in Azure Active Directory is now referred to as *SAML/WS-Fed identity provider (IdP) federation*.
+>- SAML/WS-Fed IdP federation is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-This article describes how to set up direct federation with another organization for B2B collaboration. You can set up direct federation with any organization whose identity provider (IdP) supports the SAML 2.0 or WS-Fed protocol.
-When you set up direct federation with a partner's IdP, new guest users from that domain can use their own IdP-managed organizational account to sign in to your Azure AD tenant and start collaborating with you. There's no need for the guest user to create a separate Azure AD account.
+This article describes how to set up federation with any organization whose identity provider (IdP) supports the SAML 2.0 or WS-Fed protocol. When you set up federation with a partner's IdP, new guest users from that domain can use their own IdP-managed organizational account to sign in to your Azure AD tenant and start collaborating with you. There's no need for the guest user to create a separate Azure AD account.
> [!IMPORTANT]
-> - We've removed the limitation that required the authentication URL domain to match the target domain or be from an allowed identity provider. For details, see [Step 1: Determine if the partner needs to update their DNS text records](#step-1-determine-if-the-partner-needs-to-update-their-dns-text-records).
+> - We've removed the limitation that required the authentication URL domain to match the target domain or be from an allowed IdP. For details, see [Step 1: Determine if the partner needs to update their DNS text records](#step-1-determine-if-the-partner-needs-to-update-their-dns-text-records).
>- We now recommend that the partner set the audience of the SAML or WS-Fed based IdP to a tenanted audience. Refer to the [SAML 2.0](#required-saml-20-attributes-and-claims) and [WS-Fed](#required-ws-fed-attributes-and-claims) required attributes and claims sections below.
-## When is a guest user authenticated with direct federation?
After you set up direct federation with an organization, any new guest users you invite will be authenticated using direct federation. It's important to note that setting up direct federation doesn't change the authentication method for guest users who have already redeemed an invitation from you. Here are some examples:
+## When is a guest user authenticated with SAML/WS-Fed IdP federation?
+
+After you set up federation with an organization's SAML/WS-Fed IdP, any new guest users you invite will be authenticated using that SAML/WS-Fed IdP. It's important to note that setting up federation doesn't change the authentication method for guest users who have already redeemed an invitation from you. Here are some examples:
+
+ - If guest users have already redeemed invitations from you, and you subsequently set up federation with the organization's SAML/WS-Fed IdP, those guest users will continue to use the same authentication method they used before you set up federation.
+ - If you set up federation with an organization's SAML/WS-Fed IdP and invite guest users, and then the partner organization later moves to Azure AD, the guest users who have already redeemed invitations will continue to use the federated SAML/WS-Fed IdP, as long as the federation policy in your tenant exists.
+ - If you delete federation with an organization's SAML/WS-Fed IdP, any guest users currently using the SAML/WS-Fed IdP will be unable to sign in.
In any of these scenarios, you can update a guest user's authentication method by [resetting their redemption status](reset-redemption-status.md).
-Direct federation is tied to domain namespaces, such as contoso.com and fabrikam.com. When establishing a direct federation configuration with AD FS or a third-party IdP, organizations associate one or more domain namespaces to these IdPs.
+SAML/WS-Fed IdP federation is tied to domain namespaces, such as contoso.com and fabrikam.com. When establishing federation with AD FS or a third-party IdP, organizations associate one or more domain namespaces to these IdPs.
## End-user experience
-With direct federation, guest users sign into your Azure AD tenant using their own organizational account. When they are accessing shared resources and are prompted for sign-in, direct federation users are redirected to their IdP. After successful sign-in, they are returned to Azure AD to access resources. Direct federation users' refresh tokens are valid for 12 hours, the [default length for passthrough refresh token](../develop/active-directory-configurable-token-lifetimes.md#configurable-token-lifetime-properties) in Azure AD. If the federated IdP has SSO enabled, the user will experience SSO and will not see any sign-in prompt after initial authentication.
+
+With SAML/WS-Fed IdP federation, guest users sign into your Azure AD tenant using their own organizational account. When they are accessing shared resources and are prompted for sign-in, users are redirected to their IdP. After successful sign-in, users are returned to Azure AD to access resources. Their refresh tokens are valid for 12 hours, the [default length for passthrough refresh token](../develop/active-directory-configurable-token-lifetimes.md#configurable-token-lifetime-properties) in Azure AD. If the federated IdP has SSO enabled, the user will experience SSO and will not see any sign-in prompt after initial authentication.
## Sign-in endpoints
-Direct federation guest users can now sign in to your multi-tenant or Microsoft first-party apps by using a [common endpoint](redemption-experience.md#redemption-and-sign-in-through-a-common-endpoint) (in other words, a general app URL that doesn't include your tenant context). During the sign-in process, the guest user chooses **Sign-in options**, and then selects **Sign in to an organization**. The user then types the name of your organization and continues signing in using their own credentials.
+SAML/WS-Fed IdP federation guest users can now sign in to your multi-tenant or Microsoft first-party apps by using a [common endpoint](redemption-experience.md#redemption-and-sign-in-through-a-common-endpoint) (in other words, a general app URL that doesn't include your tenant context). During the sign-in process, the guest user chooses **Sign-in options**, and then selects **Sign in to an organization**. The user then types the name of your organization and continues signing in using their own credentials.
-Direct federation guest users can also use application endpoints that include your tenant information, for example:
+SAML/WS-Fed IdP federation guest users can also use application endpoints that include your tenant information, for example:
* `https://myapps.microsoft.com/?tenantid=<your tenant ID>`
* `https://myapps.microsoft.com/<your verified domain>.onmicrosoft.com`
* `https://portal.azure.com/<your tenant ID>`
-You can also give Direct federation guest users a direct link to an application or resource by including your tenant information, for example `https://myapps.microsoft.com/signin/Twitter/<application ID?tenantId=<your tenant ID>`.
+You can also give guest users a direct link to an application or resource by including your tenant information, for example `https://myapps.microsoft.com/signin/Twitter/<application ID>?tenantId=<your tenant ID>`.
## Limitations

### DNS-verified domains in Azure AD
-The domain you want to federate with must ***not*** be DNS-verified in Azure AD. You're allowed to set up direct federation with unmanaged (email-verified or "viral") Azure AD tenants because they aren't DNS-verified.
+The domain you want to federate with must ***not*** be DNS-verified in Azure AD. You're allowed to set up federation with unmanaged (email-verified or "viral") Azure AD tenants because they aren't DNS-verified.
### Signing certificate renewal
-If you specify the metadata URL in the identity provider settings, Azure AD will automatically renew the signing certificate when it expires. However, if the certificate is rotated for any reason before the expiration time, or if you don't provide a metadata URL, Azure AD will be unable to renew it. In this case, you'll need to update the signing certificate manually.
+If you specify the metadata URL in the IdP settings, Azure AD will automatically renew the signing certificate when it expires. However, if the certificate is rotated for any reason before the expiration time, or if you don't provide a metadata URL, Azure AD will be unable to renew it. In this case, you'll need to update the signing certificate manually.
### Limit on federation relationships
-Currently, a maximum of 1,000 federation relationships is supported. This limit includes both [internal federations](/powershell/module/msonline/set-msoldomainfederationsettings) and direct federations.
+Currently, a maximum of 1,000 federation relationships is supported. This limit includes both [internal federations](/powershell/module/msonline/set-msoldomainfederationsettings) and SAML/WS-Fed IdP federations.
### Limit on multiple domains
-We don't currently support direct federation with multiple domains from the same tenant.
+We don't currently support SAML/WS-Fed IdP federation with multiple domains from the same tenant.
## Frequently asked questions
-### Can I set up direct federation with a domain for which an unmanaged (email-verified) tenant exists?
-Yes. If the domain hasn't been verified and the tenant hasn't undergone an [admin takeover](../enterprise-users/domains-admin-takeover.md), you can set up direct federation with that domain. Unmanaged, or email-verified, tenants are created when a user redeems a B2B invitation or performs a self-service sign-up for Azure AD using a domain that doesn't currently exist. You can set up direct federation with these domains. If you try to set up direct federation with a DNS-verified domain, either in the Azure portal or via PowerShell, you'll see an error.
-### If direct federation and email one-time passcode authentication are both enabled, which method takes precedence?
-When direct federation is established with a partner organization, it takes precedence over email one-time passcode authentication for new guest users from that organization. If a guest user redeemed an invitation using one-time passcode authentication before you set up direct federation, they'll continue to use one-time passcode authentication.
-### Does direct federation address sign-in issues due to a partially synced tenancy?
-No, the [email one-time passcode](one-time-passcode.md) feature should be used in this scenario. A "partially synced tenancy" refers to a partner Azure AD tenant where on-premises user identities aren't fully synced to the cloud. A guest whose identity doesn't yet exist in the cloud but who tries to redeem your B2B invitation won't be able to sign in. The one-time passcode feature would allow this guest to sign in. The direct federation feature addresses scenarios where the guest has their own IdP-managed organizational account, but the organization has no Azure AD presence at all.
-### Once Direct Federation is configured with an organization, does each guest need to be sent and redeem an individual invitation?
-Setting up direct federation doesn't change the authentication method for guest users who have already redeemed an invitation from you. You can update a guest user's authentication method by [resetting their redemption status](reset-redemption-status.md).
+### Can I set up SAML/WS-Fed IdP federation with a domain for which an unmanaged (email-verified) tenant exists?
+Yes. If the domain hasn't been verified and the tenant hasn't undergone an [admin takeover](../enterprise-users/domains-admin-takeover.md), you can set up federation with that domain. Unmanaged, or email-verified, tenants are created when a user redeems a B2B invitation or performs a self-service sign-up for Azure AD using a domain that doesn't currently exist. You can set up federation with these domains. If you try to set up federation with a DNS-verified domain, either in the Azure portal or via PowerShell, you'll see an error.
+### If SAML/WS-Fed IdP federation and email one-time passcode authentication are both enabled, which method takes precedence?
+When SAML/WS-Fed IdP federation is established with a partner organization, it takes precedence over email one-time passcode authentication for new guest users from that organization. If a guest user redeemed an invitation using one-time passcode authentication before you set up SAML/WS-Fed IdP federation, they'll continue to use one-time passcode authentication.
+### Does SAML/WS-Fed IdP federation address sign-in issues due to a partially synced tenancy?
+No, the [email one-time passcode](one-time-passcode.md) feature should be used in this scenario. A "partially synced tenancy" refers to a partner Azure AD tenant where on-premises user identities aren't fully synced to the cloud. A guest whose identity doesn't yet exist in the cloud but who tries to redeem your B2B invitation won't be able to sign in. The one-time passcode feature would allow this guest to sign in. The SAML/WS-Fed IdP federation feature addresses scenarios where the guest has their own IdP-managed organizational account, but the organization has no Azure AD presence at all.
+### Once SAML/WS-Fed IdP federation is configured with an organization, does each guest need to be sent an individual invitation and redeem it?
+Setting up SAML/WS-Fed IdP federation doesn't change the authentication method for guest users who have already redeemed an invitation from you. You can update a guest user's authentication method by [resetting their redemption status](reset-redemption-status.md).
## Step 1: Determine if the partner needs to update their DNS text records
-Depending on the partner's IdP, the partner might need to update their DNS records to enable direct federation with you. Use the following steps to determine if DNS updates are needed.
+Depending on the partner's IdP, the partner might need to update their DNS records to enable federation with you. Use the following steps to determine if DNS updates are needed.
-1. If the partner's IdP is one of these allowed identity providers, no DNS changes are needed (this list is subject to change):
+1. If the partner's IdP is one of these allowed IdPs, no DNS changes are needed (this list is subject to change):
   - accounts.google.com
   - pingidentity.com
   - idaptive.app
   - idaptive.qa
-2. If the IdP is not one of the allowed providers listed in the previous step, check the partner's IdP authentication URL to see if the domain matches the target domain or a host within the target domain. In other words, when setting up direct federation for `fabrikam.com`:
+2. If the IdP is not one of the allowed providers listed in the previous step, check the partner's IdP authentication URL to see if the domain matches the target domain or a host within the target domain. In other words, when setting up federation for `fabrikam.com`:
   - If the authentication URL is `https://fabrikam.com` or `https://sts.fabrikam.com/adfs` (a host in the same domain), no DNS changes are needed.
   - If the authentication URL is `https://fabrikamconglomerate.com/adfs` or `https://fabrikam.com.uk/adfs`, the domain doesn't match the fabrikam.com domain, so the partner will need to add a text record for the authentication URL to their DNS configuration; go to the next step.
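The matching rule in step 2 can be sketched as a small PowerShell check. This is a hypothetical helper for illustration only (`Test-NeedsDnsTxtRecord` isn't part of any module); it assumes the simple host-suffix comparison described above:

```powershell
# Hypothetical helper: returns $true when the partner must add a DNS TXT record,
# i.e. when the authentication URL's host is neither the target domain
# nor a host within that domain.
function Test-NeedsDnsTxtRecord {
    param([string]$AuthUrl, [string]$TargetDomain)
    $authHost = ([System.Uri]$AuthUrl).Host
    return -not ($authHost -eq $TargetDomain -or $authHost.EndsWith(".$TargetDomain"))
}

Test-NeedsDnsTxtRecord "https://sts.fabrikam.com/adfs" "fabrikam.com"           # False: host within the target domain
Test-NeedsDnsTxtRecord "https://fabrikamconglomerate.com/adfs" "fabrikam.com"   # True: DNS TXT record needed
```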
3. If DNS changes are needed based on the previous step, ask the partner to add a TXT record to their domain's DNS records, like the following example: `fabrikam.com.  IN   TXT   DirectFedAuthUrl=https://fabrikamconglomerate.com/adfs`
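Once the partner has published the record, you can optionally confirm it's visible before continuing. For example, on Windows PowerShell (assuming the DnsClient module's `Resolve-DnsName` cmdlet is available; `fabrikam.com` is a placeholder for the partner domain):

```powershell
# Look up the partner domain's TXT records and show any DirectFedAuthUrl entry.
Resolve-DnsName -Name "fabrikam.com" -Type TXT |
    Where-Object { $_.Strings -match "DirectFedAuthUrl" } |
    Select-Object -ExpandProperty Strings
```

DNS propagation can take time, so an empty result immediately after the record is added doesn't necessarily mean the record is missing.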
-## Step 2: Configure the partner organization's identity provider
-Next, your partner organization needs to configure their identity provider with the required claims and relying party trusts.
+## Step 2: Configure the partner organization's IdP
+
+Next, your partner organization needs to configure their IdP with the required claims and relying party trusts.
> [!NOTE]
-> To illustrate how to configure an identity provider for direct federation, we'll use Active Directory Federation Services (AD FS) as an example. See the article [Configure direct federation with AD FS](direct-federation-adfs.md), which gives examples of how to configure AD FS as a SAML 2.0 or WS-Fed identity provider in preparation for direct federation.
+> To illustrate how to configure a SAML/WS-Fed IdP for federation, we'll use Active Directory Federation Services (AD FS) as an example. See the article [Configure SAML/WS-Fed IdP federation with AD FS](direct-federation-adfs.md), which gives examples of how to configure AD FS as a SAML 2.0 or WS-Fed IdP in preparation for federation.
### SAML 2.0 configuration
-Azure AD B2B can be configured to federate with identity providers that use the SAML protocol with specific requirements listed below. For more information about setting up a trust between your SAML identity provider and Azure AD, see [Use a SAML 2.0 Identity Provider (IdP) for Single Sign-On](../hybrid/how-to-connect-fed-saml-idp.md).
+Azure AD B2B can be configured to federate with IdPs that use the SAML protocol with specific requirements listed below. For more information about setting up a trust between your SAML IdP and Azure AD, see [Use a SAML 2.0 Identity Provider (IdP) for Single Sign-On](../hybrid/how-to-connect-fed-saml-idp.md).
> [!NOTE]
-> The target domain for direct federation must not be DNS-verified on Azure AD. The authentication URL domain must match the target domain or it must be the domain of an allowed identity provider. See the [Limitations](#limitations) section for details.
+> The target domain for SAML/WS-Fed IdP federation must not be DNS-verified on Azure AD. The authentication URL domain must match the target domain or it must be the domain of an allowed IdP. See the [Limitations](#limitations) section for details.
#### Required SAML 2.0 attributes and claims
-The following tables show requirements for specific attributes and claims that must be configured at the third-party identity provider. To set up direct federation, the following attributes must be received in the SAML 2.0 response from the identity provider. These attributes can be configured by linking to the online security token service XML file or by entering them manually.
+The following tables show requirements for specific attributes and claims that must be configured at the third-party IdP. To set up federation, the following attributes must be received in the SAML 2.0 response from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually.
Required attributes for the SAML 2.0 response from the IdP:

|Attribute |Value |
|---|---|
|AssertionConsumerService |`https://login.microsoftonline.com/login.srf` |
-|Audience |`https://login.microsoftonline.com/<tenant ID>/` (Recommended tenanted audience.) Replace `<tenant ID>` with the tenant ID of the Azure AD tenant you're setting up direct federation with.<br><br>`urn:federation:MicrosoftOnline` (This value will be deprecated.) |
+|Audience |`https://login.microsoftonline.com/<tenant ID>/` (Recommended tenanted audience.) Replace `<tenant ID>` with the tenant ID of the Azure AD tenant you're setting up federation with.<br><br>`urn:federation:MicrosoftOnline` (This value will be deprecated.) |
|Issuer |The issuer URI of the partner IdP, for example `http://www.example.com/exk10l6w90DHM0yi...` |
Required claims for the SAML 2.0 token issued by the IdP:
|NameID Format |`urn:oasis:names:tc:SAML:2.0:nameid-format:persistent` |
|emailaddress |`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` |
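For orientation, the claims above might appear in a partner's SAML 2.0 assertion roughly as follows. This is an illustrative fragment only — the namespace prefix, NameID value, and email address are placeholders, not a complete or canonical assertion:

```xml
<!-- Illustrative fragment: persistent NameID plus the emailaddress claim -->
<saml:Subject>
  <saml:NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent">aB3dE...xYz</saml:NameID>
</saml:Subject>
<saml:AttributeStatement>
  <saml:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress">
    <saml:AttributeValue>user@fabrikam.com</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
```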
-### WS-Fed configuration
-Azure AD B2B can be configured to federate with identity providers that use the WS-Fed protocol with some specific requirements as listed below. Currently, the two WS-Fed providers have been tested for compatibility with Azure AD include AD FS and Shibboleth. For more information about establishing a relying party trust between a WS-Fed compliant provider with Azure AD, see the "STS Integration Paper using WS Protocols" available in the [Azure AD Identity Provider Compatibility Docs](https://www.microsoft.com/download/details.aspx?id=56843).
+### WS-Fed configuration
+
+Azure AD B2B can be configured to federate with IdPs that use the WS-Fed protocol with some specific requirements as listed below. Currently, the two WS-Fed providers that have been tested for compatibility with Azure AD are AD FS and Shibboleth. For more information about establishing a relying party trust between a WS-Fed compliant provider and Azure AD, see the "STS Integration Paper using WS Protocols" available in the [Azure AD Identity Provider Compatibility Docs](https://www.microsoft.com/download/details.aspx?id=56843).
> [!NOTE]
-> The target domain for direct federation must not be DNS-verified on Azure AD. The authentication URL domain must match either the target domain or the domain of an allowed identity provider. See the [Limitations](#limitations) section for details.
+> The target domain for federation must not be DNS-verified on Azure AD. The authentication URL domain must match either the target domain or the domain of an allowed IdP. See the [Limitations](#limitations) section for details.
#### Required WS-Fed attributes and claims
-The following tables show requirements for specific attributes and claims that must be configured at the third-party WS-Fed identity provider. To set up direct federation, the following attributes must be received in the WS-Fed message from the identity provider. These attributes can be configured by linking to the online security token service XML file or by entering them manually.
+The following tables show requirements for specific attributes and claims that must be configured at the third-party WS-Fed IdP. To set up federation, the following attributes must be received in the WS-Fed message from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually.
Required attributes in the WS-Fed message from the IdP:

|Attribute |Value |
|---|---|
|PassiveRequestorEndpoint |`https://login.microsoftonline.com/login.srf` |
-|Audience |`https://login.microsoftonline.com/<tenant ID>/` (Recommended tenanted audience.) Replace `<tenant ID>` with the tenant ID of the Azure AD tenant you're setting up direct federation with.<br><br>`urn:federation:MicrosoftOnline` (This value will be deprecated.) |
+|Audience |`https://login.microsoftonline.com/<tenant ID>/` (Recommended tenanted audience.) Replace `<tenant ID>` with the tenant ID of the Azure AD tenant you're federating with.<br><br>`urn:federation:MicrosoftOnline` (This value will be deprecated.) |
|Issuer |The issuer URI of the partner IdP, for example `http://www.example.com/exk10l6w90DHM0yi...` |

Required claims for the WS-Fed token issued by the IdP:

|ImmutableID |`http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID` |
|emailaddress |`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` |
-## Step 3: Configure direct federation in Azure AD
-Next, you'll configure federation with the identity provider configured in step 1 in Azure AD. You can use either the Azure AD portal or PowerShell. It might take 5-10 minutes before the direct federation policy takes effect. During this time, don't attempt to redeem an invitation for the direct federation domain. The following attributes are required:
+## Step 3: Configure SAML/WS-Fed IdP federation in Azure AD
+
+Next, you'll configure federation in Azure AD with the IdP you set up in step 2. You can use either the Azure AD portal or PowerShell. It might take 5-10 minutes before the federation policy takes effect. During this time, don't attempt to redeem an invitation for the federation domain. The following attributes are required:
+
+- Issuer URI of partner IdP
+- Passive authentication endpoint of partner IdP (only https is supported)
+- Certificate
-### To configure direct federation in the Azure AD portal
+### To configure federation in the Azure AD portal
1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
2. Select **External Identities** > **All identity providers**.
![Screenshot showing parse button on the SAML or WS-Fed IdP page](media/direct-federation/new-saml-wsfed-idp-parse.png)
-5. Enter your partner organization's domain name, which will be the target domain name for direct federation
+5. Enter your partner organization's domain name, which will be the target domain name for federation.
6. You can upload a metadata file to populate metadata details. If you choose to input metadata manually, enter the following information:
   - Domain name of partner IdP
   - Entity ID of partner IdP
7. Select **Save**.
-### To configure direct federation in Azure AD using PowerShell
+### To configure SAML/WS-Fed IdP federation in Azure AD using PowerShell
1. Install the latest version of the Azure AD PowerShell for Graph module ([AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview)). If you need detailed steps, the Quickstart includes the guidance, [PowerShell module](b2b-quickstart-invite-powershell.md#prerequisites).
-2. Run the following command:
+2. Run the following command:
+
+   ```powershell
+   Connect-AzureAD
+   ```
-3. At the sign-in prompt, sign in with the managed Global Administrator account.
+
+3. At the sign-in prompt, sign in with the managed Global Administrator account.
4. Run the following commands, replacing the values from the federation metadata file. For AD FS Server and Okta, the federation file is federationmetadata.xml, for example: `https://sts.totheclouddemo.com/federationmetadata/2007-06/federationmetadata.xml`.

   ```powershell
   New-AzureADExternalDomainFederation -ExternalDomainName $domainName -FederationSettings $federationSettings
   ```
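The `$domainName` and `$federationSettings` variables come from the partner's federation metadata. As a sketch of how they might be populated — all values below are placeholders, and the property names assume the `Microsoft.Open.AzureAD.Model.DomainFederationSettings` type used by the AzureADPreview module, so verify them against your installed module version:

```powershell
# Placeholder values; take the real ones from the partner's federation metadata file.
$domainName = "fabrikam.com"

# Build the federation settings object expected by -FederationSettings.
$federationSettings = New-Object Microsoft.Open.AzureAD.Model.DomainFederationSettings
$federationSettings.IssuerUri           = "http://fabrikam.com/adfs/services/trust/"
$federationSettings.PassiveLogOnUri     = "https://sts.fabrikam.com/adfs/ls/"
$federationSettings.LogOffUri           = "https://sts.fabrikam.com/adfs/ls/"
$federationSettings.MetadataExchangeUri = "https://sts.fabrikam.com/adfs/services/trust/mex"
$federationSettings.PreferredAuthenticationProtocol = "WsFed"   # or "Samlp" for a SAML 2.0 IdP
$federationSettings.SigningCertificate  = "MIIC3jCC..."         # base64-encoded certificate, truncated placeholder

New-AzureADExternalDomainFederation -ExternalDomainName $domainName -FederationSettings $federationSettings
```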
-## Step 4: Test direct federation in Azure AD
-Now test your direct federation setup by inviting a new B2B guest user. For details, see [Add Azure AD B2B collaboration users in the Azure portal](add-users-administrator.md).
+## Step 4: Test SAML/WS-Fed IdP federation in Azure AD
+Now test your federation setup by inviting a new B2B guest user. For details, see [Add Azure AD B2B collaboration users in the Azure portal](add-users-administrator.md).
-## How do I edit a direct federation relationship?
+## How do I edit a SAML/WS-Fed IdP federation relationship?
1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
2. Select **External Identities**.
6. Select **Save**.
-## How do I remove direct federation?
-You can remove your direct federation setup. If you do, direct federation guest users who have already redeemed their invitations won't be able to sign in. But you can give them access to your resources again by [resetting their redemption status](reset-redemption-status.md).
-To remove direct federation with an identity provider in the Azure AD portal:
+## How do I remove federation?
-1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
+You can remove your federation setup. If you do, federation guest users who have already redeemed their invitations won't be able to sign in. But you can give them access to your resources again by [resetting their redemption status](reset-redemption-status.md).
+To remove federation with an IdP in the Azure AD portal:
+
+1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
2. Select **External Identities**.
3. Select **All identity providers**.
-4. Select the identity provider, and then select **Delete**.
+4. Select the identity provider, and then select **Delete**.
5. Select **Yes** to confirm deletion.
-To remove direct federation with an identity provider by using PowerShell:
+To remove federation with an identity provider by using PowerShell:
+ 1. Install the latest version of the Azure AD PowerShell for Graph module ([AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview)).
-2. Run the following command:
+2. Run the following command:
+
+   ```powershell
+   Connect-AzureAD
+   ```
-3. At the sign-in prompt, sign in with the managed Global Administrator account.
+
+3. At the sign-in prompt, sign in with the managed Global Administrator account.
4. Enter the following command:

   ```powershell
   Remove-AzureADExternalDomainFederation -ExternalDomainName $domainName
   ```
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/google-federation.md
After you've added Google as one of your application's sign-in options, on the *
![Sign in options for Google users](media/google-federation/sign-in-with-google-overview.png)

> [!NOTE]
-> Google federation is designed specifically for Gmail users. To federate with G Suite domains, use [direct federation](direct-federation.md).
+> Google federation is designed specifically for Gmail users. To federate with G Suite domains, use [SAML/WS-Fed identity provider federation](direct-federation.md).
> [!IMPORTANT]
> **Starting in the second half of 2021**, Google is [deprecating web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If you're using Google federation for B2B invitations or [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md), or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. [Learn more](#deprecation-of-web-view-sign-in-support).
This change does not affect:
- Microsoft apps on Windows
- Web apps
- Mobile apps using system web-views for authentication ([SFSafariViewController](https://developer.apple.com/documentation/safariservices/sfsafariviewcontroller) on iOS, [Custom Tabs](https://developer.chrome.com/docs/android/custom-tabs/overview/) on Android).
-- G Suite identities, for example when you're using SAML-based [direct federation](direct-federation.md) with G Suite
+- G Suite identities, for example when you're using [SAML-based federation](direct-federation.md) with G Suite
We're confirming with Google whether this change affects the following:

- Windows apps that use the Web Account Manager (WAM) or Web Authentication Broker (WAB).
active-directory Identity Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/identity-providers.md
In addition to Azure AD accounts, External Identities offers a variety of identi
- **Facebook**: When building an app, you can configure self-service sign-up and enable Facebook federation so that users can sign up for your app using their own Facebook accounts. Facebook can only be used for self-service sign-up user flows and isn't available as a sign-in option when users are redeeming invitations from you. See how to [add Facebook as an identity provider](facebook-federation.md).
-- **Direct federation**: You can also set up direct federation with any external identity provider that supports the SAML or WS-Fed protocols. Direct federation allows external users to redeem invitations from you by signing in to your apps with their existing social or enterprise accounts. See how to [set up direct federation](direct-federation.md).
+- **SAML/WS-Fed identity provider federation**: You can also set up federation with any external IdP that supports the SAML or WS-Fed protocols. SAML/WS-Fed IdP federation allows external users to redeem invitations from you by signing in to your apps with their existing social or enterprise accounts. See how to [set up SAML/WS-Fed IdP federation](direct-federation.md).
> [!NOTE]
- > Direct federation identity providers can't be used in your self-service sign-up user flows.
+ > Federated SAML/WS-Fed IdPs can't be used in your self-service sign-up user flows.
## Adding social identity providers
To learn how to add identity providers for sign-in to your applications, refer t
- [Add email one-time passcode authentication](one-time-passcode.md) - [Add Google](google-federation.md) as an allowed social identity provider - [Add Facebook](facebook-federation.md) as an allowed social identity provider-- [Set up direct federation](direct-federation.md) with any organization whose identity provider supports the SAML 2.0 or WS-Fed protocol. Note that direct federation is not an option for self-service sign-up user flows.
+- [Set up SAML/WS-Fed IdP federation](direct-federation.md) with any organization whose identity provider supports the SAML 2.0 or WS-Fed protocol. Note that SAML/WS-Fed IdP federation is not an option for self-service sign-up user flows.
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/redemption-experience.md
When a user clicks the **Accept invitation** link in an [invitation email](invit
1. Azure AD performs user-based discovery to determine if the user exists in an [existing Azure AD tenant](./what-is-b2b.md#easily-invite-guest-users-from-the-azure-ad-portal).
-2. If an admin has enabled [direct federation](./direct-federation.md), Azure AD checks if the user's domain suffix matches the domain of a configured SAML/WS-Fed identity provider and redirects the user to the pre-configured identity provider.
+2. If an admin has enabled [SAML/WS-Fed IdP federation](./direct-federation.md), Azure AD checks if the user's domain suffix matches the domain of a configured SAML/WS-Fed identity provider and redirects the user to the pre-configured identity provider.
3. If an admin has enabled [Google federation](./google-federation.md), Azure AD checks if the user's domain suffix is gmail.com or googlemail.com and redirects the user to Google.
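The domain-based routing in steps 2 and 3 can be sketched as a simple suffix lookup. This is an illustrative model only, not Microsoft's implementation; the function and parameter names are hypothetical:

```python
def route_invited_user(email: str, federated_domains: set,
                       google_federation_enabled: bool) -> str:
    """Decide which identity provider an invited guest is redirected to."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in federated_domains:
        return "saml-wsfed"      # step 2: a configured SAML/WS-Fed IdP
    if google_federation_enabled and domain in ("gmail.com", "googlemail.com"):
        return "google"          # step 3: Google federation
    return "default"             # continue with the remaining redemption checks
```

For example, `route_invited_user("a@gmail.com", set(), True)` routes to Google, while a user whose domain matches a configured federation is sent to that IdP first.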
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory external identities" description: "New and updated documentation for the Azure Active Directory external identities." Previously updated : 04/05/2021 Last updated : 05/04/2021
Welcome to what's new in Azure Active Directory external identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the external identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## April 2021
+
+### Updated articles
+
+- [Add Google as an identity provider for B2B guest users](google-federation.md)
+- [Example: Direct federation with Active Directory Federation Services (AD FS) (preview)](direct-federation-adfs.md)
+- [Direct federation with AD FS and third-party providers for guest users (preview)](direct-federation.md)
+- [Email one-time passcode authentication](one-time-passcode.md)
+- [Reset redemption status for a guest user (Preview)](reset-redemption-status.md)
+- [The elements of the B2B collaboration invitation email - Azure Active Directory](invitation-email-elements.md)
+- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
+- [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md)
+- [Conditional Access for B2B collaboration users](conditional-access.md)
+ ## March 2021 ### New articles
Welcome to what's new in Azure Active Directory external identities documentatio
- [Azure Active Directory B2B collaboration API and customization](customize-invitation-api.md) - [Invite internal users to B2B collaboration](invite-internal-users.md) - [Microsoft 365 external sharing and Azure Active Directory (Azure AD) B2B collaboration](o365-external-user.md)-- [Direct federation with AD FS and third-party providers for guest users (preview)](direct-federation.md)-
-## January 2021
-
-### Updated articles
-- [Allow or block invitations to B2B users from specific organizations](allow-deny-list.md)-- [How users in your organization can invite guest users to an app](add-users-information-worker.md)--
-## December 2020
-
-### Updated articles
-- [Azure Active Directory B2B collaboration FAQs](faq.md)-- [Add Google as an identity provider for B2B guest users](google-federation.md)-- [Identity Providers for External Identities](identity-providers.md)-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)-- [Add an API connector to a user flow](self-service-sign-up-add-api-connector.md)-- [Add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md)-- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)-- [What is guest user access in Azure Active Directory B2B?](what-is-b2b.md)-- [Azure Active Directory B2B best practices](b2b-fundamentals.md)-- [Enable B2B external collaboration and manage who can invite guests](delegate-invitations.md)-- [Email one-time passcode authentication](one-time-passcode.md)-
-## November 2020
-
-### Updated articles
-- [Microsoft 365 external sharing and Azure Active Directory (Azure AD) B2B collaboration](o365-external-user.md)-- [Grant locally-managed partner accounts access to cloud resources using Azure AD B2B collaboration](hybrid-on-premises-to-cloud.md)-- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)--
-## October 2020
-
-### Updated articles
-- [Add Google as an identity provider for B2B guest users](google-federation.md)-- [How users in your organization can invite guest users to an app](add-users-information-worker.md)-- [Allow or block invitations to B2B users from specific organizations](allow-deny-list.md)-- [Azure Active Directory B2B collaboration FAQs](faq.md)-- [External Identities documentation](index.yml)-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)-- [Add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md)-- [What are External Identities in Azure Active Directory?](compare-with-b2c.md)-- [Add an API connector to a user flow](self-service-sign-up-add-api-connector.md)--
-## September 2020
-
-### Updated articles
-- [The elements of the B2B collaboration invitation email - Azure Active Directory](invitation-email-elements.md)-- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)-- [Billing model for Azure AD External Identities](external-identities-pricing.md)-- [Add Google as an identity provider for B2B guest users](google-federation.md)--
-## August 2020
-
-### New articles
-- [Billing model for Azure AD External Identities](external-identities-pricing.md)--
-### Updated articles
-- [Enable B2B external collaboration and manage who can invite guests](delegate-invitations.md)-- [Add an API connector to a user flow](self-service-sign-up-add-api-connector.md)-- [Add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md)--
+- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
active-directory Active Directory Users Profile Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-users-profile-azure-portal.md
Previously updated : 04/11/2019 Last updated : 05/04/2021
Add user profile information, including a profile picture, job-specific informat
As you'll see, there's more information available in a user's profile than what you're able to add during the user's creation. All this additional information is optional and can be added as needed by your organization. ## To add or change profile information
-1. Sign in to the [Azure portal](https://portal.azure.com/) as a User administrator for the organization.
+
+> [!NOTE]
+> The user name and email address properties can't contain accent characters.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) in the User Administrator role for the organization.
2. Select **Azure Active Directory**, select **Users**, and then select a user. For example, _Alain Charon_.
active-directory Add Users Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/add-users-azure-active-directory.md
Previously updated : 03/05/2021 Last updated : 05/04/2021
Add new users or delete existing users from your Azure Active Directory (Azure A
You can create a new user using the Azure Active Directory portal.
+> [!NOTE]
+> The user name and email address properties can't contain accent characters.
+ To add a new user, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com/) as a User administrator for the organization.
+1. Sign in to the [Azure portal](https://portal.azure.com/) in the User Administrator role for the organization.
1. Search for and select *Azure Active Directory* from any page.
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-principal.md
A given application instance has two distinct properties: the ApplicationID (als
> You may find that the terms application and service principal are used interchangeably when loosely referring to an application in the context of authentication related tasks. However, they are two different representations of applications in Azure AD.
-The ApplicationID represents the global application and is the same for all the application instances across tenants. The ObjectID is a unique value for an application object and represents the service principal. As with users, groups, and other resources, the ObjectID helps uniquely identify an application instance in Azure AD.
+The ApplicationID represents the global application and is the same for all the application instances across tenants. The ObjectID is a unique value for an application object. As with users, groups, and other resources, the ObjectID helps uniquely identify an application instance in Azure AD.
For more detailed information on this topic, see [Application and service principal relationship](../develop/app-objects-and-service-principals.md).
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
For information about which PowerShell cmdlets to use, see [Azure AD 2.0 preview
1. Enable *password hash sync* from the [Optional features](how-to-connect-install-custom.md#optional-features) page in Azure AD Connect. 
- ![Screenshot of the "Optional features" page in Azure Active Directory Connect](media/how-to-connect-staged-rollout/sr1.png)
+ ![Screenshot of the "Optional features" page in Azure Active Directory Connect](media/how-to-connect-staged-rollout/staged-1.png)
1. Ensure that a full *password hash sync* cycle has run so that all the users' password hashes have been synchronized to Azure AD. To check the status of *password hash sync*, you can use the PowerShell diagnostics in [Troubleshoot password hash sync with Azure AD Connect sync](tshoot-connect-password-hash-synchronization.md).
- ![Screenshot of the AADConnect Troubleshooting log](./media/how-to-connect-staged-rollout/sr2.png)
+ ![Screenshot of the AADConnect Troubleshooting log](./media/how-to-connect-staged-rollout/staged-2.png)
If you want to test *pass-through authentication* sign-in by using staged rollout, enable it by following the pre-work instructions in the next section.
Enable *seamless SSO* by doing the following:
5. Call `Get-AzureADSSOStatus | ConvertFrom-Json`. This command displays a list of Active Directory forests (see the "Domains" list) on which this feature has been enabled. By default, it is set to false at the tenant level.
- ![Example of the Windows PowerShell output](./media/how-to-connect-staged-rollout/sr3.png)
+ ![Example of the Windows PowerShell output](./media/how-to-connect-staged-rollout/staged-3.png)
6. Call `$creds = Get-Credential`. At the prompt, enter the domain administrator credentials for the intended Active Directory forest.
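The `Get-AzureADSSOStatus` output in step 5 is JSON, which `ConvertFrom-Json` turns into objects. For scripting outside PowerShell, the same payload can be parsed directly; the `Domains` list is described above, but the `Enable` flag name in this sketch is an assumption for illustration:

```python
import json

# Hypothetical sample of the cmdlet's JSON output (forest names are made up).
raw = '{"Enable": true, "Domains": ["contoso.local", "fabrikam.local"]}'
status = json.loads(raw)
# Forests where seamless SSO is enabled; empty if the tenant-level flag is off.
enabled_forests = status["Domains"] if status.get("Enable") else []
```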
We've enabled audit events for the various actions we perform for staged rollout
>[!NOTE] >An audit event is logged when *seamless SSO* is turned on by using staged rollout.
- ![The "Create rollout policy for feature" pane - Activity tab](./media/how-to-connect-staged-rollout/sr7.png)
+ ![The "Create rollout policy for feature" pane - Activity tab](./media/how-to-connect-staged-rollout/staged-7.png)
- ![The "Create rollout policy for feature" pane - Modified Properties tab](./media/how-to-connect-staged-rollout/sr8.png)
+ ![The "Create rollout policy for feature" pane - Modified Properties tab](./media/how-to-connect-staged-rollout/staged-8.png)
- Audit event when a group is added to *password hash sync*, *pass-through authentication*, or *seamless SSO*. >[!NOTE] >An audit event is logged when a group is added to *password hash sync* for staged rollout.
- ![The "Add a group to feature rollout" pane - Activity tab](./media/how-to-connect-staged-rollout/sr9.png)
+ ![The "Add a group to feature rollout" pane - Activity tab](./media/how-to-connect-staged-rollout/staged-9.png)
- ![The "Add a group to feature rollout" pane - Modified Properties tab](./media/how-to-connect-staged-rollout/sr10.png)
+ ![The "Add a group to feature rollout" pane - Modified Properties tab](./media/how-to-connect-staged-rollout/staged-10.png)
- Audit event when a user who was added to the group is enabled for staged rollout.
- ![The "Add user to feature rollout" pane - Activity tab](media/how-to-connect-staged-rollout/sr11.png)
+ ![The "Add user to feature rollout" pane - Activity tab](media/how-to-connect-staged-rollout/staged-11.png)
- ![The "Add user to feature rollout" pane - Target(s) tab](./media/how-to-connect-staged-rollout/sr12.png)
+ ![The "Add user to feature rollout" pane - Target(s) tab](./media/how-to-connect-staged-rollout/staged-12.png)
## Validation
To test sign-in with *seamless SSO*:
To track user sign-ins that still occur on Active Directory Federation Services (AD FS) for selected staged rollout users, follow the instructions at [AD FS troubleshooting: Events and logging](/windows-server/identity/ad-fs/troubleshooting/ad-fs-tshoot-logging#types-of-events). Check vendor documentation about how to check this on third-party federation providers.
+## Monitoring
You can monitor the users and groups that are added to or removed from staged rollout, and user sign-ins during staged rollout, by using the new Hybrid Auth workbooks in the Azure portal.
+
+ ![Hybrid Auth workbooks](./media/how-to-connect-staged-rollout/staged-13.png)
+ ## Remove a user from staged rollout Removing a user from the group disables staged rollout for that user. To disable the staged rollout feature, slide the control back to **Off**.
A: Yes. To learn how to use PowerShell to perform staged rollout, see [Azure AD
- [Azure AD 2.0 preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#staged_rollout ) - [Change the sign-in method to password hash synchronization](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso) - [Change sign-in method to pass-through authentication](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso)
+- [Staged rollout interactive guide](https://mslearn.cloudguides.com/en-us/guides/Test%20migration%20to%20cloud%20authentication%20using%20staged%20rollout%20in%20Azure%20AD)
+
active-directory Concept Identity Protection B2b https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/concept-identity-protection-b2b.md
Previously updated : 04/19/2021 Last updated : 05/03/2021
Identity Protection detects compromised credentials for Azure AD users. If your credential is detected as compromised, it means that someone else may have your password and be using it illegitimately. To prevent further risk to your account, it is important to securely reset your password so that the bad actor can no longer use your compromised password. Identity Protection marks accounts that may be compromised as "at risk."
-You can use your organizational credentials to sign-in to another organization as a guest; this process is referred to B2B authentication. Organizations can configure policies to block users from signing-in if their credentials are at risk. If your account is at risk and you are blocked from signing-in to another organization as a guest, you may be able to self-remediate your account using the steps below. If your organization has not enabled self-service password reset, your administrator will need to manually remediate your account.
+You can use your organizational credentials to sign in to another organization as a guest. This process is referred to as [business-to-business or B2B collaboration](../external-identities/what-is-b2b.md). Organizations can configure policies to block users from signing in if their credentials are considered [at risk](concept-identity-protection-risks.md). If your account is at risk and you are blocked from signing in to another organization as a guest, you may be able to self-remediate your account using the following steps. If your organization has not enabled self-service password reset, your administrator will need to manually remediate your account.
## How to unblock your account
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 04/04/2021 Last updated : 05/04/2021
Welcome to what's new in Azure Active Directory application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## April 2021
+
+### New articles
+
+- [Active Directory (Azure AD) Application Proxy frequently asked questions](application-proxy-faq.yml)
+
+### Updated articles
+
+- [Application management best practices](application-management-fundamentals.md)
+- [Application management documentation](index.yml)
+- [Moving application authentication from Active Directory Federation Services to Azure Active Directory](migrate-adfs-apps-to-azure.md)
+- [Migrate application authentication to Azure Active Directory](migrate-application-authentication-to-azure-active-directory.md)
+- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)
+- [Single sign-on options in Azure AD](sso-options.md)
+- [Azure Active Directory application management: What's new](whats-new-docs.md)
+- [Header-based authentication for single sign-on with Application Proxy and PingAccess](../app-proxy/application-proxy-ping-access-publishing-guide.md)
+- [Managing consent to applications and evaluating consent requests](manage-consent-requests.md)
+- [Configure the admin consent workflow](configure-admin-consent-workflow.md)
+- [Use tenant restrictions to manage access to SaaS cloud applications](tenant-restrictions.md)
+- [Integrating Azure Active Directory with applications getting started guide](plan-an-application-integration.md)
++ ## March 2021 ### New articles
Welcome to what's new in Azure Active Directory application management documenta
- [Moving application authentication from Active Directory Federation Services to Azure Active Directory](migrate-adfs-apps-to-azure.md) - [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md) - [Use tenant restrictions to manage access to SaaS cloud applications](tenant-restrictions.md)-
-## January 2021
-
-### New articles
-- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)-
-### Updated articles
-- [Problem installing the Application Proxy Agent Connector](../app-proxy/application-proxy-connector-installation-problem.md)-- [Troubleshoot password-based single sign-on in Azure AD](troubleshoot-password-based-sso.md)-- [Application management best practices](application-management-fundamentals.md)-- [Integrating Azure Active Directory with applications getting started guide](plan-an-application-integration.md)-- [What is application management?](what-is-application-management.md)-- [Active Directory (Azure AD) Application Proxy frequently asked questions](../app-proxy/application-proxy-faq.yml)-- [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md)-- [Work with existing on-premises proxy servers](../app-proxy/application-proxy-configure-connectors-with-proxy-servers.md)-- [Develop line-of-business apps for Azure Active Directory](../develop/v2-overview.md)-- [Understand Azure AD Application Proxy connectors](../app-proxy/application-proxy-connectors.md)-- [Understand linked sign-on](configure-linked-sign-on.md)-- [Understand password-based single sign-on](configure-password-single-sign-on-non-gallery-applications.md)-- [Understand SAML-based single sign-on](configure-saml-single-sign-on.md)-- [Troubleshoot common problem adding or removing an application to Azure Active Directory](/troubleshoot/azure/active-directory/troubleshoot-adding-apps)-- [Viewing apps using your Azure AD tenant for identity management](application-types.md)-- [Understand how users are assigned to apps in Azure Active Directory](ways-users-get-assigned-to-applications.md)-- [Quickstart: Delete an application from your Azure Active Directory (Azure AD) tenant](delete-application-portal.md)-- [Publish Remote Desktop with Azure AD Application Proxy](../app-proxy/application-proxy-integrate-with-remote-desktop-services.md)-- [Take action on overprivileged or suspicious 
applications in Azure Active Directory](manage-application-permissions.md)
active-directory My Staff Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/my-staff-configure.md
When a user goes to My Staff, they are shown the names of the [administrative un
## Reset a user's password
-Before you can rest passwords for on-premises users, you must fulfill the following prerequisite conditions. For detailed instructions, see [Enable self-service password reset](../authentication/tutorial-enable-sspr-writeback.md) tutorial.
+Before you can reset passwords for on-premises users, you must fulfill the following prerequisite conditions. For detailed instructions, see the [Enable self-service password reset](../authentication/tutorial-enable-sspr-writeback.md) tutorial.
* Configure permissions for password writeback * Enable password writeback in Azure AD Connect
active-directory Box Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/box-tutorial.md
In this section, a user called Britta Simon is created in Box. Box supports just
In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to Box Sign-on URL where you can initiate the login flow.
+* Select **Test this application** in the Azure portal. You're redirected to the Box Sign-on URL, where you can initiate the login flow.
* Go to Box Sign-on URL directly and initiate the login flow from there. * You can use Microsoft My Apps. When you click the Box tile in the My Apps, this will redirect to Box Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+### Push an Azure group to Box
+
+You can push an Azure group to Box and sync that group. Azure pushes groups to Box via an API-level integration.
+
+1. In **Users & Groups**, search for the group you want to assign to Box.
+1. In **Provisioning**, ensure that **Synchronize Azure Active Directory Groups to Box** is selected. This setting syncs the groups that you allocated in the preceding step. It might take some time for these groups to be pushed from Azure.
+
+> [!NOTE]
+> If you need to create a user manually, contact [Box support team](https://community.box.com/t5/custom/page/page-id/submit_sso_questionaire).
## Next steps
-Once you configure Box you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+Once you configure Box, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Github Ae Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-ae-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Edit **User Attributes & Claims**.
-1. Click **Add new claim** and enter the name as **administrator** in the textbox.
+1. Click **Add new claim** and enter the name as **administrator** in the textbox (the **administrator** value is case-sensitive).
1. Expand **Claim conditions** and select **Members** from **User type**.
active-directory Google Apps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/google-apps-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
![Copy configuration URLs](common/copy-configuration-urls.png)
+ ```Logout URL
+ https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0
+ ```
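As a sketch of what that endpoint expects, a WS-Federation sign-out request is the base URL above with the `wa=wsignout1.0` query parameter; the optional `wreply` return address used below is a hypothetical value:

```python
from urllib.parse import urlencode

base = "https://login.microsoftonline.com/common/wsfederation"
# wa=wsignout1.0 is the WS-Federation sign-out action; wreply (optional) is
# where the browser is sent after sign-out. The wreply value is made up.
params = {"wa": "wsignout1.0", "wreply": "https://app.example.com/signed-out"}
signout_url = f"{base}?{urlencode(params)}"
```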
+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
active-directory Holmes Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/holmes-tutorial.md
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Holmes supports **SP and IDP** initiated SSO.
- ## Adding Holmes from the gallery To configure the integration of Holmes into Azure AD, you need to add Holmes from the gallery to your list of managed SaaS apps.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
-
- In the **Identifier** text box, type a URL using the following pattern:
- `https://<WorkspaceID>.holmescloud.com`
+2. On the **Basic SAML Configuration** section, enter the values for the following fields:
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+ 1. In the **Identifier** text box, type a URL using the following pattern:
- In the **Sign-on URL** text box, type the URL:
- `https://www.holmescloud.com/login`
+ `https://<WorkspaceID>.holmescloud.com`
+
+ 1. In the **Reply URL (Assertion Consumer Service URL)** text box, enter `https://holmescloud.com/sso/acs`.
+
+ 1. In the **Logout Url** text box, enter `https://holmescloud.com/sso/logout`.
> [!NOTE]
- > The value is not real. Update the value with the actual Identifier. Contact [Holmes Client support team](mailto:team-dev@holmescloud.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > Update the value with the actual Identifier, which you can find on the Holmes Admin page. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
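The three values above are fixed apart from the workspace ID. A hypothetical helper (names are illustrative; the ID itself comes from the Holmes Admin page) that assembles them:

```python
def holmes_saml_values(workspace_id: str) -> dict:
    """Build the Basic SAML Configuration values for a Holmes workspace."""
    return {
        "identifier": f"https://{workspace_id}.holmescloud.com",
        "reply_url": "https://holmescloud.com/sso/acs",
        "logout_url": "https://holmescloud.com/sso/logout",
    }
```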
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+3. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
![The Certificate download link](common/certificatebase64.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
1. In the **Name** field, enter `B.Simon`. 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
+ 1. Select **Create**.
### Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
+1. In the **Add Assignment** dialog, select the **Assign** button.
## Configure Holmes SSO
-To configure single sign-on on **Holmes** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Holmes support team](mailto:team-dev@holmescloud.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **Holmes** side, you need to register the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal on the Holmes Admin page.
### Create Holmes test user
-In this section, you create a user called Britta Simon in Holmes. Work with [Holmes support team](mailto:team-dev@holmescloud.com) to add the users in the Holmes platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in Holmes. You can create or invite a user on the Holmes Member Management page. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Looker Analytics Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/looker-analytics-platform-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
- a. In the **Identifier** text box, type a URL using the following pattern:
+ a. In the **SP Entity/IdP Audience** text box, type a URL using the following pattern:
    `<SPN>_looker`

    b. In the **Reply URL** text box, type a URL using the following pattern:
You can also use Microsoft Access Panel to test the application in any mode. Whe
## Next steps
-Once you configure Looker Analytics Platform, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Looker Analytics Platform, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Maxient Conduct Manager Software Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/maxient-conduct-manager-software-tutorial.md
Configure and test Azure AD SSO with Maxient Conduct Manager Software. For SSO t
To configure and test Azure AD SSO with Maxient Conduct Manager Software, complete the following building blocks:

1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to authenticate for use with the Maxient Conduct Manager Software
- 1. **[Assign all users to use Maxient](#assign-all-users-to-be-able-to-authenticate-for-the-maxient-conduct-manager-software)** - to allow everyone at your institution to be able to authenticate.
+ - **[Set "User Assignment Required?" to No](#set-user-assignment-required-to-no)** - to allow everyone at your institution to be able to authenticate.
1. **[Test Azure AD Setup With Maxient](#test-with-maxient)** - to verify whether the configuration works, and the correct attributes are being released.

## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/copy-metadataurl.png)
-### Assign All Users to be Able to Authenticate for the Maxient Conduct Manager Software
+<a name="set-user-assignment-required-to-no"></a>
+
+### Set "User Assignment Required?" to No
-In this section, you will grant access for all accounts to authenticate using the Azure system for the Maxient Conduct Manager Software. It is important to note that this step is **REQUIRED** for Maxient to function properly. Maxient leverages your Azure AD system to *authenticate* users. The *authorization* of users is performed within the Maxient system for the particular function they're trying to perform. Maxient does not use attributes from your directory to make those decisions.
+It is important to note that this step is **REQUIRED** for Maxient to function properly. Maxient leverages your Azure AD system to *authenticate* users. The *authorization* of users is performed within the Maxient system for the particular function they're trying to perform. Maxient does not use attributes from your directory to make those decisions.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Maxient Conduct Manager Software**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog, select all users (or the appropriate groups) and **assign** them to be able to authenticate with Maxient.
+1. In the app's **Properties** page, set the **User assignment required?** toggle to **No** and select **Save**.
## Test with Maxient
If a support ticket has not already been opened with a Maxient Implementation/Su
- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
-- [Try Maxient Conduct Manager Software with Azure AD](https://aad.portal.azure.com/)
+- [Try Maxient Conduct Manager Software with Azure AD](https://aad.portal.azure.com/)
active-directory Saba Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/saba-cloud-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
- a. In the **Identifier** text box, type a URL using the following pattern:
- `<CUSTOMER_NAME>_SPLN_PRINCIPLE`
+ a. In the **Identifier** text box, type a URL using the following pattern (you'll get this value in step 6 of the Configure Saba Cloud SSO section; it is usually in the format `<CUSTOMER_NAME>_sp`):
+ `<CUSTOMER_NAME>_sp`
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<SIGN-ON URL>/Saba/saml/SSO/alias/<ENTITY_ID>`
+ b. In the **Reply URL** text box, type a URL using the following pattern (where `<ENTITY_ID>` is the Identifier value from the previous step, usually `<CUSTOMER_NAME>_sp`):
+ `https://<CUSTOMER_NAME>.sabacloud.com/Saba/saml/SSO/alias/<ENTITY_ID>`
+
+ > [!NOTE]
+ > If you specify the reply URL incorrectly, you might have to adjust it in the **App Registration** section of Azure AD, not in the **Enterprise Application** section. Making changes to the **Basic SAML Configuration** section doesn't always update the Reply URL.
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:

    a. In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<CUSTOMER_NAME>.sabacloud.com`
+ `https://<CUSTOMER_NAME>.sabacloud.com`
b. In the **Relay State** text box, type a URL using the following pattern: `IDP_INITSAML_SSO_SITE=<SITE_ID>` or, in case SAML is configured for a microsite, type a URL using the following pattern:
-`IDP_INITSAML_SSO_SITE=<SITE_ID>SAML_SSO_MICRO_SITE=<MicroSiteId>`
+ `IDP_INITSAML_SSO_SITE=<SITE_ID>SAML_SSO_MICRO_SITE=<MicroSiteId>`
> [!NOTE]
- > For more information on configuring the RelayState, please refer to [this](https://help.sabacloud.com/sabacloud/help-system/topics/help-system-idp-and-sp-initiated-sso-for-a-microsite.html) link.
+ > These values are not real. Update these values with the actual Identifier, Reply URL, Sign-on URL and Relay State. Contact [Saba Cloud Client support team](mailto:support@saba.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ >
+ > For more information about configuring the RelayState, see [IdP and SP initiated SSO for a microsite](https://help.sabacloud.com/sabacloud/help-system/topics/help-system-idp-and-sp-initiated-sso-for-a-microsite.html).
+
+1. In the **User Attributes & Claims** section, adjust the Unique User Identifier to whatever your organization intends to use as the primary username for Saba users.
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL, Sign-on URL and Relay State. Contact [Saba Cloud Client support team](mailto:support@saba.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ This step is required only if you're attempting to convert from username/password to SSO. If this is a new Saba Cloud deployment that doesn't have existing users, you can skip this step.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **Configure Properties** section, verify the populated fields and click **SAVE**.

   ![screenshot for Configure Properties](./media/saba-cloud-tutorial/configure-properties.png)
+
+ You might need to set **Max Authentication Age (in seconds)** to **7776000** (90 days) to match the default max rolling age Azure AD allows for a login. Failure to do so could result in the error `(109) Login failed. Please contact system administrator.`
### Create Saba Cloud test user
active-directory Sharepoint On Premises Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sharepoint-on-premises-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with SharePoint on-premises | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and SharePoint on-premises.
+description: Learn how to implement federated authentication between Azure Active Directory and SharePoint on-premises.
Previously updated : 09/10/2020 Last updated : 03/31/2021
-# Tutorial: Azure Active Directory single sign-on integration with SharePoint on-premises
+# Tutorial: Implement federated authentication between Azure Active Directory and SharePoint on-premises
-In this tutorial, you learn how to integrate SharePoint on-premises with Azure Active Directory (Azure AD). When you integrate SharePoint on-premises with Azure AD, you can:
+## Scenario description
-* Control who has access to SharePoint on-premises in Azure AD.
-* Enable your users to be automatically signed in to SharePoint on-premises with their Azure AD accounts.
-* Manage your accounts in the Azure portal.
+In this tutorial, you configure federated authentication between Azure Active Directory and SharePoint on-premises. The goal is to allow users to sign in to Azure Active Directory and use their identity to access the SharePoint on-premises sites.
## Prerequisites
-To configure Azure AD integration with SharePoint on-premises, you need these items:
-
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+To perform the configuration, you need the following resources:
+* An Azure Active Directory tenant. If you don't have one, you can create a [free account](https://azure.microsoft.com/free/).
* A SharePoint 2013 farm or newer.
-## Scenario description
-
-In this tutorial, you configure and test Azure AD single sign-on (SSO) in a test environment. Users from Azure AD are able to access your SharePoint on-premises.
+This article uses the following values:
+- Enterprise application name (in Azure AD): `SharePoint corporate farm`
+- Trust identifier (in Azure AD) / realm (in SharePoint): `urn:sharepoint:federation`
+- loginUrl (to Azure AD): `https://login.microsoftonline.com/dc38a67a-f981-4e24-ba16-4443ada44484/wsfed`
+- SharePoint site URL: `https://spsites.contoso.local/`
+- SharePoint site reply URL: `https://spsites.contoso.local/_trust/`
+- SharePoint trust configuration name: `AzureADTrust`
+- UserPrincipalName of the Azure AD test user: `AzureUser1@demo1984.onmicrosoft.com`
-## Create enterprise applications in the Azure portal
+## Configure an enterprise application in Azure Active Directory
-To configure the integration of SharePoint on-premises into Azure AD, you need to add SharePoint on-premises from the gallery to your list of managed SaaS apps.
+To configure the federation in Azure AD, you need to create a dedicated enterprise application. Its configuration is simplified by using the pre-configured template **SharePoint on-premises**, which can be found in the application gallery.
-To add SharePoint on-premises from the gallery:
-
-1. In the Azure portal, on the leftmost pane, select **Azure Active Directory**.
-
- > [!NOTE]
- > If the element isn't available, you can also open it through the **All services** link at the top of the leftmost pane. In the following overview, the **Azure Active Directory** link is located in the **Identity** section. You can also search for it by using the filter box.
+### Create the enterprise application
+1. Sign in to the [Azure Active Directory portal](https://aad.portal.azure.com/).
1. Go to **Enterprise applications**, and then select **All applications**.
- 1. To add a new application, select **New application** at the top of the dialog box.
- 1. In the search box, enter **SharePoint on-premises**. Select **SharePoint on-premises** from the result pane.
+1. Specify a name for your application (in this tutorial, it is `SharePoint corporate farm`), and click **Create** to add the application.
+1. In the new enterprise application, select **Properties**, and check the value for **User assignment required?**. For this scenario, set its value to **No** and click **Save**.
- <kbd>![SharePoint on-premises in the results list](./media/sharepoint-on-premises-tutorial/search-new-app.png)</kbd>
-
-1. Specify a name for your SharePoint on-premises instance, and select **Add** to add the application.
-
-1. In the new enterprise application, select **Properties**, and check the value for **User assignment required?**.
-
- <kbd>![User assignment required? toggle](./media/sharepoint-on-premises-tutorial/user-assignment-required.png)</kbd>
-
- In this scenario, the value is set to **No**.
-
-## Configure and test Azure AD
+### Configure the enterprise application
-In this section, you configure Azure AD SSO with SharePoint on-premises. For SSO to work, you establish a link relationship between an Azure AD user and the related user in SharePoint on-premises.
+In this section, you configure the SAML authentication and define the claims that will be sent to SharePoint upon successful authentication.
-To configure and test Azure AD SSO with SharePoint on-premises, complete these building blocks:
+1. In the Overview of the Enterprise application `SharePoint corporate farm`, select **2. Set up single sign-on** and choose **SAML** in the next dialog.
+
+1. On the **Set up Single Sign-On with SAML** page, select the **Edit** icon in the **Basic SAML Configuration** pane.
-- [Configure Azure AD SSO](#configure-azure-ad-sso) to enable your users to use this feature.
-- [Configure SharePoint on-premises](#configure-sharepoint-on-premises) to configure the SSO settings on the application side.
-- [Create an Azure AD test user in the Azure portal](#create-an-azure-ad-test-user-in-the-azure-portal) to create a new user in Azure AD for SSO.
-- [Create an Azure AD security group in the Azure portal](#create-an-azure-ad-security-group-in-the-azure-portal) to create a new security group in Azure AD for SSO.
-- [Grant permissions to an Azure AD account in SharePoint on-premises](#grant-permissions-to-an-azure-ad-account-in-sharepoint-on-premises) to give permissions to an Azure AD user.
-- [Grant permissions to an Azure AD group in SharePoint on-premises](#grant-permissions-to-an-azure-ad-group-in-sharepoint-on-premises) to give permissions to an Azure AD group.
-- [Grant access to a guest account to SharePoint on-premises in the Azure portal](#grant-access-to-a-guest-account-to-sharepoint-on-premises-in-the-azure-portal) to give permissions to a guest account in Azure AD for SharePoint on-premises.
-- [Configure the trusted identity provider for multiple web applications](#configure-the-trusted-identity-provider-for-multiple-web-applications) to use the same trusted identity provider for multiple web applications.
+1. In the **Basic SAML Configuration** section, follow these steps:
-### Configure Azure AD SSO
+ 1. In the **Identifier** box, ensure that this value is present:
+ `urn:sharepoint:federation`.
-In this section, you enable Azure AD SSO in the Azure portal.
+ 1. In the **Reply URL** box, enter a URL by using this pattern:
+ `https://spsites.contoso.local/_trust/`.
-To configure Azure AD SSO with SharePoint on-premises:
+ 1. In the **Sign on URL** box, enter a URL by using this pattern:
+ `https://spsites.contoso.local/`.
+
+ 1. Select **Save**.
-1. In the Azure portal, select **Azure Active Directory** > **Enterprise applications**. Select the previously created enterprise application name, and select **Single sign-on**.
+1. In the **User Attributes & Claims** section, delete the following claim types, which are not needed because SharePoint won't use them to grant permissions:
+ - `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress`
+ - `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname`
+ - `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname`
-1. In the **Select a Single sign-on method** dialog box, select the **SAML** mode to enable SSO.
-
-1. On the **Set up Single Sign-On with SAML** page, select the **Edit** icon to open the **Basic SAML Configuration** dialog box.
+1. The settings should now look like this:
-1. In the **Basic SAML Configuration** section, follow these steps:
+ ![Basic SAML settings](./media/sharepoint-on-premises-tutorial/azure-active-directory-app-saml-ids.png)
- ![SharePoint on-premises domain and URLs SSO information](./media/sharepoint-on-premises-tutorial/sp-identifier-reply.png)
+1. Copy the information that you'll need later in SharePoint:
- 1. In the **Identifier** box, enter a URL by using this pattern:
- `urn:<sharepointFarmName>:<federationName>`.
+ - In the **SAML Signing Certificate** section, **Download** the **Certificate (Base64)**. This is the public key of the signing certificate used by Azure AD to sign the SAML token. SharePoint will need it to verify the integrity of the incoming SAML tokens.
- 1. In the **Reply URL** box, enter a URL by using this pattern:
- `https://<YourSharePointSiteURL>/_trust/`.
+ - In the **Set up SharePoint corporate farm** section, copy the **Login URL** into a notepad and replace the trailing string **/saml2** with **/wsfed**.
+
+ > [!IMPORTANT]
+ > Make sure to replace **/saml2** with **/wsfed** to ensure that Azure AD issues a SAML 1.1 token, as required by SharePoint.
- 1. In the **Sign on URL** box, enter a URL by using this pattern:
- `https://<YourSharePointSiteURL>/`.
- 1. Select **Save**.
+ - In the **Set up SharePoint corporate farm** section, copy the **Logout URL**.
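
The **/saml2**-to-**/wsfed** substitution on the Login URL can also be scripted to avoid copy-and-paste mistakes. A minimal PowerShell sketch (the tenant ID below is the placeholder value used in this article):

```powershell
# Login URL as copied from the "Set up SharePoint corporate farm" section
$loginUrl = "https://login.microsoftonline.com/dc38a67a-f981-4e24-ba16-4443ada44484/saml2"
# Replace the trailing /saml2 with /wsfed so Azure AD issues the SAML 1.1 token SharePoint requires
$wsfedUrl = $loginUrl -replace '/saml2$', '/wsfed'
$wsfedUrl
```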
- > [!NOTE]
- > These values aren't real. Update these values with the actual sign-on URL, identifier, and reply URL.
+## Configure SharePoint to trust Azure Active Directory
-1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select **Download** to download the **Certificate (Base64)** from the given options based on your requirements and save it on your computer.
+### Create the trust in SharePoint
- ![The certificate download link](./media/sharepoint-on-premises-tutorial/certificatebase64.png)
+In this step, you create an SPTrustedLoginProvider to store the configuration that SharePoint needs to trust Azure AD. For that, you need the information from Azure AD that you copied above. Start the SharePoint Management Shell and run the following script to create it:
-1. In the **Set up SharePoint on-premises** section, copy the appropriate URLs based on your requirement:
-
- - **Login URL**
-
- Copy the login URL and replace **/saml2** at the end with **/wsfed** so that it looks like https://login.microsoftonline.com/2c4f1a9f-be5f-10ee-327d-a95dac567e4f/wsfed. (This URL isn't accurate.)
+```powershell
+# Path to the public key of the Azure AD SAML signing certificate (self-signed), downloaded from the Enterprise application in the Azure AD portal
+$signingCert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\AAD app\SharePoint corporate farm.cer")
+# Unique realm (corresponds to the "Identifier (Entity ID)" in the Azure AD Enterprise application)
+$realm = "urn:sharepoint:federation"
+# Login URL copied from the Azure AD enterprise application. Make sure to replace "saml2" with "wsfed" at the end of the URL:
+$loginUrl = "https://login.microsoftonline.com/dc38a67a-f981-4e24-ba16-4443ada44484/wsfed"
- - **Azure AD Identifier**
- - **Logout URL**
+# Define the claim types used for the authorization
+$userIdentifier = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name" -IncomingClaimTypeDisplayName "name" -LocalClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"
+$role = New-SPClaimTypeMapping "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming
- > [!NOTE]
- > This URL can't be used as is in SharePoint. You must replace **/saml2** with **/wsfed**. The SharePoint on-premises application uses a SAML 1.1 token, so Azure AD expects a WS Fed request from the SharePoint server. After authentication, it issues the SAML 1.1 token.
+# Let SharePoint trust the Azure AD signing certificate
+New-SPTrustedRootAuthority -Name "Azure AD signing certificate" -Certificate $signingCert
-### Configure SharePoint on-premises
+# Create a new SPTrustedIdentityTokenIssuer in SharePoint
+$trust = New-SPTrustedIdentityTokenIssuer -Name "AzureADTrust" -Description "Azure AD" -Realm $realm -ImportTrustCertificate $signingCert -ClaimsMappings $userIdentifier, $role -SignInUrl $loginUrl -IdentifierClaim $userIdentifier.InputClaimType
+```
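
After the script completes, you can optionally verify the new trust before wiring it to a web application. A quick sketch in the SharePoint Management Shell (assumes the script above ran successfully):

```powershell
# Inspect the trusted identity token issuer created above
$trust = Get-SPTrustedIdentityTokenIssuer "AzureADTrust"
# The identity claim should be the UPN mapping defined earlier
$trust.IdentityClaimTypeInformation
# List all claim mappings from Azure AD that SharePoint will accept
$trust.ClaimTypeInformation | Format-Table DisplayName, InputClaimType, MappedClaimType
```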
-1. Create a new trusted identity provider in SharePoint Server 2016.
+### Configure the SharePoint web application
- Sign in to the SharePoint server, and open the SharePoint Management Shell. Fill in the values:
- - **$realm** is the identifier value from the SharePoint on-premises domain and URLs section in the Azure portal.
- - **$wsfedurl** is the SSO service URL.
- - **$filepath** is the file path to which you have downloaded the certificate file from the Azure portal.
+In this step, you configure a web application in SharePoint to trust the Azure AD enterprise application created above. There are important rules to keep in mind:
+- The default zone of the SharePoint web application must have Windows authentication enabled. This is required for the Search crawler.
+- The SharePoint URL that will use Azure AD authentication must be set with HTTPS.
- Run the following commands to configure a new trusted identity provider.
+1. Create or extend the web application. This article describes two possible configurations:
- > [!TIP]
- > If you're new to using PowerShell or want to learn more about how PowerShell works, see [SharePoint PowerShell](/powershell/sharepoint/overview).
+ - If you create a new web application that uses both Windows and Azure AD authentication in the Default zone:
+ 1. Start the **SharePoint Management Shell** and run the following script:
+ ```powershell
+ # This script creates a new web application and sets Windows and Azure AD authentication on the Default zone
+ # URL of the SharePoint site federated with Azure AD
+ $trustedSharePointSiteUrl = "https://spsites.contoso.local/"
+ $applicationPoolManagedAccount = "Contoso\spapppool"
- ```
- $realm = "urn:sharepoint:sps201x"
- $wsfedurl="https://login.microsoftonline.com/2c4f1a9f-be5f-10ee-327d-a95dac567e4f/wsfed"
- $filepath="C:\temp\SharePoint 2019 OnPrem.cer"
- $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($filepath)
- New-SPTrustedRootAuthority -Name "AzureAD" -Certificate $cert
- $map1 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name" -IncomingClaimTypeDisplayName "name" -LocalClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"
- $map2 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming
- $ap = New-SPTrustedIdentityTokenIssuer -Name "AzureAD" -Description "Azure AD SharePoint server 201x" -realm $realm -ImportTrustCertificate $cert -ClaimsMappings $map1,$map2 -SignInUrl $wsfedurl -IdentifierClaim $map1.InputClaimType
- ```
-1. Enable the trusted identity provider for your application.
+ $winAp = New-SPAuthenticationProvider -UseWindowsIntegratedAuthentication -DisableKerberos:$true
+ $sptrust = Get-SPTrustedIdentityTokenIssuer "AzureADTrust"
+ $trustedAp = New-SPAuthenticationProvider -TrustedIdentityTokenIssuer $sptrust
+
+ New-SPWebApplication -Name "SharePoint - Azure AD" -Port 443 -SecureSocketsLayer -URL $trustedSharePointSiteUrl -ApplicationPool "SharePoint - Azure AD" -ApplicationPoolAccount (Get-SPManagedAccount $applicationPoolManagedAccount) -AuthenticationProvider $winAp, $trustedAp
+ ```
+ 1. Open the **SharePoint Central Administration** site.
+ 1. Under **System Settings**, select **Configure Alternate Access Mappings**. The **Alternate Access Mapping Collection** box opens.
+ 1. Filter the display with the new web application and confirm that you see something like this:
+
+ ![Alternate Access Mappings of web application](./media/sharepoint-on-premises-tutorial/sp-alternate-access-mappings-new-web-app.png)
+
+ - If you extend an existing web application to use Azure AD authentication on a new zone:
+
+ 1. Start the SharePoint Management Shell and run the following script:
+
+ ```powershell
+ # This script extends an existing web application to set Azure AD authentication on a new zone
+ # URL of the default zone of the web application
+ $webAppDefaultZoneUrl = "http://spsites/"
+ # URL of the SharePoint site federated with Azure AD
+ $trustedSharePointSiteUrl = "https://spsites.contoso.local/"
+ $sptrust = Get-SPTrustedIdentityTokenIssuer "AzureADTrust"
+ $ap = New-SPAuthenticationProvider -TrustedIdentityTokenIssuer $sptrust
+ $wa = Get-SPWebApplication $webAppDefaultZoneUrl
+
+ New-SPWebApplicationExtension -Name "SharePoint - Azure AD" -Identity $wa -SecureSocketsLayer -Zone Internet -Url $trustedSharePointSiteUrl -AuthenticationProvider $ap
+ ```
+
+ 1. Open the **SharePoint Central Administration** site.
+ 1. Under **System Settings**, select **Configure Alternate Access Mappings**. The **Alternate Access Mapping Collection** box opens.
+ 1. Filter the display with the web application that was extended and confirm that you see something like this:
+
+ ![Alternate Access Mappings of extended web application](./media/sharepoint-on-premises-tutorial/sp-alternate-access-mappings-extended-zone.png)
- 1. In **Central Administration**, go to **Manage Web Application** and select the web application that you want to secure with Azure AD.
+Once the web application is created, you can create a root site collection and add your Windows account as the primary site collection administrator.
- 1. On the ribbon, select **Authentication Providers** and choose the zone that you want to use.
+1. Create a certificate for the SharePoint site
- 1. Select **Trusted Identity provider**, and select the identify provider you just registered named *AzureAD*.
+ Since the SharePoint site URL uses the HTTPS protocol (`https://spsites.contoso.local/`), a certificate must be set on the corresponding Internet Information Services (IIS) site. Follow these steps to generate a self-signed certificate:
+
+ > [!IMPORTANT]
+ > Self-signed certificates are suitable only for test purposes. In production environments, we strongly recommend that you use certificates issued by a certificate authority instead.
+
+ 1. Open the Windows PowerShell console.
+ 1. Run the following script to generate a self-signed certificate and add it to the computer's MY store:
+
+ ```powershell
+ New-SelfSignedCertificate -DnsName "spsites.contoso.local" -CertStoreLocation "cert:\LocalMachine\My"
+ ```
+
+1. Set the certificate in the IIS site
+ 1. Open the Internet Information Services Manager console.
+ 1. Expand the server in the tree view, expand **Sites**, select the site **SharePoint - Azure AD**, and select **Bindings**.
+ 1. Select **https binding** and then select **Edit**.
+ 1. In the TLS/SSL certificate field, choose the certificate to use (for example, **spsites.contoso.local** created above) and select **OK**.
+
+ > [!NOTE]
+ > If you have multiple web front-end servers, you need to repeat this operation on each of them.
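
If you prefer to script the binding (for example, across several web front-end servers), the same operation can be sketched with the WebAdministration module. This is a sketch under the assumptions of this article (IIS site name `SharePoint - Azure AD`, self-signed certificate subject `spsites.contoso.local`); adjust both to your environment:

```powershell
Import-Module WebAdministration

# Find the self-signed certificate created earlier in the computer's MY store
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -eq "CN=spsites.contoso.local" } |
    Select-Object -First 1

# Attach it to the HTTPS binding of the IIS site
$binding = Get-WebBinding -Name "SharePoint - Azure AD" -Protocol "https"
$binding.AddSslCertificate($cert.Thumbprint, "My")
```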
- 1. Select **OK**.
+The basic configuration of the trust between SharePoint and Azure AD is now finished. Let's see how to sign in to the SharePoint site as an Azure Active Directory user.
- ![Configuring your authentication provider](./media/sharepoint-on-premises-tutorial/config-auth-provider.png)
+## Sign in as a member user
-### Create an Azure AD test user in the Azure portal
+Azure Active Directory has [two types of users](https://docs.microsoft.com/azure/active-directory/active-directory-b2b-user-properties): Guest users and Member users. Let's start with a member user, which is simply a user homed in your organization.
-The objective of this section is to create a test user in the Azure portal.
+### Create a member user in Azure Active Directory
1. In the Azure portal, on the leftmost pane, select **Azure Active Directory**. In the **Manage** pane, select **Users**.
1. Select **All users** > **New user** at the top of the screen.
-1. Select **Create User**, and in the user properties, follow these steps. You might be able to create users in your Azure AD by using your tenant suffix or any verified domain.
+1. Select **Create User**, and in the user properties, follow these steps.
1. In the **Name** box, enter the user name. We used **TestUser**.
- 1. In the **User name** box, enter `TestUser@yourcompanydomain.extension`. This example shows `TestUser@contoso.com`.
+ 1. In the **User name** box, enter `AzureUser1@<yourcompanytenant>.onmicrosoft.com`. This example shows `AzureUser1@demo1984.onmicrosoft.com`:
- ![The User dialog box](./media/sharepoint-on-premises-tutorial/user-properties.png)
+ ![The User dialog box](./media/sharepoint-on-premises-tutorial/azure-active-directory-new-user.png)
1. Select the **Show password** check box, and then write down the value that appears in the **Password** box. 1. Select **Create**.
- 1. You can now share the site with TestUser@contoso.com and permit this user to access it.
+ 1. You can now share the site with `AzureUser1@demo1984.onmicrosoft.com` and permit this user to access it.
-### Create an Azure AD security group in the Azure portal
+### Grant permissions to the Azure Active Directory user in SharePoint
-1. Select **Azure Active Directory** > **Groups**.
+Sign in to the SharePoint root site collection as your Windows account (site collection administrator) and click **Share**.
+In the dialog, you need to type the exact value of the UserPrincipalName, for example `AzureUser1@demo1984.onmicrosoft.com`, and be careful to select the **name** claim result (hover over a result to see its claim type).
-1. Select **New group**.
+> [!IMPORTANT]
+> Be careful to type the exact value of the user you want to invite, and choose the appropriate claim type in the list, otherwise the sharing will not work.
-1. Fill in the **Group type**, **Group name**, **Group description**, and **Membership type** boxes. Select the arrows to select members, and then search for or select the members you want to add to the group. Choose **Select** to add the selected members, and then select **Create**.
+![People picker results without AzureCP](./media/sharepoint-on-premises-tutorial/sp-people-picker-search-no-azurecp.png)
-![Create an Azure AD security group](./media/sharepoint-on-premises-tutorial/new-group.png)
+This limitation exists because SharePoint does not validate the input from the people picker, which can be confusing and lead to misspellings or to users accidentally choosing the wrong claim type.
To fix this scenario, an open-source solution called [AzureCP](https://yvand.github.io/AzureCP/) can be used to connect SharePoint 2013, 2016, or 2019 with Azure Active Directory and resolve the people-picker input against your Azure Active Directory tenant.
-### Grant permissions to an Azure AD account in SharePoint on-premises
+Below is the same search with AzureCP configured: SharePoint returns actual users based on the input:
-To grant access to an Azure AD user in SharePoint on-premises, share the site collection or add the Azure AD user to one of the site collection's groups. Users can now sign in to SharePoint 201x by using identities from Azure AD, but there are still opportunities for improvement to the user experience. For instance, searching for a user presents multiple search results in the people picker. There's a search result for each of the claims types that are created in the claim mapping. To choose a user by using the people picker, you must enter their user name exactly and choose the **name** claim result.
+![People picker results with AzureCP](./media/sharepoint-on-premises-tutorial/sp-people-picker-search-with-azurecp.png)
-![Claims search results](./media/sharepoint-on-premises-tutorial/claims-search-results.png)
+> [!IMPORTANT]
+> AzureCP isn't a Microsoft product and isn't supported by Microsoft Support. To download, install, and configure AzureCP on the on-premises SharePoint farm, see the [AzureCP](https://yvand.github.io/AzureCP/) website.
-There's no validation on the values you search for, which can lead to misspellings or users accidentally choosing the wrong claim type. This situation can prevent users from successfully accessing resources.
+Azure Active Directory user `AzureUser1@demo1984.onmicrosoft.com` can now use their identity to sign in to the SharePoint site `https://spsites.contoso.local/`.
-To fix this scenario with the people picker, an open-source solution called [AzureCP](https://yvand.github.io/AzureCP/) provides a custom claims provider for SharePoint 2013, 2016, and 2019. It uses the Microsoft Graph API to resolve what users enter and perform validation. For more information, see [AzureCP](https://yvand.github.io/AzureCP/).
+## Grant permissions to a security group
- > [!NOTE]
- > Without AzureCP, you can add groups by adding the Azure AD group's ID, but this method isn't user friendly and reliable. Here's how it looks:
- >
- >![Add an Azure AD group to a SharePoint group by ID](./media/sharepoint-on-premises-tutorial/adding-group-by-id.png)
-
-### Grant permissions to an Azure AD group in SharePoint on-premises
+### Add the group claim type to the enterprise application
-To assign Azure AD security groups to SharePoint on-premises, it's necessary to use a custom claims provider for SharePoint server. This example uses AzureCP.
+1. In the Overview of the Enterprise application `SharePoint corporate farm`, select **2. Set up single sign-on**.
- > [!NOTE]
- > AzureCP isn't a Microsoft product and isn't supported by Microsoft Support. To download, install, and configure AzureCP on the on-premises SharePoint farm, see the [AzureCP](https://yvand.github.io/AzureCP/) website.
+1. In the **User Attributes & Claims** section, follow these steps if there is no group claim present:
-1. Configure AzureCP on the SharePoint on-premises farm or an alternative custom claims provider solution. To configure AzureCP, see this [AzureCP](https://yvand.github.io/AzureCP/Register-App-In-AAD.html) website.
+ 1. Select **Add a group claim**, select **Security groups**, and make sure that **Source Attribute** is set to **Group ID**.
+ 1. Check **Customize the name of the group claim**, check **Emit groups as role claims**, and click **Save**.
+ 1. The **User Attributes & Claims** should look like this:
-1. In the Azure portal, select **Azure Active Directory** > **Enterprise applications**. Select the previously created enterprise application name, and select **Single sign-on**.
+ ![Claims for users and group](./media/sharepoint-on-premises-tutorial/azure-active-directory-claims-with-group.png)
+
+### Create a security group in Azure Active Directory
-1. On the **Set up Single Sign-On with SAML** page, edit the **User Attributes & Claims** section.
+Let's create a security group in Azure Active Directory:
-1. Select **Add a group claim**.
+1. Select **Azure Active Directory** > **Groups**.
-1. Select which groups associated with the user should be returned in the claim. In this case, select **All groups**. In the **Source attribute** section, select **Group ID** and select **Save**.
+1. Select **New group**.
-To grant access to the Azure AD security group in SharePoint on-premises, share the site collection or add the Azure AD security group to one of the site collection's groups.
+1. Fill in the **Group type** (Security), **Group name** (for example, `AzureGroup1`), and **Membership type**. Add the user you created above as a member, and select **Create**:
-1. Browse to **SharePoint Site Collection**. Under **Site Settings** for the site collection, select **People and groups**.
+ ![Create an Azure AD security group](./media/sharepoint-on-premises-tutorial/azure-active-directory-new-group.png)
+
+### Grant permissions to the security group in SharePoint
-1. Select the SharePoint group, and then select **New** > **Add Users to this Group**. As you type the name of your group, the people picker displays the Azure AD security group.
+Azure AD security groups are identified with their attribute `Id`, which is a GUID (for example, `E89EF0A3-46CC-45BF-93A4-E078FCEBFC45`).
+Without a custom claims provider, users need to type the exact value (`Id`) of the group in the people picker and select the corresponding claim type, which is neither user-friendly nor reliable.
+To avoid this, this article uses the third-party claims provider [AzureCP](https://yvand.github.io/AzureCP/) so that SharePoint can find the group in a friendly way:
- ![Add an Azure AD group to a SharePoint group](./media/sharepoint-on-premises-tutorial/permission-azure-ad-group.png)
+![People picker search Azure AD group](./media/sharepoint-on-premises-tutorial/sp-people-picker-search-azure-active-directory-group.png)
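Without AzureCP, the same grant can also be scripted from the SharePoint Management Shell by encoding the group's role claim yourself. This is a hedged sketch, not the article's procedure: the site URL, group `Id`, and the `Read` permission level are examples, the issuer name matches the `AzureADTrust` issuer used later in this article, and the role claim type assumes groups are emitted as role claims as configured above.

```powershell
# Build a claims principal for the Azure AD group, identified by its Id (GUID)
$issuer = Get-SPTrustedIdentityTokenIssuer "AzureADTrust"
$groupClaim = New-SPClaimsPrincipal -ClaimValue "E89EF0A3-46CC-45BF-93A4-E078FCEBFC45" `
    -ClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" `
    -TrustedIdentityTokenIssuer $issuer

# Grant the group Read access on the site collection
$web = Get-SPWeb "https://spsites.contoso.local/"
Set-SPUser -Identity $groupClaim.ToEncodedString() -Web $web -AddPermissionLevel "Read"
```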
-### Grant access to a guest account to SharePoint on-premises in the Azure portal
+## Manage guest user access
-You can grant access to your SharePoint site to a guest account in a consistent way because the UPN now gets modified. For example, the user `jdoe@outlook.com` is represented as `jdoe_outlook.com#ext#@TENANT.onmicrosoft.com`. To share your site with external users, you need to add some modifications in your **User Attributes & Claims** section in the Azure portal.
+There are two types of guest accounts:
-1. In the Azure portal, select **Azure Active Directory** > **Enterprise applications**. Select the previously created enterprise application name, and select **Single sign-on**.
+- B2B guest accounts: These users are homed in an external Azure Active Directory tenant.
+- MSA guest accounts: These users are homed in a Microsoft identity provider (Hotmail, Outlook) or a social account provider (Google or similar).
-1. On the **Set up Single Sign-On with SAML** page, edit the **User Attributes & Claims** section.
+By default, Azure Active Directory sets both the "Unique User Identifier" and the claim "name" to the attribute `user.userprincipalname`.
+Unfortunately, this attribute is ambiguous for guest accounts, as the table below shows:
-1. In the **Required claim** zone, select **Unique User Identifier (Name ID)**.
+| Source attribute set in Azure AD | Actual property used by Azure AD for B2B guests | Actual property used by Azure AD for MSA guests | Property that SharePoint can rely on to validate the identity |
+|--|--|--|--|
+| `user.userprincipalname` | `mail`, for example: `guest@PARTNERTENANT` | `userprincipalname`, for example: `guest_outlook.com#EXT#@TENANT.onmicrosoft.com` | ambiguous |
+| `user.localuserprincipalname` | `userprincipalname`, for example: `guest_PARTNERTENANT#EXT#@TENANT.onmicrosoft.com` | `userprincipalname`, for example: `guest_outlook.com#EXT#@TENANT.onmicrosoft.com` | `userprincipalname` |
-1. Change the **Source Attribute** property to the value **user.localuserprincipalname**, and select **Save**.
+In conclusion, to ensure that all guest accounts are identified with the same attribute, the identifier claims of the enterprise application should be updated to use the attribute `user.localuserprincipalname` instead of `user.userprincipalname`.
- ![User Attributes & Claims initial Source Attribute](./media/sharepoint-on-premises-tutorial/manage-claim.png)
+### Update the application to use a consistent attribute for all guest users
-1. Using the ribbon, go back to **SAML-based Sign-on**. Now the **User Attributes & Claims** section looks like this:
+1. In the Overview of the Enterprise application `SharePoint corporate farm`, select **2. Set up single sign-on**.
+
+1. On the **Set up Single Sign-On with SAML** page, select the **Edit** icon in the **User Attributes & Claims** pane.
- ![User Attributes & Claims final](./media/sharepoint-on-premises-tutorial/user-attributes-claims-final.png)
+1. In the **User Attributes & Claims** section, follow these steps:
- > [!NOTE]
- > A surname and given name aren't required in this setup.
+ 1. Select **Unique User Identifier (Name ID)**, change its **Source Attribute** property to **user.localuserprincipalname**, and click **Save**.
+
+ 1. Select **http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name**, change its **Source Attribute** property to **user.localuserprincipalname**, and click **Save**.
+
+ 1. The **User Attributes & Claims** should look like this:
+
+ ![User Attributes & Claims for Guests](./media/sharepoint-on-premises-tutorial/azure-active-directory-claims-guests.png)
-1. In the Azure portal, on the leftmost pane, select **Azure Active Directory** and then select **Users**.
+### Invite guest users in SharePoint
-1. Select **New Guest User**.
+> [!NOTE]
+> This section assumes that the claims provider AzureCP is used.
-1. Select the **Invite User** option. Fill in the user properties, and select **Invite**.
+In the section above, you updated the enterprise application to use a consistent attribute for all guest accounts.
+Now, the AzureCP configuration needs to be updated to reflect that change and to use the attribute `userprincipalname` for guest accounts:
-1. You can now share the site with MyGuestAccount@outlook.com and permit this user to access it.
+1. Open the **SharePoint Central Administration** site.
+1. Under **Security**, select **AzureCP global configuration**.
+1. In the **User identifier property** section, set **User identifier for 'Guest' users** to **UserPrincipalName**.
+1. Click **OK**.
- ![Sharing a site with a guest account](./media/sharepoint-on-premises-tutorial/sharing-guest-account.png)
+![AzureCP guests accounts configuration](./media/sharepoint-on-premises-tutorial/sp-azurecp-attribute-for-guests.png)
-### Configure the trusted identity provider for multiple web applications
+You can now invite any guest user to the SharePoint sites.
-The configuration works for a single web application, but additional configuration is needed if you intend to use the same trusted identity provider for multiple web applications. For example, assume you extended a web application to use the URL `https://sales.contoso.com` and you now want to authenticate users to `https://marketing.contoso.com`. To do this, update the identity provider to honor the WReply parameter and update the application registration in Azure AD to add a reply URL.
+## Configure the federation for multiple web applications
-1. In the Azure portal, select **Azure Active Directory** > **Enterprise applications**. Select the previously created enterprise application name, and select **Single sign-on**.
+The configuration works for a single web application, but additional configuration is needed if you intend to use the same trusted identity provider for multiple web applications. For example, assume you have a separate web application `https://otherwebapp.contoso.local/` and you now want to enable Azure Active Directory authentication on it. To do this, configure SharePoint to pass the SAML WReply parameter, and add the URLs in the enterprise application.
-1. On the **Set up Single Sign-On with SAML** page, edit **Basic SAML Configuration**.
+### Configure SharePoint to pass the SAML WReply parameter
- ![Basic SAML Configuration](./media/sharepoint-on-premises-tutorial/add-reply-url.png)
+1. On the SharePoint server, open the SharePoint 201x Management Shell and run the following commands. Use the same name for the trusted identity token issuer as you used previously.
-1. For **Reply URL (Assertion Consumer Service URL)**, add the URL for the additional web applications and select **Save**.
+```powershell
+$t = Get-SPTrustedIdentityTokenIssuer "AzureADTrust"
+$t.UseWReplyParameter = $true
+$t.Update()
+```
- ![Edit the basic SAML configuration](./media/sharepoint-on-premises-tutorial/reply-url-for-web-application.png)
+### Add the URLs in the enterprise application
-1. On the SharePoint server, open the SharePoint 201x Management Shell and run the following commands. Use the name of the trusted identity token issuer that you used previously.
- ```
- $t = Get-SPTrustedIdentityTokenIssuer "AzureAD"
- $t.UseWReplyParameter=$true
- $t.Update()
- ```
-1. In **Central Administration**, go to the web application and enable the existing trusted identity provider.
+1. In the Azure portal, select **Azure Active Directory** > **Enterprise applications**. Select the previously created enterprise application name, and select **Single sign-on**.
-You might have other scenarios where you want to give access to your SharePoint on-premises instance for your internal users. For this scenario, you have to deploy Microsoft Azure Active Directory Connect to permit syncing your on-premises users with Azure AD. This setup is discussed in another article.
+1. On the **Set up Single Sign-On with SAML** page, edit **Basic SAML Configuration**.
-## Next Steps
+1. In the **Reply URL (Assertion Consumer Service URL)** section, add the URLs (for example, `https://otherwebapp.contoso.local/`) of all additional web applications that need to sign in users with Azure Active Directory, and click **Save**.
Once you configure SharePoint on-premises, you can enforce Session Control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+![Specify additional web applications](./media/sharepoint-on-premises-tutorial/azure-active-directory-app-reply-urls.png)
active-directory Zip Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zip-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Zip | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Zip.
+Last updated : 04/28/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Zip
+
+In this tutorial, you'll learn how to integrate Zip with Azure Active Directory (Azure AD). When you integrate Zip with Azure AD, you can:
+
+* Control in Azure AD who has access to Zip.
+* Enable your users to be automatically signed-in to Zip with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Zip single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Zip supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Adding Zip from the gallery
+
+To configure the integration of Zip into Azure AD, you need to add Zip from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zip** in the search box.
+1. Select **Zip** from the results panel, and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Zip
+
+Configure and test Azure AD SSO with Zip using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zip.
+
+To configure and test Azure AD SSO with Zip, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Zip SSO](#configure-zip-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Zip test user](#create-zip-test-user)** - to have a counterpart of B.Simon in Zip that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Zip** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | Reply URL |
+ ||
+ | `https://ziphq.com/saml/acs` |
+ | `https://<CUSTOMER_NAME>.ziphq.com/saml/acs` |
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.ziphq.com`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Reply URL and Sign-on URL. Contact [Zip Client support team](mailto:support@tryevergreen.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Zip** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zip.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Zip**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Zip SSO
+
+To configure single sign-on on the **Zip** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Zip support team](mailto:support@tryevergreen.com). They use them to configure the SAML SSO connection properly on both sides.
+
+### Create Zip test user
+
+In this section, you create a user called Britta Simon in Zip. Work with the [Zip support team](mailto:support@tryevergreen.com) to add the users to the Zip platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This redirects you to the Zip Sign-on URL, where you can initiate the login flow.
+
+* Go to the Zip Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Zip application for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Zip tile in My Apps, if the app is configured in SP mode, you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the Zip application for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Zip, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/troubleshooting.md
On Kubernetes versions **older than 1.15.0**, you may receive an error such as *
### Why do upgrades to Kubernetes 1.16 fail when using node labels with a kubernetes.io prefix
-As of Kubernetes [1.16](https://v1-16.docs.kubernetes.io/docs/setup/release/notes/) [only a defined subset of labels with the kubernetes.io prefix](https://v1-18.docs.kubernetes.io/docs/concepts/overview/working-with-objects/labels/) can be applied by the kubelet to nodes. AKS cannot remove active labels on your behalf without consent, as it may cause downtime to impacted workloads.
+As of Kubernetes 1.16 [only a defined subset of labels with the kubernetes.io prefix](https://v1-18.docs.kubernetes.io/docs/concepts/overview/working-with-objects/labels/) can be applied by the kubelet to nodes. AKS cannot remove active labels on your behalf without consent, as it may cause downtime to impacted workloads.
As a result, to mitigate this issue you can:
AKS is investigating the capability to mutate active labels on a node pool to im
<!-- LINKS - internal --> [view-master-logs]: ./view-control-plane-logs.md
-[cluster-autoscaler]: cluster-autoscaler.md
+[cluster-autoscaler]: cluster-autoscaler.md
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/upgrade-cluster.md
In addition to manually upgrading a cluster, you can set an auto-upgrade channel
| `patch`| automatically upgrade the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.17.9*.|
| `stable`| automatically upgrade the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.18.6*.|
| `rapid`| automatically upgrade the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster is at a version of Kubernetes that is at an *N-2* minor version where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on the *N-1* minor version. For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster first is upgraded to *1.18.6*, then is upgraded to *1.19.1*.|
+| `node-image`| automatically upgrade the node image to the latest version available.| Microsoft provides patches and new node images frequently (usually weekly), but your running nodes won't get the new images unless you do a node image upgrade. Turning on the node-image channel automatically updates your node images whenever a new version is available. |
> [!NOTE]
> Cluster auto-upgrade only updates to GA versions of Kubernetes and will not update to preview versions.
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
automation Automation Scenario Using Watcher Task https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-scenario-using-watcher-task.md
+
+ Title: Track updated files with an Azure Automation watcher task
+description: This article tells how to create a watcher task in the Azure Automation account to watch for new files created in a folder.
+Last updated : 12/17/2020
+# Track updated files with a watcher task
+
+Azure Automation uses a watcher task to look for events and trigger actions with PowerShell runbooks. The watcher task contains two parts, the watcher and the action. A watcher runbook runs at an interval defined in the watcher task, and outputs data to an action runbook.
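The watcher/action pairing can be sketched as two minimal PowerShell runbooks. This is an illustrative outline only, not the runbooks used later in this article: the file path and the parameter name `$EventData` are examples, and `Invoke-AutomationWatcherAction` is the cmdlet a watcher runbook uses to hand data to its action runbook.

```powershell
# --- Watcher runbook: runs on the hybrid worker at the interval set in the watcher task ---
$newFile = "C:\watched\report.txt"   # example: an event the watcher detected
Invoke-AutomationWatcherAction -Message "New file found" -Data $newFile

# --- Action runbook: triggered by the watcher task, receives the data above ---
param (
    [object] $EventData
)
Write-Output "Processing: $EventData"
```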
+
+> [!NOTE]
+> Watcher tasks are not supported in Azure China 21Vianet.
+
+> [!IMPORTANT]
+> Starting in May 2020, using Azure Logic Apps is the recommended and supported way to monitor for events, schedule recurring tasks, and trigger actions. See [Schedule and run recurring automated tasks, processes, and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
+
+This article walks you through creating a watcher task to monitor when a new file is added to a directory. You learn how to:
+
+* Import a watcher runbook
+* Create an Automation variable
+* Create an action runbook
+* Create a watcher task
+* Trigger a watcher
+* Inspect the output
+
+## Prerequisites
+
+To complete this article, the following are required:
+
+* Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* [Automation account](./index.yml) to hold the watcher and action runbooks and the Watcher Task.
+* A [hybrid runbook worker](automation-hybrid-runbook-worker.md) where the watcher task runs.
+* PowerShell runbooks. PowerShell Workflow runbooks aren't supported by watcher tasks.
+
+## Import a watcher runbook
+
+This article uses a watcher runbook called **Watcher runbook that looks for new files in a directory** to look for new files in a directory. The watcher runbook retrieves the last known write time to the files in a folder and looks at any files newer than that watermark.
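The watermark pattern this runbook implements can be sketched as follows. This is a simplified outline, not the published runbook's exact code: the folder path is an example, and the variable name matches the `Watch-NewFileTimestamp` Automation variable created later in this article.

```powershell
# Read the last processed timestamp (the watermark) from an Automation variable
$watermark = Get-AutomationVariable -Name "Watch-NewFileTimestamp"

# Find files written after the watermark
$newFiles = Get-ChildItem -Path "C:\watched" -File |
    Where-Object { $_.LastWriteTime -gt $watermark }

foreach ($file in $newFiles) {
    # Hand each new file to the action runbook
    Invoke-AutomationWatcherAction -Message "New file: $($file.FullName)" -Data $file.FullName
}

# Advance the watermark so each file is only processed once
if ($newFiles) {
    $newest = ($newFiles | Measure-Object -Property LastWriteTime -Maximum).Maximum
    Set-AutomationVariable -Name "Watch-NewFileTimestamp" -Value $newest
}
```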
+
+You can import this runbook into your Automation account from the portal using the following steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **Automation Accounts**.
+1. On the **Automation Accounts** page, select the name of your Automation account from the list.
+1. In the left pane, select **Runbooks gallery** under **Process Automation**.
+1. Make sure **GitHub** is selected in the **Source** drop-down list.
+1. Search for **Watcher runbook**.
+1. Select **Watcher runbook that looks for new files in a directory**, and select **Import** on the details page.
+1. Give the runbook a name and optionally a description and click **OK** to import the runbook into your Automation account. You should see an **Import successful** message in a pane at the upper right of your window.
+1. When you select **Runbooks** from the left pane, the imported runbook appears in the list under the name you gave it.
+1. Click on the runbook, and on the runbook details page, select **Edit** and then click **Publish**. When prompted, click **Yes** to publish the runbook.
+
+You can also download the runbook from the [Azure Automation GitHub organization](https://github.com/azureautomation).
+
+1. Navigate to the Azure Automation GitHub organization page for [Watch-NewFile.ps1](https://github.com/azureautomation/watcher-runbook-that-looks-for-new-files-in-a-directory#watcher-runbook-that-looks-for-new-files-in-a-directory).
+1. To download the runbook from GitHub, select **Code** from the right-hand side of the page, and then select **Download ZIP** to download the whole code in a zip file.
+1. Extract the contents and [import the runbook](manage-runbooks.md#import-a-runbook-from-the-azure-portal).
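+If you prefer to script the import, the steps above can be sketched with the Az.Automation module. This is a minimal sketch; the file path, resource group, and account names are placeholders, not values from this article:
+
+```azurepowershell-interactive
+# Import the downloaded watcher runbook and publish it (names are examples)
+$importParams = @{
+    Path                  = 'C:\Scripts\Watch-NewFile.ps1'
+    Name                  = 'Watch-NewFile'
+    Type                  = 'PowerShell'
+    Published             = $true
+    ResourceGroupName     = 'MyResourceGroup'
+    AutomationAccountName = 'MyAutomationAccount'
+}
+Import-AzAutomationRunbook @importParams
+```
+
+Setting `Published` avoids the separate edit-and-publish step in the portal.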
+
+## Create an Automation variable
+
+An [Automation variable](./shared-resources/variables.md) is used to store the timestamp that the preceding runbook reads and updates each time it runs.
+
+1. Select **Variables** under **Shared Resources** and click **+ Add a variable**.
+1. Enter **Watch-NewFileTimestamp** for the name.
+1. Select **DateTime** for the type. It will default to the current date and time.
+
+ :::image type="content" source="./media/automation-watchers-tutorial/create-new-variable.png" alt-text="Screenshot of creating a new variable blade.":::
+
+1. Click **Create** to create the Automation variable.
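+The same variable can also be created from PowerShell. A minimal sketch, assuming the resource group and account names shown are placeholders:
+
+```azurepowershell-interactive
+# Create the DateTime variable used as the file watermark (names are examples)
+$varParams = @{
+    Name                  = 'Watch-NewFileTimestamp'
+    Value                 = (Get-Date)
+    Encrypted             = $false
+    ResourceGroupName     = 'MyResourceGroup'
+    AutomationAccountName = 'MyAutomationAccount'
+}
+New-AzAutomationVariable @varParams
+```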
+
+## Create an action runbook
+
+An action runbook is used in a watcher task to act on the data passed to it from a watcher runbook. You must import a predefined action runbook, either from the Azure portal or from the [Azure Automation GitHub organization](https://github.com/azureautomation).
+
+You can import this runbook into your Automation account from the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **Automation Accounts**.
+1. On the **Automation Accounts** page, select the name of your Automation account from the list.
+1. In the left pane, select **Runbooks gallery** under **Process Automation**.
+1. Make sure **GitHub** is selected in the **Source** drop-down list.
+1. Search for **Watcher action**, select **Watcher action that processes events triggered by a watcher runbook**, and click **Import**.
+1. Optionally, change the name of the runbook on the import page, and then click **OK** to import the runbook. You should see an **Import successful** message in the notification pane in the upper right-hand side of the browser.
+1. Go to your Automation Account page, and click on **Runbooks** on the left. Your new runbook should be listed under the name you gave it in the previous step. Click on the runbook, and on the runbook details page, select **Edit** and then click **Publish**. When prompted, click **Yes** to publish the runbook.
+
+To create an action runbook by downloading it from the [Azure Automation GitHub organization](https://github.com/azureautomation):
+
+1. Navigate to the Azure Automation GitHub organization page for [Process-NewFile.ps1](https://github.com/azureautomation/watcher-action-that-processes-events-triggerd-by-a-watcher-runbook).
+1. To download the runbook from GitHub, select **Code** from the right-hand side of the page, and then select **Download ZIP** to download the whole code in a zip file.
+1. Extract the contents and [import the runbook](manage-runbooks.md#import-a-runbook-from-the-azure-portal).
+
+## Create a watcher task
+
+In this step, you configure the watcher task referencing the watcher and action runbooks defined in the preceding sections.
+
+1. Navigate to your Automation account and select **Watcher tasks** under **Process Automation**.
+1. On the **Watcher tasks** page, click **+ Add a watcher task**.
+1. Enter **WatchMyFolder** as the name.
+
+1. Select **Configure watcher** and choose the **Watch-NewFile** runbook.
+
+1. Enter the following values for the parameters:
+
+ * **FOLDERPATH** - A folder on the Hybrid Runbook Worker where new files get created, for example, **d:\examplefiles**.
    * **EXTENSION** - File extension filter. Leave blank to process all file extensions.
+ * **RECURSE** - Recursive operation. Leave this value as the default.
+ * **RUN SETTINGS** - Setting for running the runbook. Pick the hybrid worker.
+
+1. Click **OK**, and then **Select** to return to the Watcher page.
+1. Select **Configure action** and choose the **Process-NewFile** runbook.
+1. Enter the following values for the parameters:
+
+ * **EVENTDATA** - Event data. Leave blank. Data is passed in from the watcher runbook.
+ * **Run Settings** - Setting for running the runbook. Leave as Azure, as this runbook runs in Azure Automation.
+
+1. Click **OK**, and then **Select** to return to the Watcher page.
+1. Click **OK** to create the watcher task.
+
+ :::image type="content" source="./media/automation-watchers-tutorial/watchertaskcreation.png" alt-text="Screenshot of configuring watcher action in the Azure portal.":::
+
+## Trigger a watcher
+
+You must run a test as described below to ensure that the watcher task works as expected.
+
+1. Remote into the Hybrid Runbook Worker.
+1. Open **PowerShell** and create a test file in the folder.
+
+```azurepowershell-interactive
+New-Item -Name ExampleFile1.txt
+```
+
+The following example shows the expected output.
+
+```output
+    Directory: D:\examplefiles
+
+Mode                LastWriteTime         Length Name
+----                -------------         ------ ----
+-a----       12/11/2017   9:05 PM              0 ExampleFile1.txt
+```
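+After the watcher picks up the file, you can confirm that the watermark variable advanced. A minimal sketch, assuming the variable name from this article and placeholder account names:
+
+```azurepowershell-interactive
+# Read back the watermark variable (account names are examples)
+$varParams = @{
+    Name                  = 'Watch-NewFileTimestamp'
+    ResourceGroupName     = 'MyResourceGroup'
+    AutomationAccountName = 'MyAutomationAccount'
+}
+(Get-AzAutomationVariable @varParams).Value
+```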
+
+## Inspect the output
+
+1. Navigate to your Automation account and select **Watcher tasks** under **Process Automation**.
+1. Select the watcher task **WatchMyFolder**.
+1. Click on **View watcher streams** under **Streams** to see that the watcher has found the new file and started the action runbook.
+1. To see the action runbook jobs, click on **View watcher action jobs**. Each job can be selected to view the details of the job.
+
+    :::image type="content" source="./media/automation-watchers-tutorial/WatcherActionJobs.png" alt-text="Screenshot of watcher action jobs from the Azure portal.":::
+
+The expected output when the new file is found can be seen in the following example:
+
+```output
+Message is Process new file...
+
+Passed in data is @{FileName=D:\examplefiles\ExampleFile1.txt; Length=0}
+```
+
+## Next steps
+
+To learn more about authoring your own runbook, see [Create a PowerShell runbook](learn/automation-tutorial-runbook-textual-powershell.md).
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/change-tracking/overview.md
Title: Azure Automation Change Tracking and Inventory overview
description: This article describes the Change Tracking and Inventory feature, which helps you identify software and Microsoft service changes in your environment. Previously updated : 01/22/2021 Last updated : 05/04/2021
Change Tracking and Inventory doesn't support, or has limitations with, the following:
- Collecting Hotfix updates on Windows Server 2016 Core RS3 machines. - Linux daemons might show a changed state even though no change has occurred. This issue arises because of the way the `SvcRunLevels` data in the Azure Monitor [ConfigurationChange](/azure/azure-monitor/reference/tables/configurationchange) table is written.
- Collecting Hotfix updates on Windows Server 2016 Core RS3 machines. - Linux daemons might show a changed state even though no change has occurred. This issue arises because of the way the `SvcRunLevels` data in the Azure Monitor [ConfigurationChange](/azure/azure-monitor/reference/tables/configurationchange) table is written.
+## Limits
+
+For limits that apply to Change Tracking and Inventory, see [Azure Automation service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#change-tracking-and-inventory).
+ ## Supported operating systems Change Tracking and Inventory is supported on all operating systems that meet Log Analytics agent requirements. See [supported operating systems](../../azure-monitor/agents/agents-overview.md#supported-operating-systems) for a list of the Windows and Linux operating system versions that are currently supported by the Log Analytics agent.
automation Manage Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/manage-runbooks.md
Title: Manage runbooks in Azure Automation
description: This article tells how to manage runbooks in Azure Automation. Previously updated : 04/22/2021 Last updated : 05/03/2021
Create a new runbook in Azure Automation using the Azure portal or Windows Power
### Create a runbook in the Azure portal
-1. In the Azure portal, open your Automation account.
-2. From the hub, select **Runbooks** under **Process Automation** to open the list of runbooks.
-3. Click **Create a runbook**.
-4. Enter a name for the runbook and select its [type](automation-runbook-types.md). The runbook name must start with a letter and can contain letters, numbers, underscores, and dashes.
-5. Click **Create** to create the runbook and open the editor.
+1. Sign in to the Azure [portal](https://portal.azure.com).
+1. Search for and select **Automation Accounts**.
+1. On the **Automation Accounts** page, select your Automation account from the list.
+1. From the Automation account, select **Runbooks** under **Process Automation** to open the list of runbooks.
+1. Click **Create a runbook**.
+1. Enter a name for the runbook and select its [type](automation-runbook-types.md). The runbook name must start with a letter and can contain letters, numbers, underscores, and dashes.
+1. Click **Create** to create the runbook and open the editor.
### Create a runbook with PowerShell
New-AzAutomationRunbook @params
## Import a runbook
-You can import a PowerShell or PowerShell Workflow (**.ps1**) script, a graphical runbook (**.graphrunbook**), or a Python 2 or Python 3 script (**.py**) to make your own runbook. You must specify the [type of runbook](automation-runbook-types.md) that is created during import, taking into account the following considerations.
+You can import a PowerShell or PowerShell Workflow (**.ps1**) script, a graphical runbook (**.graphrunbook**), or a Python 2 or Python 3 script (**.py**) to make your own runbook. You specify the [type of runbook](automation-runbook-types.md) that is created during import, taking into account the following considerations.
* You can import a **.ps1** file that doesn't contain a workflow into either a [PowerShell runbook](automation-runbook-types.md#powershell-runbooks) or a [PowerShell Workflow runbook](automation-runbook-types.md#powershell-workflow-runbooks). If you import it into a PowerShell Workflow runbook, it is converted to a workflow. In this case, comments are included in the runbook to describe the changes made.
-* You can import only a **.ps1** file containing a PowerShell Workflow into a [PowerShell Workflow runbook](automation-runbook-types.md#powershell-workflow-runbooks). If the file contains multiple PowerShell workflows, the import fails. You must save each workflow to its own file and import each separately.
+* You can import only a **.ps1** file containing a PowerShell Workflow into a [PowerShell Workflow runbook](automation-runbook-types.md#powershell-workflow-runbooks). If the file contains multiple PowerShell workflows, the import fails. You have to save each workflow to its own file and import each separately.
* Do not import a **.ps1** file containing a PowerShell Workflow into a [PowerShell runbook](automation-runbook-types.md#powershell-runbooks), as the PowerShell script engine can't recognize it.
You can use the following procedure to import a script file into Azure Automatio
> [!NOTE] > You can only import a **.ps1** file into a PowerShell Workflow runbook using the portal.
-1. In the Azure portal, open your Automation account.
-2. Select **Runbooks** under **Process Automation** to open the list of runbooks.
-3. Click **Import a runbook**.
-4. Click **Runbook file** and select the file to import.
-5. If the **Name** field is enabled, you have the option of changing the runbook name. The name must start with a letter and can contain letters, numbers, underscores, and dashes.
-6. The [runbook type](automation-runbook-types.md) is automatically selected, but you can change the type after taking the applicable restrictions into account.
-7. Click **Create**. The new runbook appears in the list of runbooks for the Automation account.
-8. You must [publish the runbook](#publish-a-runbook) before you can run it.
+1. In the Azure portal, search for and select **Automation Accounts**.
+1. On the **Automation Accounts** page, select your Automation account from the list.
+1. From the Automation account, select **Runbooks** under **Process Automation** to open the list of runbooks.
+1. Click **Import a runbook**.
+1. Click **Runbook file** and select the file to import.
+1. If the **Name** field is enabled, you have the option of changing the runbook name. The name must start with a letter and can contain letters, numbers, underscores, and dashes.
+1. The [runbook type](automation-runbook-types.md) is automatically selected, but you can change the type after taking the applicable restrictions into account.
+1. Click **Create**. The new runbook appears in the list of runbooks for the Automation account.
+1. You have to [publish the runbook](#publish-a-runbook) before you can run it.
> [!NOTE] > After you import a graphical runbook, you can convert it to another type. However, you can't convert a graphical runbook to a textual runbook.
if (-not $vmExists) {
You can retrieve runbook details, such as the person or account that started a runbook, from the [Activity log](automation-runbook-execution.md#activity-logging) for the Automation account. The following PowerShell example provides the last user to run the specified runbook. ```powershell-interactive
-$SubID = '00000000-0000-0000-0000-000000000000'
-$AutoRgName = 'MyResourceGroup'
-$aaName = 'MyAutomationAccount'
-$RunbookName = 'MyRunbook'
-$StartTime = (Get-Date).AddDays(-1)
+$rgName = 'MyResourceGroup'
+$accountName = 'MyAutomationAccount'
+$runbookName = 'MyRunbook'
+$startTime = (Get-Date).AddDays(-1)
$params = @{
- ResourceGroupName = $AutoRgName
- StartTime = $StartTime
+ ResourceGroupName = $rgName
+ StartTime = $startTime
} $JobActivityLogs = (Get-AzLog @params).Where( { $_.Authorization.Action -eq 'Microsoft.Automation/automationAccounts/jobs/write' })
foreach ($log in $JobActivityLogs) {
# Get job resource $JobResource = Get-AzResource -ResourceId $log.ResourceId
- if ($null -eq $JobInfo[$log.SubmissionTimestamp] -and $JobResource.Properties.Runbook.Name -eq $RunbookName) {
+ if ($null -eq $JobInfo[$log.SubmissionTimestamp] -and $JobResource.Properties.Runbook.Name -eq $runbookName) {
# Get runbook $jobParams = @{
- ResourceGroupName = $AutoRgName
- AutomationAccountName = $aaName
+ ResourceGroupName = $rgName
+ AutomationAccountName = $accountName
Id = $JobResource.Properties.JobId }
- $Runbook = Get-AzAutomationJob @jobParams | Where-Object RunbookName -EQ $RunbookName
+ $Runbook = Get-AzAutomationJob @jobParams | Where-Object RunbookName -EQ $runbookName
# Add job information to hashtable
- $JobInfo.Add($log.SubmissionTimestamp, @($Runbook.RunbookName,$Log.Caller, $JobResource.Properties.jobId))
+ $JobInfo.Add($log.SubmissionTimestamp, @($Runbook.RunbookName, $Log.Caller, $JobResource.Properties.jobId))
} } $JobInfo.GetEnumerator() | Sort-Object Key -Descending | Select-Object -First 1
$cnParams = @{
CertificateThumbprint = $connection.CertificateThumbprint } Connect-AzAccount @cnParams
-$AzureContext = Get-AzSubscription -SubscriptionId $connection.SubscriptionID
+$AzureContext = Set-AzContext -SubscriptionId $connection.SubscriptionID
# Check for already running or new runbooks
-$runbookName = "<RunbookName>"
-$rgName = "<ResourceGroupName>"
-$aaName = "<AutomationAccountName>"
-$jobs = Get-AzAutomationJob -ResourceGroupName $rgName -AutomationAccountName $aaName -RunbookName $runbookName -AzContext $AzureContext
+$runbookName = "RunbookName"
+$rgName = "ResourceGroupName"
+$accountName = "AutomationAccountName"
+$jobs = Get-AzAutomationJob -ResourceGroupName $rgName -AutomationAccountName $accountName -RunbookName $runbookName -AzContext $AzureContext
# Check to see if it is already running $runningCount = ($jobs.Where( { $_.Status -eq 'Running' })).count
Alternatively, you can use PowerShell's splatting feature to pass the connection
# Authenticate to Azure $connection = Get-AutomationConnection -Name AzureRunAsConnection Connect-AzAccount @connection
-$AzureContext = Get-AzSubscription -SubscriptionId $connection.SubscriptionID
+$AzureContext = Set-AzContext -SubscriptionId $connection.SubscriptionID
```
-See [About Splatting](/powershell/module/microsoft.powershell.core/about/about_splatting) for more information.
+For more information, see [about splatting](/powershell/module/microsoft.powershell.core/about/about_splatting).
## Handle transient errors in a time-dependent script
If your runbook normally runs within a time constraint, have the script implemen
## Work with multiple subscriptions
-Your runbook must be able to work with [subscriptions](automation-runbook-execution.md#subscriptions). For example, to handle multiple subscriptions, the runbook uses the [Disable-AzContextAutosave](/powershell/module/Az.Accounts/Disable-AzContextAutosave) cmdlet. This cmdlet ensures that the authentication context isn't retrieved from another runbook running in the same sandbox. The runbook also uses the `Get-AzContext` cmdlet to retrieve the context of the current session, and assign it to the variable `$AzureContext`.
+Your runbook must be able to work with [subscriptions](automation-runbook-execution.md#subscriptions). For example, to handle multiple subscriptions, the runbook uses the [Disable-AzContextAutosave](/powershell/module/Az.Accounts/Disable-AzContextAutosave) cmdlet. This cmdlet ensures that the authentication context isn't retrieved from another runbook running in the same sandbox.
```powershell Disable-AzContextAutosave -Scope Process
$cnParams = @{
} Connect-AzAccount @cnParams
-$ChildRunbookName = 'ChildRunbookDemo'
-$aaName = 'MyAutomationAccount'
+$childRunbookName = 'ChildRunbookDemo'
+$accountName = 'MyAutomationAccount'
$rgName = 'MyResourceGroup' $startParams = @{ ResourceGroupName = $rgName
- AutomationAccountName = $aaName
- Name = $ChildRunbookName
+ AutomationAccountName = $accountName
+ Name = $childRunbookName
DefaultProfile = $AzureContext } Start-AzAutomationRunbook @startParams
Start-AzAutomationRunbook @startParams
To use a custom script:
-1. Create an Automation account and obtain a [Contributor role](automation-role-based-access-control.md).
-2. [Link the account to the Azure workspace](../security-center/security-center-enable-data-collection.md).
-3. Enable [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md), [Update Management](./update-management/overview.md), or another Automation feature.
-4. If on a Linux machine, you need high permissions. Log in to [turn off signature checks](automation-linux-hrw-install.md#turn-off-signature-validation).
+1. Create an Automation account.
+2. Deploy the [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md) role.
+3. If on a Linux machine, you need elevated privileges. Sign in to [turn off signature checks](automation-linux-hrw-install.md#turn-off-signature-validation).
## Test a runbook
-When you test a runbook, the [Draft version](#publish-a-runbook) is executed and any actions that it performs are completed. No job history is created, but the [output](automation-runbook-output-and-messages.md#use-the-output-stream) and [warning and error](automation-runbook-output-and-messages.md#working-with-message-streams) streams are displayed in the Test output pane. Messages to the [verbose stream](automation-runbook-output-and-messages.md#write-output-to-verbose-stream) are displayed in the Output pane only if the [VerbosePreference](automation-runbook-output-and-messages.md#work-with-preference-variables) variable is set to `Continue`.
+When you test a runbook, the [Draft version](#publish-a-runbook) is executed and any actions that it performs are completed. No job history is created, but the [output](automation-runbook-output-and-messages.md#use-the-output-stream) and [warning and error](automation-runbook-output-and-messages.md#working-with-message-streams) streams are displayed in the **Test output** pane. Messages to the [verbose stream](automation-runbook-output-and-messages.md#write-output-to-verbose-stream) are displayed in the Output pane only if the [VerbosePreference](automation-runbook-output-and-messages.md#work-with-preference-variables) variable is set to `Continue`.
-Even though the draft version is being run, the runbook still executes normally and performs any actions against resources in the environment. For this reason, you should only test runbooks on non-production resources.
+Even though the Draft version is being run, the runbook still executes normally and performs any actions against resources in the environment. For this reason, you should only test runbooks on non-production resources.
The procedure to test each [type of runbook](automation-runbook-types.md) is the same. There's no difference in testing between the textual editor and the graphical editor in the Azure portal. 1. Open the Draft version of the runbook in either the [textual editor](automation-edit-textual-runbook.md) or the [graphical editor](automation-graphical-authoring-intro.md).
-1. Click **Test** to open the Test page.
+1. Click **Test** to open the **Test** page.
1. If the runbook has parameters, they're listed in the left pane, where you can provide values to be used for the test.
-1. If you want to run the test on a [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md), change **Run Settings** to **Hybrid Worker** and select the name of the target group. Otherwise, keep the default **Azure** to run the test in the cloud.
+1. If you want to run the test on a [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md), change **Run Settings** to **Hybrid Worker** and select the name of the target group. Otherwise, keep the default **Azure** to run the test in the cloud.
1. Click **Start** to begin the test.
-1. You can use the buttons under the Output pane to stop or suspend a [PowerShell Workflow](automation-runbook-types.md#powershell-workflow-runbooks) or [graphical](automation-runbook-types.md#graphical-runbooks) runbook while it's being tested. When you suspend the runbook, it completes the current activity before being suspended. Once the runbook is suspended, you can stop it or restart it.
-1. Inspect the output from the runbook in the Output pane.
+1. You can use the buttons under the **Output** pane to stop or suspend a [PowerShell Workflow](automation-runbook-types.md#powershell-workflow-runbooks) or [graphical](automation-runbook-types.md#graphical-runbooks) runbook while it's being tested. When you suspend the runbook, it completes the current activity before being suspended. Once the runbook is suspended, you can stop it or restart it.
+1. Inspect the output from the runbook in the **Output** pane.
## Publish a runbook
-When you create or import a new runbook, you must publish it before you can run it. Each runbook in Azure Automation has a Draft version and a Published version. Only the Published version is available to be run, and only the Draft version can be edited. The Published version is unaffected by any changes to the Draft version. When the Draft version should be made available, you publish it, overwriting the current Published version with the Draft version.
+When you create or import a new runbook, you have to publish it before you can run it. Each runbook in Azure Automation has a Draft version and a Published version. Only the Published version is available to be run, and only the Draft version can be edited. The Published version is unaffected by any changes to the Draft version. When the Draft version should be made available, you publish it, overwriting the current Published version with the Draft version.
### Publish a runbook in the Azure portal
-1. Open the runbook in the Azure portal.
+1. From the Azure portal, open the runbook in your Automation account.
2. Click **Edit**. 3. Click **Publish** and then **Yes** in response to the verification message.
When you create or import a new runbook, you must publish it before you can run
Use the [Publish-AzAutomationRunbook](/powershell/module/Az.Automation/Publish-AzAutomationRunbook) cmdlet to publish your runbook. ```azurepowershell-interactive
-$aaName = "MyAutomationAccount"
-$RunbookName = "Sample_TestRunbook"
+$accountName = "MyAutomationAccount"
+$runbookName = "Sample_TestRunbook"
$rgName = "MyResourceGroup" $publishParams = @{
- AutomationAccountName = $aaName
+ AutomationAccountName = $accountName
ResourceGroupName = $rgName
- Name = $RunbookName
+ Name = $runbookName
} Publish-AzAutomationRunbook @publishParams ```
Publish-AzAutomationRunbook @publishParams
When your runbook has been published, you can schedule it for operation:
-1. Open the runbook in the Azure portal.
+1. From the Azure portal, open the runbook in your Automation account.
2. Select **Schedules** under **Resources**. 3. Select **Add a schedule**. 4. In the Schedule Runbook pane, select **Link a schedule to your runbook**.
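The portal steps above can also be scripted with the Az.Automation cmdlets. The following sketch uses example names and a daily recurrence; adjust the values for your environment:

```azurepowershell-interactive
$rgName      = 'MyResourceGroup'
$accountName = 'MyAutomationAccount'

# Create a daily schedule starting tomorrow (example values)
$scheduleParams = @{
    Name                  = 'DailySchedule'
    StartTime             = (Get-Date).AddDays(1)
    DayInterval           = 1
    ResourceGroupName     = $rgName
    AutomationAccountName = $accountName
}
New-AzAutomationSchedule @scheduleParams

# Link the schedule to the published runbook
$linkParams = @{
    RunbookName           = 'Sample_TestRunbook'
    ScheduleName          = 'DailySchedule'
    ResourceGroupName     = $rgName
    AutomationAccountName = $accountName
}
Register-AzAutomationScheduledRunbook @linkParams
```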
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/deploy-updates.md
To schedule a new update deployment, perform the following steps. Depending on t
> [!NOTE] > This option is not available if you selected an Azure VM or Arc enabled server. The machine is automatically targeted for the scheduled deployment.
+ > [!IMPORTANT]
+ > When building a dynamic group of Azure VMs, Update Management supports a maximum of 500 queries that combine subscriptions or resource groups in the scope of the group.
+ 6. In the **Machines to update** region, select a saved search, an imported group, or pick **Machines** from the dropdown menu and select individual machines. With this option, you can see the readiness of the Log Analytics agent for each machine. To learn about the different methods of creating computer groups in Azure Monitor logs, see [Computer groups in Azure Monitor logs](../../azure-monitor/logs/computer-groups.md). You can include a maximum of 1,000 machines in a scheduled update deployment. > [!NOTE]
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/overview.md
Title: Azure Automation Update Management overview
description: This article provides an overview of the Update Management feature that implements updates for your Windows and Linux machines. Previously updated : 04/01/2021 Last updated : 05/04/2021
At the date and time specified in the update deployment, the target machines exe
Having a machine registered for Update Management in more than one Log Analytics workspace (also referred to as multihoming) isn't supported.
+## Limits
+
+For limits that apply to Update Management, see [Azure Automation service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#update-management).
+ ## Clients ### Supported operating systems
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-app-configuration Pull Key Value Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/pull-key-value-devops-pipeline.md
The [Azure App Configuration](https://marketplace.visualstudio.com/items?itemNam
## Create a service connection
-A [service connection](/azure/devops/pipelines/library/service-endpoints) allows you to access resources in your Azure subscription from your Azure DevOps project.
-
-1. In Azure DevOps, go to the project containing your target pipeline and open the **Project settings** at the bottom left.
-1. Under **Pipelines** select **Service connections**.
-1. If you don't have any existing service connections, click the **Create service connection** button in the middle of the screen. Otherwise, click **New service connection** in the top right of the page.
-1. Select **Azure Resource Manager**.
-![Screenshot shows selecting Azure Resource Manager from the New service connection dropdown list.](./media/new-service-connection.png)
-1. In the **Authentication method** dialog, select **Service principal (automatic)**.
- > [!NOTE]
- > **Managed identity** authentication is currently unsupported for the App Configuration task.
-1. Fill in your subscription and resource. Give your service connection a name.
-
-Now that your service connection is created, find the name of the service principal assigned to it. You'll add a new role assignment to this service principal in the next step.
-
-1. Go to **Project Settings** > **Service connections**.
-1. Select the service connection that you created in the previous section.
-1. Select **Manage Service Principal**.
-1. Note the **Display name** listed.
## Add role assignment
-Assign the proper App Configuration role to the service connection being used within the task so that the task can access the App Configuration store.
-1. Navigate to your target App Configuration store. For a walkthrough of setting up an App Configuration store, see [Create an App Configuration store](./quickstart-dotnet-core-app.md#create-an-app-configuration-store) in one of the Azure App Configuration quickstarts.
-1. On the left, select **Access control (IAM)**.
-1. On the right side, click the **Add role assignments** button.
-![Screenshot shows the Add role assignments button.](./media/add-role-assignment-button.png).
-1. Under **Role**, select **App Configuration Data Reader**. This role allows the task to read from the App Configuration store.
-1. Select the service principal associated with the service connection that you created in the previous section.
-![Screenshot shows the Add role assignment dialog.](./media/add-role-assignment-reader.png)
-
-> [!NOTE]
-> To resolve Azure Key Vault references within App Configuration, the service connection must also be granted permission to read secrets in the referenced Azure Key Vaults.
-
## Use in builds This section will cover how to use the Azure App Configuration task in an Azure DevOps build pipeline.
azure-app-configuration Push Kv Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/push-kv-devops-pipeline.md
The [Azure App Configuration Push](https://marketplace.visualstudio.com/items?it
## Create a service connection
-A [service connection](/azure/devops/pipelines/library/service-endpoints) allows you to access resources in your Azure subscription from your Azure DevOps project.
-
-1. In Azure DevOps, go to the project containing your target pipeline and open the **Project settings** at the bottom left.
-1. Under **Pipelines** select **Service connections** and select **New service connection** in the top right.
-1. Select **Azure Resource Manager**.
-![Screenshot shows selecting Azure Resource Manager from the New service connection dropdown list.](./media/new-service-connection.png)
-1. In the **Authentication method** dialog, select **Service principal (automatic)** to create a new service principal or select **Service principal (manual)** to [use an existing service principal](/azure/devops/pipelines/library/connect-to-azure?view=azure-devops#use-spn).
-1. Fill in your subscription and resource. Give your service connection a name.
-
-If you created a new service principal, find the name of the service principal assigned to the service connection. You'll add a new role assignment to this service principal in the next step.
-
-1. Go to **Project Settings** > **Service connections**.
-1. Select the service connection that you created in the previous section.
-1. Select **Manage Service Principal**.
-1. Note the **Display name** listed.
-![Screenshot shows the service principal display name.](./media/service-principal-display-name.png)
## Add role assignment
-Assign the proper App Configuration role assignments to the credentials being used within the task so that the task can access the App Configuration store.
-
-1. Navigate to your target App Configuration store.
-1. On the left, select **Access control (IAM)**.
-1. On the right side, click the **Add role assignments** button.
-![Screenshot shows the Add role assignments button.](./media/add-role-assignment-button.png)
-1. Under **Role**, select **App Configuration Data Owner**. This role allows the task to read from and write to the App Configuration store.
-1. Select the service principal associated with the service connection that you created in the previous section.
-![Screenshot shows the Add role assignment dialog.](./media/add-role-assignment.png)
-
## Use in builds
This section covers how to use the Azure App Configuration Push task in an Azure DevOps build pipeline.
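As a rough sketch of what the pipeline step can look like, here is a minimal YAML fragment. The task name, version, and input names are assumptions; confirm them in the pipeline task assistant before use:

```yaml
# Sketch only: task name and inputs are assumptions, not verified values.
steps:
- task: AzureAppConfigurationPush@3
  inputs:
    azureSubscription: '<service-connection-name>'      # the service connection created earlier
    AppConfigurationEndpoint: 'https://<store-name>.azconfig.io'
    ConfigurationFile: 'appsettings.json'               # file containing the key-values to push
```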
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-arc Deploy Data Controller Direct Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/deploy-data-controller-direct-mode.md
Previously updated : 04/06/2021 Last updated : 05/04/2021
$ENV:location="<Azure location>"
#### Linux
```bash
-az k8s-extension create -c ${resourceName} -g ${resourceGroup} --name ${ADSExtensionName} --cluster-type connectedClusters --extension-type microsoft.arcdataservices --version "1.0.015564" --auto-upgrade false --scope cluster --release-namespace arc --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper
+az k8s-extension create -c ${resourceName} -g ${resourceGroup} --name ${ADSExtensionName} --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace arc --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper
az k8s-extension show -g ${resourceGroup} -c ${resourceName} --name ${ADSExtensionName} --cluster-type connectedclusters
```
az k8s-extension show -g ${resourceGroup} -c ${resourceName} --name ${ADSExtensi
```PowerShell
$ENV:ADSExtensionName="ads-extension"
-az k8s-extension create -c "$ENV:resourceName" -g "$ENV:resourceGroup" --name "$ENV:ADSExtensionName" --cluster-type connectedClusters --extension-type microsoft.arcdataservices --version "1.0.015564" --auto-upgrade false --scope cluster --release-namespace arc --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper
+az k8s-extension create -c "$ENV:resourceName" -g "$ENV:resourceGroup" --name "$ENV:ADSExtensionName" --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace arc --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper
az k8s-extension show -g "$ENV:resourceGroup" -c "$ENV:resourceName" --name "$ENV:ADSExtensionName" --cluster-type connectedclusters
```
export extensionId=$(az k8s-extension show -g ${resourceGroup} -c ${resourceName
az customlocation create -g ${resourceGroup} -n ${clName} --namespace ${clNamespace} \ --host-resource-id ${hostClusterId} \
- --cluster-extension-ids ${extensionId} --location eastus2euap
+ --cluster-extension-ids ${extensionId} --location eastus
```
#### Windows PowerShell
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
Previously updated : 04/29/2021 Last updated : 05/04/2021 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc enabled data services so that I can leverage the capability of the feature.
This section describes the new features introduced or enabled for this release.
- In direct connected mode, upload of usage, metrics, and logs using `azdata arc dc upload` is currently blocked. Usage is automatically uploaded. Upload for a data controller created in indirect connected mode should continue to work.
- Automatic upload of usage data in direct connectivity mode will not succeed if using a proxy via `--proxy-cert <path-to-cert-file>`.
- Azure Arc enabled SQL Managed Instance and Azure Arc enabled PostgreSQL Hyperscale are not GB18030 certified.
+- Currently, only one Azure Arc data controller in direct connected mode per Kubernetes cluster is supported.
#### Azure Arc enabled SQL Managed Instance
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021 #
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-cache-for-redis Cache Web App Arm With Redis Cache Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-web-app-arm-with-redis-cache-provision.md
The template creates the cache in the same location as the resource group.
```
-### Web app
+### Web app (Azure Cache for Redis)
Creates the web app with name specified in the **webSiteName** variable. Notice that the web app is configured with app setting properties that enable it to work with the Azure Cache for Redis. These app settings are dynamically created based on values provided during deployment.
Notice that the web app is configured with app setting properties that enable it
"type": "Microsoft.Web/sites", "location": "[resourceGroup().location]", "dependsOn": [
- "[concat('Microsoft.Web/serverFarms/', variables('hostingPlanName'))]",
- "[concat('Microsoft.Cache/Redis/', variables('cacheName'))]"
+ "[concat('Microsoft.Web/serverFarms/', variables('hostingPlanName'))]"
], "tags": { "[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', variables('hostingPlanName'))]": "empty",
Notice that the web app is configured with app setting properties that enable it
"[concat('Microsoft.Cache/Redis/', variables('cacheName'))]" ], "properties": {
- "CacheConnection": "[concat(variables('cacheName'),'.redis.cache.windows.net,abortConnect=false,ssl=true,password=', listKeys(resourceId('Microsoft.Cache/Redis', variables('cacheName')), '2015-08-01').primaryKey)]"
+ "CacheConnection": "[concat(variables('cacheHostName'),'.redis.cache.windows.net,abortConnect=false,ssl=true,password=', listKeys(resourceId('Microsoft.Cache/Redis', variables('cacheName')), '2015-08-01').primaryKey)]"
+ }
+ }
+ ]
+}
+```
++
+### Web app (RedisEnterprise)
+For RedisEnterprise, the resource types are slightly different, so the **listKeys** call is made differently:
+
+```json
+{
+ "apiVersion": "2015-08-01",
+ "name": "[variables('webSiteName')]",
+ "type": "Microsoft.Web/sites",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.Web/serverFarms/', variables('hostingPlanName'))]"
+ ],
+ "tags": {
+ "[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', variables('hostingPlanName'))]": "empty",
+ "displayName": "Website"
+ },
+ "properties": {
+ "name": "[variables('webSiteName')]",
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"
+ },
+ "resources": [
+ {
+ "apiVersion": "2015-08-01",
+ "type": "config",
+ "name": "appsettings",
+ "dependsOn": [
+ "[concat('Microsoft.Web/Sites/', variables('webSiteName'))]",
+        "[concat('Microsoft.Cache/RedisEnterprise/', variables('cacheName'), '/databases/default')]"
+ ],
+ "properties": {
+        "CacheConnection": "[concat(variables('cacheHostName'),',abortConnect=false,ssl=true,password=', listKeys(resourceId('Microsoft.Cache/RedisEnterprise/databases', variables('cacheName'), 'default'), '2020-03-01').primaryKey)]"
} } ]
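To make the intent of the template's `concat()` expression concrete, here is a small Python sketch of the connection string it assembles. The host name and key below are made-up placeholder values, not real credentials:

```python
# Illustrative sketch of the connection string the ARM template's concat()
# expression produces for a RedisEnterprise cache. Values are fake placeholders.
cache_host_name = "mycache.region.redisenterprise.cache.azure.net"
primary_key = "fake-primary-key"

# Mirrors: concat(variables('cacheHostName'), ',abortConnect=false,ssl=true,password=', listKeys(...).primaryKey)
connection = f"{cache_host_name},abortConnect=false,ssl=true,password={primary_key}"
print(connection)
```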
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
This section outlines variations and considerations when using Security services
The following features have known limitations in Azure Government:
-- Limitations with B2B collaboration in supported Azure Government tenants:
- - B2B collaboration is available in most Azure Government tenants created after June, 2019. Over time, more tenants will get access to this functionality. See [How can I tell if B2B collaboration is available in my Azure Government tenant?](../active-directory/external-identities/current-limitations.md#how-can-i-tell-if-b2b-collaboration-is-available-in-my-azure-us-government-tenant)
- - B2B collaboration is currently only supported between tenants that are both within Azure US Government cloud and that both support B2B collaboration. If you invite a user in a tenant that isn't part of the Azure Government cloud or that doesn't yet support B2B collaboration, the invitation will fail or the user will be unable to redeem the invitation.
+- Limitations with B2B Collaboration in supported Azure US Government tenants:
+  - B2B Collaboration is available in most Azure US Government tenants created after June 2019. Over time, more tenants will get access to this functionality. See [How can I tell if B2B collaboration is available in my Azure US Government tenant?](../active-directory/external-identities/current-limitations.md#how-can-i-tell-if-b2b-collaboration-is-available-in-my-azure-us-government-tenant)
+  - B2B collaboration is supported between tenants that are both within the Azure US Government cloud and that both support B2B collaboration. Azure US Government tenants that support B2B collaboration can also collaborate with social users using Microsoft accounts, Google accounts, or email one-time passcode accounts. If you invite a user outside of these groups (for example, if the user is in a tenant that isn't part of the Azure US Government cloud or doesn't yet support B2B collaboration), the invitation will fail or the user will be unable to redeem the invitation.
- B2B collaboration via Power BI is not supported. When you invite a guest user from within Power BI, the B2B flow is not used and the guest user won't appear in the tenant's user list. If a guest user is invited through other means, they'll appear in the Power BI user list, but any sharing request to the user will fail and display a 403 Forbidden error.
- Microsoft 365 Groups are not supported for B2B users and can't be enabled.
- Some SQL tools such as SQL Server Management Studio (SSMS) require you to set the appropriate cloud parameter. In the tool's Azure service setup options, set the cloud parameter to Azure Government.
azure-monitor Alerts Common Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-common-schema.md
The common alert schema will primarily manifest itself in your alert notificatio
| Action | Enhancements|
|:|:|
-| SMS | A consistent SMS template for all alert types. |
| Email | A consistent and detailed email template, allowing you to easily diagnose issues at a glance. Embedded deep-links to the alert instance on the portal and the affected resource ensure that you can quickly jump into the remediation process. |
| Webhook/Logic App/Azure Function/Automation Runbook | A consistent JSON structure for all alert types, which allows you to easily build integrations across the different alert types. |
For example, the following request body made to the [create or update](/rest/api
## Next steps - [Common alert schema definitions for Webhooks/Logic Apps/Azure Functions/Automation Runbooks.](./alerts-common-schema-definitions.md)-- [Learn how to create a logic app that leverages the common alert schema to handle all your alerts.](./alerts-common-schema-integrations.md)
+- [Learn how to create a logic app that leverages the common alert schema to handle all your alerts.](./alerts-common-schema-integrations.md)
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/availability-overview.md
Title: Application Insights availability overview description: Set up recurring web tests to monitor availability and responsiveness of your app or website. Previously updated : 04/15/2021 Last updated : 05/04/2021
Dedicated [troubleshooting article](troubleshoot-availability.md).
* [Availability Alerts](availability-alerts.md)
* [Multi-step web tests](availability-multistep.md)
* [URL tests](monitor-web-app-availability.md)
-* [Create and run custom availability tests using Azure Functions.](availability-azure-functions.md)
+* [Create and run custom availability tests using Azure Functions.](availability-azure-functions.md)
+* [Web Tests Azure Resource Manager template](https://docs.microsoft.com/azure/templates/microsoft.insights/webtests?tabs=json)
azure-monitor Monitor Web App Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/monitor-web-app-availability.md
In addition to the raw results, you can also view two key Availability metrics i
* [Availability Alerts](availability-alerts.md)
* [Multi-step web tests](availability-multistep.md)
* [Troubleshooting](troubleshoot-availability.md)
+* [Web Tests Azure Resource Manager template](https://docs.microsoft.com/azure/templates/microsoft.insights/webtests?tabs=json)
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/opencensus-python.md
# Set up Azure Monitor for your Python application
-Azure Monitor supports distributed tracing, metric collection, and logging of Python applications through integration with [OpenCensus](https://opencensus.io). This article walks you through the process of setting up OpenCensus for Python and sending your monitoring data to Azure Monitor.
+Azure Monitor supports distributed tracing, metric collection, and logging of Python applications.
+
+Microsoft's supported solution for tracking and exporting data for your Python applications is the [OpenCensus Python SDK](#introducing-opencensus-python-sdk) via the [Azure Monitor exporters](#instrument-with-opencensus-python-sdk-with-azure-monitor-exporters).
+
+Any other telemetry SDKs for Python are UNSUPPORTED and NOT recommended by Microsoft for use as a telemetry solution.
+
+You may have noted that OpenCensus is converging into [OpenTelemetry](https://opentelemetry.io/). However, we continue to recommend OpenCensus while OpenTelemetry gradually matures.
## Prerequisites
- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-- Python installation. This article uses [Python 3.7.0](https://www.python.org/downloads/release/python-370/), although other versions will likely work with minor changes. The SDK only supports Python v2.7 and v3.4-v3.7.
+- Python installation. This article uses [Python 3.7.0](https://www.python.org/downloads/release/python-370/), although other versions will likely work with minor changes. The OpenCensus Python SDK only supports Python v2.7 and v3.4-v3.7.
- Create an Application Insights [resource](./create-new-resource.md). You'll be assigned your own instrumentation key (ikey) for your resource.
-## Instrument with OpenCensus Python SDK for Azure Monitor
+## Introducing OpenCensus Python SDK
+
+[OpenCensus](https://opencensus.io) is a set of open-source libraries that allow the collection of distributed tracing, metrics, and logging telemetry. Through the [Azure Monitor exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure), you can send this collected telemetry to Application Insights. This article walks you through setting up OpenCensus and the Azure Monitor exporters for Python to send your monitoring data to Azure Monitor.
+
+## Instrument with OpenCensus Python SDK with Azure Monitor exporters
Install the OpenCensus Azure Monitor exporters:
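The exporters ship in the `opencensus-ext-azure` package (the name used by the exporters repository linked above), so the install is typically:

```shell
python -m pip install opencensus-ext-azure
```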
For more detailed information about how to use queries and logs, see [Logs in Az
* [Availability tests](./monitor-web-app-availability.md): Create tests to make sure your site is visible on the web.
* [Smart diagnostics](./proactive-diagnostics.md): These tests run automatically, so you don't have to do anything to set them up. They tell you if your app has an unusual rate of failed requests.
* [Metric alerts](../alerts/alerts-log.md): Set alerts to warn you if a metric crosses a threshold. You can set them on custom metrics that you code into your app.
-
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-percept Azure Percept Audio Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-audio-datasheet.md
Title: Azure Percept Audio datasheet description: Check out the Azure Percept Audio datasheet for detailed device specifications--++ Last updated 02/16/2021
azure-percept Azure Percept Dk Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-dk-datasheet.md
Title: Azure Percept DK datasheet description: Check out the Azure Percept DK datasheet for detailed device specifications--++ Last updated 02/16/2021
Last updated 02/16/2021
|Supported Software and Services |Azure Device Update <br> [Azure IoT](https://azure.microsoft.com/overview/iot/) <br> [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) <br> [Azure IoT Central](https://azure.microsoft.com/services/iot-central/) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) and [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1) <br> [Azure Container Registry](https://azure.microsoft.com/services/container-registry/) <br> [Azure Mariner OS with Connectivity](https://github.com/microsoft/CBL-Mariner) <br> [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) <br> [ONNX Runtime](https://www.onnxruntime.ai/) <br> [TensorFlow](https://www.tensorflow.org/) <br> [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) <br> IoT Plug and Play <br> [Azure Device Provisioning Service (DPS)](../iot-dps/index.yml) <br> [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) <br> [Power BI](https://powerbi.microsoft.com/) |
|General Processor |NXP iMX8m (Azure Percept DK Carrier Board) |
|AI Acceleration |1x Intel Movidius Myriad X Integrated ISP (Azure Percept Vision) |
-|Sensors and Visual Indicators |Sony IMX219 Camera sensor with 6P Lens<br>Resolution: 8MP at 30FPS, Distance: 50cm - infinity<br>FoV: 120 degrees diagonal, Color: Wide Dynamic Range, Fixed Focus Rolling Shutter|
+|Sensors and Visual Indicators |Sony IMX219 Camera sensor with 6P Lens<br>Resolution: 8MP at 30FPS, Distance: 50 cm - infinity<br>FoV: 120-degree diagonal, Color: Wide Dynamic Range, Fixed Focus Rolling Shutter|
|Security |TPM 2.0 Nuvoton NCPT750 |
|Connectivity |Wi-Fi and Bluetooth via Realtek RTL882CE single-chip controller |
-|Storage  |16GB |
-|Memory  |4GB |
+|Storage  |16 GB |
+|Memory  |4 GB |
|Ports |1x Ethernet <br> 2x USB-A 3.0 <br> 1x USB-C |
-|Operating Temperature |0 to 35 degrees C |
-|Non-Operating Temperature |-40 to 85 degrees C |
+|Operating Temperature |0 degrees to 35 degrees C |
+|Non-Operating Temperature |-40 degrees to 85 degrees C |
|Relative Humidity |10% to 95% |
|Certification  |FCC <br> IC <br> RoHS <br> REACH <br> UL |
-|Power Supply |19VDC at 3.42A (65W) |
+|Power Supply |19 VDC at 3.42 A (65 W) |
azure-percept Azure Percept Vision Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-vision-datasheet.md
Title: Azure Percept Vision datasheet description: Check out the Azure Percept Vision datasheet for detailed device specifications--++ Last updated 02/16/2021
Specifications listed below are for the Azure Percept Vision device, included in
|--||
|Target Industries |Manufacturing <br> Smart Buildings <br> Auto <br> Retail |
|Hero Scenarios |Shopper analytics <br> On-shelf availability <br> Shrink reduction <br> Workplace monitoring|
-|Dimensions |42mm x 42mm x 40mm (Azure Percept Vision SoM assembly with housing) <br> 42mm x 42mm x 6mm (Vision SoM chip)|
+|Dimensions |42 mm x 42 mm x 40 mm (Azure Percept Vision SoM assembly with housing) <br> 42 mm x 42 mm x 6 mm (Vision SoM chip)|
|Management Control Plane |Azure Device Update (ADU) |
|Supported Software and Services |[Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) <br> [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) <br> [ONNX Runtime](https://www.onnxruntime.ai/) <br> [OpenVINO](https://docs.openvinotoolkit.org/latest/index.html) <br> Azure Device Update |
|AI Acceleration |Intel Movidius Myriad X (MA2085) Vision Processing Unit (VPU) with Intel Camera ISP integrated, 0.7 TOPS |
-|Sensors and Visual Indicators |Sony IMX219 Camera sensor with 6P Lens<br>Resolution: 8MP at 30FPS, Distance: 50cm - infinity<br>FoV: 120 degrees diagonal, Color: Wide Dynamic Range, Fixed Focus Rolling Shutter|
+|Sensors and Visual Indicators |Sony IMX219 Camera sensor with 6P Lens<br>Resolution: 8MP at 30FPS, Distance: 50 cm - infinity<br>FoV: 120-degree diagonal, Color: Wide Dynamic Range, Fixed Focus Rolling Shutter|
|Camera Support |RGB <br> 2 cameras can be run simultaneously |
|Security Crypto-Controller |ST-Micro STM32L462CE |
-|Versioning / ID Component |64kb EEPROM |
+|Versioning / ID Component |64 kb EEPROM |
|Memory  |LPDDR4 2GB |
|Power   |3.5 W |
|Ports |1x USB 3.0 Type C <br> 2x MIPI 4 Lane (up to 1.5 Gbps per lane) |
|Control Interfaces |2x I2C <br> 2x SPI <br> 6x PWM (GPIOs: 2x clock, 2x frame sync, 2x unused) <br> 2x spare GPIO |
|Certification |FCC <br> IC <br> RoHS <br> REACH <br> UL |
-|Operating Temperature    |0 to 27 degrees C (Azure Percept Vision SoM assembly with housing) <br> -10 to 70 degrees C (Vision SoM chip) |
+|Operating Temperature    |0 degrees to 27 degrees C (Azure Percept Vision SoM assembly with housing) <br> -10 degrees to 70 degrees C (Vision SoM chip) |
|Touch Temperature |<= 48 degrees C |
|Relative Humidity   |8% to 90% |
azure-percept How To Capture Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-capture-images.md
Title: Capture images for a no-code vision solution in Azure Percept Studio
-description: Learn how to capture images with your Azure Percept DK in Azure Percept Studio for a no-code vision solution
--
+description: How to capture images with your Azure Percept DK in Azure Percept Studio
++ Last updated 02/12/2021
# Capture images for a vision project in Azure Percept Studio
-Follow this guide to capture images using the Vision SoM of the Azure Percept DK for an existing vision project in Azure Percept Studio. If you have not created a vision project yet, please see the [no-code vision tutorial](./tutorial-nocode-vision.md).
+Follow this guide to capture images using Azure Percept DK for an existing vision project. If you haven't created a vision project yet, see the [no-code vision tutorial](./tutorial-nocode-vision.md).
## Prerequisites
Follow this guide to capture images using the Vision SoM of the Azure Percept DK
1. Navigate to [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819).
-1. On the left side of the overview page, click **Devices**.
+1. On the left side of the overview page, select **Devices**.
:::image type="content" source="./media/how-to-capture-images/overview-devices-inline.png" alt-text="Azure Percept Studio overview screen." lightbox="./media/how-to-capture-images/overview-devices.png":::
Follow this guide to capture images using the Vision SoM of the Azure Percept DK
:::image type="content" source="./media/how-to-capture-images/select-device.png" alt-text="Percept devices list.":::
-1. On your device page, click **Capture images for a project**.
+1. On your device page, select **Capture images for a project**.
:::image type="content" source="./media/how-to-capture-images/capture-images.png" alt-text="Percept devices page with available actions listed.":::
-1. In the **Image capture** window, do the following:
+1. In the **Image capture** window, follow these steps:
1. In the **Project** dropdown menu, select the vision project you would like to collect images for.
- 1. Click **View device stream** to ensure the camera of the Vision SoM is placed correctly.
+ 1. Select **View device stream** to ensure the camera of the Vision SoM is placed correctly.
- 1. Click **Take photo** to capture an image.
+ 1. Select **Take photo** to capture an image.
- 1. Alternatively, check the box next to **Automatic image capture** to set up a timer for image capture:
+   1. Alternatively, check the box next to **Automatic image capture** to set up a timer for image capture:
1. Select your preferred imaging rate under **Capture rate**.
1. Select the total number of images you would like to collect under **Target**.
azure-percept How To Configure Voice Assistant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-configure-voice-assistant.md
Title: Configure voice assistant application using Azure IoT Hub description: Configure voice assistant application using Azure IoT Hub--++ Last updated 02/15/2021
# Configure voice assistant application using Azure IoT Hub
-This article describes how to configure your voice assistant application using IoT Hub. For a step-by-step tutorial that guides you through the process of creating a voice assistant using demo template, see [Build a no-code voice assistant with Azure Percept Studio and Azure Percept Audio](./tutorial-no-code-speech.md).
+This article describes how to configure your voice assistant application using IoT Hub. For a step-by-step tutorial on creating a voice assistant, see [Build a no-code voice assistant with Azure Percept Studio and Azure Percept Audio](./tutorial-no-code-speech.md).
## Update your voice assistant configuration
-1. Open the [Azure portal](https://portal.azure.com) and type **IoT Hub** into the search bar. Click on the icon to open the IoT Hub page.
+1. Open the [Azure portal](https://portal.azure.com) and type **IoT Hub** into the search bar. Select the icon to open the IoT Hub page.
1. On the IoT Hub page, select the IoT Hub to which your device was provisioned.
This article describes how to configure your voice assistant application using I
1. Select the device to which your voice assistant application was deployed.
-1. Click on **Set Modules**.
+1. Select **Set Modules**.
:::image type="content" source="./media/manage-voice-assistant-using-iot-hub/set-modules.png" alt-text="Screenshot of device page with Set Modules highlighted.":::
-1. Verify that the following entry is present under the **Container Registry Credentials** section. Add credentials if required.
+1. Verify that the following entry is present under the **Container Registry Credentials** section. Add credentials if necessary.
|Name|Address|Username|Password|
|-|-|--|--|
:::image type="content" source="./media/manage-voice-assistant-using-iot-hub/modules.png" alt-text="Screenshot showing list of all IoT Edge modules on the device.":::
-1. Click on the **Module Settings** tab. Verify the following configuration:
+1. Select the **Module Settings** tab. Verify the following configuration:
|Image URI|Restart Policy|Desired Status|
|--|--|--|
- mcr.microsoft.com/azureedgedevices/azureearspeechclientmodule:preload-devkit|always|running
+ mcr.microsoft.com/azureedgedevices/azureearspeechclientmodule: preload-devkit|always|running
- If your settings do not match, edit them and click **Update**.
+ If your settings don't match, edit them and select **Update**.
-1. Click on the **Environment Variables** tab. Verify that there are no environment variables defined.
+1. Select the **Environment Variables** tab. Verify that there are no environment variables defined.
-1. Click on the **Module Twin Settings** tab. Update the **speechConfigs** section as follows:
+1. Select the **Module Twin Settings** tab. Update the **speechConfigs** section as follows:
``` "speechConfigs": {
To locate your **appID**, **key**, and **region**, go to [Speech Studio](https://speech.microsoft.com/):
1. Sign in and select the appropriate speech resource.
-1. On the **Speech Studio** home page, click on **Custom Commands** under **Voice Assistants**.
+1. On the **Speech Studio** home page, select **Custom Commands** under **Voice Assistants**.
1. Select your target project.
   :::image type="content" source="./media/manage-voice-assistant-using-iot-hub/project.png" alt-text="Screenshot of project page in Speech Studio.":::
-1. Click on **Settings** on the left-hand menu panel.
+1. Select **Settings** on the left-hand menu panel.
1. The **appID** and **key** will be located under the **General** settings tab.
   :::image type="content" source="./media/manage-voice-assistant-using-iot-hub/general-settings.png" alt-text="Screenshot of speech project general settings.":::
:::image type="content" source="./media/manage-voice-assistant-using-iot-hub/luis-resources.png" alt-text="Screenshot of speech project LUIS resources.":::
-1. After entering your **speechConfigs** information, click **Update**.
+1. After entering your **speechConfigs** information, select **Update**.
-1. Click on the **Routes** tab at the top of the **Set modules** page. Ensure you have a route with the following value:
+1. Select the **Routes** tab at the top of the **Set modules** page. Ensure you have a route with the following value:
   ```
   FROM /messages/modules/azureearspeechclientmodule/outputs/* INTO $upstream
   ```
- Add the route if it does not exist.
+ Add the route if it doesn't exist.
-1. Click **Review + Create**.
+1. Select **Review + Create**.
-1. Click **Create**.
+1. Select **Create**.
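The route check in the steps above can be sketched as plain data manipulation. This is an illustrative sketch, not an Azure SDK call: the route name `AzureEarSpeechClientToUpstream` is a hypothetical label chosen for this example, and the `speechConfigs` field names (`appId`, `key`, `region`) are assumptions based on the Speech Studio values the article asks you to collect, not a confirmed module twin schema.

```python
# Illustrative sketch of the "Set modules" route configuration described above.
UPSTREAM_ROUTE = ("FROM /messages/modules/azureearspeechclientmodule/outputs/* "
                  "INTO $upstream")

def ensure_upstream_route(routes: dict) -> dict:
    """Add the azureearspeechclientmodule-to-upstream route if it is missing."""
    if UPSTREAM_ROUTE not in routes.values():
        # "AzureEarSpeechClientToUpstream" is a hypothetical route name.
        routes["AzureEarSpeechClientToUpstream"] = UPSTREAM_ROUTE
    return routes

# Assumed shape of the speechConfigs twin section; field names are not a
# confirmed schema -- substitute the values collected from Speech Studio.
speech_configs = {
    "appId": "<custom-commands-app-id>",   # Speech Studio > Settings > General
    "key": "<speech-resource-key>",        # Speech Studio > Settings > General
    "region": "<speech-resource-region>",
}

routes = ensure_upstream_route({})
```

Calling `ensure_upstream_route` a second time leaves the dictionary unchanged, mirroring the instruction to add the route only if it does not already exist.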
## Next steps
azure-percept How To Connect To Percept Dk Over Serial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-connect-to-percept-dk-over-serial.md
Title: Connect to your Azure Percept DK over serial
-description: Learn how to set up a serial connection to your Azure Percept DK with PuTTY and a USB to TTL serial cable
--
+description: How to set up a serial connection to your Azure Percept DK with a USB to TTL serial cable
++ Last updated 02/03/2021
Follow the steps below to set up a serial connection to your Azure Percept DK th
:::image type="content" source="./media/how-to-connect-to-percept-dk-over-serial/usb-serial-cable.png" alt-text="USB to TTL serial cable.":::
-## Initiate the serial connection
+## Start the serial connection
1. If your carrier board is connected to an 80/20 rail, remove it from the rail using the hex key (included in the devkit welcome card).
> [!TIP]
> Note the orientation of the jumper board prior to removing it. For example, draw an arrow on or attach a sticker to the jumper board pointing towards the circuitry for reference. The jumper board is not keyed and may be accidentally connected backwards when reassembling your carrier board.
-1. Connect the [USB to TTL serial cable](https://www.adafruit.com/product/954) to the GPIO pins on the motherboard as shown below. Please note that the red wire is not connected.
+1. Connect the [USB to TTL serial cable](https://www.adafruit.com/product/954) to the GPIO pins on the motherboard as shown below.
- Connect the black cable (GND) to pin 6.
- Connect the white cable (RX) to pin 8.
1. Power on your devkit and connect the USB side of the serial cable to your PC.
-1. In Windows, go to **Start** -> **Windows Update settings** -> **View optional updates** -> **Driver updates**. Look for a Serial to USB update in the list, check the box next to it, and click **Download and Install**.
+1. In Windows, go to **Start** -> **Windows Update settings** -> **View optional updates** -> **Driver updates**. Look for a Serial to USB update in the list, check the box next to it, and select **Download and Install**.
-1. Next, open the Windows Device Manager (**Start** -> **Device Manager**). Go to **Ports** and click **USB to UART** to open **Properties**. Note which COM port your device is connected to.
+1. Next, open the Windows Device Manager (**Start** -> **Device Manager**). Go to **Ports** and select **USB to UART** to open **Properties**. Note which COM port your device is connected to.
-1. Click the **Port Settings** tab. Make sure **Bits per second** is set to 115200.
+1. Select the **Port Settings** tab. Make sure **Bits per second** is set to 115200.
-1. Open PuTTY. Enter the following and click **Open** to connect to your devkit via serial:
+1. Open PuTTY. Enter the following and select **Open** to connect to your devkit via serial:
   1. Serial line: COM[port #]
   1. Speed: 115200
:::image type="content" source="./media/how-to-connect-to-percept-dk-over-serial/putty-serial-session.png" alt-text="PuTTY session window with serial parameters selected.":::
-## Next steps
-
-To update an unbootable device over serial with the [USB to TTL serial cable](https://www.adafruit.com/product/954), please see the USB update guide for non-standard situations.
-
-[comment]: # (Add link to USB update guide when available.)
+## Next Steps
azure-percept How To Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-deploy-model.md
Title: Deploy a vision AI model to your Azure Percept DK description: Learn how to deploy a vision AI model to your Azure Percept DK from Azure Percept Studio--++ Last updated 02/12/2021
azure-percept How To Manage Voice Assistant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-manage-voice-assistant.md
Title: Configure voice assistant application within Azure Percept Studio
-description: Configure voice assistant application within Azure Percept Studio
--
+ Title: Configure a voice assistant application within Azure Percept Studio
+description: Configure a voice assistant application within Azure Percept Studio
++ Last updated 02/15/2021
# Managing your voice assistant
-This article describes how to configure the keyword and commands of your voice assistant application within [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819). For guidance on configuring your keyword within IoT Hub instead of the portal, please see this [how-to article](./how-to-configure-voice-assistant.md).
+This article describes how to configure the keyword and commands of your voice assistant application within [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819). For guidance on configuring your keyword within IoT Hub instead of the portal, see this [how-to article](./how-to-configure-voice-assistant.md).
-If you have not yet created a voice assistant application, please see [Build a no-code voice assistant with Azure Percept Studio and Azure Percept Audio](./tutorial-no-code-speech.md).
+If you have not yet created a voice assistant application, see [Build a no-code voice assistant with Azure Percept Studio and Azure Percept Audio](./tutorial-no-code-speech.md).
## Keyword configuration
-A keyword is a word or short phrase used to activate a voice assistant. For example, "Hey Cortana" is the keyword for the Cortana assistant. Voice activation allows your users to start interacting with your product completely hands-free by simply speaking the keyword. As your product continuously listens for the keyword, all audio is processed locally on the device until a detection occurs to ensure user data stays as private as possible.
+A keyword is a word or short phrase used to activate a voice assistant. For example, "Hey Cortana" is the keyword for the Cortana assistant. Voice activation allows your users to start interacting with your product hands-free by speaking the keyword. As your product continuously listens for the keyword, all audio is processed locally on the device until a detection occurs to ensure user data stays as private as possible.
### Configuration within the voice assistant demo window
-1. Click **change** next to **Custom Keyword** on the demo page.
+1. Select **change** next to **Custom Keyword** on the demo page.
:::image type="content" source="./media/manage-voice-assistant/hospitality-demo.png" alt-text="Screenshot of hospitality demo window.":::
- If you do not have the demo page open, navigate to the device page (see below) and click **Test your voice assistant** under **Actions** to access the demo.
+ If you do not have the demo page open, navigate to the device page (see below) and select **Test your voice assistant** under **Actions** to access the demo.
-1. Select one of the available keywords and click **Save** to apply changes.
+1. Select one of the available keywords and select **Save** to apply changes.
1. The three LED lights on the Azure Percept Audio device will change to bright blue (no flashing) when configuration is complete and your voice assistant is ready to use.

### Configuration within the device page
-1. On the overview page of the [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819), click on **Devices** on the left menu pane.
+1. On the overview page of the [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819), select **Devices** on the left menu pane.
:::image type="content" source="./media/manage-voice-assistant/portal-overview-devices.png" alt-text="Screenshot of Azure Percept Studio overview page with Devices highlighted.":::
A keyword is a word or short phrase used to activate a voice assistant. For exam
:::image type="content" source="./media/manage-voice-assistant/device-page.png" alt-text="Screenshot of the edge device page with the Speech tab highlighted.":::
-1. Click **Change** next to **Keyword**.
+1. Select **Change** next to **Keyword**.
:::image type="content" source="./media/manage-voice-assistant/change-keyword-device.png" alt-text="Screenshot of the available speech solution actions.":::
-1. Select one of the available keywords and click **Save** to apply changes.
+1. Select one of the available keywords and select **Save** to apply changes.
1. The three LED lights on the Azure Percept Audio device will change to bright blue (no flashing) when configuration is complete and your voice assistant is ready to use.
Custom commands make it easy to build rich voice commanding apps optimized for v
### Configuration within the voice assistant demo window
-1. Click **Change** next to **Custom Command** on the demo page. If you do not have the demo page open, navigate to the device page (see below) and click **Test your voice assistant** under **Actions** to access the demo.
+1. Select **Change** next to **Custom Command** on the demo page. If you do not have the demo page open, navigate to the device page (see below) and select **Test your voice assistant** under **Actions** to access the demo.
-1. Select one of the available custom commands and click **Save** to apply changes.
+1. Select one of the available custom commands and select **Save** to apply changes.
### Configuration within the device page
-1. On the overview page of the [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819), click on **Devices** on the left menu pane.
+1. On the overview page of the [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819), select **Devices** on the left menu pane.
1. Select the device to which your voice assistant application was deployed.
1. Open the **Speech** tab.
-1. Click **Change** next to **Command**.
+1. Select **Change** next to **Command**.
-1. Select one of the available custom commands and click **Save** to apply changes.
+1. Select one of the available custom commands and select **Save** to apply changes.
## Create custom commands
azure-percept How To Select Update Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-select-update-package.md
+
+ Title: Select the best update package for your Azure Percept DK
+description: How to identify your Azure Percept DK version and select the best update package for it
++++ Last updated : 05/04/2021+++
+# How to determine and download the best update package for OTA and USB updates
+
+This page explains how to select the update package that is best for your dev kit and where to download it.
+
+For more information on how to update your device, see these articles:
+- [Update your Azure Percept DK over-the-air](https://docs.microsoft.com/azure/azure-percept/how-to-update-over-the-air)
+- [Update your Azure Percept DK via USB](https://docs.microsoft.com/azure/azure-percept/how-to-update-via-usb)
++
+## Prerequisites
+
+- An [Azure Percept DK](https://go.microsoft.com/fwlink/?linkid=2155270) that has been [set up and connected to Azure Percept Studio and IoT Hub](https://docs.microsoft.com/azure/azure-percept/quickstart-percept-dk-set-up).
+
+## Identify the current model name and software version on your Azure Percept DK
+To ensure you apply the correct update package to your dev kit, you must first determine which software version it's currently running.
+
+> [!WARNING]
+> Applying the incorrect update package could result in your dev kit becoming inoperable. It is important that you follow these steps to ensure you apply the correct update package.
+
+1. Power on your dev kit and ensure it's connected to Azure Percept Studio.
+1. In Azure Percept Studio, select **Devices** from the left menu.
+1. From the device list, select the name of the device that is currently connected. The status will say **Connected**.
+1. Select **Open device in IoT Hub**.
+1. You may be asked to sign into your Azure account again.
+1. Select **Device twin**.
+1. Scroll through the device twin properties and locate **"model"** and **"swVersion"** under **"deviceInformation"** and make a note of their values.
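The lookup in the last step amounts to reading two nested fields from the device twin document. A minimal sketch, assuming **deviceInformation** sits under the twin's reported properties (the sample values below are illustrative only):

```python
import json

# Sample device twin fragment; values are illustrative, and the exact nesting
# (properties.reported.deviceInformation) is an assumption for this sketch.
twin_json = """
{
  "properties": {
    "reported": {
      "deviceInformation": {
        "model": "PE-101",
        "swVersion": "2021.102.108.112"
      }
    }
  }
}
"""

def device_model_and_version(twin: dict) -> tuple:
    """Return (model, swVersion) from the twin's deviceInformation block."""
    info = twin["properties"]["reported"]["deviceInformation"]
    return info["model"], info["swVersion"]

model, sw_version = device_model_and_version(json.loads(twin_json))
```

Make a note of both values; the next section uses them to pick the update package.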
+
+## Determine the correct update package
+Using the **model** and **swVersion** identified in the previous section, check the table below to determine which update package to download.
++
+|model |swVersion |Update method |Download links |
+|--|--|--|--|
+|PE-101 |2020.108.101.105, <br>2020.108.114.120, <br>2020.109.101.122, <br>2020.109.116.120, <br>2021.101.106.118 |**USB only** |[USB update package](https://go.microsoft.com/fwlink/?linkid=2155734) |
+|PE-101 |2021.102.108.112, <br> |OTA or USB |[OTA manifest](https://go.microsoft.com/fwlink/?linkid=2155625)<br>[OTA update package](https://go.microsoft.com/fwlink/?linkid=2161538)<br>[USB update package](https://go.microsoft.com/fwlink/?linkid=2155734) |
+|APDK-101 |All swVersions |OTA or USB | [OTA manifest](https://go.microsoft.com/fwlink/?linkid=2162292)<br>[OTA update package](https://go.microsoft.com/fwlink/?linkid=2161538)<br>[USB update package](https://go.microsoft.com/fwlink/?linkid=2155734) |
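The table above is effectively a small decision function. A sketch of that logic (download links omitted; it covers only the model/version combinations the table lists and refuses anything else):

```python
# Decision logic mirroring the update-package table above.
# Versions of PE-101 that must be updated over USB only:
USB_ONLY_PE101_VERSIONS = {
    "2020.108.101.105", "2020.108.114.120", "2020.109.101.122",
    "2020.109.116.120", "2021.101.106.118",
}

def update_methods(model: str, sw_version: str) -> list:
    """Return the supported update methods for a dev kit, per the table."""
    if model == "APDK-101":
        return ["OTA", "USB"]          # all swVersions
    if model == "PE-101":
        if sw_version in USB_ONLY_PE101_VERSIONS:
            return ["USB"]
        if sw_version == "2021.102.108.112":
            return ["OTA", "USB"]
    raise ValueError(f"Combination not covered by the table: {model} {sw_version}")

print(update_methods("PE-101", "2020.108.101.105"))  # → ["USB"]
```

A combination not in the table raises an error rather than guessing, since applying the wrong package can leave the dev kit inoperable.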
+++
+## Next steps
+Update your dev kits via the methods and update packages determined in the previous section.
+- [Update your Azure Percept DK over-the-air](https://docs.microsoft.com/azure/azure-percept/how-to-update-over-the-air)
+- [Update your Azure Percept DK via USB](https://docs.microsoft.com/azure/azure-percept/how-to-update-via-usb)
+
azure-percept How To Ssh Into Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-ssh-into-percept-dk.md
Title: Connect to your Azure Percept DK over SSH description: Learn how to SSH into your Azure Percept DK with PuTTY--++ Last updated 03/18/2021
Follow the steps below to set up an SSH connection to your Azure Percept DK thro
- A Windows, Linux, or OS X based host computer with Wi-Fi capability
- An SSH client (see the next section for installation guidance)
- An Azure Percept DK (dev kit)
-- An SSH login, created during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
+- An SSH account, created during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
## Install your preferred SSH client
If your host computer runs Windows, you may have two SSH client options to choos
### OpenSSH
-Windows 10 includes a built-in SSH client called OpenSSH that can be run with a simple command inside of a command prompt. We recommend using OpenSSH with Azure Percept if it is available to you. To check if your Windows computer has OpenSSH installed, follow these steps:
+Windows 10 includes a built-in SSH client called OpenSSH that can be run with a simple command in a command prompt. We recommend using OpenSSH with Azure Percept if it's available to you. To check if your Windows computer has OpenSSH installed, follow these steps:
1. Go to **Start** -> **Settings**.
1. Under **Apps & features**, select **Optional features**.
-1. Type **OpenSSH Client** into the **Installed features** search bar. If OpenSSH appears, the client is already installed, and you may move on to the next section. If you do not see OpenSSH, click **Add a feature**.
+1. Type **OpenSSH Client** into the **Installed features** search bar. If OpenSSH appears, the client is already installed, and you may move on to the next section. If you do not see OpenSSH, select **Add a feature**.
:::image type="content" source="./media/how-to-ssh-into-percept-dk/open-ssh-install.png" alt-text="Screenshot of settings showing OpenSSH installation status.":::
-1. Select **OpenSSH Client** and click **Install**. You may now move on to the next section. If OpenSSH is not available to install on your computer, follow the steps below to install PuTTY, a third-party SSH client.
+1. Select **OpenSSH Client** and select **Install**. You may now move on to the next section. If OpenSSH is not available to install on your computer, follow the steps below to install PuTTY, a third-party SSH client.
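The manual check above can also be scripted. A minimal cross-platform sketch that reports whether an `ssh` binary is already on the PATH (it does not distinguish OpenSSH from other clients):

```python
import shutil

def ssh_client_available() -> bool:
    """Return True if an `ssh` executable is found on the PATH."""
    return shutil.which("ssh") is not None

if ssh_client_available():
    print("An SSH client is installed; skip to the next section.")
else:
    print("No SSH client found; install OpenSSH or PuTTY.")
```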
### PuTTY
If your Windows computer does not include OpenSSH, we recommend using [PuTTY](ht
1. Go to the [PuTTY download page](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
-1. Under **Package files**, click on the 32-bit or 64-bit .msi file to download the installer. If you are unsure of which version to choose, check out the [FAQs](https://www.chiark.greenend.org.uk/~sgtatham/putty/faq.html#faq-32bit-64bit).
+1. Under **Package files**, select the 32-bit or 64-bit .msi file to download the installer. If you are unsure of which version to choose, check out the [FAQs](https://www.chiark.greenend.org.uk/~sgtatham/putty/faq.html#faq-32bit-64bit).
-1. Click on the installer to start the installation process. Follow the prompts as required.
+1. Select the installer to start the installation process. Follow the prompts as required.
1. Congratulations! You have successfully installed the PuTTY SSH client.
1. Power on your Azure Percept DK.
-1. If your dev kit is already connected to a network over Ethernet or Wi-Fi, skip to the next step. Otherwise, connect your host computer directly to the dev kitΓÇÖs Wi-Fi access point. Like connecting to any other Wi-Fi network, open the network and internet settings on your computer, click on the following network, and enter the network password when prompted:
+1. If your dev kit is already connected to a network over Ethernet or Wi-Fi, skip to the next step. Otherwise, connect your host computer directly to the dev kit's Wi-Fi access point. Like connecting to any other Wi-Fi network, open the network and internet settings on your computer, select the following network, and enter the network password when prompted:
   - **Network name**: depending on your dev kit's operating system version, the name of the Wi-Fi access point is either **scz-xxxx** or **apd-xxxx** (where "xxxx" is the last four digits of the dev kit's MAC address)
   - **Password**: can be found on the Welcome Card that came with the dev kit
### Using PuTTY
-1. Open PuTTY. Enter the following into the **PuTTY Configuration** window and click **Open** to SSH into your dev kit:
+1. Open PuTTY. Enter the following into the **PuTTY Configuration** window and select **Open** to SSH into your dev kit:
   1. Host Name: [IP address]
   1. Port: 22
azure-percept How To View Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-view-telemetry.md
Title: View your Azure Percept DK's model inference telemetry description: Learn how to view your Azure Percept DK's vision model inference telemetry in Azure IoT Explorer--++ Last updated 02/17/2021
azure-percept How To View Video Stream https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-view-video-stream.md
Title: View your Azure Percept DK's RTSP video stream description: Learn how to view the RTSP video stream from Azure Percept DK--++ Last updated 02/12/2021
azure-percept Quickstart Percept Dk Unboxing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-dk-unboxing.md
Title: Unbox and assemble your Azure Percept DK components description: Learn how to unbox, connect, and power on your Azure Percept DK--++ Last updated 02/16/2021
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
Title: Resource providers by Azure services description: Lists all resource provider namespaces for Azure Resource Manager and shows the Azure service for that namespace. Previously updated : 03/16/2021 Last updated : 03/16/2021
The resource providers that are marked with **- registered** are registered by
| Microsoft.DBforMariaDB | [Azure Database for MariaDB](../../mariadb/index.yml) | | Microsoft.DBforMySQL | [Azure Database for MySQL](../../mysql/index.yml) | | Microsoft.DBforPostgreSQL | [Azure Database for PostgreSQL](../../postgresql/index.yml) |
-| Microsoft.DeploymentManager | [Azure Deployment Manager](../templates/deployment-manager-overview.md) |
| Microsoft.DesktopVirtualization | [Windows Virtual Desktop](../../virtual-desktop/index.yml) | | Microsoft.Devices | [Azure IoT Hub](../../iot-hub/index.yml)<br />[Azure IoT Hub Device Provisioning Service](../../iot-dps/index.yml) | | Microsoft.DevOps | [Azure DevOps](/azure/devops/) |
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/lock-resources.md
Title: Lock resources to prevent changes description: Prevent users from updating or deleting Azure resources by applying a lock for all users and roles. Previously updated : 04/28/2021 Last updated : 05/03/2021
As an administrator, you can lock a subscription, resource group, or resource to
You can set the lock level to **CanNotDelete** or **ReadOnly**. In the portal, the locks are called **Delete** and **Read-only** respectively.
-* **CanNotDelete** means authorized users can still read and modify a resource, but they can't delete the resource.
-* **ReadOnly** means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the **Reader** role.
+- **CanNotDelete** means authorized users can still read and modify a resource, but they can't delete the resource.
+- **ReadOnly** means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the **Reader** role.
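The two lock levels above can be expressed as a small permission check. This is an illustrative model of the semantics only, not an Azure SDK or API call:

```python
def operation_allowed(lock_level, operation):
    """Illustrative model of Azure management-lock semantics (not an SDK call).

    lock_level: None (no lock), "CanNotDelete", or "ReadOnly".
    operation: "read", "modify", or "delete" -- management-plane operations
    only; data-plane operations (e.g. writing rows in a database) are never
    blocked by a lock.
    """
    if operation not in ("read", "modify", "delete"):
        raise ValueError(f"Unknown operation: {operation}")
    if lock_level is None:
        return True
    if lock_level == "CanNotDelete":
        return operation != "delete"   # read and modify still allowed
    if lock_level == "ReadOnly":
        return operation == "read"     # behaves like the Reader role
    raise ValueError(f"Unknown lock level: {lock_level}")
```

For example, `operation_allowed("ReadOnly", "modify")` is `False`, while `operation_allowed("CanNotDelete", "modify")` is `True`.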
## How locks are applied
When you apply a lock at a parent scope, all resources within that scope inherit
Unlike role-based access control, you use management locks to apply a restriction across all users and roles. To learn about setting permissions for users and roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).
-Resource Manager locks apply only to operations that happen in the management plane, which consists of operations sent to `https://management.azure.com`. The locks don't restrict how resources perform their own functions. Resource changes are restricted, but resource operations aren't restricted. For example, a ReadOnly lock on a SQL Database logical server prevents you from deleting or modifying the server. It doesn't prevent you from creating, updating, or deleting data in the databases on that server. Data transactions are permitted because those operations aren't sent to `https://management.azure.com`.
+Resource Manager locks apply only to operations that happen in the [management plane](control-plane-and-data-plane.md), which consists of operations sent to `https://management.azure.com`. The locks don't restrict how resources perform their own functions. Resource changes are restricted, but resource operations aren't restricted. For example, a ReadOnly lock on a SQL Database logical server prevents you from deleting or modifying the server. It doesn't prevent you from creating, updating, or deleting data in the databases on that server. Data transactions are permitted because those operations aren't sent to `https://management.azure.com`.
## Considerations before applying locks

Applying locks can lead to unexpected results because some operations that don't seem to modify the resource actually require actions that are blocked by the lock. Locks will prevent any operations that require a POST request to the Azure Resource Manager API. Some common examples of the operations that are blocked by locks are:
-* A read-only lock on a **storage account** prevents users from listing the account keys. The Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is handled through a POST request to protect access to the account keys, which provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who don't have the account keys must use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue).
+- A read-only lock on a **storage account** prevents users from listing the account keys. The Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is handled through a POST request to protect access to the account keys, which provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who don't have the account keys must use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue).
-* A cannot-delete lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted, and doesn't protect blob, queue, table, or file data within that storage account.
+- A cannot-delete lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted, and doesn't protect blob, queue, table, or file data within that storage account.
-* A read-only lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted or modified, and doesn't protect blob, queue, table, or file data within that storage account.
+- A read-only lock on a **storage account** doesn't prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted or modified, and doesn't protect blob, queue, table, or file data within that storage account.
-* A read-only lock on an **App Service** resource prevents Visual Studio Server Explorer from displaying files for the resource because that interaction requires write access.
+- A read-only lock on an **App Service** resource prevents Visual Studio Server Explorer from displaying files for the resource because that interaction requires write access.
-* A read-only lock on a **resource group** that contains an **App Service plan** prevents you from [scaling up or out the plan](../../app-service/manage-scale-up.md).
+- A read-only lock on a **resource group** that contains an **App Service plan** prevents you from [scaling up or out the plan](../../app-service/manage-scale-up.md).
-* A read-only lock on a **resource group** that contains a **virtual machine** prevents all users from starting or restarting the virtual machine. These operations require a POST request.
+- A read-only lock on a **resource group** that contains a **virtual machine** prevents all users from starting or restarting the virtual machine. These operations require a POST request.
-* A cannot-delete lock on a **resource group** prevents Azure Resource Manager from [automatically deleting deployments](../templates/deployment-history-deletions.md) in the history. If you reach 800 deployments in the history, your deployments will fail.
+- A cannot-delete lock on a **resource group** prevents Azure Resource Manager from [automatically deleting deployments](../templates/deployment-history-deletions.md) in the history. If you reach 800 deployments in the history, your deployments will fail.
-* A cannot-delete lock on the **resource group** created by **Azure Backup Service** causes backups to fail. The service supports a maximum of 18 restore points. When locked, the backup service can't clean up restore points. For more information, see [Frequently asked questions-Back up Azure VMs](../../backup/backup-azure-vm-backup-faq.yml).
+- A cannot-delete lock on the **resource group** created by **Azure Backup Service** causes backups to fail. The service supports a maximum of 18 restore points. When locked, the backup service can't clean up restore points. For more information, see [Frequently asked questions-Back up Azure VMs](../../backup/backup-azure-vm-backup-faq.yml).
-* A cannot-delete lock on a **resource group** prevents **Azure Machine Learning** from autoscaling [Azure Machine Learning compute clusters](../../machine-learning/concept-compute-target.md#azure-machine-learning-compute-managed) to remove unused nodes.
+- A cannot-delete lock on a **resource group** prevents **Azure Machine Learning** from autoscaling [Azure Machine Learning compute clusters](../../machine-learning/concept-compute-target.md#azure-machine-learning-compute-managed) to remove unused nodes.
-* A read-only lock on a **subscription** prevents **Azure Advisor** from working correctly. Advisor is unable to store the results of its queries.
+- A read-only lock on a **subscription** prevents **Azure Advisor** from working correctly. Advisor is unable to store the results of its queries.
## Who can create or delete locks
When using an Azure Resource Manager template (ARM template) to deploy a lock, y
The following template applies a lock to the resource group it's deployed to. Notice there isn't a scope property on the lock resource because the scope of the lock matches the scope of deployment. This template is deployed at the resource group level.
+# [JSON](#tab/json)
+```json
+{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- },
- "resources": [
- {
- "type": "Microsoft.Authorization/locks",
- "apiVersion": "2016-09-01",
- "name": "rgLock",
- "properties": {
- "level": "CanNotDelete",
- "notes": "Resource Group should not be deleted."
- }
- }
- ]
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Authorization/locks",
+ "apiVersion": "2016-09-01",
+ "name": "rgLock",
+ "properties": {
+ "level": "CanNotDelete",
+ "notes": "Resource group should not be deleted."
+ }
+ }
+ ]
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource createRgLock 'Microsoft.Authorization/locks@2016-09-01' = {
+ name: 'rgLock'
+ properties: {
+ level: 'CanNotDelete'
+ notes: 'Resource group should not be deleted.'
+ }
+}
+```
+
+---
+
To create a resource group and lock it, deploy the following template at the subscription level.
+# [JSON](#tab/json)
+```json
+{
- "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "rgName": {
- "type": "string"
- },
- "rgLocation": {
- "type": "string"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "rgName": {
+ "type": "string"
},
- "variables": {},
- "resources": [
- {
- "type": "Microsoft.Resources/resourceGroups",
- "apiVersion": "2019-10-01",
- "name": "[parameters('rgName')]",
- "location": "[parameters('rgLocation')]",
- "properties": {}
- },
- {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-06-01",
- "name": "lockDeployment",
- "resourceGroup": "[parameters('rgName')]",
- "dependsOn": [
- "[resourceId('Microsoft.Resources/resourceGroups/', parameters('rgName'))]"
- ],
- "properties": {
- "mode": "Incremental",
- "template": {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {},
- "variables": {},
- "resources": [
- {
- "type": "Microsoft.Authorization/locks",
- "apiVersion": "2016-09-01",
- "name": "rgLock",
- "properties": {
- "level": "CanNotDelete",
- "notes": "Resource group and its resources should not be deleted."
- }
- }
- ],
- "outputs": {}
- }
+ "rgLocation": {
+ "type": "string"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.Resources/resourceGroups",
+ "apiVersion": "2020-10-01",
+ "name": "[parameters('rgName')]",
+ "location": "[parameters('rgLocation')]",
+ "properties": {}
+ },
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2020-10-01",
+ "name": "lockDeployment",
+ "resourceGroup": "[parameters('rgName')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Resources/resourceGroups/', parameters('rgName'))]"
+ ],
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.Authorization/locks",
+ "apiVersion": "2016-09-01",
+ "name": "rgLock",
+ "properties": {
+ "level": "CanNotDelete",
+ "notes": "Resource group and its resources should not be deleted."
+ }
}
+ ],
+ "outputs": {}
}
- ],
- "outputs": {}
+ }
+ }
+ ],
+ "outputs": {}
+}
+```
+
+# [Bicep](#tab/bicep)
+
+The main Bicep file creates a resource group and uses a [module](../templates/bicep-modules.md) to create the lock.
+
+```bicep
+targetScope = 'subscription'
+
+param rgName string
+param rgLocation string
+
+resource createRg 'Microsoft.Resources/resourceGroups@2020-10-01' = {
+ name: rgName
+ location: rgLocation
+}
+
+module deployRgLock './lockRg.bicep' = {
+ name: 'lockDeployment'
+ scope: resourceGroup(createRg.name)
+}
+```
+
+The module uses a Bicep file named _lockRg.bicep_ that adds the resource group lock.
+
+```bicep
+resource createRgLock 'Microsoft.Authorization/locks@2016-09-01' = {
+ name: 'rgLock'
+ properties: {
+ level: 'CanNotDelete'
+ notes: 'Resource group and its resources should not be deleted.'
+ }
+}
+```
+
+---
+
When applying a lock to a **resource** within the resource group, add the scope property. Set scope to the name of the resource to lock.
-The following example shows a template that creates an app service plan, a web site, and a lock on the web site. The scope of the lock is set to the web site.
+The following example shows a template that creates an app service plan, a website, and a lock on the website. The scope of the lock is set to the website.
+
+# [JSON](#tab/json)
```json
{
The following example shows a template that creates an app service plan, a web s
"type": "string" }, "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
} }, "variables": {
The following example shows a template that creates an app service plan, a web s
"resources": [ { "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-12-01",
"name": "[parameters('hostingPlanName')]", "location": "[parameters('location')]", "sku": {
The following example shows a template that creates an app service plan, a web s
}, { "type": "Microsoft.Web/sites",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-12-01",
"name": "[variables('siteName')]", "location": "[parameters('location')]", "dependsOn": [
The following example shows a template that creates an app service plan, a web s
} ```
+# [Bicep](#tab/bicep)
+
+```bicep
+param hostingPlanName string
+param location string = resourceGroup().location
+
+var siteName = concat('ExampleSite', uniqueString(resourceGroup().id))
+
+resource serverFarm 'Microsoft.Web/serverfarms@2020-12-01' = {
+ name: hostingPlanName
+ location: location
+ sku: {
+ tier: 'Free'
+ name: 'f1'
+ capacity: 0
+ }
+ properties: {
+ targetWorkerCount: 1
+ }
+}
+
+resource webSite 'Microsoft.Web/sites@2020-12-01' = {
+ name: siteName
+ location: location
+ properties: {
+ serverFarmId: serverFarm.name
+ }
+}
+
+resource siteLock 'Microsoft.Authorization/locks@2016-09-01' = {
+ name: 'siteLock'
+ scope: webSite
+ properties:{
+ level: 'CanNotDelete'
+ notes: 'Site should not be deleted.'
+ }
+}
+```
+
+---
+
### Azure PowerShell

You lock deployed resources with Azure PowerShell by using the [New-AzResourceLock](/powershell/module/az.resources/new-azresourcelock) command.
In the request, include a JSON object that specifies the properties for the lock
## Next steps
-* To learn about logically organizing your resources, see [Using tags to organize your resources](tag-resources.md).
-* You can apply restrictions and conventions across your subscription with customized policies. For more information, see [What is Azure Policy?](../../governance/policy/overview.md).
-* For guidance on how enterprises can use Resource Manager to effectively manage subscriptions, see [Azure enterprise scaffold - prescriptive subscription governance](/azure/architecture/cloud-adoption-guide/subscription-governance).
+- To learn about logically organizing your resources, see [Using tags to organize your resources](tag-resources.md).
+- You can apply restrictions and conventions across your subscription with customized policies. For more information, see [What is Azure Policy?](../../governance/policy/overview.md).
+- For guidance on how enterprises can use Resource Manager to effectively manage subscriptions, see [Azure enterprise scaffold - prescriptive subscription governance](/azure/architecture/cloud-adoption-guide/subscription-governance).
azure-resource-manager Manage Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/manage-resources-cli.md
echo "Enter the Resource Group name:" &&
read resourceGroupName && echo "Enter the location (i.e. centralus):" && read location &&
-az deployment group create --resource-group $resourceGroupName --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json"
+az deployment group create --resource-group $resourceGroupName --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json"
``` For more information, see [Deploy resources with Resource Manager templates and Azure CLI](../templates/deploy-cli.md).
azure-resource-manager Manage Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/manage-resources-powershell.md
The following script deploys a Quickstart template to create a storage ac
```azurepowershell-interactive $resourceGroupName = Read-Host -Prompt "Enter the Resource Group name" $location = Read-Host -Prompt "Enter the location (i.e. centralus)"
-$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json"
+$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json"
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri -Location $location ```
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-resource-manager Create Templates Use Intellij https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/create-templates-use-intellij.md
To complete this article, you need:
## Deploy a Quickstart template
-Instead of creating a template from scratch, you open a template from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/). Azure Quickstart Templates is a repository for ARM templates. The template used in this article is called [Create a standard storage account](https://github.com/Azure/azure-quickstart-templates/tree/master/101-storage-account-create/). It defines an Azure Storage account resource.
+Instead of creating a template from scratch, you open a template from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/). Azure Quickstart Templates is a repository for ARM templates. The template used in this article is called [Create a standard storage account](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.storage/storage-account-create/). It defines an Azure Storage account resource.
-1. Right-click and save the [`azuredeploy.json`](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json) and [`azuredeploy.parameters.json`](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.parameters.json) to your local computer.
+1. Right-click and save the [`azuredeploy.json`](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json) and [`azuredeploy.parameters.json`](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.parameters.json) to your local computer.
1. If the Azure Toolkit is properly installed and you're signed in, you should see Azure Explorer in IntelliJ IDEA's sidebar. Right-click **Resource Management** and select **Create Deployment**.
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cli.md
To deploy an external template, use the `template-uri` parameter.
az deployment group create \ --name ExampleDeployment \ --resource-group ExampleGroup \
- --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json" \
+ --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json" \
--parameters storageAccountType=Standard_GRS ```
azure-resource-manager Deploy Cloud Shell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cloud-shell.md
To deploy an external template, provide the URI of the template exactly as you w
az deployment group create \ --name ExampleDeployment \ --resource-group ExampleGroup \
- --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json" \
+ --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json" \
--parameters storageAccountType=Standard_GRS ```
To deploy an external template, provide the URI of the template exactly as you w
New-AzResourceGroupDeployment ` -DeploymentName ExampleDeployment ` -ResourceGroupName ExampleGroup `
- -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json `
+ -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json `
-storageAccountType Standard_GRS ```
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-github-actions.md
You need to create secrets for your Azure credentials, resource group, and subsc
Add a Resource Manager template to your GitHub repository. This template creates a storage account. ```url
-https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json
+https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json
``` You can put the file anywhere in the repository. The workflow sample in the next section assumes the template file is named **azuredeploy.json**, and it is stored at the root of your repository.
azure-resource-manager Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-portal.md
If you want to execute a deployment but not use any of the templates in the Mark
This tutorial provides the instruction for loading a quickstart template.
-1. Under **Load a GitHub quickstart template**, type or select **101-storage-account-create**.
+1. Under **Load a GitHub quickstart template**, type or select **storage-account-create**.
You have two options:
If you want to execute a deployment but not use any of the templates in the Mark
- To view audit logs, see [Audit operations with Resource Manager](../management/view-activity-logs.md).
- To troubleshoot deployment errors, see [View deployment operations](deployment-history.md).
-- To export a template from a deployment or resource group, see [Export ARM templates](export-template-portal.md).
-- To safely roll out your service across multiple regions, see [Azure Deployment Manager](deployment-manager-overview.md).
+- To export a template from a deployment or resource group, see [Export ARM templates](export-template-portal.md).
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-powershell.md
To deploy Bicep files, you need [Azure PowerShell version 5.6.0 or later](/power
## Prerequisites
-You need a template to deploy. If you don't already have one, download and save an [example template](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json) from the Azure Quickstart templates repo. The local file name used in this article is _C:\MyTemplates\azuredeploy.json_.
+You need a template to deploy. If you don't already have one, download and save an [example template](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json) from the Azure Quickstart templates repo. The local file name used in this article is _C:\MyTemplates\azuredeploy.json_.
You need to install Azure PowerShell and connect to Azure:
To deploy an external template, use the `-TemplateUri` parameter.
New-AzResourceGroupDeployment ` -Name remoteTemplateDeployment ` -ResourceGroupName ExampleGroup `
- -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json
+ -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json
``` The preceding example requires a publicly accessible URI for the template, which works for most scenarios because your template shouldn't include sensitive data. If you need to specify sensitive data (like an admin password), pass that value as a secure parameter. However, if you want to manage access to the template, consider using [template specs](#deploy-template-spec).
To pass an external parameter file, use the `TemplateParameterUri` parameter:
```powershell New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
- -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json `
- -TemplateParameterUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.parameters.json
+ -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json `
+ -TemplateParameterUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.parameters.json
``` ## Next steps
azure-resource-manager Deploy To Azure Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-azure-button.md
To create the URL for your template, start with the raw URL to the template in y
The format of the URL is: ```html
-https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json
+https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json
``` Then, convert the URL to a URL-encoded value. You can use an online encoder or run a command. The following PowerShell example shows how to URL encode a value. ```powershell
-$url = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json"
+$url = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json"
[uri]::EscapeDataString($url) ``` The example URL has the following value when URL encoded. ```html
-https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-storage-account-create%2Fazuredeploy.json
+https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.storage%2Fstorage-account-create%2Fazuredeploy.json
``` Each link starts with the same base URL:
https://portal.azure.com/#create/Microsoft.Template/uri/
Add your URL-encoded template link to the end of the base URL. ```html
-https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-storage-account-create%2Fazuredeploy.json
+https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.storage%2Fstorage-account-create%2Fazuredeploy.json
``` You have your full URL for the link.
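The encoding step above can be reproduced in a short Python sketch; `urllib.parse.quote` with `safe=""` escapes the same reserved characters as PowerShell's `[uri]::EscapeDataString`. The template URL and portal base URL are taken from the examples above:

```python
from urllib.parse import quote

# Raw URL of the template in the quickstart repo (from the example above)
template_url = ("https://raw.githubusercontent.com/Azure/azure-quickstart-templates/"
                "master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json")

# Percent-encode every reserved character, like [uri]::EscapeDataString
encoded = quote(template_url, safe="")

# Append the encoded value to the portal's base URL to get the button link
portal_link = "https://portal.azure.com/#create/Microsoft.Template/uri/" + encoded
print(portal_link)
```

Running this prints the same full portal URL shown above, which you can then embed in the Markdown or HTML button.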
Finally, put the link and image together.
To add the button with Markdown in the _README.md_ file in your GitHub repository or a web page, use: ```markdown
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-storage-account-create%2Fazuredeploy.json)
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.storage%2Fstorage-account-create%2Fazuredeploy.json)
``` For HTML, use: ```html
-<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-storage-account-create%2Fazuredeploy.json" target="_blank">
+<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.storage%2Fstorage-account-create%2Fazuredeploy.json" target="_blank">
<img src="https://aka.ms/deploytoazurebutton"/> </a> ```
For Git with Azure repo, the button is in the format:
To test the full solution, select the following button:
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-storage-account-create%2Fazuredeploy.json)
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.storage%2Fstorage-account-create%2Fazuredeploy.json)
The portal displays a pane that allows you to easily provide parameter values. The parameters are pre-filled with the default values from the template.
azure-resource-manager Deploy To Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-resource-group.md
For Azure CLI, use [az deployment group create](/cli/azure/deployment/group#az_d
az deployment group create \ --name demoRGDeployment \ --resource-group ExampleGroup \
- --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json" \
+ --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json" \
--parameters storageAccountType=Standard_GRS ```
For the PowerShell deployment command, use [New-AzResourceGroupDeployment](/powe
New-AzResourceGroupDeployment ` -Name demoRGDeployment ` -ResourceGroupName ExampleGroup `
- -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json `
+ -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json `
-storageAccountType Standard_GRS ` ```
azure-resource-manager Deployment Manager Health Check https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-manager-health-check.md
- Title: Health integration rollout - Azure Deployment Manager
-description: Describes how to deploy a service over many regions with Azure Deployment Manager. It shows safe deployment practices to verify the stability of your deployment before rolling out to all regions.
-- Previously updated : 09/21/2020----
-# Introduce health integration rollout to Azure Deployment Manager (Public preview)
-
-[Azure Deployment Manager](./deployment-manager-overview.md) allows you to perform staged rollouts of Azure Resource Manager resources. The resources are deployed region by region in an ordered fashion. The integrated health check of Azure Deployment Manager can monitor rollouts, and automatically stop problematic rollouts, so that you can troubleshoot and reduce the scale of the impact. This feature can reduce service unavailability caused by regressions in updates.
-
-## Health monitoring providers
-
-In order to make health integration as easy as possible, Microsoft has been working with some of the top service health monitoring companies to provide you with a simple copy/paste solution to integrate health checks with your deployments. If you're not already using a health monitor, these are great solutions to start with:
-
-| ![azure deployment manager health monitor provider azure monitor](./media/deployment-manager-health-check/azure-deployment-manager-health-monitor-provider-azure-monitor.svg)| ![azure deployment manager health monitor provider datadog](./media/deployment-manager-health-check/azure-deployment-manager-health-monitor-provider-datadog.svg) | ![azure deployment manager health monitor provider site24x7](./media/deployment-manager-health-check/azure-deployment-manager-health-monitor-provider-site24x7.svg) | ![azure deployment manager health monitor provider wavefront](./media/deployment-manager-health-check/azure-deployment-manager-health-monitor-provider-wavefront.svg) |
-|--|--|--|--|
-|Azure Monitor, Microsoft's full stack observability platform for cloud native & hybrid monitoring and analytics. |Datadog, the leading monitoring and analytics platform for modern cloud environments. See [how Datadog integrates with Azure Deployment Manager](https://www.datadoghq.com/azure-deployment-manager/).|Site24x7, the all-in-one private and public cloud services monitoring solution. See [how Site24x7 integrates with Azure Deployment Manager](https://www.site24x7.com/azure/adm.html).| Wavefront, the monitoring and analytics platform for multi-cloud application environments. See [how Wavefront integrates with Azure Deployment Manager](https://go.wavefront.com/wavefront-adm/).|
-
-## How service health is determined
-
-[Health monitoring providers](#health-monitoring-providers) offer several mechanisms for monitoring services and alerting you of any service health issues. [Azure Monitor](../../azure-monitor/overview.md) is an example of one such offering. Azure Monitor can be used to create alerts when certain thresholds are exceeded. For example, your memory and CPU utilization spike beyond expected levels when you deploy a new update to your service. When notified, you can take corrective actions.
-
-These health providers typically offer REST APIs so that the status of your service's monitors can be examined programmatically. The REST APIs can either come back with a simple healthy/unhealthy signal (determined by the HTTP response code), and/or with detailed information about the signals it is receiving.
-
-The new `healthCheck` step in Azure Deployment Manager allows you to declare HTTP codes that indicate a healthy service. For complex REST results you can specify regular expressions that, when matched, indicate a healthy response.
-
-The flow to set up Azure Deployment Manager health checks:
-
-1. Create your health monitors via a health service provider of your choice.
-1. Create one or more `healthCheck` steps as part of your Azure Deployment Manager rollout. Fill out the `healthCheck` steps with the following information:
-
- 1. The URI for the REST API for your health monitors (as defined by your health service provider).
- 1. Authentication information. Currently only API-key style authentication is supported. For Azure Monitor, the authentication type should be set as `RolloutIdentity` as the user-assigned managed identity used for Azure Deployment Manager rollout extends for Azure Monitor.
- 1. [HTTP status codes](https://www.wikipedia.org/wiki/List_of_HTTP_status_codes) or regular expressions that define a healthy response. You may provide regular expressions, which ALL must match for the response to be considered healthy, or you may provide expressions of which ANY must match for the response to be considered healthy. Both methods are supported.
-
-   The following JSON is an example to integrate Azure Monitor with Azure Deployment Manager. The example uses `RolloutIdentity` and establishes a health check where a rollout proceeds only if there are no alerts. The only supported Azure Monitor API is [Alerts - Get All](/rest/api/monitor/alertsmanagement/alerts/getall).
-
- ```json
- {
- "type": "Microsoft.DeploymentManager/steps",
- "apiVersion": "2018-09-01-preview",
- "name": "healthCheckStep",
- "location": "[parameters('azureResourceLocation')]",
- "properties": {
- "stepType": "healthCheck",
- "attributes": {
- "waitDuration": "PT1M",
- "maxElasticDuration": "PT1M",
- "healthyStateDuration": "PT1M",
- "type": "REST",
- "properties": {
- "healthChecks": [
- {
- "name": "appHealth",
- "request": {
- "method": "GET",
- "uri": "[parameters('healthCheckUrl')]",
- "authentication": {
- "type": "RolloutIdentity"
- }
- },
- "response": {
- "successStatusCodes": [
- "200"
- ],
- "regex": {
- "matches": [
- "\"value\":\\[\\]"
- ],
- "matchQuantifier": "All"
- }
- }
- }
- ]
- }
- }
- }
- }
- ```
-
- The following JSON is an example for all other health monitoring providers:
-
- ```json
- {
- "type": "Microsoft.DeploymentManager/steps",
- "apiVersion": "2018-09-01-preview",
- "name": "healthCheckStep",
- "location": "[parameters('azureResourceLocation')]",
- "properties": {
- "stepType": "healthCheck",
- "attributes": {
- "waitDuration": "PT0M",
- "maxElasticDuration": "PT0M",
- "healthyStateDuration": "PT1M",
- "type": "REST",
- "properties": {
- "healthChecks": [
- {
- "name": "appHealth",
- "request": {
- "method": "GET",
- "uri": "[parameters('healthCheckUrl')]",
- "authentication": {
- "type": "ApiKey",
- "name": "code",
- "in": "Query",
- "value": "[parameters('healthCheckAuthAPIKey')]"
- }
- },
- "response": {
- "successStatusCodes": [
- "200"
- ],
- "regex": {
- "matches": [
- "Status: healthy",
- "Status: warning"
- ],
- "matchQuantifier": "Any"
- }
- }
- }
- ]
- }
- }
- }
-    }
- ```
-
-1. Invoke the `healthCheck` steps at the appropriate time in your Azure Deployment Manager rollout. In the following example, a `healthCheck` step is invoked in `postDeploymentSteps` of `stepGroup2`.
-
- ```json
- "stepGroups": [
- {
- "name": "stepGroup1",
- "preDeploymentSteps": [],
- "deploymentTargetId": "[resourceId('Microsoft.DeploymentManager/serviceTopologies/services/serviceUnits', variables('serviceTopology').name, variables('serviceTopology').serviceWUS.name, variables('serviceTopology').serviceWUS.serviceUnit2.name)]",
- "postDeploymentSteps": []
- },
- {
- "name": "stepGroup2",
- "dependsOnStepGroups": ["stepGroup1"],
- "preDeploymentSteps": [],
- "deploymentTargetId": "[resourceId('Microsoft.DeploymentManager/serviceTopologies/services/serviceUnits', variables('serviceTopology').name, variables('serviceTopology').serviceWUS.name, variables('serviceTopology').serviceWUS.serviceUnit1.name)]",
- "postDeploymentSteps": [
- {
- "stepId": "[resourceId('Microsoft.DeploymentManager/steps/', 'healthCheckStep')]"
- }
- ]
- },
- {
- "name": "stepGroup3",
- "dependsOnStepGroups": ["stepGroup2"],
- "preDeploymentSteps": [],
- "deploymentTargetId": "[resourceId('Microsoft.DeploymentManager/serviceTopologies/services/serviceUnits', variables('serviceTopology').name, variables('serviceTopology').serviceEUS.name, variables('serviceTopology').serviceEUS.serviceUnit2.name)]",
- "postDeploymentSteps": []
- },
- {
- "name": "stepGroup4",
- "dependsOnStepGroups": ["stepGroup3"],
- "preDeploymentSteps": [],
- "deploymentTargetId": "[resourceId('Microsoft.DeploymentManager/serviceTopologies/services/serviceUnits', variables('serviceTopology').name, variables('serviceTopology').serviceEUS.name, variables('serviceTopology').serviceEUS.serviceUnit1.name)]",
- "postDeploymentSteps": []
- }
- ]
- ```
-
-To walk through an example, see [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md).
-
-## Phases of a health check
-
-At this point Azure Deployment Manager knows how to query for the health of your service and at which phases in your rollout to do so. However, Azure Deployment Manager also allows for deep configuration of the timing of these checks. A `healthCheck` step is executed in three sequential phases, all of which have configurable durations:
-
-1. Wait
-
- 1. After a deployment operation is completed, VMs may be rebooting, reconfiguring based on new data, or even being started for the first time. It also takes time for services to start emitting health signals to be aggregated by the health monitoring provider into something useful. During this tumultuous process, it may not make sense to check for service health since the update hasn't yet reached a steady state. Indeed, the service may be oscillating between healthy and unhealthy states as the resources settle.
- 1. During the Wait phase, service health isn't monitored. This phase gives the deployed resources time to bake before the health check process begins.
-
-1. Elastic
-
- 1. Since it's impossible to know in all cases how long it will take before resources become stable, the Elastic phase allows for a flexible time period between when the resources are potentially unstable and when they are required to maintain a healthy steady state.
- 1. When the Elastic phase begins, Azure Deployment Manager begins polling the provided REST endpoint for service health periodically. The polling interval is configurable.
- 1. If the health monitor comes back with signals indicating that the service is unhealthy, these signals are ignored, the Elastic phase continues, and polling continues.
- 1. When the health monitor returns signals indicating that the service is healthy, the Elastic phase ends and the HealthyState phase begins.
- 1. Thus, the duration specified for the Elastic phase is the maximum amount of time that can be spent polling for service health before a healthy response is considered mandatory.
-
-1. HealthyState
-
- 1. During the HealthyState phase, service health is continually polled at the same interval as the Elastic phase.
- 1. The service is expected to maintain healthy signals from the health monitoring provider for the entire specified duration.
- 1. If at any point an unhealthy response is detected, Azure Deployment Manager will stop the entire rollout and return the REST response carrying the unhealthy service signals.
- 1. After the HealthyState duration has ended, the `healthCheck` is complete, and deployment continues to the next step.
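
The three phases can be modeled as a simple timing loop. The following Python sketch illustrates the semantics described above; the function and its parameters are invented for this article, and in a real rollout the polling happens against the configured REST endpoint:

```python
def run_health_check(poll, wait_s, max_elastic_s, healthy_state_s, interval_s=1):
    # Wait phase: health isn't monitored while resources bake.
    t = wait_s
    # Elastic phase: unhealthy results are ignored; keep polling until
    # the first healthy result or until the elastic window runs out.
    elastic_end = t + max_elastic_s
    while True:
        if poll(t):
            break
        t += interval_s
        if t >= elastic_end:
            return (False, t)  # never became healthy in time: rollout fails
    # HealthyState phase: the service must stay healthy for the full duration.
    healthy_end = t + healthy_state_s
    while t < healthy_end:
        if not poll(t):
            return (False, t)  # any unhealthy result stops the rollout
        t += interval_s
    return (True, t)  # healthCheck step succeeds; deployment continues

# A service that becomes healthy at t=3 and stays healthy passes the check.
ok, _ = run_health_check(lambda t: t >= 3, wait_s=1, max_elastic_s=5, healthy_state_s=3)
print(ok)  # True
```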
-
-## Next steps
-
-In this article, you learned about how to integrate health monitoring in Azure Deployment Manager. Proceed to the next article to learn how to deploy with Deployment Manager.
-
-> [!div class="nextstepaction"]
-> [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md)
azure-resource-manager Deployment Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-manager-overview.md
- Title: Safe deployment across regions - Azure Deployment Manager
-description: Learn how to deploy a service over many regions with Azure Deployment Manager and about safe deployment practices.
- Previously updated: 11/21/2019
-# Enable safe deployment practices with Azure Deployment Manager (Public preview)
-
-To deploy your service across many regions and make sure it's running as expected in each region, you can use Azure Deployment Manager to coordinate a staged rollout of the service. Just as you would for any Azure deployment, you define the resources for your service in [Resource Manager templates](template-syntax.md). After creating the templates, you use Deployment Manager to describe the topology for your service and how it should be rolled out.
-
-Deployment Manager is a feature of Resource Manager. It expands your capabilities during deployment. Use Deployment Manager when you have a complex service that needs to be deployed to several regions. By staging the rollout of your service, you can find potential problems before it has been deployed to all regions. If you don't need the extra precautions of a staged rollout, use the standard [deployment options](deploy-portal.md) for Resource Manager. Deployment Manager seamlessly integrates with all existing third-party tools that support Resource Manager deployments, such as continuous integration and continuous delivery (CI/CD) offerings.
-
-Azure Deployment Manager is in preview. Help us improve the feature by providing [feedback](https://aka.ms/admfeedback).
-
-To use Deployment Manager, you need to create four files:
-
-* Topology template.
-* Rollout template.
-* Parameter file for topology.
-* Parameter file for rollout.
-
-You deploy the topology template before deploying the rollout template.
-
-Additional resources:
-
-* [Azure Deployment Manager REST API reference](/rest/api/deploymentmanager/).
-* [Tutorial: Use Azure Deployment Manager with Resource Manager templates](./deployment-manager-tutorial.md).
-* [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md).
-* [Azure Deployment Manager sample](https://github.com/Azure-Samples/adm-quickstart).
-
-## Identity and access
-
-With Deployment Manager, a [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) performs the deployment actions. You create this identity before starting your deployment. It must have access to the subscription you're deploying the service to, and sufficient permission to complete the deployment. For information about the actions granted through roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
-
-The identity must reside in the same location as the rollout.
-
-## Topology template
-
-The topology template describes the Azure resources that make up your service and where to deploy them. The following image shows the topology for an example service:
-
-![Hierarchy from service topology to services to service units](./media/deployment-manager-overview/service-topology.png)
-
-The topology template includes the following resources:
-
-* Artifact source - where your Resource Manager templates and parameters are stored.
-* Service topology - points to artifact source.
- * Services - specifies location and Azure subscription ID.
- * Service units - specifies resource group, deployment mode, and path to template and parameter files.
-
-To understand what happens at each level, it's helpful to see which values you provide.
-
-![Values for each level](./media/deployment-manager-overview/topology-values.png)
-
-### Artifact source for templates
-
-In your topology template, you create an artifact source that holds the templates and parameter files. The artifact source is a way to pull the files for deployment. You'll see another artifact source for binaries later in this article.
-
-The following example shows the general format of the artifact source.
-
-```json
-{
- "type": "Microsoft.DeploymentManager/artifactSources",
- "apiVersion": "2018-09-01-preview",
- "name": "<artifact-source-name>",
- "location": "<artifact-source-location>",
- "properties": {
- "sourceType": "AzureStorage",
- "artifactRoot": "<root-folder-for-templates>",
- "authentication": {
- "type": "SAS",
- "properties": {
- "sasUri": "<SAS-URI-for-storage-container>"
- }
- }
- }
-}
-```
-
-For more information, see [artifactSources template reference](/azure/templates/Microsoft.DeploymentManager/artifactSources).
-
-### Service topology
-
-The following example shows the general format of the service topology resource. You provide the resource ID of the artifact source that holds the templates and parameter files. The service topology includes all service resources. Make sure the artifact source is available because the service topology depends on it.
-
-```json
-{
- "type": "Microsoft.DeploymentManager/serviceTopologies",
- "apiVersion": "2018-09-01-preview",
- "name": "<topology-name>",
- "location": "<topology-location>",
- "dependsOn": [
- "<artifact-source>"
- ],
- "properties": {
- "artifactSourceId": "<resource-ID-artifact-source>"
- },
- "resources": [
- {
- "type": "services",
- ...
- }
- ]
-}
-```
-
-For more information, see [serviceTopologies template reference](/azure/templates/Microsoft.DeploymentManager/serviceTopologies).
-
-### Services
-
-The following example shows the general format of the services resource. In each service, you provide the location and Azure subscription ID to use for deploying your service. To deploy to several regions, you define a service for each region. The service depends on the service topology.
-
-```json
-{
- "type": "services",
- "apiVersion": "2018-09-01-preview",
- "name": "<service-name>",
- "location": "<service-location>",
- "dependsOn": [
- "<service-topology>"
- ],
- "properties": {
- "targetSubscriptionId": "<subscription-ID>",
- "targetLocation": "<location-of-deployed-service>"
- },
- "resources": [
- {
- "type": "serviceUnits",
- ...
- }
- ]
-}
-```
-
-For more information, see [services template reference](/azure/templates/Microsoft.DeploymentManager/serviceTopologies/services).
-
-### Service Units
-
-The following example shows the general format of the service units resource. In each service unit, you specify the resource group, the [deployment mode](deployment-modes.md) to use for deployment, and the path to the template and parameter file. If you specify a relative path for the template and parameters, the full path is constructed from the root folder in the artifacts source. You can specify an absolute path for the template and parameters, but you lose the ability to easily version your releases. The service unit depends on the service.
-
-```json
-{
- "type": "serviceUnits",
- "apiVersion": "2018-09-01-preview",
- "name": "<service-unit-name>",
- "location": "<service-unit-location>",
- "dependsOn": [
- "<service>"
- ],
- "tags": {
- "serviceType": "Service West US Web App"
- },
- "properties": {
- "targetResourceGroup": "<resource-group-name>",
- "deploymentMode": "Incremental",
- "artifacts": {
- "templateArtifactSourceRelativePath": "<relative-path-to-template>",
- "parametersArtifactSourceRelativePath": "<relative-path-to-parameter-file>"
- }
- }
-}
-```
-
-Each template should include the related resources that you want to deploy in one step. For example, a service unit could have a template that deploys all of the resources for your service's front end.
-
-For more information, see [serviceUnits template reference](/azure/templates/Microsoft.DeploymentManager/serviceTopologies/services/serviceUnits).
-
-## Rollout template
-
-The rollout template describes the steps to take when deploying your service. You specify the service topology to use and define the order for deploying service units. It includes an artifact source for storing binaries for the deployment. In your rollout template, you define the following hierarchy:
-
-* Artifact source.
-* Step.
-* Rollout.
- * Step groups.
- * Deployment operations.
-
-The following image shows the hierarchy of the rollout template:
-
-![Hierarchy from rollout to steps](./media/deployment-manager-overview/Rollout.png)
-
-Each rollout can have many step groups. Each step group has one deployment operation that points to a service unit in the service topology.
-
-### Artifact source for binaries
-
-In the rollout template, you create an artifact source for the binaries you need to deploy to the service. This artifact source is similar to the [artifact source for templates](#artifact-source-for-templates), except that it contains the scripts, web pages, compiled code, or other files needed by your service.
-
-### Steps
-
-You can define a step to perform either before or after your deployment operation. Currently, only the `wait` step and the `healthCheck` step are available.
-
-The `wait` step pauses the deployment before continuing. It allows you to verify that your service is running as expected before deploying the next service unit. The following example shows the general format of a `wait` step.
-
-```json
-{
- "type": "Microsoft.DeploymentManager/steps",
- "apiVersion": "2018-09-01-preview",
- "name": "waitStep",
- "location": "<step-location>",
- "properties": {
- "stepType": "wait",
- "attributes": {
- "duration": "PT1M"
- }
- }
-},
-```
-
-The duration property uses the [ISO 8601 standard](https://en.wikipedia.org/wiki/ISO_8601#Durations). The preceding example specifies a one-minute wait.
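
For illustration, the time portion of such a duration can be parsed with a few lines of code. This sketch covers only the `PT…H…M…S` forms used by step durations (the function is hypothetical; use a full ISO 8601 library in production code):

```python
import re

def parse_iso_duration(value):
    # Accept only the time components (hours, minutes, seconds),
    # which is all a step duration like PT1M uses.
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", value)
    if not m or value == "PT":
        raise ValueError(f"unsupported duration: {value}")
    hours, minutes, seconds = (int(g) if g else 0 for g in m.groups())
    return hours * 3600 + minutes * 60 + seconds

print(parse_iso_duration("PT1M"))     # 60 seconds
print(parse_iso_duration("PT1H30M"))  # 5400 seconds
```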
-
-For more information about health checks, see [Introduce health integration rollout to Azure Deployment Manager](./deployment-manager-health-check.md) and [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md).
-
-For more information, see [steps template reference](/azure/templates/Microsoft.DeploymentManager/steps).
-
-### Rollouts
-
-Make sure the artifact source is available because the rollout depends on it. The rollout defines step groups for each service unit that is deployed. You can define actions to take before or after deployment. For example, you can specify that the deployment wait after a service unit has been deployed. You can also define the order of the step groups.
-
-The identity object specifies the [user-assigned managed identity](#identity-and-access) that performs the deployment actions.
-
-The following example shows the general format of the rollout.
-
-```json
-{
- "type": "Microsoft.DeploymentManager/rollouts",
- "apiVersion": "2018-09-01-preview",
- "name": "<rollout-name>",
- "location": "<rollout-location>",
-  "identity": {
- "type": "userAssigned",
- "identityIds": [
- "<managed-identity-ID>"
- ]
- },
- "dependsOn": [
- "<artifact-source>"
- ],
- "properties": {
- "buildVersion": "1.0.0.0",
- "artifactSourceId": "<artifact-source-ID>",
- "targetServiceTopologyId": "<service-topology-ID>",
- "stepGroups": [
- {
- "name": "stepGroup1",
- "dependsOnStepGroups": ["<step-group-name>"],
- "preDeploymentSteps": ["<step-ID>"],
- "deploymentTargetId":
- "<service-unit-ID>",
- "postDeploymentSteps": ["<step-ID>"]
- },
- ...
- ]
- }
-}
-```
-
-For more information, see [rollouts template reference](/azure/templates/Microsoft.DeploymentManager/rollouts).
-
-## Parameter file
-
-You create two parameter files. One parameter file is used when deploying the service topology, and the other is used for the rollout deployment. Some values must be the same in both parameter files.
-
-## containerRoot variable
-
-With versioned deployments, the path to your artifacts changes with each new version. The first time you run a deployment, the path might be `https://<base-uri-blob-container>/binaries/1.0.0.0`. The second time, it might be `https://<base-uri-blob-container>/binaries/1.0.0.1`. Deployment Manager simplifies getting the correct root path for the current deployment by using the `$containerRoot` variable. This value changes with each version and isn't known before deployment.
-
-Use the `$containerRoot` variable in the parameter file for the template that deploys the Azure resources. At deployment time, this variable is replaced with the actual values from the rollout.
-
-For example, during rollout you create an artifact source for the binary artifacts.
-
-```json
-{
- "type": "Microsoft.DeploymentManager/artifactSources",
- "apiVersion": "2018-09-01-preview",
- "name": "[variables('rolloutArtifactSource').name]",
- "location": "[parameters('azureResourceLocation')]",
- "properties": {
- "sourceType": "AzureStorage",
- "artifactRoot": "[parameters('binaryArtifactRoot')]",
- "authentication" :
- {
- "type": "SAS",
- "properties": {
- "sasUri": "[parameters('artifactSourceSASLocation')]"
- }
- }
- }
-},
-```
-
-Notice the `artifactRoot` and `sasUri` properties. The artifact root might be set to a value like `binaries/1.0.0.0`. The SAS URI is the URI to your storage container with a SAS token for access. Deployment Manager automatically constructs the value of the `$containerRoot` variable. It combines those values in the format `<container>/<artifactRoot>`.
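
As an illustration of that combination, the following sketch derives a `$containerRoot`-style value from a SAS URI and an artifact root. The function is hypothetical: Deployment Manager performs this substitution internally, and the real value also retains the SAS token for access:

```python
from urllib.parse import urlsplit, urlunsplit

def container_root(sas_uri, artifact_root):
    # Strip the SAS query string, keep the container URL,
    # and append the versioned artifact root folder.
    parts = urlsplit(sas_uri)
    base = urlunsplit((parts.scheme, parts.netloc, parts.path.rstrip("/"), "", ""))
    return f"{base}/{artifact_root.strip('/')}"

print(container_root(
    "https://mystore.blob.core.windows.net/artifacts?sv=2019-02-02&sig=abc",
    "binaries/1.0.0.0"))
# https://mystore.blob.core.windows.net/artifacts/binaries/1.0.0.0
```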
-
-Your template and parameter file need to know the correct path for getting the versioned binaries. For example, to deploy files for a web app, create the following parameter file with the `$containerRoot` variable. You must use two backslashes (`\\`) for the path because the first is an escape character.
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "deployPackageUri": {
- "value": "$containerRoot\\helloWorldWebAppWUS.zip"
- }
- }
-}
-```
-
-Then, use that parameter in your template:
-
-```json
-{
- "name": "MSDeploy",
- "apiVersion": "2015-08-01",
- "type": "extensions",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[concat('Microsoft.Web/sites/', parameters('WebAppName'))]"
- ],
- "tags": {
- "displayName": "WebAppMSDeploy"
- },
- "properties": {
-    "packageUri": "[parameters('deployPackageUri')]"
- }
-}
-```
-
-You manage versioned deployments by creating new folders and passing in that root path during rollout. The path flows through to the template that deploys the resources.
-
-## Next steps
-
-In this article, you learned about Deployment Manager. Proceed to the next article to learn how to deploy with Deployment Manager.
-
-> [!div class="nextstepaction"]
-> [Tutorial: Use Azure Deployment Manager with Resource Manager templates](./deployment-manager-tutorial.md)
->
-> [Quickstart: Try out Azure Deployment Manager in just a few minutes](https://github.com/Azure-Samples/adm-quickstart)
azure-resource-manager Deployment Manager Tutorial Health Check https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-manager-tutorial-health-check.md
- Title: Use Azure Deployment Manager health check
-description: Use health check to safely deploy Azure resources with Azure Deployment Manager.
- Previously updated: 10/09/2019
-# Tutorial: Use health check in Azure Deployment Manager (Public preview)
-
-Learn how to integrate a health check in [Azure Deployment Manager](./deployment-manager-overview.md). This tutorial is based on the [Use Azure Deployment Manager with Resource Manager templates](./deployment-manager-tutorial.md) tutorial. You must complete that tutorial before you proceed with this one.
-
-In the rollout template used in [Use Azure Deployment Manager with Resource Manager templates](./deployment-manager-tutorial.md), you used a wait step. In this tutorial, you replace the wait step with a health check step.
-
-> [!IMPORTANT]
-> If your subscription is marked for Canary to test out new Azure features, you can only use Azure Deployment Manager to deploy to the Canary regions.
-
-This tutorial covers the following tasks:
-
-> [!div class="checklist"]
-> * Create a health check service simulator
-> * Revise the rollout template
-> * Deploy the topology
-> * Deploy the rollout with unhealthy status
-> * Verify the rollout deployment
-> * Deploy the rollout with healthy status
-> * Verify the rollout deployment
-> * Clean up resources
-
-Additional resources:
-
-* [Azure Deployment Manager REST API reference](/rest/api/deploymentmanager/).
-* [An Azure Deployment Manager sample](https://github.com/Azure-Samples/adm-quickstart).
-
-## Prerequisites
-
-To complete this tutorial you need:
-
-* Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-* Complete [Use Azure Deployment Manager with Resource Manager templates](./deployment-manager-tutorial.md).
-
-## Install the artifacts
-
-If you haven't already downloaded the samples used in the prerequisite tutorial, you can download [the templates and the artifacts](https://github.com/Azure/azure-docs-json-samples/raw/master/tutorial-adm/ADMTutorial.zip) and unzip them locally. Then, run the PowerShell script from the prerequisite tutorial's section [Prepare the artifacts](./deployment-manager-tutorial.md#prepare-the-artifacts). The script creates a resource group, creates a storage account with a blob container, uploads the downloaded files, and then creates a SAS token.
-
-* Make a copy of the URL with SAS token. This URL is needed to populate a field in the two parameter files: topology parameters file and rollout parameters file.
-* Open _CreateADMServiceTopology.Parameters.json_ and update the values of `projectName` and `artifactSourceSASLocation`.
-* Open _CreateADMRollout.Parameters.json_ and update the values of `projectName` and `artifactSourceSASLocation`.
-
-## Create a health check service simulator
-
-In production, you typically use one or more monitoring providers. In order to make health integration as easy as possible, Microsoft has been working with some of the top service health monitoring companies to provide you with a simple copy/paste solution to integrate health checks with your deployments. For a list of these companies, see [Health monitoring providers](./deployment-manager-health-check.md#health-monitoring-providers). For the purpose of this tutorial, you create an [Azure Function](../../azure-functions/index.yml) to simulate a health monitoring service. This function takes a status code, and returns the same code. Your Azure Deployment Manager template uses the status code to determine how to proceed with the deployment.
-
-The following two files are used for deploying the Azure Function. You don't need to download these files to go through the tutorial.
-
-* A Resource Manager template located at [https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/tutorial-adm/deploy_hc_azure_function.json](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/tutorial-adm/deploy_hc_azure_function.json). You deploy this template to create an Azure Function.
-* A zip file of the Azure Function source code, [https://github.com/Azure/azure-docs-json-samples/raw/master/tutorial-adm/ADMHCFunction0417.zip](https://github.com/Azure/azure-docs-json-samples/raw/master/tutorial-adm/ADMHCFunction0417.zip). This zip file is called by the Resource Manager template.
-
-To deploy the Azure function, select **Try it** to open the Azure Cloud Shell, and then paste the following script into the shell window. To paste the code, right-click the shell window and then select **Paste**.
-
-```azurepowershell-interactive
-New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri "https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/tutorial-adm/deploy_hc_azure_function.json" -projectName $projectName
-```
-
-To verify and test the Azure function:
-
-1. Open the [Azure portal](https://portal.azure.com).
-1. Open the resource group. The default name is the project name with **rg** appended.
-1. Select the app service from the resource group. The default name of the app service is the project name with **webapp** appended.
-1. Expand **Functions**, and then select **HttpTrigger1**.
-
- ![Azure Deployment Manager health check Azure Function](./media/deployment-manager-tutorial-health-check/azure-deployment-manager-hc-function.png)
-
-1. Select **&lt;/> Get function URL**.
-1. Select **Copy** to copy the URL to the clipboard. The URL is similar to:
-
- ```url
- https://myhc0417webapp.azurewebsites.net/api/healthStatus/{healthStatus}?code=hc4Y1wY4AqsskAkVw6WLAN1A4E6aB0h3MbQ3YJRF3XtXgHvooaG0aw==
- ```
-
- Replace `{healthStatus}` in the URL with a status code. In this tutorial, use *unhealthy* to test the unhealthy scenario, and use either *healthy* or *warning* to test the healthy scenario. Create two URLs, one with the *unhealthy* status, and the other with *healthy* status. For example:
-
- ```url
- https://myhc0417webapp.azurewebsites.net/api/healthStatus/unhealthy?code=hc4Y1wY4AqsskAkVw6WLAN1A4E6aB0h3MbQ3YJRF3XtXgHvooaG0aw==
- https://myhc0417webapp.azurewebsites.net/api/healthStatus/healthy?code=hc4Y1wY4AqsskAkVw6WLAN1A4E6aB0h3MbQ3YJRF3XtXgHvooaG0aw==
- ```
-
- You need both URLs to complete this tutorial.
-
-1. To test the health monitoring simulator, open the URLs that you created in the previous step. The results for the unhealthy status will be similar to:
-
- ```Output
- Status: unhealthy
- ```
-
-## Revise the rollout template
-
-The purpose of this section is to show you how to include a health check step in the rollout template.
-
-1. Open _CreateADMRollout.json_ that you created in [Use Azure Deployment Manager with Resource Manager templates](./deployment-manager-tutorial.md). This JSON file is a part of the download. See [Prerequisites](#prerequisites).
-1. Add two more parameters:
-
- ```json
- "healthCheckUrl": {
- "type": "string",
- "metadata": {
- "description": "Specifies the health check URL."
- }
- },
- "healthCheckAuthAPIKey": {
- "type": "string",
- "metadata": {
- "description": "Specifies the health check Azure Function function authorization key."
- }
- }
- ```
-
-1. Replace the wait step resource definition with a health check step resource definition:
-
- ```json
- {
- "type": "Microsoft.DeploymentManager/steps",
- "apiVersion": "2018-09-01-preview",
- "name": "healthCheckStep",
- "location": "[parameters('azureResourceLocation')]",
- "properties": {
- "stepType": "healthCheck",
- "attributes": {
- "waitDuration": "PT0M",
- "maxElasticDuration": "PT0M",
- "healthyStateDuration": "PT1M",
- "type": "REST",
- "properties": {
- "healthChecks": [
- {
- "name": "appHealth",
- "request": {
- "method": "GET",
- "uri": "[parameters('healthCheckUrl')]",
- "authentication": {
- "type": "ApiKey",
- "name": "code",
- "in": "Query",
- "value": "[parameters('healthCheckAuthAPIKey')]"
- }
- },
- "response": {
- "successStatusCodes": [
- "200"
- ],
- "regex": {
- "matches": [
- "Status: healthy",
- "Status: warning"
- ],
- "matchQuantifier": "Any"
- }
- }
- }
- ]
- }
- }
- }
- },
- ```
-
- Based on the definition, the rollout proceeds if the health status is either *healthy* or *warning*.
-
-1. Update the `dependsOn` of the rollout definition to include the newly defined health check step:
-
- ```json
- "dependsOn": [
- "[resourceId('Microsoft.DeploymentManager/artifactSources', variables('rolloutArtifactSource').name)]",
- "[resourceId('Microsoft.DeploymentManager/steps/', 'healthCheckStep')]"
- ],
- ```
-
-1. Update `stepGroups` to include the health check step. The `healthCheckStep` is called in `postDeploymentSteps` of `stepGroup2`. `stepGroup3` and `stepGroup4` are only deployed if the healthy status is either *healthy* or *warning*.
-
- ```json
- "stepGroups": [
- {
- "name": "stepGroup1",
- "preDeploymentSteps": [],
- "deploymentTargetId": "[resourceId('Microsoft.DeploymentManager/serviceTopologies/services/serviceUnits', variables('serviceTopology').name, variables('serviceTopology').serviceWUS.name, variables('serviceTopology').serviceWUS.serviceUnit2.name)]",
- "postDeploymentSteps": []
- },
- {
- "name": "stepGroup2",
- "dependsOnStepGroups": ["stepGroup1"],
- "preDeploymentSteps": [],
- "deploymentTargetId": "[resourceId('Microsoft.DeploymentManager/serviceTopologies/services/serviceUnits', variables('serviceTopology').name, variables('serviceTopology').serviceWUS.name, variables('serviceTopology').serviceWUS.serviceUnit1.name)]",
- "postDeploymentSteps": [
- {
- "stepId": "[resourceId('Microsoft.DeploymentManager/steps/', 'healthCheckStep')]"
- }
- ]
- },
- {
- "name": "stepGroup3",
- "dependsOnStepGroups": ["stepGroup2"],
- "preDeploymentSteps": [],
- "deploymentTargetId": "[resourceId('Microsoft.DeploymentManager/serviceTopologies/services/serviceUnits', variables('serviceTopology').name, variables('serviceTopology').serviceEUS.name, variables('serviceTopology').serviceEUS.serviceUnit2.name)]",
- "postDeploymentSteps": []
- },
- {
- "name": "stepGroup4",
- "dependsOnStepGroups": ["stepGroup3"],
- "preDeploymentSteps": [],
- "deploymentTargetId": "[resourceId('Microsoft.DeploymentManager/serviceTopologies/services/serviceUnits', variables('serviceTopology').name, variables('serviceTopology').serviceEUS.name, variables('serviceTopology').serviceEUS.serviceUnit1.name)]",
- "postDeploymentSteps": []
- }
- ]
- ```
-
-   If you compare the `stepGroup3` section before and after the revision, `stepGroup3` now depends on `stepGroup2`. This dependency is necessary because `stepGroup3` and the subsequent step groups rely on the results of health monitoring.
-
- The following screenshot illustrates the modified areas and how the health check step is used:
-
- ![Azure Deployment Manager health check template](./media/deployment-manager-tutorial-health-check/azure-deployment-manager-hc-rollout-template.png)
-
-## Deploy the topology
-
-Run the following PowerShell script to deploy the topology. You need the same _CreateADMServiceTopology.json_ and _CreateADMServiceTopology.Parameters.json_ that you used in [Use Azure Deployment Manager with Resource Manager templates](./deployment-manager-tutorial.md).
-
-```azurepowershell
-# Create the service topology
-New-AzResourceGroupDeployment `
- -ResourceGroupName $resourceGroupName `
- -TemplateFile "$filePath\ADMTemplates\CreateADMServiceTopology.json" `
- -TemplateParameterFile "$filePath\ADMTemplates\CreateADMServiceTopology.Parameters.json"
-```
-
-Using the Azure portal, verify that the service topology and the underlying resources have been created successfully:
-
-![Azure Deployment Manager tutorial deployed service topology resources](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-deployed-topology-resources.png)
-
-**Show hidden types** must be selected to see the resources.
-
-## Deploy the rollout with the unhealthy status
-
-Use the unhealthy status URL you created in [Create a health check service simulator](#create-a-health-check-service-simulator). You need the revised _CreateADMServiceTopology.json_ and the same _CreateADMServiceTopology.Parameters.json_ that you used in [Use Azure Deployment Manager with Resource Manager templates](./deployment-manager-tutorial.md).
-
-```azurepowershell-interactive
-$healthCheckUrl = Read-Host -Prompt "Enter the health check Azure function URL"
-$healthCheckAuthAPIKey = $healthCheckUrl.Substring($healthCheckUrl.IndexOf("?code=")+6, $healthCheckUrl.Length-$healthCheckUrl.IndexOf("?code=")-6)
-$healthCheckUrl = $healthCheckUrl.Substring(0, $healthCheckUrl.IndexOf("?"))
-
-# Create the rollout
-New-AzResourceGroupDeployment `
- -ResourceGroupName $resourceGroupName `
- -TemplateFile "$filePath\ADMTemplates\CreateADMRollout.json" `
- -TemplateParameterFile "$filePath\ADMTemplates\CreateADMRollout.Parameters.json" `
- -healthCheckUrl $healthCheckUrl `
- -healthCheckAuthAPIKey $healthCheckAuthAPIKey
-```
-
-> [!NOTE]
-> `New-AzResourceGroupDeployment` is an asynchronous call. The success message only means the deployment has successfully begun. To verify the deployment, use `Get-AzDeploymentManagerRollout` as shown in the next procedure.
-
-To check the rollout progress, use the following PowerShell script:
-
-```azurepowershell
-$projectName = Read-Host -Prompt "Enter the same project name used earlier in this tutorial"
-$resourceGroupName = "${projectName}rg"
-$rolloutName = "${projectName}Rollout"
-
-# Get the rollout status
-Get-AzDeploymentManagerRollout `
- -ResourceGroupName $resourceGroupName `
- -Name $rolloutName `
- -Verbose
-```
-
-The following sample output shows that the deployment failed because of the unhealthy status:
-
-```Output
-Service: myhc0417ServiceWUSrg
- TargetLocation: WestUS
- TargetSubscriptionId: <Subscription ID>
-
- ServiceUnit: myhc0417ServiceWUSWeb
- TargetResourceGroup: myhc0417ServiceWUSrg
-
- Step: RestHealthCheck/healthCheckStep.PostDeploy
- Status: Failed
- StepGroup: stepGroup2
- Operation Info:
- Start Time: 05/06/2019 17:58:31
- End Time: 05/06/2019 17:58:32
- Total Duration: 00:00:01
- Error:
- Code: ResourceReportedUnhealthy
- Message: Health checks failed as the following resources were unhealthy: '05/06/2019 17:58:32 UTC: Health check 'appHealth' failed with the following errors: Response from endpoint 'https://myhc0417webapp.azurewebsites.net/api/healthStatus/unhealthy' does not match the regex pattern(s): 'Status: healthy, Status: warning.'. Response content: "Status: unhealthy"..'.
-Get-AzDeploymentManagerRollout :
-Service: myhc0417ServiceWUSrg
- ServiceUnit: myhc0417ServiceWUSWeb
- Step: RestHealthCheck/healthCheckStep.PostDeploy
- Status: Failed
- StepGroup: stepGroup2
- Operation Info:
- Start Time: 05/06/2019 17:58:31
- End Time: 05/06/2019 17:58:32
- Total Duration: 00:00:01
- Error:
- Code: ResourceReportedUnhealthy
- Message: Health checks failed as the following resources were unhealthy: '05/06/2019 17:58:32 UTC: Health check 'appHealth' failed with the following errors: Response from endpoint 'https://myhc0417webapp.azurewebsites.net/api/healthStatus/unhealthy' does not match the regex pattern(s): 'Status: healthy, Status: warning.'. Response content: "Status: unhealthy"..'.
-At line:1 char:1
-+ Get-AzDeploymentManagerRollout `
-+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-+ CategoryInfo : NotSpecified: (:) [Get-AzDeploymentManagerRollout], Exception
-+ FullyQualifiedErrorId : RolloutFailed,Microsoft.Azure.Commands.DeploymentManager.Commands.GetRollout
-
-ResourceGroupName : myhc0417rg
-BuildVersion : 1.0.0.0
-ArtifactSourceId : /subscriptions/<Subscription ID>/resourceGroups/myhc0417rg/providers/Mi
- crosoft.DeploymentManager/artifactSources/myhc0417ArtifactSourceRollout
-TargetServiceTopologyId : /subscriptions/<Subscription ID>/resourceGroups/myhc0417rg/providers/Mi
- crosoft.DeploymentManager/serviceTopologies/myhc0417ServiceTopology
-Status : Failed
-TotalRetryAttempts : 0
-Identity : Microsoft.Azure.Commands.DeploymentManager.Models.PSIdentity
-OperationInfo : Microsoft.Azure.Commands.DeploymentManager.Models.PSRolloutOperationInfo
-Services : {myhc0417ServiceWUS, myhc0417ServiceWUSrg}
-Name : myhc0417Rollout
-Type : Microsoft.DeploymentManager/rollouts
-Location : centralus
-Id : /subscriptions/<Subscription ID>/resourcegroups/myhc0417rg/providers/Mi
- crosoft.DeploymentManager/rollouts/myhc0417Rollout
-Tags :
-```
-
-After the rollout is completed, you'll see one additional resource group created for West US.
-
-## Deploy the rollout with the healthy status
-
-Repeat the steps in the previous section, this time using the healthy status URL. After the rollout is completed, you'll see one more resource group created for East US.
-
-## Verify the deployment
-
-1. Open the [Azure portal](https://portal.azure.com).
-1. Browse to the new web applications under the new resource groups created by the rollout deployment.
-1. Open the web application in a web browser. Verify the location and the version in the _index.html_ file.
-
-## Clean up resources
-
-When the Azure resources are no longer needed, clean up the resources you deployed by deleting the resource group.
-
-1. From the Azure portal, select **Resource group** from the left menu.
-1. Use the **Filter by name** field to narrow down the resource groups created in this tutorial.
-
- * **&lt;projectName>rg**: contains the Deployment Manager resources.
- * **&lt;projectName>ServiceWUSrg**: contains the resources defined by ServiceWUS.
- * **&lt;projectName>ServiceEUSrg**: contains the resources defined by ServiceEUS.
- * The resource group for the user-defined managed identity.
-1. Select the resource group name.
-1. Select **Delete resource group** from the top menu.
-1. Repeat the last two steps to delete other resource groups created by this tutorial.
-
-## Next steps
-
-In this tutorial, you learned how to use the health check feature of Azure Deployment Manager. To learn more, see [Azure Resource Manager documentation](../index.yml).
azure-resource-manager Deployment Manager Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-manager-tutorial.md
- Title: Use Azure Deployment Manager to deploy templates
-description: Learn how to use Resource Manager templates with Azure Deployment Manager to deploy Azure resources.
- Previously updated : 08/25/2020
-# Tutorial: Use Azure Deployment Manager with Resource Manager templates (Public preview)
-
-Learn how to use [Azure Deployment Manager](./deployment-manager-overview.md) to deploy your applications across multiple regions. If you prefer a faster approach, [Azure Deployment Manager quickstart](https://github.com/Azure-Samples/adm-quickstart) creates the required configurations in your subscription and customizes the artifacts to deploy an application across multiple regions. The quickstart performs the same tasks as this tutorial.
-
-To use Deployment Manager, you need to create two templates:
-
-* **A topology template**: describes the Azure resources that make up your applications and where to deploy them.
-* **A rollout template**: describes the steps to take when deploying your applications.
-
-> [!IMPORTANT]
-> If your subscription is marked for Canary to test out new Azure features, you can only use Azure Deployment Manager to deploy to the Canary regions.
-
-This tutorial covers the following tasks:
-
-> [!div class="checklist"]
-> * Understand the scenario
-> * Download the tutorial files
-> * Prepare the artifacts
-> * Create the user-defined managed identity
-> * Create the service topology template
-> * Create the rollout template
-> * Deploy the templates
-> * Verify the deployment
-> * Deploy the newer version
-> * Clean up resources
-
-Additional resources:
-
-* [Azure Deployment Manager REST API reference](/rest/api/deploymentmanager/).
-* [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md).
-
-## Prerequisites
-
-To complete this tutorial, you need:
-
-* Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-* Some experience with developing [Azure Resource Manager templates](overview.md).
-* Azure PowerShell. For more information, see [Get started with Azure PowerShell](/powershell/azure/get-started-azureps).
-* Deployment Manager cmdlets. To install these prerelease cmdlets, you need the latest version of PowerShellGet. To get the latest version, see [Installing PowerShellGet](/powershell/scripting/gallery/installing-psget). After installing PowerShellGet, close your PowerShell window. Open a new elevated PowerShell window, and use the following command:
-
- ```powershell
- Install-Module -Name Az.DeploymentManager
- ```
-
-## Understand the scenario
-
-The service topology template describes the Azure resources that make up your service and where to deploy them. The service topology definition has the following hierarchy:
-
-* Service topology
- * Services
- * Service units
-
-The following diagram illustrates the service topology used in this tutorial:
-
-![Azure Deployment Manager tutorial scenario diagram](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-scenario-diagram.png)
-
-There are two services allocated in the West US and the East US locations. Each service has two service units: a front-end web application and a back-end storage account. The service unit definitions contain links to the template and parameter files that create the web applications and the storage accounts.
-
-## Download the tutorial files
-
-1. Download [the templates and the artifacts](https://github.com/Azure/azure-docs-json-samples/raw/master/tutorial-adm/ADMTutorial.zip) used by this tutorial.
-1. Unzip the files to your local computer.
-
-Under the root folder, there are two folders:
-
-* _ADMTemplates_: contains the Deployment Manager templates, which include:
- * _CreateADMServiceTopology.json_
- * _CreateADMServiceTopology.Parameters.json_
- * _CreateADMRollout.json_
- * _CreateADMRollout.Parameters.json_
-* _ArtifactStore_: contains both the template artifacts and the binary artifacts. See [Prepare the artifacts](#prepare-the-artifacts).
-
-There are two sets of templates. One set is the Deployment Manager templates that are used to deploy the service topology and the rollout. The other set is called from the service units to create web services and storage accounts.
-
-## Prepare the artifacts
-
-The ArtifactStore folder from the download contains two folders:
-
-![Azure Deployment Manager tutorial artifact source diagram](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-artifact-source-diagram.png)
-
-* The _templates_ folder: contains the template artifacts. The folders _1.0.0.0_ and _1.0.0.1_ represent the two versions of the template artifacts. Within each version, there is a folder for each service: _ServiceEUS_ (Service East US) and _ServiceWUS_ (Service West US). Each service has a pair of template and parameter files for creating a storage account, and another pair for creating a web application. The web application template calls a compressed package, which contains the web application files. The compressed file is a binary artifact stored in the _binaries_ folder.
-* The _binaries_ folder: contains the binary artifacts. The folders _1.0.0.0_ and _1.0.0.1_ represent the two versions of the binary artifacts. Within each version, there is one zip file to create the web application in the West US location, and another zip file to create the web application in the East US location.
-
-The two versions (1.0.0.0 and 1.0.0.1) are for the [revision deployment](#deploy-the-revision). Even though both the template artifacts and the binary artifacts have two versions, only the binary artifacts differ between the two versions. In practice, binary artifacts are updated more frequently than template artifacts.
-
-1. Open _\ArtifactStore\templates\1.0.0.0\ServiceWUS\CreateStorageAccount.json_ in a text editor. It's a basic template to create a storage account.
-1. Open _\ArtifactStore\templates\1.0.0.0\ServiceWUS\CreateWebApplication.json_.
-
- ![Azure Deployment Manager tutorial create web application template](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-create-web-application-packageuri.png)
-
-   The template calls a deploy package, which contains the files of the web application. In this tutorial, the compressed package contains only an _index.html_ file.
-1. Open _\ArtifactStore\templates\1.0.0.0\ServiceWUS\CreateWebApplicationParameters.json_.
-
- ![Azure Deployment Manager tutorial create web application template parameters containerRoot](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-create-web-application-parameters-deploypackageuri.png)
-
- The value of `deployPackageUri` is the path to the deployment package. The parameter contains a `$containerRoot` variable. The value of `$containerRoot` is provided in the [rollout template](#create-the-rollout-template) by concatenating the artifact source SAS location, artifact root, and `deployPackageUri`.
-1. Open _\ArtifactStore\binaries\1.0.0.0\helloWorldWebAppWUS.zip\index.html_.
-
- ```html
- <html>
- <head>
- <title>Azure Deployment Manager tutorial</title>
- </head>
- <body>
- <p>Hello world from west U.S.!</p>
- <p>Version 1.0.0.0</p>
- </body>
- </html>
- ```
-
- The HTML shows the location and the version information. The binary file in the _1.0.0.1_ folder shows _Version 1.0.0.1_. After you deploy the service, you can browse to these pages.
-1. Check out other artifact files. It helps you to understand the scenario better.
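-
-The `deployPackageUri` parameter described in step 3 can be sketched as follows. The zip file name matches this tutorial's artifacts, but the surrounding JSON wrapper is an assumption about the parameters file layout, so compare it with the downloaded _CreateWebApplicationParameters.json_:
-
-```json
-{
-  "deployPackageUri": {
-    "value": "$containerRoot/helloWorldWebAppWUS.zip"
-  }
-}
-```
-
-At rollout time, Deployment Manager replaces `$containerRoot` with the artifact source SAS location plus the binary artifact root, producing a complete, downloadable URL to the package.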
-
-Template artifacts are used by the service topology template, and binary artifacts are used by the rollout template. Both the topology template and the rollout template define an artifact source Azure resource, which is a resource used to point Resource Manager to the template and binary artifacts that are used in the deployment. To simplify the tutorial, one storage account is used to store both the template artifacts and the binary artifacts. Both artifact sources point to the same storage account.
-
-Run the following PowerShell script to create a resource group, a storage account, and a blob container, upload the downloaded files, and then create a SAS token.
-
-> [!IMPORTANT]
-> `projectName` in the PowerShell script is used to generate names for the Azure services that are deployed in this tutorial. Different Azure services have different requirements on the names. To ensure the deployment is successful, choose a name of fewer than 12 characters that uses only lowercase letters and numbers.
-> Save a copy of the project name. You use the same `projectName` throughout the tutorial.
-
-```azurepowershell
-$projectName = Read-Host -Prompt "Enter a project name that is used to generate Azure resource names"
-$location = Read-Host -Prompt "Enter the location (for example, centralus)"
-$filePath = Read-Host -Prompt "Enter the folder that contains the downloaded files"
-
-$resourceGroupName = "${projectName}rg"
-$storageAccountName = "${projectName}store"
-$containerName = "admfiles"
-$filePathArtifacts = "${filePath}\ArtifactStore"
-
-New-AzResourceGroup -Name $resourceGroupName -Location $location
-
-$storageAccount = New-AzStorageAccount -ResourceGroupName $resourceGroupName `
- -Name $storageAccountName `
- -Location $location `
- -SkuName Standard_RAGRS `
- -Kind StorageV2
-
-$storageContext = $storageAccount.Context
-
-$storageContainer = New-AzStorageContainer -Name $containerName -Context $storageContext -Permission Off
-
-$filesToUpload = Get-ChildItem $filePathArtifacts -Recurse -File
-
-foreach ($x in $filesToUpload) {
- $targetPath = ($x.fullname.Substring($filePathArtifacts.Length + 1)).Replace("\", "/")
-
- Write-Verbose "Uploading $("\" + $x.fullname.Substring($filePathArtifacts.Length + 1)) to $($storageContainer.CloudBlobContainer.Uri.AbsoluteUri + "/" + $targetPath)"
- Set-AzStorageBlobContent -File $x.fullname -Container $storageContainer.Name -Blob $targetPath -Context $storageContext | Out-Null
-}
-
-$token = New-AzStorageContainerSASToken -name $containerName -Context $storageContext -Permission rl -ExpiryTime (Get-date).AddMonths(1)
-
-$url = $storageAccount.PrimaryEndpoints.Blob + $containerName + $token
-
-Write-Host $url
-```
-
-Make a copy of the URL with the SAS token. This URL is needed to populate a field in the two parameter files: topology parameters file and rollout parameters file.
-
-Open the container from the Azure portal and verify that both the _binaries_ and the _templates_ folders, and the files are uploaded.
-
-## Create the user-assigned managed identity
-
-Later in the tutorial, you deploy a rollout. A user-assigned managed identity is needed to perform the deployment actions (for example, deploy the web applications and the storage account). This identity must be granted access to the Azure subscription you're deploying the service to, and have sufficient permission to complete the artifact deployment.
-
-You need to create a user-assigned managed identity and configure the access control for your subscription.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Create a [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).
-1. From the portal, select **Subscriptions** from the left menu, and then select your subscription.
-1. Select **Access control (IAM)**, and then select **Add role assignment**.
-1. Enter or select the following values:
-
- ![Azure Deployment Manager tutorial user-assigned managed identity access control](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-access-control.png)
-
- * **Role**: give sufficient permission to complete the artifact deployment (the web applications and the storage accounts). Select **Contributor** in this tutorial. In practice, you want to restrict the permissions to the minimum.
- * **Assign access to**: select **User Assigned Managed Identity**.
- * Select the user-assigned managed identity you created earlier in the tutorial.
-1. Select **Save**.
-
-## Create the service topology template
-
-Open _\ADMTemplates\CreateADMServiceTopology.json_.
-
-### The parameters
-
-The template contains the following parameters:
-
-* `projectName`: This name is used to create the names for the Deployment Manager resources. For example, using **demo**, the service topology name is **demo**ServiceTopology. The resource names are defined in the template's `variables` section.
-* `azureResourcelocation`: To simplify the tutorial, all resources share this location unless it's specified otherwise.
-* `artifactSourceSASLocation`: The SAS URI to the Blob container where service unit template and parameters files are stored for deployment. See [Prepare the artifacts](#prepare-the-artifacts).
-* `templateArtifactRoot`: The offset path from the Blob container where the templates and parameters are stored. The default value is _templates/1.0.0.0_. Don't change this value unless you want to change the folder structure explained in [Prepare the artifacts](#prepare-the-artifacts). Relative paths are used in this tutorial. The full path is constructed by concatenating `artifactSourceSASLocation`, `templateArtifactRoot`, and `templateArtifactSourceRelativePath` (or `parametersArtifactSourceRelativePath`).
-* `targetSubscriptionID`: The subscription ID to which the Deployment Manager resources are going to be deployed and billed. Use your subscription ID in this tutorial.
-
-### The variables
-
-The `variables` section defines the resource names, the Azure locations for the two services, and the artifact paths:
-
-![Azure Deployment Manager tutorial topology template variables](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-topology-template-variables.png)
-
-Compare the artifact paths with the folder structure that you uploaded to the storage account. Notice the artifact paths are relative paths. The full path is constructed by concatenating `artifactSourceSASLocation`, `templateArtifactRoot`, and `templateArtifactSourceRelativePath` (or `parametersArtifactSourceRelativePath`).
-
-### The resources
-
-On the root level, two resources are defined: *an artifact source*, and *a service topology*.
-
-The artifact source definition is:
-
-![Azure Deployment Manager tutorial topology template resources artifact source](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-topology-template-resources-artifact-source.png)
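-
-Because the definition appears only as a screenshot, the following sketch shows the general shape of an artifact source resource. The `apiVersion`, the variable name, and the `Sas` authentication shape are assumptions based on the Deployment Manager public preview, so verify them against the downloaded template:
-
-```json
-{
-  "type": "Microsoft.DeploymentManager/artifactSources",
-  "apiVersion": "2018-09-01-preview",
-  "name": "[variables('artifactSourceName')]",
-  "location": "[parameters('azureResourcelocation')]",
-  "properties": {
-    "sourceType": "AzureStorage",
-    "artifactRoot": "[parameters('templateArtifactRoot')]",
-    "authentication": {
-      "type": "Sas",
-      "properties": {
-        "sasUri": "[parameters('artifactSourceSASLocation')]"
-      }
-    }
-  }
-}
-```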
-
-The following screenshot only shows some parts of the service topology, services, and service units definitions:
-
-![Azure Deployment Manager tutorial topology template resources service topology](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-topology-template-resources-service-topology.png)
-
-* `artifactSourceId`: Used to associate the artifact source resource to the service topology resource.
-* `dependsOn`: All the service topology resources depend on the artifact source resource.
-* `artifacts`: Point to the template artifacts. Relative paths are used here. The full path is constructed by concatenating `artifactSourceSASLocation` (defined in the artifact source), `artifactRoot` (defined in the artifact source), and `templateArtifactSourceRelativePath` (or `parametersArtifactSourceRelativePath`).
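-
-Put together, a service unit's `artifacts` property might look like the following sketch. The relative file paths are assumptions patterned after the folder structure described in [Prepare the artifacts](#prepare-the-artifacts):
-
-```json
-"artifacts": {
-  "templateArtifactSourceRelativePath": "ServiceWUS/CreateStorageAccount.json",
-  "parametersArtifactSourceRelativePath": "ServiceWUS/CreateStorageAccountParameters.json"
-}
-```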
-
-### Topology parameters file
-
-You create a parameters file used with the topology template.
-
-1. Open _\ADMTemplates\CreateADMServiceTopology.Parameters.json_ in Visual Studio Code or any text editor.
-1. Enter the parameter values:
-
- * `projectName`: Enter a string with 4-5 characters. This name is used to create unique Azure resource names.
- * `azureResourceLocation`: If you're not familiar with Azure locations, use **centralus** in this tutorial.
- * `artifactSourceSASLocation`: Enter the SAS URI to the root directory (the Blob container) where service unit template and parameters files are stored for deployment. See [Prepare the artifacts](#prepare-the-artifacts).
- * `templateArtifactRoot`: Unless you change the folder structure of the artifacts, use _templates/1.0.0.0_ in this tutorial.
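-
-Filled in, the topology parameters file might look like the following sketch. Every value is a placeholder (a hypothetical project name and a truncated SAS URL), not output from a real deployment:
-
-```json
-{
-  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
-  "contentVersion": "1.0.0.0",
-  "parameters": {
-    "projectName": { "value": "demo" },
-    "azureResourceLocation": { "value": "centralus" },
-    "artifactSourceSASLocation": { "value": "https://demostore.blob.core.windows.net/admfiles?sv=<SAS token>" },
-    "templateArtifactRoot": { "value": "templates/1.0.0.0" }
-  }
-}
-```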
-
-> [!IMPORTANT]
-> The topology template and the rollout template share some common parameters. These parameters must have the same values. These parameters are: `projectName`, `azureResourceLocation`, and `artifactSourceSASLocation` (both artifact sources share the same storage account in this tutorial).
-
-## Create the rollout template
-
-Open _\ADMTemplates\CreateADMRollout.json_.
-
-### The parameters
-
-The template contains the following parameters:
-
-![Azure Deployment Manager tutorial rollout template parameters](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-rollout-template-parameters.png)
-
-* `projectName`: This name is used to create the names for the Deployment Manager resources. For example, using **demo**, the rollout name is **demo**Rollout. The names are defined in the template's `variables` section.
-* `azureResourcelocation`: To simplify the tutorial, all Deployment Manager resources share this location unless it's specified otherwise.
-* `artifactSourceSASLocation`: The SAS URI to the root directory (the Blob container) where service unit template and parameters files are stored for deployment. See [Prepare the artifacts](#prepare-the-artifacts).
-* `binaryArtifactRoot`: The default value is _binaries/1.0.0.0_. Don't change this value unless you want to change the folder structure explained in [Prepare the artifacts](#prepare-the-artifacts). Relative paths are used in this tutorial. The full path is constructed by concatenating `artifactSourceSASLocation`, `binaryArtifactRoot`, and the `deployPackageUri` specified in _CreateWebApplicationParameters.json_. See [Prepare the artifacts](#prepare-the-artifacts).
-* `managedIdentityID`: The user-assigned managed identity that performs the deployment actions. See [Create the user-assigned managed identity](#create-the-user-assigned-managed-identity).
-
-### The variables
-
-The `variables` section defines the names of the resources. Make sure the service topology name, the service names, and the service unit names match the names defined in the [topology template](#create-the-service-topology-template).
-
-![Azure Deployment Manager tutorial rollout template variables](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-rollout-template-variables.png)
-
-### The resources
-
-On the root level, there are three resources defined: an artifact source, a step, and a rollout.
-
-The artifact source definition is identical to the one defined in the topology template. See [Create the service topology template](#create-the-service-topology-template) for more information.
-
-The following screenshot shows the `wait` step definition:
-
-![Azure Deployment Manager tutorial rollout template resources wait step](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-rollout-template-resources-wait-step.png)
-
-The duration uses the [ISO 8601 standard](https://en.wikipedia.org/wiki/ISO_8601#Durations). **PT1M** (capital letters are required) is an example of a 1-minute wait.
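-
-For reference, a 1-minute wait step might be declared as in the following sketch; the `apiVersion` is an assumption from the public preview, and the step name must match whatever the rollout's `postDeploymentSteps` reference:
-
-```json
-{
-  "type": "Microsoft.DeploymentManager/steps",
-  "apiVersion": "2018-09-01-preview",
-  "name": "waitStep",
-  "location": "[parameters('azureResourcelocation')]",
-  "properties": {
-    "stepType": "Wait",
-    "attributes": {
-      "duration": "PT1M"
-    }
-  }
-}
-```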
-
-The following screenshot only shows some parts of the rollout definition:
-
-![Azure Deployment Manager tutorial rollout template resources rollout](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-rollout-template-resources-rollout.png)
-
-* `dependsOn`: The rollout resource depends on the artifact source resource, and any of the steps defined.
-* `artifactSourceId`: Used to associate the artifact source resource to the rollout resource.
-* `targetServiceTopologyId`: Used to associate the service topology resource to the rollout resource.
-* `deploymentTargetId`: The resource ID of the service unit within the service topology.
-* `preDeploymentSteps` and `postDeploymentSteps`: Contains the rollout steps. In the template, a `wait` step is called.
-* `dependsOnStepGroups`: Configure the dependencies between the step groups.
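-
-Combining these properties, a single step group inside the rollout might be sketched like this. The nested `resourceId` arguments mirror the variables used elsewhere in this tutorial, and the step name `waitStep` is an assumption:
-
-```json
-{
-  "name": "stepGroup1",
-  "preDeploymentSteps": [],
-  "deploymentTargetId": "[resourceId('Microsoft.DeploymentManager/serviceTopologies/services/serviceUnits', variables('serviceTopology').name, variables('serviceTopology').serviceWUS.name, variables('serviceTopology').serviceWUS.serviceUnit1.name)]",
-  "postDeploymentSteps": [
-    {
-      "stepId": "[resourceId('Microsoft.DeploymentManager/steps', 'waitStep')]"
-    }
-  ]
-}
-```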
-
-### Rollout parameters file
-
-You create a parameters file used with the rollout template.
-
-1. Open _\ADMTemplates\CreateADMRollout.Parameters.json_ in Visual Studio Code or any text editor.
-1. Enter the parameter values:
-
- * `projectName`: Enter a string with 4-5 characters. This name is used to create unique Azure resource names.
- * `azureResourceLocation`: Specify an Azure location.
- * `artifactSourceSASLocation`: Enter the SAS URI to the root directory (the Blob container) where service unit template and parameters files are stored for deployment. See [Prepare the artifacts](#prepare-the-artifacts).
- * `binaryArtifactRoot`: Unless you change the folder structure of the artifacts, use _binaries/1.0.0.0_ in this tutorial.
- * `managedIdentityID`: Enter the user-assigned managed identity. See [Create the user-assigned managed identity](#create-the-user-assigned-managed-identity). The syntax is:
-
- ```json
- "/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/Microsoft.ManagedIdentity/userassignedidentities/<ManagedIdentityName>"
- ```
-
-> [!IMPORTANT]
-> The topology template and the rollout template share some common parameters. These parameters must have the same values. These parameters are: `projectName`, `azureResourceLocation`, and `artifactSourceSASLocation` (both artifact sources share the same storage account in this tutorial).
-
-## Deploy the templates
-
-Azure PowerShell can be used to deploy the templates.
-
-1. Run the script to deploy the service topology.
-
- ```azurepowershell
- # Create the service topology
- New-AzResourceGroupDeployment `
- -ResourceGroupName $resourceGroupName `
- -TemplateFile "$filePath\ADMTemplates\CreateADMServiceTopology.json" `
- -TemplateParameterFile "$filePath\ADMTemplates\CreateADMServiceTopology.Parameters.json"
- ```
-
- If you run this script from a different PowerShell session from the one you ran the [Prepare the artifacts](#prepare-the-artifacts) script, you need to repopulate the variables first, which include `$resourceGroupName` and `$filePath`.
-
- > [!NOTE]
- > `New-AzResourceGroupDeployment` is an asynchronous call. The success message only means the deployment has successfully begun. To verify the deployment, see Step 2 and Step 4 of this procedure.
-
-1. Using the Azure portal, verify that the service topology and the underlying resources have been created successfully:
-
- ![Azure Deployment Manager tutorial deployed service topology resources](./media/deployment-manager-tutorial/azure-deployment-manager-tutorial-deployed-topology-resources.png)
-
- **Show hidden types** must be selected to see the resources.
-
-1. <a id="deploy-the-rollout-template"></a>Deploy the rollout template:
-
- ```azurepowershell
- # Create the rollout
- New-AzResourceGroupDeployment `
- -ResourceGroupName $resourceGroupName `
- -TemplateFile "$filePath\ADMTemplates\CreateADMRollout.json" `
- -TemplateParameterFile "$filePath\ADMTemplates\CreateADMRollout.Parameters.json"
- ```
-
-1. Check the rollout progress using the following PowerShell script:
-
- ```azurepowershell
- # Get the rollout status
- $rolloutname = "${projectName}Rollout" # "adm0925Rollout" is the rollout name used in this tutorial
- Get-AzDeploymentManagerRollout `
- -ResourceGroupName $resourceGroupName `
- -Name $rolloutName `
- -Verbose
- ```
-
- The Deployment Manager PowerShell cmdlets must be installed before you can run this cmdlet. See [Prerequisites](#prerequisites). The `-Verbose` parameter can be used to see the whole output.
-
- The following sample shows the running status:
-
- ```Output
- VERBOSE:
-
- Status: Succeeded
- ArtifactSourceId: /subscriptions/<AzureSubscriptionID>/resourceGroups/adm0925rg/providers/Microsoft.DeploymentManager/artifactSources/adm0925ArtifactSourceRollout
- BuildVersion: 1.0.0.0
-
- Operation Info:
- Retry Attempt: 0
- Skip Succeeded: False
- Start Time: 03/05/2019 15:26:13
- End Time: 03/05/2019 15:31:26
- Total Duration: 00:05:12
-
- Service: adm0925ServiceEUS
- TargetLocation: EastUS
- TargetSubscriptionId: <AzureSubscriptionID>
-
- ServiceUnit: adm0925ServiceEUSStorage
- TargetResourceGroup: adm0925ServiceEUSrg
-
- Step: Deploy
- Status: Succeeded
- StepGroup: stepGroup3
- Operation Info:
- DeploymentName: 2F535084871E43E7A7A4CE7B45BE06510adm0925ServiceEUSStorage
- CorrelationId: 0b6f030d-7348-48ae-a578-bcd6bcafe78d
- Start Time: 03/05/2019 15:26:32
- End Time: 03/05/2019 15:27:41
- Total Duration: 00:01:08
- Resource Operations:
-
- Resource Operation 1:
- Name: txq6iwnyq5xle
- Type: Microsoft.Storage/storageAccounts
- ProvisioningState: Succeeded
- StatusCode: OK
- OperationId: 64A6E6EFEF1F7755
-
- ...
-
- ResourceGroupName : adm0925rg
- BuildVersion : 1.0.0.0
- ArtifactSourceId : /subscriptions/<SubscriptionID>/resourceGroups/adm0925rg/providers/Microsoft.DeploymentManager/artifactSources/adm0925ArtifactSourceRollout
- TargetServiceTopologyId : /subscriptions/<SubscriptionID>/resourceGroups/adm0925rg/providers/Microsoft.DeploymentManager/serviceTopologies/adm0925ServiceTopology
- Status : Running
- TotalRetryAttempts : 0
- OperationInfo : Microsoft.Azure.Commands.DeploymentManager.Models.PSRolloutOperationInfo
- Services : {adm0925ServiceEUS, adm0925ServiceWUS}
- Name : adm0925Rollout
- Type : Microsoft.DeploymentManager/rollouts
- Location : centralus
- Id : /subscriptions/<SubscriptionID>/resourcegroups/adm0925rg/providers/Microsoft.DeploymentManager/rollouts/adm0925Rollout
- Tags :
- ```
-
- After the rollout is deployed successfully, you'll see two more resource groups created, one for each service.
-
-## Verify the deployment
-
-1. Open the [Azure portal](https://portal.azure.com).
-1. Browse to the newly created web applications under the new resource groups created by the rollout deployment.
-1. Open the web application in a web browser. Verify the location and the version in the _index.html_ file.
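As a quick check, you can also fetch the deployed page from PowerShell; the URL below is a placeholder for your web app's address, not a value from this tutorial:

```azurepowershell
# Placeholder URL; substitute the address of the web app created by the rollout
$appUrl = "https://<your-web-app>.azurewebsites.net/index.html"
(Invoke-WebRequest -Uri $appUrl).Content
```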
-
-## Deploy the revision
-
-When you have a new version (1.0.0.1) of the web application, you can use the following procedure to redeploy it.
-
-1. Open _CreateADMRollout.Parameters.json_.
-1. Update `binaryArtifactRoot` to _binaries/1.0.0.1_.
-1. Redeploy the rollout as instructed in [Deploy the templates](#deploy-the-rollout-template).
-1. Verify the deployment as instructed in [Verify the deployment](#verify-the-deployment). The web page should show version 1.0.0.1.
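The parameter change in step 2 amounts to a one-value edit. Assuming the parameter file follows the standard ARM template parameter format, the relevant fragment would look like:

```json
{
  "binaryArtifactRoot": {
    "value": "binaries/1.0.0.1"
  }
}
```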
-
-## Clean up resources
-
-When the Azure resources are no longer needed, clean up the resources you deployed by deleting the resource group.
-
-1. From the Azure portal, select **Resource groups** from the left menu.
-1. Use the **Filter by name** field to narrow down the resource groups created in this tutorial.
-
- * **&lt;projectName>rg**: contains the Deployment Manager resources.
- * **&lt;projectName>ServiceWUSrg**: contains the resources defined by ServiceWUS.
- * **&lt;projectName>ServiceEUSrg**: contains the resources defined by ServiceEUS.
- * The resource group for the user-defined managed identity.
-1. Select the resource group name.
-1. Select **Delete resource group** from the top menu.
-1. Repeat the last two steps to delete other resource groups created by this tutorial.
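If you prefer to script the cleanup, a PowerShell sketch is shown below; it assumes the project name used earlier in this tutorial, so adjust `$projectName` and the group names to match your deployment:

```azurepowershell
# Assumes the resource group names created by this tutorial; adjust $projectName as needed
$projectName = "adm0925"
Remove-AzResourceGroup -Name "${projectName}rg" -Force
Remove-AzResourceGroup -Name "${projectName}ServiceWUSrg" -Force
Remove-AzResourceGroup -Name "${projectName}ServiceEUSrg" -Force
```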
-
-## Next steps
-
-In this tutorial, you learned how to use Azure Deployment Manager. To integrate health monitoring in Azure Deployment Manager, see [Tutorial: Use health check in Azure Deployment Manager](./deployment-manager-tutorial-health-check.md).
azure-resource-manager Quickstart Create Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/quickstart-create-template-specs.md
Title: Create and deploy template spec description: Learn how to create a template spec from ARM template. Then, deploy the template spec to a resource group in your subscription. Previously updated : 12/14/2020 Last updated : 05/04/2021
-# Quickstart: Create and deploy template spec (Preview)
+# Quickstart: Create and deploy template spec
This quickstart shows you how to package an Azure Resource Manager template (ARM template) into a [template spec](template-specs.md). Then, you deploy that template spec. Your template spec contains an ARM template that deploys a storage account.
This quickstart shows you how to package an Azure Resource Manager template (ARM
An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). > [!NOTE]
-> Template Specs is currently in preview. To use it with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
+> To use template spec with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
## Create template

You create a template spec from a local template. Copy the following template and save it locally to a file named **azuredeploy.json**. This quickstart assumes you've saved to a path **c:\Templates\azuredeploy.json**, but you can use any path.

## Create template spec
The template spec is a resource type named `Microsoft.Resources/templateSpecs`.
"resources": [ { "type": "Microsoft.Resources/templateSpecs",
- "apiVersion": "2019-06-01-preview",
+ "apiVersion": "2021-05-01",
"name": "storageSpec", "location": "westus2", "properties": {
The template spec is a resource type named `Microsoft.Resources/templateSpecs`.
"resources": [ { "type": "versions",
- "apiVersion": "2019-06-01-preview",
+ "apiVersion": "2021-05-01",
"name": "1.0", "location": "westus2", "dependsOn": [ "storageSpec" ], "properties": {
- "template": {
+ "mainTemplate": {
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": {
Rather than creating a new template spec for the revised template, add a new ver
"resources": [ { "type": "Microsoft.Resources/templateSpecs",
- "apiVersion": "2019-06-01-preview",
+ "apiVersion": "2021-05-01",
"name": "storageSpec", "location": "westus2", "properties": {
Rather than creating a new template spec for the revised template, add a new ver
"resources": [ { "type": "versions",
- "apiVersion": "2019-06-01-preview",
+ "apiVersion": "2021-05-01",
"name": "2.0", "location": "westus2", "dependsOn": [ "storageSpec" ], "properties": {
- "template": {
+ "mainTemplate": {
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": {
azure-resource-manager Template Specs Create Linked https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-specs-create-linked.md
Title: Create a template spec with linked templates description: Learn how to create a template spec with linked templates. Previously updated : 01/05/2021 Last updated : 05/04/2021
-# Tutorial: Create a template spec with linked templates (Preview)
+# Tutorial: Create a template spec with linked templates
Learn how to create a [template spec](template-specs.md) with a main template and a [linked template](linked-templates.md#linked-template). You use template specs to share ARM templates with other users in your organization. This article shows you how to create a template spec to package a main template and its linked templates using the `relativePath` property of the [deployment resource](/azure/templates/microsoft.resources/deployments).
Learn how to create a [template spec](template-specs.md) with a main template an
An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). > [!NOTE]
-> Template Specs is currently in preview. To use it with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
+> To use template specs with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
## Create linked templates
azure-resource-manager Template Specs Deploy Linked Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-specs-deploy-linked-template.md
Title: Deploy a template spec as a linked template description: Learn how to deploy an existing template spec in a linked deployment. Previously updated : 11/17/2020 Last updated : 05/04/2021
-# Tutorial: Deploy a template spec as a linked template (Preview)
+# Tutorial: Deploy a template spec as a linked template
Learn how to deploy an existing [template spec](template-specs.md) by using a [linked deployment](linked-templates.md#linked-template). You use template specs to share ARM templates with other users in your organization. After you have created a template spec, you can deploy the template spec by using Azure PowerShell or Azure CLI. You can also deploy the template spec as a part of your solution by using a linked template.
Learn how to deploy an existing [template spec](template-specs.md) by using a [l
An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). > [!NOTE]
-> Template Specs is currently in preview. To use it with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
+> To use template spec with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
## Create a template spec
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-specs.md
Title: Create & deploy template specs description: Describes how to create template specs and share them with other users in your organization. Previously updated : 03/26/2021 Last updated : 05/04/2021
-# Azure Resource Manager template specs (Preview)
+# Azure Resource Manager template specs
A template spec is a resource type for storing an Azure Resource Manager template (ARM template) in Azure for later deployment. This resource type enables you to share ARM templates with other users in your organization. Just like any other Azure resource, you can use Azure role-based access control (Azure RBAC) to share the template spec.
A template spec is a resource type for storing an Azure Resource Manager templat
To deploy the template spec, you use standard Azure tools like PowerShell, Azure CLI, Azure portal, REST, and other supported SDKs and clients. You use the same commands as you would for the template. > [!NOTE]
-> Template Specs is currently in preview. To use it with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
+> To use template spec with Azure PowerShell, you must install [version 5.0.0 or later](/powershell/azure/install-az-ps). To use it with Azure CLI, use [version 2.14.2 or later](/cli/azure/install-azure-cli).
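For reference, template specs can also be created and deployed directly from Azure PowerShell. The names below are illustrative placeholders, not values from this article:

```azurepowershell
# Package a local template as a template spec (example names)
New-AzTemplateSpec `
  -Name storageSpec `
  -Version "1.0" `
  -ResourceGroupName templateSpecRG `
  -Location westus2 `
  -TemplateFile "C:\Templates\azuredeploy.json"

# Deploy a specific version of the template spec by its resource ID
$id = (Get-AzTemplateSpec -ResourceGroupName templateSpecRG -Name storageSpec -Version "1.0").Versions.Id
New-AzResourceGroupDeployment -ResourceGroupName demoRG -TemplateSpecId $id
```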
## Why use template specs?
azure-resource-manager Template Tutorial Create Multiple Instances https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-create-multiple-instances.md
To complete this article, you need:
1. In **File name**, paste the following URL: ```url
- https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json
+ https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json
``` 1. Select **Open** to open the file.
azure-resource-manager Template Tutorial Deployment Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-deployment-script.md
The template used in this quickstart is called [Create an Azure Key Vault and a
2. In **File name**, paste the following URL: ```url
- https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-key-vault-create/azuredeploy.json
+ https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.keyvault/key-vault-create/azuredeploy.json
``` 3. Select **Open** to open the file.
azure-resource-manager Template Tutorial Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-troubleshoot.md
Open a template called [Create a standard storage account](https://azure.microso
2. In **File name**, paste the following URL: ```url
- https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json
+ https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json
``` 3. Select **Open** to open the file.
azure-resource-manager Template Tutorial Use Template Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-use-template-reference.md
To complete this article, you need:
1. In **File name**, paste the following URL: ```url
- https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json
+ https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json
``` 1. Select **Open** to open the file.
azure-resource-manager View Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/view-resources.md
The template reference is linked from each of the Azure service documentation si
Resource Explorer is embedded in the Azure portal. Before using this method, you need a storage account. If you don't have one, select the following button to create one:
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f101-storage-account-create%2fazuredeploy.json)
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.storage%2Fstorage-account-create%2Fazuredeploy.json)
1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search box, enter **resource explorer**, and then select **Resource Explorer**.
Resources.azure.com is a public website that can be accessed by anyone with an Azure
To demonstrate how to retrieve schema information by using this tool, you need a storage account. If you don't have one, select the following button to create one:
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f101-storage-account-create%2fazuredeploy.json)
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.storage%2Fstorage-account-create%2Fazuredeploy.json)
1. Browse to [resources.azure.com](https://resources.azure.com/). It takes a few moments for the tool to populate the left pane. 1. Select **subscriptions**.
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
You can use different migration methods to move your data between SQL Server, Az
| **Source** | **Azure SQL Database** | **Azure SQL Managed Instance** | | | | |
-| SQL Server (on-prem, AzureVM, Amazon RDS) | **Online:** [Data Migration Service (DMS)](/sql/dm) |
+| SQL Server (on-prem, AzureVM, Amazon RDS) | **Online:** [Transactional Replication](../managed-instance/replication-transactional-overview.md) <br/> **Offline:** [Data Migration Service (DMS)](/sql/dm) |
| Single database | **Offline:** [BACPAC file (import)](/sql/relational-databases/data-tier-applications/import-a-bacpac-file-to-create-a-new-user-database), BCP | **Offline:** [BACPAC file (import)](/sql/relational-databases/data-tier-applications/import-a-bacpac-file-to-create-a-new-user-database), BCP | | SQL Managed Instance | **Online:** [Transactional Replication](../managed-instance/replication-transactional-overview.md) <br/> **Offline:** [BACPAC file (import)](/sql/relational-databases/data-tier-applications/import-a-bacpac-file-to-create-a-new-user-database), BCP, [Snapshot replication](../managed-instance/replication-transactional-overview.md) | **Online:** [Transactional Replication](../managed-instance/replication-transactional-overview.md) <br/> **Offline:** Cross-instance point-in-time restore ([Azure PowerShell](/powershell/module/az.sql/restore-azsqlinstancedatabase#examples) or [Azure CLI](https://techcommunity.microsoft.com/t5/Azure-SQL-Database/Cross-instance-point-in-time-restore-in-Azure-SQL-Database/ba-p/386208)), [Native backup/restore](../managed-instance/restore-sample-database-quickstart.md), [BACPAC file (import)](/sql/relational-databases/data-tier-applications/import-a-bacpac-file-to-create-a-new-user-database), BCP, [Snapshot replication](../managed-instance/replication-transactional-overview.md) |
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window.md
Last updated 04/28/2021
# Maintenance window (Preview)

[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-The maintenance window feature allows you to configure maintenance schedule for [Azure SQL Database](sql-database-paas-overview.md) and [Azure SQL managed instance](../managed-instance/sql-managed-instance-paas-overview.md) resources making impactful maintenance events predictable and less disruptive for your workload.
+The maintenance window feature allows you to configure a maintenance schedule for [Azure SQL Database](sql-database-paas-overview.md) and [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md) resources, making impactful maintenance events predictable and less disruptive for your workload.
> [!Note]
> The maintenance window feature only protects from planned impact from upgrades or scheduled maintenance. It does not protect from all failover causes; exceptions that may cause short connection interruptions outside of a maintenance window include hardware failures, cluster load balancing, and database reconfigurations due to events like a change in database Service Level Objective.
To get the maximum benefit from maintenance windows, make sure your client appli
* In Azure SQL Database, any connections using the proxy connection policy could be affected by both the chosen maintenance window and a gateway node maintenance window. However, client connections using the recommended redirect connection policy are unaffected by a gateway node maintenance reconfiguration.
-* In Azure SQL managed instance, the gateway nodes are hosted [within the virtual cluster](../../azure-sql/managed-instance/connectivity-architecture-overview.md#virtual-cluster-connectivity-architecture) and have the same maintenance window as the managed instance, but using the redirect connection policy is still recommended to minimize number of disruptions during the maintenance event.
+* In Azure SQL Managed Instance, the gateway nodes are hosted [within the virtual cluster](../../azure-sql/managed-instance/connectivity-architecture-overview.md#virtual-cluster-connectivity-architecture) and have the same maintenance window as the managed instance, but using the redirect connection policy is still recommended to minimize the number of disruptions during the maintenance event.
For more on the client connection policy in Azure SQL Database, see [Azure SQL Database Connection policy](../database/connectivity-architecture.md#connection-policy).
-For more on the client connection policy in Azure SQL managed instance see [Azure SQL managed instance connection types](../../azure-sql/managed-instance/connection-types-overview.md).
+For more on the client connection policy in Azure SQL Managed Instance, see [Azure SQL Managed Instance connection types](../../azure-sql/managed-instance/connection-types-overview.md).
-## Considerations for Azure SQL managed instance
+## Considerations for Azure SQL Managed Instance
-Azure SQL managed instance consists of service components hosted on a dedicated set of isolated virtual machines that run inside the customer's virtual network subnet. These virtual machines form [virtual cluster(s)](../managed-instance/connectivity-architecture-overview.md#high-level-connectivity-architecture) that can host multiple managed instances. Maintenance window configured on instances of one subnet can influence the number of virtual clusters within the subnet and distribution of instances among virtual clusters. This may require a consideration of few effects.
+Azure SQL Managed Instance consists of service components hosted on a dedicated set of isolated virtual machines that run inside the customer's virtual network subnet. These virtual machines form [virtual cluster(s)](../managed-instance/connectivity-architecture-overview.md#high-level-connectivity-architecture) that can host multiple managed instances. A maintenance window configured on instances of one subnet can influence the number of virtual clusters within the subnet and the distribution of instances among virtual clusters. This may require considering a few side effects.
### Maintenance window configuration is a long-running operation

All instances hosted in a virtual cluster share the maintenance window. By default, all managed instances are hosted in the virtual cluster with the default maintenance window. Specifying another maintenance window for a managed instance during its creation or afterwards means that it must be placed in a virtual cluster with the corresponding maintenance window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the instance. Accommodating an additional instance in the existing virtual cluster may require a cluster resize. Both operations contribute to the duration of configuring the maintenance window for a managed instance.
Configuring and changing maintenance window causes change of the IP address of t
* [Maintenance window FAQ](maintenance-window-faq.yml)
* [Azure SQL Database](sql-database-paas-overview.md)
* [SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md)
-* [Plan for Azure maintenance events in Azure SQL Database and Azure SQL managed instance](planned-maintenance.md)
+* [Plan for Azure maintenance events in Azure SQL Database and Azure SQL Managed Instance](planned-maintenance.md)
azure-sql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-sql Resource Limits Logical Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-logical-server.md
Azure SQL Database resource governance is hierarchical in nature. From top to bo
Data IO governance is a process in Azure SQL Database used to limit both read and write physical IO against data files of a database. IOPS limits are set for each service level to minimize the "noisy neighbor" effect, to provide resource allocation fairness in the multi-tenant service, and to stay within the capabilities of the underlying hardware and storage.
-For single databases, workload group limits are applied to all storage IO against the database, while resource pool limits apply to all storage IO against all databases on the same dedicated SQL pool, including the `tempdb` database. For elastic pools, workload group limits apply to each database in the pool, whereas resource pool limit applies to the entire elastic pool, including the `tempdb` database, which is shared among all databases in the pool. In general, resource pool limits may not be achievable by the workload against a database (either single or pooled), because workload group limits are lower than resource pool limits and limit IOPS/throughput sooner. However, pool limits may be reached by the combined workload against multiple databases on the same pool.
+For single databases, workload group limits are applied to all storage IO against the database, while resource pool limits apply to all storage IO against all databases in the same resource pool, including the `tempdb` database. For elastic pools, workload group limits apply to each database in the pool, whereas resource pool limit applies to the entire elastic pool, including the `tempdb` database, which is shared among all databases in the pool. In general, resource pool limits may not be achievable by the workload against a database (either single or pooled), because workload group limits are lower than resource pool limits and limit IOPS/throughput sooner. However, pool limits may be reached by the combined workload against multiple databases on the same pool.
For example, if a query generates 1000 IOPS without any IO resource governance, but the workload group maximum IOPS limit is set to 900 IOPS, the query won't be able to generate more than 900 IOPS. However, if the resource pool maximum IOPS limit is set to 1500 IOPS, and the total IO from all workload groups associated with the resource pool exceeds 1500 IOPS, then the IO of the same query may be reduced below the workgroup limit of 900 IOPS.
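To see whether a database is actually hitting these limits, you can sample the resource statistics DMV available in Azure SQL Database:

```sql
-- avg_data_io_percent near 100 indicates the data IO limit is being reached
SELECT end_time, avg_data_io_percent, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```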
Because data is physically copied to a different machine, moving larger database
- For information about general Azure limits, see [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
- For information about DTUs and eDTUs, see [DTUs and eDTUs](purchasing-models.md#dtu-based-purchasing-model).
-- For information about tempdb size limits, see [TempDB in Azure SQL Database](/sql/relational-databases/databases/tempdb-database#tempdb-database-in-sql-database).
+- For information about tempdb size limits, see [TempDB in Azure SQL Database](/sql/relational-databases/databases/tempdb-database#tempdb-database-in-sql-database).
azure-sql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SQL Database description: Lists Azure Policy Regulatory Compliance controls available for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
azure-vmware Configure Github Enterprise Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-github-enterprise-server.md
+
+ Title: Configure GitHub Enterprise Server on Azure VMware Solution
+description: Learn how to set up GitHub Enterprise Server on your Azure VMware Solution private cloud.
+ Last updated : 02/11/2021
+# Configure GitHub Enterprise Server on Azure VMware Solution
+
+In this article, we walk through the steps to set up GitHub Enterprise Server, the "on-premises" version of [GitHub.com](https://github.com/), on your Azure VMware Solution private cloud. The scenario we'll cover is a GitHub Enterprise Server instance that can serve up to 3,000 developers running up to 25 jobs per minute on GitHub Actions. It includes the setup of (at time of writing) *preview* features, such as GitHub Actions. To customize the setup for your particular needs, review the requirements listed in [Installing GitHub Enterprise Server on VMware](https://docs.github.com/en/enterprise/admin/installation/installing-github-enterprise-server-on-vmware#hardware-considerations).
+
+## Before you begin
+
+GitHub Enterprise Server requires a valid license key. You may sign up for a [trial license](https://enterprise.github.com/trial). If you're looking to extend the capabilities of GitHub Enterprise Server via an integration, you may qualify for a free five-seat developer license. Apply for this license through [GitHub's Partner Program](https://partner.github.com/).
+
+## Install GitHub Enterprise Server on VMware
+
+Download [the current release of GitHub Enterprise Server](https://enterprise.github.com/releases/2.19.0/download) for VMware ESXi/vSphere (OVA) and [deploy the OVA template](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-17BEDA21-43F6-41F4-8FB2-E01D275FE9B4.html) you downloaded.
+
+Provide a recognizable name for your new virtual machine, such as GitHubEnterpriseServer. You don't need to include the release details in the VM name, as these details become stale when the instance is upgraded. Select all the defaults for now (we'll edit these details shortly) and wait for the OVA to be imported.
+
+Once imported, [adjust the hardware configuration](https://docs.github.com/en/enterprise/admin/installation/installing-github-enterprise-server-on-vmware#creating-the-github-enterprise-server-instance) based on your needs. In our example scenario, we'll need the following configuration.
+
+| Resource | Standard Setup | Standard Setup + "Beta Features" (Actions) |
+| | | |
+| vCPUs | 4 | 8 |
+| Memory | 32 GB | 61 GB |
+| Attached storage | 250 GB | 300 GB |
+| Root storage | 200 GB | 200 GB |
+
+However, your needs may vary. Refer to the guidance on hardware considerations in [Installing GitHub Enterprise Server on VMware](https://docs.github.com/en/enterprise/admin/installation/installing-github-enterprise-server-on-vmware#hardware-considerations). Also see [Adding CPU or memory resources for VMware](https://docs.github.com/en/enterprise/admin/enterprise-management/increasing-cpu-or-memory-resources#adding-cpu-or-memory-resources-for-vmware) to customize the hardware configuration based on your situation.
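If you manage the VM with VMware PowerCLI, the resize in the table above can be scripted. This is a sketch, assuming the VMware.PowerCLI module is installed, a vCenter connection is already established, and the VM name matches the one chosen earlier:

```powershell
# PowerCLI sketch (assumes Connect-VIServer has already been run)
# Power off the VM first if required by your environment before resizing
Set-VM -VM "GitHubEnterpriseServer" -NumCpu 8 -MemoryGB 61 -Confirm:$false
```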
+
+## Configure the GitHub Enterprise Server instance
+
+After the newly provisioned virtual machine (VM) has powered on, [configure it via your browser](https://docs.github.com/en/enterprise/admin/installation/installing-github-enterprise-server-on-vmware#configuring-the-github-enterprise-server-instance). You'll be required to upload your license file and set a management console password. Be sure to write down this password somewhere safe.
++
+We recommend that you take at least the following steps:
+
+1. Upload a public SSH key to the management console, so that you can [access the administrative shell via SSH](https://docs.github.com/en/enterprise/admin/configuration/accessing-the-administrative-shell-ssh).
+
+2. [Configure TLS on your instance](https://docs.github.com/en/enterprise/admin/configuration/configuring-tls) so that you can use a certificate signed by a trusted certificate authority.
++
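
As a sketch of those two steps, you might generate the SSH key pair and a TLS certificate signing request like this (the hostname `ghe.example.com` is a placeholder for your instance's hostname):

```shell
# 1. Generate an SSH key pair; upload the .pub file to the management console.
ssh-keygen -t ed25519 -f ghe-admin-key -N "" -C "ghe-admin"

# 2. Generate a private key and a CSR to have signed by a trusted
#    certificate authority for TLS.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout ghe.example.com.key \
  -out ghe.example.com.csr \
  -subj "/CN=ghe.example.com"

# With the public key uploaded, the administrative shell is reachable
# over SSH on port 122, for example:
#   ssh -p 122 -i ghe-admin-key admin@ghe.example.com
```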
+Apply your settings. While the instance restarts, you can continue with the next step, **Configuring Blob Storage for GitHub Actions**.
++
+After the instance restarts, you can create a new admin account on the instance. Be sure to make a note of this user's password as well.
+
+### Other configuration steps
+
+To harden your instance for production use, the following optional setup steps are recommended:
+
+1. Configure [high availability](https://help.github.com/enterprise/admin/guides/installation/configuring-github-enterprise-for-high-availability/) for protection against:
+
+ - Software crashes (OS or application level)
+ - Hardware failures (storage, CPU, RAM, and so on)
+ - Virtualization host system failures
+ - Logically or physically severed network
+
+2. [Configure](https://docs.github.com/en/enterprise/admin/configuration/configuring-backups-on-your-appliance) [backup-utilities](https://github.com/github/backup-utils), which provide versioned snapshots for disaster recovery, hosted in storage separate from the primary instance.
+3. [Set up subdomain isolation](https://docs.github.com/en/enterprise/admin/configuration/enabling-subdomain-isolation), using a valid TLS certificate, to mitigate cross-site scripting and other related vulnerabilities.
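
The backup-utilities setup in step 2 might be sketched as follows (run on a separate backup host with SSH access to the instance; the configuration values are placeholders you must set for your environment):

```shell
# Clone the backup-utilities project and create a configuration file.
git clone https://github.com/github/backup-utils.git
cd backup-utils
cp backup.config-example backup.config

# Edit backup.config: set GHE_HOSTNAME to your instance's hostname and
# GHE_DATA_DIR to the snapshot directory. Then take a snapshot:
./bin/ghe-backup
```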
+
+## Configure blob storage for GitHub Actions
+
+> [!NOTE]
+> GitHub Actions is [currently available as a limited beta on GitHub Enterprise Server release 2.22](https://docs.github.com/en/enterprise/admin/github-actions).
+
+External blob storage is necessary to enable GitHub Actions on GitHub Enterprise Server (currently available as a "beta" feature). This external blob storage is used by Actions to store artifacts and logs. Actions on GitHub Enterprise Server [supports Azure Blob Storage as a storage provider](https://docs.github.com/en/enterprise/admin/github-actions/enabling-github-actions-and-configuring-storage#about-external-storage-requirements) (and some others). So we'll provision a new Azure storage account with a [storage account type](../storage/common/storage-account-overview.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#types-of-storage-accounts) of BlobStorage:
++
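
The storage account can also be provisioned from the command line. Here's a minimal sketch using the Azure CLI; the account name, resource group, and region are placeholders (storage account names must be globally unique):

```shell
# Create a BlobStorage account (requires an access tier).
az storage account create \
  --name ghesactionsstorage \
  --resource-group myResourceGroup \
  --location westus2 \
  --kind BlobStorage \
  --access-tier Hot \
  --sku Standard_LRS

# Retrieve the connection string needed in the next step.
az storage account show-connection-string \
  --name ghesactionsstorage \
  --resource-group myResourceGroup \
  --output tsv
```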
+Once the deployment of the new BlobStorage resource has completed, copy and make a note of the connection string (available under Access keys). We'll need this string shortly.
+
+## Set up the GitHub Actions runner
+
+> [!NOTE]
+> GitHub Actions is [currently available as a limited beta on GitHub Enterprise Server release 2.22](https://docs.github.com/en/enterprise/admin/github-actions).
+
+At this point, you should have an instance of GitHub Enterprise Server running, with an administrator account created. You should also have external blob storage that GitHub Actions will use for persistence.
+
+Now let's create somewhere for GitHub Actions to run; again, we'll use Azure VMware Solution.
+
+First, let's provision a new VM on the cluster. We'll base our VM on [a recent release of Ubuntu Server](http://releases.ubuntu.com/20.04.1/).
+++
+Once the VM is created, power it up and connect to it via SSH.
+
+Next, install [the Actions runner](https://github.com/actions/runner) application, which runs a job from a GitHub Actions workflow. Identify and download the most current Linux x64 release of the Actions runner, either from [the releases page](https://github.com/actions/runner/releases) or by running the following quick script. This script requires both curl and [jq](https://stedolan.github.io/jq/) to be present on your VM.
+
+```bash
+LATEST_RELEASE_ASSET_URL=$( curl -s https://api.github.com/repos/actions/runner/releases/latest | \
+  jq -r '.assets | .[] | select(.name | match("actions-runner-linux-x64")) | .url' )
+DOWNLOAD_URL=$( curl -s "$LATEST_RELEASE_ASSET_URL" | \
+  jq -r '.browser_download_url' )
+curl -OL "$DOWNLOAD_URL"
+```
+
+You should now have a file locally on your VM, `actions-runner-linux-x64-*.tar.gz`. Extract this tarball locally:
+
+```bash
+tar xzf actions-runner-linux-x64-*.tar.gz
+```
+
+This extraction unpacks a few files locally, including the `config.sh` and `run.sh` scripts, which we'll come back to shortly.
+
+## Enable GitHub Actions
+
+> [!NOTE]
+> GitHub Actions is [currently available as a limited beta on GitHub Enterprise Server release 2.22](https://docs.github.com/en/enterprise/admin/github-actions).
+
+Nearly there! Let's configure and enable GitHub Actions on the GitHub Enterprise Server instance. We'll need to [access the GitHub Enterprise Server instance's administrative shell over SSH](https://docs.github.com/en/enterprise/admin/configuration/accessing-the-administrative-shell-ssh), and then run the following commands:
+
+```bash
+# Set an environment variable containing your blob storage connection string.
+export CONNECTION_STRING="<your connection string from the blob storage step>"
+
+# Configure Actions storage.
+ghe-config secrets.actions.storage.blob-provider azure
+ghe-config secrets.actions.storage.azure.connection-string "$CONNECTION_STRING"
+
+# Apply these settings.
+ghe-config-apply
+
+# Execute a precheck; this installs additional software required by Actions
+# on GitHub Enterprise Server.
+ghe-actions-precheck -p azure -cs "$CONNECTION_STRING"
+
+# Enable Actions, and reapply the config.
+ghe-config app.actions.enabled true
+ghe-config-apply
+```
+
+Next, run:
+
+```bash
+ghe-actions-check -s blob
+```
+
+You should see the output: "Blob Storage is healthy".
+
+Now that GitHub Actions is configured, enable it for your users. Sign in to your GitHub Enterprise Server instance as an administrator, and select the ![Rocket icon.](media/github-enterprise-server/rocket-icon.png) in the upper right corner of any page. In the left sidebar, select **Enterprise overview**, then **Policies**, **Actions**, and select the option to **enable Actions for all organizations**.
+
+Next, configure your runner from the **Self-hosted runners** tab. Select **Add new** and then **New runner** from the drop-down.
+
+On the next page, you'll be presented with a set of commands to run. We just need to copy the command to **configure** the runner, for instance:
+
+```bash
+./config.sh --url https://10.1.1.26/enterprises/octo-org --token AAAAAA5RHF34QLYBDCHWLJC7L73MA
+```
+
+Copy the `config.sh` command and paste it into a session on your Actions runner (created previously).
++
+Use the `run.sh` command to *run* the runner:
++
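
For example, once the runner is configured, you can run it interactively or install it as a service so it survives reboots (a sketch; the `svc.sh` helper ships alongside `run.sh` in the runner package):

```shell
# Run interactively (useful for a first test):
./run.sh

# Or install and start the runner as a systemd service (requires sudo):
sudo ./svc.sh install
sudo ./svc.sh start
sudo ./svc.sh status
```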
+To make this runner available to organizations in your enterprise, edit its organization access:
++
+Here we'll make it available to all organizations, but you can limit access to a subset of organizations, and even to specific repositories.
+
+## (Optional) Configure GitHub Connect
+
+Although this step is optional, we recommend it if you plan to consume open-source actions available on GitHub.com. It allows you to build on the work of others by referencing these reusable actions in your workflows.
+
+To enable GitHub Connect, follow the steps in [Enabling automatic access to GitHub.com actions using GitHub Connect](https://docs.github.com/en/enterprise/admin/github-actions/enabling-automatic-access-to-githubcom-actions-using-github-connect).
+
+Once GitHub Connect is enabled, select the **Server to use actions from GitHub.com in workflow runs** option.
++
+## Set up and run your first workflow
+
+Now that Actions and GitHub Connect are set up, let's put all this work to good use. Here's an example workflow that references the excellent [octokit/request-action](https://github.com/octokit/request-action), allowing us to "script" GitHub through interactions using the GitHub API, powered by GitHub Actions.
+
+In this basic workflow, we'll use `octokit/request-action` to just open an issue on GitHub using the API.
++
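
A minimal version of such a workflow might look like the following (a sketch: the trigger, version ref, and issue title are illustrative; `GITHUB_TOKEN` is provided to the workflow automatically):

```yaml
name: hello-world
on: [push]
jobs:
  open-issue:
    runs-on: self-hosted
    steps:
      # Call the GitHub API to open an issue in the current repository.
      - uses: octokit/request-action@v2.x
        with:
          route: POST /repos/{owner}/{repo}/issues
          owner: ${{ github.event.repository.owner.login }}
          repo: ${{ github.event.repository.name }}
          title: Hello world
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```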
+>[!NOTE]
+>GitHub.com hosts the action, but when it runs on GitHub Enterprise Server, it *automatically* uses the GitHub Enterprise Server API.
+
+If you chose to not enable GitHub Connect, you can use the following alternative workflow.
++
+Navigate to a repo on your instance, and add the above workflow as: `.github/workflows/hello-world.yml`
++
+In the **Actions** tab for your repo, wait for the workflow to execute.
++
+You can also watch it being processed by the runner.
++
+If everything ran successfully, you should see a new issue in your repo, entitled "Hello world."
++
+Congratulations! You just completed your first Actions workflow on GitHub Enterprise Server, running on your Azure VMware Solution private cloud.
+
+In this article, we set up a new instance of GitHub Enterprise Server, the self-hosted equivalent of GitHub.com, on top of your Azure VMware Solution private cloud. This instance includes support for GitHub Actions and uses Azure Blob Storage for persistence of logs and artifacts. But we're just scratching the surface of what you can do with GitHub Actions. Check out the list of Actions on [GitHub's Marketplace](https://github.com/marketplace), or [create your own](https://docs.github.com/en/actions/creating-actions).
+
+## Next steps
+
+Now that you've covered setting up GitHub Enterprise Server on your Azure VMware Solution private cloud, you may want to learn about:
+
+- [How to get started with GitHub Actions](https://docs.github.com/en/actions)
+- [How to join the beta program](https://resources.github.com/beta-signup/)
+- [Administration of GitHub Enterprise Server](https://githubtraining.github.io/admin-training/#/00_getting_started)
azure-vmware Configure Site To Site Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-site-to-site-vpn-gateway.md
+
+ Title: Configure a VPN gateway into Azure VMware Solution
+description: Learn how to establish a VPN (IPsec IKEv1 and IKEv2) site-to-site tunnel into Azure VMware Solution.
+ Last updated : 03/23/2021++
+# Configure a VPN gateway into Azure VMware Solution
+
+In this article, we'll go through the steps to establish a VPN (IPsec IKEv1 and IKEv2) site-to-site tunnel terminating in the Microsoft Azure Virtual WAN hub. The hub contains the Azure VMware Solution ExpressRoute gateway and the site-to-site VPN gateway. It connects an on-premises VPN device with an Azure VMware Solution endpoint.
++
+In this how-to, you'll:
+- Create an Azure Virtual WAN hub and a VPN gateway with a public IP address attached to it.
+- Create an Azure ExpressRoute gateway and establish an Azure VMware Solution endpoint.
+- Enable a policy-based VPN on-premises setup.
+
+## Prerequisites
+You must have a public-facing IP address terminating on an on-premises VPN device.
+
+## Step 1. Create an Azure Virtual WAN
++
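
The portal steps for this section were omitted here; a rough command-line equivalent with the Azure CLI looks like this (resource names and region are placeholders, and the `virtual-wan` CLI extension may be required):

```shell
# Create a resource group and a Virtual WAN in it.
az group create --name myResourceGroup --location westus2
az network vwan create \
  --name myVirtualWan \
  --resource-group myResourceGroup \
  --location westus2
```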
+## Step 2. Create a Virtual WAN hub and gateway
+
+>[!TIP]
+>You can also [create a gateway in an existing hub](../virtual-wan/virtual-wan-expressroute-portal.md#existinghub).
+
+1. Select the Virtual WAN you created in the previous step.
+
+1. Select **Create virtual hub**, enter the required fields, and then select **Next: Site to site**.
+
+ Enter the subnet using a `/24` (minimum).
+
+ :::image type="content" source="media/create-ipsec-tunnel/create-virtual-hub.png" alt-text="Screenshot showing the Create virtual hub page.":::
+
+1. Select the **Site-to-site** tab and define the site-to-site gateway by setting the aggregate throughput from the **Gateway scale units** drop-down.
+
+ >[!TIP]
+ >The scale units are in pairs for redundancy, each supporting 500 Mbps (one scale unit = 500 Mbps).
+
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-hub-include/site-to-site.png" alt-text="Screenshot showing the Site-to-site details.":::
+
+1. Select the **ExpressRoute** tab and create an ExpressRoute gateway.
+
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-er-hub-include/hub2.png" alt-text="Screenshot of the ExpressRoute settings.":::
+
+ >[!TIP]
+ >A scale unit value is 2 Gbps.
+
+ It takes approximately 30 minutes to create each hub.
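
A hedged CLI equivalent of this step (hub, VPN gateway, and ExpressRoute gateway) follows; all names and the address prefix are placeholders, and the `virtual-wan` CLI extension may be required:

```shell
# Create the hub with a /24 address space (the minimum noted above).
az network vhub create \
  --name myHub \
  --vwan myVirtualWan \
  --resource-group myResourceGroup \
  --address-prefix 10.100.0.0/24 \
  --location westus2

# Site-to-site VPN gateway; one scale unit = 500 Mbps.
az network vpn-gateway create \
  --name myHubVpnGateway \
  --resource-group myResourceGroup \
  --vhub myHub \
  --scale-unit 1

# ExpressRoute gateway; one scale unit = 2 Gbps.
az network express-route gateway create \
  --name myHubErGateway \
  --resource-group myResourceGroup \
  --virtual-hub myHub
```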
+
+## Step 3. Create a site-to-site VPN
+
+1. In the Azure portal, select the virtual WAN you created earlier.
+
+2. In the **Overview** of the virtual hub, select **Connectivity** > **VPN (Site-to-site)** > **Create new VPN site**.
+
+ :::image type="content" source="media/create-ipsec-tunnel/create-vpn-site-basics.png" alt-text="Screenshot of the Overview page for the virtual hub, with VPN (site-to-site) and Create new VPN site selected.":::
+
+3. On the **Basics** tab, enter the required fields.
+
+ :::image type="content" source="media/create-ipsec-tunnel/create-vpn-site-basics2.png" alt-text="Screenshot of the Basics tab for the new VPN site.":::
+
+ 1. Set the **Border Gateway Protocol** to **Enable**. When enabled, it ensures that both Azure VMware Solution and the on-premises servers advertise their routes across the tunnel. If disabled, the subnets that need to be advertised must be manually maintained. If subnets are missed, HCX will fail to form the service mesh. For more information, see [About BGP with Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-overview.md).
+
+ 1. For the **Private address space**, enter the on-premises CIDR block. It's used to route all traffic bound for on-premises across the tunnel. The CIDR block is only required if you don't enable BGP.
+
+1. Select **Next: Links** and complete the required fields. Specifying link and provider names allows you to distinguish between any number of gateways that may eventually be created as part of the hub. The BGP autonomous system number (ASN) must be unique inside your organization.
+
+ :::image type="content" source="media/create-ipsec-tunnel/create-vpn-site-links.png" alt-text="Screenshot that shows link details.":::
+
+1. Select **Review + create**.
+
+1. Navigate to the virtual hub that you want, and deselect **Hub association** to connect your VPN site to the hub.
+
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-site-include/connect.png" alt-text="Screenshot that shows the Connected Sites pane for Virtual HUB ready for Pre-shared key and associated settings.":::
+
+## Step 4. (Optional) Create policy-based VPN site-to-site tunnels
+
+>[!IMPORTANT]
+>This is an optional step and applies only to policy-based VPNs.
+
+Policy-based VPN setups require on-premises and Azure VMware Solution networks to be specified, including the hub ranges. These hub ranges specify the encryption domain of the policy-based VPN tunnel's on-premises endpoint. The Azure VMware Solution side only requires the policy-based traffic selector indicator to be enabled.
+
+1. In the Azure portal, go to your Virtual WAN hub site. Under **Connectivity**, select **VPN (Site to site)**.
+
+2. Select your VPN site name, the ellipsis (...) at the far right, and then **edit VPN connection to this hub**.
+
+ :::image type="content" source="media/create-ipsec-tunnel/edit-vpn-section-to-this-hub.png" alt-text="Screenshot of the page in Azure for the Virtual WAN hub site showing an ellipsis selected to access Edit VPN connection to this hub." lightbox="media/create-ipsec-tunnel/edit-vpn-section-to-this-hub.png":::
+
+3. Edit the connection between the VPN site and the hub, and then select **Save**.
+    - For **Internet Protocol Security (IPsec)**, select **Custom**.
+    - For **Use policy-based traffic selector**, select **Enable**.
+    - Specify the details for **IKE Phase 1** and **IKE Phase 2 (IPsec)**.
+
+ :::image type="content" source="media/create-ipsec-tunnel/edit-vpn-connection.png" alt-text="Screenshot of Edit VPN connection page.":::
+
+ Your traffic selectors or subnets that are part of the policy-based encryption domain should be:
+
+ - Virtual WAN hub `/24`
+ - Azure VMware Solution private cloud `/22`
+ - Connected Azure virtual network (if present)
+
+## Step 5. Connect your VPN site to the hub
+
+1. Select your VPN site name and then select **Connect VPN sites**.
+
+1. In the **Pre-shared key** field, enter the key previously defined for the on-premises endpoint.
+
+ >[!TIP]
+ >If you don't have a previously defined key, you can leave this field blank. A key is generated for you automatically.
+
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-connect-vpn-site-include/connect.png" alt-text="Screenshot that shows the Connected Sites pane for Virtual HUB ready for a Pre-shared key and associated settings. ":::
+
+1. If you're deploying a firewall in the hub and it's the next hop, set the **Propagate Default Route** option to **Enable**.
+
+ When enabled, the Virtual WAN hub propagates to a connection only if the hub already learned the default route when deploying a firewall in the hub or if another connected site has forced tunneling enabled. The default route does not originate in the Virtual WAN hub.
+
+1. Select **Connect**. After a few minutes, the site shows the connection and connectivity status.
+
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-connect-vpn-site-include/status.png" alt-text="Screenshot that shows a site-to-site connection and connectivity status." lightbox="../../includes/media/virtual-wan-tutorial-connect-vpn-site-include/status.png":::
+
+1. [Download the VPN configuration file](../virtual-wan/virtual-wan-site-to-site-portal.md#device) for the on-premises endpoint.
+
+1. Patch the Azure VMware Solution ExpressRoute in the Virtual WAN hub.
+
+ >[!IMPORTANT]
+ >You must first have a private cloud created before you can patch the platform.
+
+ [!INCLUDE [request-authorization-key](includes/request-authorization-key.md)]
+
+1. Link Azure VMware Solution and the VPN gateway together in the Virtual WAN hub. You'll use the authorization key and ExpressRoute ID (peer circuit URI) from the previous step.
+
+ 1. Select your ExpressRoute gateway and then select **Redeem authorization key**.
+
+ :::image type="content" source="media/create-ipsec-tunnel/redeem-authorization-key.png" alt-text="Screenshot of the ExpressRoute page for the private cloud, with Redeem authorization key selected.":::
+
+ 1. Paste the authorization key in the **Authorization Key** field.
+ 1. Paste the ExpressRoute ID into the **Peer circuit URI** field.
+ 1. Select **Automatically associate this ExpressRoute circuit with the hub** check box.
+ 1. Select **Add** to establish the link.
+
+1. Test your connection by [creating an NSX-T segment](./tutorial-nsx-t-network-segment.md) and provisioning a VM on the network. Ping both the on-premises and Azure VMware Solution endpoints.
azure-vmware Configure Windows Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-windows-server-failover-cluster.md
+
+ Title: Configure Windows Server Failover Cluster on Azure VMware Solution vSAN
+description: Learn how to configure Windows Server Failover Cluster (WSFC) on Azure VMware Solution vSAN with native shared disks.
+ Last updated : 05/04/2021++
+# Configure Windows Server Failover Cluster on Azure VMware Solution vSAN
+
+In this article, you'll learn how to configure [Failover Clustering in Windows Server](/windows-server/failover-clustering/failover-clustering-overview) on Azure VMware Solution vSAN with native shared disks.
+
+>[!IMPORTANT]
+>The implementation in this article is for proof of concept and pilot purposes. We recommend using a Cluster-in-a-Box (CIB) configuration until placement policies become available.
+
+Windows Server Failover Cluster (WSFC), previously known as Microsoft Cluster Service (MSCS), is a feature of the Windows Server Operating System (OS). WSFC is a business-critical feature that many applications require. For example, WSFC is required for the following configurations:
+
+- SQL Server configured as:
+ - Always On Failover Cluster Instance (FCI), for instance-level high availability.
+ - Always On Availability Group (AG), for database-level high availability.
+- Windows File Services:
+ - Generic File share running on active cluster node.
+ - Scale-Out File Server (SOFS), which stores files in cluster shared volumes (CSV).
+ - Storage Spaces Direct (S2D); local disks used to create storage pools across different cluster nodes.
+
+You can host the WSFC cluster on different Azure VMware Solution instances, known as Cluster-Across-Box (CAB). You can also place the WSFC cluster on a single Azure VMware Solution node. This configuration is known as Cluster-in-a-Box (CIB). We don't recommend using a CIB solution for a production implementation. Were the single Azure VMware Solution node to fail, all WSFC cluster nodes would be powered off, and the application would experience downtime. Azure VMware Solution requires a minimum of three nodes in a private cloud cluster.
+
+It's important to deploy a supported WSFC configuration. You'll want your solution to be supported on vSphere and with Azure VMware Solution. VMware provides a detailed document about WSFC on vSphere 6.7, [Setup for Failover Clustering and Microsoft Cluster Service](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-vcenter-server-67-setup-mscs.pdf).
+
+This article focuses on WSFC on Windows Server 2016 and Windows Server 2019. Older Windows Server versions are out of [mainstream support](https://support.microsoft.com/lifecycle/search?alpha=windows%20server) and so we don't consider them here.
+
+You'll need to first [create a WSFC](/windows-server/failover-clustering/create-failover-cluster). Use the information we provide in this article for the specifics of a WSFC deployment on Azure VMware Solution.
+
+## Prerequisites
+
+- Azure VMware Solution environment
+- Microsoft Windows Server OS installation media
+
+## Reference architecture
+
+Azure VMware Solution provides native support for virtualized WSFC. It supports SCSI-3 Persistent Reservations (SCSI3PR) on a virtual disk level. This support is required by WSFC to arbitrate access to a shared disk between nodes. Support of SCSI3PRs enables configuration of WSFC with a disk resource shared between VMs natively on vSAN datastores.
+
+The following diagram illustrates the architecture of WSFC virtual nodes on an Azure VMware Solution private cloud. It shows where Azure VMware Solution resides, including the WSFC virtual servers (red box), in relation to the broader Azure platform. This diagram illustrates a typical hub-spoke architecture, but a similar setup is possible with the use of Azure Virtual WAN. Both offer all the value other Azure services can bring you.
++
+## Supported configurations
+
+Currently, the configurations supported are:
+
+- Microsoft Windows Server 2012 or later
+- Up to five failover clustering nodes per cluster
+- Up to four PVSCSI adapters per VM
+- Up to 64 disks per PVSCSI adapter
+
+## Virtual machine configuration requirements
+
+### WSFC node configuration parameters
+
+- Install the latest VMware Tools on each WSFC node.
+- Mixing non-shared and shared disks on a single virtual SCSI adapter isn't supported. For example, if the system disk (drive C:) is attached to SCSI0:0, the first shared disk would be attached to SCSI1:0. A VM node of a WSFC has the same virtual SCSI controller maximum as an ordinary VM - up to four (4) virtual SCSI Controllers.
+- Virtual disk SCSI IDs should be consistent between all VMs hosting nodes of the same WSFC.
+
+| **Component** | **Requirements** |
+| --- | --- |
+| VM hardware version | 11 or above to support Live vMotion. |
+| Virtual NIC | VMXNET3 paravirtualized network interface card (NIC); enable the in-guest Windows Receive Side Scaling (RSS) on the virtual NIC. |
+| Memory | Use full VM reservation memory for nodes in the WSFC cluster. |
+| Increase the I/O timeout of each WSFC node. | Modify `HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue` to 60 seconds or more. (If you recreate the cluster, this value might be reset to its default, so you must change it again.) |
+| Windows cluster health monitoring | The value of the SameSubnetThreshold Parameter of Windows cluster health monitoring must be modified to allow 10 missed heartbeats at minimum. This is [the default in Windows Server 2016](https://techcommunity.microsoft.com/t5/failover-clustering/tuning-failover-cluster-network-thresholds/ba-p/371834). This recommendation applies to all applications using WSFC, including shared and non-shared disks. |
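
On each Windows node, the disk timeout and heartbeat-threshold adjustments above might be applied from an elevated command prompt like this (a sketch, not a hardened script):

```
rem Set the disk I/O timeout to 60 seconds (decimal value).
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f

rem Allow 10 missed heartbeats between same-subnet cluster nodes.
powershell -Command "(Get-Cluster).SameSubnetThreshold = 10"
```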
+
+### WSFC node - Boot disks configuration parameters
++
+| **Component** | **Requirements** |
+| --- | --- |
+| SCSI Controller Type | LSI Logic SAS |
+| Disk mode | Virtual |
+| SCSI bus sharing | None |
+| Modify advanced settings for a virtual SCSI controller hosting the boot device. | Add the following advanced settings to each WSFC node:<br /> scsiX.returnNoConnectDuringAPD = "TRUE"<br />scsiX.returnBusyOnNoConnectStatus = "FALSE"<br />Where X is the boot device SCSI bus controller ID number. By default, X is set to 0. |
+
+### WSFC node - Shared disks configuration parameters
++
+| **Component** | **Requirements** |
+| --- | --- |
+| SCSI Controller Type | VMware Paravirtual (PVSCSI) |
+| Disk mode | Independent - Persistent (step 2 in illustration below). By using this setting, you ensure that all disks are excluded from snapshots. Snapshots aren't supported for WSFC-based VMs. |
+| SCSI bus sharing | Physical (step 1 in illustration below) |
+| Multi-writer flag | Not used |
+| Disk format | Thick provisioned. (Eager Zeroed Thick (EZT) isn't required with vSAN.) |
++
+## Non-supported scenarios
+
+The following functionalities aren't supported for WSFC on Azure VMware Solution:
+
+- NFS data stores
+- Storage Spaces
+- vSAN using iSCSI Service
+- vSAN Stretched Cluster
+- Enhanced vMotion Compatibility (EVC)
+- vSphere Fault Tolerance (FT)
+- Snapshots
+- Live (online) storage vMotion
+- N-Port ID Virtualization (NPIV)
+
+Hot changes to virtual machine hardware might disrupt the heartbeat between the WSFC nodes.
+
+The following activities aren't supported and might cause WSFC node failover:
+
+- Hot adding memory
+- Hot adding CPU
+- Using snapshots
+- Increasing the size of a shared disk
+- Pausing and resuming the virtual machine state
+- Memory over-commitment leading to ESXi swapping or VM memory ballooning
+- Hot Extend Local VMDK file, even if it isn't associated with SCSI bus sharing controller
+
+## Configure WSFC with shared disks on Azure VMware Solution vSAN
+
+1. Ensure that an Active Directory environment is available.
+2. Create virtual machines (VMs) on the vSAN datastore.
+3. Power on all VMs, configure the hostname and IP addresses, join all VMs to an Active Directory domain, and install the latest available OS updates.
+4. Install the latest VMware Tools.
+5. Enable and configure the Windows Server Failover Cluster feature on each VM.
+6. Configure a Cluster Witness for quorum (a file share witness works fine).
+7. Power off all nodes of the WSFC cluster.
+8. Add one or more Paravirtual SCSI controllers (up to four) to each VM part of the WSFC. Use the settings per the previous paragraphs.
+9. On the first cluster node, add all needed shared disks using **Add New Device** > **Hard Disk**. Disk sharing should be left as **Unspecified** (default) and Disk mode as **Independent - Persistent**. Attach it to the controller(s) created in the previous steps.
+10. Continue with the remaining WSFC nodes. Add the disks created in the previous step by selecting **Add New Device** > **Existing Hard Disk**. Be sure to maintain the same disk SCSI IDs on all WSFC nodes.
+11. Power on the first WSFC node; sign in and open the disk management console (mmc). Make sure the added shared disks can be managed by the OS and are initialized. Format the disks and assign a drive letter.
+12. Power on the other WSFC nodes.
+13. Add the disk to the WSFC cluster using the **Add Disk wizard** and add them to a Cluster Shared Volume.
+14. Test a failover using the **Move disk wizard** and make sure the WSFC cluster with shared disks works properly.
+15. Run the **Validation Cluster wizard** to confirm whether the cluster and its nodes are working properly.
+
+ It's important to keep the following specific items from the Cluster Validation test in mind:
+
+ - **Validate Storage Spaces Persistent Reservation**. If you aren't using Storage Spaces with your cluster (such as on Azure VMware Solution vSAN), this test isn't applicable. You can ignore any results of the Validate Storage Spaces Persistent Reservation test including this warning. To avoid warnings, you can exclude this test.
+
+ - **Validate Network Communication**. The Cluster Validation test will throw a warning that only one network interface per cluster node is available. You may ignore this warning. Azure VMware Solution provides the required availability and performance needed, since the nodes are connected to one of the NSX-T segments. However, keep this item as part of the Cluster Validation test, as it will validate other aspects of network communication.
+
+16. Create a DRS rule to place the WSFC VMs on the same Azure VMware Solution nodes. To do so, you need a host-to-VM affinity rule. This way, cluster nodes will run on the same Azure VMware Solution host. Again, this is for pilot purposes until placement policies are available.
+
+ >[!NOTE]
+ > For this you need to create a Support Request ticket. Our Azure support organization will be able to help you with this.
+
+## Related information
+
+- [Failover Clustering in Windows Server](/windows-server/failover-clustering/failover-clustering-overview)
+- [Guidelines for Microsoft Clustering on vSphere (1037959) (vmware.com)](https://kb.vmware.com/s/article/1037959)
+- [About Setup for Failover Clustering and Microsoft Cluster Service (vmware.com)](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.mscs.doc/GUID-1A2476C0-CA66-4B80-B6F9-8421B6983808.html)
+- [vSAN 6.7 U3 - WSFC with Shared Disks &amp; SCSI-3 Persistent Reservations (vmware.com)](https://blogs.vmware.com/virtualblocks/2019/08/23/vsan67-u3-wsfc-shared-disksupport/)
+- [Azure VMware Solution limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-vmware-solution-limits)
+
+## Next steps
+
+Now that you've covered setting up a WSFC in Azure VMware Solution, you may want to learn about:
+
+- Setting up your new WSFC by adding more applications that require the WSFC capability. For instance, SQL Server and SAP ASCS.
+- Setting up a backup solution.
+ - [Setting up Azure Backup Server for Azure VMware Solution](./set-up-backup-server-for-azure-vmware-solution.md)
+ - [Backup solutions for Azure VMware Solution virtual machines](./ecosystem-back-up-vms.md)
azure-vmware Enable Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/enable-public-internet-access.md
+
+ Title: Enable public internet access in Azure VMware Solution
+description: This article explains how to use the public IP functionality in Azure Virtual WAN.
+ Last updated : 02/04/2021+
+# How to use the public IP functionality in Azure VMware Solution
+
+Public IP is a new feature in Azure VMware Solution connectivity. It makes resources, such as web servers, virtual machines (VMs), and hosts accessible through a public network.
+
+You can enable public internet access in two ways:
+
+- Host and publish applications under the Application Gateway load balancer for HTTP/HTTPS traffic.
+- Publish resources through the public IP feature in Azure Virtual WAN.
+
+As part of an Azure VMware Solution private cloud deployment, when you enable public IP functionality, the following required components are created and enabled automatically:
+
+- Virtual WAN
+
+- Virtual WAN hub with ExpressRoute connectivity
+
+- Azure Firewall services with public IP
+
+This article details how you can use the public IP functionality in Virtual WAN.
+
+## Prerequisites
+
+- Azure VMware Solution environment
+- A web server running in the Azure VMware Solution environment.
+- A new, non-overlapping IP range for the Virtual WAN hub deployment, typically a `/24`.
+
+## Reference architecture
++
+The architecture diagram shows a web server hosted in the Azure VMware Solution environment and configured with RFC1918 private IP addresses. The web service is made available to the internet through the Virtual WAN public IP functionality. The public IP is typically translated through destination NAT (DNAT) in Azure Firewall. With DNAT rules, the firewall policy translates requests to the public IP address into a private address (the web server) and port.
+
+User requests hit the firewall on a public IP address that is, in turn, translated to a private IP address using the DNAT rules in Azure Firewall. The firewall checks the NAT table, and if the request matches an entry, it forwards the traffic to the translated address and port in the Azure VMware Solution environment.
+
+The web server receives the request and replies with the requested information or page to the firewall, which then forwards the information to the user on the public IP address.
+
+## Test case
+In this scenario, you'll publish an IIS web server to the internet. Use the public IP feature in Azure VMware Solution to publish the website on a public IP address. You'll also configure NAT rules on the firewall and access the Azure VMware Solution resource (a VM running a web server) through the public IP.
+
+>[!TIP]
+>To enable egress traffic, you must set **Security configuration** > **Internet traffic** to **Azure Firewall**.
+
+## Deploy Virtual WAN
+
+1. Sign in to the Azure portal and then search for and select **Azure VMware Solution**.
+
+1. Select the Azure VMware Solution private cloud.
+
+ :::image type="content" source="media/public-ip-usage/avs-private-cloud-resource.png" alt-text="Screenshot of the Azure VMware Solution private cloud." border="true" lightbox="media/public-ip-usage/avs-private-cloud-resource.png":::
+
+1. Under **Manage**, select **Connectivity**.
+
+ :::image type="content" source="media/public-ip-usage/avs-private-cloud-manage-menu.png" alt-text="Screenshot of the Connectivity section." border="true" lightbox="media/public-ip-usage/avs-private-cloud-manage-menu.png":::
+
+1. Select the **Public IP** tab and then select **Configure**.
+
+ :::image type="content" source="media/public-ip-usage/connectivity-public-ip-tab.png" alt-text="Screenshot that shows where to begin to configure the public IP" border="true" lightbox="media/public-ip-usage/connectivity-public-ip-tab.png":::
+
+1. Accept the default values or change them, and then select **Create**.
+
+ - Virtual WAN resource group
+
+ - Virtual WAN name
+
+ - Virtual hub address block (use a new, non-overlapping IP range)
+
+ - Number of public IPs (1-100)
+
+It takes about one hour to complete the deployment of all components. This deployment only has to occur once to support all future public IPs for this Azure VMware Solution environment.
+
+>[!TIP]
+>You can monitor the status from the **Notification** area.
+
+## View and add public IP addresses
+
+To view and add more public IP addresses, follow these steps.
+
+1. In the Azure portal, search for and select **Firewall**.
+
+1. Select a deployed firewall and then select **Visit Azure Firewall Manager to configure and manage this firewall**.
+
+ :::image type="content" source="media/public-ip-usage/configure-manage-deployed-firewall.png" alt-text="Screenshot that shows the option to configure and manage the firewall" border="true" lightbox="media/public-ip-usage/configure-manage-deployed-firewall.png":::
+
+1. Select **Secured virtual hubs** and, from the list, select a virtual hub.
+
+ :::image type="content" source="media/public-ip-usage/select-virtual-hub.png" alt-text="Screenshot of Firewall Manager" lightbox="media/public-ip-usage/select-virtual-hub.png":::
+
+1. On the virtual hub page, select **Public IP configuration**. To add more public IP addresses, select **Add**.
+
+ :::image type="content" source="media/public-ip-usage/virtual-hub-page-public-ip-configuration.png" alt-text="Screenshot of how to add a public IP configuration in Firewall Manager" border="true" lightbox="media/public-ip-usage/virtual-hub-page-public-ip-configuration.png":::
+
+1. Provide the number of IPs required and select **Add**.
+
+ :::image type="content" source="media/public-ip-usage/add-number-of-ip-addresses-required.png" alt-text="Screenshot to add a specified number of public IP configurations" border="true":::
++
+## Create firewall policies
+
+Once all components are deployed, you can see them in the resource group that was created. The next step is to add a firewall policy.
+
+1. In the Azure portal, search for and select **Firewall**.
+
+1. Select a deployed firewall and then select **Visit Azure Firewall Manager to configure and manage this firewall**.
+
+ :::image type="content" source="media/public-ip-usage/configure-manage-deployed-firewall.png" alt-text="Screenshot that shows the option to configure and manage the firewall" border="true" lightbox="media/public-ip-usage/configure-manage-deployed-firewall.png":::
+
+1. Select **Azure Firewall Policies** and then select **Create Azure Firewall Policy**.
+
+ :::image type="content" source="media/public-ip-usage/create-firewall-policy.png" alt-text="Screenshot of how to create a firewall policy in Firewall Manager" border="true" lightbox="media/public-ip-usage/create-firewall-policy.png":::
+
+1. Under the **Basics** tab, provide the required details and select **Next: DNS Settings**.
+
+1. Under the **DNS** tab, select **Disable**, and then select **Next: Rules**.
+
+1. Select **Add a rule collection**, provide the following details, select **Add**, and then select **Next: Threat intelligence**.
+
+ - Name
+ - Rule collection type - **DNAT**
+ - Priority
+ - Rule collection action - **Allow**
+ - Rule name
+ - Source type - **IP address**
+ - Source - **\***
+ - Protocol - **TCP**
+ - Destination port - **80**
+ - Destination type - **IP address**
+ - Destination - **Public IP address**
+ - Translated address - **Azure VMware Solution web server private IP address**
+ - Translated port - **Azure VMware Solution web server port**
+
+1. Leave the default value, and then select **Next: Hubs**.
+
+1. Select **Associate virtual hub**.
+
+1. Select a hub from the list and select **Add**.
+
+ :::image type="content" source="media/public-ip-usage/secure-hubs-with-azure-firewall-polcy.png" alt-text="Screenshot that shows the selected hubs that will be converted to Secured Virtual Hubs." border="true" lightbox="media/public-ip-usage/secure-hubs-with-azure-firewall-polcy.png":::
+
+1. Select **Next: Tags**.
+
+1. (Optional) Create name and value pairs to categorize your resources.
+
+1. Select **Next: Review + create** and then select **Create**.
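The DNAT rule collection configured in the portal steps above can also be expressed with the Azure CLI. The following is a hedged sketch, not part of the documented procedure: the command (`az network firewall policy rule-collection-group collection add-nat-collection`) and all resource names and addresses below are assumptions to verify against the current CLI reference before use.

```azurecli
# All names and addresses are placeholders - replace with your own values.
az network firewall policy rule-collection-group collection add-nat-collection \
    --resource-group myResourceGroup \
    --policy-name myFirewallPolicy \
    --rule-collection-group-name myRuleCollectionGroup \
    --name AllowWebServer \
    --collection-priority 100 \
    --action DNAT \
    --rule-name dnat-web \
    --source-addresses '*' \
    --ip-protocols TCP \
    --destination-addresses <firewall-public-ip> \
    --destination-ports 80 \
    --translated-address <web-server-private-ip> \
    --translated-port 80
```

The values mirror the portal rule above: any source, TCP port 80 on the firewall's public IP, translated to the web server's private IP and port.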
+
+## Limitations
+
+You can have up to 100 public IPs per private cloud.
+
+## Next steps
+
+Now that you've covered how to use the public IP functionality in Azure VMware Solution, you may want to learn about:
+
+- Using public IP addresses with [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md).
+- [Creating an IPSec tunnel into Azure VMware Solution](create-ipsec-tunnel.md).
azure-vmware Tutorial Scale Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-scale-private-cloud.md
Title: Tutorial - Scale a private cloud
+ Title: Tutorial - Expand or shrink clusters in a private cloud
description: In this tutorial, you use the Azure portal to scale an Azure VMware Solution private cloud. Last updated 03/13/2021
Last updated 03/13/2021
#Customer intent: As a VMware administrator, I want to learn how to scale an Azure VMware Solution private cloud in the Azure portal.
-# Tutorial: Scale an Azure VMware Solution private cloud
+# Tutorial: Expand or shrink clusters in a private cloud
To get the most out of your Azure VMware Solution private cloud experience, scale the clusters and hosts as required for your application workloads. Performance and availability limitations for specific services should be addressed on a case-by-case basis. The cluster and host limits are provided in the [private cloud concept](concepts-private-clouds-clusters.md) article.
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Title: Set up vRealize Operations for Azure VMware Solution
+ Title: Configure vRealize Operations for Azure VMware Solution
description: Learn how to set up vRealize Operations for your Azure VMware Solution private cloud. Last updated 01/26/2021
-# Set up vRealize Operations for Azure VMware Solution
+# Configure vRealize Operations for Azure VMware Solution
vRealize Operations Manager is an operations management platform that allows VMware infrastructure administrators to monitor system resources. These system resources could be application-level or infrastructure-level (both physical and virtual) objects. Most VMware administrators have used vRealize Operations to monitor and manage the VMware private cloud components: vCenter, ESXi, NSX-T, vSAN, and VMware HCX. Each provisioned Azure VMware Solution private cloud includes a dedicated vCenter, NSX-T, vSAN, and HCX deployment.
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
backup Quick Backup Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/quick-backup-vm-template.md
Title: Quickstart - Resource Manager template VM Backup
description: Learn how to back up your virtual machines with Azure Resource Manager template ms.devlang: azurecli Previously updated : 05/14/2019 Last updated : 04/28/2021
A [Recovery Services vault](backup-azure-recovery-services-vault-overview.md) is
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-recovery-services-create-vm-and-configure-backup%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.recoveryservices%2Frecovery-services-create-vm-and-configure-backup%2Fazuredeploy.json)
## Review the template

The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-recovery-services-create-vm-and-configure-backup/). This template lets you deploy a simple Windows VM and a Recovery Services vault configured with the DefaultPolicy for Protection. The resources defined in the template are:
$adminPassword = Read-Host -Prompt "Enter the administrator password for the vir
$dnsPrefix = Read-Host -Prompt "Enter the unique DNS Name for the Public IP used to access the virtual machine"
$resourceGroupName = "${projectName}rg"
-$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-recovery-services-create-vm-and-configure-backup/azuredeploy.json"
+$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.recoveryservices/recovery-services-create-vm-and-configure-backup/azuredeploy.json"
New-AzResourceGroup -Name $resourceGroupName -Location $location
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri -projectName $projectName -adminUsername $adminUsername -adminPassword $adminPassword -dnsLabelPrefix $dnsPrefix
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
backup Selective Disk Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/selective-disk-backup-restore.md
Title: Selective disk backup and restore for Azure virtual machines description: In this article, learn about selective disk backup and restore using the Azure virtual machine backup solution. Previously updated : 07/17/2020 Last updated : 05/03/2021
Ensure you're using Az CLI version 2.0.80 or higher. You can get the CLI version with the following command:

```azurecli
az --version
```
-Sign in to the subscription ID where the Recovery Services vault and the VM exist:
+Sign in to the subscription where the Recovery Services vault and the VM exist:
```azurecli
az account set -s {subscriptionID}
```
During the configure protection operation, you need to specify the disk list setting with an **inclusion** / **exclusion** parameter, giving the LUN numbers of the disks to be included or excluded in the backup.
+>[!NOTE]
+>The configure protection operation overrides the previous settings; it isn't cumulative.
+
```azurecli
az backup protection enable-for-vm --resource-group {resourcegroup} --vault-name {vaultname} --vm {vmname} --policy-name {policyname} --disk-list-setting include --diskslist {LUN number(s) separated by space}
```
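As a hedged illustration of the two variants of this command, the resource names and LUN numbers below are placeholders, not values from this article:

```azurecli
# Back up only the OS disk plus the data disks at LUN 0 and 1.
az backup protection enable-for-vm --resource-group myRG --vault-name myVault --vm myVM --policy-name DefaultPolicy --disk-list-setting include --diskslist 0 1

# Alternatively, back up everything except the data disk at LUN 2.
az backup protection enable-for-vm --resource-group myRG --vault-name myVault --vm myVM --policy-name DefaultPolicy --disk-list-setting exclude --diskslist 2
```

Remember that whichever variant runs last wins, because the setting isn't cumulative.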
This command helps get the details of the backed-up disks and excluded disks as
"Excluded disk(s)": "diskextest_DataDisk_2",
```
+_BackupJobID_ is the Backup Job name. To fetch the job name, run the following command:
+
+```azurecli
+az backup job list --resource-group {resourcegroup} --vault-name {vaultname}
+```
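Once you have the job name, you can inspect that job's details. A sketch, assuming the `az backup job show` command (verify the flags against the current CLI reference):

```azurecli
az backup job show --name {BackupJobID} --resource-group {resourcegroup} --vault-name {vaultname}
```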
### List recovery points with Azure CLI

```azurecli
Ensure you're using Azure PowerShell version 3.7.0 or higher.
During the configure protection operation, you need to specify the disk list setting with an inclusion / exclusion parameter, giving the LUN numbers of the disks to be included or excluded in the backup.
+>[!NOTE]
+>The configure protection operation overrides the previous settings; it isn't cumulative.
+
### Enable backup with PowerShell

For example:
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
cloud-services Cloud Services Allocation Failures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-allocation-failures.md
Title: Troubleshooting Cloud Service (classic) allocation failures | Microsoft Docs description: Troubleshoot an allocation failure when you deploy Azure Cloud Services. Learn how allocation works and why allocation can fail.-+ Last updated 10/14/2020
cloud-services Cloud Services Configuration And Management Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-configuration-and-management-faq.md
Title: Configuration and management issues FAQ
description: This article lists the frequently asked questions about configuration and management for Microsoft Azure Cloud Services. + Last updated 10/14/2020
cloud-services Cloud Services Guestos Family1 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-family1-retirement.md
Title: Guest OS family 1 retirement notice | Microsoft Docs description: Provides information about when the Azure Guest OS Family 1 retirement happened and how to determine if you are affected + documentationcenter: na
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
Title: List of updates applied to the Azure Guest OS | Microsoft Docs description: This article lists the Microsoft Security Response Center updates applied to different Azure Guest OS. See if an update applies to the Guest OS you are using. + documentationcenter: na editor: ''
cloud-services Cloud Services Guestos Retirement Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-retirement-policy.md
Title: Supportability and retirement policy guide for Azure Guest OS | Microsoft Docs description: Provides information about what Microsoft will support as regards to the Azure Guest OS used by Cloud Services. + documentationcenter: na
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-update-matrix.md
Title: Learn about the latest Azure Guest OS Releases | Microsoft Docs description: The latest release news and SDK compatibility for Azure Cloud Services Guest OS. + documentationcenter: na editor: ''
cloud-services Cloud Services Model And Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-model-and-package.md
Title: What is a Cloud Service (classic) model and package | Microsoft Docs
description: Describes the cloud service model (.csdef, .cscfg) and package (.cspkg) in Azure + Last updated 10/14/2020
cloud-services Cloud Services Role Config Xpath https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-role-config-xpath.md
Title: Cloud Services (classic) Role config XPath cheat sheet | Microsoft Docs
description: The various XPath settings you can use in the cloud service role config to expose settings as an environment variable. + Last updated 10/14/2020
cloud-services Cloud Services Sizes Specs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-sizes-specs.md
Title: Virtual machine sizes for Azure Cloud services (classic) | Microsoft Docs
description: Lists the different virtual machine sizes (and IDs) for Azure cloud service web and worker roles. + Last updated 10/14/2020
cloud-services Cloud Services Troubleshoot Common Issues Which Cause Roles Recycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md
Title: Common causes of Cloud Service (classic) roles recycling | Microsoft Docs description: A cloud service role that suddenly recycles can cause significant downtime. Here are some common issues that cause roles to be recycled, which may help you reduce downtime.-+ Last updated 10/14/2020
cloud-services Cloud Services Troubleshoot Default Temp Folder Size Too Small Web Worker Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-default-temp-folder-size-too-small-web-worker-role.md
Title: Default TEMP folder size is too small for a role | Microsoft Docs description: A cloud service role has a limited amount of space for the TEMP folder. This article provides some suggestions on how to avoid running out of space.-+ Last updated 10/14/2020
cloud-services Cloud Services Troubleshoot Deployment Problems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-deployment-problems.md
Title: Troubleshoot cloud service (classic) deployment problems | Microsoft Docs description: There are a few common problems you may run into when deploying a cloud service to Azure. This article provides solutions to some of them.-+ Last updated 10/14/2020
cognitive-services Bing Image Search Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/bing-image-search-resource-faq.md
> Bing Search APIs provisioned using Cognitive Services will be supported for the next three years or until the end of your Enterprise Agreement, whichever happens first.
> For migration instructions, see [Bing Search Services](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
-Find answers to commonly asked questions about concepts, code, and scenarios related to the Bing Image Search API for Microsoft Cognitive Services on Azure.
+Find answers to commonly asked questions about concepts, code, and scenarios related to the Bing Image Search API for Azure Cognitive Services.
## Response headers in JavaScript
This approach also protects your API key from exposure to the public, since only
## Next steps
-Is your question about a missing feature or functionality? Consider requesting or voting for it on our [User Voice web site](https://cognitive.uservoice.com/forums/555907-bing-search).
+Is your question about a missing feature or functionality? Consider requesting or voting for it using the [feedback tool](https://feedback.azure.com/forums/932041-azure-cognitive-services?category_id=395749).
## See also
- [Stack Overflow: Cognitive Services](https://stackoverflow.com/questions/tagged/bing-api)
+ [Stack Overflow: Cognitive Services](https://stackoverflow.com/questions/tagged/bing-api)
cognitive-services Specify Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/specify-detection-model.md
The different face detection models are optimized for different tasks. See the f
||||
|Default choice for all face detection operations. | Released in May 2019 and available optionally in all face detection operations. | Released in February 2021 and available optionally in all face detection operations. |
|Not optimized for small, side-view, or blurry faces. | Improved accuracy on small, side-view, and blurry faces. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. |
-|Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Does not return face attributes. | Returns "faceMask" and "noseAndMouthCovered" attributes if they're specified in the detect call.
+|Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Does not return face attributes. | Returns "mask" attribute if it's specified in the detect call.
|Returns face landmarks if they're specified in the detect call. | Does not return face landmarks. | Does not return face landmarks. |

The best way to compare the performances of the detection models is to use them on a sample dataset. We recommend calling the [Face - Detect] API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns.
cognitive-services Specify Recognition Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/specify-recognition-model.md
A **PersonGroup** should have one unique recognition model for all of the **Pers
See the following code example for the .NET client library.

```csharp
-// Create an empty PersonGroup with "recognition_02" model
+// Create an empty PersonGroup with "recognition_04" model
string personGroupId = "mypersongroupid";
-await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_02");
+await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04");
```
-In this code, a **PersonGroup** with ID `mypersongroupid` is created, and it is set up to use the _recognition_02_ model to extract face features.
+In this code, a **PersonGroup** with ID `mypersongroupid` is created, and it is set up to use the _recognition_04_ model to extract face features.
Correspondingly, you need to specify which model to use when detecting faces to compare against this **PersonGroup** (through the [Face - Detect] API). The model you use should always be consistent with the **PersonGroup**'s configuration; otherwise, the operation will fail due to incompatible models.
cognitive-services Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/concepts/face-detection.md
The coordinates of the points are returned in units of pixels.
Attributes are a set of features that can optionally be detected by the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API. The following attributes can be detected:
+* **Accessories**. Whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
* **Age**. The estimated age in years of a particular face.
* **Blur**. The blurriness of the face in the image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
* **Emotion**. A list of emotions with their detection confidence for the given face. Confidence scores are normalized, and the scores across all emotions add up to one. The emotions returned are happiness, sadness, neutral, anger, contempt, disgust, surprise, and fear.
Attributes are a set of features that can optionally be detected by the [Face -
![A head with the pitch, roll, and yaw axes labeled](../Images/headpose.1.jpg)

* **Makeup**. Whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
+* **Mask**. Whether the face is wearing a mask. This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered.
* **Noise**. The visual noise detected in the face image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
* **Occlusion**. Whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded.
* **Smile**. The smile expression of the given face. This value is between zero for no smile and one for a clear smile.
cognitive-services Get Started Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-intent-recognition.md
Previously updated : 09/02/2020 Last updated : 05/04/2021 zone_pivot_groups: programming-languages-speech-services-one-nomore-no-go
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
A typical response for `detailed` recognition:
  "Offset": "1236645672289",
  "Duration": "1236645672289",
  "NBest": [
- {
- "Confidence" : "0.87",
- "Lexical" : "remind me to buy five pencils",
- "ITN" : "remind me to buy 5 pencils",
- "MaskedITN" : "remind me to buy 5 pencils",
- "Display" : "Remind me to buy 5 pencils.",
- }
+ {
+ "Confidence": 0.9052885,
+ "Display": "What's the weather like?",
+ "ITN": "what's the weather like",
+ "Lexical": "what's the weather like",
+ "MaskedITN": "what's the weather like"
+ },
+ {
+ "Confidence": 0.92459863,
+ "Display": "what is the weather like",
+ "ITN": "what is the weather like",
+ "Lexical": "what is the weather like",
+ "MaskedITN": "what is the weather like"
+ }
  ]
}
```
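When consuming the `detailed` response programmatically, a client typically picks the hypothesis with the highest `Confidence` from the `NBest` array. A minimal sketch using `jq` (assuming `jq` is available; the sample JSON and its display strings are hypothetical, trimmed to two fields):

```shell
# Hypothetical trimmed response; field names match the NBest entries above.
response='{"NBest":[{"Confidence":0.9052885,"Display":"hypothesis one"},{"Confidence":0.92459863,"Display":"what is the weather like"}]}'

# Select the entry with the highest confidence and print its display form.
best=$(echo "$response" | jq -r '.NBest | max_by(.Confidence) | .Display')
echo "$best"
```

Note that the entries are not guaranteed to be sorted, which is why the example selects by `Confidence` rather than taking the first element.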
cognitive-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/release-notes.md
+
+ Title: Release notes - Custom Translator
+
+description: Custom Translator releases, improvements, bug fixes, and known issues.
++++ Last updated : 05/03/2021+++
+# Custom Translator release notes
+
+This page has the latest release notes for features, improvements, bug fixes, and known issues for the Custom Translator service.
+
+## 2021-May release
+
+### Improvements and Bug fixes
+
+- We added a new training pipeline to improve custom model generalization and the capacity to retain more customer terminology (words and phrases).
+- We refreshed the Custom Translator baselines to fix a word alignment bug. See the list of impacted language pairs below.
+
+### Language pair list
+
+| Source Language | Target Language |
+|-|--|
+| Arabic (ar) | English (en-us)|
+| Brazilian Portuguese (pt) | English (en-us)|
+| Bulgarian (bg) | English (en-us)|
+| Chinese Simplified (zh-Hans) | English (en-us)|
+| Chinese Traditional (zh-Hant) | English (en-us)|
+| Croatian (hr) | English (en-us)|
+| Czech (cs) | English (en-us)|
+| Danish (da) | English (en-us)|
+| Dutch (nl) | English (en-us)|
+| English (en-us) | Arabic (ar)|
+| English (en-us) | Bulgarian (bg)|
+| English (en-us) | Chinese Simplified (zh-Hans)|
+| English (en-us) | Chinese Traditional (zh-Hant)|
+| English (en-us) | Czech (cs)|
+| English (en-us) | Danish (da)|
+| English (en-us) | Dutch (nl)|
+| English (en-us) | Estonian (et)|
+| English (en-us) | Fijian (fj)|
+| English (en-us) | Finnish (fi)|
+| English (en-us) | French (fr)|
+| English (en-us) | Greek (el)|
+| English (en-us) | Hindi (hi)|
+| English (en-us) | Hungarian (hu)|
+| English (en-us) | Icelandic (is)|
+| English (en-us) | Indonesian (id)|
+| English (en-us) | Inuktitut (iu)|
+| English (en-us) | Irish (ga)|
+| English (en-us) | Italian (it)|
+| English (en-us) | Japanese (ja)|
+| English (en-us) | Korean (ko)|
+| English (en-us) | Lithuanian (lt)|
+| English (en-us) | Norwegian (nb)|
+| English (en-us) | Polish (pl)|
+| English (en-us) | Romanian (ro)|
+| English (en-us) | Samoan (sm)|
+| English (en-us) | Slovak (sk)|
+| English (en-us) | Spanish (es)|
+| English (en-us) | Swedish (sv)|
+| English (en-us) | Tahitian (ty)|
+| English (en-us) | Thai (th)|
+| English (en-us) | Tongan (to)|
+| English (en-us) | Turkish (tr)|
+| English (en-us) | Ukrainian (uk)|
+| English (en-us) | Welsh (cy)|
+| Estonian (et) | English (en-us)|
+| Fijian (fj) | English (en-us)|
+| Finnish (fi) | English (en-us)|
+| German (de) | English (en-us)|
+| Greek (el) | English (en-us)|
+| Hungarian (hu) | English (en-us)|
+| Icelandic (is) | English (en-us)|
+| Indonesian (id) | English (en-us)|
+| Inuktitut (iu) | English (en-us)|
+| Irish (ga) | English (en-us)|
+| Italian (it) | English (en-us)|
+| Japanese (ja) | English (en-us)|
+| Kazakh (kk) | English (en-us)|
+| Korean (ko) | English (en-us)|
+| Lithuanian (lt) | English (en-us)|
+| Malagasy (mg) | English (en-us)|
+| Maori (mi) | English (en-us)|
+| Norwegian (nb) | English (en-us)|
+| Persian (fa) | English (en-us)|
+| Polish (pl) | English (en-us)|
+| Romanian (ro) | English (en-us)|
+| Russian (ru) | English (en-us)|
+| Slovak (sk) | English (en-us)|
+| Spanish (es) | English (en-us)|
+| Swedish (sv) | English (en-us)|
+| Tahitian (ty) | English (en-us)|
+| Thai (th) | English (en-us)|
+| Tongan (to) | English (en-us)|
+| Turkish (tr) | English (en-us)|
+| Vietnamese (vi) | English (en-us)|
+| Welsh (cy) | English (en-us)|
cognitive-services Sentence Alignment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/sentence-alignment.md
For a training to succeed, the table below shows the minimum number of sentences
> - Training will not start and will fail if the 10,000 minimum sentence count for Training is not met.
> - Tuning and Testing are optional. If you do not provide them, the system will remove an appropriate percentage from Training to use for validation and testing.
> - You can train a model using only dictionary data. Please refer to [What is Dictionary](./what-is-dictionary.md).
-> - If your dictionary contains more than 250,000 sentences, **[Document Translator](../document-translation/overview.md)** is likely a better choice.
+> - If your dictionary contains more than 250,000 sentences, our Document Translator is a better choice. Please refer to [Document Translator](https://docs.microsoft.com/azure/cognitive-services/translator/document-translation/overview).
+> - Free (F0) subscription training has a maximum limit of 2,000,000 characters.
## Next steps

-- Learn how to use a [dictionary](what-is-dictionary.md) in Custom Translator.
+- Learn how to use a [dictionary](what-is-dictionary.md) in Custom Translator.
cognitive-services Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/frequently-asked-questions.md
+
+ Title: Frequently asked questions - Personalizer
+description: This article contains answers to frequently asked troubleshooting questions about Personalizer.
+Last updated : 02/26/2020
+
+# Personalizer frequently asked questions
+
+This article contains answers to frequently asked troubleshooting questions about the Personalizer service.
+
+## Configuration issues
+
+### I changed a configuration setting and now my loop isn't performing at the same learning level. What happened?
+
+Some configuration settings [reset your model](how-to-settings.md#settings-that-include-resetting-the-model). Configuration changes should be carefully planned.
+
+### When configuring Personalizer with the API, I received an error. What happened?
+
+If you use a single API request to configure your service and change your learning behavior, you will get an error. You need to make two separate API calls: first, to configure your service, then to switch learning behavior.
+
+## Transaction errors
+
+### I get an HTTP 429 (Too many requests) response from the service. What can I do?
+
+If you picked a free price tier when you created the Personalizer instance, there is a quota limit on the number of Rank requests that are allowed. Review your API call rate for the Rank API (in the Metrics pane in the Azure portal for your Personalizer resource) and adjust the pricing tier (in the Pricing Tier pane) if your call volume is expected to increase beyond the threshold for the chosen pricing tier.
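Until you move to a higher tier, a client-side backoff can smooth over occasional throttling. The following is a minimal sketch, not part of the Personalizer SDK; the transport is stubbed, and in a real app `send_request` would POST to your resource's Rank endpoint:

```python
import time

def call_rank_with_retry(send_request, max_retries=3, base_delay=1.0):
    """Call the Rank API, backing off exponentially on HTTP 429.

    `send_request` is any zero-argument callable returning a
    (status_code, body) tuple -- stubbed here for illustration.
    """
    for attempt in range(max_retries + 1):
        status, body = send_request()
        if status != 429:
            return status, body
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body

# Stubbed transport: throttled twice, then succeeds.
responses = iter([(429, ""), (429, ""), (200, '{"rewardActionId": "a1"}')])
status, body = call_rank_with_retry(lambda: next(responses), base_delay=0.01)
```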
+
+### I'm getting a 5xx error on Rank or Reward APIs. What should I do?
+
+These issues are typically transient. If they continue, contact support by selecting **New support request** in the **Support + troubleshooting** section, in the Azure portal for your Personalizer resource.
+
+## Learning loop
+
+### The learning loop doesn't attain a 100% match to the system without Personalizer. How do I fix this?
+
+Possible reasons you don't attain your goal with the learning loop:
+* Not enough features sent with the Rank API call
+* Bugs in the features sent - for example, sending non-aggregated data, such as timestamps, to the Rank API
+* Bugs with loop processing - for example, not sending reward data to the Reward API for events
+
+To fix this, change the processing by either changing the features sent to the loop, or by making sure the reward correctly evaluates the quality of the Rank response.
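For example, rather than sending a raw timestamp as a feature, send an aggregated value such as the time of day. A hypothetical illustration (the bucket names and the `timeOfDay` feature name are arbitrary choices, not a Personalizer requirement):

```python
def time_of_day(hour):
    """Aggregate an hour (0-23) into a coarse feature value.

    Raw timestamps are effectively unique, so the model cannot
    generalize from them; a coarse bucket is learnable.
    """
    if 5 <= hour < 12:
        return "morning"
    if 12 <= hour < 17:
        return "afternoon"
    if 17 <= hour < 22:
        return "evening"
    return "night"

# Send {"timeOfDay": "morning"} with the Rank call, not the timestamp.
context_features = [{"timeOfDay": time_of_day(9)}]
```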
+
+### The learning loop doesn't seem to learn. How do I fix this?
+
+The learning loop needs a few thousand Reward calls before Rank calls prioritize effectively.
+
+If you are unsure about how your learning loop is currently behaving, run an [offline evaluation](concepts-offline-evaluation.md), and apply the corrected learning policy.
+
+### I keep getting rank results with all the same probabilities for all items. How do I know Personalizer is learning?
+
+Personalizer returns the same probabilities in a Rank API result when it has just started and has an _empty_ model, or when you reset the Personalizer Loop, and your model is still within your **Model update frequency** period.
+
+When the new update period begins, the updated model is used, and you'll see the probabilities change.
+
+### The learning loop was learning but seems to not learn anymore, and the quality of the Rank results isn't that good. What should I do?
+
+* Make sure you've completed and applied one evaluation in the Azure portal for that Personalizer resource (learning loop).
+* Make sure all rewards are sent, via the Reward API, and processed.
+
+### How do I know that the learning loop is getting updated regularly and is used to score my data?
+
+You can find the time when the model was last updated in the **Model and Learning Settings** page of the Azure portal. If you see an old timestamp, it is likely because you are not sending the Rank and Reward calls. If the service has no incoming data, it does not update the learning. If you see the learning loop is not updating frequently enough, you can edit the loop's **Model Update frequency**.
+
+## Offline evaluations
+
+### An offline evaluation's feature importance returns a long list with hundreds or thousands of items. What happened?
+
+This is typically caused by timestamps, user IDs, or some other fine-grained features being sent in.
+
+### I created an offline evaluation and it succeeded almost instantly. Why is that, and why don't I see any results?
+
+The offline evaluation uses the trained model data from the events in that time period. If you did not send any data in the time period between start and end time of the evaluation, it will complete without any results. Submit a new offline evaluation by selecting a time range with events you know were sent to Personalizer.
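To pick a window that actually contains events, you can derive it from the timestamps you logged when sending Rank calls. A sketch, assuming you keep such a log (the log format here is hypothetical):

```python
from datetime import datetime

def evaluation_window(event_timestamps):
    """Return a (start, end) range covering all logged Rank events."""
    if not event_timestamps:
        raise ValueError("no events logged; the evaluation would be empty")
    return min(event_timestamps), max(event_timestamps)

# Timestamps recorded when each Rank call was made (illustrative data).
events = [
    datetime(2020, 2, 20, 9, 30),
    datetime(2020, 2, 21, 14, 5),
    datetime(2020, 2, 19, 8, 0),
]
start, end = evaluation_window(events)
```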
+
+## Learning policy
+
+### How do I import a learning policy?
+
+Learn more about [learning policy concepts](concept-active-learning.md#understand-learning-policy-settings) and [how to apply](how-to-manage-model.md) a new learning policy. If you do not want to select a learning policy, you can use the [offline evaluation](how-to-offline-evaluation.md) to suggest a learning policy, based on your current events.
+## Security
+
+### The API key for my loop has been compromised. What can I do?
+
+You can regenerate one key after swapping your clients to use the other key. Having two keys lets you roll the key over gradually, without any downtime. We recommend doing this on a regular cycle as a security measure.
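The swap-then-regenerate cycle can be sketched as follows (the client class and `regenerate` callback are hypothetical; in practice you would update your app's configuration and regenerate the key on your Azure resource):

```python
class PersonalizerClient:
    """Hypothetical client holding one of the resource's two API keys."""
    def __init__(self, api_key):
        self.api_key = api_key

def rotate(clients, new_active_key, regenerate):
    """Move all clients to the other key, then regenerate the old one.

    Because every client is on the other key before the old key is
    invalidated, no client ever holds a dead key: no downtime.
    """
    for c in clients:
        c.api_key = new_active_key
    return regenerate("key1")  # returns the fresh replacement for key 1

clients = [PersonalizerClient("old-key-1") for _ in range(3)]
new_key1 = rotate(clients, "key-2", lambda name: "new-key-1")
```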
+## Next steps
+
+[Configure the model update frequency](how-to-settings.md#model-update-frequency)
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
cognitive-services Text Analytics Resource External Community https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/text-analytics-resource-external-community.md
+ [Using Text Analytics Key Phrase Cognitive Services API from PowerShell (AutomationNext blog)](https://automationnext.wordpress.com/tag/text-analytics/)
-+ [R Quick tip: Microsoft Cognitive Services’ Text Analytics API (R Bloggers)](https://www.r-bloggers.com/r-quick-tip-microsoft-cognitive-services-text-analytics-api/)
++ [R Quick tip: Azure Cognitive Services’ Text Analytics API (R Bloggers)](https://www.r-bloggers.com/r-quick-tip-microsoft-cognitive-services-text-analytics-api/)
+ [Sentiment analysis in Logic App using SQL Server data (TechNet blog)](https://social.technet.microsoft.com/wiki/contents/articles/36074.logic-apps-with-azure-cognitive-service.aspx)
+ [Logic App to detect sentiment and extract key phrases from your text](https://www.youtube.com/watch?v=jVN9NObAzgk)
-+ [Sentiment Analysis using Power BI and Microsoft Cognitive Services](https://www.youtube.com/watch?v=gJ1j3N7Y75k)
++ [Sentiment Analysis using Power BI and Azure Cognitive Services](https://www.youtube.com/watch?v=gJ1j3N7Y75k)
-+ [Text analytics extract key phrases using Power BI and Microsoft Cognitive Services](https://www.youtube.com/watch?v=R_-1TB2BF14)
++ [Text analytics extract key phrases using Power BI and Azure Cognitive Services](https://www.youtube.com/watch?v=R_-1TB2BF14)

## Next steps
-Are you looking for information about a feature or use-case that we don't cover? Consider requesting or voting for it on [UserVoice](https://cognitive.uservoice.com/forums/555922-text-analytics).
+Are you looking for information about a feature or use-case that we don't cover? Consider requesting or voting for it using the [feedback tool](https://feedback.azure.com/forums/932041-azure-cognitive-services?category_id=395749).
## See also

[StackOverflow: Azure Text Analytics API](https://stackoverflow.com/questions/tagged/text-analytics-api)
- [StackOverflow: Azure Cognitive Services](https://stackoverflow.com/questions/tagged/microsoft-cognitive)
+ [StackOverflow: Azure Cognitive Services](https://stackoverflow.com/questions/tagged/microsoft-cognitive)
cognitive-services Text Analytics Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/text-analytics-resource-faq.md
No, the models are pretrained. The only operations available on uploaded data ar
Sentiment analysis and key phrase extraction are available for a [select number of languages](./language-support.md). Natural language processing is complex and requires substantial testing before new functionality can be released. For this reason, we avoid pre-announcing support so that no one takes a dependency on functionality that needs more time to mature.
-To help us prioritize which languages to work on next, vote for specific languages on [User Voice](https://cognitive.uservoice.com/forums/555922-text-analytics).
+To help us prioritize which languages to work on next, vote for specific languages using the [feedback tool](https://feedback.azure.com/forums/932041-azure-cognitive-services?category_id=395749).
## Why does key phrase extraction return some words but not others?
No customer configuration is necessary to enable zone-resiliency. Zone-resilienc
## Next steps
-Is your question about a missing feature or functionality? Consider requesting or voting for it on our [UserVoice web site](https://cognitive.uservoice.com/forums/555922-text-analytics).
+Is your question about a missing feature or functionality? Consider requesting or voting for it using the [feedback tool](https://feedback.azure.com/forums/932041-azure-cognitive-services?category_id=395749).
## See also
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-mq.md
Title: Connect to IBM MQ server
-description: Send and retrieve messages with an Azure or on-premises IBM MQ server and Azure Logic Apps
+description: Connect to an MQ server on premises or in Azure from a workflow using Azure Logic Apps.
ms.suite: integration Previously updated : 03/10/2021 Last updated : 04/26/2021 tags: connectors
-# Connect to an IBM MQ server from Azure Logic Apps
+# Connect to an IBM MQ server from a workflow in Azure Logic Apps
-The MQ connector sends and retrieves messages stored in an MQ server on premises or in Azure. This connector includes a Microsoft MQ client that communicates with a remote IBM MQ server across a TCP/IP network. This article provides a starter guide to use the MQ connector. You can start by browsing a single message on a queue and then try other actions.
+The MQ connector helps you connect your logic app workflows to an IBM MQ server that's either on premises or in Azure. Your workflows can then receive and send messages stored in your MQ server. This article provides a getting-started guide to the MQ connector, showing how to connect to your MQ server and add an MQ action to your workflow. For example, you can start by browsing a single message in a queue and then try other actions.
-The MQ connector includes these actions but provides no triggers:
+This connector includes a Microsoft MQ client that communicates with a remote MQ server across a TCP/IP network. You can connect to the following IBM WebSphere MQ versions:
-- Browse a single message without deleting the message from the MQ server.-- Browse a batch of messages without deleting the messages from the MQ server.-- Receive a single message and delete the message from the MQ server.-- Receive a batch of messages and delete the messages from the MQ server.-- Send a single message to the MQ server.
+* MQ 7.5
+* MQ 8.0
+* MQ 9.0, 9.1, and 9.2
-Here are the officially supported IBM WebSphere MQ versions:
+<a name="available-operations"></a>
- * MQ 7.5
- * MQ 8.0
- * MQ 9.0
- * MQ 9.1
+## Available operations
-## Prerequisites
-
-* If you use an on-premises MQ server, you need to [install the on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a server within your network.
+The IBM MQ connector provides actions but no triggers.
- > [!NOTE]
- > If your MQ server is publicly available or available within Azure, you don't have to use the data gateway.
+* Multi-tenant Azure Logic Apps: When you create a consumption-based logic app workflow, you can connect to an MQ server by using the *managed* MQ connector.
- * For the MQ connector to work, the server where you install the on-premises data gateway also needs to have .NET Framework 4.6 installed.
-
- * After you install the on-premises data gateway, you also need to [create an Azure gateway resource for the on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md) that the MQ connector uses to access your on-premises MQ server.
+* Single-tenant Azure Logic Apps (preview): When you create a preview logic app workflow, you can connect to an MQ server by using either the managed MQ connector or the *built-in* MQ operations (preview).
-* The logic app where you want to use the MQ connector. The MQ connector doesn't have any triggers, so you must add a trigger to your logic app first. For example, you can use the [Recurrence trigger](../connectors/connectors-native-recurrence.md). If you're new to logic apps, try this [quickstart to create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+For more information about the difference between a managed connector and built-in operations, review [key terms in Logic Apps](../logic-apps/logic-apps-overview.md#logic-app-concepts).
-## Limitations
+#### [Managed](#tab/managed)
-The MQ connector doesn't support or use the message's **Format** field and doesn't perform any character set conversions. The connector only puts whatever data appears in the message field into a JSON message and sends the message along.
+The following list describes only some of the managed operations available for MQ:
-<a name="create-connection"></a>
+* Browse a single message or an array of messages without deleting from the MQ server. For multiple messages, you can specify the maximum number of messages to return per batch. Otherwise, all messages are returned.
+* Delete a single message or an array of messages from the MQ server.
+* Receive a single message or an array of messages and then delete from the MQ server.
+* Send a single message to the MQ server.
-## Create MQ connection
+For all the managed connector operations and other technical information, such as properties, limits, and so on, review the [MQ connector's reference page](/connectors/mq/).
-If you don't already have an MQ connection when you add an MQ action, you're prompted to create the connection, for example:
+#### [Built-in (preview)](#tab/built-in)
-![Provide connection information](media/connectors-create-api-mq/connection-properties.png)
+The following list describes only some of the built-in operations available for MQ:
-1. If you're connecting to an on-premises MQ server, select **Connect via on-premises data gateway**.
+* Receive a single message or an array of messages from the MQ server. For multiple messages, you can specify the maximum number of messages to return per batch and the maximum batch size in KB.
+* Send a single message or an array of messages to the MQ server.
-1. Provide the connection information for your MQ server.
+These built-in MQ operations also have the following capabilities, plus all the other benefits of logic apps in the [single-tenant Logic Apps service](../logic-apps/logic-apps-overview-preview.md):
- * For **Server**, you can enter the MQ server name, or enter the IP address followed by a colon and the port number.
+* Transport Layer Security (TLS) encryption for data in transit
+* Message encoding for both the send and receive operations
+* Support for Azure virtual network integration when your logic app uses the Azure Functions Premium plan
- * To use Transport Layer Security (TLS) or Secure Sockets Layer (SSL), select **Enable SSL?**.
+
- The MQ connector currently supports only server authentication, not client authentication. For more information, see [Connection and authentication problems](#connection-problems).
+## Limitations
-1. In the **gateway** section, follow these steps:
+The MQ connector doesn't use the message's **Format** field and doesn't make any character set conversions. The connector only puts whatever data appears in the message field into a JSON message and sends the message along.
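In other words, the connector treats the message body as opaque content and simply wraps it. A rough illustration of that behavior (the envelope field name is an assumption for illustration, not the connector's actual schema):

```python
import json

def wrap_message(raw_body):
    """Pass the MQ message body through untouched inside a JSON envelope.

    No character-set conversion and no interpretation of the message's
    Format field happens here -- whatever content arrived is forwarded.
    """
    return json.dumps({"messageData": raw_body})

envelope = wrap_message("hello from MQ")
```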
- 1. From the **Subscription** list, select the Azure subscription that's associated with your Azure gateway resource.
+## Prerequisites
- 1. From the **Connection Gateway** list, select the Azure gateway resource that you want to use.
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-1. When you're done, select **Create**.
+* If you're using an on-premises MQ server, [install the on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a server within your network. For the MQ connector to work, the server with the on-premises data gateway also must have .NET Framework 4.6 installed.
-<a name="connection-problems"></a>
+ After you install the gateway, you must also create a data gateway resource in Azure. The MQ connector uses this resource to access your MQ server. For more information, review [Set up the data gateway connection](../logic-apps/logic-apps-gateway-connection.md).
-### Connection and authentication problems
+ > [!NOTE]
+ > You don't need the gateway in the following scenarios:
+ >
+ > * You're going to use the built-in operations, not the managed connector.
+ > * Your MQ server is publicly available or available in Azure.
-When your logic app tries connecting to your on-premises MQ server, you might get this error:
+* The logic app workflow where you want to access your MQ server. Your logic app resource must have the same location as your gateway resource in Azure.
-`"MQ: Could not Connect the Queue Manager '<queue-manager-name>': The Server was expecting an SSL connection."`
+ The MQ connector doesn't have any triggers, so either your workflow must already start with a trigger, or you first have to add a trigger to your workflow. For example, you can use the [Recurrence trigger](../connectors/connectors-native-recurrence.md).
-* If you're using the MQ connector directly in Azure, the MQ server needs to use a certificate that's issued by a trusted [certificate authority](https://www.ssl.com/faqs/what-is-a-certificate-authority/).
+ If you're new to Azure Logic Apps, try this [quickstart to create an example logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md), which runs in the multi-tenant Logic Apps service.
-* If you're using the on-premises data gateway, try to use a certificate that's issued by a trusted [certificate authority](https://www.ssl.com/faqs/what-is-a-certificate-authority/) when possible. However, if this option isn't possible, you could use a self-signed certificate, which isn't issued by a trusted [certificate authority](https://www.ssl.com/faqs/what-is-a-certificate-authority/) and is considered less secure.
+<a name="create-connection"></a>
- To install the server's self-signed certificate, you can use the **Windows Certification Manager** (certmgr.msc) tool. For this scenario, on your local computer where the on-premises data gateway service is running, you need to install the certificate in your **Local Computer** certificates store at the **Trusted Root Certification Authorities** level.
+## Create an MQ connection
- 1. On the computer where the on-premises-data gateway service is running, open the start menu, find and select **Manage user certificates**.
+When you add an MQ action for the first time, you're prompted to create a connection to your MQ server.
- 1. After the Windows Certification Manager tool opens, go to the **Certificates - Local Computer** > **Trusted Root Certification Authorities** folder, and install the certificate.
+> [!NOTE]
+> The MQ connector currently supports only server authentication, not client authentication.
+> For more information, see [Connection and authentication problems](#connection-problems).
- > [!IMPORTANT]
- > Make sure that you install certificate in the **Certificates - Local Computer** > **Trusted Root Certification Authorities** store.
+#### [Managed](#tab/managed)
-* The MQ server requires that you define the cipher specification that you want to use for TLS/SSL connections. However, SslStream in .NET doesn't permit you to specify the order for cipher specifications. To work around this limitation, you can change your MQ server configuration to match the first cipher specification in the suite that the connector sends in the TLS/SSL negotiation.
+1. If you're connecting to an on-premises MQ server, select **Connect via on-premises data gateway**.
- When you try the connection, the MQ server logs an event message that indicates the connection failed because the other end used the incorrect cipher specification. The event message contains the cipher specification that appears first in the list. Update the cipher specification in the channel configuration to match the cipher specification in the event message.
+1. Provide the connection information for your MQ server.
-## Browse single message
+ | Property | On-premises or Azure | Description |
+ |-|-|-|
+ | **Gateways** | On-premises only | Select **Connect via on-premises data gateway**. |
+ | **Connection name** | Both | The name to use for your connection |
+ | **Server** | Both | Either of the following values: <p><p>- MQ server host name <br>- IP address followed by a colon and the port number |
+ | **Queue Manager name** | Both | The Queue Manager that you want to use |
+ | **Channel name** | Both | The channel for connecting to the Queue Manager |
+ | **Default queue name** | Both | The default name for the queue |
+ | **Connect As** | Both | The username for connecting to the MQ server |
+ | **Username** | Both | Your username credential |
+ | **Password** | Both | Your password credential |
+ | **Enable SSL?** | On-premises only | Use Transport Layer Security (TLS) or Secure Sockets Layer (SSL) |
+ | **Gateway - Subscription** | On-premises only | The Azure subscription associated with your gateway resource in Azure |
+ | **Gateway - Connection Gateway** | On-premises only | The gateway resource to use |
+ ||||
-1. In your logic app, under the trigger or another action, select **New step**.
+ For example:
-1. In the search box, enter `mq`, and select the **Browse message** action.
+ ![Screenshot showing the managed MQ connection details.](media/connectors-create-api-mq/managed-connection-properties.png)
- ![Select "Browse message" action](media/connectors-create-api-mq/browse-message.png)
+1. When you're done, select **Create**.
-1. If you haven't already created an MQ connection, you're prompted to [create that connection](#create-connection).
+#### [Built-in (preview)](#tab/built-in)
-1. After you create the connection, set up the **Browse message** action's properties:
+1. Provide the connection information for your MQ server.
- | Property | Description |
- |-|-|
- | **Queue** | If different from the queue specified in the connection, specify that queue. |
- | **MessageId**, **CorrelationId**, **GroupId**, and other properties | Browse for a message that's based on the different MQ message properties |
- | **IncludeInfo** | To include additional message information in the output, select **true**. To omit additional message information in the output, select **false**. |
- | **Timeout** | Enter a value to determine how long to wait for a message to arrive in an empty queue. If nothing is entered, the first message in the queue is retrieved, and there is no time spent waiting for a message to appear. |
- |||
+ | Property | On-premises or Azure | Description |
+ |-|-|-|
+ | **Connection name** | Both | The name to use for your connection |
+ | **Server name** | Both | The MQ server name or IP address |
+ | **Port number** | Both | The TCP port number for connecting to the Queue Manager on the host |
+ | **Channel** | Both | The channel for connecting to the Queue Manager |
+ | **Queue Manager name** | Both | The Queue Manager that you want to use |
+ | **Default queue name** | Both | The default name for the queue |
+ | **Connect As** | Both | The username for connecting to the MQ server |
+ | **Username** | Both | Your username credential |
+ | **Password** | Both | Your password credential |
+ | **Use TLS** | Both | Use Transport Layer Security (TLS) |
+ ||||
For example:
- ![Properties for "Browse message" action](media/connectors-create-api-mq/browse-message-properties.png)
+ ![Screenshot showing the built-in MQ connection details.](media/connectors-create-api-mq/built-in-connection-properties.png)
-1. When you're done, on the designer toolbar, select **Save**. To test your app, select **Run**.
+1. When you're done, select **Create**.
- After the run finishes, the designer shows the workflow steps and their status so that you can review the output.
+
-1. To view the details about each step, click the step's title bar. To review more information about a step's output, select **Show raw outputs**.
+<a name="add-action"></a>
- ![Browse message output](media/connectors-create-api-mq/browse-message-output.png)
+## Add an MQ action
- Here is some sample raw output:
+In Azure Logic Apps, an action follows the trigger or another action and performs some operation in your workflow. The following steps describe the general way to add an action, for example, **Browse a single message**.
- ![Browse message raw output](media/connectors-create-api-mq/browse-message-raw-output.png)
+1. In the Logic Apps Designer, open your workflow, if not already open.
-1. If you set **IncludeInfo** to **true**, additional output is shown:
+1. Under the trigger or another action, add a new step.
- ![Browse message include info](media/connectors-create-api-mq/browse-message-include-info.png)
+ To add a step between existing steps, move your mouse over the arrow. Select the plus sign (+) that appears, and then select **Add an action**.
-## Browse multiple messages
+1. In the operation search box, enter `mq`. From the actions list, select the action named **Browse message**.
-The **Browse messages** action includes a **BatchSize** option to indicate how many messages to return from the queue. If **BatchSize** has no value, all messages are returned. The returned output is an array of messages.
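The batching behavior can be pictured like this (a sketch only; the real action returns message objects from your queue, not plain strings):

```python
def browse_messages(queue, batch_size=None):
    """Return up to batch_size messages; all of them when unset."""
    if batch_size is None:
        return list(queue)           # no BatchSize: everything comes back
    return list(queue)[:batch_size]  # BatchSize set: at most that many

queue = ["msg-1", "msg-2", "msg-3"]
browse_messages(queue)                # all three messages
browse_messages(queue, batch_size=2)  # ["msg-1", "msg-2"]
```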
+1. If you're prompted to create a connection to your MQ server, [provide the requested connection information](#create-connection).
-1. Follow the previous steps, but add the **Browse messages** action instead.
+1. In the action, provide the property values that the action needs.
-1. If you haven't already created an MQ connection, you're prompted to [create that connection](#create-connection). Otherwise, by default, the first previously configured connection is used. To create a new connection, select **Change connection**. Or, select a different connection.
+ For more properties, open the **Add new parameter** list, and select the properties that you want to add.
-1. Provide the information for the action.
+1. When you're done, on the designer toolbar, select **Save**.
-1. Save and run the logic app.
+1. To test your workflow, on the designer toolbar, select **Run**.
- After the logic app finishes running, here is some sample output from the **Browse messages** action:
+ After the run finishes, the designer shows the workflow's run history along with the status for each step.
- ![Sample "Browse messages" output](media/connectors-create-api-mq/browse-messages-output.png)
+1. To review the inputs and outputs for each step that ran (not skipped), expand or select the step.
-## Receive single message
+ * To review more input details, select **Show raw inputs**.
+ * To review more output details, select **Show raw outputs**. If you set **IncludeInfo** to **true**, more output is included.
-The **Receive message** action has the same inputs and outputs as the **Browse message** action. When you use **Receive message**, the message is deleted from the queue.
+## Troubleshoot problems
-## Receive multiple messages
+### Failures with browse or receive actions
-The **Receive messages** action has the same inputs and outputs as the **Browse messages** action. When you use **Receive messages**, the messages are deleted from the queue.
+If you run a browse or receive action on an empty queue, the action fails with the following header outputs:
-> [!NOTE]
-> When running a browse or a receive action on a queue that doesn't have any messages,
-> the action fails with this output:
->
-> ![MQ "no message" error](media/connectors-create-api-mq/mq-no-message-error.png)
+![MQ "no message" error](media/connectors-create-api-mq/mq-no-message-error.png)
-## Send message
+<a name="connection-problems"></a>
-1. Follow the previous steps, but add the **Send message** action instead.
+### Connection and authentication problems
-1. If you haven't already created an MQ connection, you're prompted to [create that connection](#create-connection). Otherwise, by default, the first previously configured connection is used. To create a new connection, select **Change connection**. Or, select a different connection.
+When your workflow tries connecting to your on-premises MQ server, you might get this error:
-1. Provide the information for the action. For **MessageType**, select a valid message type: **Datagram**, **Reply**, or **Request**
+`"MQ: Could not Connect the Queue Manager '<queue-manager-name>': The Server was expecting an SSL connection."`
- ![Properties for "Send message action"](media/connectors-create-api-mq/send-message-properties.png)
+* If you're using the MQ connector directly in Azure, the MQ server needs to use a certificate that's issued by a trusted [certificate authority](https://www.ssl.com/faqs/what-is-a-certificate-authority/).
-1. Save and run the logic app.
+* The MQ server requires that you define the cipher specification to use with TLS connections. However, for security purposes and to include the best security suites, the Windows operating system sends a set of supported cipher specifications.
- After the logic app finishes running, here is some sample output from the **Send message** action:
+ The operating system where the MQ server runs chooses the suites to use. To make the configuration match, you have to change your MQ server setup so that the cipher specification matches the option chosen in the TLS negotiation.
- ![Sample "Send message" output](media/connectors-create-api-mq/send-message-output.png)
+ When you try to connect, the MQ server logs an event message that the connection attempt failed because the MQ server chose the incorrect cipher specification. The event message contains the cipher specification that the MQ server chose from the list. In the channel configuration, update the cipher specification to match the cipher specification in the event message.
## Connector reference
-For technical details, such as actions and limits, which are described in the connector's Swagger file, review the [connector's reference page](/connectors/mq/).
+For all the operations in the managed connector and other technical information, such as properties, limits, and so on, review the [MQ connector's reference page](/connectors/mq/).
## Next steps
container-instances Container Instances Liveness Probe https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-liveness-probe.md
Azure Container Instances also supports [readiness probes](container-instances-r
## YAML deployment
-Create a `liveness-probe.yaml` file with the following snippet. This file defines a container group that consists of an NGNIX container that eventually becomes unhealthy.
+Create a `liveness-probe.yaml` file with the following snippet. This file defines a container group that consists of an NGINX container that eventually becomes unhealthy.
```yaml
apiVersion: 2019-12-01
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/policy-reference.md
Title: Built-in policy definitions for Azure Container Instances description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
cosmos-db Cassandra Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-faq.md
- Title: Frequently asked questions about the Cassandra API for Azure Cosmos DB
-description: Get answers to frequently asked questions about the Cassandra API for Azure Cosmos DB.
---- Previously updated : 08/12/2020--
-# Frequently asked questions about the Cassandra API in Azure Cosmos DB
-
-This article describes the functionality differences between Apache Cassandra and Cassandra API in Azure Cosmos DB. It also provides answers to frequently asked questions about the Cassandra API in Azure Cosmos DB.
-
-## Key differences between Apache Cassandra and the Cassandra API
-
-- Apache Cassandra recommends a 100-MB limit on the size of a partition key. The Cassandra API for Azure Cosmos DB allows up to 20 GB per partition.
-- Apache Cassandra allows you to disable durable commits. You can skip writing to the commit log and go directly to the memtables. This can lead to data loss if the node goes down before memtables are flushed to SSTables on disk. Azure Cosmos DB always does durable commits to help prevent data loss.
-- Apache Cassandra can see diminished performance if the workload involves many replacements or deletions. The reason is tombstones that the read workload needs to skip over to fetch the latest data. The Cassandra API won't see diminished read performance when the workload has many replacements or deletions.
-- During scenarios of high replacement workloads, compaction needs to run to merge SSTables on disk. (A merge is needed because Apache Cassandra's writes are append only. Multiple updates are stored as individual SSTable entries that need to be periodically merged). This situation can also lead to lowered read performance during compaction. This performance impact doesn't happen in the Cassandra API because the API doesn't implement compaction.
-- Setting a replication factor of 1 is possible with Apache Cassandra. However, it leads to low availability if the only node with the data goes down. This is not an issue with the Cassandra API for Azure Cosmos DB because there is always a replication factor of 4 (quorum of 3).
-- Adding or removing nodes in Apache Cassandra requires manual intervention, along with high CPU usage on the new node while existing nodes move some of their token ranges to the new node. This situation is the same when you're decommissioning an existing node. However, the Cassandra API scales out without any issues observed in the service or application.
-- There is no need to set **num_tokens** on each node in the cluster as in Apache Cassandra. Azure Cosmos DB fully manages nodes and token ranges.
-- The Cassandra API is fully managed. You don't need the **nodetool** commands, such as repair and decommission, that are used in Apache Cassandra.
-
-## Other frequently asked questions
-
-### What protocol version does the Cassandra API support?
-
-The Cassandra API for Azure Cosmos DB supports CQL version 3.x. Its CQL compatibility is based on the public [Apache Cassandra GitHub repository](https://github.com/apache/cassandra/blob/trunk/doc/cql3/CQL.textile). If you have feedback about supporting other protocols, let us know via [user voice feedback](https://feedback.azure.com/forums/263030-azure-cosmos-db) or send email to [askcosmosdbcassandra@microsoft.com](mailto:askcosmosdbcassandra@microsoft.com).
-
-### Why is choosing throughput for a table a requirement?
-
-Azure Cosmos DB sets the default throughput for your container based on where you create the table from: Azure portal or CQL.
-
-Azure Cosmos DB provides guarantees for performance and latency, with upper bounds on operations. These guarantees are possible when the engine can enforce governance on the tenant's operations. Setting throughput ensures that you get the guaranteed throughput and latency, because the platform reserves this capacity and guarantees operation success.
-You can [elastically change throughput](manage-scale-cassandra.md) to benefit from the seasonality of your application and save costs.
-
-The throughput concept is explained in the [Request Units in Azure Cosmos DB](request-units.md) article. The throughput for a table is equally distributed across the underlying physical partitions.
-
-### What is the throughput of a table that's created through CQL?
-
-Azure Cosmos DB uses Request Units per second (RU/s) as a currency for providing throughput. Tables created through CQL have 400 RU by default. You can change the RU from the Azure portal.
-
-CQL
-
-```shell
-CREATE TABLE keyspaceName.tablename (user_id int PRIMARY KEY, lastname text) WITH cosmosdb_provisioned_throughput=1200
-```
-
-.NET
-
-```csharp
-int provisionedThroughput = 400;
-var simpleStatement = new SimpleStatement($"CREATE TABLE {keyspaceName}.{tableName} (user_id int PRIMARY KEY, lastname text)");
-var outgoingPayload = new Dictionary<string, byte[]>();
-outgoingPayload["cosmosdb_provisioned_throughput"] = Encoding.UTF8.GetBytes(provisionedThroughput.ToString());
-simpleStatement.SetOutgoingPayload(outgoingPayload);
-```
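The throughput set at creation time isn't fixed; the elastic-scaling article linked above documents an `ALTER` pattern for changing it later. A sketch follows, where the keyspace, table name, and RU/s value are illustrative:

```sql
-- Hypothetical example: raise the table's provisioned throughput to 5000 RU/s.
ALTER TABLE keyspaceName.tablename WITH cosmosdb_provisioned_throughput=5000;
```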
-
-### What happens when throughput is used up?
-
-Azure Cosmos DB provides guarantees for performance and latency, with upper bounds on operations. These guarantees are possible when the engine can enforce governance on the tenant's operations. Setting throughput ensures that you get the guaranteed throughput and latency, because the platform reserves this capacity and guarantees operation success.
-
-When you go over this capacity, you get the following error message that indicates your capacity was used up:
-
-**0x1001 Overloaded: the request can't be processed because "Request Rate is large"**
-
-It's essential to see what operations (and their volume) cause this issue. You can get an idea about consumed capacity going over the provisioned capacity with metrics on the Azure portal. Then you need to ensure that capacity is consumed nearly equally across all underlying partitions. If you see that one partition is consuming most of the throughput, you have skew of workload.
-
-Metrics are available that show you how throughput is used over hours, over days, and per seven days, across partitions or in aggregate. For more information, see [Monitoring and debugging with metrics in Azure Cosmos DB](use-metrics.md).
-
-Diagnostic logs are explained in the [Azure Cosmos DB diagnostic logging](./monitor-cosmos-db.md) article.
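When the overloaded error is only occasional or bursty, client code typically retries with exponential backoff before resorting to raising provisioned throughput. A generic, driver-agnostic sketch, where `OverloadedError` is a stand-in for the driver's actual 0x1001 Overloaded exception type:

```python
import random
import time

class OverloadedError(Exception):
    """Stand-in for the driver's 0x1001 Overloaded ("Request Rate is large") error."""

def execute_with_backoff(operation, max_retries=5, base_delay=0.05):
    """Retry a rate-limited operation with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return operation()
        except OverloadedError:
            # Wait 2^attempt * base_delay, plus random jitter, before retrying.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
    return operation()  # last attempt; let any error propagate to the caller
```

If the error persists after backing off, that's the signal to inspect the portal metrics for partition skew or to increase the provisioned RU/s.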
-
-### Does the primary key map to the partition key concept of Azure Cosmos DB?
-
-Yes, the partition key is used to place the entity in the right location. In Azure Cosmos DB, it's used to find the right logical partition that's stored on a physical partition. The partitioning concept is well explained in the [Partition and scale in Azure Cosmos DB](partitioning-overview.md) article. The essential takeaway here is that a logical partition shouldn't go over the 20-GB limit.
-
-### What happens when I get a notification that a partition is full?
-
-Azure Cosmos DB is a system based on service-level agreement (SLA). It provides unlimited scale, with guarantees for latency, throughput, availability, and consistency. This unlimited storage is based on horizontal scale-out of data, using partitioning as the key concept. The partitioning concept is well explained in the [Partition and scale in Azure Cosmos DB](partitioning-overview.md) article.
-
-You should adhere to the 20-GB limit on the number of entities or items per logical partition. To ensure that your application scales well, we recommend that you *not* create a hot partition by storing all information in one partition and querying it. This error can come only if your data is skewed: that is, you have lot of data for one partition key (more than 20 GB). You can find the distribution of data by using the storage portal. The way to fix this error is to re-create the table and choose a granular primary (partition key), which allows better distribution of data.
-
-### Can I use the Cassandra API as a key value store with millions or billions of partition keys?
-
-Azure Cosmos DB can store unlimited data by scaling out the storage. This storage is independent of the throughput. Yes, you can always use the Cassandra API just to store and retrieve keys and values by specifying the right primary/partition key. These individual keys get their own logical partition and sit atop a physical partition without issues.
-
-### Can I create more than one table with the Cassandra API?
-
-Yes, it's possible to create more than one table with the Cassandra API. Each of those tables is treated as unit for throughput and storage.
-
-### Can I create more than one table in succession?
-
-Azure Cosmos DB is resource-governed system for both data and control plane activities. Containers, like collections and tables, are runtime entities that are provisioned for a given throughput capacity. The creation of these containers in quick succession isn't an expected activity and might be throttled. If you have tests that drop or create tables immediately, try to space them out.
-
-### What is the maximum number of tables that I can create?
-
-There's no physical limit on the number of tables. If you have a large number of tables (where the total steady size goes over 10 TB of data) that need to be created, not the usual tens or hundreds, send email to [askcosmosdbcassandra@microsoft.com](mailto:askcosmosdbcassandra@microsoft.com).
-
-### What is the maximum number of keyspaces that I can create?
-
-There's no physical limit on the number of keyspaces because they're metadata containers. If you have a large number of keyspaces, send email to [askcosmosdbcassandra@microsoft.com](mailto:askcosmosdbcassandra@microsoft.com).
-
-### Can I bring in a lot of data after starting from a normal table?
-
-Yes. Assuming uniformly distributed partitions, the storage capacity is automatically managed and increases as you push in more data. So you can confidently import as much data as you need without managing and provisioning nodes and more. But if you're anticipating a lot of immediate data growth, it makes more sense to directly [provision for the anticipated throughput](set-throughput.md) rather than starting lower and increasing it immediately.
-
-### Can I use YAML file settings to configure API behavior?
-
-The Cassandra API for Azure Cosmos DB provides protocol-level compatibility for executing operations. It hides away the complexity of management, monitoring, and configuration. As a developer/user, you don't need to worry about availability, tombstones, key cache, row cache, bloom filter, and a multitude of other settings. The Cassandra API focuses on providing the read and write performance that you need without the overhead of configuration and management.
-
-### Will the Cassandra API support node addition, cluster status, and node status commands?
-
-The Cassandra API simplifies capacity planning and responding to the elasticity demands for throughput and storage. With Azure Cosmos DB, you provision the throughput that you need. Then you can scale it up and down any number of times through the day, without worrying about adding, deleting, or managing nodes. You don't need to use tools for node and cluster management.
-
-### What happens with various configuration settings for keyspace creation like simple/network?
-
-Azure Cosmos DB provides global distribution out of the box for availability and low-latency reasons. You don't need to set up replicas or other things. Writes are always durably quorum committed in any region where you write, while providing performance guarantees.
-
-### What happens with various settings for table metadata?
-
-Azure Cosmos DB provides performance guarantees for reads, writes, and throughput. So you don't need to worry about touching any of the configuration settings and accidentally manipulating them. Those settings include bloom filter, caching, read repair chance, gc_grace, and compression memtable_flush_period.
-
-### Is time-to-live supported for Cassandra tables?
-
-Yes, TTL is supported.
-
-### How can I monitor infrastructure along with throughput?
-
-Azure Cosmos DB is a platform service that helps you increase productivity and not worry about managing and monitoring infrastructure. For example, you don't need to monitor node status, replica status, gc, and OS parameters earlier with various tools. You just need to take care of throughput that's available in portal metrics to see if you're getting throttled, and then increase or decrease that throughput. You can:
-
-- Monitor [SLAs](./monitor-cosmos-db.md)
-- Use [metrics](use-metrics.md)
-- Use [diagnostic logs](./monitor-cosmos-db.md)
-
-### Which client SDKs can work with the Cassandra API?
-
-The Apache Cassandra SDK's client drivers that use CQLv3 were used for client programs. If you have other drivers that you use or if you're facing issues, send mail to [askcosmosdbcassandra@microsoft.com](mailto:askcosmosdbcassandra@microsoft.com).
-
-### Are composite partition keys supported?
-
-Yes, you can use regular syntax to create composite partition keys.
-
-### Can I use sstableloader for data loading?
-
-No, sstableloader isn't supported.
-
-### Can I pair an on-premises Apache Cassandra cluster with the Cassandra API?
-
-At present, Azure Cosmos DB has an optimized experience for a cloud environment without the overhead of operations. If you require pairing, send mail to [askcosmosdbcassandra@microsoft.com](mailto:askcosmosdbcassandra@microsoft.com) with a description of your scenario. We're working on an offering to help pair the on-premises or cloud Cassandra cluster with the Cassandra API for Azure Cosmos DB.
-
-### Does the Cassandra API provide full backups?
-
-Azure Cosmos DB provides two free full backups taken at four-hour intervals across all APIs. So you don't need to set up a backup schedule.
-
-If you want to modify retention and frequency, send email to [askcosmosdbcassandra@microsoft.com](mailto:askcosmosdbcassandra@microsoft.com) or raise a support case. Information about backup capability is provided in the [Automatic online backup and restore with Azure Cosmos DB](online-backup-and-restore.md) article.
-
-### How does the Cassandra API account handle failover if a region goes down?
-
-The Cassandra API borrows from the globally distributed platform of Azure Cosmos DB. To ensure that your application can tolerate datacenter downtime, enable at least one more region for the account in the Azure portal. For more information, see [High availability with Azure Cosmos DB](high-availability.md).
-
-You can add as many regions as you want for the account and control where it can fail over to by providing a failover priority. To use the database, you need to provide an application there too. When you do so, your customers won't experience downtime.
-
-### Does the Cassandra API index all attributes of an entity by default?
-
-No. The Cassandra API supports [secondary indexes](cassandra-secondary-index.md), which behave in a similar way to Apache Cassandra. The API does not index every attribute by default.
--
-### Can I use the new Cassandra API SDK locally with the emulator?
-
-Yes, this is supported. You can find details on how to enable this in the [Use the Azure Cosmos DB Emulator for local development and testing](local-emulator.md#cassandra-api) article.
--
-### How can I migrate data from Apache Cassandra clusters to Azure Cosmos DB?
-
-You can read about migration options in the [Migrate your data to Cassandra API account in Azure Cosmos DB](cassandra-import-data.md) tutorial.
--
-### Where can I give feedback on Cassandra API features?
-
-Provide feedback via [user voice feedback](https://feedback.azure.com/forums/263030-azure-cosmos-db).
-
-[azure-portal]: https://portal.azure.com
-[query]: ./sql-query-getting-started.md
-
-## Next steps
-
-- Get started with [elastically scaling an Azure Cosmos DB Cassandra API account](manage-scale-cassandra.md).
cosmos-db Cassandra Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-introduction.md
The Cassandra API enables you to interact with data stored in Azure Cosmos DB us
* To learn about Apache Cassandra features supported by Azure Cosmos DB Cassandra API, see [Cassandra support](cassandra-support.md) article.
-* Read the [Frequently Asked Questions](cassandra-faq.md).
+* Read the [Frequently Asked Questions](cassandra-faq.yml).
cosmos-db Continuous Backup Restore Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-frequently-asked-questions.md
- Title: Frequently asked questions about Azure Cosmos DB point-in-time restore feature.
-description: This article lists frequently asked questions about the Azure Cosmos DB point-in-time restore feature that is achieved by using the continuous backup mode.
--- Previously updated : 02/01/2021-----
-# Frequently asked questions on the Azure Cosmos DB point-in-time restore feature (Preview)
-
-> [!IMPORTANT]
-> The point-in-time restore feature(continuous backup mode) for Azure Cosmos DB is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-This article lists frequently asked questions about the Azure Cosmos DB point-in-time restore functionality(Preview) that is achieved by using the continuous backup mode.
-
-## How much time does it take to restore?
-The restore duration depends on the size of your data.
-
-### Can I submit the restore time in local time?
-The restore may not happen depending on whether the key resources like databases or containers existed at that time. You can verify by entering the time and looking at the selected database or container for a given time. If you see no resources exist to restore, then the restore process doesn't work.
-
-### How can I track if an account is being restored?
-After you submit the restore command, and wait on the same page, after the operation is complete, the status bar shows successfully restored account message. You can also search for the restored account and [track the status of account being restored](continuous-backup-restore-portal.md#track-restore-status). While restore is in progress, the status of the account will be *Creating*, after the restore operation completes, the account status will change to *Online*.
-
-Similarly for PowerShell and CLI, you can track the progress of restore operation by executing `az cosmosdb show` command as follows:
-
-```azurecli-interactive
-az cosmosdb show --name "accountName" --resource-group "resourceGroup"
-```
-
-The provisioningState shows *Succeeded* when the account is online.
-
-```json
-{
-"virtualNetworkRules": [],
-"writeLocations" : [
-{
- "documentEndpoint": "https://<accountname>.documents.azure.com:443/",
- "failoverpriority": 0,
- "id": "accountName" ,
- "isZoneRedundant" : false,
- "locationName": "West US 2",
- "provisioningState": "Succeeded"
-}
-]
-}
-```
-
-### How can I find out whether an account was restored from another account?
-Run the `az cosmosdb show` command, in the output, you can see that the value of `createMode` property. If the value is set to **Restore**. it indicates that the account was restored from another account. The `restoreParameters` property has further details such as `restoreSource`, which has the source account ID. The last GUID in the `restoreSource` parameter is the instanceId of the source account.
-
-For example, in the following output, the source account's instance ID is *7b4bb-f6a0-430e-ade1-638d781830cc*
-
-```json
-"restoreParameters": {
- "databasesToRestore" : [],
- "restoreMode": "PointInTime",
- "restoreSource": "/subscriptions/2a5b-f6a0-430e-ade1-638d781830dd/providers/Microsoft.DocumentDB/locations/westus/restorableDatabaseAccounts/7b4bb-f6a0-430e-ade1-638d781830cc",
- "restoreTimestampInUtc": "2020-06-11T22:05:09Z"
-}
-```
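Since the instance ID is simply the last path segment of `restoreSource`, a script can extract it with a plain string split. A minimal sketch using the sample value above:

```python
restore_source = (
    "/subscriptions/2a5b-f6a0-430e-ade1-638d781830dd/providers/Microsoft.DocumentDB"
    "/locations/westus/restorableDatabaseAccounts/7b4bb-f6a0-430e-ade1-638d781830cc"
)

# The source account's instance ID is the final segment of the resource path.
instance_id = restore_source.rstrip("/").split("/")[-1]
print(instance_id)  # 7b4bb-f6a0-430e-ade1-638d781830cc
```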
-
-### What happens when I restore a shared throughput database or a container within a shared throughput database?
-The entire shared throughput database is restored. You cannot choose a subset of containers in a shared throughput database for restore.
-
-### What is the use of InstanceID in the account definition?
-At any given point in time, Azure Cosmos DB account's `accountName` property is globally unique while it is alive. However, after the account is deleted, it is possible to create another account with the same name and hence the "accountName" is no longer enough to identify an instance of an account.
-
-ID or the `instanceId` is a property of an instance of an account and it is used to disambiguate across multiple accounts (live and deleted) if they have same name for restore. You can get the instance ID by running the `Get-AzCosmosDBRestorableDatabaseAccount` or `az cosmosdb restorable-database-account` commands. The name attribute value denotes the "InstanceID".
-
-## Next steps
-
-* What is [continuous backup](continuous-backup-restore-introduction.md) mode?
-* Configure and manage continuous backup using [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), [CLI](continuous-backup-restore-command-line.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
-* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
-* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
cosmos-db Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/faq.md
To learn about frequently asked questions in other APIs, see:
* Frequently asked questions about [Azure Cosmos DB's API for MongoDB](mongodb-api-faq.md)
* Frequently asked questions about [Gremlin API in Azure Cosmos DB](gremlin-api-faq.md)
-* Frequently asked questions about [Cassandra API in Azure Cosmos DB](cassandra-faq.md)
+* Frequently asked questions about [Cassandra API in Azure Cosmos DB](cassandra-faq.yml)
* Frequently asked questions about [Table API in Azure Cosmos DB](table-api-faq.md)
cosmos-db Index Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/index-overview.md
Previously updated : 04/27/2021 Last updated : 05/04/2021
Here is a table that summarizes the different ways indexes are used in Azure Cos
| Precise index scan | Binary search of indexed values and load only matching items from the transactional data store | Range comparisons (>, <, <=, or >=), StartsWith | Comparable to index seek, increases slightly based on the cardinality of indexed properties | Increases based on number of items in query results |
| Expanded index scan | Optimized search (but less efficient than a binary search) of indexed values and load only matching items from the transactional data store | StartsWith (case-insensitive), StringEquals (case-insensitive) | Increases slightly based on the cardinality of indexed properties | Increases based on number of items in query results |
| Full index scan | Read distinct set of indexed values and load only matching items from the transactional data store | Contains, EndsWith, RegexMatch, LIKE | Increases linearly based on the cardinality of indexed properties | Increases based on number of items in query results |
-| Full scan | Load all items | Upper, Lower | N/A | Increases based on number of items in container |
+| Full scan | Load all items from the transactional data store | Upper, Lower | N/A | Increases based on number of items in container |
When writing queries, you should use filter predicates that use the index as efficiently as possible. For example, if either `StartsWith` or `Contains` would work for your use case, you should opt for `StartsWith` since it will do a precise index scan instead of a full index scan.
Azure Cosmos DB uses an inverted index. The index works by mapping each JSON pat
| /locations/0/country | Germany | 1 |
| /locations/0/country | Ireland | 2 |
| /locations/0/city | Berlin | 1 |
-| /locations/0/city | Dublin | 1 |
+| /locations/0/city | Dublin | 2 |
| /locations/1/country | France | 1 |
| /locations/1/city | Paris | 1 |
-| /headquarters/country | Belgium | 2 |
+| /headquarters/country | Belgium | 1,2 |
| /headquarters/employees | 200 | 2 |
| /headquarters/employees | 250 | 1 |
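Conceptually, this structure maps each (path, value) pair to the set of item ids that contain it. A toy sketch of the idea (not the engine's actual storage format), using the two headquarters items from the table above:

```python
from collections import defaultdict

def build_inverted_index(items):
    """Map (path, value) -> set of item ids, flattening nested JSON."""
    index = defaultdict(set)

    def walk(item_id, node, path):
        if isinstance(node, dict):
            for key, value in node.items():
                walk(item_id, value, f"{path}/{key}")
        elif isinstance(node, list):
            for i, value in enumerate(node):
                walk(item_id, value, f"{path}/{i}")
        else:
            index[(path, node)].add(item_id)

    for item_id, item in items.items():
        walk(item_id, item, "")
    return index

items = {
    1: {"headquarters": {"country": "Belgium", "employees": 250}},
    2: {"headquarters": {"country": "Belgium", "employees": 200}},
}
index = build_inverted_index(items)
print(index[("/headquarters/country", "Belgium")])  # {1, 2}
```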
Consider the following query:
```sql
SELECT * FROM company
-WHERE StartsWith(company.headquarters.country, "United", true)
+WHERE STARTSWITH(company.headquarters.country, "United", true)
```
-The query predicate (filtering on items that have headquarters in a country that start with case-sensitive "United") can be evaluated with an expanded index scan of the `headquarters/country` path. Operations that do an expanded index scan have optimizations that can help avoid needs to scan every index page but are slightly more expensive than a precise index scan's binary search.
+The query predicate (filtering on items that have headquarters in a country that starts with case-insensitive "United") can be evaluated with an expanded index scan of the `headquarters/country` path. Operations that do an expanded index scan have optimizations that can help avoid the need to scan every index page but are slightly more expensive than a precise index scan's binary search.
For example, when evaluating case-insensitive `StartsWith`, the query engine will check the index for different possible combinations of uppercase and lowercase values. This optimization allows the query engine to avoid reading the majority of index pages. Different system functions have different optimizations that they can use to avoid reading every index page, so we'll broadly categorize these as expanded index scan.
Consider the following query:
```sql
SELECT * FROM company
-WHERE Contains(company.headquarters.country, "United")
+WHERE CONTAINS(company.headquarters.country, "United")
```

The query predicate (filtering on items that have headquarters in a country that contains "United") can be evaluated with an index scan of the `headquarters/country` path. Unlike a precise index scan, a full index scan will always scan through the distinct set of possible values to identify the index pages where there are results. In this case, `Contains` is run on the index. The index lookup time and RU charge for index scans increase as the cardinality of the path increases. In other words, the more possible distinct values that the query engine needs to scan, the higher the latency and RU charge involved in doing a full index scan.
-For example, consider two properties: town and country. The cardinality of town is 5,000 and the cardinality of country is 200. Here are two example queries that each have a [Contains](sql-query-contains.md) system function that does an index scan on the `town` property. The first query will use more RUs than the second query because the cardinality of town is higher than country.
+For example, consider two properties: town and country. The cardinality of town is 5,000 and the cardinality of country is 200. Here are two example queries that each have a [Contains](sql-query-contains.md) system function that does a full index scan on the `town` property. The first query will use more RUs than the second query because the cardinality of town is higher than country.
```sql
SELECT *
FROM company
WHERE company.headquarters.employees = 200 AND CONTAINS(company.headquarters.country, "United")
```
-To execute this query, the query engine must do a precise index seek on `headquarters/employees` and full index scan on `headquarters/country`. The query engine has internal heuristics that it uses to evaluate the query filter expression as efficiently as possible. In this case, the query engine would avoid needing to read unnecessary index pages by doing the index seek first. If, for example, only 50 items matched the equality filter, the query engine would only need to evaluate `Contains` on the index pages that contained those 50 items. A full index scan of the entire container wouldn't be necessary.
+To execute this query, the query engine must do an index seek on `headquarters/employees` and a full index scan on `headquarters/country`. The query engine has internal heuristics that it uses to evaluate the query filter expression as efficiently as possible. In this case, the query engine avoids reading unnecessary index pages by doing the index seek first. If, for example, only 50 items matched the equality filter, the query engine would only need to evaluate `Contains` on the index pages that contained those 50 items. A full index scan of the entire container wouldn't be necessary.
## Index utilization for scalar aggregate functions
For example, consider the following query:
```sql SELECT * FROM company
-WHERE Contains(company.headquarters.country, "United")
+WHERE CONTAINS(company.headquarters.country, "United")
``` The `Contains` system function may return some false positive matches, so the query engine will need to verify whether each loaded item matches the filter expression. In this example, the query engine may only need to load a few extra items, so the impact on index utilization and RU charge is minimal.
However, queries with aggregate functions must rely exclusively on the index in
```sql SELECT COUNT(1) FROM company
-WHERE Contains(company.headquarters.country, "United")
+WHERE CONTAINS(company.headquarters.country, "United")
``` Like in the first example, the `Contains` system function may return some false positive matches. Unlike the `SELECT *` query, however, the `Count` query can't evaluate the filter expression on the loaded items to verify all index matches. The `Count` query must rely exclusively on the index, so if there's a chance a filter expression will return false positive matches, the query engine will resort to a full scan.
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
cosmos-db Sql Query String Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-string-functions.md
Title: String functions in Azure Cosmos DB query language description: Learn about string SQL system functions in Azure Cosmos DB.-+ Previously updated : 04/27/2021- Last updated : 05/04/2021+ # String functions (Azure Cosmos DB)
cosmos-db Sql Query Type Checking Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-type-checking-functions.md
Title: Type checking functions in Azure Cosmos DB query language description: Learn about type checking SQL system functions in Azure Cosmos DB.-+ Previously updated : 09/13/2019- Last updated : 05/04/2021+ # Type checking functions (Azure Cosmos DB)
The type-checking functions let you check the type of an expression within a SQL
## Functions
-Here's a table of supported built-in type-checking functions:
+The following functions support type checking against input values, and each returns a Boolean value:
-The following functions support type checking against input values, and each return a Boolean value.
-
-* [IS_ARRAY](sql-query-is-array.md)
-* [IS_BOOL](sql-query-is-bool.md)
-* [IS_DEFINED](sql-query-is-defined.md)
-* [IS_NULL](sql-query-is-null.md)
-* [IS_NUMBER](sql-query-is-number.md)
-* [IS_OBJECT](sql-query-is-object.md)
-* [IS_PRIMITIVE](sql-query-is-primitive.md)
-* [IS_STRING](sql-query-is-string.md)
+| System function | Index usage | [Index usage in queries with scalar aggregate functions](index-overview.md#index-utilization-for-scalar-aggregate-functions) | Remarks |
+| --- | --- | --- | --- |
+| [IS_ARRAY](sql-query-is-array.md) | Full scan | Full scan | |
+| [IS_BOOL](sql-query-is-bool.md) | Index seek | Index seek | |
+| [IS_DEFINED](sql-query-is-defined.md) | Index seek | Index seek | |
+| [IS_NULL](sql-query-is-null.md) | Index seek | Index seek | |
+| [IS_NUMBER](sql-query-is-number.md) | Index seek | Index seek | |
+| [IS_OBJECT](sql-query-is-object.md) | Full scan | Full scan | |
+| [IS_PRIMITIVE](sql-query-is-primitive.md) | Index seek | Index seek | |
+| [IS_STRING](sql-query-is-string.md) | Index seek | Index seek | |
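As a hedged sketch, a type-checking function that supports an index seek can be combined with other filters without forcing a full scan (the property paths here are assumptions borrowed from the earlier `company` examples):

```sql
-- IS_NUMBER and IS_DEFINED both use an index seek,
-- so this filter can be evaluated without a full index scan
SELECT *
FROM company
WHERE IS_NUMBER(company.headquarters.employees)
    AND IS_DEFINED(company.headquarters.country)
```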
## Next steps
cost-management-billing Mca Setup Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/mca-setup-account.md
tags: billing
Previously updated : 03/19/2021 Last updated : 04/12/2021
You can use the following options to start the migration experience for your EA
- Sign in to the Azure portal using the link in the email that was sent to you when you signed the Microsoft Customer Agreement. -- If you don't have the email, sign in using the following link. Replace `enrollmentNumber` with the enrollment number of your enterprise agreement that was renewed.
+- If you don't have the email, sign in using the following link.
- `https://portal.azure.com/#blade/Microsoft_Azure_EA/EATransitionToMCA/enrollmentId/<enrollmentNumber>`
+ `https://portal.azure.com/#blade/Microsoft_Azure_SubscriptionManagement/TransitionEnrollment`
If you have both the enterprise administrator and billing account owner roles or billing profile role, you see the following page in the Azure portal. You can continue setting up your EA enrollments and Microsoft Customer Agreement billing account for transition.
If you have both the enterprise administrator and billing account owner roles or
If you don't have the enterprise administrator role for the enterprise agreement or the billing profile owner role for the Microsoft Customer Agreement, then use the following information to get the access that you need to complete setup.
-### If you're not an enterprise administrator on the enrollment
+#### If you're not an enterprise administrator on the enrollment
You see the following page in the Azure portal if you have a billing account or billing profile owner role but you're not an enterprise administrator.
You have two options:
If you're given the enterprise administrator role, copy the link on the Set up your billing account page. Open it in your web browser to continue setting up your Microsoft Customer Agreement. Otherwise, send it to the enterprise administrator.
-### If you're not an owner of the billing profile
+#### If you're not an owner of the billing profile
-If you're an enterprise administrator but you don't have a billing account or billing profile owner role for your Microsoft Customer Agreement, You see the following page in the Azure portal.
+If you're an enterprise administrator but you don't have a billing account, you'll see the following error in the Azure portal that prevents the transition.
If you believe that you have billing profile owner access to the correct Microsoft Customer Agreement and you see the following message, make sure that you are in the correct tenant for your organization. You might need to change directories.
You have two options:
If you're given the billing account owner or billing profile owner role, copy the link on the Set up your billing account page. Open it in your web browser to continue setting up your Microsoft Customer Agreement. Otherwise, send the link to the billing account owner.
+#### Prepare enrollment for transition
+
+After you have owner access to both your EA enrollment and billing profile, you prepare them for transition.
+
+Open the migration experience that was presented to you previously, or open the link that was sent to you in email. The link is `https://portal.azure.com/#blade/Microsoft_Azure_SubscriptionManagement/TransitionEnrollment`.
+
+The following image shows an example of the Prepare your enterprise agreement enrollments for transition window.
+
+Next, select the source enrollment to transition. Then select the billing account and billing profile. If validation passes without any problems similar to the following screen, select **Continue** to proceed.
+
+**Error conditions**
+
+If you have the Enterprise Administrator (read-only) role, you'll see the following error that prevents the transition. You must have the Enterprise Administrator role before you can transition your enrollment.
+
+`Select another enrollment. You do not have Enterprise Administrator write permission to the enrollment.`
+
+If your enrollment has more than 60 days until its end date, you'll see the following error that prevents the transition. The current date must be within 60 days of the enrollment end date before you can transition your enrollment.
+
+`Select another enrollment. This enrollment has more than 60 days before its end date.`
+
+If your enrollment still has credits, you'll see the following error that prevents the transition. You must use all of your credits before you can transition your enrollment.
+
+`Select another enrollment. This enrollment still has credits and can't be transitioned to a billing account.`
+
+If you don't have owner permission to the billing profile, you'll see the following error that prevents the transition. You must have the billing profile owner role before you can transition your enrollment.
+
+`Select another Billing Profile. You do not have owner permission to this profile.`
+
+If your new billing profile doesn't have the new plan enabled, you'll see the following error. You must enable the plan before you can transition your enrollment.
+
+`Select another Billing Profile. The current selection does not have Azure Plan and Azure dev test plan enabled on it.`
+ ## Understand changes to your billing hierarchy Your new billing account simplifies billing for your organization while providing you enhanced billing and cost management capabilities. The following diagram explains how billing is organized in the new billing account.
To complete the setup, you need access to both the new billing account and the E
1. Sign in to the Azure portal using the link in the email that was sent to you when you signed the Microsoft Customer Agreement.
-2. If you don't have the email, sign in using the following link. Replace `<enrollmentNumber>` with the enrollment number of your enterprise agreement that was renewed.
+2. If you don't have the email, sign in using the following link.
- `https://portal.azure.com/#blade/Microsoft_Azure_EA/EATransitionToMCA/enrollmentId/<enrollmentNumber>`
+ `https://portal.azure.com/#blade/Microsoft_Azure_SubscriptionManagement/TransitionEnrollment`
3. Select **Start transition** in the last step of the setup. Once you select start transition:
data-factory Concepts Pipelines Activities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipelines-activities.md
An input dataset represents the input for an activity in the pipeline, and an ou
Copy Activity in Data Factory copies data from a source data store to a sink data store. Data Factory supports the data stores listed in the table in this section. Data from any source can be written to any sink. Click a data store to learn how to copy data to and from that store. For more information, see [Copy Activity - Overview](copy-activity-overview.md) article.
data-factory Connector Amazon Marketplace Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-marketplace-web-service.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Amazon Marketplace Web Service connector.
data-factory Connector Amazon Redshift https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-redshift.md
Specifically, this Amazon Redshift connector supports retrieving data from Redsh
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Amazon Redshift connector.
data-factory Connector Amazon Simple Storage Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-simple-storage-service.md
For the full list of Amazon S3 permissions, see [Specifying Permissions in a Pol
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Amazon S3.
The following properties are supported for an Amazon S3 linked service:
For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article. The following properties are supported for Amazon S3 under `location` settings in a format-based dataset:
For a full list of sections and properties available for defining activities, se
### Amazon S3 as a source type The following properties are supported for Amazon S3 under `storeSettings` settings in a format-based copy source:
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-blob-storage.md
For the Copy activity, this Blob storage connector supports:
## Get started The following sections provide details about properties that are used to define Data Factory entities specific to Blob storage.
These properties are supported for an Azure Blob storage linked service:
For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article. The following properties are supported for Azure Blob storage under `location` settings in a format-based dataset:
For a full list of sections and properties available for defining activities, se
### Blob storage as a source type The following properties are supported for Azure Blob storage under `storeSettings` settings in a format-based copy source:
The following properties are supported for Azure Blob storage under `storeSettin
### Blob storage as a sink type The following properties are supported for Azure Blob storage under `storeSettings` settings in a format-based copy sink:
data-factory Connector Azure Cosmos Db Mongodb Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db-mongodb-api.md
You can use the Azure Cosmos DB's API for MongoDB connector to:
## Get started The following sections provide details about properties you can use to define Data Factory entities that are specific to Azure Cosmos DB's API for MongoDB.
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db.md
Data Factory integrates with the [Azure Cosmos DB bulk executor library](https:/
## Get started The following sections provide details about properties you can use to define Data Factory entities that are specific to Azure Cosmos DB (SQL API).
data-factory Connector Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-explorer.md
With the Azure Data Explorer connector, you can do the following:
>[!TIP] >For a walkthrough of Azure Data Explorer connector, see [Copy data to/from Azure Data Explorer using Azure Data Factory](/azure/data-explorer/data-factory-load-data) and [Bulk copy from a database to Azure Data Explorer](/azure/data-explorer/data-factory-template). The following sections provide details about properties that are used to define Data Factory entities specific to Azure Data Explorer connector.
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-storage.md
For Copy activity, with this connector you can:
>[!TIP] >For a walk-through of how to use the Data Lake Storage Gen2 connector, see [Load data into Azure Data Lake Storage Gen2](load-azure-data-lake-storage-gen2.md). The following sections provide information about properties that are used to define Data Factory entities specific to Data Lake Storage Gen2.
These properties are supported for the linked service:
For a full list of sections and properties available for defining datasets, see [Datasets](concepts-datasets-linked-services.md). The following properties are supported for Data Lake Storage Gen2 under `location` settings in the format-based dataset:
For a full list of sections and properties available for defining activities, se
### Azure Data Lake Storage Gen2 as a source type You have several options to copy data from ADLS Gen2:
The following properties are supported for Data Lake Storage Gen2 under `storeSe
### Azure Data Lake Storage Gen2 as a sink type The following properties are supported for Data Lake Storage Gen2 under `storeSettings` settings in format-based copy sink:
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-store.md
Specifically, with this connector you can:
> [!TIP] > For a walk-through of how to use the Azure Data Lake Store connector, see [Load data into Azure Data Lake Store](load-azure-data-lake-store.md). The following sections provide information about properties that are used to define Data Factory entities specific to Azure Data Lake Store.
In Azure Data Factory, you don't need to specify any properties besides the gene
For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article. The following properties are supported for Azure Data Lake Store Gen1 under `location` settings in the format-based dataset:
For a full list of sections and properties available for defining activities, se
### Azure Data Lake Store as source The following properties are supported for Azure Data Lake Store Gen1 under `storeSettings` settings in the format-based copy source:
The following properties are supported for Azure Data Lake Store Gen1 under `sto
### Azure Data Lake Store as sink The following properties are supported for Azure Data Lake Store Gen1 under `storeSettings` settings in the format-based copy sink:
data-factory Connector Azure Database For Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-mariadb.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Azure Database for MariaDB connector.
data-factory Connector Azure Database For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-mysql.md
This Azure Database for MySQL connector is supported for the following activitie
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Azure Database for MySQL connector.
data-factory Connector Azure Database For Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-postgresql.md
Currently, data flow supports Azure database for PostgreSQL Single Server but no
## Getting started The following sections offer details about properties that are used to define Data Factory entities specific to Azure Database for PostgreSQL connector.
data-factory Connector Azure Databricks Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-databricks-delta-lake.md
For cluster configuration details, see [Configure clusters](/azure/databricks/cl
## Get started The following sections provide details about properties that define Data Factory entities specific to an Azure Databricks Delta Lake connector.
data-factory Connector Azure File Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-file-storage.md
Specifically, this Azure File Storage connector supports:
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Azure File Storage.
Data Factory supports the following properties for using shared access signature
For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article. The following properties are supported for Azure File Storage under `location` settings in format-based dataset:
For a full list of sections and properties available for defining activities, se
### Azure File Storage as source The following properties are supported for Azure File Storage under `storeSettings` settings in format-based copy source:
The following properties are supported for Azure File Storage under `storeSettin
### Azure File Storage as sink The following properties are supported for Azure File Storage under `storeSettings` settings in format-based copy sink:
data-factory Connector Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-search.md
You can copy data from any supported source data store into search index. For a
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Azure Cognitive Search connector.
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-data-warehouse.md
For Copy activity, this Azure Synapse Analytics connector supports these functio
> [!TIP] > To achieve best performance, use PolyBase or COPY statement to load data into Azure Synapse Analytics. The [Use PolyBase to load data into Azure Synapse Analytics](#use-polybase-to-load-data-into-azure-synapse-analytics) and [Use COPY statement to load data into Azure Synapse Analytics](#use-copy-statement) sections have details. For a walkthrough with a use case, see [Load 1 TB into Azure Synapse Analytics under 15 minutes with Azure Data Factory](load-azure-sql-data-warehouse.md). The following sections provide details about properties that define Data Factory entities specific to an Azure Synapse Analytics connector.
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-database.md
If you use Azure SQL Database [serverless tier](../azure-sql/database/serverless
## Get started The following sections provide details about properties that are used to define Azure Data Factory entities specific to an Azure SQL Database connector.
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-managed-instance.md
To access the SQL Managed Instance private endpoint, set up a [self-hosted integ
## Get started The following sections provide details about properties that are used to define Azure Data Factory entities specific to the SQL Managed Instance connector.
data-factory Connector Azure Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-table-storage.md
Specifically, this Azure Table connector supports copying data by using account
## Get started The following sections provide details about properties that are used to define Data Factory entities specific to Table storage.
data-factory Connector Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-cassandra.md
Specifically, this Cassandra connector supports:
## Prerequisites The Integration Runtime provides a built-in Cassandra driver, therefore you don't need to manually install any driver when copying data from/to Cassandra. ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Cassandra connector.
data-factory Connector Concur https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-concur.md
You can copy data from Concur to any supported sink data store. For a list of da
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Concur connector.
data-factory Connector Couchbase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-couchbase.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Prerequisites ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Couchbase connector.
data-factory Connector Db2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-db2.md
Specifically, this DB2 connector supports the following IBM DB2 platforms and ve
## Prerequisites The Integration Runtime provides a built-in DB2 driver, therefore you don't need to manually install any driver when copying data from DB2. ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to DB2 connector.
data-factory Connector Drill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-drill.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Prerequisites ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Drill connector.
data-factory Connector Dynamics Ax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-ax.md
Specifically, this Dynamics AX connector supports copying data from Dynamics AX
## Get started The following sections provide details about properties you can use to define Data Factory entities that are specific to Dynamics AX connector.
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
To use this connector with Azure AD service-principal authentication, you must s
## Get started The following sections provide details about properties that are used to define Data Factory entities specific to Dynamics.
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-file-system.md
Specifically, this file system connector supports:
## Prerequisites ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to file system.
The following properties are supported for file system linked service:
For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article. The following properties are supported for file system under `location` settings in format-based dataset:
For a full list of sections and properties available for defining activities, se
### File system as source The following properties are supported for file system under `storeSettings` settings in format-based copy source:
The following properties are supported for file system under `storeSettings` set
### File system as sink The following properties are supported for file system under `storeSettings` settings in format-based copy sink:
data-factory Connector Ftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-ftp.md
The FTP connector supports an FTP server running in passive mode. Active mode is not
## Prerequisites ## Get started The following sections provide details about properties that are used to define Data Factory entities specific to FTP.
The following properties are supported for FTP linked service:
For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article. The following properties are supported for FTP under `location` settings in format-based dataset:
For a full list of sections and properties available for defining activities, se
### FTP as source The following properties are supported for FTP under `storeSettings` settings in format-based copy source:
data-factory Connector Google Adwords https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-adwords.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Google AdWords connector.
data-factory Connector Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-bigquery.md
Data Factory provides a built-in driver to enable connectivity. Therefore, you d
## Get started The following sections provide details about properties that are used to define Data Factory entities specific to the Google BigQuery connector.
data-factory Connector Google Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-google-cloud-storage.md
For the full list of Google Cloud Storage roles and associated permissions, see
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Google Cloud Storage.
Here's an example:
## Dataset properties The following properties are supported for Google Cloud Storage under `location` settings in a format-based dataset:
For a full list of sections and properties available for defining activities, se
### Google Cloud Storage as a source type The following properties are supported for Google Cloud Storage under `storeSettings` settings in a format-based copy source:
data-factory Connector Greenplum https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-greenplum.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Prerequisites ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Greenplum connector.
data-factory Connector Hbase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hbase.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Prerequisites ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to HBase connector.
data-factory Connector Hdfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hdfs.md
Specifically, the HDFS connector supports:
## Prerequisites > [!NOTE] > Make sure that the integration runtime can access *all* the [name node server]:[name node port] and [data node servers]:[data node port] of the Hadoop cluster. The default [name node port] is 50070, and the default [data node port] is 50075. ## Get started The following sections provide details about properties that are used to define Data Factory entities specific to HDFS.
The following properties are supported for the HDFS linked service:
For a full list of sections and properties that are available for defining datasets, see [Datasets in Azure Data Factory](concepts-datasets-linked-services.md). The following properties are supported for HDFS under `location` settings in the format-based dataset:
For a full list of sections and properties that are available for defining activ
### HDFS as source The following properties are supported for HDFS under `storeSettings` settings in the format-based Copy source:
data-factory Connector Hive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hive.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Prerequisites ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Hive connector.
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-http.md
You can use this HTTP connector to:
## Prerequisites ## Get started The following sections provide details about properties you can use to define Data Factory entities that are specific to the HTTP connector.
In addition, you can configure request headers for authentication along with the
For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article. The following properties are supported for HTTP under `location` settings in format-based dataset:
For a full list of sections and properties that are available for defining activ
### HTTP as source The following properties are supported for HTTP under `storeSettings` settings in format-based copy source:
data-factory Connector Hubspot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-hubspot.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to HubSpot connector.
data-factory Connector Impala https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-impala.md
Data Factory provides a built-in driver to enable connectivity. Therefore, you d
## Prerequisites ## Get started The following sections provide details about properties that are used to define Data Factory entities specific to the Impala connector.
data-factory Connector Informix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-informix.md
To use this Informix connector, you need to:
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Informix connector.
data-factory Connector Jira https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-jira.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Jira connector.
data-factory Connector Magento https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-magento.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Magento connector.
data-factory Connector Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mariadb.md
This connector currently supports MariaDB of version 10.0 to 10.2.
## Prerequisites ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to MariaDB connector.
data-factory Connector Marketo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-marketo.md
Currently, Marketo instance which is integrated with external CRM is not support
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Marketo connector.
data-factory Connector Microsoft Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-microsoft-access.md
To use this Microsoft Access connector, you need to:
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Microsoft Access connector.
data-factory Connector Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb-atlas.md
If you use Azure Integration Runtime for copy, make sure you add the effective r
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to MongoDB Atlas connector.
data-factory Connector Mongodb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb-legacy.md
Specifically, this MongoDB connector supports:
## Prerequisites The Integration Runtime provides a built-in MongoDB driver; therefore, you don't need to manually install any driver when copying data from MongoDB. ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to the MongoDB connector.
data-factory Connector Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb.md
Specifically, this MongoDB connector supports **versions up to 4.2**.
## Prerequisites ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to MongoDB connector.
data-factory Connector Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mysql.md
Specifically, this MySQL connector supports MySQL **versions 5.6, 5.7, and 8.0**.
## Prerequisites The Integration Runtime provides a built-in MySQL driver starting from version 3.7; therefore, you don't need to manually install any driver. ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to the MySQL connector.
data-factory Connector Netezza https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-netezza.md
Azure Data Factory provides a built-in driver to enable connectivity. You don't
## Prerequisites ## Get started
data-factory Connector Odata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-odata.md
Specifically, this OData connector supports:
## Prerequisites ## Get started The following sections provide details about properties you can use to define Data Factory entities that are specific to an OData connector.
data-factory Connector Odbc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-odbc.md
To use this ODBC connector, you need to:
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to ODBC connector.
data-factory Connector Oracle Eloqua https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle-eloqua.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Oracle Eloqua connector.
data-factory Connector Oracle Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle-service-cloud.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Oracle Service Cloud connector.
data-factory Connector Oracle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle.md
Specifically, this Oracle connector supports:
## Prerequisites The integration runtime provides a built-in Oracle driver. Therefore, you don't need to manually install a driver when you copy data from and to Oracle. ## Get started The following sections provide details about properties that are used to define Data Factory entities specific to the Oracle connector.
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-overview.md
Azure Data Factory supports the following data stores and formats via Copy, Data
## Supported data stores ## Integrate with more data stores
data-factory Connector Paypal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-paypal.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to PayPal connector.
data-factory Connector Phoenix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-phoenix.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Prerequisites ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Phoenix connector.
data-factory Connector Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-postgresql.md
Specifically, this PostgreSQL connector supports PostgreSQL **version 7.4 and ab
## Prerequisites The Integration Runtime provides a built-in PostgreSQL driver starting from version 3.7; therefore, you don't need to manually install any driver. ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to the PostgreSQL connector.
data-factory Connector Presto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-presto.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Presto connector.
data-factory Connector Quickbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-quickbooks.md
This connector supports QuickBooks OAuth 2.0 authentication.
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to QuickBooks connector.
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-rest.md
Specifically, this generic REST connector supports:
## Prerequisites ## Get started The following sections provide details about properties you can use to define Data Factory entities that are specific to the REST connector.
data-factory Connector Salesforce Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce-service-cloud.md
You might also receive the "REQUEST_LIMIT_EXCEEDED" error message in both scenar
## Get started The following sections provide details about properties that are used to define Data Factory entities specific to the Salesforce Service Cloud connector.
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce.md
You might also receive the "REQUEST_LIMIT_EXCEEDED" error message in both scenar
## Get started The following sections provide details about properties that are used to define Data Factory entities specific to the Salesforce connector.
data-factory Connector Sap Business Warehouse Open Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-business-warehouse-open-hub.md
To use this SAP Business Warehouse Open Hub connector, you need to:
> > For a walkthrough of using SAP BW Open Hub connector, see [Load data from SAP Business Warehouse (BW) by using Azure Data Factory](load-sap-bw-data.md). The following sections provide details about properties that are used to define Data Factory entities specific to SAP Business Warehouse Open Hub connector.
data-factory Connector Sap Business Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-business-warehouse.md
To use this SAP Business Warehouse connector, you need to:
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to SAP Business Warehouse connector.
data-factory Connector Sap Cloud For Customer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-cloud-for-customer.md
Specifically, this connector enables Azure Data Factory to copy data from/to SAP
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to SAP Cloud for Customer connector.
data-factory Connector Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-ecc.md
To use this SAP ECC connector, you need to expose the SAP ECC entities via OData
- **Activate and configure the SAP OData service**. You can activate the OData service through TCODE SICF in seconds. You can also configure which objects need to be exposed. For more information, see the [step-by-step guidance](https://blogs.sap.com/2012/10/26/step-by-step-guide-to-build-an-odata-service-based-on-rfcs-part-1/). ## Get started The following sections provide details about properties that are used to define the Data Factory entities specific to the SAP ECC connector.
data-factory Connector Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-hana.md
To use this SAP HANA connector, you need to:
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to SAP HANA connector.
data-factory Connector Sap Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-table.md
To use this SAP table connector, you need to:
## Get started The following sections provide details about properties that are used to define the Data Factory entities specific to the SAP table connector.
data-factory Connector Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-servicenow.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to ServiceNow connector.
data-factory Connector Sftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sftp.md
Specifically, the SFTP connector supports:
## Prerequisites ## Get started The following sections provide details about properties that are used to define Data Factory entities specific to SFTP.
To use multi-factor authentication which is a combination of basic and SSH publi
For a full list of sections and properties that are available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article. The following properties are supported for SFTP under `location` settings in the format-based dataset:
For a full list of sections and properties that are available for defining activ
### SFTP as source The following properties are supported for SFTP under the `storeSettings` settings in the format-based Copy source:
The following properties are supported for SFTP under the `storeSettings` settin
### SFTP as a sink The following properties are supported for SFTP under `storeSettings` settings in a format-based Copy sink:
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sharepoint-online-list.md
The SharePoint List Online connector uses service principal authentication to co
## Get started The following sections provide details about properties you can use to define Data Factory entities that are specific to SharePoint Online List connector.
data-factory Connector Shopify https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-shopify.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Shopify connector.
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-snowflake.md
For the Copy activity, this Snowflake connector supports the following functions
## Get started The following sections provide details about properties that define Data Factory entities specific to a Snowflake connector.
data-factory Connector Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-spark.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Prerequisites ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Spark connector.
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sql-server.md
Specifically, this SQL Server connector supports:
## Prerequisites ## Get started The following sections provide details about properties that are used to define Data Factory entities specific to the SQL Server database connector.
data-factory Connector Square https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-square.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Square connector.
data-factory Connector Sybase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sybase.md
To use this Sybase connector, you need to:
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Sybase connector.
data-factory Connector Teradata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-teradata.md
Specifically, this Teradata connector supports:
## Prerequisites If you use the Self-hosted Integration Runtime, note that it provides a built-in Teradata driver starting from version 3.18, so you don't need to manually install any driver. The driver requires "Visual C++ Redistributable 2012 Update 4" on the self-hosted integration runtime machine. If you don't have it installed yet, download it from [here](https://www.microsoft.com/en-sg/download/details.aspx?id=30679). ## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to the Teradata connector.
data-factory Connector Vertica https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-vertica.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Prerequisites ## Getting started
data-factory Connector Web Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-web-table.md
To use this Web table connector, you need to set up a Self-hosted Integration Ru
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Web table connector.
data-factory Connector Xero https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-xero.md
Specifically, this Xero connector supports:
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Xero connector.
data-factory Connector Zoho https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-zoho.md
Azure Data Factory provides a built-in driver to enable connectivity, therefore
## Getting started The following sections provide details about properties that are used to define Data Factory entities specific to Zoho connector.
data-factory Control Flow Lookup Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-lookup-activity.md
Note the following:
The following data sources are supported for Lookup activity. ## Syntax
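A minimal Lookup activity definition, sketched here with a hypothetical dataset and query (not taken from this article), generally follows this shape:

```json
{
    "name": "LookupNewWatermark",
    "type": "Lookup",
    "typeProperties": {
        "source": {
            "type": "AzureSqlSource",
            "sqlReaderQuery": "SELECT MAX(WatermarkValue) AS NewWatermark FROM WatermarkTable"
        },
        "dataset": {
            "referenceName": "WatermarkDataset",
            "type": "DatasetReference"
        },
        "firstRowOnly": true
    }
}
```

Downstream activities can then reference the result, for example via `@activity('LookupNewWatermark').output.firstRow.NewWatermark`.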
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-overview.md
To copy data from a source to a sink, the service that runs the Copy activity pe
## Supported data stores and formats ### Supported file formats You can use the Copy activity to copy files as-is between two file-based data stores, in which case the data is copied efficiently without any serialization or deserialization. You can also parse or generate files of a given format; for example, you can perform the following:
The service that enables the Copy activity is available globally in the regions
## Configuration In general, to use the Copy activity in Azure Data Factory, you need to:
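As an illustrative sketch of those steps (the dataset names and store types are hypothetical), a Copy activity wires a source dataset to a sink dataset like this:

```json
{
    "name": "CopyFromBlobToSql",
    "type": "Copy",
    "inputs": [
        { "referenceName": "SourceBlobDataset", "type": "DatasetReference" }
    ],
    "outputs": [
        { "referenceName": "SinkSqlDataset", "type": "DatasetReference" }
    ],
    "typeProperties": {
        "source": { "type": "BlobSource" },
        "sink": { "type": "SqlSink" }
    }
}
```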
data-factory Data Flow Tutorials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-tutorials.md
Previously updated : 12/14/2020 Last updated : 05/04/2021 # Mapping data flow video tutorials
As updates are constantly made to the product, some features have added or diffe
[Parse transformation](https://www.youtube.com/watch?v=r7O7AJcuqoY)
+[Transform complex data types](https://youtu.be/Wk0C76wnSDE)
+ ## Source and sink [Reading and writing JSONs](https://www.youtube.com/watch?v=yY5aB7Kdhjg)
As updates are constantly made to the product, some features have added or diffe
[Azure Integration Runtimes for Data Flows](https://www.youtube.com/watch?v=VT_2ZV3a7Fc)
+[Quick cluster start-up time with Azure IR](https://www.youtube.com/watch?v=mxzsOZX6WVY)
+ ## Mapping data flow scenarios [Fuzzy lookups](http://youtu.be/7gdwExjHBbw)
data-factory Data Movement Security Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-movement-security-considerations.md
Previously updated : 05/26/2020 Last updated : 05/03/2021 # Security considerations for data movement in Azure Data Factory
The credentials can be stored within data factory or be [referenced by data fact
#### Ports used when encrypting linked service on self-hosted integration runtime
-By default, PowerShell uses port 8060 on the machine with self-hosted integration runtime for secure communication. If necessary, this port can be changed.
+By default, when remote access from intranet is enabled, PowerShell uses port 8060 on the machine with self-hosted integration runtime for secure communication. If necessary, this port can be changed from the Integration Runtime Configuration Manager on the Settings tab:
-![HTTPS port for the gateway](media/data-movement-security-considerations/https-port-for-gateway.png)
### Encryption in transit
In an enterprise, a corporate firewall runs on the central router of the organiz
The following table provides outbound port and domain requirements for corporate firewalls: > [!NOTE] > You might have to manage ports or set up an allow list for domains at the corporate firewall level, as required by the respective data sources. This table uses only Azure SQL Database, Azure Synapse Analytics, and Azure Data Lake Store as examples.
data-factory Format Excel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-excel.md
Follow this article when you want to **parse the Excel files**. Azure Data Facto
Excel format is supported for the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), and [SFTP](connector-sftp.md). It is supported as source but not sink.
-**Note**: ".xls" format is not supported while using [HTTP](connector-http.md).
+>[!NOTE]
+>".xls" format is not supported while using [HTTP](connector-http.md).
## Dataset properties
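As a hedged sketch of an Excel dataset (the linked-service, container, file, and sheet names are placeholders), the definition typically looks like:

```json
{
    "name": "ExcelDataset",
    "properties": {
        "type": "Excel",
        "linkedServiceName": {
            "referenceName": "AzureBlobStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "mycontainer",
                "fileName": "sales.xlsx"
            },
            "sheetName": "Sheet1",
            "firstRowAsHeader": true
        }
    }
}
```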
source(allowSchemaDrift: true,
- [Copy activity overview](copy-activity-overview.md) - [Lookup activity](control-flow-lookup-activity.md)-- [GetMetadata activity](control-flow-get-metadata-activity.md)
+- [GetMetadata activity](control-flow-get-metadata-activity.md)
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/policy-reference.md
Previously updated : 04/28/2021 Last updated : 05/04/2021 # Azure Policy built-in definitions for Data Factory (Preview)
data-factory Quickstart Create Data Factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-copy-data-tool.md
In this quickstart, you use the Azure portal to create a data factory. Then, you
> [!NOTE] > If you are new to Azure Data Factory, see [Introduction to Azure Data Factory](introduction.md) before doing this quickstart. ## Create a data factory
data-factory Quickstart Create Data Factory Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-dot-net.md
This quickstart describes how to use .NET SDK to create an Azure Data Factory. T
> [!NOTE] > This article does not provide a detailed introduction of the Data Factory service. For an introduction to the Azure Data Factory service, see [Introduction to Azure Data Factory](introduction.md). ### Visual Studio
data-factory Quickstart Create Data Factory Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-portal.md
This quickstart describes how to use the Azure Data Factory UI to create and mon
> [!NOTE] > If you are new to Azure Data Factory, see [Introduction to Azure Data Factory](introduction.md) before doing this quickstart. ### Video Watching this video helps you understand the Data Factory UI:
data-factory Quickstart Create Data Factory Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-powershell.md
This quickstart describes how to use PowerShell to create an Azure Data Factory.
> [!NOTE] > This article does not provide a detailed introduction of the Data Factory service. For an introduction to the Azure Data Factory service, see [Introduction to Azure Data Factory](introduction.md). ### Azure PowerShell
$RunId = Invoke-AzDataFactoryV2Pipeline `
"target": "CopyFromBlobToBlob" ``` ## Next steps
data-factory Supported File Formats And Compression Codecs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/supported-file-formats-and-compression-codecs.md
*This article applies to the following connectors: [Amazon S3](connector-amazon-simple-storage-service.md), [Azure Blob](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md), [Azure File Storage](connector-azure-file-storage.md), [File System](connector-file-system.md), [FTP](connector-ftp.md), [Google Cloud Storage](connector-google-cloud-storage.md), [HDFS](connector-hdfs.md), [HTTP](connector-http.md), and [SFTP](connector-sftp.md).* You can use the [Copy activity](copy-activity-overview.md) to copy files as-is between two file-based data stores, in which case the data is copied efficiently without any serialization or deserialization.
data-factory Tutorial Incremental Copy Multiple Tables Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-multiple-tables-powershell.md
Note the following points:
* For a list of Azure regions in which Data Factory is currently available, select the regions that interest you on the following page, and then expand **Analytics** to locate **Data Factory**: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). The data stores (Azure Storage, SQL Database, SQL Managed Instance, and so on) and computes (Azure HDInsight, etc.) used by the data factory can be in other regions. ## Create linked services
data-factory Data Factory Azure Blob Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-azure-blob-connector.md
You can copy data from any supported source data store to Azure Blob Storage or
## Supported scenarios You can copy data **from Azure Blob Storage** to the following data stores: You can copy data from the following data stores **to Azure Blob Storage**: > [!IMPORTANT] > Copy Activity supports copying data from/to both general-purpose Azure Storage accounts and Hot/Cool Blob storage. The activity supports **reading from block, append, or page blobs**, but supports **writing to only block blobs**. Azure Premium Storage is not supported as a sink because it is backed by page blobs.
The following sections provide details about JSON properties that are used to de
## Linked service properties There are two types of linked services you can use to link Azure Storage to an Azure data factory: the **AzureStorage** linked service and the **AzureStorageSas** linked service. The Azure Storage linked service provides the data factory with global access to the Azure Storage account, whereas the Azure Storage SAS (Shared Access Signature) linked service provides restricted, time-bound access. There are no other differences between these two linked services; choose the one that suits your needs. The following sections provide more details on these two linked services. ## Dataset properties To specify a dataset to represent input or output data in Azure Blob Storage, set the type property of the dataset to **AzureBlob**. Set the **linkedServiceName** property of the dataset to the name of the Azure Storage or Azure Storage SAS linked service. The type properties of the dataset specify the **blob container** and the **folder** in the blob storage.
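To illustrate the difference between the two linked services, here are hedged sketches of each definition (the account name, account key, and SAS URI are placeholders):

```json
{
    "name": "StorageLinkedService",
    "properties": {
        "type": "AzureStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountname>;AccountKey=<accountkey>"
        }
    }
}
```

```json
{
    "name": "StorageSasLinkedService",
    "properties": {
        "type": "AzureStorageSas",
        "typeProperties": {
            "sasUri": "<storageUri>?<sasToken>"
        }
    }
}
```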
data-factory Data Factory Azure Datalake Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-azure-datalake-connector.md
This article explains how to use Copy Activity in Azure Data Factory to move dat
## Supported scenarios You can copy data **from Azure Data Lake Store** to the following data stores: You can copy data from the following data stores **to Azure Data Lake Store**: > [!NOTE] > Create a Data Lake Store account before creating a pipeline with Copy Activity. For more information, see [Get started with Azure Data Lake Store](../../data-lake-store/data-lake-store-get-started-portal.md).
data-factory Data Factory Azure Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-azure-sql-connector.md
This article explains how to use the Copy Activity in Azure Data Factory to move
## Supported scenarios You can copy data **from Azure SQL Database** to the following data stores: You can copy data from the following data stores **to Azure SQL Database**: ## Supported authentication type The Azure SQL Database connector supports basic authentication.
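A hedged sketch of the Azure SQL Database linked service with basic authentication (server, database, and credential values are placeholders):

```json
{
    "name": "AzureSqlLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Server=tcp:<servername>.database.windows.net,1433;Database=<databasename>;User ID=<username>@<servername>;Password=<password>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30"
        }
    }
}
```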
data-factory Data Factory Azure Sql Data Warehouse Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-azure-sql-data-warehouse-connector.md
This article explains how to use the Copy Activity in Azure Data Factory to move
## Supported scenarios You can copy data **from Azure Synapse Analytics** to the following data stores: You can copy data from the following data stores **to Azure Synapse Analytics**: > [!TIP] > When copying data from SQL Server or Azure SQL Database to Azure Synapse Analytics, if the table does not exist in the destination store, Data Factory can automatically create the table in Azure Synapse Analytics by using the schema of the table in the source data store. See [Auto table creation](#auto-table-creation) for details.
Data Factory creates the table in the destination store with the same table name
| NVarChar | NVarChar (up to 4000) |
| Xml | Varchar (up to 8000) |

## Type mapping for Azure Synapse Analytics

As mentioned in the [data movement activities](data-factory-data-movement-activities.md) article, Copy Activity performs automatic type conversions from source types to sink types with the following two-step approach:
data-factory Data Factory Azure Table Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-azure-table-connector.md
The following sections provide details about JSON properties that are used to de
## Linked service properties

There are two types of linked services you can use to link Azure Blob storage to an Azure data factory: the **AzureStorage** linked service and the **AzureStorageSas** linked service. The Azure Storage linked service provides the data factory with global access to the storage account, whereas the Azure Storage SAS (shared access signature) linked service provides restricted, time-bound access. There are no other differences between these two linked services. Choose the linked service that suits your needs. The following sections provide more details on these two linked services.

## Dataset properties

For a full list of sections and properties available for defining datasets, see the [Creating datasets](data-factory-create-datasets.md) article. Sections such as structure, availability, and policy of a dataset JSON are similar for all dataset types (Azure SQL, Azure blob, Azure table, and so on).
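A minimal Data Factory v1 dataset sketch for an Azure table, with illustrative names (the dataset, linked service, and table names below are placeholders):

```json
{
  "name": "AzureTableInput",
  "properties": {
    "type": "AzureTable",
    "linkedServiceName": "StorageLinkedService",
    "typeProperties": {
      "tableName": "MyTable"
    },
    "availability": {
      "frequency": "Hour",
      "interval": 1
    }
  }
}
```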
data-factory Data Factory Copy Activity Tutorial Using Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-azure-resource-manager-template.md
You can also reuse the template to perform repeated tasks. For example, you need
## Next steps In this tutorial, you used Azure blob storage as a source data store and Azure SQL Database as a destination data store in a copy operation. The following table provides a list of data stores supported as sources and destinations by the copy activity: To learn about how to copy data to/from a data store, click the link for the data store in the table.
data-factory Data Factory Copy Activity Tutorial Using Dotnet Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-dotnet-api.md
For complete documentation on .NET API for Data Factory, see [Data Factory .NET
In this tutorial, you used Azure blob storage as a source data store and Azure SQL Database as a destination data store in a copy operation. The following table provides a list of data stores supported as sources and destinations by the copy activity: To learn about how to copy data to/from a data store, click the link for the data store in the table.
data-factory Data Factory Copy Activity Tutorial Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-powershell.md
In this tutorial, you created an Azure data factory to copy data from an Azure b
## Next steps In this tutorial, you used Azure blob storage as a source data store and Azure SQL Database as a destination data store in a copy operation. The following table provides a list of data stores supported as sources and destinations by the copy activity: To learn about how to copy data to/from a data store, click the link for the data store in the table.
data-factory Data Factory Copy Activity Tutorial Using Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-rest-api.md
In this tutorial, you used REST API to create an Azure data factory to copy data
## Next steps In this tutorial, you used Azure blob storage as a source data store and Azure SQL Database as a destination data store in a copy operation. The following table provides a list of data stores supported as sources and destinations by the copy activity: To learn about how to copy data to/from a data store, click the link for the data store in the table.
data-factory Data Factory Copy Activity Tutorial Using Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-visual-studio.md
It is not advisable and often against security policy to commit sensitive data s
## Next steps In this tutorial, you used Azure blob storage as a source data store and Azure SQL Database as a destination data store in a copy operation. The following table provides a list of data stores supported as sources and destinations by the copy activity: To learn about how to copy data to/from a data store, click the link for the data store in the table.
data-factory Data Factory Copy Data Wizard Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-copy-data-wizard-tutorial.md
In this step, you use the Azure portal to create an Azure data factory named **A
## Next steps In this tutorial, you used Azure blob storage as a source data store and Azure SQL Database as a destination data store in a copy operation. The following table provides a list of data stores supported as sources and destinations by the copy activity: For details about fields/properties that you see in the copy wizard for a data store, click the link for the data store in the table.
data-factory Data Factory Create Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-create-datasets.md
As you can see, the linked service defines how to connect to a SQL database. The
## <a name="Type"></a> Dataset type The type of the dataset depends on the data store you use. See the following table for a list of data stores supported by Data Factory. Click a data store to learn how to create a linked service and a dataset for that data store. > [!NOTE] > Data stores with * can be on-premises or on Azure infrastructure as a service (IaaS). These data stores require you to install [Data Management Gateway](data-factory-data-management-gateway.md).
data-factory Data Factory Create Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-create-pipelines.md
An input dataset represents the input for an activity in the pipeline and an out
### Data movement activities Copy Activity in Data Factory copies data from a source data store to a sink data store. Data Factory supports the following data stores. Data from any source can be written to any sink. Click a data store to learn how to copy data to and from that store. > [!NOTE] > Data stores with * can be on-premises or on Azure IaaS, and require you to install [Data Management Gateway](data-factory-data-management-gateway.md) on an on-premises/Azure IaaS machine.
Copy Activity in Data Factory copies data from a source data store to a sink dat
For more information, see the [Data Movement Activities](data-factory-data-movement-activities.md) article.

### Data transformation activities

For more information, see the [Data Transformation Activities](data-factory-data-transformation-activities.md) article.
data-factory Data Factory Data Movement Activities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-data-movement-activities.md
Copy Activity in Data Factory copies data from a source data store to a sink dat
> [!NOTE] > If you need to move data to/from a data store that Copy Activity doesn't support, use a **custom activity** in Data Factory with your own logic for copying/moving data. For details on creating and using a custom activity, see [Use custom activities in an Azure Data Factory pipeline](data-factory-use-custom-activities.md). > [!NOTE] > Data stores with * can be on-premises or on Azure IaaS, and require you to install [Data Management Gateway](data-factory-data-management-gateway.md) on an on-premises/Azure IaaS machine.
data-factory Data Factory Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-faq.md
Pipelines are supposed to bundle related activities. If the datasets that connec
### What are the supported data stores? Copy Activity in Data Factory copies data from a source data store to a sink data store. Data Factory supports the following data stores. Data from any source can be written to any sink. Click a data store to learn how to copy data to and from that store. > [!NOTE] > Data stores with * can be on-premises or on Azure IaaS, and require you to install [Data Management Gateway](data-factory-data-management-gateway.md) on an on-premises/Azure IaaS machine. ### What are the supported file formats?
+Azure Data Factory supports the following file format types:
+
+* [Text format](data-factory-supported-file-and-compression-formats.md#text-format)
+* [JSON format](data-factory-supported-file-and-compression-formats.md#json-format)
+* [Avro format](data-factory-supported-file-and-compression-formats.md#avro-format)
+* [ORC format](data-factory-supported-file-and-compression-formats.md#orc-format)
+* [Parquet format](data-factory-supported-file-and-compression-formats.md#parquet-format)
### Where is the copy operation performed?

See the [Globally available data movement](data-factory-data-movement-activities.md#global) section for details. In short, when an on-premises data store is involved, the copy operation is performed by the Data Management Gateway in your on-premises environment. When the data movement is between two cloud stores, the copy operation is performed in the region closest to the sink location in the same geography.
data-factory Data Factory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-introduction.md
A pipeline can have one or more activities. Activities define the actions to per
### Data movement activities

Copy Activity in Data Factory copies data from a source data store to a sink data store. Data from any source can be written to any sink. Select a data store to learn how to copy data to and from that store. Data Factory supports the following data stores: For more information, see [Move data by using Copy Activity](data-factory-data-movement-activities.md).

### Data transformation activities

For more information, see [Data transformation activities](data-factory-data-transformation-activities.md).
data-factory Data Factory Onprem File System Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-onprem-file-system-connector.md
This article explains how to use the Copy Activity in Azure Data Factory to copy
## Supported scenarios You can copy data **from an on-premises file system** to the following data stores: You can copy data from the following data stores **to an on-premises file system**: > [!NOTE] > Copy Activity does not delete the source file after it is successfully copied to the destination. If you need to delete the source file after a successful copy, create a custom activity to delete the file and use the activity in the pipeline.
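Copying from or to an on-premises file system uses a linked service that connects through Data Management Gateway. This is a hedged sketch; the share path, credentials, gateway name, and exact property casing below are illustrative placeholders:

```json
{
  "name": "OnPremisesFileServerLinkedService",
  "properties": {
    "type": "OnPremisesFileServer",
    "typeProperties": {
      "host": "\\\\<servername>\\<sharename>",
      "userid": "<domain>\\<username>",
      "password": "<password>",
      "gatewayName": "<gatewayname>"
    }
  }
}
```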
data-factory Data Factory Onprem Oracle Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-onprem-oracle-connector.md
This article explains how to use Copy Activity in Azure Data Factory to move dat
You can copy data *from an Oracle database* to the following data stores: You can copy data from the following data stores *to an Oracle database*: ## Prerequisites
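Among those prerequisites, copying from or to an Oracle database requires a linked service that connects through Data Management Gateway; a hedged sketch with placeholder values:

```json
{
  "name": "OnPremisesOracleLinkedService",
  "properties": {
    "type": "OnPremisesOracle",
    "typeProperties": {
      "connectionString": "data source=<datasource>;User Id=<username>;Password=<password>;",
      "gatewayName": "<gatewayname>"
    }
  }
}
```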
data-factory Data Factory Salesforce Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-salesforce-connector.md
See [RelationalSource type properties](#copy-activity-properties) for the list o
> [!NOTE] > To map columns from source dataset to columns from sink dataset, see [Mapping dataset columns in Azure Data Factory](data-factory-map-columns.md). ## Performance and tuning See the [Copy Activity performance and tuning guide](data-factory-copy-activity-performance.md) to learn about key factors that impact performance of data movement (Copy Activity) in Azure Data Factory and various ways to optimize it.
data-factory Data Factory Sftp Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-sftp-connector.md
For a full list of sections & properties available for defining activities, see
The properties available in the **typeProperties** section of the activity, however, vary with each activity type. For the Copy activity, the type properties vary depending on the types of sources and sinks.

## Supported file and compression formats

See the [File and compression formats in Azure Data Factory](data-factory-supported-file-and-compression-formats.md) article for details.
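For example, a dataset's type properties can declare both a serialization format and compression. This fragment is a hedged sketch; the folder and file names are placeholders:

```json
"typeProperties": {
  "folderPath": "<folder path>",
  "fileName": "<file name>.csv.gz",
  "format": {
    "type": "TextFormat",
    "columnDelimiter": ","
  },
  "compression": {
    "type": "GZip",
    "level": "Optimal"
  }
}
```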
data-factory Data Factory Sqlserver Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-sqlserver-connector.md
This article explains how to use the Copy Activity in Azure Data Factory to move
## Supported scenarios

You can copy data **from a SQL Server database** to the following data stores:

You can copy data from the following data stores **to a SQL Server database**:

## Supported SQL Server versions

This SQL Server connector supports copying data from/to the following versions of an instance hosted on-premises or in Azure IaaS, using both SQL authentication and Windows authentication: SQL Server 2016, SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008, and SQL Server 2005.
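A hedged sketch of a SQL Server linked service that uses SQL authentication and connects through Data Management Gateway (all values below are placeholders):

```json
{
  "name": "SqlServerLinkedService",
  "properties": {
    "type": "OnPremisesSqlServer",
    "typeProperties": {
      "connectionString": "Data Source=<servername>;Initial Catalog=<databasename>;Integrated Security=False;User ID=<username>;Password=<password>;",
      "gatewayName": "<gatewayname>"
    }
  }
}
```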
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/28/2021 Last updated : 05/04/2021
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
Title: How to deploy VMs on your Azure Stack Edge Pro via the Azure portal
+ Title: Deploy VMs on your Azure Stack Edge Pro via the Azure portal
description: Learn how to create and manage VMs on your Azure Stack Edge Pro via the Azure portal.
Last updated 03/30/2021
-# Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro device so I can use it to transform the data before sending it to Azure.
+# Customer intent: As an IT admin, I need to understand how to configure compute on an Azure Stack Edge Pro device so that I can use it to transform data before I send it to Azure.
# Deploy VMs on your Azure Stack Edge Pro GPU device via the Azure portal [!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-You can create and manage virtual machines (VMs) on an Azure Stack Edge device using Azure portal, templates, Azure PowerShell cmdlets and via Azure CLI/Python scripts. This article describes how to create and manage a VM on your Azure Stack Edge device using the Azure portal.
-
-This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
+You can create and manage virtual machines (VMs) on an Azure Stack Edge device by using the Azure portal, templates, and Azure PowerShell cmdlets, and via the Azure CLI or Python scripts. This article describes how to create and manage a VM on your Azure Stack Edge device by using the Azure portal.
> [!IMPORTANT] > We recommend that you enable multifactor authentication for the user who manages VMs that are deployed on your device from the cloud.
This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Az
The high-level summary of the deployment workflow is as follows:
-1. Enable a network interface for compute on your Azure Stack Edge device. This creates a virtual switch on the specified network interface.
-1. Enable cloud management of virtual machines from Azure portal.
-1. Upload a VHD to an Azure Storage account using Storage Explorer.
-1. Use the uploaded VHD to download the VHD onto the device and create a VM image from the VHD.
+1. Enable a network interface for compute on your Azure Stack Edge device. This step creates a virtual switch on the specified network interface.
+1. Enable cloud management of VMs from the Azure portal.
+1. Upload a VHD to an Azure Storage account by using Azure Storage Explorer.
+1. Use the uploaded VHD to download the VHD onto the device, and create a VM image from the VHD.
1. Use the resources created in the previous steps:
    1. VM image that you created.
- 1. VSwitch associated with the network interface on which you enabled compute.
- 1. Subnet associated with the VSwitch.
+ 1. Virtual switch associated with the network interface on which you enabled compute.
+ 1. Subnet associated with the virtual switch.
And create or specify the following resources inline:
1. VM name, a supported VM size, and sign-in credentials for the VM.
1. Create new data disks or attach existing data disks.
- 1. Configure static or dynamic IP for the VM. If providing a static IP, choose from a free IP in the subnet range of the network interface enabled for compute.
-
- Use the resources from above to create a virtual machine.
+ 1. Configure static or dynamic IP for the VM. If you're providing a static IP, choose from a free IP in the subnet range of the network interface enabled for compute.
+ Use the preceding resources to create a VM.
## Prerequisites Before you begin to create and manage VMs on your device via the Azure portal, make sure that:
-1. You have completed the network settings on your Azure Stack Edge Pro device as described in [Step 1: Configure Azure Stack Edge Pro device](./azure-stack-edge-gpu-connect-resource-manager.md#step-1-configure-azure-stack-edge-pro-device).
+1. You've completed the network settings on your Azure Stack Edge Pro device as described in [Step 1: Configure an Azure Stack Edge Pro device](./azure-stack-edge-gpu-connect-resource-manager.md#step-1-configure-azure-stack-edge-pro-device).
- 1. You have enabled a network interface for compute. This network interface IP is used to create a virtual switch for the VM deployment. In the local UI of your device, go to **Compute**. Select the network interface that you will use to create a virtual switch.
+ 1. You've enabled a network interface for compute. This network interface IP is used to create a virtual switch for the VM deployment. In the local UI of your device, go to **Compute**. Select the network interface that you'll use to create a virtual switch.
> [!IMPORTANT]
- > You can only configure one port for compute.
+ > You can configure only one port for compute.
1. Enable compute on the network interface. Azure Stack Edge Pro creates and manages a virtual switch corresponding to that network interface.
-1. You have access to a Windows or Linux VHD that you will use to create the VM image for the virtual machine you intend to create.
+1. You have access to a Windows or Linux VHD that you'll use to create the VM image for the VM you intend to create.
## Deploy a VM
-Follow these steps to create a virtual machine on your Azure Stack Edge device.
+Follow these steps to create a VM on your Azure Stack Edge device.
### Add a VM image
-1. Upload a VHD to an Azure Storage account. Follow the steps in [Upload a VHD using Azure Storage Explorer](../devtest-labs/devtest-lab-upload-vhd-using-storage-explorer.md).
+1. Upload a VHD to an Azure Storage account. Follow the steps in [Upload a VHD by using Azure Storage Explorer](../devtest-labs/devtest-lab-upload-vhd-using-storage-explorer.md).
-1. In the Azure portal, go to the Azure Stack Edge resource for your Azure Stack Edge device. Go to **Edge compute > Virtual Machines**.
+1. In the Azure portal, go to the Azure Stack Edge resource for your Azure Stack Edge device. Go to **Edge compute** > **Virtual machines**.
- ![Add VM image 1](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-1.png)
+ ![Screenshot that shows Edge compute and Virtual machines.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-1.png)
-1. Select **Virtual Machines** to go to the **Overview** page. **Enable** virtual machine cloud management.
- ![Add VM image 2](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-2.png)
+1. Select **Virtual Machines** to go to the **Overview** page. Select **Enable** to enable virtual machine cloud management.
-1. The first step is to add a VM image. You have already uploaded a VHD into the storage account in the earlier step. You will use this VHD to create a VM image.
+ ![Screenshot that shows the Overview page with the Enable button.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-2.png)
- Select **Add image** to download the VHD from the storage account and add to the device. The download process takes several minutes depending upon the size of the VHD and the internet bandwidth available for the download.
+1. The first step is to add a VM image. You've already uploaded a VHD into the storage account in the earlier step. You'll use this VHD to create a VM image.
- ![Add VM image 3](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-3.png)
+ Select **Add** to download the VHD from the storage account and add it to the device. The download process takes several minutes depending on the size of the VHD and the internet bandwidth available for the download.
-1. In the **Add image** blade, input the following parameters. Select **Add**.
+ ![Screenshot that shows the Overview page with the Add button.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-3.png)
+
+1. On the **Add image** pane, input the following parameters. Select **Add**.
|Parameter |Description |
|---|---|
|Download from storage blob |Browse to the location of the storage blob in the storage account where you uploaded the VHD. |
- |Download to | Automatically set to the current device where you are deploying the virtual machine. |
- |Save image as | The name for the VM image that you are creating from the VHD you uploaded to the storage account. |
- |OS type |Choose from Windows or Linux as the operating system of the VHD you will use to create the VM image. |
+ |Download to | Automatically set to the current device where you're deploying the VM. |
+ |Save image as | The name for the VM image that you're creating from the VHD you uploaded to the storage account. |
+ |OS type |Choose from Windows or Linux as the operating system of the VHD you'll use to create the VM image. |
- ![Add VM image 4](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-6.png)
+ ![Screenshot that shows the Add image page with the Add button.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-6.png)
+
+1. The VHD is downloaded, and the VM image is created. The image creation takes several minutes to complete. You'll see a notification for the successful completion of the VM image.
-1. The VHD is downloaded and the VM image is created. The image creation takes several minutes to complete. You see a notification for the successful completion of the VM image.
+ ![Screenshot that shows the notification for successful completion.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-8.png)
- ![Add VM image 5](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-8.png)
+1. After the VM image is successfully created, it's added to the list of images on the **Images** pane.
-1. After the VM image is successfully created, it is added to the list of images in the **Images** blade.
- ![Add VM image 6](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-9.png)
+ ![Screenshot that shows the Images pane.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-9.png)
- The **Deployments** blade updates to indicate the status of the deployment.
+ The **Deployments** pane updates to indicate the status of the deployment.
- ![Add VM image 7](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-10.png)
+ ![Screenshot that shows the Deployments pane.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-10.png)
- The newly added image is also displayed in the **Overview** page.
- ![Add VM image 8](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-11.png)
+ The newly added image is also displayed on the **Overview** page.
+
+ ![Screenshot that shows the Overview page with the image.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-11.png)
### Add a VM
-Follow these steps to create a VM after you have created a VM image.
+Follow these steps to create a VM after you've created a VM image.
-1. In the **Overview** page, select **Add virtual machine**.
+1. On the **Overview** page, select **Add virtual machine**.
- ![Add VM 1](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-1.png)
+ ![Screenshot that shows the Overview page and the Add virtual machine button.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-1.png)
-1. In the **Basics** tab, input the following parameters.
+1. On the **Basics** tab, input the following parameters.
|Parameter |Description |
Follow these steps to create a VM after you have created a VM image.
|Edge resource group | Create a new resource group for all the resources associated with the VM. |
|Image | Select from the VM images available on the device. |
|Size | Choose from the [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md). |
- |Username | Use the default username *azureuser* for the admin to sign into the VM. |
- |Authentication type | Choose from SSH public key or a user-defined password. |
- |Password | Enter a password to sign into the virtual machine. The password must be at least 12 characters long and meet the defined [Complexity requirements](../virtual-machines/windows/faq.md#what-are-the-password-requirements-when-creating-a-vm). |
+ |Username | Use the default username **azureuser** for the admin to sign in to the VM. |
+ |Authentication type | Choose from an SSH public key or a user-defined password. |
+ |Password | Enter a password to sign in to the VM. The password must be at least 12 characters long and meet the defined [complexity requirements](../virtual-machines/windows/faq.md#what-are-the-password-requirements-when-creating-a-vm). |
|Confirm password | Enter the password again. |
- ![Add VM 2](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-basics-1.png)
+ ![Screenshot that shows the Basics tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-basics-1.png)
Select **Next: Disks**.
-1. In the **Disks** tab, you will attach disks to your VM.
+1. On the **Disks** tab, you'll attach disks to your VM.
1. You can choose to **Create and attach a new disk** or **Attach an existing disk**.
- ![Add VM 3](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-disks-1.png)
+ ![Screenshot that shows the Disks tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-disks-1.png)
- 1. Select **Create and attach a new disk**. In the **Create new disk** blade, provide a name for the disk and the size in GiB.
+ 1. Select **Create and attach a new disk**. On the **Create new disk** pane, provide a name for the disk and the size in GiB.
- ![Add VM 4](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-disks-2.png)
+ ![Screenshot that shows the Create a new disk tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-disks-2.png)
- 1. Repeat the above to process to add more disks. After the disks are created, they show up in the **Disks** tab. Select **Next: Networking**.
+ 1. Repeat the preceding process to add more disks. After the disks are created, they show up on the **Disks** tab. Select **Next: Networking**.
-1. In the **Networking** tab, you will configure the network connectivity for your VM.
+1. On the **Networking** tab, you'll configure the network connectivity for your VM.
|Parameter |Description |
Follow these steps to create a VM after you have created a VM image.
|Subnet | This field is automatically populated with the subnet associated with the network interface on which you enabled compute. |
|IP address | Provide a static or a dynamic IP for your VM. The static IP should be an available, free IP from the specified subnet range. |
- ![Add VM 6](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-networking-1.png)
+ ![Screenshot that shows the Networking tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-networking-1.png)
- Select **Next: Review + Create**.
+ Select **Next: Advanced**.
-1. In the **Advanced** tab, you can specify the custom data or the cloud-init to customize your VM.
+1. On the **Advanced** tab, you can specify the custom data or the cloud-init to customize your VM.
- You can use cloud-init to customize a VM on its first boot. Use the cloud-init to install packages and write files, or to configure users and security. As cloud-init runs during the initial boot process, no additional steps are requires to apply your configuration. For detailed information on cloud-init, see [Cloud-init overview](../virtual-machines/linux/tutorial-automate-vm-deployment.md#cloud-init-overview).
+ You can use cloud-init to customize a VM on its first boot. Use the cloud-init to install packages and write files, or to configure users and security. As cloud-init runs during the initial boot process, no other steps are required to apply your configuration. For more information on cloud-init, see [Cloud-init overview](../virtual-machines/linux/tutorial-automate-vm-deployment.md#cloud-init-overview).
- ![Add VM 7](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-advanced-1.png)
+ ![Screenshot that shows the Advanced tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-advanced-1.png)
+
+ Select **Next: Review + Create**.
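The custom data passed on the **Advanced** tab is typically a cloud-init configuration. A minimal sketch of a `#cloud-config` document that installs a package and writes a file on first boot (the package name and file path are illustrative, not prescribed by this article):

```yaml
#cloud-config
# Runs once, during the VM's initial boot.
package_update: true
packages:
  - nginx                      # illustrative package
write_files:
  - path: /etc/motd            # illustrative file
    content: |
      Provisioned via Azure Stack Edge custom data.
runcmd:
  - systemctl enable --now nginx
```

The file must begin with the `#cloud-config` line for cloud-init to treat it as configuration rather than as a script.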
-1. In the **Review + Create** tab, review the specifications for the VM and select **Create**.
+1. On the **Review + Create** tab, review the specifications for the VM. Then select **Create**.
- ![Add VM 8](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-review-create-1.png)
+ ![Screenshot that shows the Review + create tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-review-create-1.png)
1. The VM creation starts and can take up to 20 minutes. You can go to **Deployments** to monitor the VM creation.
- ![Add VM 9](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-deployments-page-1.png)
+ ![Screenshot that shows the Deployments page.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-deployments-page-1.png)
1. After the VM is successfully created, the **Overview** page updates to display the new VM.
- ![Add VM 10](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-overview-page-1.png)
+ ![Screenshot that shows the Overview page with the new VM listed.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-overview-page-1.png)
1. Select the newly created VM to go to **Virtual machines**.
- ![Add VM 11](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-page-1.png)
+ ![Screenshot that shows selecting the new VM.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-page-1.png)
Select the VM to see the details.
- ![Add VM 12](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-details-1.png)
+ ![Screenshot that shows the VM details.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-details-1.png)
## Connect to a VM
-Depending on whether you created a Windows or a Linux VM, the steps to connect can be different. You can't connect to the VMs deployed on your device via the Azure portal. You need to take the following steps to connect to your Linux or Windows VM.
+Depending on whether you created a Linux or Windows VM, the steps to connect can be different. You can't connect to the VMs deployed on your device via the Azure portal. Follow the steps to connect to your Linux or Windows VM.
-### Connect to Linux VM
+### Connect to a Linux VM
Follow these steps to connect to a Linux VM.

[!INCLUDE [azure-stack-edge-gateway-connect-vm](../../includes/azure-stack-edge-gateway-connect-virtual-machine-linux.md)]
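Once the Linux VM's IP address is known (from the VM details page), the connection itself is standard SSH from any client that can reach the VM's network. A small sketch that assembles the command; the IP and username here are hypothetical placeholders for the values you configured:

```python
import subprocess

# Hypothetical values: use the IP from the VM details page and the
# username you provided when creating the VM.
vm_ip = "192.168.3.42"
vm_user = "azureuser"

# Port 22 on the VM must be reachable from this client machine.
cmd = ["ssh", f"{vm_user}@{vm_ip}"]
print(" ".join(cmd))

# Uncomment to open the interactive session:
# subprocess.run(cmd, check=True)
```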
-### Connect to Windows VM
+### Connect to a Windows VM
Follow these steps to connect to a Windows VM.
## Next steps
-To learn how to administer your Azure Stack Edge Pro device, see[Use local web UI to administer a Azure Stack Edge Pro](azure-stack-edge-manage-access-power-connectivity-mode.md).
+To learn how to administer your Azure Stack Edge Pro device, see [Use local web UI to administer an Azure Stack Edge Pro](azure-stack-edge-manage-access-power-connectivity-mode.md).
databox-online Azure Stack Edge Mini R Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-deploy-install.md
Previously updated : 10/20/2020 Last updated : 03/18/2021 # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Mini R device in datacenter so I can use it to transfer data to Azure.
Before you start cabling your device, you need the following:
- Your Azure Stack Edge Mini R physical device on the installation site.
- One power cable.
- At least one 1-GbE RJ-45 network cable to connect to the management interface. There are two 1-GbE network interfaces, one management and one data, on the device.
-- One 10-GbE SFP+ copper cable for each data network interface to be configured. At least one data network interface from among PORT 3 or PORT 4 needs to be connected to the Internet (with connectivity to Azure).
+- One 10-GbE SFP+ cable for each data network interface to be configured. At least one data network interface from PORT 3 or PORT 4 needs to be connected to the Internet (with connectivity to Azure).
+
+ Use of the highest-performing copper SFP+ (10 Gbps) transceiver is strongly recommended. Compatible fiber-optic transceivers can be used but have not been tested. For more information, see [transceiver and cable specifications](azure-stack-edge-mini-r-technical-specifications-compliance.md#transceivers-cables) for Azure Stack Edge Mini R.
+
- Access to one power distribution unit (recommended).

> [!NOTE]
On your Azure Stack Edge device:
- The device has 1 SSD disk in the slot.
- The device also has a CFx card that serves as storage for the operating system disk.
-- The front panel has network interfaces and access to Wi-Fi.
+- The front panel has network interfaces and access to Wi-Fi.
- - 2 X 1 GbE RJ 45 network interfaces. These are PORT 1 and PORT 2 on the local UI of the device.
- - 2 X 10 GbE SFP+ network interfaces. These are PORT 3 and PORT 4 on the local UI of the device.
+ - 2 X 1 GbE RJ 45 network interfaces (PORT 1 and PORT 2 on the local UI of the device)
+ - 2 X 10 GbE SFP+ network interfaces (PORT 3 and PORT 4 on the local UI of the device)
- One Wi-Fi port with a Wi-Fi transceiver attached to it.
- The front panel also has a power button.
Take the following steps to cable your device for power and network.
![Network and storage interfaces on device](./media/azure-stack-edge-mini-r-deploy-install/ports-front-plane.png)
-2. Locate the power button on the bottom left corner of the front of the device.
+2. Locate the power button on the bottom-left corner of the front of the device.
![Front plane of a device with power button on the device](./media/azure-stack-edge-mini-r-deploy-install/device-power-button.png)
databox-online Azure Stack Edge Mini R Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-technical-specifications-compliance.md
The hardware components of your Microsoft Azure Stack Edge Mini R device adhere to the technical specifications and regulatory standards outlined in this article. The technical specifications describe the CPU, memory, power supply units (PSUs), storage capacity, enclosure dimensions, and weight.
-## Compute, memory specifications
+## Compute, memory
The Azure Stack Edge Mini R device has the following specifications for compute and memory:
| Memory: usable | 32 GB RAM |
-## Compute acceleration specifications
+## Compute acceleration
Every Azure Stack Edge Mini R device includes a Vision Processing Unit (VPU) that enables Kubernetes, deep neural network, and computer vision-based applications.
| Compute Acceleration card | Intel Movidius Myriad X VPU <br> For more information, see [Intel Movidius Myriad X VPU](https://www.movidius.com/MyriadX) |
-## Storage specifications
+## Storage
The Azure Stack Edge Mini R device has 1 data disk and 1 boot disk (that serves as operating system storage). The following table shows the details for the storage capacity of the device.
*Some space is reserved for internal use.*
-## Network specifications
+## Network
-The Azure Stack Edge Mini R device has the following specifications for network:
+The Azure Stack Edge Mini R device has the following specifications for the network:
+|Specification |Value |
+|-|--|
+|Network interfaces |2 x 10 Gbps SFP+ <br> Shown as PORT 3 and PORT 4 in the local UI |
+|Network interfaces |2 x 1 Gbps RJ45 <br> Shown as PORT 1 and PORT 2 in the local UI |
+|Wi-Fi |802.11ac |
|Specification |Value |
|||
|Network interfaces |2 x 1 GbE RJ45 <br> Shown as PORT 1 and PORT 2 in the local UI |
|Wi-Fi |802.11ac |
+The following routers and switches are compatible with the 10 Gbps SFP+ network interfaces (Port 3 and Port 4) on your Azure Stack Edge Mini R devices:
-## Power supply unit specifications
+|Router/Switch |Notes |
+|||
+|[VoyagerESR 2.0](https://klastelecom.com/products/voyageresr2-0/) |Cisco ESS3300 Switch component |
+|[VoyagerSW26G](https://klastelecom.com/products/voyagersw26g/) | |
+|[VoyagerVM 3.0](https://klastelecom.com/products/voyager-vm-3-0/) | |
+|[TDC Switch](https://klastelecom.com/voyager-tdc/) | |
+|[TRX R2](https://klastelecom.com/products/trx-r2/) (8-Core) <!--Better link: https://www.klasgroup.com/products/voyagersw12gg/? On current link target, an "R6" link opens this page.--> | |
+|[SW12GG](https://www.klasgroup.com/products/voyagersw12gg/) | |
+
+## Transceivers, cables
+
+The following copper SFP+ (10 Gbps) transceivers and cables are strongly recommended for use with Azure Stack Edge Mini R devices. Compatible fiber-optic cables can be used with SFP+ network interfaces (Port 3 and Port 4) but have not been tested.
+
+|SFP+ transceiver type |Supported cables | Notes |
+|-|--|-|
+|SFP+ Direct-Attach Copper (10GSFP+Cu)| <ul><li>[FS SFP-10G-DAC](https://www.fs.com/c/fs-10g-sfp-dac-1115) (Available in industrial temperature -40°C to +85°C as custom order)</li><br><li>[10Gtek CAB-10GSFP-P0.5M](http://www.10gtek.com/10G-SFP+-182)</li><br><li>[Cisco SFP-H10GB-CU1M](https://www.cisco.com/c/en/us/products/collateral/interfaces-modules/transceiver-modules/data_sheet_c78-455693.html)</li></ul> |<ul><li>Also known as SFP+ Twinax DAC cables.</li><br><li>Recommended option because it has the lowest power usage and is the simplest.</li><br><li>Autonegotiation is not supported.</li><br><li>Connecting an SFP device to an SFP+ device is not supported.</li></ul>|
+
+## Power supply unit
The Azure Stack Edg