Updates from: 08/12/2021 03:08:50
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Adfs Saml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-adfs-saml.md
+
+ Title: Add AD FS as a SAML identity provider by using custom policies
+
+description: Set up AD FS 2016 using the SAML protocol and custom policies in Azure Active Directory B2C
+ Last updated : 03/15/2021
+zone_pivot_groups: b2c-policy-type
++
+# Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C
+
+This article shows you how to enable sign-in for an AD FS user account by using [custom policies](custom-policy-overview.md) in Azure Active Directory B2C (Azure AD B2C). You enable sign-in by adding a [SAML identity provider](identity-provider-generic-saml.md) to a custom policy.
+
+## Prerequisites
++
+## Create a self-signed certificate
++
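+If you don't already have a signing certificate, you can create a self-signed one with PowerShell and export it as a `.pfx` file that includes the private key. The following is a minimal sketch; the subject name, password, and output path are example values to replace with your own:
+
+```powershell
+# Create a self-signed signing certificate in the current user's store.
+$cert = New-SelfSignedCertificate `
+    -Subject "CN=yourappname.yourtenant.onmicrosoft.com" `
+    -CertStoreLocation "Cert:\CurrentUser\My" `
+    -KeyExportPolicy Exportable `
+    -KeySpec Signature `
+    -KeyLength 2048 `
+    -KeyAlgorithm RSA `
+    -HashAlgorithm SHA256
+
+# Protect the private key with a password and export the .pfx file
+# that you upload as a policy key in the next section.
+$mypwd = ConvertTo-SecureString -String "yourpassword" -Force -AsPlainText
+Export-PfxCertificate -Cert $cert -FilePath .\SamlIdpSigningCert.pfx -Password $mypwd
+```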
+## Create a policy key
+
+You need to store your certificate in your Azure AD B2C tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
+3. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+4. On the Overview page, select **Identity Experience Framework**.
+5. Select **Policy Keys** and then select **Add**.
+6. For **Options**, choose `Upload`.
+7. Enter a **Name** for the policy key. For example, `SAMLSigningCert`. The prefix `B2C_1A_` is added automatically to the name of your key.
+8. Browse to and select your certificate .pfx file with the private key.
+9. Select **Create**.
+
+## Add a claims provider
+
+If you want users to sign in using an AD FS account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+
+You can define an AD FS account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy. For more information, see [define a SAML identity provider](identity-provider-generic-saml.md).
+
+1. Open the *TrustFrameworkExtensions.xml* file.
+1. Find the **ClaimsProviders** element. If it does not exist, add it under the root element.
+1. Add a new **ClaimsProvider** as follows:
+
+ ```xml
+ <ClaimsProvider>
+ <Domain>contoso.com</Domain>
+ <DisplayName>Contoso</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="Contoso-SAML2">
+ <DisplayName>Contoso</DisplayName>
+ <Description>Login with your AD FS account</Description>
+ <Protocol Name="SAML2"/>
+ <Metadata>
+ <Item Key="WantsEncryptedAssertions">false</Item>
+ <Item Key="PartnerEntity">https://your-AD-FS-domain/federationmetadata/2007-06/federationmetadata.xml</Item>
+ </Metadata>
+ <CryptographicKeys>
+ <Key Id="SamlMessageSigning" StorageReferenceId="B2C_1A_SAMLSigningCert"/>
+ </CryptographicKeys>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="userPrincipalName" />
+ <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name"/>
+ <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="family_name"/>
+ <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email"/>
+ <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name"/>
+ <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="contoso.com" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication"/>
+ </OutputClaims>
+ <OutputClaimsTransformations>
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName"/>
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName"/>
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId"/>
+ <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromAlternativeSecurityId"/>
+ </OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-Saml-idp"/>
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+ ```
+
+1. Replace `your-AD-FS-domain` with the name of your AD FS domain, and replace the `DefaultValue` of the **identityProvider** output claim with a DNS name (an arbitrary value that indicates your domain).
+
+1. If your policy already contains the `SM-Saml-idp` technical profile, skip to the next step. Otherwise, locate the `<ClaimsProviders>` section and add the following XML snippet. For more information, see [single sign-on session management](custom-policy-reference-sso.md).
+
+ ```xml
+ <ClaimsProvider>
+ <DisplayName>Session Management</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="SM-Saml-idp">
+ <DisplayName>Session Management Provider</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.SamlSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="IncludeSessionIndex">false</Item>
+ <Item Key="RegisterServiceProviders">false</Item>
+ </Metadata>
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+ ```
+
+1. Save the file.
++
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
+ <ClaimsProviderSelection TargetClaimsExchangeId="ContosoExchange" />
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="ContosoExchange" TechnicalProfileReferenceId="Contoso-SAML2" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
++
+## Configure an AD FS relying party trust
+
+To use AD FS as an identity provider in Azure AD B2C, you need to create an AD FS Relying Party Trust with the Azure AD B2C SAML metadata. The following example shows the URL of the SAML metadata of an Azure AD B2C technical profile:
+
+```
+https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/your-policy/samlp/metadata?idptp=your-technical-profile
+```
+
+When using a [custom domain](custom-domain.md), use the following format:
+
+```
+https://your-domain-name/your-tenant-name.onmicrosoft.com/your-policy/samlp/metadata?idptp=your-technical-profile
+```
+
+Replace the following values:
+
+- **your-tenant-name** with your tenant name, such as your-tenant.onmicrosoft.com.
+- **your-domain-name** with your custom domain name, such as login.contoso.com.
+- **your-policy** with your policy name. For example, B2C_1A_signup_signin_adfs.
+- **your-technical-profile** with the name of your SAML identity provider technical profile. For example, Contoso-SAML2.
+
+Open a browser and navigate to the URL. Make sure you type the correct URL and that you have access to the XML metadata file. To add a new relying party trust by using the AD FS Management snap-in and manually configure the settings, perform the following procedure on a federation server. Membership in **Administrators** or equivalent on the local computer is the minimum required to complete this procedure.
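+Before you start the wizard, you can optionally confirm from PowerShell that the metadata endpoint responds. This is a sketch; replace the placeholder segments as described in the list above:
+
+```powershell
+# Fetch the Azure AD B2C SAML metadata; a 200 response with XML content
+# confirms the endpoint is reachable and the policy name is correct.
+Invoke-WebRequest -Uri "https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/your-policy/samlp/metadata?idptp=your-technical-profile"
+```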
+
+1. In Server Manager, select **Tools**, and then select **AD FS Management**.
+2. Select **Add Relying Party Trust**.
+3. On the **Welcome** page, choose **Claims aware**, and then select **Start**.
+4. On the **Select Data Source** page, select **Import data about the relying party published online or on a local network**, provide your Azure AD B2C metadata URL, and then select **Next**.
+5. On the **Specify Display Name** page, enter a **Display name**. Under **Notes**, enter a description for this relying party trust, and then select **Next**.
+6. On the **Choose Access Control Policy** page, select a policy, and then select **Next**.
+7. On the **Ready to Add Trust** page, review the settings, and then select **Next** to save your relying party trust information.
+8. On the **Finish** page, select **Close**. This action automatically displays the **Edit Claim Rules** dialog box.
+9. Select **Add Rule**.
+10. In **Claim rule template**, select **Send LDAP attributes as claims**.
+11. Provide a **Claim rule name**. For **Attribute store**, select **Active Directory**, add the following claims, and then select **Finish** and **OK**.
+
+ | LDAP attribute | Outgoing claim type |
+ | -- | - |
+ | User-Principal-Name | userPrincipalName |
+ | Surname | family_name |
+ | Given-Name | given_name |
+ | E-Mail-Address | email |
+ | Display-Name | name |
+
+    Note that some of the names will not display in the outgoing claim type dropdown. You need to type them in manually. (The dropdown is editable.)
+
+12. Based on your certificate type, you might need to set the hash algorithm. In the relying party trust (B2C Demo) properties window, select the **Advanced** tab, change the **Secure hash algorithm** to `SHA-256`, and then select **OK**.
+13. In Server Manager, select **Tools**, and then select **AD FS Management**.
+14. Select the relying party trust you created, select **Update from Federation Metadata**, and then select **Update**.
+
+## Test your custom policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Under **Policies**, select **Identity Experience Framework**.
+1. Select your relying party policy, for example `B2C_1A_signup_signin`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
+1. Select the **Run now** button.
+1. From the sign-up or sign-in page, select **Contoso AD FS** to sign in with the Contoso AD FS identity provider.
+
+If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+
+## Troubleshooting AD FS service
+
+AD FS is configured to use the Windows application log. If you experience challenges setting up AD FS as a SAML identity provider using custom policies in Azure AD B2C, you may want to check the AD FS event log:
+
+1. On the Windows **Search bar**, type **Event Viewer**, and then select the **Event Viewer** desktop app.
+1. To view the log of a different computer, right-click **Event Viewer (local)**. Select **Connect to another computer**, and fill in the fields to complete the **Select Computer** dialog box.
+1. In **Event Viewer**, open the **Applications and Services Logs**.
+1. Select **AD FS**, then select **Admin**.
+1. To view more information about an event, double-click the event.
+
+### SAML request is not signed with expected signature algorithm event
+
+This error indicates that the SAML request sent by Azure AD B2C is not signed with the expected signature algorithm configured in AD FS. For example, the SAML request is signed with the signature algorithm `rsa-sha256`, but the expected signature algorithm is `rsa-sha1`. To fix this issue, make sure both Azure AD B2C and AD FS are configured with the same signature algorithm.
+
+#### Option 1: Set the signature algorithm in Azure AD B2C
+
+You can configure how to sign the SAML request in Azure AD B2C. The [XmlSignatureAlgorithm](identity-provider-generic-saml.md) metadata controls the value of the `SigAlg` parameter (query string or post parameter) in the SAML request. The following example configures Azure AD B2C to use the `rsa-sha256` signature algorithm.
+
+```xml
+<Metadata>
+ <Item Key="WantsEncryptedAssertions">false</Item>
+ <Item Key="PartnerEntity">https://your-AD-FS-domain/federationmetadata/2007-06/federationmetadata.xml</Item>
+ <Item Key="XmlSignatureAlgorithm">Sha256</Item>
+</Metadata>
+```
+
+#### Option 2: Set the signature algorithm in AD FS
+
+Alternatively, you can configure the expected SAML request signature algorithm in AD FS.
+
+1. In Server Manager, select **Tools**, and then select **AD FS Management**.
+1. Select the **Relying Party Trust** you created earlier.
+1. Select **Properties**, then select **Advanced**.
+1. Configure the **Secure hash algorithm**, and select **OK** to save the changes.
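+
+If you prefer PowerShell over the management console, the same setting can be changed with the AD FS cmdlets. This is a sketch that assumes your relying party trust is named *B2C Demo*:
+
+```powershell
+# Set the expected SAML request signature algorithm to RSA-SHA256.
+# "B2C Demo" is an assumed display name; replace it with your trust's name.
+Set-AdfsRelyingPartyTrust `
+    -TargetName "B2C Demo" `
+    -SignatureAlgorithm "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"
+```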
+
active-directory-b2c Identity Provider Adfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-adfs.md
Title: Add AD FS as a SAML identity provider by using custom policies
+ Title: Add AD FS as an OpenID Connect identity provider by using custom policies
-description: Set up AD FS 2016 using the SAML protocol and custom policies in Azure Active Directory B2C
+description: Set up AD FS 2016 using the OpenID Connect protocol and custom policies in Azure Active Directory B2C
zone_pivot_groups: b2c-policy-type
-# Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C
+# Add AD FS as an OpenID Connect identity provider using custom policies in Azure Active Directory B2C
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] +
+## Prerequisites
+++
+## Create an AD FS application
+
+To enable sign-in for users with an AD FS account in Azure Active Directory B2C (Azure AD B2C), create an Application Group in your AD FS. For more information, see [Build a web application using OpenID Connect with AD FS 2016 and later](/windows-server/identity/ad-fs/development/enabling-openid-connect-with-ad-fs).
+
+To create an Application Group, follow these steps:
+
+1. In **Server Manager**, select **Tools**, and then select **AD FS Management**.
+1. In AD FS Management, right-click on **Application Groups** and select **Add Application Group**.
+1. On the Application Group Wizard **Welcome** screen:
+ 1. Enter the **Name** of your application. For example, *Azure AD B2C application*.
+ 1. Under **Client-Server applications**, select the **Web browser accessing a web application** template.
+ 1. Select **Next**.
+1. On the Application Group Wizard **Native Application** screen:
+ 1. Copy the **Client Identifier** value. The client identifier is your AD FS **Application ID**. You will need the application ID later in this article.
+ 1. In **Redirect URI**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
+ 1. Select **Next**, and then **Next** to complete the app registration wizard.
+ 1. Select **Close**.
++
+### Configure the app claims
+
+In this step, configure the claims that the AD FS application returns to Azure AD B2C.
+
+1. In the **Application Groups**, select the application you created.
+1. In the application properties window, under **Applications**, select the **Web Application**, and then select **Edit**.
+ :::image type="content" source="./media/identity-provider-adfs/ad-fs-edit-app.png" alt-text="Screenshot that shows how to edit a web application.":::
+1. Select the **Issuance Transformation Rules** tab. Then select **Add Rule**.
+1. In **Claim rule template**, select **Send LDAP attributes as claims**.
+1. Provide a **Claim rule name**. For **Attribute store**, select **Active Directory**, and then add the following claims:
+
+ | LDAP attribute | Outgoing claim type |
+ | -- | - |
+ | User-Principal-Name | UPN |
+ | Surname | family_name |
+ | Given-Name | given_name |
+ | Display-Name | name |
+
+    Note that some of the names will not display in the outgoing claim type dropdown. You need to type them in manually. (The dropdown is editable.)
+
+1. Select **Finish**, then select **Close**.
++ ::: zone pivot="b2c-user-flow"
+## Configure AD FS as an identity provider
+1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
+1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+1. Select **Identity providers**, and then select **New OpenID Connect provider**.
+1. Enter a **Name**. For example, *Contoso*.
+1. For **Metadata url**, enter the URL of the [AD FS OpenID Connect Configuration document](/windows-server/identity/ad-fs/development/ad-fs-openid-connect-oauth-concepts#ad-fs-endpoints). For example:
+ ```http
+ https://adfs.contoso.com/adfs/.well-known/openid-configuration
+ ```
+1. For **Client ID**, enter the application ID that you previously recorded.
+1. For **Scope**, enter `openid`.
+1. For **Response type**, select **id_token**.
+1. (Optional) For the **Domain hint**, enter `contoso.com`. For more information, see [Set up direct sign-in using Azure Active Directory B2C](direct-signin.md#redirect-sign-in-to-a-social-provider).
+1. Under **Identity provider claims mapping**, select the following claims:
-This article shows you how to enable sign-in for an AD FS user account by using [custom policies](custom-policy-overview.md) in Azure Active Directory B2C (Azure AD B2C). You enable sign-in by adding a [SAML identity provider](identity-provider-generic-saml.md) to a custom policy.
+ - **User ID**: *upn*
+ - **Display name**: *unique_name*
+ - **Given name**: *given_name*
+ - **Surname**: *family_name*
-## Prerequisites
+1. Select **Save**.
+## Add AD FS identity provider to a user flow
-## Create a self-signed certificate
+At this point, the AD FS (Contoso) identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the AD FS identity provider to a user flow:
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Select the user flow to which you want to add the AD FS identity provider (Contoso).
+1. Under **Social identity providers**, select **Contoso**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Select the **Run user flow** button.
+1. From the sign-up or sign-in page, select **Contoso** to sign in with the Contoso account.
-## Create a policy key
+If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
-You need to store your certificate in your Azure AD B2C tenant.
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
-3. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
-4. On the Overview page, select **Identity Experience Framework**.
-5. Select **Policy Keys** and then select **Add**.
-6. For **Options**, choose `Upload`.
-7. Enter a **Name** for the policy key. For example, `SAMLSigningCert`. The prefix `B2C_1A_` is added automatically to the name of your key.
-8. Browse to and select your certificate .pfx file with the private key.
-9. Click **Create**.
-## Add a claims provider
-If you want users to sign in using an AD FS account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+## Configure AD FS as an identity provider
-You can define an AD FS account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy. For more information, see [define a SAML identity provider](identity-provider-generic-saml.md).
+To enable users to sign in using an AD FS account, you need to define AD FS as a claims provider that Azure AD B2C can communicate with through an endpoint.
1. Open the *TrustFrameworkExtensions.xml* file.
-1. Find the **ClaimsProviders** element. If it does not exist, add it under the root element.
-1. Add a new **ClaimsProvider** as follows:
+2. Find the **ClaimsProviders** element. If it does not exist, add it under the root element.
+3. Add a new **ClaimsProvider** as follows:
   ```xml
   <ClaimsProvider>
     <Domain>contoso.com</Domain>
     <DisplayName>Contoso</DisplayName>
     <TechnicalProfiles>
- <TechnicalProfile Id="Contoso-SAML2">
+ <TechnicalProfile Id="Contoso-OpenIdConnect">
<DisplayName>Contoso</DisplayName>
- <Description>Login with your AD FS account</Description>
- <Protocol Name="SAML2"/>
+ <Protocol Name="OpenIdConnect" />
<Metadata>
- <Item Key="WantsEncryptedAssertions">false</Item>
- <Item Key="PartnerEntity">https://your-AD-FS-domain/federationmetadata/2007-06/federationmetadata.xml</Item>
+ <Item Key="METADATA">https://your-adfs-domain/adfs/.well-known/openid-configuration</Item>
+ <Item Key="response_types">id_token</Item>
+ <Item Key="response_mode">form_post</Item>
+ <Item Key="scope">openid</Item>
+ <Item Key="HttpBinding">POST</Item>
+ <Item Key="UsePolicyInRedirectUri">0</Item>
+ <!-- Update the Client ID below to the Application ID -->
+ <Item Key="client_id">Your AD FS application ID</Item>
</Metadata>
- <CryptographicKeys>
- <Key Id="SamlMessageSigning" StorageReferenceId="B2C_1A_SAMLSigningCert"/>
- </CryptographicKeys>
<OutputClaims>
- <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="userPrincipalName" />
- <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name"/>
- <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="family_name"/>
- <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email"/>
- <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name"/>
- <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="contoso.com" />
- <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication"/>
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="upn" />
+ <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" />
+ <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="family_name" />
+ <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="unique_name" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
       </OutputClaims>
       <OutputClaimsTransformations>
- <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName"/>
- <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName"/>
- <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId"/>
- <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromAlternativeSecurityId"/>
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
</OutputClaimsTransformations>
- <UseTechnicalProfileForSessionManagement ReferenceId="SM-Saml-idp"/>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
     </TechnicalProfile>
   </TechnicalProfiles>
   </ClaimsProvider>
   ```
-1. Replace `your-AD-FS-domain` with the name of your AD FS domain and replace the value of the **identityProvider** output claim with your DNS (Arbitrary value that indicates your domain).
-
-1. Locate the `<ClaimsProviders>` section and add the following XML snippet. If your policy already contains the `SM-Saml-idp` technical profile, skip to the next step. For more information, see [single sign-on session management](custom-policy-reference-sso.md).
+4. For the **Metadata url**, enter the URL of the [AD FS OpenID Connect Configuration document](/windows-server/identity/ad-fs/development/ad-fs-openid-connect-oauth-concepts#ad-fs-endpoints). For example:
- ```xml
- <ClaimsProvider>
- <DisplayName>Session Management</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="SM-Saml-idp">
- <DisplayName>Session Management Provider</DisplayName>
- <Protocol Name="Proprietary" Handler="Web.TPEngine.SSO.SamlSSOSessionProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
- <Metadata>
- <Item Key="IncludeSessionIndex">false</Item>
- <Item Key="RegisterServiceProviders">false</Item>
- </Metadata>
- </TechnicalProfile>
- </TechnicalProfiles>
- </ClaimsProvider>
-   ```
-1. Save the file.
+   ```http
+   https://adfs.contoso.com/adfs/.well-known/openid-configuration
+   ```
+5. Set **client_id** to the application ID from the application registration.
+6. Save the file.
[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
+
```xml
<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
  <ClaimsProviderSelections>
You can define an AD FS account as a claims provider by adding it to the **Claim
<OrchestrationStep Order="2" Type="ClaimsExchange">
  ...
  <ClaimsExchanges>
- <ClaimsExchange Id="ContosoExchange" TechnicalProfileReferenceId="Contoso-SAML2" />
+ <ClaimsExchange Id="ContosoExchange" TechnicalProfileReferenceId="Contoso-OpenIdConnect" />
  </ClaimsExchanges>
</OrchestrationStep>
```
[!INCLUDE [active-directory-b2c-configure-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
-## Configure an AD FS relying party trust
-
-To use AD FS as an identity provider in Azure AD B2C, you need to create an AD FS Relying Party Trust with the Azure AD B2C SAML metadata. The following example shows a URL address to the SAML metadata of an Azure AD B2C technical profile:
-
-```
-https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/your-policy/samlp/metadata?idptp=your-technical-profile
-```
-
-When using a [custom domain](custom-domain.md), use the following format:
-
-```
-https://your-domain-name/your-tenant-name.onmicrosoft.com/your-policy/samlp/metadata?idptp=your-technical-profile
-```
-
-Replace the following values:
-
-- **your-tenant-name** with your tenant name, such as your-tenant.onmicrosoft.com.
-- **your-domain-name** with your custom domain name, such as login.contoso.com.
-- **your-policy** with your policy name. For example, B2C_1A_signup_signin_adfs.
-- **your-technical-profile** with the name of your SAML identity provider technical profile. For example, Contoso-SAML2.
-
-Open a browser and navigate to the URL. Make sure you type the correct URL and that you have access to the XML metadata file. To add a new relying party trust by using the AD FS Management snap-in and manually configure the settings, perform the following procedure on a federation server. Membership in **Administrators** or equivalent on the local computer is the minimum required to complete this procedure.
-
-1. In Server Manager, select **Tools**, and then select **AD FS Management**.
-2. Select **Add Relying Party Trust**.
-3. On the **Welcome** page, choose **Claims aware**, and then click **Start**.
-4. On the **Select Data Source** page, select **Import data about the relying party publish online or on a local network**, provide your Azure AD B2C metadata URL, and then click **Next**.
-5. On the **Specify Display Name** page, enter a **Display name**, under **Notes**, enter a description for this relying party trust, and then click **Next**.
-6. On the **Choose Access Control Policy** page, select a policy, and then click **Next**.
-7. On the **Ready to Add Trust** page, review the settings, and then click **Next** to save your relying party trust information.
-8. On the **Finish** page, click **Close**, this action automatically displays the **Edit Claim Rules** dialog box.
-9. Select **Add Rule**.
-10. In **Claim rule template**, select **Send LDAP attributes as claims**.
-11. Provide a **Claim rule name**. For the **Attribute store**, select **Select Active Directory**, add the following claims, then click **Finish** and **OK**.
-
- | LDAP attribute | Outgoing claim type |
- | -- | - |
- | User-Principal-Name | userPrincipalName |
- | Surname | family_name |
- | Given-Name | given_name |
- | E-Mail-Address | email |
- | Display-Name | name |
-
- Note that these names will not display in the outgoing claim type dropdown. You need to manually type them in. (The dropdown is actually editable).
-
-12. Based on your certificate type, you may need to set the HASH algorithm. On the relying party trust (B2C Demo) properties window, select the **Advanced** tab and change the **Secure hash algorithm** to `SHA-256`, and click **Ok**.
-13. In Server Manager, select **Tools**, and then select **AD FS Management**.
-14. Select the relying party trust you created, select **Update from Federation Metadata**, and then click **Update**.
- ## Test your custom policy
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
-1. In the Azure portal, search for and select **Azure AD B2C**.
-1. Under **Policies**, select **Identity Experience Framework**
1. Select your relying party policy, for example `B2C_1A_signup_signin`.
1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button.
-1. From the sign-up or sign-in page, select **Contoso AD FS** to sign in with Contoso AD FS identity provider.
+1. From the sign-up or sign-in page, select **Contoso** to sign in with the Contoso account.
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
-## Troubleshooting AD FS service
-
-AD FS is configured to use the Windows application log. If you experience challenges setting up AD FS as a SAML identity provider using custom policies in Azure AD B2C, you may want to check the AD FS event log:
-
-1. On the Windows **Search bar**, type **Event Viewer**, and then select the **Event Viewer** desktop app.
-1. To view the log of a different computer, right-click **Event Viewer (local)**. Select **Connect to another computer**, and fill in the fields to complete the **Select Computer** dialog box.
-1. In **Event Viewer**, open the **Applications and Services Logs**.
-1. Select **AD FS**, then select **Admin**.
-1. To view more information about an event, double-click the event.
-
-### SAML request is not signed with expected signature algorithm event
-This error indicates that the SAML request sent by Azure AD B2C is not signed with the expected signature algorithm configured in AD FS. For example, the SAML request is signed with the signature algorithm `rsa-sha256`, but the expected signature algorithm is `rsa-sha1`. To fix this issue, make sure both Azure AD B2C and AD FS are configured with the same signature algorithm.
-
-#### Option 1: Set the signature algorithm in Azure AD B2C
-
-You can configure how to sign the SAML request in Azure AD B2C. The [XmlSignatureAlgorithm](identity-provider-generic-saml.md) metadata controls the value of the `SigAlg` parameter (query string or post parameter) in the SAML request. The following example configures Azure AD B2C to use the `rsa-sha256` signature algorithm.
-
-```xml
-<Metadata>
- <Item Key="WantsEncryptedAssertions">false</Item>
- <Item Key="PartnerEntity">https://your-AD-FS-domain/federationmetadata/2007-06/federationmetadata.xml</Item>
- <Item Key="XmlSignatureAlgorithm">Sha256</Item>
-</Metadata>
-```
-
-#### Option 2: Set the signature algorithm in AD FS
-
-Alternatively, you can configure the expected the SAML request signature algorithm in AD FS.
-1. In Server Manager, select **Tools**, and then select **AD FS Management**.
-1. Select the **Relying Party Trust** you created earlier.
-1. Select **Properties**, then select **Advance**
-1. Configure the **Secure hash algorithm**, and select **OK** to save the changes.
+## Next steps
+Learn how to [pass the AD FS token to your application](idp-pass-through-user-flow.md).
active-directory-b2c Troubleshoot Custom Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/troubleshoot-custom-policies.md
Azure AD B2C correlation ID is a unique identifier value that is attached to authentication requests.
- Find the sign-in request's Azure Application Insights logs. - Pass the correlation ID to your REST API and use it to identify the sign-in flow.
-The correlation ID is changed every time a new session is established. When debugging your policies, make sure close existing browser tabs. Or open a new in-private mode browser.
+The correlation ID is changed every time a new session is established. When you debug your policies, make sure that you close existing browser tabs or open a new in-private mode browser.
### Get the Azure AD B2C correlation ID
active-directory-domain-services Concepts Forest Trust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/concepts-forest-trust.md
The flow of secured communications over trusts determines the elasticity of a trust.
The flow of communication over trusts is determined by the direction of the trust. Trusts can be one-way or two-way, and can be transitive or non-transitive.
-The following diagram shows that all domains in *Tree 1* and *Tree 2* have transitive trust relationships by default. As a result, users in *Tree 1* can access resources in domains in *Tree 2* and users in *Tree 1* can access resources in *Tree 2*, when the proper permissions are assigned at the resource.
+The following diagram shows that all domains in *Tree 1* and *Tree 2* have transitive trust relationships by default. As a result, users in *Tree 1* can access resources in domains in *Tree 2* and users in *Tree 2* can access resources in *Tree 1*, when the proper permissions are assigned at the resource.
![Diagram of trust relationships between two forests](./media/concepts-forest-trust/trust-relationships.png)
active-directory-domain-services Migrate From Classic Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/migrate-from-classic-vnet.md
Previously updated : 09/24/2020 Last updated : 08/11/2021
In the *preparation* stage, Azure AD DS takes a backup of the domain to get the
![Preparation stage for migrating Azure AD DS](media/migrate-from-classic-vnet/migration-preparation.png)
-In the *migration* stage, the underlying virtual disks for the domain controllers from the Classic managed domain are copied to create the VMs using the Resource Manager deployment model. The managed domain is then recreated, which includes the LDAPS and DNS configuration. Synchronization to Azure AD is restarted, and LDAP certificates are restored. There's no need to rejoin any machines to a managed domain – they continue to be joined to the managed domain and run without changes.
+In the *migration* stage, the underlying virtual disks for the domain controllers from the Classic managed domain are copied to create the VMs using the Resource Manager deployment model. The managed domain is then recreated, which includes the LDAPS and DNS configuration. Synchronization to Azure AD is restarted, and LDAP certificates are restored. There's no need to rejoin any machines to a managed domain; they continue to be joined to the managed domain and run without changes.
![Migration of Azure AD DS](media/migrate-from-classic-vnet/migration-process.png)
To prepare the managed domain for migration, complete the following steps:
With the managed domain prepared and backed up, the domain can be migrated. This step recreates the Azure AD DS domain controller VMs using the Resource Manager deployment model. This step can take 1 to 3 hours to complete.
-Run the `Migrate-Aadds` cmdlet using the *-Commit* parameter. Provide the *-ManagedDomainFqdn* for your own managed domain prepared in the previous section, such as *aaddscontoso.com*:
+Run the `Migrate-Aadds` cmdlet using the *-Commit* parameter. Provide the *-ManagedDomainFqdn* for your own managed domain prepared in the previous section, such as *aaddscontoso.com*.
Specify the target resource group that contains the virtual network you want to migrate Azure AD DS to, such as *myResourceGroup*. Provide the target virtual network, such as *myVnet*, and the subnet, such as *DomainServices*.
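For example, a completed command might look like the following sketch. The network parameter names shown here are assumptions for illustration; confirm the exact names with `Get-Help Migrate-Aadds` before you run it:

```powershell
# A sketch of the migration commit step. The virtual network parameter
# names are assumptions; verify them against the script's own help.
Migrate-Aadds `
    -Commit `
    -ManagedDomainFqdn aaddscontoso.com `
    -VirtualNetworkResourceGroupName myResourceGroup `
    -VirtualNetworkName myVnet `
    -VirtualSubnetName DomainServices
```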
active-directory-domain-services Password Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/password-policy.md
Previously updated : 07/06/2020 Last updated : 08/11/2021
To create a custom password policy, you use the Active Directory Administrative
Set the precedence for your custom password policy to override the default, such as *1*.
-1. Edit other password policy settings as desired. Remember the following key points:
-
- * Settings like password complexity, age, or expiration time only to users manually created in a managed domain.
- * Account lockout settings apply to all users, but only take effect within the managed domain and not in Azure AD itself.
+1. Edit other password policy settings as desired. Account lockout settings apply to all users, but only take effect within the managed domain and not in Azure AD itself.
![Create a custom fine-grained password policy](./media/password-policy/custom-fgpp.png)
active-directory-domain-services Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/use-azure-monitor-workbooks.md
To help you better understand usage and identify potential security threats, the
To access the workbook template for the security overview report, complete the following steps:
-1. Search for and select **Azure Active Directory Domain Services** in the Azure portal.
+1. Search for and select **Azure AD Domain Services** in the Azure portal.
1. Select your managed domain, such as *aaddscontoso.com*
1. From the menu on the left-hand side, choose **Monitoring > Workbooks**
To help you troubleshoot issues for a specific user account, the account activit
To access the workbook template for the account activity report, complete the following steps:
-1. Search for and select **Azure Active Directory Domain Services** in the Azure portal.
+1. Search for and select **Azure AD Domain Services** in the Azure portal.
1. Select your managed domain, such as *aaddscontoso.com*
1. From the menu on the left-hand side, choose **Monitoring > Workbooks**
1. Choose the **Account Activity Report**.
For problems with users, learn how to troubleshoot [account sign-in problems][tr
[troubleshoot-sign-in]: troubleshoot-sign-in.md
[troubleshoot-account-lockout]: troubleshoot-account-lockout.md
[azure-monitor-queries]: /azure/data-explorer/kusto/query/
-[kusto-queries]: /azure/kusto/query/tutorial?pivots=azuredataexplorer
+[kusto-queries]: /azure/kusto/query/tutorial?pivots=azuredataexplorer
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Previously updated : 07/28/2021 Last updated : 08/11/2021
The most common use for a Temporary Access Pass is for a user to register authen
The user is now signed in and can update or register a method such as FIDO2 security key. Users who update their authentication methods due to losing their credentials or device should make sure they remove the old authentication methods.
+Users can also continue to sign in by using their password; a TAP doesn't replace a user's password.
Users can also use their Temporary Access Pass to register for Passwordless phone sign-in directly from the Authenticator app. For more information, see [Add your work or school account to the Microsoft Authenticator app](../user-help/user-help-auth-app-add-work-school-account.md).
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
Previously updated : 07/26/2021 Last updated : 08/11/2021
Azure [GOV endpoints](../../azure-government/compare-azure-government-global-azu
If you need more granularity, see the [list of Microsoft Azure Datacenter IP Ranges](https://www.microsoft.com/download/details.aspx?id=41653). This list is updated every Wednesday and goes into effect the next Monday.
+To determine whether access to a URL and port is restricted in an environment, run the following cmdlet:
+
+```powershell
+Test-NetConnection -ComputerName ssprdedicatedsbprodncu.servicebus.windows.net -Port 443
+```
+
+Or run the following:
+
+```powershell
+Invoke-WebRequest -Uri https://ssprdedicatedsbprodscu.servicebus.windows.net -Verbose
+```
For more information, see the [connectivity prerequisites for Azure AD Connect](../hybrid/how-to-connect-install-prerequisites.md).

### Restart the Azure AD Connect Sync service
active-directory How To Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-prerequisites.md
To enable TLS 1.2, follow these steps.
```
1. Restart the server.
+## NTLM requirement
+
+You should not enable NTLM on the Windows Server that runs the Azure AD Connect Provisioning Agent. If it is already enabled, make sure you disable it.
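+
+To check whether NTLM restriction policies are already configured on the server, you can inspect the registry values that back the **Network security: Restrict NTLM** settings. This is a sketch under the assumption that configured policies appear in the standard location; missing values mean the policies haven't been set:
+
+```powershell
+# The "Network security: Restrict NTLM" policies live under this key when configured.
+$lsaKey = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0'
+
+# Missing properties mean the corresponding policy has not been set.
+Get-ItemProperty -Path $lsaKey -ErrorAction SilentlyContinue |
+    Select-Object RestrictSendingNTLMTraffic, RestrictReceivingNTLMTraffic
+```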
## Known limitations
When using OU scoping filter
## Next steps
- [What is provisioning?](what-is-provisioning.md)
-- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
+- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
active-directory Howto Create Self Signed Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-create-self-signed-certificate.md
+
+ Title: Create a self-signed public certificate to authenticate your application | Azure
+
+description: Create a self-signed public certificate to authenticate your application.
+ Last updated : 08/10/2021
+#Customer intent: As an application developer, I want to understand the basic concepts of authentication and authorization in the Microsoft identity platform.
++
+# How to: Create a self-signed public certificate to authenticate your application
+
+Azure Active Directory (Azure AD) supports two types of authentication for service principals: **password-based authentication** (app secret) and **certificate-based authentication**. While app secrets can easily be created in the Azure portal, it's recommended that your application uses a certificate.
+
+For testing, you can use a self-signed public certificate instead of a Certificate Authority (CA)-signed certificate. This article shows you how to use Windows PowerShell to create and export a self-signed certificate.
+
+> [!CAUTION]
+> Using a self-signed certificate is only recommended for development, not production.
+
+You configure various parameters for the certificate, such as the cryptographic and hash algorithms, the certificate validity period, and your domain name. Then you export the certificate with or without its private key, depending on your application needs.
+
+The application that initiates the authentication session requires the private key while the application that confirms the authentication requires the public key. So, if you're authenticating from your PowerShell desktop app to Azure AD, you only export the public key (`.cer` file) and upload it to the Azure portal. Your PowerShell app uses the private key from your local certificate store to initiate authentication and obtain access tokens for Microsoft Graph.
+
+Your application may also be running from another machine, such as Azure Automation. In this scenario, you export the public and private key pair from your local certificate store, upload the public key to the Azure portal, and the private key (a `.pfx` file) to Azure Automation. Your application running in Azure Automation will use the private key to initiate authentication and obtain access tokens for Microsoft Graph.
+
+This article uses the `New-SelfSignedCertificate` PowerShell cmdlet to create the self-signed certificate and the `Export-Certificate` cmdlet to export it to a location that is easily accessible. These cmdlets are built in to modern versions of Windows (Windows 8.1 and later, and Windows Server 2012 R2 and later). The self-signed certificate will have the following configuration:
+
++ A 2048-bit key length. While longer values are supported, the 2048-bit size is highly recommended for the best combination of security and performance.
++ Uses the RSA cryptographic algorithm. Azure AD currently supports only RSA.
++ The certificate is signed with the SHA256 hash algorithm. Azure AD also supports certificates signed with SHA384 and SHA512 hash algorithms.
++ The certificate is valid for only one year.
++ The certificate is supported for use for both client and server authentication.
+
+> [!NOTE]
+> To customize the start and expiry date as well as other properties of the certificate, see the [`New-SelfSignedCertificate` reference](/powershell/module/pki/new-selfsignedcertificate?view=windowsserver2019-ps&preserve-view=true).
++
+## Option 1: Create and export your public certificate without a private key
+
+Use the certificate you create using this method to authenticate from an application running from your machine. For example, authenticate from Windows PowerShell.
+
+In an elevated PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give to your certificate.
+
+```powershell
+
+$cert = New-SelfSignedCertificate -Subject "CN={certificateName}" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256 ## Replace {certificateName}
+
+```
+
+The **$cert** variable in the previous command stores your certificate in the current session and allows you to export it. The command below exports the certificate in `.cer` format. You can also export it in other formats supported on the Azure portal including `.pem` and `.crt`.
+
+```powershell
+
+Export-Certificate -Cert $cert -FilePath "C:\Users\admin\Desktop\{certificateName}.cer" ## Specify your preferred location and replace {certificateName}
+
+```
+
+Your certificate is now ready to upload to the Azure portal. Once it's uploaded, retrieve the certificate thumbprint, which you use to authenticate your application.
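+
+As an illustration of the desktop scenario, a PowerShell app could then sign in to Microsoft Graph with the certificate by thumbprint. This sketch uses the Microsoft Graph PowerShell SDK; the client ID, tenant ID, and thumbprint are placeholders for your own values:
+
+```powershell
+# A sketch of certificate-based sign-in with the Microsoft Graph PowerShell SDK
+# (Install-Module Microsoft.Graph). All three values below are placeholders.
+Connect-MgGraph `
+    -ClientId "11111111-2222-3333-4444-555555555555" `
+    -TenantId "contoso.onmicrosoft.com" `
+    -CertificateThumbprint "0123456789ABCDEF0123456789ABCDEF01234567"
+```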
++
+## Option 2: Create and export your public certificate with its private key
+
+Use this option to create a certificate and its private key if your application will be running from another machine or cloud, such as Azure Automation.
+
+In an elevated PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give your certificate.
+
+```powershell
+
+$cert = New-SelfSignedCertificate -Subject "CN={certificateName}" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256 ## Replace {certificateName}
+
+```
+
+The **$cert** variable in the previous command stores your certificate in the current session and allows you to export it. The command below exports the certificate in `.cer` format. You can also export it in other formats supported on the Azure portal including `.pem` and `.crt`.
++
+```powershell
+
+Export-Certificate -Cert $cert -FilePath "C:\Users\admin\Desktop\{certificateName}.cer" ## Specify your preferred location and replace {certificateName}
+
+```
+
+Still in the same session, create a password for your certificate private key and save it in a variable. In the following command, replace `{myPassword}` with the password that you wish to use to protect your certificate private key.
+
+```powershell
+
+$mypwd = ConvertTo-SecureString -String "{myPassword}" -Force -AsPlainText ## Replace {myPassword}
+
+```
+
+Now, using the password you stored in the `$mypwd` variable, secure and export your private key.
+
+```powershell
+
+Export-PfxCertificate -Cert $cert -FilePath "C:\Users\admin\Desktop\{privateKeyName}.pfx" -Password $mypwd ## Specify your preferred location and replace {privateKeyName}
+
+```
+
+Your certificate (`.cer` file) is now ready to upload to the Azure portal. You also have a private key (`.pfx` file) that is encrypted and can't be read by other parties. Once it's uploaded, retrieve the certificate thumbprint, which you use to authenticate your application.
++
+## Optional task: Delete the certificate from the keystore
+
+If you created the certificate using Option 2, you can delete the key pair from your personal store. First, run the following command to retrieve the certificate thumbprint.
+
+```powershell
+
+Get-ChildItem -Path "Cert:\CurrentUser\My" | Where-Object {$_.Subject -Match "{certificateName}"} | Select-Object Thumbprint, FriendlyName ## Replace {certificateName} with the name you gave your certificate
+
+```
+
+Then, copy the thumbprint that is displayed and use it to delete the certificate and its private key.
+
+```powershell
+
+Remove-Item -Path Cert:\CurrentUser\My\{pasteTheCertificateThumbprintHere} -DeleteKey
+
+```
+
+### Know your certificate expiry date
+
+The self-signed certificate you created by following the steps above has a limited lifetime before it expires. In the **App registrations** section of the Azure portal, the **Certificates & secrets** screen displays the expiration date of the certificate. If you're using Azure Automation, the **Certificates** screen on the Automation account displays the expiration date of the certificate. Before the certificate expires, follow the previous steps to create and upload a new self-signed certificate.
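+
+To check the expiration date of a certificate in your local store, you can read its `NotAfter` property. A quick sketch; replace `{certificateName}` as before:
+
+```powershell
+# List matching certificates with their expiration dates.
+# Replace {certificateName} with the subject name you used earlier.
+Get-ChildItem -Path "Cert:\CurrentUser\My" |
+    Where-Object {$_.Subject -Match "{certificateName}"} |
+    Select-Object Subject, Thumbprint, NotAfter
+```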
+
+## Next steps
+
+[Manage certificates for federated single sign-on in Azure Active Directory](../manage-apps/manage-certificates-for-federated-single-sign-on.md)
active-directory Howto Get List Of All Active Directory Auth Library Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-get-list-of-all-active-directory-auth-library-apps.md
Previously updated : 05/27/2021 Last updated : 07/22/2021
Support for Active Directory Authentication Library (ADAL) will end on June 30,
## Sign-ins workbook
-Workbooks are a set of queries that collect and visualize information that is available in Azure AD logs. [Learn more about the sign-in logs schema here](../reports-monitoring/reference-azure-monitor-sign-ins-log-schema.md). The Sign-ins workbook in the Azure AD admin portal now has a new table to assist you in determining which applications use ADAL and how often they are used. First, we'll detail how to access the workbook before showing the visualization for the list of applications.
+Workbooks are a set of queries that collect and visualize information that is available in Azure Active Directory (Azure AD) logs. [Learn more about the sign-in logs schema here](../reports-monitoring/reference-azure-monitor-sign-ins-log-schema.md). The Sign-ins workbook in the Azure AD admin portal now has a table to assist you in determining which applications use ADAL and how often they are used. First, we'll detail how to access the workbook before showing the visualization for the list of applications.
-### Access the workbook
+## Step 1: Integrate audit logs with Azure Monitor
-If your organization is new to Azure Monitoring workbooks, [integrate your Azure AD sign-in and audit logs with Azure Monitor](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) before accessing the workbook. This integration allows you to store, query, and visualize your logs using workbooks for up to two years. Only sign-in and audit events created after Azure Monitor integration will be stored. Insights before the date of the Azure Monitor integration won't be available. You can use the workbook to assess past insights if your Azure AD sign-in and audit logs is already integrated with Azure Monitor.
+Follow the steps in [Integrate your Azure AD sign-in and audit logs with Azure Monitor](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) before accessing the workbook. Only sign-in and audit events created after Azure Monitor integration will be stored.
-To access the workbook:
+## Step 2: Access sign-ins workbook in Azure portal
-1. Sign into the Azure portal
-2. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**
-3. In the Usage section, open the **Sign-ins** workbook
+Once you've integrated your Azure AD sign-in and audit logs with Azure Monitor as specified in the Azure Monitor integration, access the sign-ins workbook:
- :::image type="content" source="media/howto-get-list-of-all-active-directory-auth-library-apps/sign-in-workbook.png" alt-text="Screenshot of the Azure Active Directory portal workbooks interface highlighting the sign-ins workbook.":::
+ 1. Sign in to the Azure portal.
+ 1. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**.
+ 1. In the **Usage** section, open the **Sign-ins** workbook.
-### Identify apps using ADAL for authentication
+ :::image type="content" source="media/howto-get-list-of-all-active-directory-auth-library-apps/sign-in-workbook.png" alt-text="Screenshot of the Azure Active Directory portal workbooks interface highlighting the sign-ins workbook.":::
-The Sign-ins workbook has a new table at the bottom of the page that can show you which recently used apps are using ADAL as shown below. You can also export a list of the apps. Update these apps to use MSAL.
+## Step 3: Identify apps that use ADAL
+The table at the bottom of the Sign-ins workbook page lists apps that recently used ADAL. You can also export a list of the apps. Update these apps to use MSAL.
+
:::image type="content" source="media/howto-get-list-of-all-active-directory-auth-library-apps/active-directory-auth-library-apps-present.png" alt-text="Screenshot of sign-ins workbook displaying apps that use Active Directory Authentication Library.":::
-
+
If there are no apps using ADAL, the workbook will display a view as shown below.
-
+
:::image type="content" source="media/howto-get-list-of-all-active-directory-auth-library-apps/no-active-directory-auth-library-apps.png" alt-text="Screenshot of sign-ins workbook when no app is using Active Directory Authentication Library.":::
+## Step 4: Update your code
+
+After identifying your apps that use ADAL, migrate them to MSAL depending on your application type as illustrated below.
++ ## Next steps
-After identifying your apps, we recommend you [start migrating all ADAL apps to MSAL](msal-migration.md).
+For more information about MSAL, including usage information and which libraries are available for different programming languages and application types, see:
+
+- [Acquire and cache tokens using MSAL](msal-acquire-cache-tokens.md)
+- [Application configuration options](msal-client-application-configuration.md)
+- [List of MSAL authentication libraries](reference-v2-libraries.md)
active-directory Msal Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-migration.md
Previously updated : 08/07/2020 Last updated : 07/22/2021
-# Customer intent: As an application developer, I want to learn about the differences between the ADAL and MSAL libraries so I can migrate my applications to MSAL.
+# Customer intent: As an application developer, I want to learn about the MSAL library so I can migrate my ADAL applications to MSAL.
+ # Migrate applications to the Microsoft Authentication Library (MSAL)
-Many developers have built and deployed applications using the Azure Active Directory Authentication Library (ADAL). We now recommend using the Microsoft Authentication Library (MSAL) for authentication and authorization of Azure AD entities.
+If any of your applications use the Azure Active Directory Authentication Library (ADAL) for authentication and authorization functionality, it's time to migrate them to the [Microsoft Authentication Library (MSAL)](msal-overview.md#languages-and-frameworks).
+
+- All Microsoft support and development for ADAL, including security fixes, ends on June 30, 2022.
+- No new features have been added to ADAL since June 30, 2020.
-By using MSAL instead of ADAL:
+> [!WARNING]
+> If you choose not to migrate to MSAL before ADAL support ends on June 30, 2022, you put your app's security at risk. Existing apps that use ADAL will continue to work after the end-of-support date, but Microsoft will no longer release security fixes for ADAL.
-- You can authenticate a broader set of identities:
- - Azure AD identities
- - Microsoft accounts
- - Social and local accounts by using Azure AD B2C
-- Your users will get the best single-sign-on experience.
-- Your application can enable incremental consent.
-- Supporting Conditional Access is easier.
-- You benefit from innovation. Because all Microsoft development efforts are now focused on MSAL, no new features will be implemented in ADAL.
+## Why switch to MSAL?
-**MSAL is now the recommended authentication library for use with the Microsoft identity platform**.
+MSAL provides multiple benefits over ADAL, including the following features:
-## Migration guidance
+|Features|MSAL|ADAL|
+||||
+|**Security**|||
+|Security fixes beyond June 30, 2022|![Security fixes beyond June 30, 2022 - MSAL provides the feature][y]|![Security fixes beyond June 30, 2022 - ADAL doesn't provide the feature][n]|
+| Proactively refresh and revoke tokens based on policy or critical events for Microsoft Graph and other APIs that support [Continuous Access Evaluation (CAE)](app-resilience-continuous-access-evaluation.md).|![Proactively refresh and revoke tokens based on policy or critical events for Microsoft Graph and other APIs that support Continuous Access Evaluation (CAE) - MSAL provides the feature][y]|![Proactively refresh and revoke tokens based on policy or critical events for Microsoft Graph and other APIs that support Continuous Access Evaluation (CAE) - ADAL doesn't provide the feature][n]|
+| Standards compliant with OAuth v2.0 and OpenID Connect (OIDC) |![Standards compliant with OAuth v2.0 and OpenID Connect (OIDC) - MSAL provides the feature][y]|![Standards compliant with OAuth v2.0 and OpenID Connect (OIDC) - ADAL doesn't provide the feature][n]|
+|**User accounts and experiences**|||
+|Azure Active Directory (Azure AD) accounts|![Azure Active Directory (Azure AD) accounts - MSAL provides the feature][y]|![Azure Active Directory (Azure AD) accounts - ADAL provides the feature][y]|
+| Microsoft account (MSA) |![Microsoft account (MSA) - MSAL provides the feature][y]|![Microsoft account (MSA) - ADAL doesn't provide the feature][n]|
+| Azure AD B2C accounts |![Azure AD B2C accounts - MSAL provides the feature][y]|![Azure AD B2C accounts - ADAL doesn't provide the feature][n]|
+| Best single sign-on experience |![Best single sign-on experience - MSAL provides the feature][y]|![Best single sign-on experience - ADAL doesn't provide the feature][n]|
+|**Resilience**|||
+| Proactive token renewal |![Proactive token renewal - MSAL provides the feature][y]|![Proactive token renewal - ADAL doesn't provide the feature][n]|
+| Throttling |![Throttling - MSAL provides the feature][y]|![Throttling - ADAL doesn't provide the feature][n]|
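+
+As a minimal illustrative sketch (the client ID is a placeholder, not from this article), MSAL Python's cache-first pattern shows the single sign-on and proactive token renewal behaviors listed in the table above:
+
+ ```python
+ import msal
+
+ app = msal.PublicClientApplication(
+     client_id="00000000-0000-0000-0000-000000000000",  # placeholder client ID
+     authority="https://login.microsoftonline.com/common",
+ )
+ scopes = ["User.Read"]
+ accounts = app.get_accounts()
+ # The token cache is consulted first; expired or near-expiry tokens are renewed silently.
+ result = app.acquire_token_silent(scopes, account=accounts[0]) if accounts else None
+ if not result:
+     result = app.acquire_token_interactive(scopes)  # interactive prompt only as a fallback
+ ```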
-The following articles can help you migrate to MSAL:
+## AD FS support in MSAL.NET
-- [Migrate to MSAL.Android](migrate-android-adal-msal.md)
-- [Migrate to MSAL.iOS / macOS](migrate-objc-adal-msal.md)
-- [Migrate to MSAL Java](migrate-adal-msal-java.md)
-- [Migrate to MSAL.js](msal-compare-msal-js-and-adal-js.md)
-- [Migrate to MSAL.NET](msal-net-migration.md)
-- [Migrate to MSAL Python](migrate-python-adal-msal.md)
-- [Migrate Xamarin apps using brokers to MSAL.NET](msal-net-migration-ios-broker.md)
+You can use MSAL.NET, MSAL Java, and MSAL Python to get tokens from Active Directory Federation Services (AD FS) 2019 or later. Earlier versions of AD FS, including AD FS 2016, are unsupported by MSAL.
-## Frequently asked questions (FAQ)
+If you need to continue using AD FS, you should upgrade to AD FS 2019 or later before you update your applications from ADAL to MSAL.
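+
+A minimal sketch, assuming MSAL Python and placeholder AD FS host and client ID values, looks like this:
+
+ ```python
+ import msal
+
+ app = msal.PublicClientApplication(
+     client_id="00000000-0000-0000-0000-000000000000",  # placeholder client ID
+     authority="https://adfs.contoso.com/adfs",         # AD FS 2019 or later (placeholder host)
+ )
+ # Opens a browser against the AD FS authority instead of Azure AD.
+ result = app.acquire_token_interactive(scopes=["openid"])
+ ```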
-__Q: Is ADAL being deprecated?__
-A: Yes. Starting June 30th, 2020, we will no longer add new features to ADAL. We'll continue adding critical security fixes to ADAL until June 30th, 2022. After this date, your apps using ADAL will continue to work, but we recommend upgrading to MSAL to take advantage of the latest features and to stay secure.
+## How to migrate to MSAL
-__Q: Will my existing ADAL apps stop working?__
-A: No. Your existing apps will continue working without modification. If you're planning to keep them beyond June 30th, 2022, you should consider updating your apps to MSAL to keep them secure, but migrating to MSAL isn't required to maintain existing functionality.
+Before you start the migration, you need to identify which of your apps are using ADAL for authentication. Follow the steps in this article to get a list by using the Azure portal:
+- [How to: Get a complete list of apps using ADAL in your tenant](howto-get-list-of-all-active-directory-auth-library-apps.md)
-__Q: How do I know which of my apps are using ADAL?__
-A: If you have the source code for the application, you can reference the above migration guides to help determine which library the app uses and how to migrate it to MSAL. If you partnered with an ISV, we suggest you reach out to them directly to understand their migration journey to MSAL.
+After identifying your apps that use ADAL, migrate them to MSAL based on your application type.
-__Q: Why should I invest in moving to MSAL?__
-A: MSAL contains new features not in ADAL including incremental consent, single sign-on, and token cache management. Also, unlike ADAL, MSAL will continue to receive security patches beyond June 30th, 2022. [Learn more](msal-overview.md).
-__Q: Will Microsoft update its own apps to MSAL?__
-Yes. Microsoft is in the process of migrating its applications to MSAL by the end-of-support deadline, ensuring they'll benefit from MSAL's ongoing security and feature improvements.
+## Migration help
-__Q: Will you release a tool that helps me move my apps from ADAL to MSAL?__
-A: No. Differences between the libraries would require dedicating resources to development and maintenance of the tool that would otherwise be spent improving MSAL. However, we do provide the preceding set of migration guides to help you make the required changes in your application.
+If you have questions about migrating your app from ADAL to MSAL, here are some options:
-__Q: How does MSAL work with AD FS?__
-A: MSAL.NET supports certain scenarios to authenticate against AD FS 2019. If your app needs to acquire tokens directly from earlier version of AD FS, you should remain on ADAL. [Learn more](msal-net-adfs-support.md).
+- Post your question on [Microsoft Q&A](/answers/topics/azure-ad-adal-deprecation.html) and tag it with `[azure-ad-adal-deprecation]`.
+- Open an issue in the library's GitHub repository. See the [Languages and frameworks](msal-overview.md#languages-and-frameworks) section of the MSAL overview article for links to each library's repo.
-__Q: How do I get help migrating my application?__
-A: See the [Migration guidance](#migration-guidance) section of this article. If, after reading the guide for your app's platform, you have additional questions, you can post on [Microsoft Q&A](/answers/topics/azure-ad-adal-deprecation.html) with the tag `[azure-ad-adal-deprecation]` or open an issue in library's GitHub repository. See the [Languages and frameworks](msal-overview.md#languages-and-frameworks) section of the MSAL overview article for links to each library's repo.
+If you partnered with an Independent Software Vendor (ISV) in the development of your application, we recommend that you contact them directly to understand their migration journey to MSAL.
## Next steps
-- [Update your applications to use Microsoft Authentication Library and Microsoft Graph API](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/update-your-applications-to-use-microsoft-authentication-library/ba-p/1257363)
-- [Overview of the Microsoft identity platform](v2-overview.md)
-- [Review our MSAL code samples](sample-v2-code.md)
+For more information about MSAL, including usage information and which libraries are available for different programming languages and application types, see:
+
+- [Acquire and cache tokens using MSAL](msal-acquire-cache-tokens.md)
+- [Application configuration options](msal-client-application-configuration.md)
+- [MSAL authentication libraries](reference-v2-libraries.md)
+
+<!--
+ ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
+-->
+[y]: media/common/yes.png
+[n]: media/common/no.png
active-directory Msal Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-overview.md
Previously updated : 10/30/2019 Last updated : 07/22/2021
MSAL can be used in many application scenarios, including the following:
| [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)|Windows, macOS, Linux|
| [MSAL React](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react)| Single-page apps with React and React-based libraries (Next.js, Gatsby.js)|
-## Differences between ADAL and MSAL
+## Migrate apps that use ADAL to MSAL
Active Directory Authentication Library (ADAL) integrates with the Azure AD for developers (v1.0) endpoint, whereas MSAL integrates with the Microsoft identity platform. The v1.0 endpoint supports work accounts, but not personal accounts. The v2.0 endpoint is the unification of Microsoft personal accounts and work accounts into a single authentication system. Additionally, with MSAL you can also authenticate users of Azure AD B2C.
-For more specific information, read about [migrating to MSAL.NET from ADAL.NET](msal-net-migration.md) and [migrating to MSAL.js from ADAL.js](msal-compare-msal-js-and-adal-js.md).
+For more information about how to migrate to MSAL, see [Migrate applications to the Microsoft Authentication Library (MSAL)](msal-migration.md).
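+
+For illustration only (the tenant value below is a placeholder), the endpoint difference described above looks like this:
+
+ ```python
+ # ADAL targets the v1.0 endpoint; MSAL targets the unified v2.0 endpoint.
+ tenant = "contoso.onmicrosoft.com"  # placeholder tenant
+ v1_authorize = f"https://login.microsoftonline.com/{tenant}/oauth2/authorize"       # ADAL (v1.0)
+ v2_authorize = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize"  # MSAL (v2.0)
+ print(v1_authorize, v2_authorize, sep="\n")
+ ```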
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
At this point, Azure AD enforces that only a tenant administrator can sign into
If the admin approves the permissions for your application, the successful response looks like this:

```HTTP
-GET http://localhost/myapp/permissions?tenant=a8990e1f-ff32-408a-9f8e-78d3b9139b95&state=state=12345&admin_consent=True
+GET http://localhost/myapp/permissions?tenant=a8990e1f-ff32-408a-9f8e-78d3b9139b95&state=12345&admin_consent=True
```

| Parameter | Description |
active-directory Licensing Groups Resolve Problems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-groups-resolve-problems.md
Consider the following example. A user has a license for Office 365 Enterprise *
- Exchange Online (Plan 2) conflicts with Exchange Online (Plan 1).
-To solve this conflict, you need to disable two of the plans. You can disable the E1 license that's directly assigned to the user. Or, you need to modify the entire group license assignment and disable the plans in the E3 license. Alternatively, you might decide to remove the E1 license from the user if it's redundant in the context of the E3 license.
+To solve this conflict, you need to disable one of the plans. You can disable the E1 license that's directly assigned to the user. Or, you need to modify the entire group license assignment and disable the plans in the E3 license. Alternatively, you might decide to remove the E1 license from the user if it's redundant in the context of the E3 license.
The decision about how to resolve conflicting product licenses always belongs to the administrator. Azure AD doesn't automatically resolve license conflicts.
To learn more about other scenarios for license management through groups, see t
* [How to migrate individual licensed users to group-based licensing in Azure Active Directory](licensing-groups-migrate-users.md)
* [How to migrate users between product licenses using group-based licensing in Azure Active Directory](licensing-groups-change-licenses.md)
* [Azure Active Directory group-based licensing additional scenarios](./licensing-group-advanced.md)
-* [PowerShell examples for group-based licensing in Azure Active Directory](licensing-ps-examples.md)
+* [PowerShell examples for group-based licensing in Azure Active Directory](licensing-ps-examples.md)
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 8/04/2021 Last updated : 8/11/2021
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on August 4th, 2021.
+>This information last updated on August 11th, 2021.
| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
| --- | --- | --- | --- | --- |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Power Automate unattended RPA add-on | POWERAUTOMATE_UNATTENDED_RPA | 3539d28c-6e35-4a30-b3a9-cd43d5d3e0e2 | CDS_UNATTENDED_RPA (b475952f-128a-4a44-b82a-0b98a45ca7fb)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_AUTOMATE_UNATTENDED_RPA (0d373a98-a27a-426f-8993-f9a425ae99c5) | Common Data Service Unattended RPA (b475952f-128a-4a44-b82a-0b98a45ca7fb)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate Unattended RPA add-on (0d373a98-a27a-426f-8993-f9a425ae99c5) |
| POWER BI (FREE) | POWER_BI_STANDARD | a403ebcc-fae0-4ca2-8c8c-7a907fd6c235 | BI_AZURE_P0 (2049e525-b859-401b-b2a0-e0a31c4b1fe4)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | POWER BI (FREE) (2049e525-b859-401b-b2a0-e0a31c4b1fe4)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| POWER BI FOR OFFICE 365 ADD-ON | POWER_BI_ADDON | 45bc2c81-6072-436a-9b0b-3b12eefbc402 | BI_AZURE_P1 (2125cfd7-2110-4567-83c4-c1cd5275163d)<br/>SQL_IS_SSIM (fc0a60aa-feee-4746-a0e3-aecfe81a38dd) | MICROSOFT POWER BI REPORTING AND ANALYTICS PLAN 1 (2125cfd7-2110-4567-83c4-c1cd5275163d)<br/>MICROSOFT POWER BI INFORMATION SERVICES PLAN 1 (fc0a60aa-feee-4746-a0e3-aecfe81a38dd) |
+| Power BI Premium Per User | PBI_PREMIUM_PER_USER | c1d032e0-5619-4761-9b5c-75b6831e1711 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P3 (0bf3c642-7bb5-4ccc-884e-59d09df0266c)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Premium Per User (0bf3c642-7bb5-4ccc-884e-59d09df0266c)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) |
| POWER BI PRO | POWER_BI_PRO | f8a1db68-be16-40ed-86d5-cb42ce701560 | BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | POWER BI PRO (70d33638-9c74-4d01-bfd3-562de28bd4ba) |
| Power BI Pro | POWER_BI_PRO_CE | 420af87e-8177-4146-a780-3786adaffbca | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) |
| Power Virtual Agent | VIRTUAL_AGENT_BASE | e4e55366-9635-46f4-a907-fc8c3b5ec81f | CDS_VIRTUAL_AGENT_BASE (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>FLOW_VIRTUAL_AGENT_BASE (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>VIRTUAL_AGENT_BASE (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | Common Data Service for Virtual Agent Base (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>Power Automate for Virtual Agent (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>Virtual Agent Base (f6934f16-83d3-4f3b-ad27-c6e9c187b260) |
active-directory How To Connect Sso Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
There are two ways to modify users' Intranet zone settings:
#### Mozilla Firefox (all platforms)
-Mozilla Firefox doesn't automatically use Kerberos authentication. Each user must manually add the Azure AD URL to their Firefox settings by using the following steps:
-1. Run Firefox and enter `about:config` in the address bar. Dismiss any notifications that you see.
-2. Search for the **network.negotiate-auth.trusted-uris** preference. This preference lists Firefox's trusted sites for Kerberos authentication.
-3. Right-click and select **Modify**.
-4. Enter `https://autologon.microsoftazuread-sso.com` in the field.
-5. Select **OK** and then reopen the browser.
+If you are using the [Authentication](https://github.com/mozill#authentication) policy settings in your environment, ensure that you add Azure AD's URL (`https://autologon.microsoftazuread-sso.com`) to the **SPNEGO** section. You can also set the **PrivateBrowsing** option to true to allow seamless SSO in private browsing mode.
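+
+As an illustrative sketch, assuming the Mozilla policy-templates schema for the **Authentication** policy (the keys and the Python rendering here are assumptions, not part of this article), the corresponding policies.json content could look like this:
+
+ ```python
+ import json
+
+ # Hypothetical policies.json content: "SPNEGO" lists the sites trusted for
+ # integrated (Kerberos/SSO) auth; "PrivateBrowsing" extends it to private windows.
+ policies = {"policies": {"Authentication": {
+     "SPNEGO": ["https://autologon.microsoftazuread-sso.com"],
+     "PrivateBrowsing": True,
+ }}}
+ print(json.dumps(policies, indent=2))
+ ```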
#### Safari (macOS)
For Microsoft Edge based on Chromium on macOS and other non-Windows platforms, r
#### Google Chrome (all platforms)
-If you have overridden the [AuthNegotiateDelegateWhitelist](https://www.chromium.org/administrators/policy-list-3#AuthNegotiateDelegateWhitelist) or the [AuthServerWhitelist](https://www.chromium.org/administrators/policy-list-3#AuthServerWhitelist) policy settings in your environment, ensure that you add Azure AD's URL (`https://autologon.microsoftazuread-sso.com`) to them as well.
+If you have overridden the [AuthNegotiateDelegateAllowlist](https://chromeenterprise.google/policies/#AuthNegotiateDelegateAllowlist) or the [AuthServerAllowlist](https://chromeenterprise.google/policies/#AuthServerAllowlist) policy settings in your environment, ensure that you add Azure AD's URL (`https://autologon.microsoftazuread-sso.com`) to them as well.
-#### Google Chrome (macOS and other non-Windows platforms)
+#### macOS
-For Google Chrome on macOS and other non-Windows platforms, refer to [The Chromium Project Policy List](https://chromeenterprise.google/policies/) for information on how to control the allow list for the Azure AD URL for integrated authentication.
-
-The use of third-party Active Directory Group Policy extensions to roll out the Azure AD URL to Firefox and Google Chrome on Mac users is outside the scope of this article.
+The use of third-party Active Directory Group Policy extensions to roll out the Azure AD URL to Firefox and Google Chrome users on macOS is outside the scope of this article.
#### Known browser limitations
-Seamless SSO doesn't work in private browsing mode on Firefox. It also doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode. Microsoft Edge (legacy) is no longer supported. Seamless SSO supports the next version of Microsoft Edge based on Chromium and it works in InPrivate and Guest mode by design. Seamless SSO may require additional configuration to work in InPrivate and Guest mode with versions of Microsoft Edge Chromium and Google Chrome browsers:
+Seamless SSO doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode. Seamless SSO supports the next version of Microsoft Edge based on Chromium and it works in InPrivate and Guest mode by design. Microsoft Edge (legacy) is no longer supported.
-**AmbientAuthenticationInPrivateModesEnabled** – may need to be configured for InPrivate and / or guest users based on the corresponding documentations: [Microsoft Edge Chromium](/DeployEdge/microsoft-edge-policies#ambientauthenticationinprivatemodesenabled); [Google Chrome](https://chromeenterprise.google/policies/?policy=AmbientAuthenticationInPrivateModesEnabled).
+ `AmbientAuthenticationInPrivateModesEnabled` may need to be configured for InPrivate and/or guest users based on the corresponding documentation:
+
+ - [Microsoft Edge Chromium](/DeployEdge/microsoft-edge-policies#ambientauthenticationinprivatemodesenabled)
+ - [Google Chrome](https://chromeenterprise.google/policies/?policy=AmbientAuthenticationInPrivateModesEnabled)
## Step 4: Test the feature
To test the feature for a specific user, ensure that all the following condition
- You have [rolled out the feature](#step-3-roll-out-the-feature) to this user through Group Policy.

To test the scenario where the user enters only the username, but not the password:
- - Sign in to `https://myapps.microsoft.com/. Be sure to either clear the browser cache or use a new private browser session with any of the supported browsers in private mode.
+ - Sign in to https://myapps.microsoft.com/. Be sure to either clear the browser cache or use a new private browser session with any of the supported browsers in private mode.
To test the scenario where the user doesn't have to enter the username or the password, use one of these steps:

- Sign in to `https://myapps.microsoft.com/contoso.onmicrosoft.com`. Be sure to either clear the browser cache or use a new private browser session with any of the supported browsers in private mode. Replace *contoso* with your tenant's name.
active-directory Plan Connect Design Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/plan-connect-design-concepts.md
The purpose of this document is to describe areas that must be considered while configuring Azure AD Connect. This document is a deep dive on certain areas and these concepts are briefly described in other documents as well.

## sourceAnchor
-The sourceAnchor attribute is defined as *an attribute immutable during the lifetime of an object*. It uniquely identifies an object as being the same object on-premises and in Azure AD. The attribute is also called **immutableId** and the two names are used interchangeable.
+The sourceAnchor attribute is defined as *an attribute immutable during the lifetime of an object*. It uniquely identifies an object as being the same object on-premises and in Azure AD. The attribute is also called **immutableId** and the two names are used interchangeably.
The word immutable, that is "cannot be changed", is important to this document. Since this attribute's value cannot be changed after it has been set, it is important to pick a design that supports your scenario.
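
For example, a common design uses the on-premises objectGUID as the sourceAnchor, making the immutableId its base64-encoded binary form. A minimal sketch, assuming that design (the GUID below is an arbitrary example value):

```python
import base64
import uuid

# AD stores objectGUID in little-endian byte order; base64-encoding those
# bytes yields the immutableId value that Azure AD compares against.
object_guid = uuid.UUID("90395d19-a80a-42a1-b057-9414b1cd8ae6")  # example objectGUID
immutable_id = base64.b64encode(object_guid.bytes_le).decode()
print(immutable_id)
```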
active-directory Tshoot Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/tshoot-connect-password-hash-synchronization.md
Follow these steps to determine why no passwords are synchronized:
4. Look in the event log for errors. Look for the following events, which would indicate a problem: * Source: "Directory synchronization" ID: 0, 611, 652, 655
- If you see these events, you have a connectivity problem. The event log message contains forest information where you have a problem. For more information, see [Connectivity problem](#connectivity problem).
+ If you see these events, you have a connectivity problem. The event log message contains forest information where you have a problem.
5. If you see no heartbeat or if nothing else worked, run [Trigger a full sync of all passwords](#trigger-a-full-sync-of-all-passwords). Run the script only once.
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
The table below outlines the scenarios and audit values available for the admin
**I turned on this workflow, but when testing out the functionality, why can't I see the new "Approval required" prompt allowing me to request access?**
-After turning on the feature, it may take up to 60 minutes for end users to see the update. You can verify that the configuration has properly taken effect by viewing the **EnableAdminConsentRequests** value in the `https://graph.microsoft.com/beta/settings` API.
+After turning on the feature, it may take up to 60 minutes for end users to see the update, though it's usually available to all users within a few minutes.
**As a reviewer, why can't I see all pending requests?**
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Identity Protection Center | Read all security reports and settings information
[Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | View security policies<br>View and investigate security threats<br>View reports
Windows Defender ATP and EDR | View and investigate alerts. When you turn on role-based access control in Windows Defender ATP, users with read-only permissions such as the Azure AD Security Reader role lose access until they are assigned to a Windows Defender ATP role.
[Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information. Cannot make changes to Intune.
-[Cloud App Security](/cloud-app-security/manage-admins) | Has read-only permissions and can manage alerts
+[Cloud App Security](/cloud-app-security/manage-admins) | Has read permissions and can manage alerts
[Microsoft 365 service health](/office365/enterprise/view-service-health) | View the health of Microsoft 365 services

> [!div class="mx-tableFixed"]
active-directory Aws Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
![image3](common/idp-intiated.png)

> [!Note]
- > If the **Identifier** and **Reply URL** values are not getting auto polulated, then fill in the values manually according to your requirement.
+ > If the **Identifier** and **Reply URL** values are not getting auto populated, then fill in the values manually according to your requirement.
1. If you don't have the **Service Provider metadata file**, perform the following steps in the **Basic SAML Configuration** section. If you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure AWS Single Sign-on you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure AWS Single Sign-on you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Corporateexperience Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/corporateexperience-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with CorporateExperience | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and CorporateExperience.
++++++++ Last updated : 08/05/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with CorporateExperience
+
+In this tutorial, you'll learn how to integrate CorporateExperience with Azure Active Directory (Azure AD). When you integrate CorporateExperience with Azure AD, you can:
+
+* Control in Azure AD who has access to CorporateExperience.
+* Enable your users to be automatically signed-in to CorporateExperience with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* CorporateExperience single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* CorporateExperience supports **SP** initiated SSO.
+
+## Add CorporateExperience from the gallery
+
+To configure the integration of CorporateExperience into Azure AD, you need to add CorporateExperience from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **CorporateExperience** in the search box.
+1. Select **CorporateExperience** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for CorporateExperience
+
+Configure and test Azure AD SSO with CorporateExperience using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in CorporateExperience.
+
+To configure and test Azure AD SSO with CorporateExperience, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure CorporateExperience SSO](#configure-corporateexperience-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create CorporateExperience test user](#create-corporateexperience-test-user)** - to have a counterpart of B.Simon in CorporateExperience that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **CorporateExperience** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<CustomerName>.corporateparking.parso.cr/users/saml/metadata`
+
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<CustomerName>.corporateparking.parso.cr/users/saml/auth`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [CorporateExperience Client support team](mailto:support@parso.cr) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Your CorporateExperience application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but CorporateExperience expects this to be mapped with the user's email address. For that you can use **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the CorporateExperience application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute |
+ | | |
+ | email | user.mail |
+ | first_name | user.givenname |
+ | user_name | user.netbiosname |
+ | organization | user.companyname |
+ | uid | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up CorporateExperience** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to CorporateExperience.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **CorporateExperience**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure CorporateExperience SSO
+
+To configure single sign-on on the **CorporateExperience** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from the Azure portal to the [CorporateExperience support team](mailto:support@parso.cr). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create CorporateExperience test user
+
+In this section, you create a user called Britta Simon in CorporateExperience. Work with [CorporateExperience support team](mailto:support@parso.cr) to add the users in the CorporateExperience platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to CorporateExperience Sign-on URL where you can initiate the login flow.
+
+* Go to CorporateExperience Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the CorporateExperience tile in the My Apps, this will redirect to CorporateExperience Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure CorporateExperience you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Harmony Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/harmony-tutorial.md
Previously updated : 04/10/2020 Last updated : 08/09/2021
In this tutorial, you'll learn how to integrate Harmony with Azure Active Direct
* Enable your users to be automatically signed-in to Harmony with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Harmony supports **IDP** initiated SSO
-* Once you configure Harmony you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+* Harmony supports **IDP** initiated SSO.
-## Adding Harmony from the gallery
+## Add Harmony from the gallery
To configure the integration of Harmony into Azure AD, you need to add Harmony from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Harmony** in the search box.
1. Select **Harmony** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Harmony
+## Configure and test Azure AD SSO for Harmony
Configure and test Azure AD SSO with Harmony using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Harmony.
-To configure and test Azure AD SSO with Harmony, complete the following building blocks:
+To configure and test Azure AD SSO with Harmony, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Harmony, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Harmony** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Harmony** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Harmony**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called Britta Simon in Harmony. Work with [H
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Harmony tile in the Access Panel, you should be automatically signed in to the Harmony for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
-
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Harmony for which you set up the SSO.
-- [Try Harmony with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the Harmony tile in the My Apps, you should be automatically signed in to the Harmony for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Harmony with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Harmony you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Ice Contact Center Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/ice-contact-center-tutorial.md
Previously updated : 05/15/2020 Last updated : 08/09/2021
In this tutorial, you'll learn how to integrate ice Contact Center with Azure Ac
* Enable your users to be automatically signed-in to ice Contact Center with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* ice Contact Center supports **SP** initiated SSO
-* Once you configure ice Contact Center you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+* ice Contact Center supports **SP** initiated SSO.
-## Adding ice Contact Center from the gallery
+## Add ice Contact Center from the gallery
To configure the integration of ice Contact Center into Azure AD, you need to add ice Contact Center from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **ice Contact Center** in the search box.
1. Select **ice Contact Center** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for ice Contact Center
+## Configure and test Azure AD SSO for ice Contact Center
Configure and test Azure AD SSO with ice Contact Center using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ice Contact Center.
-To configure and test Azure AD SSO with ice Contact Center, complete the following building blocks:
+To configure and test Azure AD SSO with ice Contact Center, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with ice Contact Center, complete the followi
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **ice Contact Center** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **ice Contact Center** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<TENANT>.iceuc.com/iceManager`
+1. On the **Basic SAML Configuration** section, perform the following steps:
- b. In the **Identifier (Entity ID)** text box, type a URL using one of the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using one of the following patterns:
- ```http
- https://<TENANT>-imrpool.icescape365.com:PORT/identity
- https://<TENANT>-imrpool.icescape.com:PORT/identity
- https://<TENANT>-imrpool.iceuc.com:PORT/identity
- ```
+ | **Identifier** |
+ ||
+ | `https://<TENANT>-imrpool.icescape365.com:PORT/identity` |
+ | `https://<TENANT>-imrpool.icescape.com:PORT/identity` |
+ | `https://<TENANT>-imrpool.iceuc.com:PORT/identity` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
- c. In the **Reply URL** textbox, type a URL using one of the following pattern:
+ | **Reply URL** |
+ ||
+ | `https://<TENANT>-imrpool.icescape365.com:PORT/identity` |
+ | `https://<TENANT>-imrpool.icescape.com:PORT/identity` |
+ | `https://<TENANT>-imrpool.iceuc.com:PORT/identity` |
- ```http
- https://<TENANT>-imrpool.icescape365.com:PORT/identity
- https://<TENANT>-imrpool.icescape.com:PORT/identity
- https://<TENANT>-imrpool.iceuc.com:PORT/identity
- ```
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<TENANT>.iceuc.com/iceManager`
> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL, Identifier and Reply URL. Contact [ice Contact Center Client support team](mailto:support@computer-talk.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [ice Contact Center Client support team](mailto:support@computer-talk.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **ice Contact Center**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called Britta Simon in ice Contact Center. Wo
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the ice Contact Center tile in the Access Panel, you should be automatically signed in to the ice Contact Center for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in Azure portal. This will redirect to ice Contact Center Sign-on URL where you can initiate the login flow.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Go to ice Contact Center Sign-on URL directly and initiate the login flow from there.
-- [Try ice Contact Center with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the ice Contact Center tile in the My Apps, this will redirect to ice Contact Center Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect ice Contact Center with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure ice Contact Center you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Idrive Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/idrive-tutorial.md
Previously updated : 01/23/2019 Last updated : 08/09/2021

# Tutorial: Azure Active Directory integration with IDrive
-In this tutorial, you learn how to integrate IDrive with Azure Active Directory (Azure AD).
-Integrating IDrive with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate IDrive with Azure Active Directory (Azure AD). When you integrate IDrive with Azure AD, you can:
-* You can control in Azure AD who has access to IDrive.
-* You can enable your users to be automatically signed-in to IDrive (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to IDrive.
+* Enable your users to be automatically signed-in to IDrive with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with IDrive, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* IDrive single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* IDrive single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* IDrive supports **SP and IDP** initiated SSO
+* IDrive supports **SP and IDP** initiated SSO.
-## Adding IDrive from the gallery
+## Add IDrive from the gallery
To configure the integration of IDrive into Azure AD, you need to add IDrive from the gallery to your list of managed SaaS apps.
-**To add IDrive from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **IDrive**, select **IDrive** from result panel then click **Add** button to add the application.
-
- ![IDrive in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **IDrive** in the search box.
+1. Select **IDrive** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with IDrive based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in IDrive needs to be established.
+## Configure and test Azure AD SSO for IDrive
-To configure and test Azure AD single sign-on with IDrive, you need to complete the following building blocks:
+Configure and test Azure AD SSO with IDrive using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in IDrive.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure IDrive Single Sign-On](#configure-idrive-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create IDrive test user](#create-idrive-test-user)** - to have a counterpart of Britta Simon in IDrive that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with IDrive, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure IDrive SSO](#configure-idrive-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create IDrive test user](#create-idrive-test-user)** - to have a counterpart of B.Simon in IDrive that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with IDrive, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **IDrive** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **IDrive** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
- ![Screenshot shows the Basic SAML Configuration.](common/preintegrated.png)
-
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
-
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
    `https://www.idrive.com/idrive/login/loginForm`

6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Raw)** from the given options and save it on your computer.
To configure Azure AD single sign-on with IDrive, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure IDrive Single Sign-On
-
-To configure single sign-on on **IDrive** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [IDrive support team](https://www.idrive.com/support). They set this setting to have the SAML SSO connection set properly on both sides.
-
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to IDrive.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to IDrive.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **IDrive**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **IDrive**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure IDrive SSO
-2. In the applications list, select **IDrive**.
-
- ![The IDrive link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
+To configure single sign-on on the **IDrive** side, you need to send the downloaded **Certificate (Raw)** and the appropriate copied URLs from the Azure portal to the [IDrive support team](https://www.idrive.com/support). They configure this setting so that the SAML SSO connection is set properly on both sides.
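+
+The copied URLs referred to here are the **Login URL**, **Azure AD Identifier**, and **Logout URL** values from the **Set up IDrive** section. As a rough sketch (not the literal metadata for your tenant; `<TENANT-ID>` is a placeholder), these values originate in your tenant's SAML federation metadata, which looks approximately like this:
+
+ ```xml
+ <!-- Illustrative fragment only; <TENANT-ID> is a placeholder GUID. -->
+ <EntityDescriptor entityID="https://sts.windows.net/<TENANT-ID>/">
+   <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
+     <KeyDescriptor use="signing">
+       <!-- The signing key here corresponds to the Certificate (Raw) you downloaded. -->
+     </KeyDescriptor>
+     <SingleSignOnService
+       Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
+       Location="https://login.microsoftonline.com/<TENANT-ID>/saml2"/>
+   </IDPSSODescriptor>
+ </EntityDescriptor>
+ ```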
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create IDrive test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, you create a user called Britta Simon in IDrive. Work with [IDrive support team](https://www.idrive.com/support) to add the users in the IDrive platform. Users must be created and activated before you use single sign-on.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create IDrive test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you create a user called Britta Simon in IDrive. Work with [IDrive support team](https://www.idrive.com/support) to add the users in the IDrive platform. Users must be created and activated before you use single sign-on.
+#### SP initiated:
-### Test single sign-on
+* Click on **Test this application** in the Azure portal. This will redirect to the IDrive Sign-on URL where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the IDrive Sign-on URL directly and initiate the login flow from there.
-When you click the IDrive tile in the Access Panel, you should be automatically signed in to the IDrive for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the IDrive instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the IDrive tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the IDrive instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure IDrive, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Pantheon Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/pantheon-tutorial.md
Previously updated : 12/05/2019 Last updated : 08/09/2021
In this tutorial, you'll learn how to integrate Pantheon with Azure Active Direc
* Enable your users to be automatically signed-in to Pantheon with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
+* Pantheon supports **IDP** initiated SSO.
-* Pantheon supports **IDP** initiated SSO
----
-## Adding Pantheon from the gallery
+## Add Pantheon from the gallery
To configure the integration of Pantheon into Azure AD, you need to add Pantheon from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **Pantheon** in the search box.
1. Select **Pantheon** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD single sign-on for Pantheon
+## Configure and test Azure AD SSO for Pantheon
Configure and test Azure AD SSO with Pantheon using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Pantheon.
-To configure and test Azure AD SSO with Pantheon, complete the following building blocks:
+To configure and test Azure AD SSO with Pantheon, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Pantheon, complete the following buildin
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Pantheon** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Pantheon** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+1. On the **Set up single sign-on with SAML** page, perform the following steps:
- a. In the **Identifier** text box, type a URL using the following pattern:
+ a. In the **Identifier** text box, type a value using the following pattern:
   `urn:auth0:pantheon:<orgname>-SSO`

   b. In the **Reply URL** text box, type a URL using the following pattern:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Pantheon**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Pantheon SSO
-To configure single sign-on on **Pantheon** side, you need to send the downloaded **Certificate** and appropriate copied URLs to [Pantheon support team](https://pantheon.io/docs/getting-support/).
+To configure single sign-on on the **Pantheon** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs to the [Pantheon support team](https://pantheon.io/docs/getting-support/).
> [!Note]
-> You also need to provide the Email Domain(s) information and Date Time when you want to enable this connection. You can find more details about it from [here](https://pantheon.io/docs/sso-organizations/)
+> You also need to provide the email domain(s) and the date and time when you want to enable this connection. You can find more details [here](https://pantheon.io/docs/sso-organizations/).
### Create Pantheon test user
In this section, you create a user called B.Simon in Pantheon. Please follow the
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Pantheon tile in the Access Panel, you should be automatically signed in to the Pantheon for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Pantheon instance for which you set up SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Pantheon tile in My Apps, you should be automatically signed in to the Pantheon instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Pantheon with Azure AD](https://aad.portal.azure.com/)
+Once you configure Pantheon, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Safetynet Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/safetynet-tutorial.md
Previously updated : 08/07/2019 Last updated : 08/09/2021
In this tutorial, you'll learn how to integrate SafetyNet with Azure Active Dire
* Enable your users to be automatically signed-in to SafetyNet with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* SafetyNet supports **SP and IDP** initiated SSO
+* SafetyNet supports **SP and IDP** initiated SSO.
-## Adding SafetyNet from the gallery
+## Add SafetyNet from the gallery
To configure the integration of SafetyNet into Azure AD, you need to add SafetyNet from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **SafetyNet** in the search box.
1. Select **SafetyNet** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for SafetyNet
Configure and test Azure AD SSO with SafetyNet using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SafetyNet.
-To configure and test Azure AD SSO with SafetyNet, complete the following building blocks:
+To configure and test Azure AD SSO with SafetyNet, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
-2. **[Configure SafetyNet SSO](#configure-safetynet-sso)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-5. **[Create SafetyNet test user](#create-safetynet-test-user)** - to have a counterpart of B.Simon in SafetyNet that is linked to the Azure AD representation of user.
-6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure SafetyNet SSO](#configure-safetynet-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create SafetyNet test user](#create-safetynet-test-user)** - to have a counterpart of B.Simon in SafetyNet that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **SafetyNet** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **SafetyNet** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern:
- `https://<subdomain>.predictivesolutions.com/sp`
+ `https://<SUBDOMAIN>.predictivesolutions.com/sp`
b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<subdomain>.predictivesolutions.com/CRMApp/saml/SSO`
+ `https://<SUBDOMAIN>.predictivesolutions.com/CRMApp/saml/SSO`
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:

   In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<subdomain>.predictivesolutions.com`
+ `https://<SUBDOMAIN>.predictivesolutions.com`
   > [!NOTE]
   > These values are not real. Update them with the actual Identifier, Reply URL, and Sign-on URL. Contact the [SafetyNet Client support team](mailto:dev@predictivesolutions.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
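
   In SAML terms, the **Identifier** becomes the service provider's `entityID` and the **Reply URL** becomes its assertion consumer service (ACS) endpoint. A minimal sketch of the corresponding SP metadata, assuming a hypothetical `contoso` subdomain:

   ```xml
   <!-- Illustrative SP metadata; "contoso" is a hypothetical subdomain. -->
   <EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                     entityID="https://contoso.predictivesolutions.com/sp">
     <SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
       <AssertionConsumerService index="0"
         Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
         Location="https://contoso.predictivesolutions.com/CRMApp/saml/SSO"/>
     </SPSSODescriptor>
   </EntityDescriptor>
   ```
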
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/copy-metadataurl.png)
-### Configure SafetyNet SSO
-
-To configure single sign-on on **SafetyNet** side, you need to send the **App Federation Metadata Url** to [SafetyNet support team](mailto:dev@predictivesolutions.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
### Create an Azure AD test user

In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **SafetyNet**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
+## Configure SafetyNet SSO
+
+To configure single sign-on on the **SafetyNet** side, you need to send the **App Federation Metadata Url** to the [SafetyNet support team](mailto:dev@predictivesolutions.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
### Create SafetyNet test user

In this section, you create a user called Britta Simon in SafetyNet. Work with [SafetyNet support team](mailto:dev@predictivesolutions.com) to add the users in the SafetyNet platform. Users must be created and activated before you use single sign-on.
-### Test SSO
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the SafetyNet Sign-on URL where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the SafetyNet Sign-on URL directly and initiate the login flow from there.
-When you click the SafetyNet tile in the Access Panel, you should be automatically signed in to the SafetyNet for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the SafetyNet instance for which you set up SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the SafetyNet tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the SafetyNet instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure SafetyNet, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Servicenow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/servicenow-tutorial.md
Previously updated : 09/09/2020 Last updated : 07/21/2021
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the **Basic SAML Configuration** section, perform the following steps:

   a. In **Sign on URL**, enter a URL that uses the following pattern:
- `https://<instancename>.service-now.com/navpage.do`
+ `https://<instance-name>.service-now.com/login_with_sso.do?glide_sso_id=<sys_id of the sso configuration>`
+
+ > [!NOTE]
+ > Copy the sys_id value from step 5.d.iii in the **Configure ServiceNow** section.
   b. In **Identifier (Entity ID)**, enter a URL that uses the following pattern:
   `https://<instance-name>.service-now.com`
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
4. In the **Basic SAML Configuration** section, perform the following steps:

   a. For **Sign on URL**, enter a URL that uses the following pattern:
- `https://<instancename>.service-now.com/navpage.do`
+ `https://<instance-name>.service-now.com/login_with_sso.do?glide_sso_id=<sys_id of the sso configuration>`
+
+ Copy the sys_id value from step 5.d.iii in the **Configure ServiceNow** section.
   b. For **Identifier (Entity ID)**, enter a URL that uses the following pattern:
   `https://<instance-name>.service-now.com`
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot of Identity Provider section](./media/servicenow-tutorial/automatic-config.png "Configure single sign-on")
- a. For **Name**, enter a name for your configuration (for example, **Microsoft Azure Federated single sign-on**).
+ a. Right-click the gray bar at the top of the screen, click **Copy sys_id**, and use this value for the **Sign on URL** in the **Basic SAML Configuration** section.
- b. Copy the **ServiceNow Homepage** value, and paste it in **Sign-on URL** in the **ServiceNow Basic SAML Configuration** section of the Azure portal.
+ b. For **Name**, enter a name for your configuration (for example, **Microsoft Azure Federated single sign-on**).
+
+ c. Copy the **ServiceNow Homepage** value, and paste it in **Sign-on URL** in the **ServiceNow Basic SAML Configuration** section of the Azure portal.
   > [!NOTE]
   > The ServiceNow instance homepage is a concatenation of your **ServiceNow tenant URL** and **/navpage.do** (for example: `https://fabrikam.service-now.com/navpage.do`).
- c. Copy the **Entity ID / Issuer** value, and paste it in **Identifier** in the **ServiceNow Basic SAML Configuration** section of the Azure portal.
+ d. Copy the **Entity ID / Issuer** value, and paste it in **Identifier** in the **ServiceNow Basic SAML Configuration** section of the Azure portal.
- d. Confirm that **NameID Policy** is set to `urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified` value.
+ e. Confirm that **NameID Policy** is set to the `urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified` value. (A sketch of the resulting assertion subject follows these steps.)
- e. Click on **Advanced** and give the **Single Sign-On Script** value as **MultiSSOv2_SAML2_custom**.
+ f. Click on **Advanced** and give the **Single Sign-On Script** value as **MultiSSOv2_SAML2_custom**.
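+
+   With the NameID policy from step e, the subject of the assertion that Azure AD issues carries the user identifier in the unspecified format. A minimal sketch of the resulting fragment (the user value is hypothetical):
+
+   ```xml
+   <!-- Illustrative assertion subject; the user value is hypothetical. -->
+   <Subject xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
+     <NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">b.simon@contoso.com</NameID>
+   </Subject>
+   ```
+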
1. Scroll down to the **X.509 Certificate** section, and select **Edit**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. It reads the IdP metadata URL and populates all the field information.
- ![Screenshot of Identity Provider](./media/servicenow-tutorial/ic7694982.png "Configure single sign-on")
+ ![Screenshot of Identity Provider](./media/servicenow-tutorial/identity-provider.png "Configure single sign-on")
a. For **Name**, enter a name for your configuration (for example, **Microsoft Azure Federated single sign-on**).
The objective of this section is to create a user called B.Simon in ServiceNow.
5. In the **X.509 Certificates** dialog box, perform the following steps:
- ![Screenshot of X.509 Certificates dialog box](./media/servicenow-tutorial/ic7694975.png "Configure single sign-on")
+ ![Screenshot of X.509 Certificates dialog box](./media/servicenow-tutorial/certificate.png "Configure single sign-on")
a. For **Name**, enter a name for your configuration (for example: **TestSAML2.0**).
The objective of this section is to create a user called B.Simon in ServiceNow.
7. In the **Add New Identity Provider** dialog box, under **Configure Identity Provider**, perform the following steps:
- ![Screenshot of Add New Identity Provider dialog box](./media/servicenow-tutorial/ic7694982ex.png "Configure single sign-on")
+ ![Screenshot of Add New Identity Provider dialog box](./media/servicenow-tutorial/new-identity-provider.png "Configure single sign-on")
a. For **Name**, enter a name for your configuration (for example: **SAML 2.0**).
The objective of this section is to create a user called B.Simon in ServiceNow.
8. Select **Advanced Settings**. Under **Additional Identity Provider Properties**, perform the following steps:
- ![Screenshot of Add New Identity Provider dialog box, with Advanced Settings highlighted](./media/servicenow-tutorial/ic7694983ex.png "Configure single sign-on")
+ ![Screenshot of Add New Identity Provider dialog box, with Advanced Settings highlighted](./media/servicenow-tutorial/advanced-settings.png "Configure single sign-on")
a. For **Protocol Binding for the IDP's SingleLogoutRequest**, enter **urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect**.
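
   In the identity provider's federation metadata, that binding appears on the single logout endpoint. A rough sketch of the corresponding element (`<TENANT-ID>` is a placeholder, not a real value):

   ```xml
   <!-- Illustrative fragment; <TENANT-ID> is a placeholder. -->
   <SingleLogoutService
     Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
     Location="https://login.microsoftonline.com/<TENANT-ID>/saml2"/>
   ```
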
The objective of this section is to create a user called B.Simon in ServiceNow.
9. Under **Additional Service Provider Properties**, perform the following steps:
- ![Screenshot of Add New Identity Provider dialog box, with various properties highlighted](./media/servicenow-tutorial/ic7694984ex.png "Configure single sign-on")
+ ![Screenshot of Add New Identity Provider dialog box, with various properties highlighted](./media/servicenow-tutorial/service-provider.png "Configure single sign-on")
a. For **ServiceNow Homepage**, enter the URL of your ServiceNow instance homepage.
active-directory Synergi Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/synergi-tutorial.md
Previously updated : 03/07/2019 Last updated : 08/09/2021

# Tutorial: Azure Active Directory integration with Synergi
-In this tutorial, you learn how to integrate Synergi with Azure Active Directory (Azure AD).
-Integrating Synergi with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Synergi with Azure Active Directory (Azure AD). When you integrate Synergi with Azure AD, you can:
-* You can control in Azure AD who has access to Synergi.
-* You can enable your users to be automatically signed-in to Synergi (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Synergi.
+* Enable your users to be automatically signed-in to Synergi with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Synergi, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Synergi single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Synergi single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Synergi supports **IDP** initiated SSO
+* Synergi supports **IDP** initiated SSO.
-## Adding Synergi from the gallery
+## Add Synergi from the gallery
To configure the integration of Synergi into Azure AD, you need to add Synergi from the gallery to your list of managed SaaS apps.
-**To add Synergi from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Synergi**, select **Synergi** from result panel then click **Add** button to add the application.
-
- ![Synergi in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Synergi** in the search box.
+1. Select **Synergi** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Synergi based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Synergi needs to be established.
+## Configure and test Azure AD SSO for Synergi
-To configure and test Azure AD single sign-on with Synergi, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Synergi using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Synergi.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Synergi Single Sign-On](#configure-synergi-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Synergi test user](#create-synergi-test-user)** - to have a counterpart of Britta Simon in Synergi that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Synergi, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Synergi SSO](#configure-synergi-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Synergi test user](#create-synergi-test-user)** - to have a counterpart of B.Simon in Synergi that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Synergi, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Synergi** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Synergi** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
- ![Synergi Domain and URLs single sign-on information](common/idp-intiated.png)
-
   a. In the **Identifier** text box, type a URL using the following pattern:
   `https://<company name>.irmsecurity.com`
To configure Azure AD single sign-on with Synergi, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Synergi Single Sign-On
-
-To configure single sign-on on **Synergi** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Synergi support team](https://www.irmsecurity.com/contact/). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Synergi.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Synergi**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Synergi.
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Synergi**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-2. In the applications list, select **Synergi**.
+## Configure Synergi SSO
- ![The Synergi link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Synergi** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Synergi support team](https://www.irmsecurity.com/contact/). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Synergi test user

In this section, you create a user called Britta Simon in Synergi. Work with [Synergi support team](https://www.irmsecurity.com/contact/) to add the users in the Synergi platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the Synergi tile in the Access Panel, you should be automatically signed in to the Synergi for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Synergi instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Synergi tile in My Apps, you should be automatically signed in to the Synergi instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Synergi, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Tutorocean Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tutorocean-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with TutorOcean | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and TutorOcean.
++++++++ Last updated : 08/09/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with TutorOcean
+
+In this tutorial, you'll learn how to integrate TutorOcean with Azure Active Directory (Azure AD). When you integrate TutorOcean with Azure AD, you can:
+
+* Control in Azure AD who has access to TutorOcean.
+* Enable your users to be automatically signed-in to TutorOcean with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* TutorOcean single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* TutorOcean supports **SP and IDP** initiated SSO.
+
+## Add TutorOcean from the gallery
+
+To configure the integration of TutorOcean into Azure AD, you need to add TutorOcean from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **TutorOcean** in the search box.
+1. Select **TutorOcean** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for TutorOcean
+
+Configure and test Azure AD SSO with TutorOcean using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TutorOcean.
+
+To configure and test Azure AD SSO with TutorOcean, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure TutorOcean SSO](#configure-tutorocean-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create TutorOcean test user](#create-tutorocean-test-user)** - to have a counterpart of B.Simon in TutorOcean that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **TutorOcean** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using one
+ of the following patterns:
+
+ | **Identifier** |
+ |-|
+ | `https://<SUBDOMAIN>.tutorocean.com` |
+ |`https://<SUBDOMAIN>.quadc.io` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ |--|
+ | `https://<SUBDOMAIN>.tutorocean.com/_saml/validate` |
+ | `https://<SUBDOMAIN>.quadc.io/_saml/validate` |
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ ||
+ | `https://<SUBDOMAIN>.tutorocean.com` |
+ | `https://<SUBDOMAIN>.quadc.io` |
+
+ > [!NOTE]
+ > These values are not real. Update them with the actual Identifier, Reply URL, and Sign-on URL. Contact the [TutorOcean Client support team](mailto:support@tutorocean.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. A sketch of how these values surface in the SAML response follows this procedure.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up TutorOcean** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
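+In IDP-initiated mode, Azure AD posts an unsolicited SAML response to the Reply URL, and the response's `Destination` plus the assertion's `Audience` must line up with the Reply URL and Identifier you configured above. A hedged sketch, assuming the hypothetical `contoso` subdomain on the `tutorocean.com` pattern (required attributes omitted for brevity):
+
+ ```xml
+ <!-- Illustrative fragment; "contoso" is a hypothetical subdomain. -->
+ <samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
+                 xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
+                 Destination="https://contoso.tutorocean.com/_saml/validate">
+   <saml:Assertion>
+     <saml:Conditions>
+       <saml:AudienceRestriction>
+         <saml:Audience>https://contoso.tutorocean.com</saml:Audience>
+       </saml:AudienceRestriction>
+     </saml:Conditions>
+   </saml:Assertion>
+ </samlp:Response>
+ ```
+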
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to TutorOcean.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **TutorOcean**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure TutorOcean SSO
+
+To configure single sign-on on the **TutorOcean** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [TutorOcean support team](mailto:support@tutorocean.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create TutorOcean test user
+
+In this section, you create a user called Britta Simon in TutorOcean. Work with [TutorOcean support team](mailto:support@tutorocean.com) to add the users in the TutorOcean platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the TutorOcean Sign-on URL where you can initiate the login flow.
+
+* Go to the TutorOcean Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the TutorOcean instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the TutorOcean tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the TutorOcean instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure TutorOcean, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Workable Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workable-tutorial.md
Previously updated : 04/15/2019 Last updated : 08/09/2021

# Tutorial: Azure Active Directory integration with Workable
-In this tutorial, you learn how to integrate Workable with Azure Active Directory (Azure AD).
-Integrating Workable with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Workable with Azure Active Directory (Azure AD). When you integrate Workable with Azure AD, you can:
-* You can control in Azure AD who has access to Workable.
-* You can enable your users to be automatically signed-in to Workable (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Workable.
+* Enable your users to be automatically signed-in to Workable with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Workable, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Workable single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Workable single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Workable supports **SP and IDP** initiated SSO
-* Workable supports **Just In Time** user provisioning
-
-## Adding Workable from the gallery
-
-To configure the integration of Workable into Azure AD, you need to add Workable from the gallery to your list of managed SaaS apps.
-
-**To add Workable from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* Workable supports **SP and IDP** initiated SSO.
+* Workable supports **Just In Time** user provisioning.
-4. In the search box, type **Workable**, select **Workable** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![Workable in the results list](common/search-new-app.png)
+## Add Workable from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Workable based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Workable needs to be established.
-
-To configure and test Azure AD single sign-on with Workable, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Workable Single Sign-On](#configure-workable-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Workable test user](#create-workable-test-user)** - to have a counterpart of Britta Simon in Workable that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+To configure the integration of Workable into Azure AD, you need to add Workable from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Workable** in the search box.
+1. Select **Workable** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure Azure AD single sign-on with Workable, perform the following steps:
+## Configure and test Azure AD SSO for Workable
-1. In the [Azure portal](https://portal.azure.com/), on the **Workable** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with Workable using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Workable.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with Workable, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Workable SSO](#configure-workable-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Workable test user](#create-workable-test-user)** - to have a counterpart of B.Simon in Workable that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **Workable** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, If you wish to configure the application in **IDP** initiated mode, perform the following steps:
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
- ![Screenshot shows the Basic SAML Configuration, where you can enter a Reply U R L.](common/both-replyurl.png)
+4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following step:
   In the **Reply URL** text box, type a URL using the following pattern:
   `https://www.workable.com/auth/saml/<SUBDOMAIN>/callback`

5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/both-signonurl.png)
-
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://www.workable.com/sso/signin` > [!NOTE]
To configure Azure AD single sign-on with Workable, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Workable Single Sign-On
-
-To enable SSO in Workable, contact your dedicated Workable account manager and provide them with the following items.
-
-1. Login URL
-
-2. Certificate file
-
-3. Logout URL
-
-Once Single Sign On has been enabled, your Workable account manager will let you know and you can use [Workable's SSO page](https://help.workable.com/hc/en-us/articles/360000067753-Single-Sign-on-SSO-Overview-Pro) to sign in using your Workable account subdomain.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`, for example: `brittasimon@contoso.com`.
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Workable.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Workable**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Workable.
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Workable**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-2. In the applications list, select **Workable**.
+## Configure Workable SSO
- ![The Workable link in the Applications list](common/all-applications.png)
+To enable SSO in Workable, contact your dedicated Workable account manager and provide them with the following items.
-3. In the menu on the left, select **Users and groups**.
+1. Login URL.
- ![The "Users and groups" link](common/users-groups-blade.png)
+2. Certificate file.
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+3. Logout URL.
- ![The Add Assignment pane](common/add-assign-user.png)
+Once single sign-on has been enabled, your Workable account manager will let you know, and you can use [Workable's SSO page](https://help.workable.com/hc/articles/360000067753-Single-Sign-on-SSO-Overview-Pro) to sign in using your Workable account subdomain.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create Workable test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, a user called Britta Simon is created in Workable. Workable supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Workable, a new one is created after authentication.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create Workable test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, a user called Britta Simon is created in Workable. Workable supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Workable, a new one is created after authentication.
+#### SP initiated:
-### Test single sign-on
+* Click **Test this application** in the Azure portal. This will redirect to the Workable Sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the Workable Sign-on URL directly and initiate the login flow from there.
-When you click the Workable tile in the Access Panel, you should be automatically signed in to the Workable for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional Resources
+* Click **Test this application** in the Azure portal and you should be automatically signed in to the Workable instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Workable tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the Workable instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Workable, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Zoho One China Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zoho-one-china-tutorial.md
Previously updated : 01/20/2021 Last updated : 08/09/2021
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Zoho One China single sign-on (SSO) enabled subscription.
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
- ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Zoho One China supports **SP and IDP** initiated SSO
+* Zoho One China supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
## Add Zoho One China from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
In the **Reply URL** text box, type a URL using the following pattern: `https://accounts.zoho.com.cn/signin/samlsp/<zoid>`
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-azure-cni.md
When adding node pool, reference the node subnet using `--vnet-subnet-id` and th
az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name node2subnet --address-prefixes 10.242.0.0/16 -o none
az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name pod2subnet --address-prefixes 10.243.0.0/16 -o none
-az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newNodepool \
+az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newnodepool \
    --max-pods 250 \
    --node-count 2 \
    --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/node2subnet \
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/coredns-custom.md
data:
} ```
-## Enable logging for DNS query debugging
+## Troubleshooting
+
+For general CoreDNS troubleshooting steps, such as checking the endpoints or resolution, see [Debugging DNS Resolution][coredns-troubleshooting].
To enable DNS query logging, apply the following configuration in your coredns-custom ConfigMap:
data:
log ```
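+
+For context, a complete `coredns-custom` ConfigMap that enables the log plugin might look like the following sketch (the `log.override` key name assumes AKS's convention of merging `*.override` entries into the default CoreDNS server block):
+
+```console
+# Sketch: apply a coredns-custom ConfigMap that turns on query logging.
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: coredns-custom
+  namespace: kube-system
+data:
+  log.override: |
+    log
+EOF
+```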
+After you apply the configuration changes, use the `kubectl logs` command to view the CoreDNS debug logging. For example:
+
+```console
+kubectl logs --namespace kube-system --selector k8s-app=kube-dns
+```
+
## Next steps

This article showed some example scenarios for CoreDNS customization. For information on the CoreDNS project, see [the CoreDNS upstream project page][coredns].
To learn more about core network concepts, see [Network concepts for application
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete [coredns hosts]: https://coredns.io/plugins/hosts/
+[coredns-troubleshooting]: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
<!-- LINKS - internal --> [concepts-network]: concepts-network.md
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-tls.md
Add an *A* record to your DNS zone with the external IP address of the NGINX ser
az network dns record-set a add-record \
    --resource-group myResourceGroup \
    --zone-name MY_CUSTOM_DOMAIN \
- --record-set-name * \
+ --record-set-name "*" \
--ipv4-address MY_EXTERNAL_IP ```
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/node-image-upgrade.md
# Azure Kubernetes Service (AKS) node image upgrade
-AKS supports upgrading the images on a node so you're up to date with the newest OS and runtime updates. AKS provides one new image per week with the latest updates, so it's beneficial to upgrade your node's images regularly for the latest features, including Linux or Windows patches. This article shows you how to upgrade AKS cluster node images and how to update node pool images without upgrading the version of Kubernetes.
+AKS supports upgrading the images on a node so you're up to date with the newest OS and runtime updates. AKS provides one new image per week with the latest updates, so it's beneficial to upgrade your node's images regularly for the latest features, including Linux or Windows patches. Although customers will be notified of image upgrades via the AKS release notes, it might take up to a week for updates to be rolled out in all regions. This article shows you how to upgrade AKS cluster node images and how to update node pool images without upgrading the version of Kubernetes.
For more information about the latest images provided by AKS, see the [AKS release notes](https://github.com/Azure/AKS/releases).
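+
+As a sketch (the cluster and resource group names are hypothetical), you can apply the latest node image across all node pools without changing the Kubernetes version by using the `--node-image-only` flag:
+
+```console
+# Upgrade only the node images; the Kubernetes version stays the same.
+az aks upgrade \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --node-image-only
+```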
aks Tutorial Kubernetes Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-scale.md
Kubernetes supports [horizontal pod autoscaling][kubernetes-hpa] to adjust the n
> kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
> ```
-To use the autoscaler, all containers in your pods and your pods must have CPU requests and limits defined. In the `azure-vote-front` deployment, the front-end container already requests 0.25 CPU, with a limit of 0.5 CPU. These resource requests and limits are defined as shown in the following example snippet:
+To use the autoscaler, all containers in your pods and your pods must have CPU requests and limits defined. In the `azure-vote-front` deployment, the front-end container already requests 0.25 CPU, with a limit of 0.5 CPU.
+
+These resource requests and limits are defined for each container as shown in the following example snippet:
```yaml
-resources:
- requests:
- cpu: 250m
- limits:
- cpu: 500m
+ containers:
+ - name: azure-vote-front
+ image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ ports:
+ - containerPort: 80
+ resources:
+ requests:
+ cpu: 250m
+ limits:
+ cpu: 500m
```

The following example uses the [kubectl autoscale][kubectl-autoscale] command to autoscale the number of pods in the *azure-vote-front* deployment. If average CPU utilization across all pods exceeds 50% of their requested usage, the autoscaler increases the pods up to a maximum of *10* instances. A minimum of *3* instances is then defined for the deployment:
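+
+```console
+# Sketch of the command described above; the flag values mirror the text
+# (50% CPU target, minimum 3 replicas, maximum 10 replicas).
+kubectl autoscale deployment azure-vote-front --cpu-percent=50 --min=3 --max=10
+```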
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
az aks create \
    --dns-service-ip 10.2.0.10 \
    --service-cidr 10.2.0.0/24 \
    --enable-managed-identity \
- --assign-identity <identity-id> \
+ --assign-identity <identity-id>
``` A successful cluster creation using your own managed identities contains this userAssignedIdentities profile information:
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-staging-slots.md
The app must be running in the **Standard**, **Premium**, or **Isolated** tier i
The new deployment slot has no content, even if you clone the settings from a different slot. For example, you can [publish to this slot with Git](./deploy-local-git.md). You can deploy to the slot from a different repository branch or a different repository.
+The slot's URL will be of the format `http://sitename-slotname.azurewebsites.net`. To keep the URL length within necessary DNS limits, the site name will be truncated at 40 characters, the slot name will be truncated at 19 characters, and an additional 4 random characters will be appended to ensure the resulting domain name is unique.
+ <a name="AboutConfiguration"></a> ## What happens during a swap
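+As a sketch (the app and slot names are hypothetical, and the app is assumed to already exist in a supported tier), creating a slot and deriving its URL looks like this:
+
+```console
+# Create a staging slot; it becomes reachable at
+# http://mywebapp-staging.azurewebsites.net
+az webapp deployment slot create \
+    --name mywebapp \
+    --resource-group myResourceGroup \
+    --slot staging
+```
+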
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-overview.md
Build high-availability into your application architecture by co-locating your c
- **Zonal services** – where a resource is pinned to a specific zone (for example, virtual machines, managed disks, Standard IP addresses), or
- **Zone-redundant services** – when the Azure platform replicates automatically across zones (for example, zone-redundant storage, SQL Database).
+> [!NOTE]
+> Both Standard SKU Public IP Addresses and Public IP Address Prefix resource types also have a "no-zone" option. This allows customers to utilize Standard SKU public IPs (and associate them with resources that only allow Standard SKU), but does not guarantee redundancy. (All Public IP addresses that are [upgraded](https://docs.microsoft.com/azure/virtual-network/public-ip-upgrade-portal) from Basic to Standard SKU will be of type "no-zone".)
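+
+As a sketch (names are hypothetical, and the default zone behavior may vary by CLI/API version), creating a Standard SKU public IP without specifying zones yields the "no-zone" type:
+
+```console
+# Create a Standard SKU public IP with no zone guarantee ("no-zone").
+az network public-ip create \
+    --resource-group myResourceGroup \
+    --name myNoZonePublicIP \
+    --sku Standard
+```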
+ To achieve comprehensive business continuity on Azure, build your application architecture using the combination of Availability Zones with Azure region pairs. You can synchronously replicate your applications and data using Availability Zones within an Azure region for high-availability and asynchronously replicate across Azure regions for disaster recovery protection. ![conceptual view of one zone going down in a region](./media/az-overview/az-graphic-two.png)
azure-arc Manage Vm Extensions Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions-portal.md
Title: Enable VM extension from Azure portal
+ Title: Enable VM extension from the Azure portal
description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments from the Azure portal. Previously updated : 08/05/2021 Last updated : 08/11/2021 # Enable Azure VM extensions from the Azure portal
-This article shows you how to deploy and uninstall Azure VM extensions supported by Azure Arc-enabled servers, on a Linux or Windows hybrid machine through the Azure portal.
+This article shows you how to deploy, update, and uninstall Azure VM extensions supported by Azure Arc-enabled servers, on a Linux or Windows hybrid machine using the Azure portal.
> [!NOTE] > The Key Vault VM extension does not support deployment from the Azure portal, only using the Azure CLI, the Azure PowerShell, or using an Azure Resource Manager template.
This article shows you how to deploy and uninstall Azure VM extensions supported
> [!NOTE] > Azure Arc-enabled servers does not support deploying and managing VM extensions to Azure virtual machines. For Azure VMs, see the following [VM extension overview](../../virtual-machines/extensions/overview.md) article.
-## Enable extensions from the portal
+## Enable extensions
-VM extensions can be applied to your Arc-enabled server managed machine through the Azure portal.
+VM extensions can be applied to your Arc-enabled server managed machine using the Azure portal.
1. From your browser, go to the [Azure portal](https://portal.azure.com).
You can get a list of the VM extensions on your Arc-enabled server from the Azur
:::image type="content" source="media/manage-vm-extensions/list-vm-extensions.png" alt-text="List VM extension deployed to selected machine." border="true":::
+## Update extensions
+
+When a new version of a supported extension is released, you can update the extension to that latest release. Arc-enabled servers present a banner in the Azure portal when you navigate to Arc-enabled servers, informing you that there are updates available for one or more extensions installed on a machine. When you view the list of installed extensions for a selected Arc-enabled server, you'll notice a column labeled **Update available**. If a newer version of an extension is released, the **Update available** value for that extension shows **Yes**.
+
+Updating an extension to the newest version does not affect the configuration of that extension. You are not required to respecify configuration information for any extension you update.
++
+You can update one or more extensions eligible for an update from the Azure portal by performing the following steps.
+
+> [!NOTE]
+> Currently you can only update extensions from the Azure portal. Performing this operation from the Azure CLI, Azure PowerShell, or using an Azure Resource Manager template is not supported at this time.
+
+1. From your browser, go to the [Azure portal](https://portal.azure.com).
+
+2. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list.
+
+3. Choose **Extensions**, and review the status of extensions under the **Update available** column.
+
+You can update extensions in one of three ways:
+
+* By selecting an extension from the list of installed extensions and, under the properties of the extension, selecting the **Update** option.
+
+ :::image type="content" source="media/manage-vm-extensions-portal/vm-extensions-update-from-extension.png" alt-text="Update extension from selected extension." border="true":::
+
+* By selecting the extension from the list of installed extensions, and selecting the **Update** option at the top of the page.
+
+* By selecting one or more extensions that are eligible for an update from the list of installed extensions, and then selecting the **Update** option.
+
+ :::image type="content" source="media/manage-vm-extensions-portal/vm-extensions-update-selected.png" alt-text="Update selected extension." border="true":::
+ ## Uninstall extensions You can remove one or more extensions from an Arc-enabled server from the Azure portal. Perform the following steps to remove an extension.
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions.md
Title: VM extension management with Azure Arc-enabled servers description: Azure Arc-enabled servers can manage deployment of virtual machine extensions that provide post-deployment configuration and automation tasks with non-Azure VMs. Previously updated : 08/09/2021 Last updated : 08/11/2021
Virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, anti-virus protection, or to run a script in it, a VM extension can be used.
-Azure Arc-enabled servers enables you to deploy and remove Azure VM extensions to non-Azure Windows and Linux VMs, simplifying the management of your hybrid machine through their lifecycle. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc-enabled servers:
+Azure Arc-enabled servers enables you to deploy, remove, and update Azure VM extensions to non-Azure Windows and Linux VMs, simplifying the management of your hybrid machines through their lifecycle. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc-enabled servers:
- The [Azure portal](manage-vm-extensions-portal.md) - The [Azure CLI](manage-vm-extensions-cli.md)
Azure Arc-enabled servers enables you to deploy and remove Azure VM extensions t
> [!NOTE] > Azure Arc-enabled servers does not support deploying and managing VM extensions to Azure virtual machines. For Azure VMs, see the following [VM extension overview](../../virtual-machines/extensions/overview.md) article.
+> [!NOTE]
+> Currently you can only update extensions from the Azure portal. Performing this operation from the Azure CLI, Azure PowerShell, or using an Azure Resource Manager template is not supported at this time.
+ ## Key benefits Azure Arc-enabled servers VM extension support provides the following key benefits:
azure-cache-for-redis Cache How To Multi Replicas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-multi-replicas.md
To create a cache, follow these steps:
1. On the **Advanced** page, choose **Replica count**. :::image type="content" source="media/cache-how-to-multi-replicas/create-multi-replicas.png" alt-text="Replica count.":::
+
+ > [!NOTE]
+ > Currently, you can't use Append-only File (AOF) persistence or geo-replication with multiple replicas (more than one replica).
+ >
1. Leave other options in their default settings.
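+
+If you prefer the Azure CLI, a rough equivalent sketch follows (names and region are hypothetical; this assumes the `--replicas-per-master` parameter of `az redis create` is available for Premium caches):
+
+```console
+# Create a Premium cache with two replicas per master (three nodes total).
+az redis create \
+    --name myPremiumCache \
+    --resource-group myResourceGroup \
+    --location eastus \
+    --sku Premium \
+    --vm-size p1 \
+    --replicas-per-master 2
+```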
azure-cache-for-redis Cache How To Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-zone-redundancy.md
To create a cache, follow these steps:
:::image type="content" source="media/cache-how-to-zone-redundancy/create-zones.png" alt-text="Availability zones":::
-1. Leave other options in their default settings.
+1. Configure your settings for clustering and/or RDB persistence.
> [!NOTE] > Zone redundancy doesn't support AOF persistence or work with geo-replication currently.
To create a cache, follow these steps:
It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use. > [!NOTE]
- > Availability zones can't be changed after a cache is created.
+ > Availability zones can't be changed or enabled after a cache is created.
> ## Zone Redundancy FAQ
When using zone redundancy, configured with multiple Availability Zones, data is
Learn more about Azure Cache for Redis features. > [!div class="nextstepaction"]
-> [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
+> [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-monitor Javascript React Native Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript-react-native-plugin.md
Title: Native React plugin for Application Insights JavaScript SDK
-description: How to install and use the Native React plugin for Application Insights JavaScript SDK.
+ Title: React Native plugin for Application Insights JavaScript SDK
+description: How to install and use the React Native plugin for Application Insights JavaScript SDK.
Last updated 08/06/2020
-# Native React plugin for Application Insights JavaScript SDK
+# React Native plugin for Application Insights JavaScript SDK
-The Native React plugin for Application Insights JavaScript SDK collects device information, by default this plugin automatically collects:
+The React Native plugin for Application Insights JavaScript SDK collects device information. By default, this plugin automatically collects:
- **Unique Device ID** (Also known as Installation ID.) - **Device Model Name** (Such as iPhone X, Samsung Galaxy Fold, Huawei P30 Pro etc.)
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-dedicated-clusters.md
Azure Monitor Logs Dedicated Clusters are a deployment option that enables advan
The capabilities that require dedicated clusters are: - **[Customer-managed Keys](../logs/customer-managed-keys.md)** - Encrypt the cluster data using keys that are provided and controlled by the customer.-- **[Lockbox](../logs/customer-managed-keys.md#customer-lockbox-preview)** - Customers can control Microsoft support engineers access requests for data.
+- **[Lockbox](../logs/customer-managed-keys.md#customer-lockbox-preview)** - Control Microsoft support engineers access requests to your data.
- **[Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption)** protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data.
+- **[Availability Zone](../../availability-zones/az-overview.md)** - Protect your data from datacenter failures with Availability Zones on a dedicated cluster, initially limited to the East US 2 and West US 2 regions. A cluster created with Availability Zones is indicated with `isAvailabilityZonesEnabled`: `true`, and your data is protected in ZRS storage (see the verification sketch after this list). Availability Zones are defined on the cluster at creation time and this setting can't be modified. To have a cluster in Availability Zones, you need to create a new cluster in a supported region.
- **[Multi-workspace](../logs/cross-workspace-query.md)** - If a customer is using more than one workspace for production, it might make sense to use a dedicated cluster. Cross-workspace queries run faster if all workspaces are on the same cluster. It might also be more cost effective to use a dedicated cluster, as the assigned commitment tier takes into account all cluster ingestion and applies to all its workspaces, even if some of them are small and not eligible for a commitment tier discount. Dedicated clusters require a commitment of at least 1 TB of data ingestion per day. Migration to a dedicated cluster is simple. There is no data loss or service interruption.
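+
+As a sketch (resource names are hypothetical, and the query path assumes the property is surfaced at the top level of the CLI output), you can check whether an existing dedicated cluster was created with Availability Zones:
+
+```console
+# Returns true if the dedicated cluster is availability-zone enabled.
+az monitor log-analytics cluster show \
+    --resource-group myResourceGroup \
+    --name myDedicatedCluster \
+    --query isAvailabilityZonesEnabled
+```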
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
Previously updated : 09/22/2020 Last updated : 08/10/2021 #Customer intent: As an IT admin new to Azure NetApp Files, I want to quickly set up Azure NetApp Files and create a volume.
See [Register for Azure NetApp Files](azure-netapp-files-register.md) for more i
![Select Azure NetApp Files](../media/azure-netapp-files/azure-netapp-files-select-azure-netapp-files.png)
-2. Click **+ Add** to create a new NetApp account.
-
- ![Create new NetApp account](../media/azure-netapp-files/azure-netapp-files-create-new-netapp-account.png)
+2. Click **+ Create** to create a new NetApp account.
3. In the New NetApp Account window, provide the following information: 1. Enter **myaccount1** for the account name.
The following code snippet shows how to create a capacity pool in an Azure Resou
![Specify NFS protocol for quickstart](../media/azure-netapp-files/azure-netapp-files-quickstart-protocol-nfs.png)
-5. Click **Review + create**.
-
- ![Review and create window](../media/azure-netapp-files/azure-netapp-files-review-and-create-window.png)
+5. Click **Review + create** to display information for the volume you are creating.
-6. Review information for the volume, then click **Create**.
+6. Click **Create** to create the volume.
The created volume appears in the Volumes blade. ![Volume created](../media/azure-netapp-files/azure-netapp-files-create-volume-created.png)
azure-netapp-files Cross Region Replication Create Peering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-create-peering.md
na ms.devlang: na Previously updated : 08/09/2021 Last updated : 08/11/2021 # Create volume replication for Azure NetApp Files
To authorize the replication, you need to obtain the resource ID of the replicat
6. In the Authorize field, paste the destination replication volume resource ID that you obtained in Step 3, then click **OK**. > [!NOTE]
- > ThereΓÇÖs likely a difference between the used space of the source volume and the used space of the destination volume. <!-- ANF-14038 -->
+ > Due to various factors, like the state of the destination storage at a given time, there's likely a difference between the used space of the source volume and the used space of the destination volume. <!-- ANF-14038 -->
## Next steps
azure-percept Azureeyemodule Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azureeyemodule-overview.md
+
+ Title: Overview of the Azure Percept AI vision module
+description: An overview of the azureeyemodule, which is the module responsible for running the AI vision workload on the Azure Percept DK.
++++ Last updated : 08/09/2021+++
+# What is azureeyemodule?
+
+Azureeyemodule is the name of the edge module responsible for running the AI vision workload on the Azure Percept DK. It's part of the Azure IoT suite of edge modules and is deployed to the Azure Percept DK during the [setup experience](./quickstart-percept-dk-set-up.md). This article provides an overview of the module and its architecture.
+
+## Architecture
++
+The Azure Percept Workload on the Azure Percept DK is a C++ application that runs inside the azureeyemodule Docker container. It uses OpenCV G-API for image processing and model execution. Azureeyemodule runs on the Mariner operating system as part of the Azure IoT suite of modules that run on the Azure Percept DK.
+
+The Azure Percept Workload is meant to take in images and output images and messages. The output images may be marked up with drawings such as bounding boxes, segmentation masks, joints, labels, and so on. The output messages are a JSON stream of inference results that can be ingested and used by downstream tasks.
+The results are served up as an RTSP stream that is available on port 8554 of the device. The results are also shipped over to another module running on the device, which serves the RTSP stream wrapped in an HTTP server, running on port 3000. Either way, they'll be viewable only on the local network.
+
+> [!CAUTION]
+> There is *no* encryption or authentication with respect to the RTSP feeds. Anyone on the local network can view exactly what the Azure Percept Vision is seeing by typing the correct address into a web browser or RTSP media player.
+
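+For example (the device IP is a placeholder and the `/result` stream path is a hypothetical assumption; check your device's documentation for the exact path), the feeds can be viewed from a machine on the same local network:
+
+```console
+# HTTP-wrapped stream (port 3000): open http://<device-ip>:3000 in a browser.
+# Raw RTSP stream (port 8554), using a media player such as ffplay:
+ffplay rtsp://<device-ip>:8554/result
+```
+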
+The Azure Percept Workload enables several features that end users can take advantage of:
+- A no-code solution for common computer vision use cases, such as object classification and common object detection.
+- An advanced solution, where a developer can bring their own (potentially cascaded) trained model to the device and run it, possibly passing results to another IoT module of their own creation running on the device.
+- A retraining loop for grabbing images from the device periodically, retraining the model in the cloud, and then pushing the newly trained model back down to the device, using the device's ability to update and swap models on the fly.
+
+## AI workload details
+The Workload application is open-sourced in the Azure Percept Advanced Development [GitHub repository](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule/app) and is made up of many small C++ modules, with some of the more important being:
+- [main.cpp](https://github.com/microsoft/azure-percept-advanced-development/blob/main/azureeyemodule/app/main.cpp): Sets up everything and then runs the main loop.
+- [iot](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule/app/iot): This folder contains modules that handle incoming and outgoing messages from the Azure IoT Edge Hub, and the twin update method.
+- [model](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule/app/model): This folder contains modules for a class hierarchy of computer vision models.
+- [kernels](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule/app/kernels): This folder contains modules for G-API kernels, ops, and C++ wrapper functions.
+
+Developers can build custom modules or customize the current azureeyemodule using this workload application.
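+
+For instance, to explore or customize the workload application, you might start by cloning the repository:
+
+```console
+git clone https://github.com/microsoft/azure-percept-advanced-development.git
+cd azure-percept-advanced-development/azureeyemodule/app
+```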
+
+## Next steps
+
+- Now that you know more about the azureeyemodule and Azure Percept Workload, try using your own model or pipeline by following one of [these tutorials](https://github.com/microsoft/azure-percept-advanced-development/blob/main/tutorials/README.md).
+- Or, try **transfer learning** using one of our ready-made [machine learning notebooks](https://github.com/microsoft/azure-percept-advanced-development/tree/main/machine-learning-notebooks).
+
azure-percept Quickstart Percept Dk Set Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-dk-set-up.md
Title: Set up your Azure Percept DK
-description: Connect your dev kit to Azure and deploy your first AI model
+description: Set up your Azure Percept DK and connect it to Azure IoT Hub
Last updated 03/17/2021-+
-# Set up your Azure Percept DK and deploy your first AI model
+# Set up your Azure Percept DK
-Complete the Azure Percept DK setup experience to configure your dev kit and deploy your first AI model. After verifying that your Azure account is compatible with Azure Percept, you will:
+Complete the Azure Percept DK setup experience to configure your dev kit. After verifying that your Azure account is compatible with Azure Percept, you will:
+- Launch the Azure Percept DK setup experience
- Connect your dev kit to a Wi-Fi network - Set up an SSH login for remote access to your dev kit-- Create a new IoT Hub to use with Azure Percept-- Connect your dev kit to your IoT Hub and Azure account
+- Create a new device in Azure IoT Hub
If you experience any issues during this process, refer to the [setup troubleshooting guide](./how-to-troubleshoot-setup.md) for possible solutions.
-> [!TIP]
-> You can return to the setup experience at any time to reinitialize your dev kit for things like connecting to a new wi-fi network, creating a new SSH user, and reconnecting to IoT Hub.
+> [!NOTE]
+> The setup experience web service automatically shuts down after 30 minutes of non-use. If you are unable to connect to the dev kit or do not see its Wi-Fi access point, restart the device.
## Prerequisites - An Azure Percept DK (dev kit). - A Windows, Linux, or OS X based host computer with Wi-Fi capability and a web browser.-- An Azure account with an active subscription. [Create an account for free.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)-- The Azure account must have the **owner** or **contributor** role within the subscription. Follow the steps below to check your Azure account role. For more information on Azure role definitions, check out the [Azure role-based access control documentation](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles).
+- An active Azure subscription. [Create an account for free.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+- Users must have the **owner** or **contributor** role within the subscription. Follow the steps below to check your Azure account role. For more information on Azure role definitions, check out the [Azure role-based access control documentation](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles).
> [!CAUTION]
- > If you have multiple Azure accounts, your browser may cache credentials from another account. To avoid confusion, it is recommended that you close all unused browser windows and log into the [Azure portal](https://portal.azure.com/) before starting the setup experience. See the [setup troubleshooting guide](./how-to-troubleshoot-setup.md) for additional information on how to ensure you are signed in with the correct account.
+ > Close all browser windows and log into your subscription via the [Azure portal](https://portal.azure.com/) before starting the setup experience. See the [setup troubleshooting guide](./how-to-troubleshoot-setup.md) for additional information on how to ensure you are signed in with the correct account.
### Check your Azure account role
To verify if your Azure account is an "owner" or "contributor" within th
## Launch the Azure Percept DK Setup Experience
-1. Connect your host computer directly to the dev kitΓÇÖs Wi-Fi access point. Like connecting to any other Wi-Fi network, open the network and internet settings on your computer, select the following network, and enter the network password when prompted:
+1. Connect your host computer to the dev kit's Wi-Fi access point. Select the following network, and enter the Wi-Fi password when prompted:
- - **Network name**: depending on your dev kit's operating system version, the name of the Wi-Fi access point is either **scz-xxxx** or **apd-xxxx** (where ΓÇ£xxxxΓÇ¥ is the last four digits of the dev kitΓÇÖs MAC address)
- - **Password**: can be found on the Welcome Card that came with the dev kit
+ - **Network name**: **scz-xxxx** or **apd-xxxx** (where **xxxx** is the last four digits of the dev kit's MAC address)
+ - **Password**: found on the welcome card that came with the dev kit
> [!WARNING]
- > While connected to the Azure Percept DK Wi-Fi access point, your host computer will temporarily lose its connection to the Internet. Active video conference calls, web streaming, or other network-based experiences will be interrupted.
+ > While connected to the Azure Percept DK's Wi-Fi access point, your host computer will temporarily lose its connection to the Internet. Active video conference calls, web streaming, or other network-based experiences will be interrupted.
1. Once connected to the dev kit's Wi-Fi access point, the host computer will automatically launch the setup experience in a new browser window with **your.new.device/** in the address bar. If the tab does not open automatically, launch the setup experience by going to [http://10.1.1.1](http://10.1.1.1) in a web browser. Make sure your browser is signed in with the same Azure account credentials you intend to use with Azure Percept.
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-01-welcome.png" alt-text="Welcome page.":::
+ :::image type="content" source="./media/quickstart-percept-dk-setup/main-welcome.png" alt-text="Welcome page.":::
- > [!CAUTION]
- > The setup experience web service will shut down after 30 minutes of non-use. If this happens, restart the dev kit to avoid connectivity issues with the dev kit's Wi-Fi access point.
+ > [!NOTE]
+> **Mac users** - When going through the setup experience on a Mac, it initially opens a captive portal window that is unable to complete the setup experience. Close this window and open a web browser to https://10.1.1.1, which will allow you to complete the setup experience.
-## Complete the Azure Percept DK Setup Experience
+## Connect your dev kit to a Wi-Fi network
1. Select **Next** on the **Welcome** screen.
To verify if your Azure account is an "owner" or "contributor" within th
1. Select your Wi-Fi network from the list of available networks and select **connect**. Enter your network password when prompted. > [!NOTE]
- > **Mac users** - When going through the setup experience on a Mac, it initially opens in a window rather than a web browser. The window isn't persisted once the connection switches from the device's access point to Wi-Fi. Open a web browser and go to https://10.1.1.1, which will allow you to complete the setup experience.
+ > It is recommended that you set this network as a "Preferred Network" (Mac) or check the box to "connect automatically" (Windows). This will allow the host computer to reconnect to the dev kit's Wi-Fi access point if the connection is interrupted during this process.
-1. Once your dev kit has successfully connected to your network of choice, the page will show the IPv4 address assigned to your dev kit. **Write down the IPv4 address displayed on the page.** You will need the IP address when connecting to your dev kit over SSH for troubleshooting and device updates.
+1. Once your dev kit has successfully connected, the page will show the IPv4 address assigned to your dev kit. **Write down the IPv4 address displayed on the page.** You will need the IP address when connecting to your dev kit over SSH for troubleshooting and device updates.
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-04-success-wi-fi.png" alt-text="Copy IP address.":::
+ :::image type="content" source="./media/quickstart-percept-dk-setup/main-success-wi-fi.png" alt-text="Copy IP address.":::
> [!NOTE] > The IP address may change with each device boot.
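+
+   For example (the username and address are placeholders), a later remote session from your host computer would look like:
+
+   ```console
+   ssh myuser@<dev-kit-ip-address>
+   ```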
-1. Read through the License Agreement, select **I have read and agree to the License Agreement** (you must scroll to the bottom of the agreement), and select **Next**.
+1. Read through the License Agreement (you must scroll to the bottom of the agreement), select **I have read and agree to the License Agreement**, and select **Next**.
+
+ :::image type="content" source="./media/quickstart-percept-dk-setup/main-eula.png" alt-text="Accept EULA.":::
+
+## Set up an SSH login for remote access to your dev kit
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-05-eula.png" alt-text="Accept EULA.":::
+1. Create an SSH account name and public key/password, and select **Next**.
-1. Enter an SSH account name and password, and **write down your login information for later use**.
+ If you have already created an SSH account, you can skip this step.
+
+ **Write down your login information for later use**.
> [!NOTE] > SSH (Secure Shell) is a network protocol that enables you to connect to the dev kit remotely via a host computer.
-1. On the next page, select **Setup as a new device** to create a new device within your Azure account.
+## Create a new device in Azure IoT Hub
+
+1. Select **Setup as a new device** to create a new device within your Azure account.
+
+ A Device Code will now be obtained from Azure.
-1. Select **Copy** to copy your device code. Afterward, select **Login to Azure**.
+1. Select **Copy**.
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-08-copy-code.png" alt-text="Copy device code.":::
+ :::image type="content" source="./media/quickstart-percept-dk-setup/main-copy-code.png" alt-text="Copy device code.":::
> [!NOTE]
- > If you receive this error: *Unable to get device code. Please make sure the device is connected to internet*. The most common cause is your on-site network. Try plugging in an Ethernet cable to the dev kit or connecting to a different Wi-Fi network and try again. Less common causes could be your host computer's date/time are off.
+ > If you receive an error when using your Device Code in the next steps or if the Device Code won't display, please see our [troubleshooting steps](./how-to-troubleshoot-setup.md) for more information.
+
+1. Select **Login to Azure**.
1. A new browser tab will open with a window that says **Enter code**. Paste the code into the window and select **Next**. Do NOT close the **Welcome** tab with the setup experience.
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-09-enter-code.png" alt-text="Enter device code.":::
+ :::image type="content" source="./media/quickstart-percept-dk-setup/main-enter-code.png" alt-text="Enter device code.":::
1. Sign into Azure Percept using the Azure account credentials you will use with your dev kit. Leave the browser tab open when complete. > [!CAUTION] > Your browser may auto cache other credentials. Double check that you are signing in with the correct account.
- After successfully signing into Azure Percent on the device, return to the **Welcome** tab to continue the setup experience.
+ After successfully signing into Azure Percept on the device, select **Allow**.
+
+ Return to the **Welcome** tab to continue the setup experience.
1. When the **Assign your device to your Azure IoT Hub** page appears on the **Welcome** tab, take one of the following actions:
- - If you already have an IoT Hub you would like to use with Azure Percept and it is listed on this page, select it and jump to step 15.
+ - Jump ahead to **Select your Azure IoT Hub** if your IoT Hub is listed on this page.
- If you do not have an IoT Hub or would like to create a new one, select **Create a new Azure IoT Hub**. > [!IMPORTANT] > If you have an IoT Hub, but it is not appearing in the list, you may have signed into Azure Percept with the wrong credentials. See the [setup troubleshooting guide](./how-to-troubleshoot-setup.md) for help.
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-13-iot-hub-select.png" alt-text="Select an IoT Hub.":::
+ :::image type="content" source="./media/quickstart-percept-dk-setup/main-iot-hub-select.png" alt-text="Select an IoT Hub.":::
-1. To create a new IoT Hub, complete the following fields:
+1. To create a new IoT Hub:
- Select the Azure subscription you will use with Azure Percept. - Select an existing Resource Group. If one does not exist, select **Create new** and follow the prompts.
To verify if your Azure account is an "owner" or "contributor" within th
- Select the S1 (standard) pricing tier. > [!NOTE]
- > If you end up needing a higher [message throughput](../iot-hub/iot-hub-scaling.md#message-throughput) for your edge AI applications, you may [upgrade your IoT Hub to a higher standard tier](../iot-hub/iot-hub-upgrade.md) in the Azure Portal at any time. B and F tiers do NOT support Azure Percept.
+ > It may take a few minutes for your IoT Hub deployment to complete. If you need a higher [message throughput](../iot-hub/iot-hub-scaling.md#message-throughput) for your edge AI applications, you may [upgrade your IoT Hub to a higher standard tier](../iot-hub/iot-hub-upgrade.md) in the Azure Portal at any time. B and F tiers do NOT support Azure Percept.
-1. IoT Hub deployment may take a few minutes. When the deployment is complete, select **Register**.
+1. When the deployment is complete, select **Register**.
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-16-iot-hub-success.png" alt-text="IoT Hub successfully deployed.":::
+1. Select your Azure IoT Hub.
1. Enter a device name for your dev kit and select **Next**.
-1. Wait for the device modules to download ΓÇô this will take a few minutes.
+1. The device modules will now be deployed to your device; this can take a few minutes.
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-18-finalize.png" alt-text="Finalizing setup.":::
+ :::image type="content" source="./media/quickstart-percept-dk-setup/main-finalize.png" alt-text="Finalizing setup.":::
-1. When you see the **Device setup complete!** page, your dev kit has successfully linked to your IoT Hub and downloaded the necessary software. Your dev kit will automatically disconnect from the Wi-Fi access point resulting in these two notifications:
+1. **Device setup complete!** Your dev kit has successfully linked to your IoT Hub and deployed all modules.
> [!NOTE]
- > The IoT Edge containers that get configured as part of this set up process use certificates that will expire after 90 days. The certificates can be automatically regenerated by restarting IoT Edge. Refer to [Manage certificates on an IoT Edge device](../iot-edge/how-to-manage-device-certificates.md) for more details.
+ > After completion, the dev kit's Wi-Fi access point will automatically disconnect and the setup experience web service will be terminated, resulting in two notifications.
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-19-0-warning.png" alt-text="Setup experience disconnect warning.":::
+ > [!NOTE]
+ > The IoT Edge containers that get configured as part of this setup process use certificates that will expire after 90 days. The certificates can be automatically regenerated by restarting IoT Edge. Refer to [Manage certificates on an IoT Edge device](../iot-edge/how-to-manage-device-certificates.md) for more details.
-1. Connect your host computer to the Wi-Fi network your devkit connected to in Step 2.
+1. Connect your host computer to the Wi-Fi network your dev kit is connected to.
1. Select **Continue to the Azure portal**.
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-20-Azure-portal-continue.png" alt-text="Go to Azure Percept Studio.":::
-
-## View your dev kit video stream and deploy a sample model
-
-1. The [Azure Percept Studio Overview page](https://go.microsoft.com/fwlink/?linkid=2135819) is your launch point for accessing many different workflows for both beginning and advanced edge AI solution development. To get started, select **Devices** from the left menu.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/portal-01-get-device-list.png" alt-text="View your list of devices.":::
-
-1. Verify your dev kit is listed as **Connected** and select it to view the device page.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/portal-02-select-device.png" alt-text="Select your device.":::
-
-1. Select **View your device stream**. If this is the first time viewing the video stream of your device, you will see a notification that a new model is being deployed in the upper right-hand corner. This may take a few minutes.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/view-stream.png" alt-text="View your video stream.":::
-
- Once the model has deployed, you will get another notification with a **View stream** link. Select the link to view the video stream from your Azure Percept Vision camera in a new browser window. The dev kit is preloaded with an AI model that automatically performs object detection of many common objects.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/portal-03-2-object-detection.png" alt-text="See object detection.":::
-
-1. Azure Percept Studio also has a number of sample AI models. To deploy a sample model to your dev kit, navigate back to your device page and select **Deploy a sample model**.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/deploy-sample-model.png" alt-text="Explore pre-built models.":::
-
-1. Select a sample model from the library and select **Deploy to device**.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/portal-05-2-select-journey.png" alt-text="See object detection in action.":::
-
-1. Once the model has successfully deployed, you will see a notification with a **View stream** link in the upper right corner of the screen. To view the model inferencing in action, select the link in the notification or return to the device page and select **View your device stream**. Any models previously running on the dev kit will now be replaced with the new model.
-
-## Video walkthrough
-
-For a visual walkthrough of the steps described above, please see the following video (setup experience starts at 0:50):
-
-</br>
+ :::image type="content" source="./media/quickstart-percept-dk-setup/main-Azure-portal-continue.png" alt-text="Go to Azure Percept Studio.":::
+### Video walk-through
+See the video below for a visual walk-through of the steps described above.
> [!VIDEO https://www.youtube.com/embed/-dmcE2aQkDE]

## Next steps
-> [!div class="nextstepaction"]
-> [Create a no-code vision solution](./tutorial-nocode-vision.md)
+Now that your dev kit is set up, it's time to see vision AI in action.
+- [View your dev kit video stream](./how-to-view-video-stream.md)
+- [Deploy a vision AI model to your dev kit](./how-to-deploy-model.md)
azure-sql Authentication Aad Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-configure.md
Previously updated : 07/07/2021 Last updated : 08/11/2021 # Configure and manage Azure AD authentication with Azure SQL
However, using Azure Active Directory authentication with SQL Database and Azure
> Special characters like colon `:` or ampersand `&` when included as user names in the T-SQL `CREATE LOGIN` and `CREATE USER` statements are not supported.

> [!IMPORTANT]
-> Azure AD users and service principals (Azure AD applications) that are members of more than 2048 Azure AD security groups are not supported to login into the database via Security Groups in SQL Database, Managed Instance, or Azure Synapse.
+> Azure AD users and service principals (Azure AD applications) that are members of more than 2048 Azure AD security groups aren't supported for login to the database in SQL Database, Managed Instance, or Azure Synapse.
To create an Azure AD-based contained database user (other than the server administrator that owns the database), connect to the database with an Azure AD identity, as a user with at least the **ALTER ANY USER** permission. Then use the following Transact-SQL syntax:
azure-sql Authentication Aad Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-overview.md
Title: Azure Active Directory authentication description: Learn about how to use Azure Active Directory for authentication with Azure SQL Database, Azure SQL Managed Instance, and Synapse SQL in Azure Synapse Analytics-
Previously updated : 07/07/2021 Last updated : 08/11/2021 # Use Azure Active Directory authentication
The following authentication methods are supported for Azure AD server principal
- Only one Azure AD administrator (a user or group) can be configured for a server in SQL Database or Azure Synapse at any time. - The addition of Azure AD server principals (logins) for SQL Managed Instance allows the possibility of creating multiple Azure AD server principals (logins) that can be added to the `sysadmin` role. - Only an Azure AD administrator for the server can initially connect to the server or managed instance using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users.-- Azure AD users and service principals (Azure AD applications) that are members of more than 2048 Azure AD security groups are not supported to login into the database via Security Groups in SQL Database, Managed Instance, or Azure Synapse.
+- Azure AD users and service principals (Azure AD applications) that are members of more than 2048 Azure AD security groups aren't supported for login to the database in SQL Database, Managed Instance, or Azure Synapse.
- We recommend setting the connection timeout to 30 seconds. - SQL Server 2016 Management Studio and SQL Server Data Tools for Visual Studio 2015 (version 14.0.60311.1, April 2016, or later) support Azure Active Directory authentication. (Azure AD authentication is supported by the **.NET Framework Data Provider for SqlServer**; at least version .NET Framework 4.6). Therefore, the newest versions of these tools and data-tier applications (DAC and BACPAC) can use Azure AD authentication. - Beginning with version 15.0.1, [sqlcmd utility](/sql/tools/sqlcmd-utility) and [bcp utility](/sql/tools/bcp-utility) support Active Directory Interactive authentication with Multi-Factor Authentication.
azure-sql Sql Database Vulnerability Assessment Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-database-vulnerability-assessment-rules.md
SQL Vulnerability Assessment rules have five categories, which are in the follow
|VA1048 |Database principals should not be mapped to the `sa` account |High |A database principal that is mapped to the `sa` account can be exploited by an attacker to elevate permissions to `sysadmin` |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance | |VA1052 |Remove BUILTIN\Administrators as a server login |Low |The BUILTIN\Administrators group contains the Windows Local Administrators group. In older versions of Microsoft SQL Server this group has administrator rights by default. This rule checks that this group is removed from SQL Server. |<nobr>SQL Server 2012+<nobr/> | |VA1053 |Account with default name `sa` should be renamed or disabled |Low |`sa` is a well-known account with principal ID 1. This rule verifies that the `sa` account is either renamed or disabled. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
-|VA1054 |Excessive permissions should not be granted to PUBLIC role on objects or columns |Low |Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object the user inherits the permissions granted to public on that object. This rule displays a list of all securable objects or columns that are accessible to all users through the PUBLIC role. |<nobr>SQL Server 2012+<nobr/> |
+|VA1054 |Excessive permissions should not be granted to PUBLIC role on objects or columns |Low |Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object the user inherits the permissions granted to public on that object. This rule displays a list of all securable objects or columns that are accessible to all users through the PUBLIC role. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Database |
|VA1058 |`sa` login should be disabled |High |`sa` is a well-known account with principal ID 1. This rule verifies that the `sa` account is disabled. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance | |VA1059 |xp_cmdshell should be disabled |High |xp_cmdshell spawns a Windows command shell and passes it a string for execution. This rule checks that xp_cmdshell is disabled. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance | |VA1067 |Database Mail XPs should be disabled when it is not in use |Medium |This rule checks that Database Mail is disabled when no database mail profile is configured. Database Mail can be used for sending e-mail messages from the SQL Server Database Engine and is disabled by default. If you are not using this feature, it is recommended to disable it to reduce the surface area. |<nobr>SQL Server 2012+<nobr/> |
SQL Vulnerability Assessment rules have five categories, which are in the follow
|VA1070 |Database users shouldn't share the same name as a server login |Low |Database users may share the same name as a server login. This rule validates that there are no such users. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance | |VA1072 |Authentication mode should be Windows Authentication |Medium |There are two possible authentication modes: Windows Authentication mode and mixed mode. Mixed mode means that SQL Server enables both Windows authentication and SQL Server authentication. This rule checks that the authentication mode is set to Windows Authentication. |<nobr>SQL Server 2012+<nobr/> | |VA1094 |Database permissions shouldn't be granted directly to principals |Low |Permissions are rules associated with a securable object to regulate which users can gain access to the object. This rule checks that there are no DB permissions granted directly to users. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
-|VA1095 |Excessive permissions should not be granted to PUBLIC role |Medium |Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object the user inherits the permissions granted to public on that object. This displays a list of all permissions that are granted to the PUBLIC role. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
-|VA1096 |Principal GUEST should not be granted permissions in the database |Low |Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
-|VA1097 |Principal GUEST should not be granted permissions on objects or columns |Low |Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
-|VA1099 |GUEST user should not be granted permissions on database securables |Low |Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
+|VA1095 |Excessive permissions should not be granted to PUBLIC role |Medium |Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object the user inherits the permissions granted to public on that object. This displays a list of all permissions that are granted to the PUBLIC role. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance<br/><br/>SQL Database |
+|VA1096 |Principal GUEST should not be granted permissions in the database |Low |Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance<br/><br/>SQL Database |
+|VA1097 |Principal GUEST should not be granted permissions on objects or columns |Low |Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance<br/><br/>SQL Database |
+|VA1099 |GUEST user should not be granted permissions on database securables |Low |Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance<br/><br/>SQL Database |
|VA1246 |Application roles should not be used |Low |An application role is a database principal that enables an application to run with its own user-like permissions. Application roles enable that only users connecting through a particular application can access specific data. Application roles are password-based (which applications typically hardcode) and not permission based which exposes the database to app role impersonation by password-guessing. This rule checks that no application roles are defined in the database. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance<br/><br/>SQL Database | |VA1248 |User-defined database roles should not be members of fixed roles |Medium |To easily manage the permissions in your databases SQL Server provides several roles, which are security principals that group other principals. They are like groups in the Microsoft Windows operating system. Database accounts and other SQL Server roles can be added into database-level roles. Each member of a fixed-database role can add other users to that same role. This rule checks that no user-defined roles are members of fixed roles. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance<br/><br/>SQL Database<br/><br/>Azure Synapse | |VA1267 |Contained users should use Windows Authentication |Medium |Contained users are users that exist within the database and do not require a login mapping. This rule checks that contained users use Windows Authentication. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
azure-sql Transact Sql Tsql Differences Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md
Service broker is enabled by default and cannot be disabled. The following ALTER
- `remote data archive`
- `remote proc trans`
- `scan for startup procs`
+- The following [sp_configure](/sql/relational-databases/system-stored-procedures/sp-configure-transact-sql) options are ignored and have no effect:
+ - `Ole Automation Procedures`
- `sp_execute_external_scripts` isn't supported. See [sp_execute_external_scripts](/sql/relational-databases/system-stored-procedures/sp-execute-external-script-transact-sql#examples). - `xp_cmdshell` isn't supported. See [xp_cmdshell](/sql/relational-databases/system-stored-procedures/xp-cmdshell-transact-sql). - `Extended stored procedures` aren't supported, and this includes `sp_addextendedproc` and `sp_dropextendedproc`. This functionality won't be supported because it's on a deprecation path for SQL Server. For more details, see [Extended Stored Procedures](/sql/relational-databases/extended-stored-procedures-programming/database-engine-extended-stored-procedures-programming).
azure-sql Availability Group Distributed Network Name Dnn Listener Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-distributed-network-name-dnn-listener-configure.md
Test the connectivity to your DNN listener with these steps:
## Port considerations
-DNN listeners are designed to listen on all IP addresses, but on a specific, unique port. The DNS entry for the listener name should resolve to the addresses of all replicas in the availability group. This is done automatically with the PowerShell script provided in the [Create Script](#create-script) section. Since DNN listeners accept connections on all IP addresses, it is critical that the listener port be unique, and not in use by any other replica in the availability group. Since SQL Server always listens on port 1433, either directly or via the SQL Browser service, port 1433 cannot be used for any DNN listener.
+DNN listeners are designed to listen on all IP addresses, but on a specific, unique port. The DNS entry for the listener name should resolve to the addresses of all replicas in the availability group. This is done automatically with the PowerShell script provided in the [Create Script](#create-script) section. Since DNN listeners accept connections on all IP addresses, it is critical that the listener port be unique, and not in use by any other replica in the availability group. Since SQL Server listens on port 1433 by default, either directly or via the SQL Browser service, using port 1433 for the DNN listener is strongly discouraged.
## Next steps
To learn more, see:
- [Windows Server Failover Cluster with SQL Server on Azure VMs](hadr-windows-server-failover-cluster-overview.md) - [Always On availability groups with SQL Server on Azure VMs](availability-group-overview.md) - [Always On availability groups overview](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server)-
azure-video-analyzer Ai Composition Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/ai-composition-overview.md
+
+ Title: AI compositions in Azure Video Analyzer
+description: This article gives a high-level overview of Azure Video Analyzer support for three kinds of AI composition and explains the scenarios for each kind.
+ Last updated : 08/11/2021+++
+# AI composition
+
+This article gives a high-level overview of Azure Video Analyzer support for three kinds of AI composition.
+
+* [Sequential](#sequential-ai-composition)
+* [Parallel](#parallel-ai-composition)
+* [Combined](#combined-ai-composition)
+
+## Sequential AI composition
+
+AI nodes can be sequentially composed. This allows a downstream node to augment inferences generated by an upstream node.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/ai-composition/sequential.svg" alt-text="Sequential AI composition":::
+
+### Key aspects
+
+* Pipeline extensions act as media passthrough nodes and can be configured such that external AI servers receive frames at different rates, formats, and resolutions. Additionally, they can be configured so that external AI servers receive either all frames or only frames that already contain inferences.
+* Inferences are added to the frames as they pass through the different extension nodes; an unlimited number of such nodes can be added in sequence, as sketched below.
+* Other scenarios such as continuous video recording or event-based video recording can be combined with sequential AI composition.
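+
+As an illustration, the `processors` array of a pipeline topology with two extension nodes chained in sequence might resemble the following minimal sketch. The node names, endpoint URLs, and omitted required properties (such as image formatting) are assumptions for illustration only; refer to the published topology samples for the authoritative schema.
+
+```json
+{
+  "processors": [
+    {
+      "@type": "#Microsoft.VideoAnalyzer.GrpcExtension",
+      "name": "firstInferenceNode",
+      "endpoint": {
+        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
+        "url": "tcp://first-ai-server:44000"
+      },
+      "inputs": [ { "nodeName": "rtspSource" } ]
+    },
+    {
+      "@type": "#Microsoft.VideoAnalyzer.GrpcExtension",
+      "name": "secondInferenceNode",
+      "endpoint": {
+        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
+        "url": "tcp://second-ai-server:44001"
+      },
+      "inputs": [ { "nodeName": "firstInferenceNode" } ]
+    }
+  ]
+}
+```
+
+Because `secondInferenceNode` takes `firstInferenceNode` as its input, it receives frames together with any inferences the first node has already attached, which is what allows it to augment them.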
+
+
+## Parallel AI composition
+
+AI nodes can also be composed in parallel instead of in sequence. This allows independent inferences to be performed on the ingested video stream, saving ingest bandwidth on the edge.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/ai-composition/parallel.svg" alt-text="Parallel AI composition":::
+
+### Key aspects
+
+* Video can be split into an arbitrary number of parallel branches, and such a split can happen at any point after any of the following nodes (a minimal fragment follows this list).
+
+ * RTSP source
+ * Motion Detector
+ * Pipeline extension
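+
+A sketch of such a split, with two branches reading from the same RTSP source node, is shown below; the node names and type strings are illustrative assumptions:
+
+```json
+{
+  "processors": [
+    {
+      "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor",
+      "name": "branchOneMotionDetection",
+      "inputs": [ { "nodeName": "rtspSource" } ]
+    },
+    {
+      "@type": "#Microsoft.VideoAnalyzer.GrpcExtension",
+      "name": "branchTwoInferenceNode",
+      "inputs": [ { "nodeName": "rtspSource" } ]
+    }
+  ]
+}
+```
+
+Because both branches reference `rtspSource` as their input, the video is ingested once and then processed independently in each branch, which is what saves ingest bandwidth.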
+
+## Combined AI composition
+
+Both sequential and parallel composition constructs can be combined to develop complex composable AI pipelines. This is possible because AVA pipelines allow extension nodes to be combined sequentially and/or in parallel, without limit, alongside other supported nodes.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/ai-composition/complex.svg" alt-text="Combined AI composition":::
+
++
+## Next steps
+
+[Analyze live video streams with multiple AI models using AI composition](analyze-ai-composition.md)
azure-video-analyzer Analyze Ai Composition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/analyze-ai-composition.md
+
+ Title: Analyze live video streams with multiple AI models using AI composition
+description: This article provides guidance on how to analyze live video streams with multiple AI models using the AI composition feature of Azure Video Analyzer.
++ Last updated : 04/01/2021+++
+# Analyze live video streams with multiple AI models using AI composition
+
+Certain customer scenarios require that video be analyzed with multiple AI models. Such models can [augment each other](ai-composition-overview.md#sequential-ai-composition), [work independently in parallel](ai-composition-overview.md#parallel-ai-composition) on the same video stream, or be used in a [combination](ai-composition-overview.md#combined-ai-composition) of augmented and independently parallel models acting on the same video stream to derive actionable insights.
+
+Azure Video Analyzer supports such scenarios via a feature called [AI Composition](ai-composition-overview.md). This guide shows you how to apply multiple models in an augmented fashion on the same video stream. It uses a Tiny (Light) YOLO model and a regular YOLO model in sequence to detect an object of interest. The Tiny YOLO model is computationally lighter but less accurate than the YOLO model and is called first. If the detected object passes a specific confidence threshold, the sequentially staged regular YOLO model is not invoked, thus utilizing the underlying resources efficiently.
+
+After completing the steps in this guide, you'll be able to run a simulated live video stream through a pipeline with AI composability and extend it to your specific scenarios. The following diagram graphically represents that pipeline.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/ai-composition/motion-with-object-detection-using-ai-composition.svg" alt-text="AI composition overview":::
+
+## Prerequisites
+
+* An Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have one.
+
+ > [!NOTE]
+ > You will need an Azure subscription with permissions for creating service principals (the owner role provides this). If you do not have the right permissions, ask your account administrator to grant them.
+* [Visual Studio Code](https://code.visualstudio.com/) on your development machine. Make sure you have the [Azure IoT Tools extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* Make sure the network that your development machine is connected to permits Advanced Message Queueing Protocol (AMQP) over port 5671 for outbound traffic. This setup enables Azure IoT Tools to communicate with Azure IoT Hub.
+* Complete the [Analyze live video by using your own gRPC model](analyze-live-video-use-your-model-grpc.md) quickstart. Do not skip this step; it is a strict requirement for this how-to guide.
+
+> [!TIP]
+> You might be prompted to install Docker while you're installing the Azure IoT Tools extension. Feel free to ignore the prompt.
+>
+> If you run into issues with Azure resources that get created, please view our [troubleshooting guide](troubleshoot.md#common-error-resolutions) to resolve some commonly encountered issues.
+
+## Review the video sample
+
+Since you have already completed the quickstart specified in the prerequisites section, you have an edge device with an input folder, /home/localedgeuser/samples/input, that includes certain video files. Log into the IoT Edge device, change to the directory /home/localedgeuser/samples/input/, and run the following command to get the input file used in this how-to guide.
+
`wget https://lvamedia.blob.core.windows.net/public/co-final.mkv`
+
+Additionally, if you like, on a machine that has [VLC media player](https://www.videolan.org/vlc/) installed, select Ctrl+N and then paste a link to the [sample video (.mkv)](https://lvamedia.blob.core.windows.net/public/co-final.mkv) to start playback. You'll see footage of cars on a freeway.
+
+## Create and deploy the pipeline
+
+Similar to the steps in the quickstart that you completed in the prerequisites, you can follow the steps here but with minor adjustments.
+
+1. Follow the guidelines in the [Create and deploy the pipeline](analyze-live-video-use-your-model-grpc.md#create-and-deploy-the-pipeline) section of the quickstart you just finished. Be sure to make the following adjustments as you continue with the steps; they help to ensure that the correct body for the direct method calls is used.
+
+Edit the *operations.json* file (a sketch of the edited sections follows this list):
+
+* Change the link to the pipeline topology:
+ `"pipelineTopologyUrl" : "https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/ai-composition/topology.json"`
+* Under `livePipelineSet`:
+  1. Ensure `"topologyName" : "AIComposition"`.
+  2. Change the `rtspUrl` parameter value to `"rtsp://rtspsim:554/media/co-final.mkv"`.
+
+* Under `pipelineTopologyDelete`, edit the name:
+ `"name" : "AIComposition"`
+
+2. Follow the guidelines in the [Generate and deploy the IoT Edge deployment manifest](analyze-live-video-use-your-model-grpc.md#generate-and-deploy-the-iot-edge-deployment-manifest) section, but use the following deployment manifest instead: *src/edge/deployment.composite.template.json*.
+
+3. Follow the guidelines in the [Run the sample program](analyze-live-video-use-your-model-grpc.md#run-the-sample-program) section.
+
+4. For result details, see the [interpret the results](analyze-live-video-use-your-model-grpc.md#interpret-results) section. In addition to the analytics events on the hub and the diagnostic events, the topology that you have used also creates a relevant video clip in the cloud, triggered by the AI signal-based activation of the signal gate. The clip is accompanied by [operational events](record-event-based-live-video.md#operational-events) on the hub for downstream workflows to act on. You can [examine and play](record-event-based-live-video.md#playing-back-the-recording) the video clip by logging into the Azure portal.
+
+## Clean up
+
+If you're not going to continue to use this application, delete the resources you created in this guide.
+
+## Next steps
+
+Learn more about [diagnostic messages](monitor-log-edge.md).
azure-video-analyzer Deploy Iot Edge Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/deploy-iot-edge-linux-on-windows.md
In this article, you'll learn how to deploy Azure Video Analyzer on an edge devi
The following depicts the overall flow of the document and in 5 simple steps you should be all set up to run Azure Video Analyzer on a Windows device that has EFLOW:
-![IoT Edge for Linux on Windows (EFLOW) diagram](./media/deploy-iot-edge-linux-on-windows/eflow.png)
+![Diagram of IoT Edge for Linux on Windows (EFLOW).](./media/deploy-iot-edge-linux-on-windows/eflow-updated.png)
-1. [Install EFLOW](../../iot-edge/how-to-install-iot-edge-on-windows.md) on your Windows device.
+1. [Install EFLOW](../../iot-edge/how-to-install-iot-edge-on-windows.md) on your Windows device using PowerShell.
- 1. If you are using your Windows PC, then on the [Windows Admin Center](/windows-server/manage/windows-admin-center/overview) start page, under the list of connections, you will see a local host connection representing the PC where you running Windows Admin Center.
- 1. Any additional servers, PCs, or clusters that you manage will also show up here.
- 1. You can use Windows Admin Center to install and manage Azure EFLOW on either your local device or remote managed devices. In this guide, the local host connection served as the target device for the deployment of Azure IoT Edge for Linux on Windows. Hence you see the localhost also listed as an IoT Edge device.
- ![Deployments steps - windows admin center](./media/deploy-iot-edge-linux-on-windows/windows-admin-center.png)
-1. Click on the IoT Edge device to connect to it and you should see an Overview and Command Shell tab. The command shell tab is where you can issue commands to your edge device.
+1. Once EFLOW is set up, type the command `Connect-EflowVm` into PowerShell (with administrative privilege) to connect. This brings up a bash terminal within PowerShell to control the EFLOW VM, where you can run Linux commands, including utilities like `top` and `nano`.
- ![Deployments steps - Azure IoT Edge Manager](./media/deploy-iot-edge-linux-on-windows/azure-iot-edge-manager.png)
-1. Go to the command shell and type in the following command:
+ > [!TIP]
+ > To exit the EFLOW VM, type `exit` within the terminal.
+
+1. Log into the EFLOW VM via PowerShell and type in the following command:
`bash -c "$(curl -sL https://aka.ms/ava-edge/prep_device)"`
The following depicts the overall flow of the document and in 5 simple steps you
* `/var/media`

Note the video files (*.mkv) in the /home/localedgeuser/samples/input folder, which serve as input files to be analyzed.
-1. Now that you have the edge device set up, registered to the hub and running successfully with the correct folder structures created, the next step is to set up the following additional Azure resources and deploy the AVA module.
-
- * Storage account
- * Azure Media Services account
+1. Now that you have the edge device set up, registered to the hub, and running successfully with the correct folder structures created, the next step is to set up the following additional Azure resources and deploy the AVA module. The following deployment template will take care of the resource creation:
[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/ava-click-to-deploy)
+
+ The deployment process will take about 20 minutes. Upon completion, you will have certain Azure resources deployed in the Azure subscription, including:
+
+ * Video Analyzer account - This cloud service is used to register the Video Analyzer edge module, and for playing back recorded video and video analytics.
+ * Storage account - For storing recorded video and video analytics.
+ * Managed Identity - This is the user-assigned managed identity used to manage access to the above storage account.
+ * IoT Hub - This acts as a central message hub for bi-directional communication between your IoT application, IoT Edge modules, and the devices it manages.
In the template, when asked if you need an edge device, choose the "Use an existing edge device" option, since you created both the device and the IoT Hub earlier. You will also be prompted for your IoT Hub name and IoT Edge device ID in the subsequent steps.

![Use Existing Device](./media/deploy-iot-edge-linux-on-windows/use-existing-device.png)
- Once finished, you can log back onto the IoT Edge device command shell and run the following command.
+ Once finished, you can log back onto the EFLOW VM and run the following command.
**`sudo iotedge list`**
batch Batch Pool Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-vm-sizes.md
Title: Choose VM sizes and images for pools description: How to choose from the available VM sizes and OS versions for compute nodes in Azure Batch pools Previously updated : 07/20/2021 Last updated : 08/10/2021
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
Follow these steps to add a user to a speech resource so they can use Audio Cont
2. Click **Access control (IAM)**. Click the **Role assignments** tab to view all the role assignments for this subscription. :::image type="content" source="media/audio-content-creation/access-control-roles.png" alt-text="Role assignment tab"::: 3. Click **Add** > **Add role assignment** to open the Add role assignment pane. In the Role drop-down list, select the **Cognitive Service User** role. If you want to give the user ownership of this speech resource, you can select the **Owner** role.
-4. In the list, select a user. If you do not see the user in the list, you can type in the Select box to search the directory for display names and email addresses. If the user is not in this directory, you can input the user's [Microsoft account](https://account.microsoft.com/account) (which is trusted by Azure active directory).
-5. Click **Save** to assign the role. The user will receive an email invitation. Accept the invitation by clicking **Accept invitation** > **Accept to join Azure** in the email. Then the user will be redirected to the Azure portal. The user do not need to take further action in the Azure portal.
-6. After a few moments, the user is assigned the Cognitive Service User role at the speech resource scope. User can visit or refresh the [Audio Content Creation](https://aka.ms/audiocontentcreation) page, and choose the speech resource to get started.
+4. Type in the user's email address and select the user in the directory. The email address must be a **Microsoft account**, which is trusted by Azure Active Directory. Users can easily sign up for a [Microsoft account](https://account.microsoft.com/account) using a personal email address.
+5. Click **Save** to assign the role.
+6. The user will receive an email invitation. Accept the invitation by clicking **Accept invitation** > **Accept to join Azure** in the email; the user is then redirected to the Azure portal and does not need to take further action there. After a few moments, the user is assigned the role at the speech resource scope and will have access to this speech resource. If the user didn't receive the invitation email, you can search for the user's account under **Role assignments** and open the user's profile. Under **Identity** > **Invitation accepted**, click **(manage)** to resend the email invitation. You can also copy the invitation link and send it to the user.
+7. The user can now visit or refresh the [Audio Content Creation](https://aka.ms/audiocontentcreation) product page and sign in with their Microsoft account. Select the **Audio Content Creation** block among the speech products, and then choose the speech resource in the pop-up window or in the settings at the upper right of the page. If the user cannot find an available speech resource, check that you are in the right directory: click the account profile in the upper-right corner, and then click **Switch** beside **Current directory**. If more than one directory is available, you have access to multiple directories; switch among them and check the settings to see whether the right speech resource is available.
:::image type="content" source="media/audio-content-creation/add-role-first.png" alt-text="Add role dialog":::
Users who are in the same speech resource will see each other's work in Audio Co
If you want one of the users to give access to other users, you need to give the user the owner role for the speech resource and set the user as the Azure directory reader. 1. Add the user as the owner of the speech resource. See [how to add users to a speech resource](#add-users-to-a-speech-resource). :::image type="content" source="media/audio-content-creation/add-role.png" alt-text="Role Owner field":::
-1. In the [Azure portal](https://portal.azure.com/), select the collapsed menu in the upper left. Click **Azure Active Directory**, and then Click **Users**.
-1. Search the user's Microsoft account, and go to the user's detail page. Click **Assigned roles**.
-1. Click **Add assignments** -> **Directory Readers**.
+2. In the [Azure portal](https://portal.azure.com/), select the collapsed menu in the upper left. Click **Azure Active Directory**, and then Click **Users**.
+3. Search the user's Microsoft account, and go to the user's detail page. Click **Assigned roles**.
+4. Click **Add assignments** -> **Directory Readers**. If the **Add assignments** button is grayed out, you do not have access; only the global administrator of the directory can add assignments to users.
## See also
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-howto.md
Previously updated : 07/22/2021 Last updated : 08/10/2021 keywords: on-premises, Docker, container
Speech containers enable customers to build a speech application architecture th
| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 2.13.0 | Generally Available | | Custom Speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 2.13.0 | Generally Available | | Text-to-speech | Converts text to natural-sounding speech with plain text input or Speech Synthesis Markup Language (SSML). | 1.14.1 | Generally Available |
-| Speech Language Identification | Detect the language spoken in audio files. | 1.0 | Gated preview |
+| Speech Language Identification | Detect the language spoken in audio files. | 1.3.0 | Gated preview |
| Neural Text-to-speech | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | 1.8.0 | Generally Available | If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
This command:
* Exposes TCP port 5003 and allocates a pseudo-TTY for the container. * Automatically removes the container after it exits. The container image is still available on the host computer.
-If you're sending only Speech Language Identification requests, you will need to set the Speech client's `phraseDetection` value to `None`.
-
-```python
-speech_config.set_service_property(
- name='speechcontext-phraseDetection.Mode',
- value='None',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
- )
-```
- If you want to run this container with the speech-to-text container, you can use this [Docker image](https://hub.docker.com/r/antsu/on-prem-client). After both containers have been started, use this Docker Run command to execute `speech-to-text-with-languagedetection-client`. ```Docker
cognitive-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/containers/translator-how-to-install-container.md
Translator container enables you to build a translator application architecture
To get started, you'll need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-You'll also need the following to use Translator containers:
+You'll also need the following:
| Required | Purpose | |--|--|
cognitive-services Client Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/client-sdks.md
Last updated 07/06/2021
-# Document Translation client libraries and SDKs
+# Document Translation client-library SDKs
<!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD001 -->
-[Document Translation](overview.md) is a cloud-based feature of the [Azure Translator](../translator-info-overview.md) service. You can translate entire documents or process batch document translations in various file formats while preserving original document structure and format. In this article, you'll learn how to use the Document Translation service C#/.NET and Python client libraries. For the REST API, see our [Quickstart](get-started-with-document-translation.md) guide.
+[Document Translation](overview.md) is a cloud-based feature of the [Azure Translator](../translator-overview.md) service. You can translate entire documents or process batch document translations in various file formats while preserving original document structure and format. In this article, you'll learn how to use the Document Translation service C#/.NET and Python client libraries. For the REST API, see our [Quickstart](get-started-with-document-translation.md) guide.
## Prerequisites
using System;
using System.Threading; ```
-In the application's **Program** class, create variable for your subscription key and custom endpoint. For details, *see* [Get your custom domain name and subscription key](get-started-with-document-translation.md#get-your-custom-domain-name-and-subscription-key)
+In the application's **Program** class, create variables for your subscription key and custom endpoint. For details, *see* [Custom domain name and subscription key](get-started-with-document-translation.md#custom-domain-name-and-subscription-key).
```csharp private static readonly string endpoint = "<your custom endpoint>";
Create a new Python application in your preferred editor or IDE. Then import the
``` Create variables for your resource subscription key, custom endpoint, sourceUrl, and targetUrl. For
-more information, *see* [Get your custom domain name and subscription key](get-started-with-document-translation.md#get-your-custom-domain-name-and-subscription-key)
+more information, *see* [Custom domain name and subscription key](get-started-with-document-translation.md#custom-domain-name-and-subscription-key)
```python subscriptionKey = "<your-subscription-key>"
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
Previously updated : 07/06/2021 Last updated : 08/09/2021 # Get started with Document Translation
- In this article, you'll learn to use Document Translation with HTTP REST API methods. Document Translation is a cloud-based feature of the [Azure Translator](../translator-info-overview.md) service. The Document Translation API enables the translation of whole documents while preserving source document structure and text formatting.
+ In this article, you'll learn to use Document Translation with HTTP REST API methods. Document Translation is a cloud-based feature of the [Azure Translator](../translator-overview.md) service. The Document Translation API enables the translation of whole documents while preserving source document structure and text formatting.
## Prerequisites
To get started, you'll need:
* An [**Azure blob storage account**](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM). You will create containers to store and organize your blob data within your storage account.
-## Get your custom domain name and subscription key
+## Custom domain name and subscription key
> [!IMPORTANT] >
Requests to the Translator service require a read-only key for authenticating ac
:::image type="content" source="../media/translator-keys.png" alt-text="Image of the get your subscription key field in Azure portal.":::
-## Create your Azure blob storage containers
+## Create Azure blob storage containers
-You'll need to [**create containers**](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure blob storage account**](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source, target, and optional glossary files.
+You'll need to [**create containers**](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure blob storage account**](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
* **Source container**. This container is where you upload your files for translation (required).
-* **Target container**. This container is where your translated files will be stored (required).
-* **Glossary container**. This container is where you upload your glossary files (optional).
+* **Target container**. This container is where your translated files will be stored (required).
+
+> [!NOTE]
+> Document Translation supports glossaries as blobs in target containers (not separate glossary containers). If you want to include a custom glossary, add it to the target container and include the `glossaryUrl` with the request. If the translation language pair is not present in the glossary, it will not be applied. *See* [Translate documents using a custom glossary](#translate-documents-using-a-custom-glossary).
### **Create SAS access tokens for Document Translation**
The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Share
* Your **source** container or blob must have designated **read** and **list** access. * Your **target** container or blob must have designated **write** and **list** access.
-* Your **glossary** container or blob must have designated **read** and **list** access.
+* Your **glossary** blob must have designated **read** and **list** access.
> [!TIP] >
-> * If you're translating **multiple** files (blobs) in an operation, **delegate SAS access at the container level**.
-> * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**.
+> * If you're translating **multiple** files (blobs) in an operation, **delegate SAS access at the container level**.
+> * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**.
>
-## Set up your coding Platform
+## Document Translation: HTTP requests
+
+A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the batch request is created by the service.
+
+### HTTP headers
+
+The following headers are included with each Document Translator API request:
+
+|HTTP header|Description|
+||--|
+|Ocp-Apim-Subscription-Key|**Required**: The value is the Azure subscription key for your Translator or Cognitive Services resource.|
+|Content-Type|**Required**: Specifies the content type of the payload. The accepted value is application/json, optionally with charset=UTF-8.|
+|Content-Length|**Required**: the length of the request body.|
+
+### POST request body properties
+
+* The POST request URL is `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches`.
+* The POST request body is a JSON object named `inputs`.
+* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs.
+* The `prefix` and `suffix` fields (optional) are used to filter documents in the container, including folders, as shown in the fragment after the note below.
+* A value for the `glossaries` field (optional) is applied when the document is being translated.
+* The `targetUrl` for each target language must be unique.
+
+>[!NOTE]
+> If a file with the same name already exists in the destination, it will be overwritten.
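+
+For instance, the optional `prefix` and `suffix` filters can be combined inside the `source` object to restrict an operation to, say, .docx files under a specific folder. This is a fragment rather than a complete request body, and the container URL is a placeholder that would normally carry a SAS token:
+
+```json
+{
+  "source": {
+    "sourceUrl": "https://my.blob.core.windows.net/source-en",
+    "filter": {
+      "prefix": "myfolder/",
+      "suffix": ".docx"
+    }
+  }
+}
+```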
+
+<!-- markdownlint-disable MD024 -->
+### Translate all documents in a container
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+ "language": "fr"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Translate a specific document in a container
+
+* Ensure you have specified "storageType": "File"
+* Ensure you have created a source URL and SAS token for the specific blob/document (not for the container).
+* Ensure you have specified the target filename as part of the target URL – though the SAS token is still for the container.
+* The sample request below shows a single document translated into two target languages.
+
+```json
+{
+ "inputs": [
+ {
+ "storageType": "File",
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "language": "es"
+ },
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "language": "de"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Translate documents using a custom glossary
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://myblob.blob.core.windows.net/source",
+ "filter": {
+ "prefix": "myfolder/"
+ }
+ },
+ "targets": [
+ {
+ "targetUrl": "https://myblob.blob.core.windows.net/target",
+ "language": "es",
+ "glossaries": [
+ {
+ "glossaryUrl": "https:// myblob.blob.core.windows.net/glossary/en-es.xlf",
+ "format": "xliff"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Use code to submit Document Translation requests
+
+### Set up your coding platform
### [C#](#tab/csharp)
The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Share
* Set your endpoint, subscription key, and container URL values. * Run the program.
-### [Python](#tab/python)
+### [Python](#tab/python)
* Create a new project. * Copy and paste the code from one of the samples into your project.
The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Share
mkdir sample-project ```
-* In your project directory, create the following subdirectory structure:
+* In your project directory, create the following subdirectory structure:
src</br> &emsp; └ main</br>
gradle build
gradle run ```
-### [Go](#tab/go)
+### [Go](#tab/go)
* Create a new Go project. * Add the code provided below.
gradle run
-## Make Document Translation requests
-
-A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the batch request is created by the service.
-
-### HTTP headers
-
-The following headers are included with each Document Translator API request:
-
-|HTTP header|Description|
-||--|
-|Ocp-Apim-Subscription-Key|**Required**: The value is the Azure subscription key for your Translator or Cognitive Services resource.|
-|Content-Type|**Required**: Specifies the content type of the payload. Accepted values are application/json or charset=UTF-8.|
-|Content-Length|**Required**: the length of the request body.|
-
-### POST request body properties
-
-* The POST request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches`
-* The POST request body is a JSON object named `inputs`.
-* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs and can optionally contain a `glossaryURL` container address.
-* The `prefix` and `suffix` fields (optional) are used to filter documents in the container including folders.
-* A value for the `glossaries` field (optional) is applied when the document is being translated.
-* The `targetUrl` for each target language must be unique.
-
->[!NOTE]
-> If a file with the same name already exists in the destination, it will be overwritten.
-
-## POST a translation request
-
-<!-- markdownlint-disable MD024 -->
-### POST request body to translate all documents in a container
-
-```json
-{
- "inputs": [
- {
- "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
- },
- "targets": [
- {
- "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
- "language": "fr"
- }
- ]
- }
- ]
-}
-```
--
-### POST request body to translate a specific document in a container
-
-* Ensure you have specified "storageType": "File"
-* Ensure you have created source URL & SAS token for the specific blob/document (not for the container)
-* Ensure you have specified the target filename as part of the target URL – though the SAS token is still for the container.
-* Sample request below shows a single document getting translated into two target languages
-
-```json
-{
- "inputs": [
- {
- "storageType": "File",
- "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D"
- },
- "targets": [
- {
- "targetUrl": "https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
- "language": "es"
- },
- {
- "targetUrl": "https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
- "language": "de"
- }
- ]
- }
- ]
-}
-```
> [!IMPORTANT]
>
> For the code samples below, you'll hard-code your key and endpoint where indicated; remember to remove the key from your code when you're done, and never post it publicly. See [Azure Cognitive Services security](../../cognitive-services-security.md?tabs=command-line%2ccsharp) for ways to securely store and access your credentials.
Operation-Location | https://<<span>NAME-OF-YOUR-RESOURCE>.cognitiveservices.a
>
-## _POST Document Translation_ request
-
-Submit a batch Document Translation request to the translation service.
+## Translate documents
### [C#](#tab/csharp)
Submit a batch Document Translation request to the translation service.
using System.Net.Http; using System.Threading.Tasks; using System.Text;
-
+ class Program {
Submit a batch Document Translation request to the translation service.
private static readonly string subscriptionKey = "<YOUR-SUBSCRIPTION-KEY>"; static readonly string json = ("{\"inputs\": [{\"source\": {\"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"language\": \"en\",\"filter\":{\"prefix\": \"Demo_1/\"} }, \"targets\": [{\"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"category\": \"general\",\"language\": \"es\"}]}]}");
-
+ static async Task Main(string[] args) { using HttpClient client = new HttpClient(); using HttpRequestMessage request = new HttpRequestMessage(); {
-
+ StringContent content = new StringContent(json, Encoding.UTF8, "application/json"); request.Method = HttpMethod.Post; request.RequestUri = new Uri(endpoint + route); request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey); request.Content = content;
-
+ HttpResponseMessage response = await client.SendAsync(request); string result = response.Content.ReadAsStringAsync().Result; if (response.IsSuccessStatusCode) { Console.WriteLine($"Status code: {response.StatusCode}"); Console.WriteLine();
- Console.WriteLine($"Response Headers:");
+ Console.WriteLine($"Response Headers:");
Console.WriteLine(response.Headers); } else
if err != nil {
-## _GET file formats_
+## Get file formats
Retrieve a list of supported file formats. If successful, this method returns a `200 OK` response code.

### [C#](#tab/csharp)

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
```
let route = '/documents/formats';
let config = { method: 'get', url: endpoint + route,
- headers: {
+ headers: {
'Ocp-Apim-Subscription-Key': subscriptionKey } };
public class GetFileFormats {
public void get() throws IOException {
    Request request = new Request.Builder()
        .url(url)
        .method("GET", null)
        .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
        .build();
- Response response = client.newCall(request).execute();
+ Response response = client.newCall(request).execute();
System.out.println(response.body().string()); }
func main() {
-## _GET job status_
+## Get job status
Get the current status for a single job and a summary of all jobs in a Document Translation request. If successful, this method returns a `200 OK` response code.

<!-- markdownlint-disable MD024 -->
Get the current status for a single job and a summary of all jobs in a Document
### [C#](#tab/csharp)

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
```
let route = '/batches/{id}';
let config = { method: 'get', url: endpoint + route,
- headers: {
+ headers: {
'Ocp-Apim-Subscription-Key': subscriptionKey } };
public class GetJobStatus {
public void get() throws IOException {
    Request request = new Request.Builder()
        .url(url)
        .method("GET", null)
        .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
        .build();
- Response response = client.newCall(request).execute();
+ Response response = client.newCall(request).execute();
System.out.println(response.body().string()); }
func main() {
-## _GET document status_
+## Get document status
### Brief overview
Retrieve the status of a specific document in a Document Translation request. If
### [C#](#tab/csharp)

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
```
let route = '/{id}/document/{documentId}';
let config = { method: 'get', url: endpoint + route,
- headers: {
+ headers: {
'Ocp-Apim-Subscription-Key': subscriptionKey } };
public class GetDocumentStatus {
public void get() throws IOException {
    Request request = new Request.Builder()
        .url(url)
        .method("GET", null)
        .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
        .build();
- Response response = client.newCall(request).execute();
+ Response response = client.newCall(request).execute();
System.out.println(response.body().string()); }
func main() {
-## _DELETE job_
+## Delete job
### Brief overview
Cancel currently processing or queued job. Only documents for which translation
### [C#](#tab/csharp)

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
```
let route = '/batches/{id}';
let config = { method: 'delete', url: endpoint + route,
- headers: {
+ headers: {
'Ocp-Apim-Subscription-Key': subscriptionKey } };
public class DeleteJob {
public void get() throws IOException {
    Request request = new Request.Builder()
        .url(url)
        .method("DELETE", null)
        .addHeader("Ocp-Apim-Subscription-Key", subscriptionKey)
        .build();
- Response response = client.newCall(request).execute();
+ Response response = client.newCall(request).execute();
System.out.println(response.body().string()); }
Document Translation can not be used to translate secured documents such as thos
||-|--|
| 200 | OK | The request was successful. |
| 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
-| 401 | Unauthorized | The request is not authorized. Check to make sure your subscription key or token is valid and in the correct region. When managing your subscription on the Azure portal, please ensure you're using the **Translator** single-service resource _not_ the **Cognitive Services** multi-service resource.
+| 401 | Unauthorized | The request is not authorized. Check to make sure your subscription key or token is valid and in the correct region. When managing your subscription on the Azure portal, please ensure you're using the **Translator** single-service resource _not_ the **Cognitive Services** multi-service resource.
| 429 | Too Many Requests | You have exceeded the quota or rate of requests allowed for your subscription. |
| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/overview.md
Title: What is Document Translation?
+ Title: What is Microsoft Azure Cognitive Services Document Translation?
description: An overview of the cloud-based batch document translation service and process. Previously updated : 06/20/2021 Last updated : 08/09/2021

# What is Document Translation?
-Document Translation is a cloud-based feature of the [Azure Translator](../translator-info-overview.md) service and is part of the Azure Cognitive Service family of REST APIs. The Document Translation API translates documents across all [supported languages and dialects](../../language-support.md) while preserving document structure and data format.
+Document Translation is a cloud-based feature of the [Azure Translator](../translator-overview.md) service and is part of the Azure Cognitive Service family of REST APIs. In this overview, you'll learn how the Document Translation API can be used to translate multiple and complex documents across all [supported languages and dialects](../../language-support.md) while preserving original document structure and data format.
-This documentation contains the following article types:
+This documentation contains the following article types:
* [**Quickstarts**](get-started-with-document-translation.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](create-sas-tokens.md) contain instructions for using the feature in more specific or customized ways.
-* [**Reference**](reference/rest-api-guide.md) provide REST API settings, values, keywords and configuration.
+* [**How-to guides**](create-sas-tokens.md) contain instructions for using the feature in more specific or customized ways.
+* [**Reference**](reference/rest-api-guide.md) provides REST API settings, values, keywords, and configuration.
## Document Translation key features
This documentation contains the following article types:
|**Apply custom translation**| Translate documents using general and [custom translation](../customization.md#custom-translator) models.| |**Apply custom glossaries**|Translate documents using custom glossaries.| |**Automatically detect document language**|Let the Document Translation service determine the language of the document.|
-|**Translate documents with content in multiple languages**|Use the auto-detect feature to translate documents with content in multiple languages into your target language.|
+|**Translate documents with content in multiple languages**|Use the autodetect feature to translate documents with content in multiple languages into your target language.|
> [!NOTE]
> When translating documents with content in multiple languages, the feature is intended for complete sentences in a single language. If sentences are composed of more than one language, the content may not all translate into the target language.
->
-## How to get started?
+>
+
+## Document Translation development options
+
+You can add Document Translation to your applications using the REST API or a client-library SDK:
+
+* The [**REST API**](reference/rest-api-guide.md) is a language-agnostic interface that enables you to create HTTP requests and authorization headers to translate documents.
+
+* The [**client-library SDKs**](client-sdks.md) are language-specific classes, objects, methods, and code that you can quickly use by adding a reference in your project. Currently, Document Translation has programming language support for [**C#/.NET**](/dotnet/api/azure.ai.translation.document) and [**Python**](/python/azure-ai-translation-document/latest/azure.ai.translation.document.html). A minimal C# sketch follows this list.
+
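To give a feel for the SDK surface, here's a rough C# sketch (an illustration, not the article's own sample): the resource endpoint, key, and SAS-token container URLs are placeholders to replace with your own values, and error handling is omitted.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.Translation.Document;

class Program
{
    static async Task Main()
    {
        // Placeholder values -- replace with your resource endpoint, key, and SAS-token container URLs.
        var client = new DocumentTranslationClient(
            new Uri("https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"),
            new AzureKeyCredential("<YOUR-SUBSCRIPTION-KEY>"));

        // One source container, one Spanish target container.
        var input = new DocumentTranslationInput(
            sourceUri: new Uri("<SOURCE-CONTAINER-SAS-URL>"),
            targetUri: new Uri("<TARGET-CONTAINER-SAS-URL>"),
            targetLanguageCode: "es");

        // Submit the batch job and wait for all documents to finish.
        DocumentTranslationOperation operation = await client.StartTranslationAsync(input);
        await operation.WaitForCompletionAsync();

        Console.WriteLine($"Documents succeeded: {operation.DocumentsSucceeded}");
        Console.WriteLine($"Documents failed: {operation.DocumentsFailed}");
    }
}
```

The `StartTranslationAsync` call submits the batch job; the returned operation object exposes per-document status once the job completes.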
+## Get started
In our how-to guide, you'll learn how to quickly get started using Document Translation. To begin, you'll need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free).

> [!div class="nextstepaction"]
-> [Get Started](get-started-with-document-translation.md)
+> [Start here](get-started-with-document-translation.md "Learn how to use Document Translation with HTTP REST")
## Supported document formats
The following document file types are supported by Document Translation:
| File type| File extension|Description|
|||--|
-|Adobe PDF|.pdf|Adobe Acrobat portable document format|
-|Comma-Separated Values |.csv| A comma-delimited raw-data file used by spreadsheet programs.|
-|HTML|.html, .htm|Hyper Text Markup Language.|
-|Localization Interchange File Format|.xlf. , xliff| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
-|Markdown| .markdown, .mdown, .mkdn, .md, .mkd, .mdwn, .mdtxt, .mdtext, .rmd| A lightweight markup language for creating formatted text.|
-|MHTML|.mthml, .mht| A web page archive format used to combine HTML code and its companion resources.|
-|Microsoft Excel|.xls, .xlsx|A spreadsheet file for data analysis and documentation.|
-|Microsoft Outlook|.msg|An email message created or saved within Microsoft Outlook.|
-|Microsoft PowerPoint|.ppt, .pptx| A presentation file used to display content in a slideshow format.|
-|Microsoft Word|.doc, .docx| A text document file.|
-|OpenDocument Text|.odt|An open-source text document file.|
-|OpenDocument Presentation|.odp|An open-source presentation file.|
-|OpenDocument Spreadsheet|.ods|An open-source spreadsheet file.|
-|Rich Text Format|.rtf|A text document containing formatting.|
-|Tab Separated Values/TAB|.tsv/.tab| A tab-delimited raw-data file used by spreadsheet programs.|
-|Text|.txt| An unformatted text document.|
+|Adobe PDF|pdf|Adobe Acrobat portable document format|
+|Comma-Separated Values |csv| A comma-delimited raw-data file used by spreadsheet programs.|
+|HTML|html, htm|Hyper Text Markup Language.|
+|Localization Interchange File Format|xlf, xliff| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
+|Markdown| markdown, mdown, mkdn, md, mkd, mdwn, mdtxt, mdtext, rmd| A lightweight markup language for creating formatted text.|
+|MHTML|mhtml, mht| A web page archive format used to combine HTML code and its companion resources.|
+|Microsoft Excel|xls, xlsx|A spreadsheet file for data analysis and documentation.|
+|Microsoft Outlook|msg|An email message created or saved within Microsoft Outlook.|
+|Microsoft PowerPoint|ppt, pptx| A presentation file used to display content in a slideshow format.|
+|Microsoft Word|doc, docx| A text document file.|
+|OpenDocument Text|odt|An open-source text document file.|
+|OpenDocument Presentation|odp|An open-source presentation file.|
+|OpenDocument Spreadsheet|ods|An open-source spreadsheet file.|
+|Rich Text Format|rtf|A text document containing formatting.|
+|Tab Separated Values/TAB|tsv/tab| A tab-delimited raw-data file used by spreadsheet programs.|
+|Text|txt| An unformatted text document.|
## Supported glossary formats
The following glossary file types are supported by Document Translation:
| File type| File extension|Description|
|||--|
-|Comma-Separated Values| .csv |A comma-delimited raw-data file used by spreadsheet programs.|
-|Localization Interchange File Format|.xlf. , xliff| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
-|Tab-Separated Values/TAB|.tsv, .tab| A tab-delimited raw-data file used by spreadsheet programs.|
+|Comma-Separated Values| csv |A comma-delimited raw-data file used by spreadsheet programs.|
+|Localization Interchange File Format| xlf, xliff| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
+|Tab-Separated Values/TAB|tsv, tab| A tab-delimited raw-data file used by spreadsheet programs.|
## Next steps
cognitive-services Rest Api Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/reference/rest-api-guide.md
+
+ Title: "Text Translation REST API reference guide"
+
+description: View a list of and links to the Text Translation REST APIs.
++++++ Last updated : 08/10/2021+++
+# Text Translation REST API
+
+Text Translation is a cloud-based feature of the Azure Translator service and is part of the Azure Cognitive Service family of REST APIs. The Text Translation API translates text between language pairs across all [supported languages and dialects](../../language-support.md). The available methods are listed in the table below:
+
+| Request| Method| Description|
+||--||
+| [**languages**](v3-0-languages.md) | **GET** | Returns the set of languages currently supported by the **translation**, **transliteration**, and **dictionary** methods. This request does not require authentication headers and you do not need a Translator resource to view the supported language set.|
+|[**translate**](v3-0-translate.md) | **POST**| Translate specified source language text into the target language text.|
+|[**transliterate**](v3-0-transliterate.md) | **POST** | Map source language script or alphabet to a target language script or alphabet.
+|[**detect**](v3-0-detect.md) | **POST** | Identify the source language. |
+|[**breakSentence**](v3-0-break-sentence.md) | **POST** | Returns an array of integers representing the length of sentences in a source text. |
+| [**dictionary/lookup**](v3-0-dictionary-lookup.md) | **POST** | Returns alternatives for single word translations. |
+| [**dictionary/examples**](v3-0-dictionary-lookup.md) | **POST** | Returns how a term is used in context. |
+
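To illustrate the calling pattern these methods share, here's a minimal C# sketch of a **translate** request. It's not part of the reference table above: the key and region values are placeholders, and the `Ocp-Apim-Subscription-Region` header is only needed for regional (non-global) resources.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class TranslateExample
{
    static async Task Main()
    {
        // Placeholder key and region -- replace with your Translator resource values.
        string key = "<YOUR-SUBSCRIPTION-KEY>";
        string region = "<YOUR-RESOURCE-REGION>";

        // Translate one English string to Spanish with the v3.0 translate method.
        string url = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=es";

        using HttpClient client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Post, url);
        request.Headers.Add("Ocp-Apim-Subscription-Key", key);
        request.Headers.Add("Ocp-Apim-Subscription-Region", region);
        request.Content = new StringContent("[{ \"Text\": \"Hello, friend.\" }]", Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```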
+> [!div class="nextstepaction"]
+> [Create a Translator resource in the Azure portal.](../translator-how-to-signup.md)
+
+> [!div class="nextstepaction"]
+> [Quickstart: REST API and your programming language](../quickstart-translator.md)
cognitive-services Text Translation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/text-translation-overview.md
+
+ Title: What is Microsoft Azure Cognitive Services Text Translation?
+
+description: Integrate the Text Translation API into your applications, websites, tools, and other solutions to provide multi-language user experiences.
++++++ Last updated : 08/09/2021++
+keywords: translator, text translation, machine translation, translation service, custom translator
++
+# What is Text Translation?
+
+ Text translation is a cloud-based REST API feature of the Translator service that uses neural machine translation technology to enable quick and accurate source-to-target text translation in real time across all [supported languages](language-support.md). In this overview, you'll learn how the Text Translation REST APIs enable you to build intelligent solutions for your applications and workflows.
+
+Text translation documentation contains the following article types:
+
+* [**Quickstarts**](quickstart-translator.md). Getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](translator-how-to-signup.md). Instructions for accessing and using the service in more specific or customized ways.
+* [**Reference articles**](reference/v3-0-reference.md). REST API documentation and programming language-based content.
+
+## Text translation features
+
+ The following methods are supported by the Text Translation feature:
+
+* [**Languages**](reference/v3-0-languages.md). Returns a list of languages supported by **Translate**, **Transliterate**, and **Dictionary Lookup** operations. This request does not require authentication; just copy and paste the following GET request into Postman or your favorite API tool or browser:
+
+ ```http
+ https://api.cognitive.microsofttranslator.com/languages?api-version=3.0
+ ```
+
+* [**Translate**](reference/v3-0-translate.md#translate-to-multiple-languages). Renders single source-language text to multiple target-language texts with a single request.
+
+* [**Transliterate**](reference/v3-0-transliterate.md). Converts characters or letters of a source language to the corresponding characters or letters of a target language.
+
+* [**Detect**](reference/v3-0-detect.md). Returns the source language code and a boolean value denoting whether the detected language is supported for text translation and transliteration.
+
+ > [!NOTE]
+ > You can **Translate, Transliterate, and Detect** text with [a single REST API call](reference/v3-0-translate.md#translate-a-single-input-with-language-autodetection); a minimal sketch follows this list.
+
+* [**Dictionary lookup**](reference/v3-0-dictionary-lookup.md). Returns equivalent words for the source term in the target language.
+* [**Dictionary example**](reference/v3-0-dictionary-examples.md). Returns grammatical structure and context examples for the source term and target term pair.
+
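To make the single-call note above concrete, here's a minimal C# sketch (placeholder key and region; not from the original article) that omits the `from` parameter so the service detects the source language, and passes two `to` parameters to get two translations back in one response.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class DetectTranslateExample
{
    static async Task Main()
    {
        // Placeholder key and region -- replace with your own values.
        string key = "<YOUR-SUBSCRIPTION-KEY>";
        string region = "<YOUR-RESOURCE-REGION>";

        // No "from" parameter: the service detects the source language.
        // Two "to" parameters: one request yields both Spanish and German translations.
        string url = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es&to=de";

        using HttpClient client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Post, url);
        request.Headers.Add("Ocp-Apim-Subscription-Key", key);
        request.Headers.Add("Ocp-Apim-Subscription-Region", region);
        request.Content = new StringContent("[{ \"Text\": \"Good morning!\" }]", Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.SendAsync(request);
        // The response includes a detectedLanguage object plus one translation per target language.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```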
+## Text translation deployment options
+
+Add Text Translation to your projects and applications using the following resources:
+
+* Access the cloud-based Translator service via the [**REST API**](reference/rest-api-guide.md), available in Azure.
+
+* Use the REST API [translate request](containers/translator-container-supported-parameters.md) with the [**Text translation Docker container**](containers/translator-how-to-install-container.md).
+
+ > [!IMPORTANT]
+ >
+ > * The Translator container is in gated preview. To use it, you must complete and submit the [**Azure Cognitive Services Application for Gated Services**](https://aka.ms/csgate-translator) online request form and have it approved to acquire access to the container.
+ >
+ > * The [**Translator container image**](https://hub.docker.com/_/microsoft-azure-cognitive-services-translator-text-translation) supports limited features compared to cloud offerings.
+ >
+
+## Get started with Text Translation
+
+Ready to begin?
+
+* [**Create a Translator resource**](translator-how-to-signup.md "Go to the Azure portal.") in the Azure portal.
+
+* [**Get your access keys and API endpoint**](translator-how-to-signup.md#authentication-keys-and-endpoint-url). An endpoint URL and read-only key are required for authentication.
+
+* Explore our [**Quickstart**](quickstart-translator.md "Learn to use Translator via REST and a preferred programming language.") and view use cases and code samples for the following programming languages:
+ * [**C#/.NET**](quickstart-translator.md?tabs=csharp)
+ * [**Go**](quickstart-translator.md?tabs=go)
+ * [**Java**](quickstart-translator.md?tabs=java)
+ * [**JavaScript/Node.js**](quickstart-translator.md?tabs=nodejs)
+ * [**Python**](quickstart-translator.md?tabs=python)
+
+## Next steps
+
+Dive deeper into the Text Translation REST API:
+
+> [!div class="nextstepaction"]
+> [See the REST API reference](./reference/v3-0-reference.md)
cognitive-services Translator How To Signup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/translator-how-to-signup.md
Last updated 02/16/2021
# Create a Translator resource
-In this article, you'll learn how to create a Translator resource in the Azure portal. [Azure Translator](translator-info-overview.md) is a cloud-based machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Azure resources are instances of services that you create. All API requests to Azure services require an **endpoint** URL and a read-only **subscription key** for authenticating access.
+In this article, you'll learn how to create a Translator resource in the Azure portal. [Azure Translator](translator-overview.md) is a cloud-based machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Azure resources are instances of services that you create. All API requests to Azure services require an **endpoint** URL and a read-only **subscription key** for authenticating access.
## Prerequisites
The Translator service can be accessed through two different resource types:
* **Single-service** resource types enable access to a single service API key and endpoint.
* **Multi-service** resource types enable access to multiple Cognitive Services using a single API key and endpoint. The Cognitive Services resource is currently available for the following:
- * Language ([Translator](../translator/translator-info-overview.md), [Language Understanding (LUIS)](../luis/what-is-luis.md), [Text Analytics](../text-analytics/overview.md))
+ * Language ([Translator](../translator/translator-overview.md), [Language Understanding (LUIS)](../luis/what-is-luis.md), [Text Analytics](../text-analytics/overview.md))
* Vision ([Computer Vision](../computer-vision/overview.md), [Face](../face/overview.md))
* Decision ([Content Moderator](../content-moderator/overview.md))
cognitive-services Translator Info Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/translator-info-overview.md
- Title: Microsoft Translator service-
-description: Integrate Translator into your applications, websites, tools, and other solutions to provide multi-language user experiences.
------ Previously updated : 03/15/2021--
-keywords: translator, text translation, machine translation, translation service
-
-# What is the Translator service?
-
-Translator is a cloud-based machine translation service and is part of the [Azure Cognitive Services](../../index.yml?panel=ai&pivot=products) family of cognitive APIs used to build intelligent apps. Translator is easy to integrate in your applications, websites, tools, and solutions. It allows you to add multi-language user experiences in [90 languages and dialects](./language-support.md) and can be used for text translation with any operating system.
-
-This documentation contains the following article types:
-
-* [**Quickstarts**](quickstart-translator.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](translator-how-to-signup.md) contain instructions for using the service in more specific or customized ways.
-* [**Tutorials**](tutorial-wpf-translation-csharp.md) are longer guides that show you how to use the service as a component in broader business solutions.
-
-## About Microsoft Translator
-
-Translator powers many Microsoft products and services, and is used by thousands of businesses worldwide in their applications and workflows.
-
-Speech translation, powered by Translator, is also available through the [Azure Speech service](../speech-service/index.yml). It combines functionality from the Translator Speech API and the Custom Speech Service into a unified and fully customizable service.
-
-## Language support
-
-Translator provides multi-language support for text translation, transliteration, language detection, and dictionaries. See [language support](language-support.md) for a complete list, or access the list programmatically with the [REST API](./reference/v3-0-languages.md).
-
-## Microsoft Translator Neural Machine Translation
-
-Neural Machine Translation (NMT) is the new standard for high-quality AI-powered machine translations. It replaces the legacy Statistical Machine Translation (SMT) technology that reached a quality plateau in the mid-2010s.
-
-NMT provides better translations than SMT not only from a raw translation quality scoring standpoint but also because they will sound more fluent and human. The key reason for this fluidity is that NMT uses the full context of a sentence to translate words. SMT only took the immediate context of a few words before and after each word.
-
-NMT models are at the core of the API and are not visible to end users. The only noticeable difference is improved translation quality, especially for languages such as Chinese, Japanese, and Arabic.
-
-Learn more about [how NMT works](https://www.microsoft.com/translator/mt.aspx#nnt).
-
-## Improve translations with Custom Translator
-
- [Custom Translator](customization.md), an extension of the Translator service, can be used to customize the neural translation system and improve the translation for your specific terminology and style.
-
-With Custom Translator, you can build translation systems to handle the terminology used in your own business or industry. Your customized translation system can easily integrate with your existing applications, workflows, websites, and devices, through the regular Translator, by using the category parameter.
-
-## Next steps
-
-* [Create a Translator service](./translator-how-to-signup.md) to get your access keys and endpoint.
-* Try our [Quickstart](quickstart-translator.md) to quickly call the Translator service.
-* [API reference](./reference/v3-0-reference.md) provides the technical documentation for the APIs.
-* [Pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/)
cognitive-services Translator Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/translator-overview.md
+
+ Title: What is the Microsoft Azure Cognitive Services Translator?
+
+description: Integrate Translator into your applications, websites, tools, and other solutions to provide multi-language user experiences.
++++++ Last updated : 08/10/2021++
+keywords: translator, text translation, machine translation, translation service, custom translator
+
+# What is Translator?
+
+ Translator is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Translator can be used with any operating system and powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you'll learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
+
+Translator documentation contains the following article types:
+
+* [**Quickstarts**](quickstart-translator.md). Getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](translator-how-to-signup.md). Instructions for accessing and using the service in more specific or customized ways.
+* [**Reference articles**](reference/v3-0-reference.md). REST API documentation and programming language-based content.
+
+## Translator features and development options
+
+The following features are supported by the Translator service. Use the links in this table to learn more about each feature and browse the API references.
+
+| Feature | Description | Development options |
+|-|-|--|
+| [**Text Translation**](text-translation-overview.md) | Execute text translation between supported source and target languages in real time. | <ul><li>[**REST API**](reference/rest-api-guide.md) </li><li>[Text translation Docker container](/containers/translator-how-to-install-container), currently in gated preview.</li></ul> |
+| [**Document Translation**](document-translation/overview.md) | Translate batch and complex files while preserving the structure and format of the original documents. | <ul><li>[**REST API**](document-translation/reference/rest-api-guide.md)</li><li>[**Client-library SDK**](document-translation/client-sdks.md)</li></ul> |
+| [**Custom Translator**](custom-translator/overview.md) | Build customized models to translate domain- and industry-specific language, terminology, and style. | <ul><li>[**Custom Translator portal**](https://portal.customtranslator.azure.ai/)</li></ul> |
+
+## Try the Translator service for free
+
+First, you'll need a Microsoft account; if you don't have one, you can sign up for free at the [**Microsoft account portal**](https://account.microsoft.com/account). Select **Create a Microsoft account** and follow the steps to create and verify your new account.
+
+Next, you'll need an Azure account. Navigate to the [**Azure sign-up page**](https://azure.microsoft.com/free/ai/), select the **Start free** button, and create a new Azure account using your Microsoft account credentials.
+
+Now, you're ready to get started! [**Create a Translator service**](translator-how-to-signup.md "Go to the Azure portal."), [**get your access keys and API endpoint**](translator-how-to-signup.md#authentication-keys-and-endpoint-url "An endpoint URL and read-only key are required for authentication."), and try our [**quickstart**](quickstart-translator.md "Learn to use Translator via REST.").
+
+## Next steps
+
+* Learn more about the following features:
+ * [**Text Translation**](text-translation-overview.md)
+ * [**Document Translation**](document-translation/overview.md)
+ * [**Custom Translator**](custom-translator/overview.md)
+* Review [**Translator pricing**](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/).
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/container-image-tags.md
The [Speech language detection][sp-lid] container image can be found on the `mcr
This container image has the following tags available. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list).
+# [Latest version](#tab/current)
+
+* Release notes for version `1.3.0`:
+ * Support for standalone language IDs with `SingleLanguage` and continuous mode.
| Image Tags | Notes |
||:|
| `latest` | |
+| `1.3.0-amd64-preview` | |
+
+# [Previous versions](#tab/previous)
+
+| Image Tags | Notes |
+||:|
+| `1.2.0-amd64-preview` | |
| `1.1.0-amd64-preview` | |

## Key Phrase Extraction

The Key Phrase Extraction container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `keyphrase`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase`.
cognitive-services Text Analytics How To Call Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api.md
The `/analyze` endpoint lets you choose which of the supported Text Analytics fe
|`documents` | Includes the `id` and `text` fields below | Required | Contains information for each document being sent, and the raw text of the document. |
|`id` | String | Required | The IDs you provide are used to structure the output. |
|`text` | Unstructured raw text, up to 125,000 characters. | Required | Must be in the English language, which is the only language currently supported. |
-|`tasks` | Includes the following Text Analytics features: `entityRecognitionTasks`,`entityLinkingTasks`,`keyPhraseExtractionTasks`,`entityRecognitionPiiTasks`, `extractiveSummarizationTasks` or `sentimentAnalysisTasks`. | Required | One or more of the Text Analytics features you want to use. Note that `entityRecognitionPiiTasks` has an optional `domain` parameter that can be set to `pii` or `phi` and the `pii-categories` for detection of selected entity types. If the `domain` parameter is unspecified, the system defaults to `pii`. Similarly `sentimentAnalysisTasks` has the `opinionMining` boolean parameter to include Opinion Mining results in the output for Sentiment Analysis. |
+|`tasks` | Includes the following Text Analytics features: `entityRecognitionTasks`,`entityLinkingTasks`,`keyPhraseExtractionTasks`,`entityRecognitionPiiTasks`, `extractiveSummarizationTasks` or `sentimentAnalysisTasks`. | Required | One or more of the Text Analytics features you want to use. Note that `entityRecognitionPiiTasks` has an optional `domain` parameter that can be set to `pii` or `phi` and the `piiCategories` for detection of selected entity types. If the `domain` parameter is unspecified, the system defaults to `pii`. Similarly `sentimentAnalysisTasks` has the `opinionMining` boolean parameter to include Opinion Mining results in the output for Sentiment Analysis. |
|`parameters` | Includes the `model-version` and `stringIndexType` fields below | Required | This field is included within the above feature tasks that you choose. They contain information about the model version that you want to use and the index type. |
|`model-version` | String | Required | Specify which version of the model being called that you want to use. |
|`stringIndexType` | String | Required | Specify the text decoder that matches your programming environment. Types supported are `textElement_v8` (default), `unicodeCodePoint`, `utf16CodeUnit`. Please see the [Text offsets article](../concepts/text-offsets.md#offsets-in-api-version-31) for more information. |
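To make the request shape concrete, here's a hedged C# sketch that submits one document for PII recognition and key phrase extraction; the endpoint and key are placeholders, and the body follows the fields described in the table above (assuming the v3.1 `/analyze` endpoint).

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class AnalyzeExample
{
    static async Task Main()
    {
        // Placeholder endpoint and key -- replace with your Text Analytics resource values.
        string endpoint = "https://<YOUR-RESOURCE>.cognitiveservices.azure.com";
        string key = "<YOUR-SUBSCRIPTION-KEY>";

        // One document, two tasks; stringIndexType uses the default text decoder.
        string body = @"{
            ""displayName"": ""Sample analyze job"",
            ""analysisInput"": { ""documents"": [ { ""id"": ""1"", ""text"": ""My phone number is 555-555-5555."" } ] },
            ""tasks"": {
                ""entityRecognitionPiiTasks"": [ { ""parameters"": { ""model-version"": ""latest"", ""domain"": ""pii"", ""stringIndexType"": ""textElement_v8"" } } ],
                ""keyPhraseExtractionTasks"": [ { ""parameters"": { ""model-version"": ""latest"" } } ]
            }
        }";

        using HttpClient client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Post, endpoint + "/text/analytics/v3.1/analyze");
        request.Headers.Add("Ocp-Apim-Subscription-Key", key);
        request.Content = new StringContent(body, Encoding.UTF8, "application/json");

        // A successful submission returns 202 Accepted; poll the operation-location header for results.
        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine($"Status: {(int)response.StatusCode}");
        if (response.Headers.TryGetValues("operation-location", out var location))
        {
            Console.WriteLine($"Poll for results at: {string.Join("", location)}");
        }
    }
}
```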
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/language-support.md
> [!NOTE]
> Languages are added as new [model versions](concepts/model-versioning.md) are released for specific Text Analytics features. The current model version for Sentiment Analysis is `2020-04-01`.
-| Language | Language code | v3 support | Starting v3 model version: | Notes |
+| Language | Language code | v3.x support | Starting v3 model version: | Notes |
|:-|:-:|:-:|:--:|-:|
| Chinese-Simplified | `zh-hans` | ✓ | 2019-10-01 | `zh` also accepted |
| Chinese-Traditional | `zh-hant` | ✓ | 2019-10-01 | |
> * Only "Person", "Location" and "Organization" entities are returned for languages marked with *.
> * Languages are added as new [model versions](concepts/model-versioning.md) are released for specific Text Analytics features. The current model version for NER is `2021-06-01`.
-| Language | Language code | v3 support | Starting with v3 model version: | Notes |
+| Language | Language code | v3.x support | Starting with v3 model version: | Notes |
|:--|:-:|:-:|:-:|::|
| Arabic | `ar` | ✓* | 2019-10-01 | |
| Chinese-Simplified | `zh-hans` | ✓ | 2021-01-15 | `zh` also accepted |
> [!NOTE]
> Languages are added as new [model versions](concepts/model-versioning.md) are released for specific Text Analytics features. The current model version for Key Phrase Extraction is `2021-06-01`.
-| Language | Language code | v3 support | Starting with v3 model version: | Notes |
+| Language | Language code | v3.x support | Starting with v3 model version: | Notes |
|:-|:-:|:-:|:--:|::|
| Afrikaans | `af` | ✓ | 2020-07-01 | |
| Bulgarian | `bg` | ✓ | 2020-07-01 | |
> [!NOTE]
> Languages are added as new [model versions](concepts/model-versioning.md) are released for specific Text Analytics features. The current model version for Entity Linking is `2020-02-01`.
-| Language | Language code | v3 support | Starting with v3 model version: | Notes |
+| Language | Language code | v3.x support | Starting with v3 model version: | Notes |
|:|:-:|:-:|:--:|:--:|
| English | `en` | ✓ | 2019-10-01 | |
| Spanish | `es` | ✓ | 2019-10-01 | |
> * Container: `2021-03-01`
-| Language | Language code | v3 support | Starting with v3 model version: | Notes |
+| Language | Language code | v3.x support | Starting with v3 model version: | Notes |
|:|:-:|:-:|:--:|:--:|
| English | `en` | ✓ | API endpoint: 2019-10-01 <br> Container: 2020-04-16 | |
> [!NOTE]
> Languages are added as new [model versions](concepts/model-versioning.md) are released for specific Text Analytics features. The current model version for PII is `2021-01-15`.
-| Language | Language code | v3 support | Starting with v3 model version: | Notes |
+| Language | Language code | v3.x support | Starting with v3 model version: | Notes |
|:--|:-:|:-:|:-:|::|
| Chinese-Simplified | `zh-hans` | ✓ | 2021-01-15 | `zh` also accepted |
| English | `en` | ✓ | 2020-07-01 | |
The Text Analytics API can detect a wide range of languages, variants, dialects,
If you have content expressed in a less frequently used language, you can try Language Detection to see if it returns a code. The response for languages that cannot be detected is `unknown`.
-| Language | Language Code | v3 support | Starting with v3 model version: |
+| Language | Language Code | v3.x support | Starting with v3 model version: |
|:-|:-:|:-:|:-:|
|Afrikaans|`af`|✓| |
|Albanian|`sq`|✓| |
If you have content expressed in a less frequently used language, you can try La
#### [Text summarization](#tab/summarization)
-| Language | Language code | v3 support | Starting with v3 model version: | Notes |
+| Language | Language code | v3.x support | Starting with v3 model version: | Notes |
|:|:-:|:-:|:--:|:--:|
| Chinese-Simplified | `zh-hans` | ✓ | 2021-08-01 | `zh` also accepted |
| English | `en` | ✓ | 2021-08-01 | |
cognitive-services Named Entity Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/named-entity-types.md
Previously updated : 08/02/2021 Last updated : 08/11/2021
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/direct-routing-infrastructure.md
Placing these three FQDNs in order is required to:
The FQDNs – sip.pstnhub.microsoft.com, sip2.pstnhub.microsoft.com, and sip3.pstnhub.microsoft.com – will be resolved to one of the following IP addresses:
-- `52.114.148.0`
-- `52.114.132.46`
-- `52.114.75.24`
-- `52.114.76.76`
-- `52.114.7.24`
-- `52.114.14.70`
-- `52.114.16.74`
-- `52.114.20.29`
-
-Open firewall ports for these IP addresses to allow incoming and outgoing traffic to and from the addresses for signaling. If your firewall supports DNS names, the FQDN `sip-all.pstnhub.microsoft.com` resolves to all these IP addresses.
+- `52.112.0.0/14`
+- `52.120.0.0/14`
+
+Open firewall ports for all these IP address ranges to allow incoming and outgoing traffic to and from the addresses for signaling. If your firewall supports DNS names, the FQDN `sip-all.pstnhub.microsoft.com` resolves to all these IP addresses.
## SIP Signaling: Ports
connectors Connectors Create Api Office365 Outlook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-office365-outlook.md
Title: Integrate with Office 365 Outlook
+ Title: Connect to Office 365 Outlook
description: Automate tasks and workflows that manage email, contacts, and calendars in Office 365 Outlook by using Azure Logic Apps ms.suite: integration-+ Previously updated : 11/13/2020 Last updated : 08/11/2021 tags: connectors
-# Manage email, contacts, and calendars in Office 365 Outlook by using Azure Logic Apps
+# Connect to Office 365 Outlook using Azure Logic Apps
With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Office 365 Outlook connector](/connectors/office365connector/), you can create automated tasks and workflows that manage your work or school account by building logic apps. For example, you can automate these tasks:
You can use any trigger to start your workflow, for example, when a new email ar
## Prerequisites
-* An Outlook account where you sign in with a [work or school account](https://www.office.com/). If you have an @outlook.com or @hotmail.com account, use the [Outlook.com connector](../connectors/connectors-create-api-outlook.md) instead. To connect to Outlook with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts).
+* Your Microsoft Office 365 account for Outlook where you sign in with a [work or school account](https://support.microsoft.com/office/what-account-to-use-with-office-and-you-need-one-914e6610-2763-47ac-ab36-602a81068235#bkmk_msavsworkschool).
+
+ You need these credentials so that you can authorize your workflow to access your Outlook account.
+
+ > [!NOTE]
+ > If you have an @outlook.com or @hotmail.com account, use the [Outlook.com connector](../connectors/connectors-create-api-outlook.md).
+ > To connect to Outlook with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts).
+ >
+ > If you're using [Microsoft Azure operated by 21Vianet](https://portal.azure.cn), Azure Active Directory (Azure AD) authentication
+ > works only with an account for Microsoft Office 365 operated by 21Vianet (.cn), not .com accounts.
* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* The logic app where you want to access your Outlook account. To start your workflow with an Office 365 Outlook trigger, you need to have a [blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md). To add an Office 365 Outlook action to your workflow, your logic app needs to already have a trigger.
+* The logic app where you want to access your Outlook account. To start your workflow with an Office 365 Outlook trigger, you need to have a [blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md). To add an Office 365 Outlook action to your workflow, your logic app workflow needs to already have a trigger.
+
+## Connector reference
+
+For technical details about this connector, such as triggers, actions, and limits, as described by the connector's Swagger file, see the [connector's reference page](/connectors/office365/).
## Add a trigger

A [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an event that starts the workflow in your logic app. This example logic app uses a "polling" trigger that checks for any updated calendar event in your email account, based on the specified interval and frequency.
-1. In the [Azure portal](https://portal.azure.com), open your blank logic app in the Logic App Designer.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app in the visual designer.
1. In the search box, enter `office 365 outlook` as your filter. This example selects **When an upcoming event is starting soon**.
-
+ ![Select trigger to start your logic app](./media/connectors-create-api-office365-outlook/office365-trigger.png)

1. If you don't have an active connection to your Outlook account, you're prompted to sign in and create that connection. To connect to Outlook with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts). Otherwise, provide the information for the trigger's properties.
A [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an event
1. In the trigger, set the **Frequency** and **Interval** values. To add other available trigger properties, such as **Time zone**, select those properties from the **Add new parameter** list.
- For example, if you want the trigger to check the calendar every 15 minutes, set **Frequency** to **Minute**, and set **Interval** to `15`.
+ For example, if you want the trigger to check the calendar every 15 minutes, set **Frequency** to **Minute**, and set **Interval** to `15`.
![Set frequency and interval for the trigger](./media/connectors-create-api-office365-outlook/calendar-settings.png)
Now add an action that runs after the trigger fires. For example, you can add th
## Add an action
-An [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an operation that's run by the workflow in your logic app. This example logic app creates a new contact in Office 365 Outlook. You can use the output from another trigger or action to create the contact. For example, suppose your logic app uses the Dynamics 365 trigger, **When a record is created**. You can add the Office 365 Outlook **Create contact** action and use the outputs from the SalesForce trigger to create the new contact.
+An [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an operation that's run by the workflow in your logic app. This example logic app creates a new contact in Office 365 Outlook. You can use the output from another trigger or action to create the contact. For example, suppose your logic app uses the Salesforce trigger, **When a record is created**. You can add the Office 365 Outlook **Create contact** action and use the outputs from the trigger to create the new contact.
-1. In the [Azure portal](https://portal.azure.com), open your logic app in the Logic App Designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app in the visual designer.
-1. To add an action as the last step in your workflow, select **New step**.
+1. To add an action as the last step in your workflow, select **New step**.
To add an action between steps, move your pointer over the arrow between those steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
If you try connecting to Outlook by using a different account than the one curre
1. After the parameter appears on the action, enter the service account's email address.
-## Connector reference
-
-For technical details about this connector, such as triggers, actions, and limits, as described by the connector's Swagger file, see the [connector's reference page](/connectors/office365/).
## Next steps

* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
connectors Connectors Create Api Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-sharepoint.md
Title: Connect to SharePoint from Azure Logic Apps
+ Title: Connect to SharePoint
description: Monitor and manage resources in SharePoint Online or SharePoint Server on premises by using Azure Logic Apps ms.suite: integration-+ Previously updated : 04/27/2021 Last updated : 08/11/2021 tags: connectors
-# Connect to SharePoint resources with Azure Logic Apps
+# Connect to SharePoint resources using Azure Logic Apps
To automate tasks that monitor and manage resources, such as files, folders, lists, and items, in SharePoint Online or in on-premises SharePoint Server, you can create automated integration workflows by using Azure Logic Apps and the SharePoint connector.
In your logic app workflow, you can use a trigger that monitors events in ShareP
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* Your Microsoft Office 365 account credentials that you use with SharePoint where you sign in with a [work or school account](https://support.microsoft.com/office/what-account-to-use-with-office-and-you-need-one-914e6610-2763-47ac-ab36-602a81068235#bkmk_msavsworkschool).
-* Your SharePoint site address and user credentials. You need these credentials so that you can authorize your workflow to access your your SharePoint account.
+ You need these credentials so that you can authorize your workflow to access your SharePoint account.
+
+ > [!NOTE]
+ > If you're using [Microsoft Azure operated by 21Vianet](https://portal.azure.cn), Azure Active Directory (Azure AD) authentication
+ > works only with an account for Microsoft Office 365 operated by 21Vianet (.cn), not .com accounts.
+
+* Your SharePoint site address.
+
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* For connections to an on-premises SharePoint server, you need to [install and set up the on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a local computer and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md).
In your logic app workflow, you can use a trigger that monitors events in ShareP
* To start the workflow with a SharePoint trigger, you need a blank logic app workflow.
* To add a SharePoint action, your workflow needs to already have a trigger.
+## Connector reference
+
+For more technical details about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, review the [connector's reference page](/connectors/sharepoint/).
## Connect to SharePoint

[!INCLUDE [Create connection general intro](../../includes/connectors-create-connection-general-intro.md)]

## Add a trigger
-1. From the Azure portal, Visual Studio Code, or Visual Studio, open your logic app workflow in the Logic App Designer, if not open already.
+1. From the Azure portal, Visual Studio Code, or Visual Studio, open your logic app workflow in the visual designer, if not open already.
1. On the designer, in the search box, enter `sharepoint` as the search term. Select the **SharePoint** connector.
In your logic app workflow, you can use a trigger that monitors events in ShareP
## Add an action
-1. From the Azure portal, Visual Studio Code, or Visual Studio, open your logic app workflow in the Logic App Designer, if not open already.
+1. From the Azure portal, Visual Studio Code, or Visual Studio, open your logic app workflow in the visual designer, if not open already.
1. Choose one of the following options:
In your logic app workflow, you can use a trigger that monitors events in ShareP
1. Provide the information to set up the action and continue building your workflow.
-## Connector reference
-
-For more technical details about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, review the [connector's reference page](/connectors/sharepoint/).
## Next steps

Learn about other [Logic Apps connectors](../connectors/apis-list.md)
container-registry Container Registry Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-azure-policy.md
Title: Compliance using Azure Policy
-description: Assign built-in policies in Azure Policy to audit compliance of your Azure container registries
+description: Assign built-in policy definitions in Azure Policy to audit compliance of your Azure container registries
Previously updated : 03/01/2021 Last updated : 08/10/2021

# Audit compliance of Azure container registries using Azure Policy
-[Azure Policy](../governance/policy/overview.md) is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.
+[Azure Policy](../governance/policy/overview.md) is a service in Azure that you use to create, assign, and manage policy definitions. These policy definitions enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.
-This article introduces built-in policies for Azure Container Registry. Use these policies to audit new and existing registries for compliance.
+This article introduces built-in policy definitions for Azure Container Registry. Use these definitions to audit new and existing registries for compliance.
-There are no charges for using Azure Policy.
+There is no charge for using Azure Policy.
## Built-in policy definitions
The following built-in policy definitions are specific to Azure Container Regist
[!INCLUDE [azure-policy-reference-rp-containerreg](../../includes/policy/reference/byrp/microsoft.containerregistry.md)]
-## Assign policies
+## Create policy assignments
-* Assign policies using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
+* Create policy assignments using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
* Scope a policy assignment to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md). Container registry policy assignments apply to existing and new container registries within the scope.
* Enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.

> [!NOTE]
-> After you assign or update a policy, it takes some time for the assignment to be applied to resources in the defined scope. See information about [policy evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
+> After you create or update a policy assignment, it takes some time for the assignment to evaluate resources in the defined scope. See information about [policy evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
## Review policy compliance
cost-management-billing Manage Tenants https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/microsoft-customer-agreement/manage-tenants.md
description: The article helps you understand and manage tenants associated with
tags: billing -+ Last updated 05/05/2021
cost-management-billing Microsoft Customer Agreement Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/microsoft-customer-agreement/microsoft-customer-agreement-get-started.md
description: This article helps you get started as you begin to manage Azure bil
tags: billing -+ Last updated 06/14/2021
cost-management-billing Troubleshoot Subscription Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/microsoft-customer-agreement/troubleshoot-subscription-access.md
description: This article helps you troubleshoot subscription access after you s
tags: billing -+ Last updated 04/07/2021
data-factory Compute Optimized Data Flow Retire https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compute-optimized-data-flow-retire.md
+
+ Title: Compute optimized retirement
+description: Data flow compute optimized option is being retired
++++ Last updated : 06/29/2021++
+# Retirement of data flow compute optimized option
++
+Azure Data Factory and Azure Synapse Analytics data flows provide a low-code mechanism to transform data in ETL jobs at scale using a graphical design paradigm. Data flows execute on the Azure Data Factory and Azure Synapse Analytics serverless Integration Runtime facility. The scalable nature of Azure Data Factory and Azure Synapse Analytics Integration Runtimes enabled three different compute options for the Azure Databricks Spark environment that is utilized to execute data flows at scale: Memory Optimized, General Purpose, and Compute Optimized. Memory Optimized and General Purpose are the recommended classes of data flow compute to use with your Integration Runtime for production workloads. Because Compute Optimized will often not suffice for common use cases with data flows, we recommend using General Purpose or Memory Optimized data flows in production workloads.
+
+## Migration steps
+
+From now through 31 August 2024, your Compute Optimized data flows will continue to work in your existing pipelines. To avoid service disruption, remove your existing Compute Optimized data flows before 31 August 2024, and follow these steps to create a new Azure Integration Runtime and data flow activity:
+
+1. Create a new Azure Integration Runtime with "General Purpose" or "Memory Optimized" as the compute type.
+2. Set your data flow activity using either of those compute types.
+
+ ![Compute types](media/data-flow/compute-types.png)
+
+## Comparison between different compute options
+
+| Compute Option | Performance |
+| :-- | :-- |
+| General Purpose Data Flows (Basic) | Good for general use cases in production workloads |
+| Memory Optimized Data Flows (Standard) | Best performing runtime for data flows when working with large datasets and many calculations |
+| Compute Optimized Data Flows (Deprecated) | Not recommended for production workloads |
+
+* [Visit the Azure Data Factory pricing page for the latest pricing for General Purpose and Memory Optimized data flows](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/)
+* [Find more detailed information in the data flows FAQ](https://aka.ms/dataflowsqa)
+* [Post questions and find answers about data flows on Microsoft Q&A](https://aka.ms/datafactoryqa)
data-factory Connector Odata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-odata.md
To copy data from Project Online, you can use the OData connector and an access
- **Authentication type**: Select **Anonymous**. - **Auth headers**: - **Property name**: Choose **Authorization**.
- - **Value**: Enter the **access token** copied from step 1.
+ - **Value**: Enter `Bearer <access token from step 1>`.
- Test the linked service. ![Create OData linked service](./media/connector-odata/odata-project-online-linked-service.png)
data-factory Control Flow Lookup Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-lookup-activity.md
Previously updated : 02/25/2021 Last updated : 08/10/2021 # Lookup activity in Azure Data Factory and Azure Synapse Analytics
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-sink.md
- Previously updated : 07/20/2021+ Last updated : 07/27/2021 # Sink transformation in mapping data flow
To use an inline dataset, select the format you want in the **Sink type** select
![Screenshot that shows Inline selected.](media/data-flow/inline-selector.png "Screenshot that shows Inline selected.")
+## Workspace DB (Synapse workspaces only)
+
+When using data flows in Azure Synapse workspaces, you have an additional option to sink your data directly into a database type that is inside your Synapse workspace, which eliminates the need to add linked services or datasets for those databases.
+
+> [!NOTE]
+> Azure Synapse Workspace DB is currently in public preview
+
+![Screenshot that shows Workspace DB selected.](media/data-flow/syms-sink.png "Screenshot that shows Workspace DB selected.")
+ ## <a name="supported-sinks"></a> Supported sink types Mapping data flow follows an extract, load, and transform (ELT) approach and works with *staging* datasets that are all in Azure. Currently, the following datasets can be used in a source transformation.
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-source.md
- Previously updated : 03/10/2021+ Last updated : 07/27/2021 # Source transformation in mapping data flow
To use an inline dataset, select the format you want in the **Source type** sele
![Screenshot that shows Inline selected.](media/data-flow/inline-selector.png "Screenshot that shows Inline selected.")
+## Workspace DB (Synapse workspaces only)
+
+In Azure Synapse workspaces, data flow source transformations include an additional option called `Workspace DB`. This option lets you directly pick a workspace database of any available type as your source data, without requiring additional linked services or datasets.
+
+> [!NOTE]
+> Azure Synapse Workspace DB is currently in public preview
+
+![Screenshot that shows Workspace DB selected.](media/data-flow/syms-source.png "Screenshot that shows Workspace DB selected.")
+ ## <a name="supported-sources"></a> Supported source types Mapping data flow follows an extract, load, and transform (ELT) approach and works with *staging* datasets that are all in Azure. Currently, the following datasets can be used in a source transformation.
data-factory Format Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-json.md
If **Document per line** is selected, mapping data flows read one JSON document
``` json File1.json
-{"json": "record 1 }
+{"json": "record 1"}
File2.json {"time":"2015-04-29T07:12:20.9100000Z","callingimsi":"466920403025604","callingnum1":"678948008","callingnum2":"567834760","switch1":"China","switch2":"Germany"}
data-factory Load Azure Data Lake Storage Gen2 From Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-data-lake-storage-gen2-from-gen1.md
Previously updated : 07/05/2021 Last updated : 08/06/2021 # Copy data from Azure Data Lake Storage Gen1 to Gen2 with Azure Data Factory
This article shows you how to use the Data Factory copy data tool to copy data f
1. On the home page, select the **Ingest** tile to launch the copy data tool. ![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png )
-2. On the **Properties** page, specify **CopyFromADLSGen1ToGen2** for the **Task name** field. Select **Next**.
- ![Properties page](./media/load-azure-data-lake-storage-gen2-from-gen1/copy-data-tool-properties-page.png)
-3. On the **Source data store** page, select **+ Create new connection**.
+2. On the **Properties** page, choose **Built-in copy task** under **Task type**, and choose **Run once now** under **Task cadence or task schedule**, then select **Next**.
- ![Source data store page](./media/load-azure-data-lake-storage-gen2-from-gen1/source-data-store-page.png)
+3. On the **Source data store** page, select **+ New connection**.
4. Select **Azure Data Lake Storage Gen1** from the connector gallery, and select **Continue**. ![Source data store Azure Data Lake Storage Gen1 page](./media/load-azure-data-lake-storage-gen2-from-gen1/source-data-store-page-adls-gen1.png)
-5. On the **Specify Azure Data Lake Storage Gen1 connection** page, follow these steps:
-
- a. Select your Data Lake Storage Gen1 for the account name, and specify or validate the **Tenant**.
-
- b. Select **Test connection** to validate the settings. Then select **Finish**.
+5. On the **New connection (Azure Data Lake Storage Gen1)** page, follow these steps:
+ 1. Select your Data Lake Storage Gen1 for the account name, and specify or validate the **Tenant**.
+ 1. Select **Test connection** to validate the settings. Then select **Create**.
- c. You see that a new connection was created. Select **Next**.
-
> [!IMPORTANT] > In this walk-through, you use a managed identity for Azure resources to authenticate your Azure Data Lake Storage Gen1. To grant the managed identity the proper permissions in Azure Data Lake Storage Gen1, follow [these instructions](connector-azure-data-lake-store.md#managed-identity). ![Specify Azure Data Lake Storage Gen1 account](./media/load-azure-data-lake-storage-gen2-from-gen1/specify-adls-gen1-account.png)
-6. On the **Choose the input file or folder** page, browse to the folder and file that you want to copy over. Select the folder or file, and select **Choose**.
-
- ![Choose input file or folder](./media/load-azure-data-lake-storage-gen2-from-gen1/choose-input-folder.png)
-
-7. Specify the copy behavior by selecting the **Copy files recursively** and **Binary copy** options. Select **Next**.
-
- ![Screenshot shows the Choose the input file or folder where you can select Copy file recursively and Binary Copy.](./media/load-azure-data-lake-storage-gen2-from-gen1/specify-binary-copy.png)
+6. On the **Source data store** page, complete the following steps.
+ 1. Select the newly created connection in the **Connection** section.
+ 1. Under **File or folder**, browse to the folder and file that you want to copy over. Select the folder or file, and select **OK**.
+ 1. Specify the copy behavior by selecting the **Recursively** and **Binary copy** options. Select **Next**.
-8. On the **Destination data store** page, select **+ Create new connection** > **Azure Data Lake Storage Gen2** > **Continue**.
+ :::image type="content" source="./media/load-azure-data-lake-storage-gen2-from-gen1/source-data-store-page.png" alt-text="Screenshot showing the source data store page.":::
+
+7. On the **Destination data store** page, select **+ New connection** > **Azure Data Lake Storage Gen2** > **Continue**.
- ![Destination data store page](./media/load-azure-data-lake-storage-gen2-from-gen1/destination-data-storage-page.png)
+ ![Destination data store page](./media/load-azure-data-lake-storage-gen2-from-gen1/destination-data-store-page-adls-gen2.png)
-9. On the **Specify Azure Data Lake Storage Gen2 connection** page, follow these steps:
+8. On the **New connection (Azure Data Lake Storage Gen2)** page, follow these steps:
+ 1. Select your Data Lake Storage Gen2 capable account from the **Storage account name** drop-down list.
+ 1. Select **Create** to create the connection.
- a. Select your Data Lake Storage Gen2 capable account from the **Storage account name** drop-down list.
-
- b. Select **Finish** to create the connection. Then select **Next**.
-
![Specify Azure Data Lake Storage Gen2 account](./media/load-azure-data-lake-storage-gen2-from-gen1/specify-adls-gen2-account.png)
-10. On the **Choose the output file or folder** page, enter **copyfromadlsgen1** as the output folder name, and select **Next**. Data Factory creates the corresponding Azure Data Lake Storage Gen2 file system and subfolders during copy if they don't exist.
+9. On the **Destination data store** page, complete the following steps.
+ 1. Select the newly created connection in the **Connection** block.
+ 1. Under **Folder path**, enter **copyfromadlsgen1** as the output folder name, and select **Next**. Data Factory creates the corresponding Azure Data Lake Storage Gen2 file system and subfolders during copy if they don't exist.
- ![Screenshot shows the folder path you enter.](./media/load-azure-data-lake-storage-gen2-from-gen1/specify-adls-gen2-path.png)
+ :::image type="content" source="./media/load-azure-data-lake-storage-gen2-from-gen1/destination-data-store-page.png" alt-text="Screenshot showing the destination data store page.":::
-11. On the **Settings** page, select **Next** to use the default settings.
+10. On the **Settings** page, specify **CopyFromADLSGen1ToGen2** for the **Task name** field, then select **Next** to use the default settings.
-12. On the **Summary** page, review the settings, and select **Next**.
+
+11. On the **Summary** page, review the settings, and select **Next**.
![Summary page](./media/load-azure-data-lake-storage-gen2-from-gen1/copy-summary.png)
-13. On the **Deployment page**, select **Monitor** to monitor the pipeline.
+
+12. On the **Deployment page**, select **Monitor** to monitor the pipeline.
![Deployment page](./media/load-azure-data-lake-storage-gen2-from-gen1/deployment-page.png)
-14. Notice that the **Monitor** tab on the left is automatically selected. The **Actions** column includes links to view activity run details and to rerun the pipeline.
+
+13. Notice that the **Monitor** tab on the left is automatically selected. The **Pipeline name** column includes links to view activity run details and to rerun the pipeline.
![Monitor pipeline runs](./media/load-azure-data-lake-storage-gen2-from-gen1/monitor-pipeline-runs.png)
-15. To view activity runs that are associated with the pipeline run, select the **View Activity Runs** link in the **Actions** column. There's only one activity (copy activity) in the pipeline, so you see only one entry. To switch back to the pipeline runs view, select the **Pipelines** link at the top. Select **Refresh** to refresh the list.
+14. To view activity runs that are associated with the pipeline run, select the link in the **Pipeline name** column. There's only one activity (copy activity) in the pipeline, so you see only one entry. To switch back to the pipeline runs view, select the **All pipeline runs** link in the breadcrumb menu at the top. Select **Refresh** to refresh the list.
![Monitor activity runs](./media/load-azure-data-lake-storage-gen2-from-gen1/monitor-activity-runs.png)
-16. To monitor the execution details for each copy activity, select the **Details** link (eyeglasses image) under **Actions** in the activity monitoring view. You can monitor details like the volume of data copied from the source to the sink, data throughput, execution steps with corresponding duration, and used configurations.
+15. To monitor the execution details for each copy activity, select the **Details** link (eyeglasses image) under the **Activity name** column in the activity monitoring view. You can monitor details like the volume of data copied from the source to the sink, data throughput, execution steps with corresponding duration, and used configurations.
- ![Monitor activity run details](./media/load-azure-data-lake-storage-gen2-from-gen1/monitor-activity-run-details.png)
+ :::image type="content" source="./media/load-azure-data-lake-storage-gen2-from-gen1/monitor-activity-run-details.png" alt-text="Screenshot showing the activity run details.":::
-17. Verify that the data is copied into your Azure Data Lake Storage Gen2 account.
+16. Verify that the data is copied into your Azure Data Lake Storage Gen2 account.
## Best practices
event-grid Authenticate With Access Keys Shared Access Signatures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/authenticate-with-access-keys-shared-access-signatures.md
+
+ Title: Authenticate Azure Event Grid publishing clients using access keys or shared access signatures
+description: This article describes how to authenticate Azure Event Grid publishing clients using access keys and shared access signatures.
+ Last updated : 08/10/2021++
+# Authenticate Azure Event Grid publishing clients using access keys or shared access signatures
+This article provides information on authenticating clients that publish events to Azure Event Grid topics, domains, and partner namespaces by using an **access key** or a **Shared Access Signature (SAS)** token.
+
+> [!IMPORTANT]
+> Authenticating and authorizing users or applications using Azure AD identities provides superior security and ease of use over key-based and shared access signatures (SAS) authentication. With Azure AD, there is no need to store secrets used for authentication in your code and risk potential security vulnerabilities. We strongly recommend you use Azure AD with your Azure Event Grid event publishing applications. For more information, see [Authenticate publishing clients using Azure Active Directory](authenticate-with-active-directory.md).
+
+## Authenticate using access key
+Access key authentication is the simplest form of authentication. You can pass the access key as an HTTP header or a URL query parameter.
+
+### Access key in an HTTP header
+Pass the access key as a value for the HTTP header: `aeg-sas-key`.
+
+```
+aeg-sas-key: XXXXXXXXXXXXXXXXXX0GXXX/nDT4hgdEj9DpBeRr38arnnm5OFg==
+```
+
+### Access key as a query parameter
+You can also specify `aeg-sas-key` as a query parameter.
+
+```
+https://<yourtopic>.<region>.eventgrid.azure.net/api/events?aeg-sas-key=XXXXXXXX53249XX8XXXXX0GXXX/nDT4hgdEj9DpBeRr38arnnm5OFg==
+```
+
+For instructions on how to get access keys for a topic or domain, see [Get access keys](get-access-keys.md).
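+
+For example, here's a minimal Python sketch (with placeholder endpoint and key values) that publishes an event with the `aeg-sas-key` header:
+
+```python
+from datetime import datetime, timezone
+
+import requests
+
+# Placeholder values; substitute your own topic endpoint and access key.
+topic_endpoint = "https://<yourtopic>.<region>.eventgrid.azure.net/api/events"
+access_key = "<your-access-key>"
+
+event = [{
+    "id": "10001",
+    "eventType": "recordInserted",
+    "subject": "myapp/vehicles/cars",
+    "eventTime": datetime.now(timezone.utc).isoformat(),
+    "data": {"model": "SUV", "color": "green"},
+    "dataVersion": "1.0",
+}]
+
+# Pass the access key as the aeg-sas-key header value.
+response = requests.post(topic_endpoint, headers={"aeg-sas-key": access_key}, json=event)
+response.raise_for_status()
+```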
+
+## Authenticate using SAS
+SAS tokens for an Event Grid resource include the resource, expiration time, and a signature. The format of the SAS token is: `r={resource}&e={expiration}&s={signature}`.
+
+The resource is the path for the event grid topic to which you're sending events. For example, a valid resource path is: `https://<yourtopic>.<region>.eventgrid.azure.net/api/events`. To see all the supported API versions, see [Microsoft.EventGrid resource types](/azure/templates/microsoft.eventgrid/allversions).
+
+First, programmatically generate a SAS token, and then use the `aeg-sas-token` header or the `Authorization: SharedAccessSignature` header to authenticate with Event Grid.
+
+### Generate SAS token programmatically
+The following examples create a SAS token for use with Event Grid, in C# and in Python:
+
+```cs
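+// Assumes the following namespaces: System (DateTime, Convert), System.Globalization (CultureInfo),
+// System.Security.Cryptography (HMACSHA256), System.Text (Encoding), and System.Web (HttpUtility).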
+static string BuildSharedAccessSignature(string resource, DateTime expirationUtc, string key)
+{
+ const char Resource = 'r';
+ const char Expiration = 'e';
+ const char Signature = 's';
+
+ string encodedResource = HttpUtility.UrlEncode(resource);
+ var culture = CultureInfo.CreateSpecificCulture("en-US");
+ var encodedExpirationUtc = HttpUtility.UrlEncode(expirationUtc.ToString(culture));
+
+ string unsignedSas = $"{Resource}={encodedResource}&{Expiration}={encodedExpirationUtc}";
+ using (var hmac = new HMACSHA256(Convert.FromBase64String(key)))
+ {
+ string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(unsignedSas)));
+ string encodedSignature = HttpUtility.UrlEncode(signature);
+ string signedSas = $"{unsignedSas}&{Signature}={encodedSignature}";
+
+ return signedSas;
+ }
+}
+```
+
+```python
+from base64 import b64decode, b64encode
+from hashlib import sha256
+from hmac import HMAC
+import datetime
+import urllib.parse
+
+def generate_sas_token(uri, key, expiry=3600):
+ ttl = datetime.datetime.utcnow() + datetime.timedelta(seconds=expiry)
+ encoded_resource = urllib.parse.quote_plus(uri)
+ encoded_expiration_utc = urllib.parse.quote_plus(ttl.isoformat())
+
+ unsigned_sas = f'r={encoded_resource}&e={encoded_expiration_utc}'
+ signature = b64encode(HMAC(b64decode(key), unsigned_sas.encode('utf-8'), sha256).digest())
+ encoded_signature = urllib.parse.quote_plus(signature)
+
+ token = f'r={encoded_resource}&e={encoded_expiration_utc}&s={encoded_signature}'
+
+ return token
+```
+
+### Using aeg-sas-token header
+Here's an example of passing the SAS token as a value for the `aeg-sas-token` header.
+
+```http
+aeg-sas-token: r=https%3a%2f%2fmytopic.eventgrid.azure.net%2fapi%2fevents&e=6%2f15%2f2017+6%3a20%3a15+PM&s=XXXXXXXXXXXXX%2fBPjdDLOrc6THPy3tDcGHw1zP4OajQ%3d
+```
+
+### Using Authorization header
+Here's an example of passing the SAS token as a value for the `Authorization` header.
+
+```http
+Authorization: SharedAccessSignature r=https%3a%2f%2fmytopic.eventgrid.azure.net%2fapi%2fevents&e=6%2f15%2f2017+6%3a20%3a15+PM&s=XXXXXXXXXXXXX%2fBPjdDLOrc6THPy3tDcGHw1zP4OajQ%3d
+```
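+
+As a rough end-to-end sketch, assuming the Python `generate_sas_token` helper shown earlier and the `requests` library, you can generate a token and pass it in either header:
+
+```python
+import requests
+
+# Placeholder values; substitute your own topic endpoint and access key.
+resource = "https://<yourtopic>.<region>.eventgrid.azure.net/api/events"
+key = "<your-access-key>"
+
+token = generate_sas_token(resource, key)  # helper defined above
+
+# Either header works; aeg-sas-token is shown here.
+headers = {"aeg-sas-token": token}
+# Or: headers = {"Authorization": f"SharedAccessSignature {token}"}
+
+event = [{"id": "10001", "eventType": "recordInserted",
+          "subject": "myapp/vehicles/cars",
+          "eventTime": "2021-08-10T21:03:07+00:00",
+          "data": {"model": "SUV", "color": "green"},
+          "dataVersion": "1.0"}]
+
+response = requests.post(resource, headers=headers, json=event)
+response.raise_for_status()
+```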
+
+## Next steps
+See [Event delivery authentication](security-authentication.md) to learn about authenticating event delivery to event handlers.
event-grid Authenticate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/authenticate-with-active-directory.md
+
+ Title: Authenticate Event Grid publishing clients using Azure Active Directory (Preview)
+description: This article describes how to authenticate Azure Event Grid publishing client using Azure Active Directory.
+ Last updated : 08/10/2021++
+# Authentication and authorization with Azure Active Directory (Preview)
+This article describes how to authenticate Azure Event Grid publishing clients using Azure Active Directory (Azure AD).
+
+## Overview
+The [Microsoft Identity](../active-directory/develop/v2-overview.md) platform provides integrated authentication and access control management for resources and applications that use Azure Active Directory (Azure AD) as their identity provider. Use the Microsoft Identity platform to provide authentication and authorization support in your applications. It's based on open standards such as OAuth 2.0 and OpenID Connect, and offers tools and open-source libraries that support many authentication scenarios. It provides advanced features such as [Conditional Access](../active-directory/conditional-access/overview.md) that allows you to set policies that require multifactor authentication or allow access from specific locations, for example.
+
+An advantage that improves your security stance when using Azure AD is that you don't need to store credentials, such as authentication keys, in your code or repositories. Instead, you rely on the acquisition of OAuth 2.0 access tokens from the Microsoft Identity platform that your application presents when authenticating to a protected resource. You can register your event publishing application with Azure AD and obtain a service principal associated with your app that you manage and use. Alternatively, you can use [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md), either system assigned or user assigned, for an even simpler identity management model, because some aspects of the identity lifecycle are managed for you.
+
+[Role-based access control](../active-directory/develop/custom-rbac-for-developers.md) (RBAC) allows you to configure authorization in a way that certain security principals (identities for users, groups, or apps) have specific permissions to execute operations over Azure resources. This way, the security principal used by a client application that sends events to Event Grid must have the RBAC role **EventGrid Data Sender** associated with it.
+
+### Security principals
+There are two broad categories of security principals that are applicable when discussing authentication of an Event Grid publishing client:
+
+- **Managed identities**. A managed identity can be system assigned, which you enable on an Azure resource and is associated to only that resource, or user assigned, which you explicitly create and name. User assigned managed identities can be associated to more than one resource.
+- **Application security principal**. A type of security principal that represents an application that accesses resources protected by Azure AD.
+
+Regardless of the security principal used, a managed identity or an application security principal, your client uses that identity to authenticate before Azure AD and obtain an [OAuth 2.0 access token](../active-directory/develop/access-tokens.md) that's sent with requests when sending events to Event Grid. That token is cryptographically signed and once Event Grid receives it, the token is validated. For example, the audience (the intended recipient of the token) is confirmed to be Event Grid (`https://eventgrid.azure.net`), among other things. The token contains information about the client identity. Event Grid takes that identity and validates that the client has the role **EventGrid Data Sender** assigned to it. More precisely, Event Grid validates that the identity has the ``Microsoft.EventGrid/events/send/action`` permission in an RBAC role associated to the identity before allowing the event publishing request to complete.
+
+If you're using the Event Grid SDK, you don't need to worry about the details of acquiring access tokens and including them with every request to Event Grid, because the [Event Grid data plane SDKs](#publish-events-using-event-grids-client-sdks) do that for you.
+
+### High-level steps
+Perform the following steps to ready your client to use Azure AD authentication when sending events to a topic, domain, or partner namespace.
+
+1. Create or use a security principal you want to use to authenticate. You can use a [managed identity](#authenticate-using-a-managed-identity) or an [application security principal](#authenticate-using-a-security-principal-of-a-client-application).
+2. [Grant permission to a security principal to publish events](#assign-permission-to-a-security-principal-to-publish-events) by assigning the **EventGrid Data Sender** role to the security principal.
+3. Use the Event Grid SDK to publish events to Event Grid.
+
+## Authenticate using a managed identity
+
+Managed identities are identities associated with Azure resources. Managed identities provide an identity that applications use when working with Azure resources that support Azure AD authentication. Applications may use the managed identity of the hosting resource, like a virtual machine or an Azure App Service, to obtain Azure AD tokens that are presented with the request when publishing events to Event Grid. When the application connects, Event Grid binds the managed identity's context to the client. Once it's associated with a managed identity, your Event Grid publishing client can do all authorized operations. Authorization is granted by associating a managed identity with an Event Grid RBAC role.
+
+Managed identity provides Azure services with an automatically managed identity in Azure AD. In contrast to other authentication methods, you don't need to store and protect access keys or Shared Access Signatures (SAS) in your application code or configuration, either for the identity itself or for the resources you need to access.
+
+To authenticate your event publishing client using managed identities, first decide on the hosting Azure service for your client application and then enable system assigned or user assigned managed identities on that Azure service instance. For example, you can enable managed identities on a [VM](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md), an [Azure App Service or Azure Functions](../app-service/overview-managed-identity.md?tabs=dotnet).
+
+Once you have a managed identity configured in a hosting service, [assign the permission to publish events to that identity](#assign-permission-to-a-security-principal-to-publish-events).
+
+## Authenticate using a security principal of a client application
+
+Besides managed identities, another identity option is to create a security principal for your client application. To that end, you need to register your application with Azure AD. Registering your application delegates identity and access management control to Azure AD. Follow the steps in section [Register an application](../active-directory/develop/quickstart-register-app.md#register-an-application) and in section [Add a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret). Make sure to review the [prerequisites](../active-directory/develop/quickstart-register-app.md#prerequisites) before starting.
+
+Once you have an application security principal and have followed the above steps, [assign the permission to publish events to that identity](#assign-permission-to-a-security-principal-to-publish-events).
+
+> [!NOTE]
+> When you register an application in the portal, an [application object](../active-directory/develop/app-objects-and-service-principals.md#application-object) and a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) are created automatically in your home tenant. Alternatively, you can use Microsoft Graph to register your application. However, if you register or create an application using the Microsoft Graph APIs, creating the service principal object is a separate step.
+
+## Assign permission to a security principal to publish events
+
+The identity used to publish events to Event Grid must have the permission ``Microsoft.EventGrid/events/send/action`` that allows it to send events to Event Grid. That permission is included in the built-in RBAC role [EventGrid Data Sender](../role-based-access-control/built-in-roles.md#eventgrid-data-sender). This role can be assigned to a [security principal](../role-based-access-control/overview.md#security-principal), for a given [scope](../role-based-access-control/overview.md#scope), which can be a management group, an Azure subscription, a resource group, or a specific event grid topic, domain, or partner namespace. Follow the steps in [Assign Azure roles](../role-based-access-control/role-assignments-portal.md?tabs=current) to assign a security principal the **EventGrid Data Sender** role and in that way grant an application using that security principal access to send events. Alternatively, you can define a [custom role](../role-based-access-control/custom-roles.md) that includes the ``Microsoft.EventGrid/events/send/action`` permission and assign that custom role to your security principal.
+
+With RBAC privileges taken care of, you can now [build your client application to send events](#publish-events-using-event-grids-client-sdks) to Event Grid.
+
+> [!NOTE]
+> Event Grid supports more RBAC roles for purposes beyond sending events. For more information, see [Event Grid built-in roles](security-authorization.md#built-in-roles).
++
+## Publish events using Event Grid's client SDKs
+
+Use [Event Grid's data plane SDK](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) to publish events to Event Grid. Event Grid's SDKs support all authentication methods, including Azure AD authentication.
+
+### Prerequisites
+
+Following are the prerequisites to authenticate to Event Grid.
+
+- Install the SDK on your application.
+ - [Java](/java/api/overview/azure/messaging-eventgrid-readme#include-the-package)
+ - [.NET](/dotnet/api/overview/azure/messaging.eventgrid-readme-pre#install-the-package)
+ - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/eventgrid.md#install-the-azureeventgrid-package)
+ - [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-eventgrid#install-the-package)
+- Install the Azure Identity client library. The Event Grid SDK depends on the Azure Identity client library for authentication.
+ - [Azure Identity client library for Java](/java/api/overview/azure/identity-readme)
+ - [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme)
+ - [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme)
+ - [Azure Identity client library for Python](/python/api/overview/azure/identity-readme)
+- A topic, domain, or partner namespace to which your application will send events.
+
+### Publish events using Azure AD Authentication
+
+To send events to a topic, domain, or partner namespace, you can build the client in the following way. The API version that first provided support for Azure AD authentication is ``2021-06-01-preview``. Use that API version or a more recent version in your application.
+
+```java
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();
+ EventGridPublisherClient cloudEventClient = new EventGridPublisherClientBuilder()
+ .endpoint("<your-event-grid-topic-domain-or-partner-namespace-endpoint>?api-version=2021-06-01-preview")
+ .credential(credential)
+ .buildCloudEventPublisherClient();
+```
+If you're using a security principal associated with a client publishing application, you have to configure environment variables as shown in the [Java SDK readme article](/java/api/overview/azure/identity-readme#environment-variables). The `DefaultAzureCredentialBuilder` reads those environment variables to use the right identity. For more information, see [Java API overview](/java/api/overview/azure/identity-readme#defaultazurecredential).
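+
+The same pattern applies outside the SDKs: acquire a token for the `https://eventgrid.azure.net/.default` scope and present it as a bearer token. The following Python sketch, which assumes the Azure Identity library and a placeholder topic endpoint, illustrates the raw HTTP shape that the SDKs implement for you:
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+# Placeholder endpoint; use API version 2021-06-01-preview or later.
+endpoint = "https://<yourtopic>.<region>.eventgrid.azure.net/api/events?api-version=2021-06-01-preview"
+
+# DefaultAzureCredential picks up a managed identity or the environment
+# variables configured for an application security principal.
+credential = DefaultAzureCredential()
+token = credential.get_token("https://eventgrid.azure.net/.default")
+
+event = [{"id": "10001", "eventType": "recordInserted",
+          "subject": "myapp/vehicles/cars",
+          "eventTime": "2021-08-10T21:03:07+00:00",
+          "data": {"model": "SUV", "color": "green"},
+          "dataVersion": "1.0"}]
+
+response = requests.post(endpoint, json=event,
+                         headers={"Authorization": f"Bearer {token.token}"})
+response.raise_for_status()
+```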
++
+For more information, see the following articles:
+
+- [Azure Event Grid client library for Java](/java/api/overview/azure/messaging-eventgrid-readme)
+- [Azure Event Grid client library for .NET](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.Messaging.EventGrid#authenticate-using-azure-active-directory)
+- [Azure Event Grid client library for JavaScript](/javascript/api/overview/azure/eventgrid-readme)
+- [Azure Event Grid client library for Python](/python/api/overview/azure/eventgrid-readme)
+
+## Disable key and shared access signature authentication
+
+Azure AD authentication provides superior security compared to access key or Shared Access Signature (SAS) token authentication. With Azure AD authentication, the identity is validated against the Azure AD identity provider. As a developer, you won't have to handle keys in your code if you use Azure AD authentication. You'll also benefit from all security features built into the Microsoft Identity platform, such as [Conditional Access](../active-directory/conditional-access/overview.md), that can help you improve your application's security stance.
+
+Once you decide to use Azure AD authentication, you can disable authentication based on access keys or SAS tokens.
+
+> [!NOTE]
+> Access key or SAS token authentication is a form of **local authentication**. You'll sometimes hear "local auth" used to refer to this category of authentication mechanisms, which don't rely on Azure AD. The API parameter used to disable local authentication is called, appropriately, ``disableLocalAuth``.
+
+The following CLI command shows how to create a custom topic with local authentication disabled. The disable local auth feature is currently in preview, and you need to use API version ``2021-06-01-preview``.
+
+```cli
+az resource create --subscription <subscriptionId> --resource-group <resourceGroup> --resource-type Microsoft.EventGrid/topics --api-version 2021-06-01-preview --name <topicName> --location <location> --properties "{ \"disableLocalAuth\": true}"
+```
+
+For reference, the following table lists the resource type value to use for each topic type.
+
+| Topic type | Resource type |
+| :-- | :-- |
+| Domains | Microsoft.EventGrid/domains |
+| Partner Namespace | Microsoft.EventGrid/partnerNamespaces|
+| Custom Topic | Microsoft.EventGrid/topics |
+
+If you're using PowerShell, use the following cmdlets to create a custom topic with local authentication disabled.
+
+```PowerShell
+
+Set-AzContext -SubscriptionId <SubscriptionId>
+
+New-AzResource -ResourceGroupName <ResourceGroupName> -ResourceType Microsoft.EventGrid/topics -ApiVersion 2021-06-01-preview -ResourceName <TopicName> -Location <Location> -Properties @{disableLocalAuth=$true}
+```
+
+> [!NOTE]
+> - To learn about using the access key or shared access signature authentication, see [Authenticate publishing clients with keys or SAS tokens](security-authenticate-publishing-clients.md)
+> - This article deals with authentication when publishing events to Event Grid (event ingress). Authenticating Event Grid when delivering events (event egress) is the subject of article [Authenticate event delivery to event handlers](security-authentication.md).
+
+## Resources
+- Data plane SDKs
+ - Java SDK: [github](https://github.com/Azure/azure-sdk-for-java)
+ - .NET SDK: [github](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventgrid/Azure.Messaging.EventGrid) | [samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventgrid/Azure.Messaging.EventGrid/samples) | [migration guide from previous SDK version](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventgrid/Azure.Messaging.EventGrid/MigrationGuide.md)
+ - Python SDK: [github](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventgrid/azure-eventgrid) | [samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventgrid/azure-eventgrid/samples) | [migration guide from previous SDK version](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventgrid/azure-eventgrid/migration_guide.md)
+ - JavaScript SDK: [github](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventgrid/eventgrid/) | [samples](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventgrid/eventgrid/samples) | [migration guide from previous SDK version](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventgrid/eventgrid/migration.md)
+- [Event Grid SDK blog](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/)
+- Azure Identity client library
+ - [Java](https://github.com/Azure/azure-sdk-for-java)
+ - [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/identity/Azure.Identity/README.md)
+ - [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/identity/azure-identity/README.md)
+ - [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/README.md)
+- Learn about [managed identities](../active-directory/managed-identities-azure-resources/overview.md)
+- Learn about [how to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md?tabs=dotnet)
+- Learn about [applications and service principals](../active-directory/develop/app-objects-and-service-principals.md)
+- Learn about [registering an application with the Microsoft Identity platform](../active-directory/develop/quickstart-register-app.md).
+- Learn about how [authorization](../role-based-access-control/overview.md) (RBAC access control) works.
+- Learn about Event Grid built-in RBAC roles including its [EventGrid Data Sender](../role-based-access-control/built-in-roles.md#eventgrid-data-sender) role. [Event Grid's roles list](security-authorization.md#built-in-roles).
+- Learn about [assigning RBAC roles](../role-based-access-control/role-assignments-portal.md?tabs=current) to identities.
+- Learn about how to define [custom RBAC roles](../role-based-access-control/custom-roles.md).
+- Learn about [application and service principal objects in Azure AD](../active-directory/develop/app-objects-and-service-principals.md).
+- Learn about [Microsoft Identity Platform access tokens](../active-directory/develop/access-tokens.md).
+- Learn about [OAuth 2.0 authentication code flow and Microsoft Identity Platform](../active-directory/develop/v2-oauth2-auth-code-flow.md)
event-grid Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/authentication-overview.md
+
+ Title: Authenticate clients publishing events to Event Grid custom topics, domains, and partner namespaces.
+description: This article describes different ways of authenticating clients publishing events to Event Grid custom topics, domains, and partner namespaces.
+ Last updated : 08/10/2021++
+# Client authentication when publishing events to Event Grid
+Authentication for clients publishing events to Event Grid is supported using the following methods:
+
+- Azure Active Directory (Azure AD)
+- Access key or shared access signature (SAS)
+
+## Authenticate using Azure Active Directory (preview)
+Azure AD integration for Event Grid resources provides Azure role-based access control (RBAC) for fine-grained control over a client's access to resources. You can use Azure RBAC to grant permissions to a security principal, which may be a user, a group, or an application service principal. The security principal is authenticated by Azure AD to return an OAuth 2.0 token. The token can be used to authorize a request to access Event Grid resources (topics, domains, or partner namespaces). For detailed information, see [Authenticate and authorize with the Microsoft Identity platform](authenticate-with-active-directory.md).
++
+> [!IMPORTANT]
+> Authenticating and authorizing users or applications using Azure AD identities provides superior security and ease of use over key-based and shared access signatures (SAS) authentication. With Azure AD, there is no need to store secrets used for authentication in your code and risk potential security vulnerabilities. We strongly recommend that you use Azure AD with your Azure Event Grid event publishing applications.
+
+> [!NOTE]
+> Azure AD authentication support by Azure Event Grid has been released as preview.
+> Azure Event Grid on Kubernetes does not support Azure AD authentication yet.
+
+## Authenticate using access keys and shared access signatures
+You can authenticate clients that publish events to Azure Event Grid topics, domains, and partner namespaces by using an **access key** or a **Shared Access Signature (SAS)** token. For more information, see [Using access keys or using Shared Access Signatures (SAS)](authenticate-with-access-keys-shared-access-signatures.md).
+
+
+## Next steps
+This article deals with authentication when **publishing** events to Event Grid (event ingress). To learn about authenticating when **delivering** events (event egress), see [Authenticate event delivery to event handlers](security-authentication.md).
+
event-grid How To Filter Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/how-to-filter-events.md
Title: How to filter events for Azure Event Grid description: This article shows how to filter events (by event type, by subject, by operators and data, etc.) when creating an Event Grid subscription. Previously updated : 07/07/2020 - Last updated : 08/11/2021 # Filter events for Event Grid This article shows how to filter events when creating an Event Grid subscription. To learn about the options for event filtering, see [Understand event filtering for Event Grid subscriptions](event-filtering.md). - ## Filter by event type When creating an Event Grid subscription, you can specify which [event types](event-schema.md) to send to the endpoint. The examples in this section create event subscriptions for a resource group but limit the events that are sent to `Microsoft.Resources.ResourceWriteFailure` and `Microsoft.Resources.ResourceWriteSuccess`. If you need more flexibility when filtering events by event types, see Filter by advanced operators and data fields.
+### Azure PowerShell
For PowerShell, use the `-IncludedEventType` parameter when creating the subscription. ```powershell
New-AzEventGridSubscription `
-IncludedEventType $includedEventTypes ```
+### Azure CLI
For Azure CLI, use the `--included-event-types` parameter. The following example uses Azure CLI in a Bash shell: ```azurecli
az eventgrid event-subscription create \
--included-event-types $includedEventTypes ```
+### Azure portal
+
+1. On the **Event Subscription** page, switch to the **Filters** tab.
+1. Select **Add Event Type** next to **Filter to Event Types**.
+
+ :::image type="content" source="./media/how-to-filter-events/add-event-type-button.png" alt-text="Screenshot of the Event Subscription page with Add Event Type button selected.":::
+1. Type the event type and press ENTER. In the following example, the event type is `Microsoft.Resources.ResourceWriteSuccess`.
+
+ :::image type="content" source="./media/how-to-filter-events/sample-event-type.png" alt-text="Screenshot of the Event Subscription page with a sample event type.":::
+
+### Azure Resource Manager template
For a Resource Manager template, use the `includedEventTypes` property. ```json
For a Resource Manager template, use the `includedEventTypes` property.
] ```
+> [!NOTE]
+> To learn more about these filters (event types, subject, and advanced), see [Understand event filtering for Event Grid subscriptions](event-filtering.md).
+ ## Filter by subject You can filter events by the subject in the event data. You can specify a value to match for the beginning or end of the subject. If you need more flexibility when filtering events by subject, see Filter by advanced operators and data fields. In the following PowerShell example, you create an event subscription that filters by the beginning of the subject. You use the `-SubjectBeginsWith` parameter to limit events to ones for a specific resource. You pass the resource ID of a network security group.
+### Azure PowerShell
```powershell $resourceId = (Get-AzResource -ResourceName demoSecurityGroup -ResourceGroupName myResourceGroup).ResourceId
New-AzEventGridSubscription `
-SubjectEndsWith ".jpg" ```
+### Azure CLI
In the following Azure CLI example, you create an event subscription that filters by the beginning of the subject. You use the `--subject-begins-with` parameter to limit events to ones for a specific resource. You pass the resource ID of a network security group. ```azurecli
az eventgrid event-subscription create \
--subject-ends-with ".jpg" ```
+### Azure portal
+
+1. On the **Event Subscription** page, select **Enable subject filtering**.
+1. Enter values for one or more of the following fields: **Subject begins with** and **Subject ends with**. In the following example, both options are specified.
+
+ :::image type="content" source="./media/how-to-filter-events/subject-filter-example.png" alt-text="Screenshot of Event Subscription page with subject filtering example.":::
+1. Select the **Case-sensitive subject matching** option if you want the subject of the event to match the case of the specified filters.
+
+### Azure Resource Manager template
In the following Resource Manager template example, you create an event subscription that filters by the beginning of the subject. You use the `subjectBeginsWith` property to limit events to ones for a specific resource. You pass the resource ID of a network security group. ```json
The next Resource Manager template example creates a subscription for a blob sto
] ```
+> [!NOTE]
+> To learn more about these filters (event types, subject, and advanced), see [Understand event filtering for Event Grid subscriptions](event-filtering.md).
+ ## Filter by operators and data For more flexibility in filtering, you can use operators and data properties to filter events.
To learn about the operators and keys that you can use for advanced filtering, s
These examples create a custom topic. They subscribe to the custom topic and filter by a value in the data object. Events that have the color property set to blue, red, or green are sent to the subscription.
-For Azure CLI, use:
-
-```azurecli
-topicName=<your-topic-name>
-endpointURL=<endpoint-URL>
-
-az group create -n gridResourceGroup -l eastus2
-az eventgrid topic create --name $topicName -l eastus2 -g gridResourceGroup
-
-topicid=$(az eventgrid topic show --name $topicName -g gridResourceGroup --query id --output tsv)
-
-az eventgrid event-subscription create \
- --source-resource-id $topicid \
- -n demoAdvancedSub \
- --advanced-filter data.color stringin blue red green \
- --endpoint $endpointURL \
- --expiration-date "<yyyy-mm-dd>"
-```
-
-Notice that an [expiration date](concepts.md#event-subscription-expiration) is set for the subscription.
+### Azure PowerShell
For PowerShell, use:
New-AzEventGridSubscription `
-AdvancedFilter @($AdvFilter1) ```
-### Test filter
-
-To test the filter, send an event with the color field set to green. Because green is one of the values in the filter, the event is delivered to the endpoint.
+### Azure CLI
For Azure CLI, use: ```azurecli
-topicEndpoint=$(az eventgrid topic show --name $topicName -g gridResourceGroup --query "endpoint" --output tsv)
-key=$(az eventgrid topic key list --name $topicName -g gridResourceGroup --query "key1" --output tsv)
+topicName=<your-topic-name>
+endpointURL=<endpoint-URL>
-event='[ {"id": "'"$RANDOM"'", "eventType": "recordInserted", "subject": "myapp/vehicles/cars", "eventTime": "'`date +%Y-%m-%dT%H:%M:%S%z`'", "data":{ "model": "SUV", "color": "green"},"dataVersion": "1.0"} ]'
+az group create -n gridResourceGroup -l eastus2
+az eventgrid topic create --name $topicName -l eastus2 -g gridResourceGroup
-curl -X POST -H "aeg-sas-key: $key" -d "$event" $topicEndpoint
+topicid=$(az eventgrid topic show --name $topicName -g gridResourceGroup --query id --output tsv)
+
+az eventgrid event-subscription create \
+ --source-resource-id $topicid \
+ -n demoAdvancedSub \
+ --advanced-filter data.color stringin blue red green \
+ --endpoint $endpointURL \
+ --expiration-date "<yyyy-mm-dd>"
```
+Notice that an [expiration date](concepts.md#event-subscription-expiration) is set for the subscription.
++
+### Azure portal
+
+1. On the **Event Subscription** page, select **Add new filter** in the **ADVANCED FILTERS** section.
+
+ :::image type="content" source="./media/how-to-filter-events/add-new-filter-button.png" alt-text="Screenshot showing the Event Subscription page with Add new filter link highlighted.":::
+2. Specify a key, an operator, and one or more values to compare. In the following example, **data.color** is the key, **String is in** is the operator, and **blue**, **red**, and **green** are the values.
+
+ :::image type="content" source="./media/how-to-filter-events/advanced-filter-example.png" alt-text="Screenshot showing an example of an advanced filter.":::
+
+ > [!NOTE]
+ > To learn more about advanced filters, see [Understand event filtering for Event Grid subscriptions](event-filtering.md).
++
+### Test the filter
+To test the filter, send an event with the color field set to green. Because green is one of the values in the filter, the event is delivered to the endpoint.
+
+### Azure PowerShell
For PowerShell, use: ```powershell
Invoke-WebRequest -Uri $endpoint -Method POST -Body $body -Headers @{"aeg-sas-ke
To test a scenario where the event isn't sent, send an event with the color field set to yellow. Yellow isn't one of the values specified in the subscription, so the event isn't delivered to your subscription.
-For Azure CLI, use:
-
-```azurecli
-event='[ {"id": "'"$RANDOM"'", "eventType": "recordInserted", "subject": "myapp/vehicles/cars", "eventTime": "'`date +%Y-%m-%dT%H:%M:%S%z`'", "data":{ "model": "SUV", "color": "yellow"},"dataVersion": "1.0"} ]'
-
-curl -X POST -H "aeg-sas-key: $key" -d "$event" $topicEndpoint
-```
-For PowerShell, use:
- ```powershell $htbody = @{ id= $eventID
$body = "["+(ConvertTo-Json $htbody)+"]"
Invoke-WebRequest -Uri $endpoint -Method POST -Body $body -Headers @{"aeg-sas-key" = $keys.Key1} ``` +
+### Azure CLI
+For Azure CLI, use:
+
+```azurecli
+topicEndpoint=$(az eventgrid topic show --name $topicName -g gridResourceGroup --query "endpoint" --output tsv)
+key=$(az eventgrid topic key list --name $topicName -g gridResourceGroup --query "key1" --output tsv)
+
+event='[ {"id": "'"$RANDOM"'", "eventType": "recordInserted", "subject": "myapp/vehicles/cars", "eventTime": "'`date +%Y-%m-%dT%H:%M:%S%z`'", "data":{ "model": "SUV", "color": "green"},"dataVersion": "1.0"} ]'
+
+curl -X POST -H "aeg-sas-key: $key" -d "$event" $topicEndpoint
+```
+
+To test a scenario where the event isn't sent, send an event with the color field set to yellow. Yellow isn't one of the values specified in the subscription, so the event isn't delivered to your subscription.
+
+For Azure CLI, use:
+
+```azurecli
+event='[ {"id": "'"$RANDOM"'", "eventType": "recordInserted", "subject": "myapp/vehicles/cars", "eventTime": "'`date +%Y-%m-%dT%H:%M:%S%z`'", "data":{ "model": "SUV", "color": "yellow"},"dataVersion": "1.0"} ]'
+
+curl -X POST -H "aeg-sas-key: $key" -d "$event" $topicEndpoint
+```
+ ## Next steps
+To learn more about filters (event types, subject, and advanced), see [Understand event filtering for Event Grid subscriptions](event-filtering.md).
-* For information about monitoring event deliveries, see [Monitor Event Grid message delivery](monitor-event-delivery.md).
-* For more information about the authentication key, see [Event Grid security and authentication](security-authentication.md).
-* For more information about creating an Azure Event Grid subscription, see [Event Grid subscription schema](subscription-creation-schema.md).
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-linkvnet-arm.md
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connecti
## Enroll in ExpressRoute FastPath features (preview)
-FastPath support for virtual network peering and Private Link is now in Public preview.
+FastPath support for virtual network peering is now in Public preview.
### FastPath and virtual network peering
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[TIME dotCom](https://www.time.com.my/enterprise/connectivity/direct-cloud)** | Supported | Supported | Kuala Lumpur | | **[Tokai Communications](https://www.tokai-com.co.jp/en/)** | Supported | Supported | Osaka, Tokyo2 | | **[Transtelco](https://transtelco.net/enterprise-services/)** |Supported |Supported |Dallas, Queretaro(Mexico)|
-| **T-Systems** |Supported |Supported |Frankfurt|
+| **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** |Supported |Supported |Frankfurt|
| **[UOLDIVEO](https://www.uoldiveo.com.br/)** |Supported |Supported |Sao Paulo | | **[UIH](https://www.uih.co.th/en/network-solutions/global-network/cloud-direct-for-microsoft-azure-expressroute)** | Supported | Supported | Bangkok | | **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** |Supported |Supported |Amsterdam, Chicago, Dallas, Hong Kong SAR, London, Mumbai, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC |
To learn more, see [ExpressRoute in China](http://www.windowsazure.cn/home/featu
| **[e-shelter](https://www.e-shelter.de/en/microsoft-expressroute)** |Supported |Not Supported |Berlin | | **Interxion** |Supported |Not Supported |Frankfurt | | **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported | Not Supported | Berlin |
-| **T-Systems** |Supported |Not Supported |Berlin |
+| **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** |Supported |Not Supported |Berlin |
## Connectivity through Exchange providers
healthcare-apis Access Fhir Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/access-fhir-postman-tutorial.md
Select **Get New Access Token**.
To obtain a valid access token, select **Authorization** and select **OAuth 2.0** from the **TYPE** drop-down menu.
-![Set OAuth 2.0](media/tutorial-postman/postman-select-oauth2.png)
+![Set OAuth 2.0](media/tutorial-postman/postman-select-oauth-two.png)
Select **Get New Access Token**.
Scroll down to view the returned token screen, and then select **Use Token**.
Refer to the **Access Token** field to view the newly populated token. If you select **Send** to repeat the `Patient` resource search, a **Status** `200 OK` gets returned. A returned status `200 OK` indicates a successful HTTP request.
-![200 OK](media/tutorial-postman/postman-200-OK.png)
+![200 OK](media/tutorial-postman/postman-two-hundred-ok.png)
In the *Patient search* example, there are no patients in the database such that the search result is empty.
healthcare-apis Azure Active Directory Identity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/azure-active-directory-identity-configuration.md
+
+ Title: Azure Active Directory identity configuration for Azure API for FHIR
+description: Learn the principles of identity, authentication, and authorization for Azure FHIR servers.
++++++ Last updated : 08/05/2021+++
+# Azure Active Directory identity configuration for Azure API for FHIR
+
+When you're working with healthcare data, it's important to ensure that the data is secure and can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. The [Azure API for FHIR](https://azure.microsoft.com/services/azure-api-for-fhir/) is secured using [Azure Active Directory](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. This article provides an overview of FHIR server authorization and the steps needed to obtain a token to access a FHIR server. While these steps apply to any FHIR server and any identity provider, this article walks through Azure API for FHIR as the FHIR server and Azure Active Directory (Azure AD) as the identity provider.
+
+## Access control overview
+
+In order for a client application to access Azure API for FHIR, it must present an access token. The access token is a signed, [Base64](https://en.wikipedia.org/wiki/Base64) encoded collection of properties (claims) that convey information about the client's identity and roles and privileges granted to the client.
+
+There are many ways to obtain a token, but the Azure API for FHIR doesn't care how the token is obtained as long as it's an appropriately signed token with the correct claims.
+
+Using [authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md) as an example, accessing a FHIR server goes through the four steps below:
+
+![FHIR Authorization](media/azure-ad-hcapi/fhir-authorization.png)
+
+1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example, a username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration (see below).
+1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When requesting a token, the client application may have to provide a client secret (the application's password). See details on [obtaining an access token](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token).
+1. The client makes a request to the Azure API for FHIR, for example `GET /Patient` to search all patients. When making the request, it includes the access token in an HTTP request header, for example `Authorization: Bearer eyJ0e...`, where `eyJ0e...` represents the Base64 encoded access token.
+1. The Azure API for FHIR validates that the token contains appropriate claims (properties in the token). If everything checks out, it will complete the request and return a FHIR bundle with results to the client.
+
+It's important to note that the Azure API for FHIR isn't involved in validating user credentials, and it doesn't issue the token. Authentication and token creation are done by Azure AD. The Azure API for FHIR simply validates that the token is signed correctly (that it's authentic) and that it has the appropriate claims.
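+
+As a minimal sketch (the token value and service name are placeholders, and token acquisition isn't shown), step 3 above can be reproduced from PowerShell:
+
+```powershell
+# Call the FHIR server with a bearer token (placeholders, not real values).
+$accessToken = "<ACCESS-TOKEN>"
+$fhirServer  = "https://<service-name>.azurehealthcareapis.com"
+
+# Equivalent of "GET /Patient" with an Authorization header.
+Invoke-RestMethod -Uri "$fhirServer/Patient" -Headers @{ Authorization = "Bearer $accessToken" }
+```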
+
+## Structure of an access token
+
+Development of FHIR applications often involves debugging access issues. If a client is denied access to the Azure API for FHIR, it's useful to understand the structure of the access token and how it can be decoded to inspect the contents (the claims) of the token.
+
+FHIR servers typically expect a [JSON Web Token](https://en.wikipedia.org/wiki/JSON_Web_Token) (JWT, sometimes pronounced "jot"). It consists of three parts:
+
+**Part 1**: A header, which could look like:
+ ```json
+ {
+ "alg": "HS256",
+ "typ": "JWT"
+ }
+ ```
+
+**Part 2**: The payload (the claims), for example:
+ ```json
+ {
+ "oid": "123",
+ "iss": "https://issuerurl",
+ "iat": 1422779638,
+ "roles": [
+ "admin"
+ ]
+ }
+ ```
+
+**Part 3**: A signature, which is calculated by concatenating the Base64 encoded contents of the header and the payload and calculating a cryptographic hash of them based on the algorithm (`alg`) specified in the header. A server can obtain the public keys from the identity provider and validate that this token was issued by a specific identity provider and that it hasn't been tampered with.
+
+The full token consists of the Base64 encoded (actually Base64 URL encoded) versions of those three segments, concatenated and separated by a `.` (dot).
+
+An example token looks like this:
+
+```
+eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJvaWQiOiIxMjMiLCAiaXNzIjoiaHR0cHM6Ly9pc3N1ZXJ1cmwiLCJpYXQiOjE0MjI3Nzk2MzgsInJvbGVzIjpbImFkbWluIl19.gzSraSYS8EXBxLN_oWnFSRgCzcmJmMjLiuyu5CSpyHI
+```
+
+The token can be decoded and inspected with tools such as [https://jwt.ms](https://jwt.ms). The result of decoding the token is:
+
+```json
+{
+ "alg": "HS256",
+ "typ": "JWT"
+}.{
+ "oid": "123",
+ "iss": "https://issuerurl",
+ "iat": 1422779638,
+ "roles": [
+ "admin"
+ ]
+}.[Signature]
+```
+
+## Obtaining an access token
+
+As mentioned above, there are several ways to obtain a token from Azure AD. They are described in detail in the [Azure AD developer documentation](../../active-directory/develop/index.yml).
+
+Azure AD has two different versions of the OAuth 2.0 endpoints, which are referred to as `v1.0` and `v2.0`. Both of these versions are OAuth 2.0 endpoints and the `v1.0` and `v2.0` designations refer to differences in how Azure AD implements that standard.
+
+When using a FHIR server, you can use either the `v1.0` or the `v2.0` endpoints. The choice may depend on the authentication libraries you are using in your client application.
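+
+For reference, the two token endpoints differ only by a version segment in the path (`<TENANT-ID>` is a placeholder):
+
+```
+https://login.microsoftonline.com/<TENANT-ID>/oauth2/token
+https://login.microsoftonline.com/<TENANT-ID>/oauth2/v2.0/token
+```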
+
+The pertinent sections of the Azure AD documentation are:
+
+* `v1.0` endpoint:
+ * [Authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md).
+ * [Client credentials flow](../../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md).
+* `v2.0` endpoint:
+ * [Authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
+ * [Client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
+
+There are other variations (for example, the on-behalf-of flow) for obtaining a token. Check the Azure AD documentation for details. When using the Azure API for FHIR, there are also some shortcuts for obtaining an access token (for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
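+
+As a minimal sketch (assuming the Azure CLI is installed and you've signed in with `az login`; the service name is a placeholder), a debugging token can be requested like this:
+
+```powershell
+# Request an access token for your FHIR server's audience via the Azure CLI.
+$fhirAudience = "https://<service-name>.azurehealthcareapis.com"
+$accessToken = az account get-access-token --resource=$fhirAudience --query accessToken --output tsv
+```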
+
+## Next steps
+
+In this document, you learned some of the basic concepts involved in securing access to the Azure API for FHIR using Azure AD. For information about how to deploy the Azure API for FHIR service, see:
+
+>[!div class="nextstepaction"]
+>[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
healthcare-apis Azure Api Fhir Access Token Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-access-token-validation.md
+
+ Title: Azure API for FHIR access token validation
+description: Walks through token validation and gives tips on how to troubleshoot access issues
++++++ Last updated : 08/05/2021++
+# Azure API for FHIR access token validation
+
+How Azure API for FHIR validates the access token will depend on implementation and configuration. In this article, we will walk through the validation steps, which can be helpful when troubleshooting access issues.
+
+## Validate token has no issues with identity provider
+
+The first step in token validation is to verify that the token was issued by the correct identity provider and that it hasn't been modified. The FHIR server will be configured to use a specific identity provider, known as the `Authority`. The FHIR server will retrieve information about the identity provider from the `/.well-known/openid-configuration` endpoint. When using Azure AD, the full URL would be:
+
+```
+GET https://login.microsoftonline.com/<TENANT-ID>/.well-known/openid-configuration
+```
+
+where `<TENANT-ID>` is the specific Azure AD tenant (either a tenant ID or a domain name).
+
+Azure AD will return a document like the one below to the FHIR server.
+
+```json
+{
+ "authorization_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/authorize",
+ "token_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/token",
+ "token_endpoint_auth_methods_supported": [
+ "client_secret_post",
+ "private_key_jwt",
+ "client_secret_basic"
+ ],
+ "jwks_uri": "https://login.microsoftonline.com/common/discovery/keys",
+ "response_modes_supported": [
+ "query",
+ "fragment",
+ "form_post"
+ ],
+ "subject_types_supported": [
+ "pairwise"
+ ],
+ "id_token_signing_alg_values_supported": [
+ "RS256"
+ ],
+ "http_logout_supported": true,
+ "frontchannel_logout_supported": true,
+ "end_session_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/logout",
+ "response_types_supported": [
+ "code",
+ "id_token",
+ "code id_token",
+ "token id_token",
+ "token"
+ ],
+ "scopes_supported": [
+ "openid"
+ ],
+ "issuer": "https://sts.windows.net/<TENANT-ID>/",
+ "claims_supported": [
+ "sub",
+ "iss",
+ "cloud_instance_name",
+ "cloud_instance_host_name",
+ "cloud_graph_host_name",
+ "msgraph_host",
+ "aud",
+ "exp",
+ "iat",
+ "auth_time",
+ "acr",
+ "amr",
+ "nonce",
+ "email",
+ "given_name",
+ "family_name",
+ "nickname"
+ ],
+ "microsoft_multi_refresh_token": true,
+ "check_session_iframe": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/checksession",
+ "userinfo_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/openid/userinfo",
+ "tenant_region_scope": "WW",
+ "cloud_instance_name": "microsoftonline.com",
+ "cloud_graph_host_name": "graph.windows.net",
+ "msgraph_host": "graph.microsoft.com",
+ "rbac_url": "https://pas.windows.net"
+}
+```
+The important properties for the FHIR server are `jwks_uri`, which tells the server where to fetch the keys needed to validate the token signature, and `issuer`, which tells the server what will be in the issuer claim (`iss`) of tokens issued by this identity provider. The FHIR server can use these to validate that it's receiving an authentic token.
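+
+As a minimal sketch (`<TENANT-ID>` is a placeholder), you can fetch the discovery document yourself and inspect these two properties:
+
+```powershell
+# Fetch the OpenID Connect discovery document for the tenant.
+$config = Invoke-RestMethod -Uri "https://login.microsoftonline.com/<TENANT-ID>/.well-known/openid-configuration"
+
+$config.jwks_uri   # where the token signing keys are published
+$config.issuer     # expected value of the token's iss claim
+```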
+
+## Validate claims of the token
+
+Once the server has verified the authenticity of the token, the FHIR server will then proceed to validate that the client has the claims required to access the FHIR server.
+
+When using the Azure API for FHIR, the server will validate:
+
+1. The token has the right `Audience` (`aud` claim).
+1. The user or principal that the token was issued for is allowed to access the FHIR server data plane. The `oid` claim of the token contains an identity object ID, which uniquely identifies the user or principal.
+
+We recommend that the FHIR service be [configured to use Azure RBAC](configure-azure-rbac.md) to manage data plane role assignments. But you can also [configure local RBAC](configure-local-rbac.md) if your FHIR service uses an external or secondary Azure Active Directory tenant.
+
+When using the OSS Microsoft FHIR server for Azure, the server will validate:
+
+1. The token has the right `Audience` (`aud` claim).
+1. The token has a role in the `roles` claim, which is allowed access to the FHIR server.
+
+For details, see how to [define roles on the FHIR server](https://github.com/microsoft/fhir-server/blob/master/docs/Roles.md).
+
+A FHIR server may also validate that an access token has the scopes (in token claim `scp`) to access the part of the FHIR API that a client is trying to access. Currently, the Azure API for FHIR and the FHIR server for Azure do not validate token scopes.
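+
+As a minimal sketch (the token is a placeholder and no signature check is performed), you can decode a token's payload and inspect the claims described above:
+
+```powershell
+# Decode the payload segment of a JWT and look at the validated claims.
+$token = "<ACCESS-TOKEN>"
+$seg = $token.Split('.')[1].Replace('-', '+').Replace('_', '/')
+switch ($seg.Length % 4) { 2 { $seg += '==' } 3 { $seg += '=' } }
+$claims = [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($seg)) | ConvertFrom-Json
+
+$claims.aud    # should match your FHIR server's audience
+$claims.oid    # object ID that needs a data plane role assignment
+$claims.roles  # roles checked by the OSS FHIR server for Azure
+```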
+
+## Next steps
+Now that you know how to walk through token validation, you can complete the tutorial to create a JavaScript application and read FHIR data.
+
+>[!div class="nextstepaction"]
+>[Web application tutorial](tutorial-web-app-fhir-server.md)
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
+
+ Title: 'Quickstart: Deploy Azure API for FHIR using an ARM template'
+description: In this quickstart, learn how to deploy Azure API for Fast Healthcare Interoperability Resources (FHIR®), by using an Azure Resource Manager template (ARM template).
++++++ Last updated : 08/05/2021++
+# Quickstart: Use an ARM template to deploy Azure API for FHIR
+
+In this quickstart, you'll learn how to use an Azure Resource Manager template (ARM template) to deploy Azure API for Fast Healthcare Interoperability Resources (FHIR®). You can deploy Azure API for FHIR through the Azure portal, PowerShell, or CLI.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal once you sign in.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure an Azure API for FHIR service using an ARM template in the Azure portal.":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.healthcareapis%2fazure-api-for-fhir%2fazuredeploy.json)
+
+## Prerequisites
+
+# [Portal](#tab/azure-portal)
+
+An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+# [PowerShell](#tab/PowerShell)
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally, [Azure PowerShell](/powershell/azure/install-az-ps).
+
+# [CLI](#tab/CLI)
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally:
+ * A Bash shell (such as Git Bash, which is included in [Git for Windows](https://gitforwindows.org)).
+ * [Azure CLI](/cli/azure/install-azure-cli).
+++
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-api-for-fhir/).
++
+The template defines one Azure resource:
+
+* **Microsoft.HealthcareApis/services**
+
+<!--
+
+Replace the line above with the following line once https://docs.microsoft.com/azure/templates/microsoft.healthcareapis/services goes live:
+
+* [**Microsoft.HealthcareApis/services**](/azure/templates/microsoft.healthcareapis/services)
+
+-->
+
+## Deploy the template
+
+# [Portal](#tab/azure-portal)
+
+Select the following link to deploy the Azure API for FHIR using the ARM template in the Azure portal:
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure an Azure API for FHIR service using the ARM template in the Azure portal.":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.healthcareapis%2fazure-api-for-fhir%2fazuredeploy.json)
+
+On the **Deploy Azure API for FHIR** page:
+
+1. If you want, change the **Subscription** from the default to a different subscription.
+
+2. For **Resource group**, select **Create new**, enter a name for the new resource group, and select **OK**.
+
+3. If you created a new resource group, select a **Region** for the resource group.
+
+4. Enter a new **Service Name** and choose the **Location** of the Azure API for FHIR. The location can be the same as or different from the region of the resource group.
+
+ [ ![Deploy Azure API for FHIR using the ARM template in the Azure portal.](media/fhir-resource-manager-template/deploy-azure-api-fhir.png) ](media/fhir-resource-manager-template/deploy-azure-api-fhir.png#lightbox)
+
+5. Select **Review + create**.
+
+6. Read the terms and conditions, and then select **Create**.
+
+# [PowerShell](#tab/PowerShell)
+
+> [!NOTE]
+> If you want to run the PowerShell scripts locally, first enter `Connect-AzAccount` to set up your Azure credentials.
+
+If the `Microsoft.HealthcareApis` resource provider isn't already registered for your subscription, you can register it with the following interactive code. To run the code in Azure Cloud Shell, select **Try it** at the upper corner of any code block.
+
+```azurepowershell-interactive
+Register-AzResourceProvider -ProviderNamespace Microsoft.HealthcareApis
+```
+
+Use the following code to deploy the Azure API for FHIR service using the ARM template. The code prompts you for the new FHIR service name, the name of a new resource group, and the locations for each of them.
+
+```azurepowershell-interactive
+$serviceName = Read-Host -Prompt "Enter a name for the new Azure API for FHIR service"
+$serviceLocation = Read-Host -Prompt "Enter an Azure region (for example, westus2) for the service"
+$resourceGroupName = Read-Host -Prompt "Enter a name for the new resource group to contain the service"
+$resourceGroupRegion = Read-Host -Prompt "Enter an Azure region (for example, centralus) for the resource group"
+
+Write-Verbose "New-AzResourceGroup -Name $resourceGroupName -Location $resourceGroupRegion" -Verbose
+New-AzResourceGroup -Name $resourceGroupName -Location $resourceGroupRegion
+Write-Verbose "Run New-AzResourceGroupDeployment to create an Azure API for FHIR service using an ARM template" -Verbose
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName `
+ -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/azure-api-for-fhir/azuredeploy.json `
+ -serviceName $serviceName `
+ -location $serviceLocation
+Read-Host "Press [ENTER] to continue"
+```
+
+# [CLI](#tab/CLI)
+
+To work with Azure API for FHIR resources from the CLI, add the `healthcareapis` extension with the following interactive code. To run the code in Azure Cloud Shell, select **Try it** at the upper corner of any code block.
+
+```azurecli-interactive
+az extension add --name healthcareapis
+```
+
+Use the following code to deploy the Azure API for FHIR service using the ARM template. The code prompts you for the new FHIR service name, the name of a new resource group, and the locations for each of them.
+
+```azurecli-interactive
+read -p "Enter a name for the new Azure API for FHIR service: " serviceName &&
+read -p "Enter an Azure region (for example, westus2) for the service: " serviceLocation &&
+read -p "Enter a name for the new resource group to contain the service: " resourceGroupName &&
+read -p "Enter an Azure region (for example, centralus) for the resource group: " resourceGroupRegion &&
+params='serviceName='$serviceName' location='$serviceLocation &&
+echo "CREATE RESOURCE GROUP: az group create --name $resourceGroupName --location $resourceGroupRegion" &&
+az group create --name $resourceGroupName --location $resourceGroupRegion &&
+echo "RUN az deployment group create, which creates an Azure API for FHIR service using an ARM template" &&
+az deployment group create --resource-group $resourceGroupName --parameters $params --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/azure-api-for-fhir/azuredeploy.json &&
+read -p "Press [ENTER] to continue: "
+```
+++
+> [!NOTE]
+> The deployment takes a few minutes to complete. Note the names for the Azure API for FHIR service and the resource group, which you use to review the deployed resources later.
+
+## Review deployed resources
+
+# [Portal](#tab/azure-portal)
+
+Follow these steps to see an overview of your new Azure API for FHIR service:
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Azure API for FHIR**.
+
+2. In the FHIR list, select your new service. The **Overview** page for the new Azure API for FHIR service appears.
+
+3. To validate that the new FHIR API account is provisioned, select the link next to **FHIR metadata endpoint** to fetch the FHIR API capability statement. The link has a format of `https://<service-name>.azurehealthcareapis.com/metadata`. If the account is provisioned, a large JSON file is displayed.
+
+# [PowerShell](#tab/PowerShell)
+
+Run the following interactive code to view details about your Azure API for FHIR service. You'll have to enter the name of the new service and the resource group.
+
+```azurepowershell-interactive
+$serviceName = Read-Host -Prompt "Enter the name of your Azure API for FHIR service"
+$resourceGroupName = Read-Host -Prompt "Enter the resource group name"
+Write-Verbose "Get-AzHealthcareApisService -ResourceGroupName $resourceGroupName -Name $serviceName" -Verbose
+Get-AzHealthcareApisService -ResourceGroupName $resourceGroupName -Name $serviceName
+Read-Host "Press [ENTER] to fetch the FHIR API capability statement, which shows that the new service has been provisioned"
+
+$requestUri="https://" + $serviceName + ".azurehealthcareapis.com/metadata"
+$metadata = Invoke-WebRequest -Uri $requestUri
+$metadata.RawContent
+Read-Host "Press [ENTER] to continue"
+```
+
+# [CLI](#tab/CLI)
+
+Run the following interactive code to view details about your Azure API for FHIR service. You'll have to enter the name of the new service and the resource group.
+
+```azurecli-interactive
+read -p "Enter the name of your Azure API for FHIR service: " serviceName &&
+read -p "Enter the resource group name: " resourceGroupName &&
+echo "SHOW SERVICE DETAILS: az healthcareapis service show --resource-group $resourceGroupName --resource-name $serviceName" &&
+az healthcareapis service show --resource-group $resourceGroupName --resource-name $serviceName &&
+read -p "Press [ENTER] to fetch the FHIR API capability statement, which shows that the new service has been provisioned: " &&
+requestUrl='https://'$serviceName'.azurehealthcareapis.com/metadata' &&
+curl --url $requestUrl &&
+read -p "Press [ENTER] to continue: "
+```
+++
+## Clean up resources
+
+When it's no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+# [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
+
+2. In the resource group list, choose the name of your resource group.
+
+3. In the **Overview** page of your resource group, select **Delete resource group**.
+
+4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the name of the resource group to delete"
+Write-Verbose "Remove-AzResourceGroup -Name $resourceGroupName" -Verbose
+Remove-AzResourceGroup -Name $resourceGroupName
+Read-Host "Press [ENTER] to continue"
+```
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+read -p "Enter the name of the resource group to delete: " resourceGroupName &&
+echo "DELETE A RESOURCE GROUP (AND ITS RESOURCES): az group delete --name $resourceGroupName" &&
+az group delete --name $resourceGroupName &&
+read -p "Press [ENTER] to continue: "
+```
+++
+For a step-by-step tutorial that guides you through the process of creating an ARM template, see the [tutorial to create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md).
+
+## Next steps
+
+In this quickstart guide, you've deployed the Azure API for FHIR into your subscription. To configure additional settings in your Azure API for FHIR, proceed to the additional settings how-to guide. If you're ready to start using the Azure API for FHIR, read more about how to register applications.
+
+>[!div class="nextstepaction"]
+>[Additional settings in Azure API for FHIR](azure-api-for-fhir-additional-settings.md)
+
+>[!div class="nextstepaction"]
+>[Register Applications Overview](fhir-app-registration.md)
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/centers-for-medicare-tutorial-introduction.md
+
+ Title: Tutorial - Centers for Medicare and Medicaid Services (CMS) introduction - Azure API for FHIR
+description: This overview introduces a series of tutorials that pertain to the Centers for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule.
+++++++ Last updated : 08/05/2021++
+# Centers for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule introduction
+
+In this series of tutorials, we'll cover a high-level summary of the Centers for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule and the technical requirements outlined in this rule. We'll walk through the various implementation guides referenced for this rule. We'll also provide details on how to configure the Azure API for FHIR to support these implementation guides.
++
+## Rule overview
+
+The CMS released the [Interoperability and Patient Access rule](https://www.cms.gov/Regulations-and-Guidance/Guidance/Interoperability/index) on May 1, 2020. This rule requires free and secure data flow between all parties involved in patient care (patients, providers, and payers) to allow patients to access their health information when they need it. A lack of interoperability has plagued the healthcare industry for decades, resulting in siloed data that causes negative health outcomes with higher and unpredictable costs for care. CMS is using its authority to regulate Medicare Advantage (MA), Medicaid, Children's Health Insurance Program (CHIP), and Qualified Health Plan (QHP) issuers on the Federally Facilitated Exchanges (FFEs) to enforce this rule.
+
+In August 2020, CMS detailed how organizations can meet the mandate. To ensure that data can be exchanged securely and in a standardized manner, CMS identified FHIR version release 4 (R4) as the foundational standard required for the data exchange.
+
+There are three main pieces to the Interoperability and Patient Access ruling:
+
+* **Patient Access API (Required July 1, 2021)** – CMS-regulated payers (as defined above) are required to implement and maintain a secure, standards-based API that allows patients to easily access their claims and encounter information, including cost, as well as a defined subset of their clinical information through third-party applications of their choice.
+
+* **Provider Directory API (Required July 1, 2021)** – CMS-regulated payers are required by this portion of the rule to make provider directory information publicly available via a standards-based API. By making this information available, third-party application developers will be able to create services that help patients find providers for specific care needs and help clinicians find other providers for care coordination.
+
+* **Payer-to-Payer Data Exchange (Required January 1, 2022)** – CMS-regulated payers are required to exchange certain patient clinical data at the patient's request with other payers. While there's no requirement to follow any kind of standard, applying FHIR to exchange this data is encouraged.
+
+## Key FHIR concepts
+
+As mentioned above, FHIR R4 is required to meet this mandate. In addition, there have been several implementation guides developed that provide guidance for the rule. [Implementation guides](https://www.hl7.org/fhir/implementationguide.html) provide extra context on top of the base FHIR specification. This includes defining additional search parameters, profiles, extensions, operations, value sets, and code systems.
+
+The Azure API for FHIR has the following capabilities to help you configure your database for the various implementation guides:
+
+* [Support for RESTful interactions](fhir-features-supported.md)
+* [Storing and validating profiles](validation-against-profiles.md)
+* [Defining and indexing custom search parameters](how-to-do-custom-search.md)
+* [Converting data](convert-data.md)
+
+## Patient Access API Implementation Guides
+
+The Patient Access API describes adherence to four FHIR implementation guides:
+
+* [CARIN IG for Blue Button®](http://hl7.org/fhir/us/carin-bb/STU1/index.html): Payers are required to make patients' claims and encounters data available according to the CARIN IG for Blue Button Implementation Guide (C4BB IG). The C4BB IG provides a set of resources that payers can display to consumers via a FHIR API and includes the details required for claims data in the Interoperability and Patient Access API. This implementation guide uses the ExplanationOfBenefit (EOB) Resource as the main resource, pulling in other resources as they are referenced.
+* [HL7 FHIR Da Vinci PDex IG](http://hl7.org/fhir/us/davinci-pdex/STU1/index.html): The Payer Data Exchange Implementation Guide (PDex IG) is focused on ensuring that payers provide all relevant patient clinical data to meet the requirements for the Patient Access API. This uses the US Core profiles on R4 Resources and includes (at a minimum) encounters, providers, organizations, locations, dates of service, diagnoses, procedures, and observations. While this data may be available in FHIR format, it may also come from other systems in the format of claims data, HL7 V2 messages, and C-CDA documents.
+* [HL7 US Core IG](https://www.hl7.org/fhir/us/core/toc.html): The HL7 US Core Implementation Guide (US Core IG) is the backbone for the PDex IG described above. While the PDex IG limits some resources even further than the US Core IG, many resources just follow the standards in the US Core IG.
+
+* [HL7 FHIR Da Vinci - PDex US Drug Formulary IG](http://hl7.org/fhir/us/Davinci-drug-formulary/index.html): Part D Medicare Advantage plans have to make formulary information available via the Patient Access API. They do this using the PDex US Drug Formulary Implementation Guide (USDF IG). The USDF IG defines a FHIR interface to a health insurer's drug formulary information, which is a list of brand-name and generic prescription drugs that a health insurer agrees to pay for. The main use case is to let patients understand whether alternative drugs are available to the one that has been prescribed to them and to compare drug costs.
+
+## Provider Directory API Implementation Guide
+
+The Provider Directory API describes adherence to one implementation guide:
+
+* [HL7 Da Vinci PDex Plan Network IG](http://build.fhir.org/ig/HL7/davinci-pdex-plan-net/): This implementation guide defines a FHIR interface to a health insurer's insurance plans, their associated networks, and the organizations and providers that participate in these networks.
+
+## Touchstone
+
+To test adherence to the various implementation guides, [Touchstone](https://touchstone.aegis.net/touchstone/) is a great resource. Throughout the upcoming tutorials, we'll focus on ensuring that the Azure API for FHIR is configured to successfully pass various Touchstone tests. The Touchstone site has a lot of great documentation to help you get up and running.
+
+## Next steps
+
+Now that you have a basic understanding of the Interoperability and Patient Access rule, the implementation guides, and the available testing tool (Touchstone), we'll walk through setting up the Azure API for FHIR for the CARIN IG for Blue Button.
+
+>[!div class="nextstepaction"]
+>[CARIN Implementation Guide for Blue Button](../fhir/carin-implementation-guide-blue-button-tutorial.md)
healthcare-apis Fhir Paas Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-portal-quickstart.md
Previously updated : 03/15/2020 Last updated : 08/05/2021
Select **Create** to create a new Azure API for FHIR account:
Select an existing resource group or create a new one, choose a name for the account, and finally click **Review + create**: Confirm creation and await FHIR API deployment.
healthcare-apis Iot Azure Resource Manager Template Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/iot-azure-resource-manager-template-quickstart.md
+
+ Title: 'Quickstart: Deploy Azure IoT Connector for FHIR (preview) using an ARM template'
+description: In this quickstart, learn how to deploy Azure IoT Connector for FHIR (preview) by using an Azure Resource Manager template (ARM template).
+++++ Last updated : 08/05/2021 +++
+# Quickstart: Use an Azure Resource Manager (ARM) template to deploy Azure IoT Connector for FHIR (preview)
+
+In this quickstart, you'll learn how to use an Azure Resource Manager template (ARM template) to deploy Azure IoT Connector for Fast Healthcare Interoperability Resources (FHIR&#174;)*, a feature of Azure API for FHIR. To deploy a working instance of Azure IoT Connector for FHIR, this template also deploys a parent Azure API for FHIR service and an Azure IoT Central application that exports telemetry from a device simulator to Azure IoT Connector for FHIR. You can execute the ARM template to deploy Azure IoT Connector for FHIR through the Azure portal, PowerShell, or CLI.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal once you sign in.
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure an Azure IoT Connector for FHIR using an ARM template in the Azure portal.":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fmicrosoft%2Fiomt-fhir%2Fmaster%2Fdeploy%2Ftemplates%2Fmanaged%2Fazuredeploy.json)
+
+## Prerequisites
+
+# [Portal](#tab/azure-portal)
+
+An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+# [PowerShell](#tab/PowerShell)
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally, [Azure PowerShell](/powershell/azure/install-az-ps).
+
+# [CLI](#tab/CLI)
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally:
+ * A Bash shell (such as Git Bash, which is included in [Git for Windows](https://gitforwindows.org)).
+ * [Azure CLI](/cli/azure/install-azure-cli).
+++
+## Review the template
+
+The [template](https://raw.githubusercontent.com/microsoft/iomt-fhir/master/deploy/templates/managed/azuredeploy.json) defines following Azure resources:
+
+* **Microsoft.HealthcareApis/services**
+* **Microsoft.HealthcareApis/services/iomtconnectors**
+* **Microsoft.IoTCentral/IoTApps**
+
+## Deploy the template
+
+# [Portal](#tab/azure-portal)
+
+Select the following link to deploy the Azure IoT Connector for FHIR using the ARM template in the Azure portal:
+
+[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure an Azure IoT Connector for FHIR service using the ARM template in the Azure portal.":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fmicrosoft%2Fiomt-fhir%2Fmaster%2Fdeploy%2Ftemplates%2Fmanaged%2Fazuredeploy.json)
+
+On the **Deploy Azure API for FHIR** page:
+
+1. If you want, change the **Subscription** from the default to a different subscription.
+
+2. For **Resource group**, select **Create new**, enter a name for the new resource group, and select **OK**.
+
+3. If you created a new resource group, select a **Region** for the resource group.
+
+4. Enter a name for your new Azure API for FHIR instance in **FHIR Service Name**.
+
+5. Choose the **Location** for your Azure API for FHIR. The location can be the same as or different from the region of the resource group.
+
+6. Provide a name for your Azure IoT Connector for FHIR instance in **Iot Connector Name**.
+
+7. Provide a name for a connection created within Azure IoT Connector for FHIR in **Connection Name**. This connection is used by the Azure IoT Central application to push simulated device telemetry into Azure IoT Connector for FHIR.
+
+8. Enter a name for your new Azure IoT Central application in **Iot Central Name**. This application will use the *Continuous patient monitoring* template to simulate a device.
+
+9. Choose the location of your IoT Central application from the **IoT Central Location** drop-down.
+
+10. Select **Review + create**.
+
+11. Read the terms and conditions, and then select **Create**.
+
+# [PowerShell](#tab/PowerShell)
+
+> [!NOTE]
+> If you want to run the PowerShell scripts locally, first enter `Connect-AzAccount` to set up your Azure credentials.
+
+If the `Microsoft.HealthcareApis` resource provider isn't already registered for your subscription, you can register it with the following interactive code. To run the code in Azure Cloud Shell, select **Try it** at the upper corner of any code block.
+
+```azurepowershell-interactive
+Register-AzResourceProvider -ProviderNamespace Microsoft.HealthcareApis
+```
+
+If the `Microsoft.IoTCentral` resource provider isn't already registered for your subscription, you can register it with the following interactive code. To run the code in Azure Cloud Shell, select **Try it** at the upper corner of any code block.
+
+```azurepowershell-interactive
+Register-AzResourceProvider -ProviderNamespace Microsoft.IoTCentral
+```
+
+Use the following code to deploy the Azure IoT Connector for FHIR service using the ARM template.
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter a name for the new resource group to contain the service"
+$resourceGroupRegion = Read-Host -Prompt "Enter an Azure region (for example, centralus) for the resource group"
+$fhirServiceName = Read-Host -Prompt "Enter a name for the new Azure API for FHIR service"
+$location = Read-Host -Prompt "Enter an Azure region (for example, westus2) for the service"
+$iotConnectorName = Read-Host -Prompt "Enter a name for the new Azure IoT Connector for FHIR"
+$connectionName = Read-Host -Prompt "Enter a name for the connection with Azure IoT Connector for FHIR"
+$iotCentralName = Read-Host -Prompt "Enter a name for the new Azure IoT Central Application"
+$iotCentralLocation = Read-Host -Prompt "Enter a location for the new Azure IoT Central Application"
+
+Write-Verbose "New-AzResourceGroup -Name $resourceGroupName -Location $resourceGroupRegion" -Verbose
+New-AzResourceGroup -Name $resourceGroupName -Location $resourceGroupRegion
+Write-Verbose "Run New-AzResourceGroupDeployment to create an Azure IoT Connector for FHIR service using an ARM template" -Verbose
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName `
+    -TemplateUri https://raw.githubusercontent.com/microsoft/iomt-fhir/master/deploy/templates/managed/azuredeploy.json `
+    -fhirServiceName $fhirServiceName `
+    -location $location `
+    -iotConnectorName $iotConnectorName `
+    -connectionName $connectionName `
+    -iotCentralName $iotCentralName `
+    -iotCentralLocation $iotCentralLocation
+Read-Host "Press [ENTER] to continue"
+```
+
+# [CLI](#tab/CLI)
+
+To work with Azure API for FHIR resources from the CLI, add the `healthcareapis` extension with the following interactive code. To run the code in Azure Cloud Shell, select **Try it** at the upper corner of any code block.
+
+```azurecli-interactive
+az extension add --name healthcareapis
+```
+
+If you don't already have the `azure-iot` CLI extension, add it with the following interactive code.
+
+```azurecli-interactive
+az extension add --name azure-iot
+```
+
+Use the following code to deploy the Azure IoT Connector for FHIR service using the ARM template.
+
+```azurecli-interactive
+read -p "Enter a name for the new resource group to contain the service: " resourceGroupName &&
+read -p "Enter an Azure region (for example, centralus) for the resource group: " resourceGroupRegion &&
+read -p "Enter a name for the new Azure API for FHIR service: " fhirServiceName &&
+read -p "Enter an Azure region (for example, westus2) for the service: " location &&
+read -p "Enter a name for the new Azure IoT Connector for FHIR: " iotConnectorName &&
+read -p "Enter a name for the connection with Azure IoT Connector for FHIR: " connectionName &&
+read -p "Enter a name for the new Azure IoT Central Application: " iotCentralName &&
+read -p "Enter a location for the new Azure IoT Central Application: " iotCentralLocation &&
+
+params='fhirServiceName='$fhirServiceName' location='$location' iotConnectorName='$iotConnectorName' connectionName='$connectionName' iotCentralName='$iotCentralName' iotCentralLocation='$iotCentralLocation &&
+echo "CREATE RESOURCE GROUP: az group create --name $resourceGroupName --location $resourceGroupRegion" &&
+az group create --name $resourceGroupName --location $resourceGroupRegion &&
+echo "RUN az deployment group create, which creates an Azure IoT Connector for FHIR service using an ARM template" &&
+az deployment group create --resource-group $resourceGroupName --parameters $params --template-uri https://raw.githubusercontent.com/microsoft/iomt-fhir/master/deploy/templates/managed/azuredeploy.json &&
+read -p "Press [ENTER] to continue: "
+```
+++
+> [!NOTE]
+> The deployment takes a few minutes to complete. Note the names for the Azure API for FHIR service, Azure IoT Central application, and the resource group, which you use to review the deployed resources later.
+
+## Review deployed resources
+
+# [Portal](#tab/azure-portal)
+
+Follow these steps to see an overview of your new Azure API for FHIR service:
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Azure API for FHIR**.
+
+2. In the FHIR list, select your new service. The **Overview** page for the new Azure API for FHIR service appears.
+
+3. To validate that the new FHIR API account is provisioned, select the link next to **FHIR metadata endpoint** to fetch the FHIR API capability statement. The link has a format of `https://<service-name>.azurehealthcareapis.com/metadata`. If the account is provisioned, a large JSON file is displayed.
+
+4. To validate that the new Azure IoT Connector for FHIR is provisioned, select **IoT Connector (preview)** from the left navigation menu to open the **IoT Connectors** page. The page should show the provisioned Azure IoT Connector for FHIR with a *Status* value of *Online*, a *Connections* value of *1*, and both *Device mapping* and *FHIR mapping* showing a *Success* icon.
+
+5. In the [Azure portal](https://portal.azure.com), search for and select **IoT Central Applications**.
+
+6. In the list of IoT Central Applications, select your new service. The **Overview** page for the new IoT Central application appears.
+
+# [PowerShell](#tab/PowerShell)
+
+Run the following interactive code to view details about your Azure API for FHIR service. You'll have to enter the name of the new FHIR service and the resource group.
+
+```azurepowershell-interactive
+$fhirServiceName = Read-Host -Prompt "Enter the name of your Azure API for FHIR service"
+$resourceGroupName = Read-Host -Prompt "Enter the resource group name"
+Write-Verbose "Get-AzHealthcareApisService -ResourceGroupName $resourceGroupName -Name $serviceName" -Verbose
+Get-AzHealthcareApisService -ResourceGroupName $resourceGroupName -Name $serviceName
+Read-Host "Press [ENTER] to fetch the FHIR API capability statement, which shows that the new FHIR service has been provisioned"
+
+$requestUri="https://" + $fhirServiceName + ".azurehealthcareapis.com/metadata"
+$metadata = Invoke-WebRequest -Uri $requestUri
+$metadata.RawContent
+Read-Host "Press [ENTER] to continue"
+```
+
+> [!NOTE]
+> Azure IoT Connector for FHIR doesn't provide PowerShell commands at this time. To validate your Azure IoT Connector for FHIR has been provisioned correctly, use the validation process provided in the **Portal** tab.
+
+Run the following interactive code to view details about your Azure IoT Central application. You'll have to enter the name of the new IoT Central application and the resource group.
+
+```azurepowershell-interactive
+$iotCentralName = Read-Host -Prompt "Enter the name of your Azure IoT Central application"
+$resourceGroupName = Read-Host -Prompt "Enter the resource group name"
+Write-Verbose "Get-AzIotCentralApp -ResourceGroupName $resourceGroupName -Name $iotCentralName" -Verbose
+Get-AzIotCentralApp -ResourceGroupName $resourceGroupName -Name $iotCentralName
+```
+
+# [CLI](#tab/CLI)
+
+Run the following interactive code to view details about your Azure API for FHIR service. You'll have to enter the name of the new FHIR service and the resource group.
+
+```azurecli-interactive
+read -p "Enter the name of your Azure API for FHIR service: " fhirServiceName &&
+read -p "Enter the resource group name: " resourceGroupName &&
+echo "SHOW SERVICE DETAILS: az healthcareapis service show --resource-group $resourceGroupName --resource-name $fhirServiceName" &&
+az healthcareapis service show --resource-group $resourceGroupName --resource-name $fhirServiceName &&
+read -p "Press [ENTER] to fetch the FHIR API capability statement, which shows that the new service has been provisioned: " &&
+requestUrl='https://'$fhirServiceName'.azurehealthcareapis.com/metadata' &&
+curl --url $requestUrl &&
+read -p "Press [ENTER] to continue: "
+```
+
+> [!NOTE]
+> Azure IoT Connector for FHIR doesn't provide CLI commands at this time. To validate your Azure IoT Connector for FHIR has been provisioned correctly, use the validation process provided in the **Portal** tab.
+
+Run the following interactive code to view details about your Azure IoT Central application. You'll have to enter the name of the new IoT Central application and the resource group.
+
+```azurecli-interactive
+read -p "Enter the name of your IoT Central application: " iotCentralName &&
+read -p "Enter the resource group name: " resourceGroupName &&
+echo "SHOW SERVICE DETAILS: az iot central app show -g $resourceGroupName -n $iotCentralName" &&
+az iot central app show -g $resourceGroupName -n $iotCentralName
+```
+++
+## Connect your IoT data with the Azure IoT Connector for FHIR (preview)
+> [!IMPORTANT]
+> The Device mapping template provided in this guide is designed to work with Data Export (legacy) within IoT Central.
+
+The IoT Central application currently doesn't provide an ARM template or PowerShell and CLI commands to set up data export, so follow the instructions below using the Azure portal.
+
+Once you've deployed your IoT Central application, your two out-of-the-box simulated devices will start generating telemetry. For this tutorial, we'll ingest the telemetry from *Smart Vitals Patch* simulator into FHIR via the Azure IoT Connector for FHIR. To export your IoT data to the Azure IoT Connector for FHIR, we'll want to [set up a Data export (legacy) within IoT Central](../../iot-central/core/howto-export-data-legacy.md). On the Data export (legacy) page:
+- Pick *Azure Event Hubs* as the export destination.
+- Select the *Use a connection string* value for the **Event Hubs namespace** field.
+- Provide the Azure IoT Connector for FHIR connection string obtained in a previous step for the **Connection String** field.
+- Keep the **Telemetry** option *On* for the **Data to Export** field.
+++
+## View device data in Azure API for FHIR
+
+You can view the FHIR-based Observation resource(s) created by Azure IoT Connector for FHIR on your FHIR server using Postman. Set up your [Postman to access Azure API for FHIR](access-fhir-postman-tutorial.md) and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with heart rate value.
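+
+As a minimal sketch (`$accessToken` and the server URL are placeholders), the same heart rate search can be issued from PowerShell instead of Postman:
+
+```powershell
+# Search for heart rate (LOINC 8867-4) Observation resources on the FHIR server.
+$accessToken = "<ACCESS-TOKEN>"
+$query = 'https://<your-fhir-server>.azurehealthcareapis.com/Observation?code=http://loinc.org|8867-4'
+Invoke-RestMethod -Uri $query -Headers @{ Authorization = "Bearer $accessToken" }
+```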
+
+> [!TIP]
+> Ensure that your user has appropriate access to Azure API for FHIR data plane. Use [Azure role-based access control (Azure RBAC)](configure-azure-rbac.md) to assign required data plane roles.
+++
+## Clean up resources
+
+When it's no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+# [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
+
+2. In the resource group list, choose the name of your resource group.
+
+3. In the **Overview** page of your resource group, select **Delete resource group**.
+
+4. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the name of the resource group to delete"
+Write-Verbose "Remove-AzResourceGroup -Name $resourceGroupName" -Verbose
+Remove-AzResourceGroup -Name $resourceGroupName
+Read-Host "Press [ENTER] to continue"
+```
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+read -p "Enter the name of the resource group to delete: " resourceGroupName &&
+echo "DELETE A RESOURCE GROUP (AND ITS RESOURCES): az group delete --name $resourceGroupName" &&
+az group delete --name $resourceGroupName &&
+read -p "Press [ENTER] to continue: "
+```
+++
+For a step-by-step tutorial that guides you through the process of creating an ARM template, see the [tutorial to create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md).
+
+## Next steps
+
+In this quickstart guide, you've deployed the Azure IoT Connector for FHIR feature in your Azure API for FHIR resource. Select one of the next steps below to learn more about Azure IoT Connector for FHIR:
+
+Understand different stages of data flow within Azure IoT Connector for FHIR.
+
+>[!div class="nextstepaction"]
+>[Azure IoT Connector for FHIR data flow](iot-data-flow.md)
+
+Learn how to configure IoT Connector using device and FHIR mapping templates.
+
+>[!div class="nextstepaction"]
+>[Azure IoT Connector for FHIR mapping templates](iot-mapping-templates.md)
+
+*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
healthcare-apis Register Confidential Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/register-confidential-azure-ad-client-app.md
Previously updated : 04/08/2021 Last updated : 08/05/2021
To register a new confidential client application, refer to the steps below.
1. Select **App registrations**.
- :::image type="content" source="media/how-to-aad/portal-aad-new-app-registration.png" alt-text="Azure portal. New App Registration.":::
+ :::image type="content" source="media/add-azure-active-directory/portal-aad-new-app-registration.png" alt-text="Azure portal. New App Registration.":::
1. Select **New registration**.
To register a new confidential client application, refer to the steps below.
1. (Optional) Provide a **Redirect URI**. These details can be changed later, but if you know the reply URL of your application, enter it now.
- :::image type="content" source="media/how-to-aad/portal-aad-register-new-app-registration-CONF-CLIENT.png" alt-text="New Confidential Client App Registration.":::
+ :::image type="content" source="media/add-azure-active-directory/portal-aad-register-new-app-registration-confidential-client.png" alt-text="New Confidential Client App Registration.":::
1. Select **Register**.
Now that you've registered your application, you must select which API permissio
1. Select **API permissions**.
- :::image type="content" source="media/how-to-aad/portal-aad-register-new-app-registration-CONF-CLIENT-API-Permissions.png" alt-text="Confidential client. API Permissions.":::
+ :::image type="content" source="media/add-azure-active-directory/portal-aad-register-new-app-registration-confidential-client-api-permissions.png" alt-text="Confidential client. API Permissions.":::
1. Select **Add a permission**.
Now that you've registered your application, you must select which API permissio
If you're referencing a different resource application, select your [FHIR API Resource Application Registration](register-resource-azure-ad-client-app.md) that you created previously under **My APIs**.
- :::image type="content" source="media/conf-client-app/confidential-client-org-api.png" alt-text="Confidential client. My Org APIs" lightbox="media/conf-client-app/confidential-app-org-api-expanded.png":::
+ :::image type="content" source="media/confidential-client-application/confidential-client-org-api.png" alt-text="Confidential client. My Org APIs" lightbox="media/confidential-client-application/confidential-app-org-api-expanded.png":::
1. Select scopes (permissions) that the confidential client application will ask for on behalf of a user. Select **user_impersonation**, and then select **Add permissions**.
- :::image type="content" source="media/conf-client-app/confidential-client-add-permission.png" alt-text="Confidential client. Delegated Permissions":::
+ :::image type="content" source="media/confidential-client-application/confidential-client-add-permission.png" alt-text="Confidential client. Delegated Permissions":::
## Application secret 1. Select **Certificates & secrets**, and then select **New client secret**.
- :::image type="content" source="media/how-to-aad/portal-aad-register-new-app-registration-CONF-CLIENT-SECRET.png" alt-text="Confidential client. Application Secret.":::
+ :::image type="content" source="media/add-azure-active-directory/portal-aad-register-new-app-registration-confidential-client-secret.png" alt-text="Confidential client. Application Secret.":::
1. Enter a **Description** for the client secret. Select the **Expires** drop-down menu to choose an expiration time frame, and then click **Add**.
- :::image type="content" source="media/how-to-aad/add-a-client-secret.png" alt-text="Add a client secret.":::
+ :::image type="content" source="media/add-azure-active-directory/add-a-client-secret.png" alt-text="Add a client secret.":::
1. After the client secret string is created, copy its **Value** and **ID**, and store them in a secure location of your choice.
- :::image type="content" source="media/how-to-aad/client-secret-string-password.png" alt-text="Client secret string.":::
+ :::image type="content" source="media/add-azure-active-directory/client-secret-string-password.png" alt-text="Client secret string.":::
> [!NOTE] >The client secret string is visible only once in the Azure portal. When you navigate away from the Certificates & secrets web page and then return back to it, the Value string becomes masked. It's important to make a copy of your client secret string immediately after it is generated. If you don't have a backup copy of your client secret, you must repeat the above steps to regenerate it.
healthcare-apis Register Public Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app.md
Previously updated : 02/07/2019 Last updated : 08/05/2021
The quickstart provides general information about how to [register an applicatio
2. In the **Azure Active Directory** blade, click **App registrations**:
- ![Azure portal. New App Registration.](media/how-to-aad/portal-aad-new-app-registration.png)
+ ![Azure portal. New App Registration.](media/add-azure-active-directory/portal-aad-new-app-registration.png)
3. Click the **New registration**.
The quickstart provides general information about how to [register an applicatio
2. Provide a reply URL. The reply URL is where authentication codes will be returned to the client application. You can add more reply URLs and edit existing ones later.
- ![Azure portal. New public App Registration.](media/how-to-aad/portal-aad-register-new-app-registration-PUB-CLIENT-NAME.png)
+ ![Azure portal. New public App Registration.](media/add-azure-active-directory/portal-aad-register-new-app-registration-pub-client-name.png)
To configure your [desktop](../../active-directory/develop/scenario-desktop-app-registration.md), [mobile](../../active-directory/develop/scenario-mobile-app-registration.md) or [single-page](../../active-directory/develop/scenario-spa-app-registration.md) application as public application:
Similarly to the [confidential client application](register-confidential-azure-a
If you are referencing a different Resource Application, select your [FHIR API Resource Application Registration](register-resource-azure-ad-client-app.md) that you created previously under **My APIs**:
- ![Azure portal. New public API permissions - Azure API for FHIR Default](media/public-client-app/api-permissions.png)
+ ![Azure portal. New public API permissions - Azure API for FHIR Default](media/public-client-application/api-permissions.png)
2. Select the permissions that you would like the application to be able to request:
- ![Azure portal. App permissions](media/public-client-app/app-permissions.png)
+ ![Azure portal. App permissions](media/public-client-application/app-permissions.png)
## Validate FHIR server authority If the application you registered in this article and your FHIR server are in the same Azure AD tenant, you are good to proceed to the next steps.
-If you configure your client application in a different Azure AD tenant from your FHIR server, you will need to update the **Authority**. In Azure API for FHIR, you do set the Authority under Settings --> Authentication. Set your Authority to **https://login.microsoftonline.com/\<TENANT-ID>**.
+If you configure your client application in a different Azure AD tenant from your FHIR server, you will need to update the **Authority**. In Azure API for FHIR, you set the Authority under Settings --> Authentication. Set your Authority to `https://login.microsoftonline.com/<TENANT-ID>`.
## Next steps
healthcare-apis Register Service Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/register-service-azure-ad-client-app.md
Previously updated : 02/07/2019 Last updated : 08/05/2021
Follow these steps to create a new service client.
2. Select **App registrations**.
- ![Azure portal. New App Registration.](media/how-to-aad/portal-aad-new-app-registration.png)
+ ![Azure portal. New App Registration.](media/add-azure-active-directory/portal-aad-new-app-registration.png)
3. Select **New registration**.
The service client needs a secret (password) to obtain a token.
1. Select **Certificates & secrets**. 2. Select **New client secret**.
- ![Azure portal. Service Client Secret](media/how-to-aad/portal-aad-register-new-app-registration-SERVICE-CLIENT-SECRET.png)
+ ![Azure portal. Service Client Secret](media/add-azure-active-directory/portal-aad-register-new-app-registration-service-client-secret.png)
3. Provide a description and duration of the secret (either 1 year, 2 years or never).
healthcare-apis Tutorial Web App Test Postman https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-test-postman.md
Previously updated : 01/03/2020 Last updated : 08/10/2021 # Testing the FHIR API on Azure API for FHIR+ In the previous two steps, you deployed the Azure API for FHIR and registered your client application. You are now ready to test that your Azure API for FHIR is set up with your client application. ## Retrieve capability statement
If you do the GET step above to retrieve a patient again, you will see James Tib
## Troubleshooting access issues If you ran into issues during any of these steps, review the documents we have put together on Azure Active Directory and the Azure API for FHIR.
-* [Azure AD and Azure API for FHIR](azure-ad-hcapi.md) - This document outlines some of the basic principles of Azure Active Directory and how it interacts with the Azure API for FHIR.
-* [Access token validation](azure-ad-hcapi-token-validation.md) - This how-to guide gives more specific details on access token validation and steps to take to resolve access issues.
+* [Azure AD and Azure API for FHIR](azure-active-directory-identity-configuration.md) - This document outlines some of the basic principles of Azure Active Directory and how it interacts with the Azure API for FHIR.
+* [Access token validation](azure-api-fhir-access-token-validation.md) - This how-to guide gives more specific details on access token validation and steps to take to resolve access issues.
## Next Steps Now that you can successfully connect to your client application, you are ready to write your web application.
healthcare-apis Azure Active Directory Identity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/azure-active-directory-identity-configuration.md
+
+ Title: Azure Active Directory identity configuration for Healthcare APIs FHIR service
+description: Learn the principles of identity, authentication, and authorization for FHIR service
+++++ Last updated : 08/06/2019+++
+# Azure Active Directory identity configuration for FHIR service
+
+> [!IMPORTANT]
+> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+When you're working with healthcare data, it's important to ensure that the data is secure and that it can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. FHIR service is secured using [Azure Active Directory](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. This article provides an overview of FHIR server authorization and the steps needed to obtain a token to access a FHIR server. While these steps apply to any FHIR server and any identity provider, we'll walk through the Healthcare APIs FHIR service and Azure Active Directory (Azure AD) as our identity provider in this article.
+
+## Access control overview
+
+In order for a client application to access FHIR service, it must present an access token. The access token is a signed, [Base64](https://en.wikipedia.org/wiki/Base64) encoded collection of properties (claims) that convey information about the client's identity and roles and privileges granted to the client.
+
+There are many ways to obtain a token, but FHIR service doesn't care how the token is obtained as long as it's an appropriately signed token with the correct claims.
+
+Using [authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md) as an example, accessing a FHIR server goes through the four steps below:
+
+![FHIR Authorization](media/azure-active-directory-fhir-service/fhir-authorization.png)
+
+1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration (see below).
+1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When requesting a token, the client application may have to provide a client secret (the application's password). See details on [obtaining an access token](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token).
+1. The client makes a request to the FHIR service, for example `GET /Patient` to search all patients. When making the request, it includes the access token in an HTTP request header, for example `Authorization: Bearer eyJ0e...`, where `eyJ0e...` represents the Base64 encoded access token.
+1. The FHIR service validates that the token contains appropriate claims (properties in the token). If everything checks out, it will complete the request and return a FHIR bundle with results to the client.
+
+It's important to note that the FHIR service isn't involved in validating user credentials and it doesn't issue the token. The authentication and token creation are done by Azure AD. The FHIR service simply validates that the token is signed correctly (that it's authentic) and that it has appropriate claims.
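+
+As a rough illustration of steps 3 and 4, the following Python sketch sends a search request with the access token in the `Authorization` header. The service URL and token value here are placeholders for illustration, not values from this article.
+
+```python
+import requests
+
+# Placeholder values; substitute your FHIR service URL and a real Azure AD token.
+fhir_url = "https://<your-fhir-service>.fhir.azurehealthcareapis.com"
+access_token = "eyJ0e..."  # access token obtained in step 2
+
+# Step 3: call the FHIR service with the token in an HTTP request header.
+response = requests.get(
+    f"{fhir_url}/Patient",
+    headers={"Authorization": f"Bearer {access_token}"},
+)
+
+# Step 4 happens server side; a 200 response means the token passed validation.
+print(response.status_code)
+```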
+
+## Structure of an access token
+
+Development of FHIR applications often involves debugging access issues. If a client is denied access to the FHIR service, it's useful to understand the structure of the access token and how it can be decoded to inspect the contents (the claims) of the token.
+
+FHIR servers typically expect a [JSON Web Token](https://en.wikipedia.org/wiki/JSON_Web_Token) (JWT, sometimes pronounced "jot"). It consists of three parts:
+
+**Part 1**: A header, which could look like:
+ ```json
+ {
+ "alg": "HS256",
+ "typ": "JWT"
+ }
+ ```
+
+**Part 2**: The payload (the claims), for example:
+ ```json
+ {
+ "oid": "123",
+ "iss": "https://issuerurl",
+ "iat": 1422779638,
+ "roles": [
+ "admin"
+ ]
+ }
+ ```
+
+**Part 3**: A signature, which is calculated by concatenating the Base64 encoded contents of the header and the payload and calculating a cryptographic hash of them based on the algorithm (`alg`) specified in the header. A server will be able to obtain public keys from the identity provider and validate that this token was issued by a specific identity provider and it hasn't been tampered with.
+
+The full token consists of the Base64 encoded (actually Base64 url encoded) versions of those three segments. The three segments are concatenated and separated with a `.` (dot).
+
+An example token is seen below:
+
+```
+eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJvaWQiOiIxMjMiLCAiaXNzIjoiaHR0cHM6Ly9pc3N1ZXJ1cmwiLCJpYXQiOjE0MjI3Nzk2MzgsInJvbGVzIjpbImFkbWluIl19.gzSraSYS8EXBxLN_oWnFSRgCzcmJmMjLiuyu5CSpyHI
+```
+
+The token can be decoded and inspected with tools such as [https://jwt.ms](https://jwt.ms). The result of decoding the token is:
+
+```json
+{
+ "alg": "HS256",
+ "typ": "JWT"
+}.{
+ "oid": "123",
+ "iss": "https://issuerurl",
+ "iat": 1422779638,
+ "roles": [
+ "admin"
+ ]
+}.[Signature]
+```
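+
+For a concrete sense of the mechanics, here's a minimal Python sketch (an illustration, not part of the service) that splits the example token above on the dots and Base64 URL-decodes the header and payload. Note that this only inspects the token; it doesn't verify the signature.
+
+```python
+import base64
+import json
+
+token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJvaWQiOiIxMjMiLCAiaXNzIjoiaHR0cHM6Ly9pc3N1ZXJ1cmwiLCJpYXQiOjE0MjI3Nzk2MzgsInJvbGVzIjpbImFkbWluIl19.gzSraSYS8EXBxLN_oWnFSRgCzcmJmMjLiuyu5CSpyHI"
+
+def b64url_decode(segment: str) -> bytes:
+    # Base64 URL decoding requires padding to a multiple of four characters.
+    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))
+
+header, payload, signature = token.split(".")
+print(json.loads(b64url_decode(header)))   # {'alg': 'HS256', 'typ': 'JWT'}
+print(json.loads(b64url_decode(payload)))  # {'oid': '123', 'iss': ..., 'roles': ['admin']}
+# The signature segment is raw bytes and is verified, not decoded as JSON.
+```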
+
+## Obtaining an access token
+
+As mentioned above, there are several ways to obtain a token from Azure AD. They are described in detail in the [Azure AD developer documentation](../../active-directory/develop/index.yml).
+
+Azure AD has two different versions of the OAuth 2.0 endpoints, which are referred to as `v1.0` and `v2.0`. Both of these versions are OAuth 2.0 endpoints and the `v1.0` and `v2.0` designations refer to differences in how Azure AD implements that standard.
+
+When using a FHIR server, you can use either the `v1.0` or the `v2.0` endpoints. The choice may depend on the authentication libraries you are using in your client application.
+
+The pertinent sections of the Azure AD documentation are:
+
+* `v1.0` endpoint:
+ * [Authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md).
+ * [Client credentials flow](../../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md).
+* `v2.0` endpoint:
+ * [Authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
+ * [Client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
+
+There are other variations (for example, the on-behalf-of flow) for obtaining a token. Check the Azure AD documentation for details. When using the FHIR service, there are also some shortcuts for obtaining an access token (for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
+
+## Next steps
+
+In this document, you learned some of the basic concepts involved in securing access to the FHIR service using Azure AD. For information about how to deploy the FHIR service, see
+
+>[!div class="nextstepaction"]
+>[Deploy FHIR service](fhir-portal-quickstart.md)
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/carin-implementation-guide-blue-button-tutorial.md
Outside of defining search parameters, the other update you need to make to pass
To assist with creation of these search parameters and profiles, we have a [sample http file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/C4BB/C4BB.http) that includes all the steps outlined above in a single file. Once you've uploaded all the necessary profiles and search parameters, you can run the capability statement test in Touchstone. ## Touchstone read test After testing the capabilities statement, we will test the [read capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/01-Read&activeOnly=false&contentEntry=TEST_SCRIPTS) of the FHIR service against the C4BB IG. This test is testing conformance against the eight profiles you loaded in the first test. You will need to have resources loaded that conform to the profiles. The best path would be to test against resources that you already have in your database, but we also have an [http file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/C4BB/C4BB_Sample_Resources.http) available with sample resources pulled from the examples in the IG that you can use to create the resources and test against. ## Touchstone EOB query test The next test we'll review is the [EOB query test](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/02-EOBQuery&activeOnly=false&contentEntry=TEST_SCRIPTS). If you've already completed the read test, you have all the data loaded that you'll need. This test validates that you can search for specific `Patient` and `ExplanationOfBenefit` resources using various parameters. ## Touchstone error handling test The final test we'll walk through is testing [error handling](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/99-ErrorHandling&activeOnly=false&contentEntry=TEST_SCRIPTS). The only step you need to do is delete an ExplanationOfBenefit resource from your database and use the ID of the deleted `ExplanationOfBenefit` resource in the test. ## Next steps
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/centers-for-medicare-tutorial-introduction.md
+
+ Title: Tutorial - Centers for Medicare and Medicaid Services (CMS) introduction - FHIR service
+description: Introduces a series of tutorials that pertains to the Center for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule.
+++++++ Last updated : 08/05/2021++
+# Introduction: Centers for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule
+
+> [!IMPORTANT]
+> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+In this series of tutorials, we'll cover a high-level summary of the Center for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule, and the technical requirements outlined in this rule. We'll walk through the various implementation guides referenced for this rule. We'll also provide details on how to configure the FHIR service in the Azure Healthcare APIs (hereafter called the FHIR service) to support these implementation guides.
++
+## Rule overview
+
+The CMS released the [Interoperability and Patient Access rule](https://www.cms.gov/Regulations-and-Guidance/Guidance/Interoperability/index) on May 1, 2020. This rule requires free and secure data flow between all parties involved in patient care (patients, providers, and payers) to allow patients to access their health information when they need it. A lack of interoperability has plagued the healthcare industry for decades, resulting in siloed data that causes negative health outcomes with higher and unpredictable costs for care. CMS is using their authority to regulate Medicare Advantage (MA), Medicaid, Children's Health Insurance Program (CHIP), and Qualified Health Plan (QHP) issuers on the Federally Facilitated Exchanges (FFEs) to enforce this rule.
+
+In August 2020, CMS detailed how organizations can meet the mandate. To ensure that data can be exchanged securely and in a standardized manner, CMS identified FHIR version release 4 (R4) as the foundational standard required for the data exchange.
+
+There are three main pieces to the Interoperability and Patient Access ruling:
+
+* **Patient Access API (Required July 1, 2021)** – CMS-regulated payers (as defined above) are required to implement and maintain a secure, standards-based API that allows patients to easily access their claims and encounter information, including cost, as well as a defined subset of their clinical information through third-party applications of their choice.
+
+* **Provider Directory API (Required July 1, 2021)** – CMS-regulated payers are required by this portion of the rule to make provider directory information publicly available via a standards-based API. Through making this information available, third-party application developers will be able to create services that help patients find providers for specific care needs and clinicians find other providers for care coordination.
+
+* **Payer-to-Payer Data Exchange (Required January 1, 2022)** – CMS-regulated payers are required to exchange certain patient clinical data at the patient's request with other payers. While there's no requirement to follow any kind of standard, applying FHIR to exchange this data is encouraged.
+
+## Key FHIR concepts
+
+As mentioned above, FHIR R4 is required to meet this mandate. In addition, there have been several implementation guides developed that provide guidance for the rule. [Implementation guides](https://www.hl7.org/fhir/implementationguide.html) provide extra context on top of the base FHIR specification. This includes defining additional search parameters, profiles, extensions, operations, value sets, and code systems.
+
+The FHIR service has the following capabilities to help you configure your database for the various implementation guides:
+
+* [Support for RESTful interactions](fhir-features-supported.md)
+* [Storing and validating profiles](validation-against-profiles.md)
+* [Defining and indexing custom search parameters](how-to-do-custom-search.md)
+* [Converting data](../data-transformation/convert-data.md)
+
+## Patient Access API Implementation Guides
+
+The Patient Access API describes adherence to four FHIR implementation guides:
+
+* [CARIN IG for Blue Button®](http://hl7.org/fhir/us/carin-bb/STU1/index.html): Payers are required to make patients' claims and encounters data available according to the CARIN IG for Blue Button Implementation Guide (C4BB IG). The C4BB IG provides a set of resources that payers can display to consumers via a FHIR API and includes the details required for claims data in the Interoperability and Patient Access API. This implementation guide uses the ExplanationOfBenefit (EOB) Resource as the main resource, pulling in other resources as they are referenced.
+* [HL7 FHIR Da Vinci PDex IG](http://hl7.org/fhir/us/davinci-pdex/STU1/index.html): The Payer Data Exchange Implementation Guide (PDex IG) is focused on ensuring that payers provide all relevant patient clinical data to meet the requirements for the Patient Access API. This uses the US Core profiles on R4 Resources and includes (at a minimum) encounters, providers, organizations, locations, dates of service, diagnoses, procedures, and observations. While this data may be available in FHIR format, it may also come from other systems in the format of claims data, HL7 V2 messages, and C-CDA documents.
+* [HL7 US Core IG](https://www.hl7.org/fhir/us/core/toc.html): The HL7 US Core Implementation Guide (US Core IG) is the backbone for the PDex IG described above. While the PDex IG limits some resources even further than the US Core IG, many resources just follow the standards in the US Core IG.
+
+* [HL7 FHIR Da Vinci - PDex US Drug Formulary IG](http://hl7.org/fhir/us/Davinci-drug-formulary/index.html): Part D Medicare Advantage plans have to make formulary information available via the Patient Access API. They do this using the PDex US Drug Formulary Implementation Guide (USDF IG). The USDF IG defines a FHIR interface to a health insurer's drug formulary information, which is a list of brand-name and generic prescription drugs that a health insurer agrees to pay for. The main use case is to let patients understand whether there are alternative drugs available to the one that has been prescribed to them, and to compare drug costs.
+
+## Provider Directory API Implementation Guide
+
+The Provider Directory API describes adherence to one implementation guide:
+
+* [HL7 Da Vinci PDex Plan Network IG](http://build.fhir.org/ig/HL7/davinci-pdex-plan-net/): This implementation guide defines a FHIR interface to a health insurer's insurance plans, their associated networks, and the organizations and providers that participate in these networks.
+
+## Touchstone
+
+To test adherence to the various implementation guides, [Touchstone](https://touchstone.aegis.net/touchstone/) is a great resource. Throughout the upcoming tutorials, we'll focus on ensuring that the FHIR service is configured to successfully pass various Touchstone tests. The Touchstone site has a lot of great documentation to help you get up and running.
+
+## Next steps
+
+Now that you have a basic understanding of the Interoperability and Patient Access rule, implementation guides, and available testing tool (Touchstone), we'll walk through setting up the FHIR service for the CARIN IG for Blue Button.
+
+>[!div class="nextstepaction"]
+>[CARIN Implementation Guide for Blue Button](carin-implementation-guide-blue-button-tutorial.md)
healthcare-apis Davinci Drug Formulary Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/davinci-drug-formulary-tutorial.md
Previously updated : 08/03/2021 Last updated : 08/06/2021 # Tutorial for Da Vinci Drug Formulary
Outside of defining search parameters, the only other update you need to make to
To assist with creation of these search parameters and profiles, we have the [Da Vinci Formulary](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary.http) sample HTTP file on the open-source site that includes all the steps outlined above in a single file. Once you've uploaded all the necessary profiles and search parameters, you can run the capability statement test in Touchstone. You should get a successful run: ## Touchstone query test The second test is the [query capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/01-Query&activeOnly=false&contentEntry=TEST_SCRIPTS). This test validates that you can search for specific Coverage Plan and Drug resources using various parameters. The best path would be to test against resources that you already have in your database, but we also have the [DaVinciFormulary_Sample_Resources](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary_Sample_Resources.http) HTTP file available with sample resources pulled from the examples in the IG that you can use to create the resources and test against. ## Next steps
healthcare-apis Davinci Pdex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/davinci-pdex-tutorial.md
Previously updated : 08/03/2021 Last updated : 08/06/2021 # Da Vinci PDex
The first set of tests that we'll focus on is testing the FHIR service against t
* The third test validates that the [$patient-everything operation](patient-everything.md) is supported. ## Touchstone $member-match test
The [second test](https://touchstone.aegis.net/touchstone/testdefinitions?select
In this test, you'll need to load some sample data for the test to pass. We have a rest file [here](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/membermatch.http) with the patient and coverage linked that you will need for the test. Once this data is loaded, you'll be able to successfully pass this test. If the data is not loaded, you'll receive a 422 response due to not finding an exact match. ## Touchstone patient by reference The next tests we'll review are the [patient by reference](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PayerExchange/02-PatientByReference&activeOnly=false&contentEntry=TEST_SCRIPTS) tests. This set of tests validates that you can find a patient based on various search criteria. The best way to test the patient by reference will be to test against your own data, but we have uploaded a [sample resource file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/PDex_Sample_Data.http) that you can load to use as well. ## Touchstone patient/$everything test The final test we'll walk through is testing patient-everything. For this test, you'll need to load a patient, and then you'll use that patient's ID to test that you can use the $everything operation to pull all data related to the patient. ## Next steps
healthcare-apis Fhir Service Access Token Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-service-access-token-validation.md
+
+ Title: FHIR service access token validation
+description: Access token validation procedure and troubleshooting guide for FHIR service
+++++ Last updated : 08/05/2021++
+# FHIR service access token validation
+
+> [!IMPORTANT]
+> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+How FHIR service validates the access token depends on implementation and configuration. In this article, we will walk through the validation steps, which can be helpful when troubleshooting access issues.
+
+## Validate token has no issues with identity provider
+
+The first step in the token validation is to verify that the token was issued by the correct identity provider and that it hasn't been modified. The FHIR server will be configured to use a specific identity provider known as the `Authority`. The FHIR server will retrieve information about the identity provider from the `/.well-known/openid-configuration` endpoint. When using Azure AD, the full URL would be:
+
+```
+GET https://login.microsoftonline.com/<TENANT-ID>/.well-known/openid-configuration
+```
+
+where `<TENANT-ID>` is the specific Azure AD tenant (either a tenant ID or a domain name).
+
+Azure AD will return a document like the one below to the FHIR server.
+
+```json
+{
+ "authorization_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/authorize",
+ "token_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/token",
+ "token_endpoint_auth_methods_supported": [
+ "client_secret_post",
+ "private_key_jwt",
+ "client_secret_basic"
+ ],
+ "jwks_uri": "https://login.microsoftonline.com/common/discovery/keys",
+ "response_modes_supported": [
+ "query",
+ "fragment",
+ "form_post"
+ ],
+ "subject_types_supported": [
+ "pairwise"
+ ],
+ "id_token_signing_alg_values_supported": [
+ "RS256"
+ ],
+ "http_logout_supported": true,
+ "frontchannel_logout_supported": true,
+ "end_session_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/logout",
+ "response_types_supported": [
+ "code",
+ "id_token",
+ "code id_token",
+ "token id_token",
+ "token"
+ ],
+ "scopes_supported": [
+ "openid"
+ ],
+ "issuer": "https://sts.windows.net/<TENANT-ID>/",
+ "claims_supported": [
+ "sub",
+ "iss",
+ "cloud_instance_name",
+ "cloud_instance_host_name",
+ "cloud_graph_host_name",
+ "msgraph_host",
+ "aud",
+ "exp",
+ "iat",
+ "auth_time",
+ "acr",
+ "amr",
+ "nonce",
+ "email",
+ "given_name",
+ "family_name",
+ "nickname"
+ ],
+ "microsoft_multi_refresh_token": true,
+ "check_session_iframe": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/checksession",
+ "userinfo_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/openid/userinfo",
+ "tenant_region_scope": "WW",
+ "cloud_instance_name": "microsoftonline.com",
+ "cloud_graph_host_name": "graph.windows.net",
+ "msgraph_host": "graph.microsoft.com",
+ "rbac_url": "https://pas.windows.net"
+}
+```
+The important properties for the FHIR server are `jwks_uri`, which tells the server where to fetch the keys needed to validate the token signature, and `issuer`, which tells the server what will be in the issuer claim (`iss`) of tokens issued by this identity provider. The FHIR server can use these to validate that it is receiving an authentic token.
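+
+As a hedged illustration of this check from a client's perspective, the following sketch uses the third-party PyJWT library (an assumption for illustration; not how the FHIR server itself is implemented) to fetch the signing key from `jwks_uri` and validate the token's signature and issuer:
+
+```python
+import jwt  # pip install "PyJWT[crypto]"
+from jwt import PyJWKClient
+
+# Placeholder values for illustration.
+token = "eyJ0e..."  # the access token to validate
+jwks_uri = "https://login.microsoftonline.com/common/discovery/keys"
+expected_issuer = "https://sts.windows.net/<TENANT-ID>/"
+
+# Fetch the public signing key whose ID matches the token's 'kid' header.
+signing_key = PyJWKClient(jwks_uri).get_signing_key_from_jwt(token)
+
+# Verify the signature and the issuer claim; audience checks are described below.
+claims = jwt.decode(
+    token,
+    signing_key.key,
+    algorithms=["RS256"],
+    issuer=expected_issuer,
+    options={"verify_aud": False},
+)
+print(claims)
+```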
+
+## Validate claims of the token
+
+Once the server has verified the authenticity of the token, the FHIR server will then proceed to validate that the client has the required claims to access the FHIR service.
+
+When using FHIR service, the server will validate:
+
+1. The token has the right `Audience` (`aud` claim).
+1. The user or principal that the token was issued for is allowed to access the FHIR server data plane. The `oid` claim of the token contains an identity object ID, which uniquely identifies the user or principal.
+
+We recommend that the FHIR service be configured to use Azure RBAC to manage data plane role assignments. But you can also configure local RBAC if your FHIR service uses an external or secondary Azure Active Directory tenant.
+
+When using the OSS Microsoft FHIR server for Azure, the server will validate:
+
+1. The token has the right `Audience` (`aud` claim).
+1. The token has a role in the `roles` claim, which is allowed access to the FHIR server.
+
+Consult details on how to [define roles on the FHIR server](https://github.com/microsoft/fhir-server/blob/master/docs/Roles.md).
+
+A FHIR server may also validate that an access token has the scopes (in token claim `scp`) to access the part of the FHIR API that a client is trying to access. Currently, the FHIR service does not validate token scopes.
healthcare-apis Fhir Service Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-service-resource-manager-template.md
+
+ Title: Deploy Azure Healthcare APIs FHIR service using ARM template
+description: Learn how to deploy the FHIR service by using an Azure Resource Manager template (ARM template)
++++ Last updated : 08/06/2021++
+# Deploy a FHIR service within Azure Healthcare APIs - using ARM template
+
+> [!IMPORTANT]
+> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+In this article, you will learn how to deploy the FHIR service within the Azure Healthcare APIs using an Azure Resource Manager template (ARM template). We provide two options: using PowerShell or using the Azure CLI.
+
+An [ARM template](../../azure-resource-manager/templates/overview.md) is a JSON file that defines the infrastructure and configuration for your project. The template uses declarative syntax. In declarative syntax, you describe your intended deployment without writing the sequence of programming commands to create the deployment.
+
+## Prerequisites
+
+# [PowerShell](#tab/PowerShell)
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally:
+ * [Azure PowerShell](https://docs.microsoft.com/powershell/azure/install-az-ps).
+
+# [CLI](#tab/CLI)
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally:
+ * A Bash shell (such as Git Bash, which is included in [Git for Windows](https://gitforwindows.org)).
+ * [Azure CLI](/cli/azure/install-azure-cli).
+++
+## Review the ARM template
+
+The template used in this article is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-api-for-fhir/).
+
+The template defines three Azure resources:
+- Microsoft.HealthcareApis/workspaces
+- Microsoft.HealthcareApis/workspaces/fhirservices
+- Microsoft.Storage/storageAccounts
+
+You can deploy the FHIR service resource by **removing** the workspaces resource, the storage resource, and the `dependsOn` property in the "Microsoft.HealthcareApis/workspaces/fhirservices" resource.
++
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "authorityurl": {
+ "type": "string",
+ "defaultValue": "https://login.microsoftonline.com"
+ },
+ "tagName": {
+ "type": "string",
+ "defaultValue": "My Deployment"
+ },
+ "region": {
+ "type": "string",
+ "allowedValues": [
+ "australiaeast",
+ "canadacentral",
+ "eastus",
+ "eastus2",
+ "germanywestcentral",
+ "japaneast",
+ "northcentralus",
+ "northeurope",
+ "southafricanorth",
+ "southcentralus",
+ "southeastasia",
+ "switzerlandnorth",
+ "uksouth",
+ "ukwest",
+ "westcentralus",
+ "westeurope",
+ "westus2"
+ ]
+ },
+ "workspaceName": {
+ "type": "string"
+ },
+ "fhirServiceName": {
+ "type": "string"
+ },
+ "tenantid": {
+ "type": "string"
+ },
+ "storageAccountName": {
+ "type": "string"
+ },
+ "storageAccountConfirm": {
+ "type": "bool",
+ "defaultValue": true
+ },
+ "AccessPolicies": {
+ "type": "array",
+ "defaultValue": []
+ },
+ "smartProxyEnabled": {
+ "type": "bool",
+ "defaultValue": false
+ }
+ },
+ "variables": {
+ "authority": "[Concat(parameters('authorityurl'), '/', parameters('tenantid'))]",
+ "createManagedIdentity": true,
+ "managedIdentityType": {
+ "type": "SystemAssigned"
+ },
+ "storageBlobDataContributerRoleId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', 'ba92f5b4-2d11-453d-a403-e96b0029c9fe')]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.HealthcareApis/workspaces",
+ "name": "[parameters('workspaceName')]",
+ "apiVersion": "2020-11-01-preview",
+ "location": "[parameters('region')]",
+ "properties": {}
+ },
+ {
+ "type": "Microsoft.HealthcareApis/workspaces/fhirservices",
+ "kind": "fhir-R4",
+ "name": "[concat(parameters('workspaceName'), '/', parameters('fhirServiceName'))]",
+ "apiVersion": "2020-11-01-preview",
+ "location": "[parameters('region')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.HealthcareApis/workspaces', parameters('workspaceName'))]"
+ ],
+ "tags": {
+ "environmentName": "[parameters('tagName')]"
+ },
+ "properties": {
+ "accessPolicies": "[parameters('AccessPolicies')]",
+ "authenticationConfiguration": {
+ "authority": "[variables('Authority')]",
+ "audience": "[concat('https//', parameters('workspaceName'), '-', parameters('fhirServiceName'), '.fhir.azurehealthcareapis.com')]",
+ "smartProxyEnabled": "[parameters('smartProxyEnabled')]"
+ },
+ "corsConfiguration": {
+ "allowCredentials": false,
+ "headers": ["*"],
+ "maxAge": 1440,
+ "methods": ["DELETE", "GET", "OPTIONS", "PATCH", "POST", "PUT"],
+ "origins": ["https://localhost:6001"]
+ },
+ "exportConfiguration": {
+ "storageAccountName": "[parameters('storageAccountName')]"
+ }
+ },
+ "identity": "[if(variables('createManagedIdentity'), variables('managedIdentityType'), json('null'))]"
+ },
+ {
+ "name": "[parameters('storageAccountName')]",
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-06-01",
+ "location": "[resourceGroup().location]",
+ "properties": {
+ "supportsHttpsTrafficOnly": "true"
+ },
+ "condition": "[parameters('storageAccountConfirm')]",
+ "dependsOn": [
+ "[parameters('fhirServiceName')]"
+ ],
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": {
+ "environmentName": "[parameters('tagName')]",
+ "test-account-rg": "true"
+ }
+ }
+ ],
+ "outputs": {
+ }
+}
+```
+
+## Deploy ARM template
+
+You can deploy the ARM template using two options: PowerShell or CLI.
+
+The sample code provided below uses the template in the "templates" subfolder of the "src" folder. You may want to change the location path to reference the template file properly.
+
+The deployment process takes a few minutes to complete. Take a note of the names for the FHIR service and the resource group, which you will use later.
+
+# [PowerShell](#tab/PowerShell)
+
+### Deploy the template: using PowerShell
+
+Run the code in PowerShell locally, in Visual Studio Code, or in Azure Cloud Shell, to deploy the FHIR service.
+
+If you haven't logged in to Azure, use `Connect-AzAccount` to log in. Once you have logged in, use `Get-AzContext` to verify the subscription and tenant you want to use. You can change the subscription and tenant if needed.
+
+You can create a new resource group, or use an existing one by skipping the step or commenting out the line starting with `New-AzResourceGroup`.
+
+```powershell-interactive
+### variables
+$resourcegroupname="your resource group"
+$location="South Central US"
+$workspacename="your workspace name"
+$fhirservicename="your fhir service name"
+$tenantid="xxx"
+$subscriptionid="xxx"
+$storageaccountname="storage account name"
+$storageaccountconfirm=1
+
+### login to azure
+Connect-AzAccount
+#Connect-AzAccount -SubscriptionId $subscriptionid
+Set-AzContext -Subscription $subscriptionid
+Connect-AzAccount -Tenant $tenantid -SubscriptionId $subscriptionid
+#Get-AzContext
+
+### create resource group
+New-AzResourceGroup -Name $resourcegroupname -Location $location
+
+### deploy the resource
+New-AzResourceGroupDeployment -ResourceGroupName $resourcegroupname -TemplateFile "src/templates/fhirtemplate.json" -region $location -workspaceName $workspacename -fhirServiceName $fhirservicename -tenantid $tenantid -storageAccountName $storageaccountname -storageAccountConfirm $storageaccountconfirm
+```
+# [CLI](#tab/CLI)
+
+### Deploy the template: using CLI
+
+Run the code locally, in Visual Studio Code or in Azure Cloud Shell, to deploy the FHIR service.
+
+If you haven't logged in to Azure, use `az login` to log in. Once you have logged in, use `az account show --output table` to verify the subscription and tenant you want to use. You can change the subscription and tenant if needed.
+
+You can create a new resource group, or use an existing one by skipping the step or commenting out the line starting with `az group create`.
+
+```azurecli-interactive
+### variables
+resourcegroupname="your resource group name"
+location=southcentralus
+workspacename="your workspace name"
+fhirservicename="your fhir service name"
+tenantid=xxx
+subscriptionid=xxx
+storageaccountname="your storage account name"
+storageaccountconfirm=true
+
+### login to azure
+az login
+az account show --output table
+az account set --subscription $subscriptionid
+
+### create resource group
+az group create --name $resourcegroupname --location $location
+
+### deploy the resource
+az deployment group create --resource-group $resourcegroupname --template-file 'src/templates/fhirtemplate.json' --parameters region=$location workspaceName=$workspacename fhirServiceName=$fhirservicename tenantid=$tenantid storageAccountName=$storageaccountname storageAccountConfirm=$storageaccountconfirm
+```
+++
+## Review the deployed resources
+
+You can verify that the FHIR service is up and running by opening the browser and navigating to `https://<your-fhir-service>.azurehealthcareapis.com/metadata`. If the capability statement is displayed or downloaded automatically, your deployment is successful.
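+
+If you prefer a scripted check, a minimal Python sketch (with a placeholder host name) can fetch the capability statement; the `/metadata` endpoint doesn't require an access token:
+
+```python
+import requests
+
+# Placeholder host name; substitute your FHIR service name.
+response = requests.get("https://<your-fhir-service>.azurehealthcareapis.com/metadata")
+response.raise_for_status()  # raises an error if the service isn't reachable
+print(response.json()["resourceType"])  # expected: "CapabilityStatement"
+```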
+
+## Clean up the resources
+
+When the resource is no longer needed, run the code below to delete the resource group.
+
+# [PowerShell](#tab/PowerShell)
+```powershell-interactive
+$resourceGroupName = "your resource group name"
+Remove-AzResourceGroup -Name $resourceGroupName
+```
+# [CLI](#tab/CLI)
+```azurecli-interactive
+resourceGroupName="your resource group name"
+az group delete --name $resourceGroupName
+```
++
+## Next steps
+
+In this quickstart guide, you've deployed the FHIR service within Azure Healthcare APIs using an ARM template. For more information about the FHIR service supported features, see:
+
+>[!div class="nextstepaction"]
+>[Supported FHIR features](fhir-features-supported.md)
healthcare-apis Tutorial Member Match https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/tutorial-member-match.md
Previously updated : 08/03/2021 Last updated : 08/06/2021 # $member-match operation in FHIR service
You'll need to include a parameters resource in the body that includes the patie
If a single match is found, you'll receive a 200 response with another identifier added: If the $member-match can't find a unique match, you'll receive a 422 response with an error code.
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
ms.suite: integration Previously updated : 08/09/2021 Last updated : 08/11/2021 # As a developer, I want to automate deployment for workflows hosted in single-tenant Azure Logic Apps by using DevOps tools and processes.
This article shows how to deploy a single-tenant based logic app project from Vi
## Deploy infrastructure resources
-If you don't already have a logic app project or infrastructure set up, you can use the following sample projects to deploy an example app and infrastructure, based on the source and deployment options you prefer to use:
+If you haven't already set up a logic app project or infrastructure, you can use the following sample projects to deploy an example app and infrastructure, based on the source and deployment options you prefer to use:
- [GitHub sample for single-tenant Azure Logic Apps](https://github.com/Azure/logicapps/tree/master/github-sample)
If you use other deployment tools, you can deploy your single-tenant based logic
##### Install Azure Logic Apps (Standard) extension for Azure CLI
-Install the *preview* single-tenant Azure Logic Apps (Standard) extension for Azure CLI by running the command, `az extension add`, with the following required parameters:
+Currently, only the *preview* version for this extension is available. If you haven't previously installed this extension, run the command, `az extension add`, with the following required parameters:
```azurecli-interactive az extension add --yes --source "https://aka.ms/logicapp-latest-py2.py3-none-any.whl" ```
+To get the latest extension, which is version 0.1.1, run these commands to remove the existing extension and then install the latest version from the source:
+
+```azurecli-interactive
+az extension remove --name logicapp
+az extension add --yes --source "https://aka.ms/logicapp-latest-py2.py3-none-any.whl"
+```
+ <a name="create-resource-group"></a> #### Create resource group
machine-learning Dsvm Tools Data Platforms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-tools-data-platforms.md
The following data platform tools are supported on the DSVM.
> June, 30. Existing deployments will continue to have access to the software but due to the reached support end date, > there will be no support for it after July 1, 2021.
+> [!NOTE]
+> We will remove SQL Server Developer Edition from DSVM images by the end of November 2021. Existing deployments will continue to have SQL Server Developer Edition installed. In new deployments, if you would like to have access to SQL Server Developer Edition, you can install and use it via Docker support. See [Quickstart: Run SQL Server container images with Docker](/sql/linux/quickstart-install-connect-docker?view=sql-server-ver15&pivots=cs1-).
### Windows
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-auto-train-forecast.md
best_run, fitted_model = local_run.get_output()
Use the best model iteration to forecast values for the test data set.
-The `forecast()` function allows specifications of when predictions should start, unlike the `predict()`, which is typically used for classification and regression tasks.
+The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function allows specifications of when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. By default, the `forecast_quantiles()` method generates a point forecast, or a mean/median forecast, which doesn't have a cone of uncertainty around it.
-In the following example, you first replace all values in `y_pred` with `NaN`. The forecast origin will be at the end of training data in this case. However, if you replaced only the second half of `y_pred` with `NaN`, the function would leave the numerical values in the first half unmodified, but forecast the `NaN` values in the second half. The function returns both the forecasted values and the aligned features.
+In the following example, you first replace all values in `y_pred` with `NaN`. The forecast origin is at the end of training data in this case. However, if you replaced only the second half of `y_pred` with `NaN`, the function would leave the numerical values in the first half unmodified, but forecast the `NaN` values in the second half. The function returns both the forecasted values and the aligned features.
-You can also use the `forecast_destination` parameter in the `forecast()` function to forecast values up until a specified date.
+You can also use the `forecast_destination` parameter in the `forecast_quantiles()` function to forecast values up to a specified date.
```python label_query = test_labels.copy().astype(np.float) label_query.fill(np.nan)
-label_fcst, data_trans = fitted_model.forecast(
+label_fcst, data_trans = fitted_model.forecast_quantiles(
test_data, label_query, forecast_destination=pd.Timestamp(2019, 1, 8)) ```
+Often customers want to understand the predictions at a specific quantile of the distribution, for example, when the forecast is used to control inventory like grocery items or virtual machines for a cloud service. In such cases, the control point is usually something like "we want the item to be in stock and not run out 99% of the time". The following demonstrates how to specify which quantiles you'd like to see for your predictions, such as the 50th or 95th percentile. If you don't specify a quantile, as in the aforementioned code example, then only the 50th percentile predictions are generated.
+
+```python
+# specify which quantiles you would like
+fitted_model.quantiles = [0.05,0.5, 0.9]
+fitted_model.forecast_quantiles(
+ test_data, label_query, forecast_destination=pd.Timestamp(2019, 1, 8))
+```
+
Calculate root mean squared error (RMSE) between the `actual_labels` actual values, and the forecasted values in `predict_labels`. ```python
from math import sqrt
rmse = sqrt(mean_squared_error(actual_labels, predict_labels)) rmse ```-
+
+
Now that the overall model accuracy has been determined, the most realistic next step is to use the model to forecast unknown future values. Supply a data set in the same format as the test set `test_data` but with future datetimes, and the resulting prediction set is the forecasted values for each time-series step. Assume the last time-series records in the data set were for 12/31/2018. To forecast demand for the next day (or as many periods as you need to forecast, <= `forecast_horizon`), create a single time series record for each store for 01/01/2019.
day_datetime,store,week_of_year
01/01/2019,A,1 ```
-Repeat the necessary steps to load this future data to a dataframe and then run `best_run.forecast(test_data)` to predict future values.
+Repeat the necessary steps to load this future data to a dataframe and then run `best_run.forecast_quantiles(test_data)` to predict future values.
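+
+As a sketch of that step, assuming a pandas dataframe with the column names from the sample record above, the future rows might be built like this:
+
+```python
+import pandas as pd
+
+# Assumed column names taken from the sample record above.
+future_data = pd.DataFrame({
+    "day_datetime": pd.to_datetime(["2019-01-01", "2019-01-01"]),
+    "store": ["A", "B"],
+    "week_of_year": [1, 1],
+})
+
+# Forecast the next period(s) with the best run's fitted model.
+predictions = best_run.forecast_quantiles(future_data)
+```
+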
> [!NOTE] > In-sample predictions are not supported for forecasting with automated ML when `target_lags` and/or `target_rolling_window_size` are enabled. - ## Example notebooks+ See the [forecasting sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning) for detailed code examples of advanced forecasting configuration including: * [holiday detection and featurization](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
A&nbsp;score&nbsp;has&nbsp;been&nbsp;reached| Use `experiment_exit_score` comple
## Run experiment
+> [!WARNING]
+> If you run an experiment with the same configuration settings and primary metric multiple times, you'll likely see variation in each experiment's final metrics score and generated models. The algorithms automated ML employs have inherent randomness that can cause slight variation in the models output by the experiment and the recommended model's final metrics score, like accuracy. You'll likely also see results with the same model name, but different hyperparameters used.
+ For automated ML, you create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ```python
Configure `max_concurrent_iterations` in your `AutoMLConfig` object. If it is n
## Explore models and metrics
-> [!WARNING]
-> The algorithms automated ML employs have inherent randomness that can cause slight variation in a recommended model's final metrics score, like accuracy. Automated ML also performs operations on data such as train-test split, train-validation split or cross-validation when necessary. So if you run an experiment with the same configuration settings and primary metric multiple times, you'll likely see variation in each experiments final metrics score due to these factors.
- Automated ML offers options for you to monitor and evaluate your training results. * You can view your training results in a widget or inline if you are in a notebook. See [Monitor automated machine learning runs](#monitor) for more details.
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-register-datasets.md
To create and work with datasets, you need:
* Work on your own Jupyter notebook and [install the SDK yourself](/python/api/overview/azure/ml/install). > [!NOTE]
-> Some dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package, which is only compatible with 64-bit Python. For Linux users, these classes are supported only on the following distributions: Red Hat Enterprise Linux (7, 8), Ubuntu (14.04, 16.04, 18.04), Fedora (27, 28), Debian (8, 9), and CentOS (7). If you are using unsupported distros, please follow [this guide](/dotnet/core/install/linux) to install .NET Core 2.1 to proceed.
+> Some dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package, which is only compatible with 64-bit Python. If you are developing on Linux, these classes are supported only on the following distributions: Red Hat Enterprise Linux (7, 8), Ubuntu (18.04), Debian (9), and CentOS (7). If you are using unsupported distros, please follow [this guide](/dotnet/core/install/linux) to install .NET Core 2.1 to proceed.
+
+> [!IMPORTANT]
+> While the package may work on older versions of these Linux distros, we do not recommend using a distro that is out of mainstream support. Distros that are out of mainstream support may have security vulnerabilities, as they do not receive the latest updates. We recommend using the latest supported version of your distro, or the latest one supported with the azureml-dataprep package.
## Compute size guidance
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-custom-dns.md
Previously updated : 06/29/2021 Last updated : 08/03/2021
The Fully Qualified Domains resolve to the following Canonical Names (CNAMEs) ca
- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.privatelink.api.ml.azure.us``` - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.privatelink.notebooks.usgovcloudapi.net```
-The FQDNs resolve to the IP addresses of the Azure Machine Learning workspace in that region. However, resolution of the workspace Private Link FQDNs will be overridden when resolving with the Azure DNS Virtual Server IP address in a Virtual Network linked to the Private DNS Zones created as described above.
+The FQDNs resolve to the IP addresses of the Azure Machine Learning workspace in that region. However, resolution of the workspace Private Link FQDNs can be overridden by using a custom DNS server hosted in the virtual network. For an example of this architecture, see the [custom DNS server hosted in a vnet](#example-custom-dns-server-hosted-in-vnet) example.
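+
+To check which address a client actually receives (a minimal sketch with a hypothetical FQDN), you can resolve the workspace FQDN from inside and outside the virtual network and compare the results:
+
+```python
+import socket
+
+# Hypothetical FQDN; substitute your workspace's private link FQDN.
+fqdn = "<workspace-guid>.workspace.<region>.api.ml.azure.us"
+
+# Inside the virtual network this should resolve to the private endpoint IP;
+# outside, it resolves to the public regional address.
+for info in socket.getaddrinfo(fqdn, 443, proto=socket.IPPROTO_TCP):
+    print(info[4][0])
+```
+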
## Manual DNS server integration
machine-learning How To Debug Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-debug-visual-studio-code.md
Previously updated : 05/25/2021 Last updated : 08/11/2021 # Interactive debugging with Visual Studio Code
Local web service deployments require a working Docker installation on your loca
myenv = Environment.from_conda_specification(name="env", file_path="myenv.yml") myenv.docker.base_image = None
- myenv.docker.base_dockerfile = "FROM mcr.microsoft.com/azureml/base:intelmpi2018.3-ubuntu16.04"
+ myenv.docker.base_dockerfile = "FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210615.v1"
inference_config = InferenceConfig(entry_script="score.py", environment=myenv) package = Model.package(ws, [model], inference_config) package.wait_for_creation(show_output=True) # Or show_output=False to hide the Docker build logs.
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-custom-image.md
Previously updated : 10/20/2020 Last updated : 08/11/2021
It's also possible to use a custom Dockerfile. Use this approach if you need to
```python # Specify Docker steps as a string. dockerfile = r"""
-FROM mcr.microsoft.com/azureml/base:intelmpi2018.3-ubuntu16.04
+FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210615.v1
RUN echo "Hello from custom container!" """
fastai_env.docker.base_dockerfile = "./Dockerfile"
>[!IMPORTANT] > Azure Machine Learning only supports Docker images that provide the following software:
-> * Ubuntu 16.04 or greater.
-> * Conda 4.5.# or greater.
+> * Ubuntu 18.04 or greater.
+> * Conda 4.7.# or greater.
> * Python 3.6+. > * A POSIX compliant shell available at /bin/sh is required in any container image used for training.
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-batch-endpoint.md
Previously updated : 5/25/2021 Last updated : 8/11/2021 # Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
You can also check job details along with status using the Azure CLI.
Get the job name from the invoke response.
-```azurecli
-job_name=$(az ml endpoint invoke --name mybatchedp --type batch --input-path https://pipelinedata.blob.core.windows.net/sampledata/nytaxi/taxi-tip-data.csv --query name -o tsv)
-```
Use `job show` to check details and status of a batch scoring job. - :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="check_job_status" ::: Stream the job logs using `job stream`.
One batch endpoint can have multiple deployments. Each deployment hosts one mode
Use the following command to add a new deployment to an existing batch endpoint.
-```azurecli
-az ml endpoint update --name mybatchedp --type batch --deployment-file cli/endpoints/batch/add-deployment.yml
-```
This sample uses a non-MLflow model. When using non-MLflow, you'll need to specify the environment and a scoring script in the YAML file:
If you re-examine the details of your deployment, you will see your changes:
Now you can invoke a batch scoring job with this new deployment:
-```azurecli
-az ml endpoint invoke --name mybatchedp --type batch --input-path https://pipelinedata.blob.core.windows.net/sampledata/mnist --mini-batch-size 10 --instance-count 2
-```
## Start a batch scoring job using REST
Batch endpoints have scoring URIs for REST access. REST lets you use any HTTP li
3. Use the `scoring_uri`, the access token, and JSON data to POST a request and start a batch scoring job:
-```bash
-curl --location --request POST "$scoring_uri" --header "Authorization: Bearer $auth_token" --header 'Content-Type: application/json' --data-raw '{
-"properties": {
- "dataset": {
- "dataInputType": "DataUrl",
- "Path": "https://pipelinedata.blob.core.windows.net/sampledata/mnist"
- }
- }
-}'
-```
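The `scoring_uri` and `auth_token` variables used in the request above come from the earlier steps that aren't shown here. As a sketch, assuming the endpoint name `mybatchedp` from the previous examples, they could be fetched like this:

```azurecli
# Sketch: fetch the batch endpoint's scoring URI and an Azure ML access token.
scoring_uri=$(az ml endpoint show --name mybatchedp --type batch --query scoring_uri -o tsv)
auth_token=$(az account get-access-token --resource https://ml.azure.com --query accessToken -o tsv)
```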
## Clean up resources
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-environments.md
Previously updated : 07/08/2021 Last updated : 08/11/2021
myenv.docker.base_image_registry="your_registry_location"
>[!IMPORTANT]
> Azure Machine Learning only supports Docker images that provide the following software:
-> * Ubuntu 16.04 or greater.
-> * Conda 4.5.# or greater.
+> * Ubuntu 18.04 or greater.
+> * Conda 4.7.# or greater.
> * Python 3.6+.
> * A POSIX compliant shell available at /bin/sh is required in any container image used for training.
You can also specify a path to a specific Python interpreter within the image, b
```python
dockerfile = """
-FROM mcr.microsoft.com/azureml/base:intelmpi2018.3-ubuntu16.04
+FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210615.v1
RUN conda install numpy
"""
machine-learning Reference Azure Machine Learning Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-azure-machine-learning-cli.md
If you used the `az ml environment scaffold` command, it generates a template `a
}, "docker": { "enabled": false,
- "baseImage": "mcr.microsoft.com/azureml/base:intelmpi2018.3-ubuntu16.04",
+ "baseImage": "mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210615.v1",
"baseDockerfile": null, "sharedVolumes": true, "shmSize": "2g",
migrate Add Server Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/add-server-credentials.md
Title: Provide server credentials to discover software inventory, dependencies, and SQL Server instances and databases
+ Title: Provide server credentials to discover software inventory, dependencies, web apps, and SQL Server instances and databases
description: Learn how to provide server credentials on appliance configuration manager--++ ms. Last updated 03/18/2021
-# Provide server credentials to discover software inventory, dependencies, and SQL Server instances and databases
+# Provide server credentials to discover software inventory, dependencies, web apps, and SQL Server instances and databases
-Follow this article to learn how to add multiple server credentials on the appliance configuration manager to perform software inventory (discover installed applications), agentless dependency analysis and discover SQL Server instances and databases.
+Follow this article to learn how to add multiple server credentials on the appliance configuration manager to perform software inventory (discover installed applications), agentless dependency analysis, and discovery of web apps and SQL Server instances and databases.
-> [!Note]
-> Discovery and assessment of SQL Server instances and databases running in your VMware environment is now in preview. To try out this feature, use [**this link**](https://aka.ms/AzureMigrate/SQL) to create a project in **Australia East** region. If you already have a project in Australia East and want to try out this feature, please ensure that you have completed these [**prerequisites**](how-to-discover-sql-existing-project.md) on the portal.
-
-The [Azure Migrate appliance](migrate-appliance.md) is a lightweight appliance used by Azure Migrate: Discovery and assessment to discover on-premises servers running in VMware environment and send server configuration and performance metadata to Azure. The appliance can also be used to perform software inventory, agentless dependency analysis and discover of SQL Server instances and databases.
+The [Azure Migrate appliance](migrate-appliance.md) is a lightweight appliance used by Azure Migrate: Discovery and assessment to discover on-premises servers running in a VMware environment and send server configuration and performance metadata to Azure. The appliance can also be used to perform software inventory, agentless dependency analysis, and discovery of web apps and SQL Server instances and databases.
If you want to use these features, you can provide server credentials by following the steps below. The appliance will attempt to automatically map the credentials to the servers to perform the discovery features.
The types of server credentials supported are listed in the table below:
Type of credentials | Description |
-**Domain credentials** | You can add **Domain credentials** by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> To provide domain credentials, you need to specify the **Domain name** which must be provided in the FQDN format (for example, prod.corp.contoso.com). <br/><br/> You also need to specify a friendly name for credentials, username, and password. <br/><br/> The domain credentials added will be automatically validated for authenticity against the Active Directory of the domain. This is to prevent any account lockouts when the appliance attempts to map the domain credentials against discovered servers. <br/><br/> The appliance will not attempt to map the domain credentials that have failed validation. You need to have at least one successfully validated domain credential or at least one non-domain credential to proceed with software inventory.<br/><br/>The domain credentials mapped automatically against the Windows servers will be used to perform software inventory and can also be used to discover SQL Server instances and databases _(if you have configured Windows authentication mode on your SQL Servers)_.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.
+**Domain credentials** | You can add **Domain credentials** by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> To provide domain credentials, you need to specify the **Domain name** which must be provided in the FQDN format (for example, prod.corp.contoso.com). <br/><br/> You also need to specify a friendly name for credentials, username, and password. <br/><br/> The domain credentials added will be automatically validated for authenticity against the Active Directory of the domain. This is to prevent any account lockouts when the appliance attempts to map the domain credentials against discovered servers. <br/><br/> The appliance will not attempt to map the domain credentials that have failed validation. You need to have at least one successfully validated domain credential or at least one non-domain credential to proceed with software inventory.<br/><br/>The domain credentials mapped automatically against the Windows servers will be used to perform software inventory and can also be used to discover web apps and SQL Server instances and databases _(if you have configured Windows authentication mode on your SQL Servers)_.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.
**Non-domain credentials (Windows/Linux)** | You can add **Windows (Non-domain)** or **Linux (Non-domain)** by selecting the required option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password.
**SQL Server Authentication credentials** | You can add **SQL Server Authentication** credentials by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password. <br/><br/> You can add this type of credentials to discover SQL Server instances and databases running in your VMware environment, if you have configured SQL Server authentication mode on your SQL Servers.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.<br/><br/> You need to provide at least one successfully validated domain credential or at least one Windows (Non-domain) credential so that the appliance can complete the software inventory to discover SQL installed on the servers before it uses the SQL Server authentication credentials to discover the SQL Server instances and databases.
-Check the permissions required on the Windows/Linux credentials to perform the software inventory, agentless dependency analysis and discover SQL Server instances and databases.
+Check the permissions required on the Windows/Linux credentials to perform software inventory, agentless dependency analysis, and discovery of web apps and SQL Server instances and databases.
### Required permissions
Feature | Windows credentials | Linux credentials
| |
**Software inventory** | Guest user account | Regular/normal user account (non-sudo access permissions)
**Discovery of SQL Server instances and databases** | User account that is a member of the sysadmin server role. | _Not supported currently_
+**Discovery of ASP.NET web apps** | Domain or non-domain (local) account with administrative permissions | _Not supported currently_
**Agentless dependency analysis** | Domain or non-domain (local) account with administrative permissions | Root user account, or <br/> an account with these permissions on /bin/netstat and /bin/ls files: CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE.<br/><br/> Set these capabilities using the following commands: <br/><br/> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br/><br/> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat

### Recommended practices to provide credentials

-- It is recommended to create a dedicated domain user account with the [required permissions](add-server-credentials.md#required-permissions), which is scoped to perform software inventory, agentless dependency analysis and discovery of SQL Server instances and databases on the desired servers.
+- It is recommended to create a dedicated domain user account with the [required permissions](add-server-credentials.md#required-permissions), which is scoped to perform software inventory, agentless dependency analysis, and discovery of web apps and SQL Server instances and databases on the desired servers.
- It is recommended to provide at least one successfully validated domain credential or at least one non-domain credential to initiate software inventory.
- To discover SQL Server instances and databases, you can provide domain credentials, if you have configured Windows authentication mode on your SQL Servers.
- You can also provide SQL Server authentication credentials if you have configured SQL Server authentication mode on your SQL Servers, but it is recommended to provide at least one successfully validated domain credential or at least one Windows (Non-domain) credential so that the appliance can first complete the software inventory.
Feature | Windows credentials | Linux credentials
- All the credentials provided on the appliance configuration manager are stored locally on the appliance server and not sent to Azure.
- The credentials stored on the appliance server are encrypted using Data Protection API (DPAPI).
- After you have added credentials, the appliance attempts to automatically map the credentials to perform discovery on the respective servers.
-- The appliance uses the credentials automatically mapped on a server for all the subsequent discovery cycles till the credentials are able to fetch the required discovery data. If the credentials stop working, appliance again attempts to map from the list of added credentials and continue the ongoing discovery on the server.
+- The appliance uses the credentials automatically mapped on a server for all the subsequent discovery cycles until the credentials are able to fetch the required discovery data. If the credentials stop working, the appliance again attempts to map from the list of added credentials and continues the ongoing discovery on the server.
- The domain credentials added will be automatically validated for authenticity against the Active Directory of the domain. This is to prevent any account lockouts when the appliance attempts to map the domain credentials against discovered servers. The appliance will not attempt to map the domain credentials that have failed validation.
- If the appliance cannot map any domain or non-domain credentials against a server, you will see "Credentials not available" status against the server in your project.
migrate Best Practices Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/best-practices-assessment.md
ms. Previously updated : 11/19/2019 Last updated : 07/28/2021
Last updated 11/19/2019
This article summarizes best practices when creating assessments using the Azure Migrate Discovery and assessment tool.
-Assessments you create with Azure Migrate: Discovery and assessment tool are a point-in-time snapshot of data. There are three types of assessments you can create using Azure Migrate: Discovery and assessment:
+Assessments you create with Azure Migrate: Discovery and assessment tool are a point-in-time snapshot of data. There are four types of assessments you can create using Azure Migrate: Discovery and assessment:
**Assessment Type** | **Details**
- |
+ |
**Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. <br/><br/> You can assess your on-premises servers in [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure using this assessment type. [Learn more](concepts-assessment-calculation.md)
**Azure SQL** | Assessments to migrate your on-premises SQL servers from your VMware environment to Azure SQL Database or Azure SQL Managed Instance. [Learn More](concepts-azure-sql-assessment-calculation.md)
+**Azure App Service** | Assessments to migrate your on-premises ASP.NET web apps running on IIS web server, from your VMware environment to Azure App Service. [Learn More](concepts-azure-webapps-assessment-calculation.md)
**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md)

> [!NOTE]
Sizing criteria options in Azure Migrate assessments:
**Sizing criteria** | **Details** | **Data**
 | |
-**Performance-based** | Assessments that make recommendations based on collected performance data | **Azure VM assessment**: VM size recommendation is based on CPU and memory utilization data.<br/><br/> Disk type recommendation (standard HDD/SSD or premium-managed disks) is based on the IOPS and throughput of the on-premises disks.<br/><br/>**Azure SQL assessment**: The Azure SQL configuration is based on performance data of SQL instances and databases, which includes: CPU utilization, Memory utilization, IOPS (Data and Log files), throughput and latency of IO operations<br/><br/>**Azure VMware Solution (AVS) assessment**: AVS nodes recommendation is based on CPU and memory utilization data.
-**As-is on-premises** | Assessments that don't use performance data to make recommendations. | **Azure VM assessment**: VM size recommendation is based on the on-premises VM size<br/><br> The recommended disk type is based on what you select in the storage type setting for the assessment.<br/><br/> **Azure VMware Solution (AVS) assessment**: AVS nodes recommendation is based on the on-premises VM size.
+**Performance-based** | Assessments that make recommendations based on collected performance data | **Azure VM assessment**: VM size recommendation is based on CPU and memory utilization data.<br/><br/> Disk type recommendation (standard HDD/SSD, premium-managed, or ultra disks) is based on the IOPS and throughput of the on-premises disks.<br/><br/>**Azure SQL assessment**: The Azure SQL configuration is based on performance data of SQL instances and databases, which includes: CPU utilization, Memory utilization, IOPS (Data and Log files), throughput, and latency of IO operations<br/><br/>**Azure VMware Solution (AVS) assessment**: AVS nodes recommendation is based on CPU and memory utilization data.
+**As-is on-premises** | Assessments that don't use performance data to make recommendations. | **Azure VM assessment**: VM size recommendation is based on the on-premises VM size<br/><br> The recommended disk type is based on what you select in the storage type setting for the assessment.<br/><br/> **Azure App Service assessment**: Assessment recommendation is based on on-premises web apps configuration data.<br/><br/> **Azure VMware Solution (AVS) assessment**: AVS nodes recommendation is based on the on-premises VM size.
#### Example

As an example, if you have an on-premises VM with four cores at 20% utilization, and memory of 8 GB with 10% utilization, the Azure VM assessment will be as follows:
As an example, if you have an on-premises VM with four cores at 20% utilization,
The Azure Migrate appliance continuously profiles your on-premises environment, and sends metadata and performance data to Azure. Follow these best practices for assessments of servers discovered using an appliance:

-- **Create as-is assessments**: You can create as-is assessments immediately once your servers show up in the Azure Migrate portal. You cannot create an Azure SQL assessment with sizing criteria "As on-premises".
+- **Create as-is assessments**: You can create as-is assessments immediately once your servers show up in the Azure Migrate portal. You cannot create an Azure SQL assessment with sizing criteria "As on-premises". Azure App Service assessments are "As on-premises" by default.
- **Create performance-based assessment**: After setting up discovery, we recommend that you wait at least a day before running a performance-based assessment:
    - Collecting performance data takes time. Waiting at least a day ensures that there are enough performance data points before you run the assessment.
    - When you're running performance-based assessments, make sure you profile your environment for the assessment duration. For example, if you create an assessment with a performance duration set to one week, you need to wait for at least a week after you start discovery, for all the data points to be collected. If you don't, the assessment won't get a five-star rating.
Follow these best practices for assessments of servers imported into Azure Migra
### FTT Sizing Parameters for AVS assessments
-The storage engine used in AVS is vSAN. vSAN storage polices define storage requirements for your virtual machines. These policies guarantee the required level of service for your VMs because they determine how storage is allocated to the VM. These are the available FTT-Raid Combinations:
+The storage engine used in AVS is vSAN. vSAN storage policies define storage requirements for your virtual machines. These policies guarantee the required level of service for your VMs because they determine how storage is allocated to the VM. These are the available FTT-Raid Combinations:
**Failures to Tolerate (FTT)** | **RAID Configuration** | **Minimum Hosts Required** | **Sizing consideration**
- | | |
+ | | |
1 | RAID-1 (Mirroring) | 3 | A 100GB VM would consume 200GB.
1 | RAID-5 (Erasure Coding) | 4 | A 100GB VM would consume 133.33GB.
2 | RAID-1 (Mirroring) | 5 | A 100GB VM would consume 300GB.
2 | RAID-6 (Erasure Coding) | 6 | A 100GB VM would consume 150GB.
3 | RAID-1 (Mirroring) | 7 | A 100GB VM would consume 400GB.

## Best practices for confidence ratings

When you run performance-based assessments, a confidence rating from 1-star (lowest) to 5-star (highest) is awarded to the assessment. To use confidence ratings effectively:
Depending on the percentage of data points available for the selected duration,
61%-80% | 4 Star
81%-100% | 5 Star

## Common assessment issues

Here's how to address some common environment issues that affect assessments.
If you add or remove servers from a group after you create an assessment, the as
### Outdated assessments

#### Azure VM assessment and AVS assessment
+
If there are changes on the on-premises servers that are in a group that's been assessed, the assessment is marked **outdated**. An assessment can be marked as "Outdated" because of one or more changes in the properties below:
+
- Number of processor cores
- Allocated memory
- Boot type or firmware
If there are changes on the on-premises servers that are in a group that's been
- Number of network adapters
- Disk size change (GB allocated)
- NIC properties update. Example: MAC address changes, IP address addition, etc.
-
+ Run the assessment again (**Recalculate**) to reflect the changes.
-
+
#### Azure SQL assessment
+
If there are changes to on-premises SQL instances and databases that are in a group that's been assessed, the assessment is marked **outdated**. An assessment can be marked as "Outdated" because of one or more reasons below:
+
- SQL instance was added or removed from a server
- SQL database was added or removed from a SQL instance
- Total database size in a SQL instance changed by more than 20%
- Change in number of processor cores
-- Change in allocated memory
+- Change in allocated memory
+
+ Run the assessment again (**Recalculate**) to reflect the changes.
+
+#### Azure App Service assessment
+
+If there are changes to on-premises web apps that are in a group that's been assessed, the assessment is marked **outdated**. An assessment can be marked as "Outdated" because of one or more reasons below:
+
+- Web apps were added or removed from a server
+- Configuration changes made to existing web apps.
Run the assessment again (**Recalculate**) to reflect the changes.

### Low confidence rating
-An assessment might not have all the data points for a number of reasons:
+An assessment might not have all the data points for many reasons:
- You did not profile your environment for the duration for which you are creating the assessment. For example, if you are creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you cannot wait for the duration, please change the performance duration to a smaller period and 'Recalculate' the assessment.
- Assessment is not able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, please ensure that:
- - Servers are powered on for the duration of the assessment
+ - Servers are powered on during the assessment
 - Outbound connections on port 443 are allowed
 - For Hyper-V servers, dynamic memory is enabled
 - The connection status of agents in Azure Migrate is 'Connected'; check the last heartbeat
- - For For Azure SQL assessments, Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance blade
+ - For Azure SQL assessments, Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance blade
Please 'Recalculate' the assessment to reflect the latest changes in confidence rating.
migrate Common Questions Appliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/common-questions-appliance.md
Title: Azure Migrate appliance FAQ description: Get answers to common questions about the Azure Migrate appliance.--++ ms. Last updated 03/22/2021
The appliance can connect via the internet or by using Azure ExpressRoute.
- You can use ExpressRoute with Microsoft peering. Public peering is deprecated, and isn't available for new ExpressRoute circuits.
- Using private peering only isn't supported.

## Does appliance analysis affect performance?

The Azure Migrate appliance profiles on-premises servers continuously to measure performance data. This profiling has almost no performance impact on profiled servers.
No. To discover servers in VMware environment, you must have vCenter Server.
## How do I update the appliance?
-By default, the appliance and its installed agents are updated automatically. The appliance checks for updates every 24 hours. Updates that fail are retried.
+By default, the appliance and its installed agents are updated automatically. The appliance checks for updates every 24 hours. Updates that fail are retried.
Only the appliance and the appliance agents are updated by these automatic updates. The operating system is not updated by Azure Migrate automatic updates. Use Windows Updates to keep the operating system up to date.
Yes. In the portal, go the **Agent health** page for the Azure Migrate: Discover
Yes, we now support multiple server credentials to perform software inventory (discovery of installed applications), agentless dependency analysis, and discovery of SQL Server instances and databases. [Learn more](tutorial-discover-vmware.md#provide-server-credentials) on how to provide credentials on the appliance configuration manager.

## What type of server credentials can I add on the VMware appliance?
+
You can provide domain, Windows (non-domain), Linux (non-domain), or SQL Server authentication credentials on the appliance configuration manager. [Learn more](add-server-credentials.md) about how to provide credentials and how we handle them.

## What type of SQL Server connection properties are supported by Azure Migrate for SQL discovery?
+
Azure Migrate will encrypt the communication between the Azure Migrate appliance and source SQL Server instances (with the Encrypt connection property set to TRUE). These connections are encrypted with [TrustServerCertificate](/dotnet/api/system.data.sqlclient.sqlconnectionstringbuilder.trustservercertificate) (set to TRUE); the transport layer will use SSL to encrypt the channel and bypass the certificate chain to validate trust. The appliance server must be set up to [trust the certificate's root authority](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine). If no certificate has been provisioned on the server when it starts up, SQL Server generates a self-signed certificate that is used to encrypt login packets. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine).

## Next steps

Read the [Azure Migrate overview](migrate-services-overview.md).
migrate Common Questions Discovery Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/common-questions-discovery-assessment.md
This article answers common questions about discovery, assessment, and dependenc
- [General questions](resources-faq.md) about Azure Migrate
- Questions about the [Azure Migrate appliance](common-questions-appliance.md)
- Questions about [server migration](common-questions-server-migration.md)
-- Get questions answered in the [Azure Migrate forum](https://aka.ms/AzureMigrateForum)
+- Get questions answered in the [Azure Migrate forum](https://social.msdn.microsoft.com/forums/azure/home?forum=AzureMigrate)
## What geographies are supported for discovery and assessment with Azure Migrate?

Review the supported geographies for [public](migrate-support-matrix.md#supported-geographies-public-cloud) and [government clouds](migrate-support-matrix.md#supported-geographies-azure-government).

## How many servers can I discover with an appliance?

You can discover up to 10,000 servers from VMware environment, up to 5,000 servers from Hyper-V environment, and up to 1,000 physical servers by using a single appliance. If you have more servers, read about [scaling a Hyper-V assessment](scale-hyper-v-assessment.md), [scaling a VMware assessment](scale-vmware-assessment.md), or [scaling a physical server assessment](scale-physical-assessment.md).
You can discover up to 10,000 servers from VMware environment, up to 5,000 serve
## How do I choose the assessment type?

- Use **Azure VM assessments** when you want to assess servers from your on-premises [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs. [Learn More](concepts-assessment-calculation.md)
- Use assessment type **Azure SQL** when you want to assess your on-premises SQL Server from your VMware environment for migration to Azure SQL Database or Azure SQL Managed Instance. [Learn More](concepts-assessment-calculation.md)
+- Use assessment type **Azure App Service** when you want to assess your on-premises ASP.NET web apps running on IIS web server from your VMware environment for migration to Azure App Service. [Learn More](concepts-assessment-calculation.md)
- Use **Azure VMware Solution (AVS)** assessments when you want to assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md)
-- You can use a common group with VMware machines only to run both types of assessments. Note that if you are running AVS assessments in Azure Migrate for the first time, it is advisable to create a new group of VMware machines.
-
+- You can use a common group with VMware machines only to run both types of assessments. If you are running AVS assessments in Azure Migrate for the first time, it is advisable to create a new group of VMware machines.
## Why is performance data missing for some/all servers in my Azure VM and/or AVS assessment report?
-For "Performance-based" assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance cannot collect performance data for the on-premises servers. Please check:
+For "Performance-based" assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance cannot collect performance data for the on-premises servers. Check:
- If the servers are powered on for the duration for which you are creating the assessment
- If only memory counters are missing and you are trying to assess servers in Hyper-V environment. In this scenario, please enable dynamic memory on the servers and 'Recalculate' the assessment to reflect the latest changes. The appliance can collect memory utilization values for servers in Hyper-V environment only when the server has dynamic memory enabled.
For "Performance-based" assessment, the assessment report export says 'Percentag
> [!Note]
> If any of the performance counters are missing, Azure Migrate: Server Assessment falls back to the allocated cores/memory on-premises and recommends a VM size accordingly.

## Why is performance data missing for some/all SQL instances/databases in my Azure SQL assessment?

To ensure performance data is collected, please check:
To ensure performance data is collected, please check:
If any of the performance counters are missing, Azure SQL assessment recommends the smallest Azure SQL configuration for that instance/database.
+## Why is a confidence rating not available for Azure App Service assessments?
+
+Performance data is not captured for Azure App Service assessments, so you do not see a confidence rating for this assessment type. Azure App Service assessment takes the configuration data of web apps into account while performing the assessment calculation.
+
## Why is the confidence rating of my assessment low?

The confidence rating is calculated for "Performance-based" assessments based on the percentage of [available data points](./concepts-assessment-calculation.md#ratings) needed to compute the assessment. Below are the reasons why an assessment could get a low confidence rating:

- You did not profile your environment for the duration for which you are creating the assessment. For example, if you are creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you cannot wait for the duration, please change the performance duration to a smaller period and **Recalculate** the assessment.
-
- Assessment is not able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, please ensure that:
 - Servers are powered on for the duration of the assessment
 - Outbound connections on port 443 are allowed
- - For Hyper-V Servers dynamic memory is enabled
+ - For Hyper-V Servers dynamic memory is enabled
- The connection status of agents in Azure Migrate are 'Connected' and check the last heartbeat
- - For For Azure SQL assessments, Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance blade
+ - For Azure SQL assessments, Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance blade
Please **Recalculate** the assessment to reflect the latest changes in confidence rating.

- For Azure VM and AVS assessments, a few servers were created after discovery had started. For example, if you are creating an assessment for the performance history of the last month, but a few servers were created in the environment only a week ago, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-assessment-calculation.md#confidence-ratings-performance-based)
- For Azure SQL assessments, a few SQL instances or databases were created after discovery had started. For example, if you are creating an assessment for the performance history of the last month, but a few SQL instances or databases were created in the environment only a week ago, the performance data for the new instances or databases will not be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-azure-sql-assessment-calculation.md#confidence-ratings)
-## > The number of Azure VM or AVS assessments on the Discovery and assessment tool are incorrect
- To remediate this, click on the total number of assessments to navigate to all the assessments and recalculate the Azure VM or AVS assessment. The discovery and assessment tool will then show the correct count for that assessment type.
+## The number of Azure VM or AVS assessments on the Discovery and assessment tool are incorrect
+ To remediate this, click on the total number of assessments to navigate to all the assessments and recalculate the Azure VM or AVS assessment. The discovery and assessment tool will then show the correct count for that assessment type.
## I want to try out the new Azure SQL assessment
+
Discovery and assessment of SQL Server instances and databases running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, please ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
+## I want to try out the new Azure App Service assessment
+
+Discovery and assessment of .NET web apps running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, please ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
## I can't see some servers when I am creating an Azure SQL assessment

-- Azure SQL assessment can only be done on servers running where SQL instances were discovered. If you don't see the servers and SQL instances that you wish to assess, please wait for some time for the discovery to get completed and then create the assessment.
+- Azure SQL assessment can only be done on servers where SQL instances were discovered. If you don't see the servers and SQL instances that you wish to assess, please wait for some time for the discovery to get completed and then create the assessment.
- If you are not able to see a previously created group while creating the assessment, please remove any non-VMware server or any server without a SQL instance from the group.
- If you are running Azure SQL assessments in Azure Migrate for the first time, it is advisable to create a new group of servers.
+## I can't see some servers when I am creating an Azure App Service assessment
+
+- Azure App Service assessment can only be done on servers where the web server role was discovered. If you don't see the servers that you wish to assess, please wait for some time for the discovery to get completed and then create the assessment.
+- If you are not able to see a previously created group while creating the assessment, please remove any non-VMware server or any server without a web app from the group.
+- If you are running Azure App Service assessments in Azure Migrate for the first time, it is advisable to create a new group of servers.
## I want to understand how the readiness for my instance was computed
+
The readiness for your SQL instances has been computed after doing a feature compatibility check with the targeted Azure SQL deployment type (Azure SQL Database or Azure SQL Managed Instance). [Learn more](./concepts-azure-sql-assessment-calculation.md#calculate-readiness)
+## I want to understand how the readiness for my web apps is computed
+
+The readiness for your web apps is computed by running a series of technical checks to determine whether your web app will run successfully in Azure App Service. These checks are documented [here](https://github.com/Azure/App-Service-Migration-Assistant/wiki/Readiness-Checks).
+
+## Why is my web app marked as Ready with conditions or Not ready in my Azure App Service assessment?
+
+This can happen when one or more technical checks fail for a given web app. You can click the readiness status for the web app to see details and remediation for the failed checks.
## Why is the readiness for all my SQL instances marked as unknown?
+
If your discovery was started recently and is still in progress, you might see the readiness for some or all SQL instances as unknown. We recommend that you wait for some time for the appliance to profile the environment and then recalculate the assessment.
-The SQL discovery is performed once every 24 hours and you might need to wait upto a day for the latest configuration changes to reflect.
+The SQL discovery is performed once every 24 hours and you might need to wait up to a day for the latest configuration changes to reflect.
## Why is the readiness for some of my SQL instances marked as unknown?
-This could happen if:
+
+This could happen if:
+
- The discovery is still in progress. We recommend that you wait for some time for the appliance to profile the environment and then recalculate the assessment.
- There are some discovery issues that you need to fix in the Errors and notifications blade.
The SQL discovery is performed once every 24 hours and you might need to wait up
## My assessment is in Outdated state

### Azure VM/AVS assessment
+
If there are on-premises changes to servers that are in a group that's been assessed, the assessment is marked outdated. An assessment can be marked as "Outdated" because of one or more changes in the properties below:
+
- Number of processor cores
- Allocated memory
- Boot type or firmware
If there are on-premises changes to servers that are in a group that's been asse
Please **Recalculate** the assessment to reflect the latest changes in the assessment.

### Azure SQL assessment
+
If there are changes to on-premises SQL instances and databases that are in a group that's been assessed, the assessment is marked **outdated**:
+
- SQL instance was added or removed from a server
- SQL database was added or removed from a SQL instance
- Total database size in a SQL instance changed by more than 20%
If there are changes to on-premises SQL instances and databases that are in a gr
Please **Recalculate** the assessment to reflect the latest changes in the assessment.

## Why was I recommended a particular target deployment type?
+
Azure Migrate recommends a specific Azure SQL deployment type that is compatible with your SQL instance. Migrating to a Microsoft recommended target reduces your overall migration effort. This Azure SQL configuration (SKU) has been recommended after considering the performance characteristics of your SQL instance and the databases it manages. If multiple Azure SQL configurations are eligible, we recommend the one that is the most cost-effective. [Learn more](./concepts-azure-sql-assessment-calculation.md#calculate-sizing)
-## What deployment target should I choose if my SQL instance is ready for Azure SQL DB and Azure SQL MI?
+## What deployment target should I choose if my SQL instance is ready for Azure SQL DB and Azure SQL MI?
+
If your instance is ready for both Azure SQL DB and Azure SQL MI, we recommend the target deployment type for which the estimated cost of Azure SQL configuration is lower.

## Why is my instance marked as Potentially ready for Azure VM in my Azure SQL assessment?
+
This can happen when the target deployment type chosen in the assessment properties is **Recommended** and the SQL instance is not ready for Azure SQL Database and Azure SQL Managed Instance. The user is recommended to create an assessment in Azure Migrate with assessment type as **Azure VM** to determine whether the server on which the instance is running is ready to migrate to an Azure VM instead:
-- Azure VM assessments in Azure Migrate are currently lift-and-shift focused and will not consider the specific performance metrics for running SQL instances and databases on the Azure virtual machine.
+
+- Azure VM assessments in Azure Migrate are currently lift-and-shift focused and will not consider the specific performance metrics for running SQL instances and databases on the Azure virtual machine.
- When you run an Azure VM assessment on a server, the recommended size and cost estimates will be for all instances running on the server and can be migrated to an Azure VM using the Server Migration tool. Before you migrate, [review the performance guidelines](../azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md) for SQL Server on Azure virtual machines. ## I can't see some databases in my assessment even though the instance is part of the assessment
-The Azure SQL assessment only includes databases that are in online status. In case the database is in any other status, the assessment ignores the readiness, sizing and cost calculation for such databases. In case you wish you assess such databases, please change the status of the database and recalculate the assessment in some time.
+The Azure SQL assessment only includes databases that are in online status. If the database is in any other status, the assessment ignores the readiness, sizing, and cost calculation for such databases. If you wish to assess such databases, please change the status of the database and recalculate the assessment in some time.
## I want to compare costs for running my SQL instances on Azure VM vs Azure SQL Database/Azure SQL Managed Instance

You can create an assessment with type **Azure VM** on the same group that was used in your **Azure SQL** assessment. You can then compare the two reports side by side. However, Azure VM assessments in Azure Migrate are currently lift-and-shift focused and will not consider the specific performance metrics for running SQL instances and databases on the Azure virtual machine. When you run an Azure VM assessment on a server, the recommended size and cost estimates will be for all instances running on the server and can be migrated to an Azure VM using the Server Migration tool. Before you migrate, [review the performance guidelines](../azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md) for SQL Server on Azure virtual machines.

## The storage cost in my Azure SQL assessment is zero
-For Azure SQL Managed Instance, there is no storage cost added for the first 32 GB/instance/month storage and additional storage cost is added for storage in 32GB increments. [Learn More](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/)
+
+For Azure SQL Managed Instance, there is no storage cost added for the first 32 GB/instance/month storage and additional storage cost is added for storage in 32 GB increments. [Learn More](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/)
## I can't see some groups when I am creating an Azure VMware Solution (AVS) assessment

-- AVS assessment can be done on groups that have only VMware machines. Please remove any non-VMware machine from the group if you intend to perform an AVS assessment.
+- AVS assessment can be done on groups that have only VMware machines. Remove any non-VMware machine from the group if you intend to perform an AVS assessment.
- If you are running AVS assessments in Azure Migrate for the first time, it is advisable to create a new group of VMware machines.
+## Queries regarding Ultra disks
+
+### Can I migrate my disks to Ultra disk using Azure Migrate?
+
+No. Currently, both Azure Migrate and Azure Site Recovery do not support migration to Ultra disks. Find steps to deploy an Ultra disk [here](https://docs.microsoft.com/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal#deploy-an-ultra-disk).
+
+### Why are the provisioned IOPS and throughput in my Ultra disk more than my on-premises IOPS and throughput?
+
+As per the [official pricing page](https://azure.microsoft.com/pricing/details/managed-disks/), Ultra Disk is billed based on the provisioned size, provisioned IOPS and provisioned throughput. As per an example provided:
+
+If you provisioned a 200 GiB Ultra Disk, with 20,000 IOPS and 1,000 MB/second and deleted it after 20 hours, it will map to the disk size offer of 256 GiB and you'll be billed for the 256 GiB, 20,000 IOPS and 1,000 MB/second for 20 hours.
+
+IOPS to be provisioned = (Throughput discovered) * 1024 / 256
+
+For example, a disk with a discovered throughput of 100 MB/s would be provisioned with 100 * 1024 / 256 = 400 IOPS.
+
+### Does the Ultra disk recommendation consider latency?
+
+No, currently only disk size, total throughput, and total IOPS are used for sizing and costing.
+
+### I can see M series supports Ultra disk, but in my assessment where Ultra disk was recommended, it says "No VM found for this location"?
+
+This is possible as not all VM sizes that support Ultra disk are present in all Ultra disk supported regions. Change the target assessment region to get the VM size for this server.
## I can't see some VM types and sizes in Azure Government

VM types and sizes supported for assessment and migration depend on availability in Azure Government location. You can [review and compare](https://azure.microsoft.com/global-infrastructure/services/?regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia&products=virtual-machines) VM types in Azure Government.
Yes, Azure Migrate requires vCenter Server in a VMware environment to perform di
With as-on-premises sizing, Azure Migrate doesn't consider server performance data for assessment. Azure Migrate assesses VM sizes based on the on-premises configuration. With performance-based sizing, sizing is based on utilization data. For example, if an on-premises server has four cores and 8 GB of memory at 50% CPU utilization and 50% memory utilization:
+
- As-on-premises sizing will recommend an Azure VM SKU that has four cores and 8 GB of memory.
- Performance-based sizing will recommend a VM SKU that has two cores and 4 GB of memory because the utilization percentage is considered.

Similarly, disk sizing depends on sizing criteria and storage type:
-- If the sizing criteria is performance-based and the storage type is automatic, Azure Migrate takes the IOPS and throughput values of the disk into account when it identifies the target disk type (Standard or Premium).
-- If the sizing criteria is performance-based and the storage type is Premium, Azure Migrate recommends a Premium disk SKU based on the size of the on-premises disk. The same logic is applied to disk sizing when the sizing is as-on-premises and the storage type is Standard or Premium.
+
+- If the sizing criteria is "performance-based" and the storage type is automatic, Azure Migrate takes the IOPS and throughput values of the disk into account when it identifies the target disk type (Standard, Premium or Ultra disk).
+- If the sizing criteria is "as on premises" and the storage type is Premium, Azure Migrate recommends a Premium disk SKU based on the size of the on-premises disk. The same logic is applied to disk sizing when the sizing is as-on-premises and the storage type is Standard, Premium or Ultra disk.
## Does performance history and utilization affect sizing in an Azure VM assessment?
For example, if you set the performance duration to one day and the percentile v
Using the 95th percentile value ensures that outliers are ignored. Outliers might be included if Azure Migrate uses the 99th percentile. To pick the peak usage for the period without missing any outliers, set Azure Migrate to use the 99th percentile.

## How are import-based assessments different from assessments with discovery source as appliance?
-Import-based Azure VM assessments are assessments created with machines that are imported into Azure Migrate using a CSV file. Only four fields are mandatory to import: Server name, cores, memory, and operating system. Here are some things to note:
+Import-based Azure VM assessments are assessments created with machines that are imported into Azure Migrate using a CSV file. Only four fields are mandatory to import: Server name, cores, memory, and operating system. Here are some things to note:
+
+ - The readiness criteria is less stringent in import-based assessments on the boot type parameter. If the boot type isn't provided, it is assumed the machine has BIOS boot type and the machine is not marked as **Conditionally Ready**. In assessments with discovery source as appliance, the readiness is marked as **Conditionally Ready** if the boot type is missing. This difference in readiness calculation is because users may not have all information on the machines in the early stages of migration planning when import-based assessments are done.
- Performance-based import assessments use the utilization value provided by the user for right-sizing calculations. Since the utilization value is provided by the user, the **Performance history** and **Percentile utilization** options are disabled in the assessment properties. In assessments with discovery source as appliance, the chosen percentile value is picked from the performance data collected by the appliance.

## Why is the suggested migration tool in import-based AVS assessment marked as unknown?

For machines imported via a CSV file, the default migration tool in an AVS assessment is unknown. However, for VMware machines, it is recommended to use the VMware Hybrid Cloud Extension (HCX) solution. [Learn More](../azure-vmware/install-vmware-hcx.md).

## What is dependency visualization?

Dependency visualization can help you assess groups of servers to migrate with greater confidence. Dependency visualization cross-checks machine dependencies before you run an assessment. It helps ensure that nothing is left behind, and it helps avoid unexpected outages when you migrate to Azure. Azure Migrate uses the Service Map solution in Azure Monitor to enable dependency visualization. [Learn more](concepts-dependency-visualization.md).
Data | Source machine server name, process, application name.<br/><br/> Destinat
Visualization | Dependency map of single server can be viewed over a duration of one hour to 30 days. | Dependency map of a single server.<br/><br/> Map can be viewed over an hour only.<br/><br/> Dependency map of a group of servers.<br/><br/> Add and remove servers in a group from the map view.
Data export | Last 30 days data can be downloaded in a CSV format. | Data can be queried with Log Analytics.

## Do I need to deploy the appliance for agentless dependency analysis?

Yes, the [Azure Migrate appliance](migrate-appliance.md) must be deployed.
You need these agents only if you use agent-based dependency visualization.
## Can I use an existing workspace?
-Yes, for agent-based dependency visualization you can attach an existing workspace to the migration project and use it for dependency visualization.
+Yes, for agent-based dependency visualization you can attach an existing workspace to the migration project and use it for dependency visualization.
## Can I export the dependency visualization report?
migrate Concepts Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/concepts-assessment-calculation.md
ms. Previously updated : 05/27/2020 Last updated : 07/28/2021 # Assessment overview (migrate to Azure VMs)
An assessment with the Discovery and assessment tool measures the readiness and
There are four types of assessments you can create using Azure Migrate: Discovery and assessment.

**Assessment Type** | **Details**
- |
+ |
**Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. You can assess your on-premises servers in [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs using this assessment type.
**Azure SQL** | Assessments to migrate your on-premises SQL servers from your VMware environment to Azure SQL Database or Azure SQL Managed Instance.
+**Azure App Service** | Assessments to migrate on-premises web apps from your VMware environment to Azure App Service.
**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md)

> [!NOTE]
Assessments you create with Azure Migrate are a point-in-time snapshot of data.
**Assessment type** | **Details** | **Data**
 | |
-**Performance-based** | Assessments that make recommendations based on collected performance data | The VM size recommendation is based on CPU and RAM-utilization data.<br/><br/> The disk-type recommendation is based on the input/output operations per second (IOPS) and throughput of the on-premises disks. Disk types are Azure Standard HDD, Azure Standard SSD, and Azure Premium disks.
+**Performance-based** | Assessments that make recommendations based on collected performance data | The VM size recommendation is based on CPU and RAM-utilization data.<br/><br/> The disk-type recommendation is based on the input/output operations per second (IOPS) and throughput of the on-premises disks. Disk types are Azure Standard HDD, Azure Standard SSD, Azure Premium disks, and Azure Ultra disks.
**As-is on-premises** | Assessments that don't use performance data to make recommendations | The VM size recommendation is based on the on-premises server size.<br/><br> The recommended disk type is based on the selected storage type for the assessment.

## How do I run an assessment?
Here's what's included in an Azure VM assessment:
**Property** | **Details**
 | 
**Target location** | The location to which you want to migrate. The assessment currently supports these target Azure regions:<br/><br/> Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central India, Central US, China East, China North, East Asia, East US, East US 2, Germany Central, Germany Northeast, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, South Central US, Southeast Asia, South India, UK South, UK West, US Gov Arizona, US Gov Texas, US Gov Virginia, West Central US, West Europe, West India, West US, and West US 2.
-**Target storage disk (as-is sizing)** | The type of disk to use for storage in Azure. <br/><br/> Specify the target storage disk as Premium-managed, Standard SSD-managed, or Standard HDD-managed.
-**Target storage disk (performance-based sizing)** | Specifies the type of target storage disk as automatic, Premium-managed, Standard HDD-managed, or Standard SSD-managed.<br/><br/> **Automatic**: The disk recommendation is based on the performance data of the disks, meaning the IOPS and throughput.<br/><br/>**Premium or Standard**: The assessment recommends a disk SKU within the storage type selected.<br/><br/> If you want a single-instance VM service-level agreement (SLA) of 99.9%, consider using Premium-managed disks. This use ensures that all disks in the assessment are recommended as Premium-managed disks.<br/><br/> Azure Migrate supports only managed disks for migration assessment.
+**Target storage disk (as-is sizing)** | The type of disk to use for storage in Azure. <br/><br/> Specify the target storage disk as Premium-managed, Standard SSD-managed, Standard HDD-managed, or Ultra disk.
+**Target storage disk (performance-based sizing)** | Specifies the type of target storage disk as automatic, Premium-managed, Standard HDD-managed, Standard SSD-managed, or Ultra disk.<br/><br/> **Automatic**: The disk recommendation is based on the performance data of the disks, meaning the IOPS and throughput.<br/><br/>**Premium or Standard or Ultra disk**: The assessment recommends a disk SKU within the storage type selected.<br/><br/> If you want a single-instance VM service-level agreement (SLA) of 99.9%, consider using Premium-managed disks. This use ensures that all disks in the assessment are recommended as Premium-managed disks.<br/><br/> If you are looking to run data-intensive workloads that need high throughput, high IOPS, and consistent low latency disk storage, consider using Ultra disks.<br/><br/> Azure Migrate supports only managed disks for migration assessment.
**Azure Reserved VM Instances** | Specifies [reserved instances](https://azure.microsoft.com/pricing/reserved-vm-instances/) so that cost estimations in the assessment take them into account.<br/><br/> When you select 'Reserved instances', the 'Discount (%)' and 'VM uptime' properties are not applicable.<br/><br/> Azure Migrate currently supports Azure Reserved VM Instances only for pay-as-you-go offers.
**Sizing criteria** | Used to rightsize the Azure VM.<br/><br/> Use as-is sizing or performance-based sizing.
**Performance history** | Used with performance-based sizing. Performance history specifies the duration used when performance data is evaluated.
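The **Cores** and **RAM** checks in the readiness table below multiply utilized (or allocated) resources by the comfort factor before comparing them with Azure limits. Here's a minimal sketch of that adjustment; the function and the numbers are illustrative assumptions, not part of Azure Migrate:

```python
def effective_requirement(utilized, allocated, comfort_factor, has_history):
    """Scale the compared value by the comfort factor.

    Uses the utilized cores/RAM when performance history exists; otherwise
    falls back to the allocated value, as described in the table below.
    """
    base = utilized if has_history else allocated
    return base * comfort_factor

# Example: 8 utilized cores with a 1.3x comfort factor -> 10.4 effective cores.
print(effective_requirement(8, 16, 1.3, True))
```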
**Property** | **Details** | **Azure readiness status**
 | |
**Boot type** | Azure supports VMs with a boot type of BIOS, not UEFI. | Conditionally ready if the boot type is UEFI
**Cores** | Each server must have no more than 128 cores, which is the maximum number an Azure VM supports.<br/><br/> If performance history is available, Azure Migrate considers the utilized cores for comparison. If the assessment settings specify a comfort factor, the number of utilized cores is multiplied by the comfort factor.<br/><br/> If there's no performance history, Azure Migrate uses the allocated cores to apply the comfort factor. | Ready if the number of cores is within the limit
**RAM** | Each server must have no more than 3,892 GB of RAM, which is the maximum size an Azure M-series Standard_M128m&nbsp;<sup>2</sup> VM supports. [Learn more](../virtual-machines/sizes.md).<br/><br/> If performance history is available, Azure Migrate considers the utilized RAM for comparison. If a comfort factor is specified, the utilized RAM is multiplied by the comfort factor.<br/><br/> If there's no history, the allocated RAM is used to apply a comfort factor.<br/><br/> | Ready if the amount of RAM is within the limit
-**Storage disk** | The allocated size of a disk must be no more than 32 TB. Although Azure supports 64-TB disks with Azure Ultra SSD disks, the assessment currently checks for 32 TB as the disk-size limit because it doesn't support Ultra SSD yet. <br/><br/> The number of disks attached to the server, including the OS disk, must be 65 or fewer. | Ready if the disk size and number are within the limits
+**Storage disk** | The allocated size of a disk must be no more than 64 TB.<br/><br/> The number of disks attached to the server, including the OS disk, must be 65 or fewer. | Ready if the disk size and number are within the limits
**Networking** | A server must have no more than 32 network interfaces (NICs) attached to it. | Ready if the number of NICs is within the limit

### Guest operating system
After the server is marked as ready for Azure, the assessment makes sizing recom
If you use as-is on-premises sizing, the assessment doesn't consider the performance history of the VMs and disks in the Azure VM assessment.
- **Compute sizing**: The assessment allocates an Azure VM SKU based on the size allocated on-premises.
-- **Storage and disk sizing**: The assessment looks at the storage type specified in assessment properties and recommends the appropriate disk type. Possible storage types are Standard HDD, Standard SSD, and Premium. The default storage type is Premium.
+- **Storage and disk sizing**: The assessment looks at the storage type specified in assessment properties and recommends the appropriate disk type. Possible storage types are Standard HDD, Standard SSD, Premium, and Ultra disk. The default storage type is Premium.
- **Network sizing**: The assessment considers the network adapter on the on-premises server.

### Calculate sizing (performance-based)
For storage sizing in an Azure VM assessment, Azure Migrate tries to map each di
1. Assessment adds the read and write IOPS of a disk to get the total IOPS required. Similarly, it adds the read and write throughput values to get the total throughput of each disk. In the case of import-based assessments, you have the option to provide the total IOPS, total throughput, and total number of disks in the imported file without specifying individual disk settings. If you do this, individual disk sizing is skipped, and the supplied data is used directly to compute sizing and select an appropriate VM SKU.
-1. If you've specified the storage type as automatic, the selected type is based on the effective IOPS and throughput values. The Assessment determines whether to map the disk to a Standard HDD, Standard SSD, or Premium disk in Azure. If the storage type is set to one of those disk types, the assessment tries to find a disk SKU within the storage type selected.
+1. If you've specified the storage type as automatic, the selected type is based on the effective IOPS and throughput values. The assessment determines whether to map the disk to a Standard HDD, Standard SSD, Premium disk, or Ultra disk in Azure. If the storage type is set to one of those disk types, the assessment tries to find a disk SKU within the storage type selected.
1. Disks are selected as follows:
    - If assessment can't find a disk with the required IOPS and throughput, it marks the server as unsuitable for Azure.
    - If assessment finds a set of suitable disks, it selects the disks that support the location specified in the assessment settings.
    - If there are multiple eligible disks, assessment selects the disk with the lowest cost.
    - If performance data for any disk is unavailable, the configuration disk size is used to find a Standard SSD disk in Azure.
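As a rough illustration of these selection rules, here's a hypothetical sketch; the disk catalog entries (SKU names, limits, prices, regions) are invented placeholders, not actual Azure disk SKUs:

```python
# Hypothetical disk catalog: (sku, max_iops, max_mbps, monthly_cost, regions).
DISK_CATALOG = [
    ("P10", 500, 100, 19.71, {"eastus", "westeurope"}),
    ("P20", 2300, 150, 66.56, {"eastus", "westeurope"}),
    ("P30", 5000, 200, 122.88, {"eastus"}),
]

def select_disk(required_iops, required_mbps, target_region):
    """Pick the lowest-cost disk that meets IOPS/throughput in the target region."""
    eligible = [d for d in DISK_CATALOG
                if d[1] >= required_iops and d[2] >= required_mbps
                and target_region in d[4]]
    if not eligible:
        return None  # no suitable disk: the server is marked unsuitable for Azure
    return min(eligible, key=lambda d: d[3])  # lowest-cost eligible disk

print(select_disk(1800, 120, "westeurope"))  # -> the "P20" entry
```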
+##### Ultra disk sizing
+
+For Ultra disks, a range of IOPS and throughput is allowed for a particular disk size, so the sizing logic differs from that used for Standard and Premium disks:
+1. Three Ultra disk sizes are calculated:
+ - One disk (Disk 1) is found that can satisfy the disk size requirement
+ - One disk (Disk 2) is found that can satisfy total IOPS requirement
+      - IOPS to be provisioned = (source disk throughput) × 1024 / 256
+ - One disk (Disk 3) is found that can satisfy total throughput requirement
+1. Out of the three disks, the one with the largest disk size is selected and rounded up to the next available [Ultra disk offering](https://docs.microsoft.com/azure/virtual-machines/disks-types#disk-size). This is the provisioned Ultra disk size.
+1. Provisioned IOPS is calculated using the following logic:
+    - If the source throughput discovered is in the allowable range for the Ultra disk size, the provisioned IOPS equals the source disk IOPS
+    - Otherwise, provisioned IOPS is calculated as (source disk throughput) × 1024 / 256
+1. Provisioned throughput range is dependent on provisioned IOPS
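A hedged sketch of this sizing flow follows; the size table and the per-GiB IOPS ceiling are illustrative assumptions, not the official Ultra disk limits:

```python
# Assumed Ultra disk size offerings in GiB (illustrative subset, in ascending order).
ULTRA_SIZES_GIB = [4, 8, 16, 32, 64, 128, 256, 512] + [1024 * i for i in range(1, 65)]
ASSUMED_MAX_IOPS_PER_GIB = 300  # placeholder ceiling used to size Disks 2 and 3

def ultra_disk_sizing(source_size_gib, source_iops, source_mbps):
    """Sketch of the three-disk calculation described above."""
    iops_from_throughput = source_mbps * 1024 / 256  # (throughput) x 1024 / 256

    disk1 = source_size_gib                                  # size requirement
    disk2 = source_iops / ASSUMED_MAX_IOPS_PER_GIB           # IOPS requirement
    disk3 = iops_from_throughput / ASSUMED_MAX_IOPS_PER_GIB  # throughput requirement

    # Round the max of the three up to the next available offering.
    required = max(disk1, disk2, disk3)
    provisioned_size = next(s for s in ULTRA_SIZES_GIB if s >= required)

    # Provisioned IOPS: the source IOPS when the source throughput fits the
    # allowable range; otherwise the throughput-derived IOPS value.
    provisioned_iops = max(source_iops, iops_from_throughput)
    return provisioned_size, provisioned_iops

print(ultra_disk_sizing(512, 20000, 900))  # -> (512, 20000)
```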
+
+
#### Calculate network sizing

For an Azure VM assessment, the assessment tries to find an Azure VM that supports the number and required performance of network adapters attached to the on-premises server.
Here are a few reasons why an assessment could get a low confidence rating:
After sizing recommendations are complete, an Azure VM assessment in Azure Migrate calculates compute and storage costs for after migration.
-- **Compute cost**: Azure Migrate uses the recommended Azure VM size and the Azure Billing API to calculate the monthly cost for the server.
+### Compute cost
+Azure Migrate uses the recommended Azure VM size and the Azure Billing API to calculate the monthly cost for the server.
+
+The calculation takes into account the:
+- Operating system
+- Software assurance
+- Reserved instances
+- VM uptime
+- Location
+- Currency settings
+
+The assessment aggregates the cost across all servers to calculate the total monthly compute cost.
+
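For example, the per-server math reduces to an hourly rate (from the Billing API, already reflecting OS, offer, and location) scaled by uptime and discounts. A minimal sketch with made-up rates; none of these values are actual Azure prices:

```python
HOURS_PER_MONTH = 730  # approximate hours in a month

def monthly_compute_cost(hourly_rate, uptime_fraction=1.0, discount_pct=0.0):
    """hourly_rate would come from the Azure Billing API for the recommended
    VM size, OS, offer, and location; here it's a placeholder input."""
    return hourly_rate * HOURS_PER_MONTH * uptime_fraction * (1 - discount_pct / 100)

# Aggregate across servers (the rates below are invented examples).
total = sum(monthly_compute_cost(rate) for rate in (0.096, 0.192, 0.384))
print(round(total, 2))
```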
+### Storage cost
+The monthly storage cost for a server is calculated by aggregating the monthly cost of all disks that are attached to the server.
+
+#### Standard and Premium disk
+The cost for Standard or Premium disks is calculated based on the selected/recommended disk size.
- The calculation takes into account the:
- - Operating system
- - Software assurance
- - Reserved instances
- - VM uptime
- - Location
- - Currency settings
+#### Ultra disk
- The assessment aggregates the cost across all servers to calculate the total monthly compute cost.
+The cost for Ultra disks is calculated based on the provisioned size, provisioned IOPS, and provisioned throughput. [Learn more](https://azure.microsoft.com/pricing/details/managed-disks/)
-- **Storage cost**: The monthly storage cost for a server is calculated by aggregating the monthly cost of all disks that are attached to the server.
+Cost is calculated using the following logic:
+- The cost of the disk size is calculated by multiplying the provisioned disk size by the hourly price of disk capacity
+- The cost of provisioned IOPS is calculated by multiplying the provisioned IOPS by the hourly provisioned IOPS price
+- The cost of provisioned throughput is calculated by multiplying the provisioned throughput by the hourly provisioned throughput price
+- The Ultra disk VM reservation fee is not added in the total cost. [Learn More](https://azure.microsoft.com/pricing/details/managed-disks/)
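Put together, a hedged sketch of that calculation; the hourly unit prices below are placeholders, not Azure's actual managed disk rates:

```python
HOURS_PER_MONTH = 730

# Hypothetical hourly unit prices (placeholders only; see the pricing page).
PRICE_PER_GIB_HOUR = 0.000164
PRICE_PER_IOPS_HOUR = 0.000068
PRICE_PER_MBPS_HOUR = 0.000539

def ultra_disk_monthly_cost(provisioned_gib, provisioned_iops, provisioned_mbps):
    """Capacity + IOPS + throughput costs. The Ultra disk VM reservation fee
    is intentionally excluded, matching the assessment behavior above."""
    return HOURS_PER_MONTH * (provisioned_gib * PRICE_PER_GIB_HOUR
                              + provisioned_iops * PRICE_PER_IOPS_HOUR
                              + provisioned_mbps * PRICE_PER_MBPS_HOUR)

print(round(ultra_disk_monthly_cost(512, 20000, 900), 2))
```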
- Assessment calculates the total monthly storage costs by aggregating the storage costs of all servers. Currently, the calculation doesn't consider offers specified in the assessment settings.
+Assessment calculates the total monthly storage costs by aggregating the storage costs of all servers. Currently, the calculation doesn't consider offers specified in the assessment settings.
Costs are displayed in the currency specified in the assessment settings.
migrate Concepts Azure Webapps Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/concepts-azure-webapps-assessment-calculation.md
+
+ Title: Azure App Service assessments in Azure Migrate Discovery and assessment tool
+description: Learn about Azure App Service assessments in Azure Migrate Discovery and assessment tool
+++ Last updated : 07/27/2021+++
+# Assessment overview (migrate to Azure App Service)
+
+This article provides an overview of assessments for migrating on-premises ASP.NET web apps from a VMware environment to Azure App Service using the [Azure Migrate: Discovery and assessment tool](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool).
+
+## What's an assessment?
+An assessment with the Discovery and assessment tool is a point-in-time snapshot of data. It measures the readiness and estimates the cost of hosting on-premises servers, databases, and web apps in Azure.
+
+## Types of assessments
+
+There are four types of assessments you can create using the Azure Migrate: Discovery and assessment tool.
+
+**Assessment Type** | **Details**
+ |
+**Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. <br/><br/> You can assess your on-premises servers in [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs using this assessment type.
+**Azure SQL** | Assessments to migrate your on-premises SQL servers from your VMware environment to Azure SQL Database or Azure SQL Managed Instance.
+**Azure App Service** | Assessments to migrate your on-premises ASP.NET web apps, running on IIS web servers, from your VMware environment to Azure App Service.
+**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md)
+
+An Azure App Service assessment provides one sizing criteria:
+
+**Sizing criteria** | **Details** | **Data**
+ | |
+**Configuration-based** | Assessments that make recommendations based on collected configuration data | The Azure App Service assessment takes only configuration data into consideration for assessment calculation. Performance data for web apps is not collected.
+
+## How do I assess my on-premises ASP.NET web apps?
+
+You can assess your on-premises web apps by using the configuration data collected by a lightweight Azure Migrate appliance. The appliance discovers on-premises web apps and sends the configuration data to Azure Migrate. [Learn More](how-to-set-up-appliance-vmware.md)
+
+## How do I assess with the appliance?
+
+If you're deploying an Azure Migrate appliance to discover on-premises servers, follow these steps:
+
+1. Set up Azure and your on-premises environment to work with Azure Migrate.
+2. For your first assessment, create an Azure Migrate project. Azure Migrate: Discovery and assessment tool gets added to the project by default.
+3. Deploy a lightweight Azure Migrate appliance. The appliance continuously discovers on-premises servers and sends configuration and performance data to Azure Migrate. Deploy the appliance as a VM or a physical server. You don't need to install anything on servers that you want to assess.
+
+After the appliance begins discovery, you can gather servers (hosting web apps) you want to assess into a group and run an assessment for the group with assessment type **Azure App Service**.
+
+Follow our tutorial for assessing [ASP.NET web apps](tutorial-assess-webapps.md) to try out these steps.
+
+## What properties are used to customize the assessment?
+
+Here's what's included in Azure App Service assessment properties:
+
+**Property** | **Details**
+ |
+**Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify.
+**Isolation required** | Select yes if you want your web apps to run in a private and dedicated environment in an Azure datacenter using Dv2-series VMs with faster processors, SSD storage, and double the memory to core ratio compared to Standard plans.
+**Reserved instances** | Specifies reserved instances so that cost estimations in the assessment take them into account.<br/><br/> If you select a reserved instance option, you can't specify "Discount (%)".
+**Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer.
+**Currency** | The billing currency for your account.
+**Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
+**EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Leave the settings for reserved instances, discount (%) and VM uptime properties with their default settings.
+
+[Review the best practices](best-practices-assessment.md) for creating an assessment with Azure Migrate.
+
+## Calculate readiness
+
+### Azure App Service readiness
+
+Azure App Service readiness for web apps is based on feature compatibility checks between on-premises configuration of web apps and Azure App Service:
+
+1. The Azure App Service assessment considers the web apps configuration data to identify compatibility issues.
+1. If there are no compatibility issues found, the readiness is marked as **Ready** for the target deployment type.
+1. If there are non-critical compatibility issues, such as degraded or unsupported features that do not block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** (hyperlinked) with **warning** details and recommended remediation guidance.
+1. If there are any compatibility issues that may block the migration to a specific target deployment type, the readiness is marked as **Not ready** with **issue** details and recommended remediation guidance.
+1. If the discovery is still in progress or there are any discovery issues for a web app, the readiness is marked as **Unknown** as the assessment could not compute the readiness for that web app.
+
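The readiness decision above is effectively a severity check over the discovered compatibility findings. A minimal sketch, assuming a hypothetical list of findings with `severity` labels (this is not the actual assessment data model):

```python
def app_service_readiness(findings):
    """findings: list of {'severity': 'blocking' | 'warning'} dicts, or None
    when discovery is still in progress or failed for the web app."""
    if findings is None:
        return "Unknown"
    if any(f["severity"] == "blocking" for f in findings):
        return "Not ready"
    if findings:
        return "Ready with conditions"
    return "Ready"

print(app_service_readiness([]))                         # Ready
print(app_service_readiness([{"severity": "warning"}]))  # Ready with conditions
```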
+## Calculate sizing
+
+### Azure App Service SKU
+
+After the assessment determines the readiness based on configuration data, it determines the Azure App Service SKU that is suitable for running your apps in Azure App Service.
+Premium plans are for production workloads and run on dedicated Virtual Machine instances. Each instance can support multiple applications and domains. The Isolated plans host your apps in a private, dedicated Azure environment and are ideal for apps that require secure connections with your on-premises network.
+
+> [!NOTE]
+> Currently, Azure Migrate only recommends I1, P1v2, and P1v3 SKUs. There are more SKUs available in Azure App service. [Learn more](https://azure.microsoft.com/pricing/details/app-service/windows/).
+
+### Azure App Service Plan
+
+In App Service, an app always runs in an [App Service plan](/azure/app-service/overview-hosting-plans). An App Service plan defines a set of compute resources for a web app to run. At a high level, the plan/SKU is determined according to the following table (see the sketch after the table).
+
+**Isolation required** | **Reserved instance** | **App Service plan/ SKU**
+ | |
+Yes | Yes | I1
+Yes | No | I1
+No | Yes | P1v3
+No | No | P1v2
+
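Expressed as code, the table above reduces to a two-input lookup. A minimal sketch; the function is illustrative, not an Azure Migrate API:

```python
def recommended_sku(isolation_required: bool, reserved_instance: bool) -> str:
    """Plan/SKU selection per the table above."""
    if isolation_required:
        return "I1"  # Isolated plan, with or without a reservation
    return "P1v3" if reserved_instance else "P1v2"

print(recommended_sku(False, True))  # -> P1v3
```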
+### Azure App Service cost details
+
+An [App Service plan](/azure/app-service/overview-hosting-plans) carries a [charge](https://azure.microsoft.com/pricing/details/app-service/windows/) on the compute resources it uses. In App Service, you pay charges per App Service plan and not per web app. One or more apps can be configured to run on the same compute resources (or in the same App Service plan). Whatever apps you put into an App Service plan run on the compute resources defined by that plan.
+To optimize cost, the Azure Migrate assessment allocates multiple web apps to each recommended App Service plan. The number of web apps allocated to each plan instance is shown in the following table (a short sketch follows the table).
+
+**App Service plan** | **Web apps per App Service plan**
+ |
+I1 | 8
+P1v2 | 8
+P1v3 | 16
+
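So the number of plan instances the assessment allocates is a simple ceiling division over the table's capacities. A hedged sketch; the function name is hypothetical:

```python
import math

PLAN_CAPACITY = {"I1": 8, "P1v2": 8, "P1v3": 16}  # apps per plan instance (table above)

def plan_instances_needed(web_app_count: int, sku: str) -> int:
    """Number of App Service plan instances needed to host the assessed apps."""
    return math.ceil(web_app_count / PLAN_CAPACITY[sku])

print(plan_instances_needed(40, "P1v3"))  # -> 3 instances
```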
+> [!NOTE]
+> Your App Service plan can be scaled up and down at any time. [Learn more](/azure/app-service/overview-hosting-plans#what-if-my-app-needs-more-capabilities-or-features/).
+
+## Next steps
+- [Review](best-practices-assessment.md) best practices for creating assessments.
+- Learn how to run an [Azure App Service assessment](how-to-create-azure-app-service-assessment.md).
migrate Deploy Appliance Script Government https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/deploy-appliance-script-government.md
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2140337) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
-
-> [!NOTE]
-> The same script can be used to set up VMware appliance for Azure Government cloud with either public or private endpoint connectivity.
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2140337) | b4668be44c05836bf0f2ac1c8b1f48b7a9538afcf416c5212c7190629e3683b2
### Run the script
Check that the zipped file is secure, before you deploy it.
After the script has executed successfully, the appliance configuration manager will be launched automatically.
-> [!NOTE]
-> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
### Verify access
Check that the zipped file is secure, before you deploy it.
| [Latest version](https://go.microsoft.com/fwlink/?linkid=2140424) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
-> [!NOTE]
-> The same script can be used to set up Hyper-V appliance for Azure Government cloud with either public or private endpoint connectivity.
-
### Run the script

1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
Check that the zipped file is secure, before you deploy it.
After the script has executed successfully, the appliance configuration manager will be launched automatically.
-> [!NOTE]
-> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
-
### Verify access

Make sure that the appliance can connect to Azure URLs for [government clouds](migrate-appliance.md#government-cloud-urls).
migrate Deploy Appliance Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/deploy-appliance-script.md
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2116601) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2116601) | b4668be44c05836bf0f2ac1c8b1f48b7a9538afcf416c5212c7190629e3683b2
> [!NOTE]
-> The same script can be used to set up VMware appliance for either Azure public or Azure Government cloud with public or private endpoint connectivity.
+> The same script can be used to set up VMware appliance for either Azure public or Azure Government cloud.
### Run the script
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2116657) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2116657) | b4668be44c05836bf0f2ac1c8b1f48b7a9538afcf416c5212c7190629e3683b2
> [!NOTE]
-> The same script can be used to set up Hyper-V appliance for either Azure public or Azure Government cloud with public or private endpoint connectivity.
+> The same script can be used to set up Hyper-V appliance for either Azure public or Azure Government cloud.
### Run the script
migrate How To Create A Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-create-a-group.md
Last updated 07/17/2019
This article describes how to create groups of servers for assessment with Azure Migrate: Discovery and assessment.
-[Azure Migrate](migrate-services-overview.md) helps you to migrate to Azure. Azure Migrate provides a centralized hub to track discovery, assessment, and migration of on-premises infrastructure, applications, and data to Azure. The hub provides Azure tools for assessment and migration, as well as third-party independent software vendor (ISV) offerings.
+[Azure Migrate](migrate-services-overview.md) helps you to migrate to Azure. Azure Migrate provides a centralized hub to track discovery, assessment, and migration of on-premises infrastructure, applications, and data to Azure. The hub provides Azure tools for assessment and migration, as well as third-party independent software vendor (ISV) offerings.
## Grouping servers
If you want to create a group manually outside of creating an assessment, do the
![Create group](./media/how-to-create-a-group/create-group.png)
-You can now use this group when you [create an Azure VM assessment](how-to-create-assessment.md) or [an Azure VMware Solution (AVS) assessment](how-to-create-azure-vmware-solution-assessment.md) or [an Azure SQL assessment](how-to-create-azure-sql-assessment.md).
+You can now use this group when you [create an Azure VM assessment](how-to-create-assessment.md) or [an Azure VMware Solution (AVS) assessment](how-to-create-azure-vmware-solution-assessment.md) or [an Azure SQL assessment](how-to-create-azure-sql-assessment.md) or [an Azure App Service assessment](how-to-create-azure-app-service-assessment.md).
## Refine a group with dependency mapping

Dependency mapping helps you to visualize dependencies across servers. You typically use dependency mapping when you want to assess server groups with higher levels of confidence.
-- It helps you to cross-check server dependencies, before you run an assessment.
+
+- It helps you to cross-check server dependencies, before you run an assessment.
- It also helps to effectively plan your migration to Azure, by ensuring that nothing is left behind, and thus avoiding surprise outages during migration.
- You can discover interdependent systems that need to migrate together, and identify whether a running system is still serving users, or is a candidate for decommissioning instead of migration.
If you've already [set up dependency mapping](how-to-create-group-machine-depend
- Adding and removing servers invalidates past assessments for a group.
- You can optionally create a new assessment when you modify the group.
-
## Next steps
Learn how to set up and use [dependency mapping](how-to-create-group-machine-dependencies.md) to create high confidence groups.
migrate How To Create Azure App Service Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-create-azure-app-service-assessment.md
+
+ Title: Create an Azure App Service assessment
+description: Learn how to assess web apps for migration to Azure App Service
+++ Last updated : 07/28/2021+++
+# Create an Azure App Service assessment
+
+As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity.
+This article shows you how to assess discovered ASP.NET web apps for migration to Azure App Service, using the Azure Migrate: Discovery and assessment tool.
+
+> [!Note]
+> Discovery and assessment of ASP.NET web apps running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, please ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
+
+## Before you start
+
+- Make sure you've [created](./create-manage-projects.md) an Azure Migrate project and have the Azure Migrate: Discovery and assessment tool added.
+- To create an assessment, you need to set up an Azure Migrate appliance for [VMware](how-to-set-up-appliance-vmware.md). The [appliance](migrate-appliance.md) discovers on-premises servers, and sends metadata and performance data to Azure Migrate. The same appliance discovers ASP.NET web apps running in your VMware environment.
+
+## Azure App Service assessment overview
+
+An Azure App Service assessment provides one sizing criteria:
+
+**Sizing criteria** | **Details** | **Data**
+ | |
+**Configuration-based** | Assessments that make recommendations based on collected configuration data | The Azure App Service assessment takes only configuration data into consideration for assessment calculation. Performance data for web apps is not collected.
+
+[Learn more](concepts-azure-webapps-assessment-calculation.md) about Azure App Service assessments.
+
+## Run an assessment
+
+Run an assessment as follows:
+
+1. On the **Overview** page > **Servers, databases and web apps**, click **Discover, assess and migrate**.
+ :::image type="content" source="./media/tutorial-assess-webapps/discover-assess-migrate.png" alt-text="Overview page for Azure Migrate":::
+2. On **Azure Migrate: Discovery and assessment**, click **Assess** and choose the assessment type as **Azure App Service**.
+ :::image type="content" source="./media/tutorial-assess-webapps/assess.png" alt-text="Dropdown to choose assessment type as Azure App Service":::
+3. In **Create assessment**, you can see the assessment type pre-selected as **Azure App Service** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**.
+4. Click **Edit** to review the assessment properties.
+
+ :::image type="content" source="./media/tutorial-assess-webapps/assess-webapps.png" alt-text="Edit button from where assessment properties can be customized":::
+
+1. Here's what's included in Azure App Service assessment properties:
+
+ | **Property** | **Details** |
+ | | |
+ | **Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify. |
+ | **Isolation required** | Select yes if you want your web apps to run in a private and dedicated environment in an Azure datacenter using Dv2-series VMs with faster processors, SSD storage, and double the memory to core ratio compared to Standard plans. |
+ | **Reserved instances** | Specifies reserved instances so that cost estimations in the assessment take them into account.<br/><br/> If you select a reserved instance option, you can't specify "Discount (%)". |
+ | **Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer. |
+ | **Currency** | The billing currency for your account. |
+ | **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. |
+ | **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Leave the reserved instances and discount (%) properties at their default settings. |
+
+ :::image type="content" source="./media/tutorial-assess-webapps/webapps-assessment-properties.png" alt-text="App Service assessment properties":::
+
+1. In **Create assessment**, click **Next**.
+1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
+1. In **Select or create a group** > select **Create New** and specify a group name.
+1. Select the appliance, and select the servers you want to add to the group. Then click Next.
+1. In **Review + create assessment**, review the assessment details, and click Create Assessment to create the group and run the assessment.
+1. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** tile and refresh the tile data by selecting the **Refresh** option at the top of the tile. Wait for the data to refresh.
+ :::image type="content" source="./media/tutorial-assess-webapps/tile-refresh.png" alt-text="Refresh discovery and assessment tool data":::
+1. Click on the number next to Azure App Service assessment.
+ :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-navigation.png" alt-text="Navigation to created assessment":::
+1. Click on the assessment name which you wish to view.
+
+## Review an assessment
+
+**To view an assessment**:
+
+1. **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** > Click on the number next to Azure App Service assessment.
+2. Click on the assessment name which you wish to view.
+ :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-summary.png" alt-text="App Service assessment overview":::
+3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment.
+
+#### Azure App Service readiness
+
+This indicates the distribution of assessed web apps. You can drill down to understand details around migration issues/warnings that you can remediate before migration to Azure App Service. [Learn More](concepts-azure-webapps-assessment-calculation.md)
+You can also review the recommended App Service SKU for migrating to Azure App Service.
+
+#### Azure App Service cost details
+
+An [App Service plan](/azure/app-service/overview-hosting-plans) carries a [charge](https://azure.microsoft.com/pricing/details/app-service/windows/) on the compute resources it uses.
+
+### Review readiness
+
+1. Click **Azure App Service readiness**.
+ :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-readiness.png" alt-text="Azure App Service readiness details":::
+1. Review the Azure App Service readiness column in the table for the assessed web apps:
+ 1. If there are no compatibility issues found, the readiness is marked as **Ready** for the target deployment type.
+ 1. If there are non-critical compatibility issues, such as degraded or unsupported features that do not block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** (hyperlinked) with **warning** details and recommended remediation guidance.
+ 1. If there are any compatibility issues that may block the migration to a specific target deployment type, the readiness is marked as **Not ready** with **issue** details and recommended remediation guidance.
+ 1. If the discovery is still in progress or there are any discovery issues for a web app, the readiness is marked as **Unknown** as the assessment could not compute the readiness for that web app.
+1. Review the recommended SKU for the web apps, which is determined according to the matrix below:
+
+**Isolation required** | **Reserved instance** | **App Service plan/ SKU**
+ | |
+Yes | Yes | I1
+Yes | No | I1
+No | Yes | P1v3
+No | No | P1v2
+
+**Azure App Service readiness** | **Determine App Service SKU** | **Determine Cost estimates**
+ | |
+Ready | Yes | Yes
+Ready with conditions | Yes | Yes
+Not ready | No | No
+Unknown | No | No
+
+1. Click on the App Service plan hyperlink in the table to see the App Service plan details, such as compute resources and other web apps that are part of the same plan.
+
+### Review cost estimates
+
+The assessment summary shows the estimated monthly costs for hosting your web apps in App Service. In App Service, you pay charges per App Service plan and not per web app. One or more apps can be configured to run on the same compute resources (or in the same App Service plan). Whatever apps you put into an App Service plan run on the compute resources defined by that plan.
+To optimize cost, the Azure Migrate assessment allocates multiple web apps to each recommended App Service plan. The number of web apps allocated to each plan instance is shown in the following table (a short sketch follows the table).
+
+**App Service plan** | **Web apps per App Service plan**
+ |
+I1 | 8
+P1v2 | 8
+P1v3 | 16
+
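To make the aggregation concrete, here's a hedged sketch of the implied estimate; the per-instance monthly price is a placeholder, not the published App Service rate:

```python
import math

P1V3_APPS_PER_INSTANCE = 16  # from the table above
P1V3_MONTHLY_PRICE = 298.0   # hypothetical price per P1v3 instance

def estimated_monthly_cost(web_app_count):
    instances = math.ceil(web_app_count / P1V3_APPS_PER_INSTANCE)
    return instances * P1V3_MONTHLY_PRICE

print(estimated_monthly_cost(40))  # 3 instances -> 894.0
```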
+ :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-cost.png" alt-text="Cost details":::
+
+## Next steps
+
+- [Learn more](concepts-azure-webapps-assessment-calculation.md) about how Azure App Service assessments are calculated.
migrate How To Discover Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-discover-applications.md
Last updated 03/18/2021
-# Discover installed software inventory, and SQL Server instances and databases
+# Discover installed software inventory, web apps, and SQL Server instances and databases
-This article describes how to discover installed software inventory, and SQL Server instances and databases on servers running in your VMware environment, using Azure Migrate: Discovery and assessment tool.
+This article describes how to discover installed software inventory, web apps, and SQL Server instances and databases on servers running in your VMware environment, using Azure Migrate: Discovery and assessment tool.
Performing software inventory helps identify and tailor a migration path to Azure for your workloads. Software inventory uses the Azure Migrate appliance to perform discovery, using server credentials. It is completely agentless- no agents are installed on the servers to collect this data.
Performing software inventory helps identify and tailor a migration path to Azur
1. In **Step 1: Provide vCenter Server credentials**, click on **Add credentials** to provide credentials for the vCenter Server account that the appliance will use to discover servers running on the vCenter Server.
1. In **Step 2: Provide vCenter Server details**, click on **Add discovery source** to select the friendly name for credentials from the drop-down and specify the **IP address/FQDN** of the vCenter Server instance.
   :::image type="content" source="./media/tutorial-discover-vmware/appliance-manage-sources.png" alt-text="Panel 3 on appliance configuration manager for vCenter Server details":::
-1. In **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis and discovery of SQL Server instances and databases**, click **Add credentials** to provide multiple server credentials to initiate software inventory.
+1. In **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis, discovery of SQL Server instances and databases and discovery of ASP.NET web apps in your VMware environment.**, click **Add credentials** to provide multiple server credentials to initiate software inventory.
+1. Click on **Start discovery** to kick off vCenter Server discovery. After the vCenter Server discovery is complete, the appliance initiates the discovery of installed applications, roles, and features (software inventory). The duration depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate portal.
Performing software inventory helps identify and tailor a migration path to Azur
After software inventory has completed, you can review and export the inventory in the Azure portal.
-1. In **Azure Migrate - Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment**, click the displayed count to open the **Discovered servers** page.
+1. In **Azure Migrate - Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, click the displayed count to open the **Discovered servers** page.
> [!NOTE]
> At this stage you can optionally also enable dependency analysis for the discovered servers, so that you can visualize dependencies across servers you want to assess. [Learn more](concepts-dependency-visualization.md) about dependency analysis.
2. In the **Software inventory** column, click the displayed count to review the discovered applications, roles, and features.
-4. To export the inventory, in **Discovered Servers**, click **Export app inventory**.
+4. To export the inventory, in **Discovered Servers**, click **Export software inventory**.
The software inventory is exported and downloaded in Excel format. The **Software Inventory** sheet displays all the apps discovered across all the servers.
The software inventory is exported and downloaded in Excel format. The **Softwar
Once connected, the appliance gathers configuration and performance data of SQL Server instances and databases. The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds. Hence, any change to the properties of the SQL Server instance and databases, such as database status or compatibility level, can take up to 24 hours to update on the portal.
+## Discover ASP.NET web apps
+
+Software inventory identifies the web server role on discovered servers. If a server is found to have the web server role enabled, Azure Migrate performs web app discovery on the server.
+You can add both domain and non-domain credentials on the appliance. Make sure that the account used has local admin privileges on the source servers. Azure Migrate automatically maps credentials to the respective servers, so you don't have to map them manually. Most importantly, these credentials are never sent to Microsoft and remain on the appliance running in the source environment.
+After the appliance is connected, it gathers configuration data for the IIS web server and ASP.NET web apps. Web app configuration data is updated once every 24 hours.
+
## Next steps
- [Create an assessment](how-to-create-assessment.md) for discovered servers.
-- [Assess SQL Servers](./tutorial-assess-sql.md) for migration to Azure SQL.
+- [Assess web apps](how-to-create-azure-app-service-assessment.md) for migration to Azure App Service.
migrate How To Discover Sql Existing Project https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-discover-sql-existing-project.md
Last updated 03/23/2021
-# Discover SQL Server instances in an existing project
+# Discover web apps and SQL Server instances in an existing project
-This article describes how to discover SQL Server instances and databases in an [Azure Migrate](./migrate-services-overview.md) project that was created before the preview of Azure SQL assessment feature.
+This article describes how to discover web apps and SQL Server instances and databases in an [Azure Migrate](./migrate-services-overview.md) project that was created before the preview of Azure SQL assessment feature and/or before the preview of Azure App Service assessment feature.
-Discovering SQL Server instances and databases running on on-premises machines helps identify and tailor a migration path to Azure SQL. The Azure Migrate appliance performs this discovery using the Domain credentials or SQL Server authentication credentials that have access to the SQL Server instances and databases running on the targeted servers. This discovery process is agentless that is, nothing is installed on the target servers.
+Discovering ASP.NET web apps and SQL Server instances and databases running on on-premises machines helps identify and tailor a migration path to Azure. The Azure Migrate appliance performs this discovery using the Windows OS domain or non-domain credentials or SQL Server authentication credentials that have access to the SQL Server instances and databases running on the targeted servers.
+This discovery process is agentless; that is, nothing is installed on the target servers.
## Before you start

- Make sure you've:
- - Created an [Azure Migrate project](./create-manage-projects.md) before the announcement of SQL assessment feature for your region
+ - Created an [Azure Migrate project](./create-manage-projects.md) before the announcement of SQL and web apps assessment feature for your region
  - Added the [Azure Migrate: Discovery and assessment](./how-to-assess.md) tool to a project
- Review [app-discovery support and requirements](./migrate-support-matrix-vmware.md#vmware-requirements).
- Make sure servers where you're running app-discovery have PowerShell version 2.0 or later installed, and VMware Tools (later than 10.2.0) is installed.
Discovering SQL Server instances and databases running on on-premises machines h
- Verify that you have the [required roles](./create-manage-projects.md#verify-permissions) in the subscription to create resources.
- Ensure that your appliance has access to the internet.
-## Enable discovery of SQL Server instances and databases
+## Enable discovery of ASP.NET web apps and SQL Server instances and databases
-1. In your Azure Migrate Project, either
+1. In your Azure Migrate project, either
- Select **Not enabled** on the Hub tile, or
- :::image type="content" source="./media/how-to-discover-sql-existing-project/hub-not-enabled.png" alt-text="Azure Migrate hub tile with SQL discovery not enabled":::
- - Select **Not enabled** on any entry in the Server discovery page under SQL instances column
- :::image type="content" source="./media/how-to-discover-sql-existing-project/discovery-not-enabled.png" alt-text="Azure Migrate discovered servers blade with SQL discovery not enabled":::
-2. In Discover SQL Server instances and databases follow the steps entailed:
+ :::image type="content" source="./media/how-to-discover-sql-existing-project/hub-not-enabled.png" alt-text="Azure Migrate hub tile with SQL and web apps discovery not enabled":::
+ - Select **Not enabled** on any entry in the Server discovery page under SQL instances or Web apps column
+ :::image type="content" source="./media/how-to-discover-sql-existing-project/discovery-not-enabled.png" alt-text="Azure Migrate discovered servers blade with SQL and web apps discovery not enabled":::
+2. To discover ASP.NET web apps and SQL Server instances and databases, follow these steps:
   - Select **Upgrade**, to create the required resource.
     :::image type="content" source="./media/how-to-discover-sql-existing-project/discovery-upgrade-appliance.png" alt-text="Button to upgrade the Azure Migrate appliance":::
   - Validate that the services running on the appliance are updated to the latest versions. To do so, launch the Appliance configuration manager from your appliance server and select view appliance services from the Setup prerequisites panel.
     - Appliance and its components are automatically updated
       :::image type="content" source="./media/how-to-discover-sql-existing-project/appliance-services-version.png" alt-text="Check the appliance version":::
   - In the manage credentials and discovery sources panel of the Appliance configuration manager, add Domain or SQL Server Authentication credentials that have Sysadmin access on the SQL Server instance and databases to be discovered.
- You can leverage either the automatic credential-mapping feature of the appliance, or manually map the credentials to the respective server as highlighted [here](./tutorial-discover-vmware.md#start-continuous-discovery).
+ - ASP.NET web apps discovery works with both domain and non-domain Windows OS credentials as long as the account used has local admin privileges on servers.
+ You can leverage the automatic credential-mapping feature of the appliance, as highlighted [here](./tutorial-discover-vmware.md#start-continuous-discovery).
Some points to note:
- - Ensure that software inventory is enabled already, or provide Domain or Non-domain credentials to enable the same. Software inventory must be performed to discover SQL Server instances.
- - Appliance will attempt to validate the Domain credentials with AD, as they are added. Ensure that appliance server has network line of sight to the AD server associated with the credentials. Credentials associated with SQL Server Authentication are not validated.
+ - Ensure that software inventory is enabled already, or provide Domain or Non-domain credentials to enable the same. Software inventory must be performed to discover SQL Server instances and ASP.NET web apps.
+ - Appliance will attempt to validate the Domain credentials with AD, as they are added. Ensure that appliance server has network line of sight to the AD server associated with the credentials. Non-domain credentials and credentials associated with SQL Server Authentication are not validated.
3. Once the desired credentials are added, select **Start Discovery** to begin the scan.

> [!Note]
->Please allow SQL discovery to run for sometime before creating assessments for Azure SQL. If the discovery of SQL Server instances and databases is not allowed to complete, the respective instances are marked as **Unknown** in the assessment report.
+>Please allow web apps and SQL discovery to run for some time before creating assessments for Azure App Service or Azure SQL. If the discovery of web apps and SQL Server instances and databases is not allowed to complete, the respective instances are marked as **Unknown** in the assessment report.
## Next steps
- Learn how to create an [Azure SQL assessment](./how-to-create-azure-sql-assessment.md)
-- Learn more about [Azure SQL assessments](./concepts-azure-sql-assessment-calculation.md)
+- Learn more about [Azure SQL assessments](./concepts-azure-sql-assessment-calculation.md)
+- Learn how to create an [Azure App Service assessment](./how-to-create-azure-app-service-assessment.md)
+- Learn more about [Azure App Service assessments](./concepts-azure-webapps-assessment-calculation.md)
migrate How To Scale Out For Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-scale-out-for-migration.md
In **Download Azure Migrate appliance**, click **Download**. You need to downlo
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256```
> 3. Download the latest version of the scale-out appliance installer from the portal if the computed hash value doesn't match this string:
-15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
+b4668be44c05836bf0f2ac1c8b1f48b7a9538afcf416c5212c7190629e3683b2
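If you prefer scripting the check, here's an equivalent verification in Python; the file path is the example path from the CertUtil usage above:

```python
import hashlib

EXPECTED = "b4668be44c05836bf0f2ac1c8b1f48b7a9538afcf416c5212c7190629e3683b2"

# Compute the SHA-256 digest of the downloaded installer and compare.
with open(r"C:\Users\administrator\Desktop\AzureMigrateInstaller.zip", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED else "Hash mismatch - download the installer again")
```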
### 3. Run the Azure Migrate installer script
migrate How To Set Up Appliance Physical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-set-up-appliance-physical.md
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | b4668be44c05836bf0f2ac1c8b1f48b7a9538afcf416c5212c7190629e3683b2
> [!NOTE]
-> The same script can be used to set up Physical appliance for either Azure public or Azure Government cloud with public or private endpoint connectivity.
+> The same script can be used to set up Physical appliance for either Azure public or Azure Government cloud.
### Run the Azure Migrate installer script
migrate How To Set Up Appliance Vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-set-up-appliance-vmware.md
ms. Previously updated : 04/16/2020 Last updated : 07/27/2021

# Set up an appliance for servers in a VMware environment
To set up the appliance for the first time:
:::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and log in.":::
-1. In a new tab in your browser, paste the device code and sign in by using your Azure username and password. Signing in with a PIN isn't supported.
+1. In a new tab in your browser, paste the device code and sign in by using your Azure username and password. Signing in with a PIN isn't supported.
If you close the login tab accidentally without logging in, refresh the browser tab of the appliance configuration manager to display the device code and **Copy code & Login** button.
1. After you successfully log in, return to the browser tab that displays the appliance configuration manager. If the Azure user account that you used to log in has the required permissions for the Azure resources that were created during key generation, appliance registration starts.
The appliance must connect to vCenter Server to discover the configuration and p
### Provide server credentials
-In **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis and discovery of SQL Server instances and databases**, you can provide multiple server credentials. If you don't want to use any of these appliance features, you can skip this step and proceed with vCenter Server discovery. You can change this option at any time.
+In **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis, discovery of SQL Server instances and databases and discovery of ASP.NET web apps in your VMware environment.**, you can provide multiple server credentials. If you don't want to use any of these appliance features, you can skip this step and proceed with vCenter Server discovery. You can change this option at any time.
:::image type="content" source="./media/tutorial-discover-vmware/appliance-server-credentials-mapping.png" alt-text="Screenshot that shows providing credentials for software inventory, dependency analysis, and SQL Server discovery.":::
To add server credentials:
Select **Save**. If you choose to use domain credentials, you also must enter the FQDN for the domain. The FQDN is required to validate the authenticity of the credentials with the Active Directory instance in that domain.
-1. Review the [required permissions](add-server-credentials.md#required-permissions) on the account for discovery of installed applications, agentless dependency analysis, and discovery of SQL Server instances and databases.
+1. Review the [required permissions](add-server-credentials.md#required-permissions) on the account for software inventory, agentless dependency analysis, and discovery of SQL Server instances and databases and of ASP.NET web apps.
+1. To add multiple credentials at once, select **Add more** to save credentials, and then add more credentials. When you select **Save** or **Add more**, the appliance validates the domain credentials with the domain's Active Directory instance for authentication. Validation occurs after each addition to avoid account lockouts as the appliance iterates to map credentials to respective servers.
If validation fails, you can select a **Failed** status to see the validation er
### Start discovery
-To start vCenter Server discovery, in **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis and discovery of SQL Server instances and databases**, select **Start discovery**. After the discovery is successfully initiated, you can check the discovery status by looking at the vCenter Server IP address or FQDN in the sources table.
+To start vCenter Server discovery, in **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis, discovery of SQL Server instances and databases and discovery of ASP.NET web apps in your VMware environment.**, select **Start discovery**. After the discovery is successfully initiated, you can check the discovery status by looking at the vCenter Server IP address or FQDN in the sources table.
## How discovery works

* It takes approximately 15 minutes for the inventory of discovered servers to appear in the Azure portal.
* If you provided server credentials, software inventory (discovery of installed applications) is automatically initiated when the discovery of servers running vCenter Server is finished. Software inventory occurs once every 12 hours.
* [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours.
+* [Software inventory](how-to-discover-applications.md) identifies the web server role on the servers. Using the information it collects, the appliance attempts to connect to the IIS web server through the Windows authentication credentials that are provided on the appliance. Then, it gathers data on web apps. The web app discovery is performed once every 24 hours.
* Discovery of installed applications might take longer than 15 minutes. The duration depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal.
* During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable agentless dependency analysis.
-* SQL Server instances and databases data begin to appear in the portal within 24 hours after you start discovery.
+* SQL Server instances and databases data and web apps data begin to appear in the portal within 24 hours after you start discovery.
## Next steps
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
Check that the zipped file is secure before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2160648) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2160648) | b4668be44c05836bf0f2ac1c8b1f48b7a9538afcf416c5212c7190629e3683b2
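If you want to script the check, `Get-FileHash` can compare the download against the published value; the file path and name below are placeholders for your download location:

```powershell
# Compare the SHA256 hash of the downloaded zip with the published value.
# The path is a placeholder; -eq compares hex strings case-insensitively.
$expected = 'b4668be44c05836bf0f2ac1c8b1f48b7a9538afcf416c5212c7190629e3683b2'
$actual = (Get-FileHash -Path 'C:\Downloads\AzureMigrateInstaller.zip' -Algorithm SHA256).Hash
if ($actual -eq $expected) { 'Hash matches; the file is safe to deploy.' }
else { 'Hash mismatch; do not deploy the file.' }
```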
> [!NOTE]
-> The same script can be used to set up an appliance with private endpoint connectivity for any of the chosen scenarios, such as VMware, Hyper-V, physical or other by choosing from the scenario and cloud options to deploy an appliance with the desired configuration.
+> The same script can be used to set up an appliance with private endpoint connectivity for any of the chosen scenarios, such as VMware, Hyper-V, physical, or other, with the desired configuration.
Make sure the server meets the [hardware requirements](./migrate-appliance.md) for the chosen scenario, such as VMware, Hyper-V, physical or other, and can connect to the [required URLs](./migrate-appliance.md#public-cloud-urls-for-private-link-connectivity).
migrate Migrate Appliance Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-appliance-architecture.md
Title: Azure Migrate appliance architecture description: Provides an overview of the Azure Migrate appliance used in server discovery, assessment and migration.--++ ms. Last updated 03/18/2021
The Azure Migrate appliance is used in the following scenarios.
**Scenario** | **Tool** | **Used to** | |
-**Discovery and assessment of servers running in VMware environment** | Azure Migrate: Discovery and assessment | Discover servers running in your VMware environment<br/><br/> Perform discovery of installed software inventory, agentless dependency analysis and discover SQL Server instances and databases.<br/><br/> Collect server configuration and performance metadata for assessments.
+**Discovery and assessment of servers running in VMware environment** | Azure Migrate: Discovery and assessment | Discover servers running in your VMware environment<br/><br/> Perform discovery of installed software inventory, ASP.NET web apps, SQL Server instances and databases, and agentless dependency analysis.<br/><br/> Collect server configuration and performance metadata for assessments.
**Agentless migration of servers running in VMware environment** | Azure Migrate: Server Migration | Discover servers running in your VMware environment.<br/><br/> Replicate servers without installing any agents on them.
**Discovery and assessment of servers running in Hyper-V environment** | Azure Migrate: Discovery and assessment | Discover servers running in your Hyper-V environment.<br/><br/> Collect server configuration and performance metadata for assessments.
**Discovery and assessment of physical or virtualized servers on-premises** | Azure Migrate: Discovery and assessment | Discover physical or virtualized servers on-premises.<br/><br/> Collect server configuration and performance metadata for assessments.
The appliance can be deployed using a couple of methods:
The appliance has the following
-- **Appliance configuration manager**: This is a web application which can be configured with source details to start the discovery and assessment of servers.
+- **Appliance configuration manager**: This is a web application which can be configured with source details to start the discovery and assessment of servers.
- **Discovery agent**: The agent collects server configuration metadata which can be used to create on-premises assessments.
- **Assessment agent**: The agent collects server performance metadata which can be used to create performance-based assessments.
- **Auto update service**: The service keeps all the agents running on the appliance up-to-date. It automatically runs once every 24 hours.
- **DRA agent**: Orchestrates server replication, and coordinates communication between replicated servers and Azure. Used only when replicating servers to Azure using agentless migration.
- **Gateway**: Sends replicated data to Azure. Used only when replicating servers to Azure using agentless migration.
- **SQL discovery and assessment agent**: Sends the configuration and performance metadata of SQL Server instances and databases to Azure.
+- **Web apps discovery and assessment agent**: Sends the web apps configuration data to Azure.
> [!Note]
-> The last 3 services are only available in the appliance used for discovery and assessment of servers running in your VMware environment.
+> The last 4 services are only available in the appliance used for discovery and assessment of servers running in your VMware environment.
## Discovery and collection process
The appliance communicates with the discovery sources using the following process.
**Start discovery** | The appliance communicates with the vCenter server on TCP port 443 by default. If the vCenter server listens on a different port, you can configure it in the appliance configuration manager. | The appliance communicates with the Hyper-V hosts on WinRM port 5985 (HTTP). | The appliance communicates with Windows servers over WinRM port 5985 (HTTP) and with Linux servers over port 22 (TCP).
**Gather configuration and performance metadata** | The appliance collects the metadata of servers running on vCenter Server using vSphere APIs by connecting on port 443 (default port) or any other port vCenter Server listens on. | The appliance collects the metadata of servers running on Hyper-V hosts using a Common Information Model (CIM) session with hosts on port 5985. | The appliance collects metadata from Windows servers using a Common Information Model (CIM) session with servers on port 5985 and from Linux servers using SSH connectivity on port 22.
**Send discovery data** | The appliance sends the collected data to Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits. | The appliance sends the collected data to Azure Migrate: Discovery and assessment over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits. | The appliance sends the collected data to Azure Migrate: Discovery and assessment over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits.
-**Data collection frequency** | Configuration metadata is collected and sent every 30 minutes. <br/><br/> Performance metadata is collected every 20 seconds and is aggregated to send a data point to Azure every 10 minutes. <br/><br/> Software inventory data is sent to Azure once every 12 hours. <br/><br/> Agentless dependency data is collected every 5 mins, aggregated on appliance and sent to Azure every 6 hours. <br/><br/> The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds.| Configuration metadata is collected and sent every 30 mins. <br/><br/> Performance metadata is collected every 30 seconds and is aggregated to send a data point to Azure every 10 minutes.| Configuration metadata is collected and sent every 30 mins. <br/><br/> Performance metadata is collected every 5 minutes and is aggregated to send a data point to Azure every 10 minutes.
+**Data collection frequency** | Configuration metadata is collected and sent every 30 minutes. <br/><br/> Performance metadata is collected every 20 seconds and is aggregated to send a data point to Azure every 10 minutes. <br/><br/> Software inventory data is sent to Azure once every 12 hours. <br/><br/> Agentless dependency data is collected every 5 minutes, aggregated on the appliance, and sent to Azure every 6 hours. <br/><br/> The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds. <br/><br/> The web apps configuration data is updated once every 24 hours. Performance data is not captured for web apps.| Configuration metadata is collected and sent every 30 minutes. <br/><br/> Performance metadata is collected every 30 seconds and is aggregated to send a data point to Azure every 10 minutes.| Configuration metadata is collected and sent every 30 minutes. <br/><br/> Performance metadata is collected every 5 minutes and is aggregated to send a data point to Azure every 10 minutes.
**Assess and migrate** | You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool.<br/><br/>In addition, you can also start migrating servers running in your VMware environment using Azure Migrate: Server Migration tool to orchestrate agentless server replication.| You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool. | You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool.
## Next steps
migrate Migrate Appliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-appliance.md
Title: Azure Migrate appliance description: Provides a summary of support for the Azure Migrate appliance.--++ ms. Last updated 03/18/2021
The Azure Migrate appliance is used in the following scenarios.
**Scenario** | **Tool** | **Used to** | |
-**Discovery and assessment of servers running in VMware environment** | Azure Migrate: Discovery and assessment | Discover servers running in your VMware environment<br/><br/> Perform discovery of installed software inventory, agentless dependency analysis and discover SQL Server instances and databases.<br/><br/> Collect server configuration and performance metadata for assessments.
-**Agentless migration of servers running in VMware environment** | Azure Migrate:Server Migration | Discover servers running in your VMware environment. <br/><br/> Replicate servers without installing any agents on them.
+**Discovery and assessment of servers running in VMware environment** | Azure Migrate: Discovery and assessment | Discover servers running in your VMware environment<br/><br/> Perform discovery of installed software inventory, ASP.NET web apps, SQL Server instances and databases, and agentless dependency analysis.<br/><br/> Collect server configuration and performance metadata for assessments.
+**Agentless migration of servers running in VMware environment** | Azure Migrate: Server Migration | Discover servers running in your VMware environment. <br/><br/> Replicate servers without installing any agents on them.
**Discovery and assessment of servers running in Hyper-V environment** | Azure Migrate: Discovery and assessment | Discover servers running in your Hyper-V environment.<br/><br/> Collect server configuration and performance metadata for assessments.
**Discovery and assessment of physical or virtualized servers on-premises** | Azure Migrate: Discovery and assessment | Discover physical or virtualized servers on-premises.<br/><br/> Collect server configuration and performance metadata for assessments.
The following table summarizes the Azure Migrate appliance requirements for VMware.
**Requirement** | **VMware** |
-**Permissions** | To access the appliance configuration manager locally or remotely,you need to have a local or domain user account with administrative privileges on the appliance server.
-**Appliance services** | The appliance has the following
+**Permissions** | To access the appliance configuration manager locally or remotely, you need to have a local or domain user account with administrative privileges on the appliance server.
+**Appliance services** | The appliance has the following
**Project limits** | An appliance can only be registered with a single project.<br/> A single project can have multiple registered appliances.
**Discovery limits** | An appliance can discover up to 10,000 servers running on a vCenter Server.<br/> An appliance can connect to a single vCenter Server.
**Supported deployment** | Deploy as new server running on vCenter Server using OVA template.<br/><br/> Deploy on an existing server running Windows Server 2016 using PowerShell installer script.
-**OVA template** | Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140333)<br/><br/> Download size is 11.9 GB.<br/><br/> The downloaded appliance template comes with a Windows Server 2016 evaluation license, which is valid for 180 days.<br/>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance using OVA template , or you activate the operating system license of the appliance server.
+**OVA template** | Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140333)<br/><br/> Download size is 11.9 GB.<br/><br/> The downloaded appliance template comes with a Windows Server 2016 evaluation license, which is valid for 180 days.<br/>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance using OVA template, or you activate the operating system license of the appliance server.
**OVA verification** | [Verify](tutorial-discover-vmware.md#verify-security) the OVA template downloaded from project by checking the hash values.
**PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-vmware) on how to deploy an appliance using the PowerShell installer script.<br/><br/>
-**Hardware and network requirements** | The appliance should run on server with Windows Server 2016, 32-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance requires internet access, either directly or through a proxy.<br/><br/> If you deploy the appliance using OVA template, you need enough resources on the vCenter Server to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that it's running Windows Server 2016, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
+**Hardware and network requirements** | The appliance should run on a server with Windows Server 2016, 32-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance requires internet access, either directly or through a proxy.<br/><br/> If you deploy the appliance using OVA template, you need enough resources on the vCenter Server to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that it's running Windows Server 2016, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
**VMware requirements** | If you deploy the appliance as a server on vCenter Server, it must be deployed on a vCenter Server running 5.5, 6.0, 6.5, 6.7 or 7.0 and an ESXi host running version 5.5 or later.<br/><br/>
-**VDDK (agentless migration)** | To leverage the appliance for agentless migration of servers, the VMware vSphere VDDK must be installed on the appliance server.
+**VDDK (agentless migration)** | To use the appliance for agentless migration of servers, the VMware vSphere VDDK must be installed on the appliance server.
## Appliance - Hyper-V
**Requirement** | **Hyper-V** |
-**Permissions** | To access the appliance configuration manager locally or remotely,you need to have a local or domain user account with administrative privileges on the appliance server.
-**Appliance services** | The appliance has the following
+**Permissions** | To access the appliance configuration manager locally or remotely, you need to have a local or domain user account with administrative privileges on the appliance server.
+**Appliance services** | The appliance has the following
**Project limits** | An appliance can only be registered with a single project.<br/> A single project can have multiple registered appliances.
**Discovery limits** | An appliance can discover up to 5000 servers running in Hyper-V environment.<br/> An appliance can connect to up to 300 Hyper-V hosts.
**Supported deployment** | Deploy as server running on a Hyper-V host using a VHD template.<br/><br/> Deploy on an existing server running Windows Server 2016 using PowerShell installer script.
**VHD template** | Zip file that includes a VHD. Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140422).<br/><br/> Download size is 8.91 GB.<br/><br/> The downloaded appliance template comes with a Windows Server 2016 evaluation license, which is valid for 180 days.<br/> If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance server.
**VHD verification** | [Verify](tutorial-discover-hyper-v.md#verify-security) the VHD template downloaded from project by checking the hash values.
**PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-hyper-v) on how to deploy an appliance using the PowerShell installer script.<br/>
-**Hardware and network requirements** | The appliance should run on server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br/><br/> If you run the appliance as a server running on a Hyper-V host, you need enough resources on the host to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that it's running Windows Server 2016, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
+**Hardware and network requirements** | The appliance should run on a server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br/><br/> If you run the appliance as a server running on a Hyper-V host, you need enough resources on the host to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that it's running Windows Server 2016, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
**Hyper-V requirements** | If you deploy the appliance with the VHD template, the appliance provided by Azure Migrate is Hyper-V VM version 5.0.<br/><br/> The Hyper-V host must be running Windows Server 2012 R2 or later.
## Appliance - Physical
**Requirement** | **Physical** |
-**Permissions** | To access the appliance configuration manager locally or remotely,you need to have a local or domain user account with administrative privileges on the appliance server.
-**Appliance services** | The appliance has the following
+**Permissions** | To access the appliance configuration manager locally or remotely, you need to have a local or domain user account with administrative privileges on the appliance server.
+**Appliance services** | The appliance has the following
**Project limits** | An appliance can only be registered with a single project.<br/> A single project can have multiple registered appliances.<br/>
**Discovery limits** | An appliance can discover up to 1000 physical servers.
**Supported deployment** | Deploy on an existing server running Windows Server 2016 using PowerShell installer script.
**PowerShell script** | Download the script (AzureMigrateInstaller.ps1) in a zip file from the project or from [here](https://go.microsoft.com/fwlink/?linkid=2140334). [Learn more](tutorial-discover-physical.md).<br/><br/> Download size is 85.8 MB.
**Script verification** | [Verify](tutorial-discover-physical.md#verify-security) the PowerShell installer script downloaded from project by checking the hash values.
-**Hardware and network requirements** | The appliance should run on server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage.<br/> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br/><br/> If you run the appliance on an existing server, make sure that it's running Windows Server 2016, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
+**Hardware and network requirements** | The appliance should run on a server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage.<br/> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br/><br/> If you run the appliance on an existing server, make sure that it's running Windows Server 2016, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
## URL access
The Azure Migrate appliance needs connectivity to the internet.
management.azure.com | Used for resource deployments and management operations
*.services.visualstudio.com | Upload appliance logs used for internal monitoring.
*.vault.azure.net | Manage secrets in the Azure Key Vault.<br/> Note: Ensure servers to replicate have access to this.
-aka.ms/* | Allow access to aka links; used to download and install the latest updates for appliance services.
+aka.ms/* | Allow access to these links; used to download and install the latest updates for appliance services.
download.microsoft.com/download | Allow downloads from Microsoft download center.
*.servicebus.windows.net | Communication between the appliance and the Azure Migrate service.
*.discoverysrv.windowsazure.com <br/> *.migration.windowsazure.com | Connect to Azure Migrate service URLs.
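A quick way to sanity-check outbound connectivity to a few of these endpoints is a TCP probe such as the sketch below. Note that this tests raw TCP reachability on port 443 only and bypasses any proxy that may be required for actual HTTPS access:

```powershell
# Illustrative reachability probe for a sample of required endpoints.
# Test-NetConnection checks raw TCP reachability and does not use a proxy.
'management.azure.com', 'download.microsoft.com', 'login.microsoftonline.com' |
    ForEach-Object {
        Test-NetConnection -ComputerName $_ -Port 443 |
            Select-Object ComputerName, RemotePort, TcpTestSucceeded
    }
```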
login.microsoftonline.us | Used for access control and identity management by Azure Active Directory
management.usgovcloudapi.net | Used for resource deployments and management operations
*.services.visualstudio.com | Upload appliance logs used for internal monitoring.
*.vault.usgovcloudapi.net | Manage secrets in the Azure Key Vault.
-aka.ms/* | Allow access to aka links; used to download and install the latest updates for appliance services.
+aka.ms/* | Allow access to these links; used to download and install the latest updates for appliance services.
download.microsoft.com/download | Allow downloads from Microsoft download center.
*.servicebus.usgovcloudapi.net | Communication between the appliance and the Azure Migrate service.
*.discoverysrv.windowsazure.us <br/> *.migration.windowsazure.us | Connect to Azure Migrate service URLs.
download.microsoft.com/download | Allow downloads from Microsoft download center
### Public cloud URLs for private link connectivity
-The appliance needs access to the following URLs (directly or via proxy) over and above private link access.
+The appliance needs access to the following URLs (directly or via proxy) over and above private link access.
**URL** | **Details**
- | |
+ | |
*.portal.azure.com | Navigate to the Azure portal.
*.windows.net <br/> *.msftauth.net <br/> *.msauth.net <br/> *.microsoft.com <br/> *.live.com <br/> *.office.com <br/> *.microsoftonline.com <br/> *.microsoftonline-p.com <br/> *.microsoftazuread-sso.com | Used for access control and identity management by Azure Active Directory
management.azure.com | Used for resource deployments and management operations
*.services.visualstudio.com (optional) | Upload appliance logs used for internal monitoring.
-aka.ms/* (optional) | Allow access to aka links; used to download and install the latest updates for appliance services.
+aka.ms/* (optional) | Allow access to these links; used to download and install the latest updates for appliance services.
download.microsoft.com/download | Allow downloads from Microsoft download center.
*.servicebus.windows.net | **Used for VMware agentless migration**<br/><br/> Communication between the appliance and the Azure Migrate service.
*.hypervrecoverymanager.windowsazure.com | **Used for VMware agentless migration**<br/><br/> Connect to Azure Migrate service URLs.
Edition | HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\\\<InstanceName>\Se
Service Pack | HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\\\<InstanceName>\Setup | SP Version | HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\\\<InstanceName>\Setup | Version
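For reference, the same registry layout can be read locally with a sketch like the following. This is an illustration of where the values live, not the appliance's actual collector; it assumes instances are registered under the standard `Instance Names\SQL` key:

```powershell
# Enumerate locally registered SQL Server instances and read the Edition,
# SP, and Version values from each instance's ...\Setup key.
$key = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL'
$instances = Get-ItemProperty -Path $key -ErrorAction SilentlyContinue
if ($instances) {
    $instances.PSObject.Properties |
        Where-Object { $_.Name -notlike 'PS*' } |   # skip PowerShell metadata properties
        ForEach-Object {
            $setup = Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\$($_.Value)\Setup"
            [pscustomobject]@{
                Instance = $_.Name
                Edition  = $setup.Edition
                SP       = $setup.SP
                Version  = $setup.Version
            }
        }
}
```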
+#### ASP.NET web apps data
+
+Here's the web apps configuration data that the appliance collects from each Windows server discovered in your VMware environment.
+
+**Entity** | **Data**
+ |
+Web apps | Application Name <br/>Configuration Path <br/>Frontend Bindings <br/>Enabled Frameworks <br/>Hosting Web Server<br/>Sub-Applications and virtual applications <br/>Application Pool name <br/>Runtime version <br/>Managed pipeline mode
+Web server | Server Name <br/>Server Type (currently only IIS) <br/>Configuration Location <br/>Version <br/>FQDN <br/>Credentials used for discovery <br/>List of Applications
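To see roughly the same configuration data on a single IIS server, a local sketch like the following can help. It uses the `WebAdministration` module, which requires IIS with its management tools installed, and is an illustration only, not the appliance's collector:

```powershell
# List IIS sites with their bindings, plus each site's application pool
# runtime version and managed pipeline mode.
Import-Module WebAdministration
Get-Website | ForEach-Object {
    $pool = Get-ItemProperty "IIS:\AppPools\$($_.applicationPool)"
    [pscustomobject]@{
        Site           = $_.Name
        Bindings       = ($_.bindings.Collection | ForEach-Object { "$($_.protocol) $($_.bindingInformation)" }) -join '; '
        AppPool        = $_.applicationPool
        RuntimeVersion = $pool.managedRuntimeVersion
        PipelineMode   = $pool.managedPipelineMode
    }
}
```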
#### Windows server operating system data
Here's the operating system data that the appliance collects from each Windows server discovered in your VMware environment.
migrate Migrate Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-services-overview.md
Azure Migrate provides a centralized hub to assess and migrate to Azure on-premises servers, infrastructure, applications, and data. It provides:
- **Unified migration platform**: A single portal to start, run, and track your migration to Azure.
- **Range of tools**: A range of tools for assessment and migration. Azure Migrate tools include Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration. Azure Migrate also integrates with other Azure services and tools, and with independent software vendor (ISV) offerings.
- **Assessment and migration**: In the Azure Migrate hub, you can assess and migrate:
- - **Windows, Linux and SQL Server**: Assess on-premises servers including SQL Server instances and migrate them to Azure virtual machines or Azure VMware Solution (AVS) (Preview).
+ - **Servers, databases, and web apps**: Assess on-premises servers including web apps and SQL Server instances, and migrate them to Azure virtual machines or Azure VMware Solution (AVS) (Preview).
- **Databases**: Assess on-premises databases and migrate them to Azure SQL Database or to SQL Managed Instance.
- - **Web applications**: Assess on-premises web applications and migrate them to Azure App Service by using the Azure App Service Migration Assistant.
+ - **Web applications**: Assess on-premises web applications and migrate them to Azure App Service.
 - **Virtual desktops**: Assess your on-premises virtual desktop infrastructure (VDI) and migrate it to Windows Virtual Desktop in Azure.
 - **Data**: Migrate large amounts of data to Azure quickly and cost-effectively using Azure Data Box products.
The Azure Migrate hub includes these tools:
**Tool** | **Assess and migrate** | **Details** | |
-**Azure Migrate: Discovery and assessment** | Discover and assess servers including SQL | Discover and assess on-premises VMware VMs, Hyper-V VMs, and physical servers in preparation for migration to Azure.
+**Azure Migrate: Discovery and assessment** | Discover and assess servers including SQL and web apps | Discover and assess on-premises servers running in VMware, Hyper-V, and physical environments in preparation for migration to Azure.
**Azure Migrate: Server Migration** | Migrate servers | Migrate VMware VMs, Hyper-V VMs, physical servers, other virtualized servers, and public cloud VMs to Azure.
-**Data Migration Assistant** | Assess SQL Server databases for migration to Azure SQL Database, Azure SQL Managed Instance, or Azure VMs running SQL Server. | Data Migration Assistant is a stand alone tool to assess SQL Severs.It helps pinpoint potential problems blocking migration. It identifies unsupported features, new features that can benefit you after migration, and the right path for database migration. [Learn more](/sql/dma/dma-overview).
+**Data Migration Assistant** | Assess SQL Server databases for migration to Azure SQL Database, Azure SQL Managed Instance, or Azure VMs running SQL Server. | Data Migration Assistant is a stand-alone tool to assess SQL Servers. It helps pinpoint potential problems blocking migration. It identifies unsupported features, new features that can benefit you after migration, and the right path for database migration. [Learn more](/sql/dma/dma-overview).
**Azure Database Migration Service** | Migrate on-premises databases to Azure VMs running SQL Server, Azure SQL Database, or SQL Managed Instances | [Learn more](../dms/dms-overview.md) about Database Migration Service.
**Movere** | Assess servers | [Learn more](#movere) about Movere.
-**Web app migration assistant** | Assess on-premises web apps and migrate them to Azure. | Use Azure App Service Migration Assistant to assess on-premises websites for migration to Azure App Service.<br/><br/> Use Migration Assistant to migrate .NET and PHP web apps to Azure. [Learn more](https://appmigration.microsoft.com/) about Azure App Service Migration Assistant.
+**Web app migration assistant** | Assess on-premises web apps and migrate them to Azure. | Azure App Service Migration Assistant is a standalone tool to assess on-premises websites for migration to Azure App Service.<br/><br/> Use Migration Assistant to migrate .NET and PHP web apps to Azure. [Learn more](https://appmigration.microsoft.com/) about Azure App Service Migration Assistant.
**Azure Data Box** | Migrate offline data | Use Azure Data Box products to move large amounts of offline data to Azure. [Learn more](../databox/index.yml).
> [!NOTE]
Azure Migrate integrates with several ISV offerings.
## Azure Migrate: Discovery and assessment tool
-The Azure Migrate: Discovery and assessment tool discovers and assesses on-premises VMware VMs, Hyper-V VMs, and physical servers for migration to Azure.
+The Azure Migrate: Discovery and assessment tool discovers and assesses on-premises VMware VMs, Hyper-V VMs, and physical servers for migration to Azure.
Here's what the tool does:
-- **Azure readiness**: Assesses whether on-premises servers are ready for migration to Azure.
+- **Azure readiness**: Assesses whether on-premises servers, SQL Servers, and web apps are ready for migration to Azure.
- **Azure sizing**: Estimates the size of Azure VMs/Azure SQL configuration/number of Azure VMware Solution nodes after migration.
- **Azure cost estimation**: Estimates costs for running on-premises servers in Azure.
- **Dependency analysis**: Identifies cross-server dependencies and optimization strategies for moving interdependent servers to Azure. Learn more about Discovery and assessment with [dependency analysis](concepts-dependency-visualization.md).
-Discovery and assessment uses a lightweight [Azure Migrate appliance](migrate-appliance.md) that you deploy on-premises.
+Discovery and assessment use a lightweight [Azure Migrate appliance](migrate-appliance.md) that you deploy on-premises.
- The appliance runs on a VM or physical server. You can install it easily using a downloaded template.
- The appliance discovers on-premises servers. It also continually sends server metadata and performance data to Azure Migrate.
- Appliance discovery is agentless. Nothing is installed on discovered servers.
- After appliance discovery, you can gather discovered servers into groups and run assessments for each group.
## Azure Migrate: Server Migration tool
The Azure Migrate: Server Migration tool helps in migrating servers to Azure:
**Migrate** | **Details** |
-On-premises VMware VMs | Migrate VMs to Azure using agentless or agent-based migration.<br/><br/> For agentless migration, Server Migration uses the same Azure Migrate appliance that can also be used by Discovery and assessment for discovery and assessment of VMware VMs.<br/><br/> For agent-based migration, Server Migration uses a replication appliance.
+On-premises VMware VMs | Migrate VMs to Azure using agentless or agent-based migration.<br/><br/> For agentless migration, Server Migration uses the same appliance that is used by the Discovery and assessment tool for discovery and assessment of servers.<br/><br/> For agent-based migration, Server Migration uses a replication appliance.
On-premises Hyper-V VMs | Migrate VMs to Azure.<br/><br/> Server Migration uses provider agents installed on the Hyper-V host for the migration.
On-premises physical servers or servers hosted on other clouds | You can migrate physical servers to Azure. You can also migrate other virtualized servers, and VMs from other public clouds, by treating them as physical servers for the purpose of migration. Server Migration uses a replication appliance for the migration.
## Selecting assessment and migration tools
In the Azure Migrate hub, you select the tool you want to use for assessment or migration and add it to a project. If you add an ISV tool or Movere:
With both Azure and partner ISV tools built in, Azure Migrate has an extensive range of assessment and migration capabilities, including:
- Import-based assessments.
- Dependency analysis of agentless applications.
-If you're looking for expert help to get started, Microsoft has skilled [Azure Expert Managed Service Providers](https://azure.microsoft.com/partners) to guide you. Check out the [Azure Migrate website](https://azure.microsoft.com/services/azure-migrate/).
+If you're looking for expert help to get started, Microsoft has skilled [Azure Expert Managed Service Providers](https://azure.microsoft.com/partners) to guide you. Check out the [Azure Migrate website](https://azure.microsoft.com/services/azure-migrate/).
## Azure Migrate versions
There are two versions of the Azure Migrate service.
- **Current version**: Use this version to create projects, discover on-premises servers, and orchestrate assessments and migrations. [Learn more](whats-new.md) about what's new in this version.
+- **Previous version**: The previous version of Azure Migrate, also known as classic Azure Migrate, supports only assessment of on-premises servers running on VMware. Classic Azure Migrate is retiring in Feb 2024. After Feb 2024, the classic version of Azure Migrate will no longer be supported, and the inventory metadata in classic projects will be deleted. You can't upgrade projects or components in the previous version to the new version. You need to [create a new project](create-manage-projects.md), and [add assessment and migration tools](./create-manage-projects.md) to it. Use the tutorials to understand how to use the assessment and migration tools available. If you had a Log Analytics workspace attached to a classic project, you can attach it to a project of the current version after you delete the classic project.
To access existing projects in the Azure portal, search for and select **Azure Migrate**. The **Azure Migrate** dashboard has a notification and a link to access old projects.
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-vmware.md
Title: VMware server assessment support in Azure Migrate
+ Title: VMware server discovery support in Azure Migrate
description: Learn about Azure Migrate discovery and assessment support for servers in a VMware environment.
Last updated 03/17/2021
-# Support matrix for VMware assessment
+# Support matrix for VMware discovery
This article summarizes prerequisites and support requirements for using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool to discover and assess servers in a VMware environment for migration to Azure.
Support | Details
> However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](https://go.microsoft.com/fwlink/?linkid=2158046) to understand what to choose.
+## ASP.NET web apps discovery requirements
+
+[Software inventory](how-to-discover-applications.md) identifies the web server role on discovered servers. If a server is found to have the web server role enabled, Azure Migrate performs web apps discovery on the server.
+You can add both domain and non-domain credentials on the appliance. Make sure that the account used has local admin privileges on the source servers. Azure Migrate automatically maps credentials to the respective servers, so you don't have to map them manually. Most importantly, these credentials are never sent to Microsoft and remain on the appliance running in the source environment.
+After the appliance is connected, it gathers configuration data for the IIS web server and ASP.NET web apps. Web apps configuration data is updated once every 24 hours.
+
+Support | Details
+ |
+**Supported servers** | Currently supported only for Windows servers running IIS in your VMware environment.
+**Windows servers** | Windows Server 2008 R2 and later are supported.
+**Linux servers** | Currently not supported.
+**IIS access** | Web apps discovery requires a local admin user account.
+**IIS versions** | IIS 7.5 and later are supported.
+
+> [!NOTE]
+> Data is always encrypted at rest and during transit.
## Dependency analysis requirements (agentless)
[Dependency analysis](concepts-dependency-visualization.md) helps you identify dependencies between on-premises servers that you want to assess and migrate to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
Support | Details
- |
+ |
**Supported servers** | Currently supported only for servers in your VMware environment.
**Windows servers** | Windows Server 2019<br />Windows Server 2016<br /> Windows Server 2012 R2<br /> Windows Server 2012<br /> Windows Server 2008 R2 (64-bit)<br />Microsoft Windows Server 2008 (32-bit)
**Linux servers** | Red Hat Enterprise Linux 7, 6, 5<br /> Ubuntu Linux 16.04, 14.04<br /> Debian 8, 7<br /> Oracle Linux 7, 6<br /> CentOS 7, 6, 5<br /> SUSE Linux Enterprise Server 11 and later
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix.md
The table summarizes supported discovery, assessment, and migration scenarios.
**Deployment** | **Details** |
**Discovery** | You can discover server metadata, and dynamic performance data.
-**App-discovery** | You can discover apps, roles, and features running on VMware VMs. Currently this feature is limited to discovery only. Assessment is currently at the server level. We don't yet offer app, role, or feature-based assessments.
-**Assessment** | Assess on-premises workloads and data running on VMware VMs, Hyper-V VMs, and physical servers. Assess using Azure Migrate: Server Assessment, Microsoft Data Migration Assistant (DMA), as well as other tools and ISV offerings.
-**Migration** | Migrate workloads and data running on physical servers, VMware VMs, Hyper-V VMs, physical servers, and cloud-based VMS to Azure. Migrate using Azure Migrate: Server Assessment and Azure Database Migration Service (DMS), and well as other tools and ISV offerings.
+**Software inventory** | You can discover apps, roles, and features running on VMware VMs. Currently this feature is limited to discovery only. Assessment is currently at the server level. We don't yet offer app, role, or feature-based assessments.
+**Assessment** | Assess on-premises workloads and data running on VMware VMs, Hyper-V VMs, and physical servers. Assess using Azure Migrate: Discovery and assessment, Microsoft Data Migration Assistant (DMA), as well as other tools and ISV offerings.
+**Migration** | Migrate workloads and data running on VMware VMs, Hyper-V VMs, physical servers, and cloud-based VMs to Azure. Migrate using Azure Migrate: Server Migration and Azure Database Migration Service (DMS), as well as other tools and ISV offerings.
> [!NOTE]
> Currently, ISV tools can't send data to Azure Migrate in Azure Government. You can use integrated Microsoft tools, or use partner tools independently.
## Supported tools
Specific tool support is summarized in the table.
**Tool** | **Assess** | **Migrate** | |
-Azure Migrate: Server Assessment | Assess [VMware VMs](./tutorial-discover-vmware.md), [Hyper-V VMs](./tutorial-discover-hyper-v.md), and [physical servers](./tutorial-discover-physical.md). | Not available (N/A)
+Azure Migrate: Discovery and assessment | Assess [VMware VMs](./tutorial-discover-vmware.md), [Hyper-V VMs](./tutorial-discover-hyper-v.md), and [physical servers](./tutorial-discover-physical.md). | Not available (N/A)
Azure Migrate: Server Migration | N/A | Migrate [VMware VMs](tutorial-migrate-vmware.md), [Hyper-V VMs](tutorial-migrate-hyper-v.md), and [physical servers](tutorial-migrate-physical-virtual-machines.md).
-[Carbonite](https://www.carbonite.com/data-protection-resources/resource/Datasheet/carbonite-migrate-for-microsoft-azure) | N/A | Migrate VMware VMs, Hyper-V VMs, physical servers and other cloud workloads.
-[Cloudamize](https://www.cloudamize.com/platform#tab-0)| Assess VMware VMs, Hyper-V VMs, physical servers,and other cloud workloads. | N/A
+[Carbonite](https://www.carbonite.com/data-protection-resources/resource/Datasheet/carbonite-migrate-for-microsoft-azure) | N/A | Migrate VMware VMs, Hyper-V VMs, physical servers, and other cloud workloads.
+[Cloudamize](https://www.cloudamize.com/platform#tab-0)| Assess VMware VMs, Hyper-V VMs, physical servers, and other cloud workloads. | N/A
[Corent Technology](https://go.microsoft.com/fwlink/?linkid=2084928) | Assess VMware VMs, Hyper-V VMs, physical servers, and other cloud workloads. | Migrate VMware VMs, Hyper-V VMs, physical servers, public cloud workloads.
-[Device 42](https://go.microsoft.com/fwlink/?linkid=2097158) | Assess VMware VMs, Hyper-V VMs, physical servers and other cloud workloads.| N/A
+[Device 42](https://go.microsoft.com/fwlink/?linkid=2097158) | Assess VMware VMs, Hyper-V VMs, physical servers, and other cloud workloads.| N/A
[DMA](/sql/dma/dma-overview) | Assess SQL Server databases. | N/A
[DMS](../dms/dms-overview.md) | N/A | Migrate SQL Server, Oracle, MySQL, PostgreSQL, MongoDB.
[Lakeside](https://go.microsoft.com/fwlink/?linkid=2104908) | Assess virtual desktop infrastructure (VDI) | N/A
[Movere](https://www.movere.io/) | Assess VMware VMs, Hyper-V VMs, Xen VMs, physical servers, workstations (including VDI) and other cloud workloads. | N/A
-[RackWare](https://go.microsoft.com/fwlink/?linkid=2102735) | N/A | Migrate VMware VMs, Hyper-V VMs, Xen VMs, KVM VMs, physical servers and other cloud workloads
-[Turbonomic](https://go.microsoft.com/fwlink/?linkid=2094295) | Assess VMware VMs, Hyper-V VMs, physical servers and other cloud workloads. | N/A
+[RackWare](https://go.microsoft.com/fwlink/?linkid=2102735) | N/A | Migrate VMware VMs, Hyper-V VMs, Xen VMs, KVM VMs, physical servers, and other cloud workloads
+[Turbonomic](https://go.microsoft.com/fwlink/?linkid=2094295) | Assess VMware VMs, Hyper-V VMs, physical servers, and other cloud workloads. | N/A
[UnifyCloud](https://go.microsoft.com/fwlink/?linkid=2097195) | Assess VMware VMs, Hyper-V VMs, physical servers and other cloud workloads, and SQL Server databases. | N/A
[Webapp Migration Assistant](https://appmigration.microsoft.com/) | Assess web apps | Migrate web apps.
-[Zerto](https://go.microsoft.com/fwlink/?linkid=2157322) | N/A | Migrate VMware VMs, Hyper-V VMs, physical servers and other cloud workloads.
-
+[Zerto](https://go.microsoft.com/fwlink/?linkid=2157322) | N/A | Migrate VMware VMs, Hyper-V VMs, physical servers, and other cloud workloads.
## Project
**Support** | **Details** |
Subscription | Can have multiple projects within a subscription.
-Azure permissions | User need Contributor or Owner permissions in the subscription to create a project.
+Azure permissions | Users need Contributor or Owner permissions in the subscription to create a project.
VMware VMs | Assess up to 35,000 VMware VMs in a single project.
Hyper-V VMs | Assess up to 35,000 Hyper-V VMs in a single project.
For Azure Migrate to work with Azure you need these permissions before you start
**Task** | **Permissions** | **Details** | |
Create a project | Your Azure account needs permissions to create a project. | Set up for [VMware](./tutorial-discover-vmware.md#prepare-an-azure-user-account), [Hyper-V](./tutorial-discover-hyper-v.md#prepare-an-azure-user-account), or [physical servers](./tutorial-discover-physical.md#prepare-an-azure-user-account).
-Register the Azure Migrate appliance| Azure Migrate uses a lightweight [Azure Migrate appliance](migrate-appliance.md) to assess servers with Azure Migrate: Server Assessment, and to run [agentless migration](server-migrate-overview.md) of VMware VMs with Azure Migrate: Server Migration. This appliance discovers servers, and sends metadata and performance data to Azure Migrate.<br/><br/> During registration, register providers (Microsoft.OffAzure, Microsoft.Migrate, and Microsoft.KeyVault) are registered with the subscription chosen in the appliance, so that the subscription works with the resource provider. To register, you need Contributor or Owner access on the subscription.<br/><br/> **VMware**-During onboarding, Azure Migrate creates two Azure Active Directory (Azure AD) apps. The first app communicates between the appliance agents and the Azure Migrate service. The app doesn't have permissions to make Azure resource management calls or have Azure RBAC access for resources. The second app accesses an Azure Key Vault created in the user subscription for agentless VMware migration only. In agentless migration, Azure Migrate creates a Key Vault to manage access keys to the replication storage account in your subscription. It has Azure RBAC access on the Azure Key Vault (in the customer tenant) when discovery is initiated from the appliance.<br/><br/> **Hyper-V**-During onboarding. Azure Migrate creates one Azure AD app. The app communicates between the appliance agents and the Azure Migrate service. The app doesn't have permissions to make Azure resource management calls or have Azure RBAC access for resources. | Set up for [VMware](./tutorial-discover-vmware.md#prepare-an-azure-user-account), [Hyper-V](./tutorial-discover-hyper-v.md#prepare-an-azure-user-account), or [physical servers](./tutorial-discover-physical.md#prepare-an-azure-user-account).
+Register the Azure Migrate appliance | Azure Migrate uses a lightweight [Azure Migrate appliance](migrate-appliance.md) to discover and assess servers with Azure Migrate: Discovery and assessment, and to run [agentless migration](server-migrate-overview.md) of VMware VMs with Azure Migrate: Server Migration. This appliance discovers servers, and sends metadata and performance data to Azure Migrate.<br/><br/> During registration, the resource providers (Microsoft.OffAzure, Microsoft.Migrate, and Microsoft.KeyVault) are registered with the subscription chosen in the appliance, so that the subscription works with the resource providers. To register, you need Contributor or Owner access on the subscription.<br/><br/> **VMware** - During onboarding, Azure Migrate creates two Azure Active Directory (Azure AD) apps. The first app communicates between the appliance agents and the Azure Migrate service. The app doesn't have permissions to make Azure resource management calls or have Azure RBAC access for resources. The second app accesses an Azure Key Vault created in the user subscription for agentless VMware migration only. In agentless migration, Azure Migrate creates a Key Vault to manage access keys to the replication storage account in your subscription. It has Azure RBAC access on the Azure Key Vault (in the customer tenant) when discovery is initiated from the appliance.<br/><br/> **Hyper-V** - During onboarding, Azure Migrate creates one Azure AD app. The app communicates between the appliance agents and the Azure Migrate service. The app doesn't have permissions to make Azure resource management calls or have Azure RBAC access for resources. | Set up for [VMware](./tutorial-discover-vmware.md#prepare-an-azure-user-account), [Hyper-V](./tutorial-discover-hyper-v.md#prepare-an-azure-user-account), or [physical servers](./tutorial-discover-physical.md#prepare-an-azure-user-account).
Create a key vault for VMware agentless migration | To migrate VMware VMs with agentless Azure Migrate: Server Migration, Azure Migrate creates a Key Vault to manage access keys to the replication storage account in your subscription. To create the vault, you set permissions (Owner, or Contributor and User Access Administrator) on the resource group where the project resides. | [Set up](./tutorial-discover-vmware.md#prepare-an-azure-user-account) permissions.
## Supported geographies (Public cloud)
-You can create a project in a number of geographies in the public cloud.
+You can create a project in many geographies in the public cloud.
- Although you can only create projects in these geographies, you can assess or migrate servers for other target locations.
- The project geography is only used to store the discovered metadata.
Create project | United States | Metadata is stored in US Gov Arizona, US Gov Vi
Target assessment | United States | Target regions: US Gov Arizona, US Gov Virginia, US Gov Texas
Target replication | United States | Target regions: US DoD Central, US DoD East, US Gov Arizona, US Gov Iowa, US Gov Texas, US Gov Virginia
## VMware assessment and migration
-[Review](migrate-support-matrix-vmware.md) the Azure Migrate: Server Assessment and Azure Migrate: Server Migration support matrix for VMware VMs.
+[Review](migrate-support-matrix-vmware.md) the Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration support matrix for VMware VMs.
## Hyper-V assessment and migration
-[Review](migrate-support-matrix-hyper-v.md) the Azure Migrate: Server Assessment and Azure Migrate: Server Migration support matrix for Hyper-V VMs.
--
+[Review](migrate-support-matrix-hyper-v.md) the Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration support matrix for Hyper-V VMs.
## Azure Migrate versions
migrate Quickstart Create Migrate Project https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/quickstart-create-migrate-project.md
The template used in this quickstart is from [Azure Quickstart Templates](https:
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.migrate/migrate-project-create/azuredeploy.json":::
## Deploy the template
To deploy the template, the **Subscription**, **Resource group**, **Project name**, and **Location** are required.
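As one way to deploy it, a sketch with Azure PowerShell follows. The parameter names (`migrateProjectName`, `location`) are assumptions inferred from the required values above; check the template's `parameters` section for the exact names.

```powershell
# Deploy the quickstart template into an existing resource group.
# Parameter names are assumed; verify them against the template.
New-AzResourceGroupDeployment `
    -ResourceGroupName 'myResourceGroup' `
    -TemplateUri 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.migrate/migrate-project-create/azuredeploy.json' `
    -TemplateParameterObject @{ migrateProjectName = 'myMigrateProject'; location = 'centralus' }
```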
To confirm that the Azure Migrate project was created, use the Azure portal.
1. Navigate to Azure Migrate by searching for **Azure Migrate** in the search bar on the Azure portal.
-2. Click the **Discover,** **Assess,** and **Migrate** button under the Windows, Linux, and SQL Server tile.
+2. Click the **Discover, assess and migrate** button under the Servers, databases and web apps tile.
3. Select the **Azure subscription** and **Project** as per the values specified in the deployment.
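Alternatively, a quick check from Azure PowerShell can confirm the resource exists; the resource group name is a placeholder, and the resource type shown is an assumption about how the project is surfaced in Azure Resource Manager:

```powershell
# List Azure Migrate projects in the resource group used for the deployment.
Get-AzResource -ResourceGroupName 'myResourceGroup' `
    -ResourceType 'Microsoft.Migrate/migrateProjects' |
    Select-Object Name, Location
```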
migrate Resources Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/resources-faq.md
Azure Migrate provides a central hub to track discovery, assessment, and migration of your on-premises apps and workloads to Azure.
## What can I do with Azure Migrate?
-Use Azure Migrate to discover, assess, and migrate on-premises infrastructure, applications, and data to Azure. Azure Migrate supports assessment and migration of on-premises VMware VMs, Hyper-V VMs, physical servers, other virtualized VMs, databases, web apps, and virtual desktops.
+Use Azure Migrate to discover, assess, and migrate on-premises infrastructure, applications, and data to Azure. Azure Migrate supports assessment and migration of on-premises VMware VMs, Hyper-V VMs, physical servers, other virtualized VMs, databases, web apps, and virtual desktops.
## What's the difference between Azure Migrate and Azure Site Recovery?
-[Azure Migrate](migrate-services-overview.md) provides a centralized hub for assessment and migration to Azure.
+[Azure Migrate](migrate-services-overview.md) provides a centralized hub for assessment and migration to Azure.
- Using Azure Migrate provides interoperability and future extensibility with Azure Migrate tools, other Azure services, and third-party tools.
-- The Azure Migrate: Server Migration tool is purpose-built for server migration to Azure. It's optimized for migration. You don't need to learn about concepts and scenarios that aren't directly relevant to migration.
+- The Azure Migrate: Server Migration tool is purpose-built for server migration to Azure. It's optimized for migration. You don't need to learn about concepts and scenarios that aren't directly relevant to migration.
- There are no tool usage charges for migration for 180 days, from the time replication is started for a VM. It gives you time to complete migration. You only pay for the storage and network resources used in replication, and for compute charges consumed during test migrations.
- Azure Migrate supports all migration scenarios supported by Site Recovery. Also, for VMware VMs, Azure Migrate provides an agentless migration option.
- We're prioritizing new migration features for the Azure Migrate: Server Migration tool only. These features aren't targeted for Site Recovery.
The Azure Migrate: Server Migration tool uses some back-end Site Recovery functionality.
Classic Azure Migrate is retiring in Feb 2024. After Feb 2024, classic version of Azure Migrate will no longer be supported and the inventory metadata in the classic project will be deleted. You can't upgrade projects or components in the previous version to the new version. You need to [create a new Azure Migrate project](create-manage-projects.md), and [add assessment and migration tools](./create-manage-projects.md) to it. Use the tutorials to understand how to use the assessment and migration tools available. If you had a Log Analytics workspace attached to a classic project, you can attach it to a project of current version after you delete the classic project.
-## What's the difference between Azure Migrate: Server Assessment and the MAP Toolkit?
+## What's the difference between Azure Migrate: Discovery and assessment and the MAP Toolkit?
Server Assessment provides assessment to help with migration readiness, and evaluation of workloads for migration to Azure. The [Microsoft Assessment and Planning (MAP) Toolkit](https://www.microsoft.com/download/details.aspx?id=7826) helps with other tasks, including migration planning for newer versions of Windows client and server operating systems, and software usage tracking. For these scenarios, continue to use the MAP Toolkit.
Review the supported geographies for [public](migrate-support-matrix.md#supporte
## How do I get started?
-Identify the tool you need, and then add the tool to an Azure Migrate project.
+Identify the tool you need, and then add the tool to an Azure Migrate project.
To add an ISV tool or Movere:
You can track your migration journey from within the Azure Migrate project, across tools.
## How do I delete a project?
-Learn how to [delete a project](how-to-delete-project.md).
+Learn how to [delete a project](how-to-delete-project.md).
## Next steps
migrate Troubleshoot Appliance Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-appliance-diagnostic.md
ms. Previously updated : 08/09/2021 Last updated : 08/11/2021
# Diagnose and solve issues with Azure Migrate appliance
The **Diagnose and solve** capability on Azure Migrate appliance helps users identify and resolve issues with the appliance.
You can run **Diagnose and solve** at any time from the appliance configuration manager to generate a diagnostics report. The report provides information about the checks performed, their status, the issues identified and recommendation steps to solve the issues.
+> [!IMPORTANT]
+> The **Diagnose and solve** capability is currently in preview for the Azure Migrate appliance.
+> This preview is covered by customer support and can be used for production workloads.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
++
## Diagnostic checks

*Diagnose and solve* runs some pre-validations to check that the required configuration files are not missing or blocked by antivirus software on the appliance, and then performs the following checks:
migrate Troubleshoot Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-assessment.md
ms. Previously updated : 01/02/2020 Last updated : 07/28/2021 # Troubleshoot assessment This article helps you troubleshoot issues with assessment and dependency visualization with [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool). - ## Assessment readiness issues Fix assessment readiness issues as follows:
Requires a Microsoft Visual Studio subscription | The server is running a Window
VM not found for the required storage performance | The storage performance (input/output operations per second [IOPS] and throughput) required for the server exceeds Azure VM support. Reduce storage requirements for the server before migration. VM not found for the required network performance | The network performance (in/out) required for the server exceeds Azure VM support. Reduce the networking requirements for the server. VM not found in the specified location | Use a different target location before migration.
-One or more unsuitable disks | One or more disks attached to the VM don't meet Azure requirements.A<br/><br/> Azure Migrate: Discovery and assessment currently doesn't support Ultra SSD disks, and assesses the disks based on the disk limits for premium managed disks (32 TB).<br/><br/> For each disk attached to the VM, make sure that the size of the disk is < 64 TB (supported by Ultra SSD disks).<br/><br/> If it isn't, reduce the disk size before you migrate to Azure, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits. Make sure that the performance (IOPS and throughput) needed by each disk is supported by Azure [managed virtual machine disks](../azure-resource-manager/management/azure-subscription-service-limits.md#storage-limits).
+One or more unsuitable disks | One or more disks attached to the VM don't meet Azure requirements.<br/><br/> Azure Migrate: Discovery and assessment assesses the disks based on the disk limits for Ultra disks (64 TB).<br/><br/> For each disk attached to the VM, make sure that the size of the disk is < 64 TB (supported by Ultra SSD disks).<br/><br/> If it isn't, reduce the disk size before you migrate to Azure, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits. Make sure that the performance (IOPS and throughput) needed by each disk is supported by Azure [managed virtual machine disks](../azure-resource-manager/management/azure-subscription-service-limits.md#storage-limits).
One or more unsuitable network adapters. | Remove unused network adapters from the server before migration. Disk count exceeds limit | Remove unused disks from the server before migration.
-Disk size exceeds limit | Azure Migrate: Discovery and assessment currently doesn't support Ultra SSD disks, and assesses the disks based on premium disk limits (32 TB).<br/><br/> However, Azure supports disks with up to 64-TB size (supported by Ultra SSD disks). Shrink disks to less than 64 TB before migration, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits.
+Disk size exceeds limit | Azure Migrate: Discovery and assessment supports disks with up to 64-TB size (Ultra disks). Shrink disks to less than 64 TB before migration, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits.
Disk unavailable in the specified location | Make sure the disk is in your target location before you migrate. Disk unavailable for the specified redundancy | The disk should use the redundancy storage type defined in the assessment settings (LRS by default). Could not determine disk suitability because of an internal error | Try creating a new assessment for the group.
In the case of VMware and Hyper-V VMs, Azure VM assessment marks Linux VMs as "C
- The gap prevents it from detecting the minor version of the Linux OS installed on the on-premises VMs. - For example, for RHEL 6.10, currently Azure VM assessment detects only RHEL 6 as the OS version. This is because the vCenter Server or the Hyper-V host doesn't provide the kernel version for Linux VM operating systems.-- Because Azure endorses only specific versions of Linux, the Linux VMs are currently marked as conditionally ready in Azure VM assessment.
+- Because Azure endorses only specific versions of Linux, the Linux VMs are currently marked as conditionally ready in Azure VM assessment.
- You can determine whether the Linux OS running on the on-premises VM is endorsed in Azure by reviewing [Azure Linux support](../virtual-machines/linux/endorsed-distros.md).-- After you've verified the endorsed distribution, you can ignore this warning.
+- After you've verified the endorsed distribution, you can ignore this warning.
This gap can be addressed by enabling [application discovery](./how-to-discover-applications.md) on the VMware VMs. Azure VM assessment uses the operating system detected from the VM using the guest credentials provided. This operating system data identifies the right OS information in the case of both Windows and Linux VMs.
To show how this can affect recommendations, let's take an example:
We have an on-premises VM with four cores and eight GB of memory, with 50% CPU utilization and 50% memory utilization, and a specified comfort factor of 1.3. -- If the assessment is **As on-premises**, an Azure VM SKU with four cores and 8 GB of memory is recommended.
+- If the assessment is **As on-premises**, an Azure VM SKU with four cores and 8 GB of memory is recommended.
+- If the assessment is performance-based, based on effective CPU and memory utilization (50% of 4 cores * 1.3 = 2.6 cores and 50% of 8-GB memory * 1.3 = 5.3-GB memory), the cheapest VM SKU of four cores (nearest supported core count) and 8 GB of memory (nearest supported memory size) is recommended. - [Learn more](concepts-assessment-calculation.md#types-of-assessments) about assessment sizing. ## Why are the recommended Azure disk SKUs bigger than on-premises in an Azure VM assessment? Azure VM assessment might recommend a bigger disk based on the type of assessment.+ - Disk sizing depends on two assessment properties: sizing criteria and storage type.-- If the sizing criteria is **Performance-based**, and the storage type is set to **Automatic**, the IOPS, and throughput values of the disk are considered when identifying the target disk type (Standard HDD, Standard SSD, or Premium). A disk SKU from the disk type is then recommended, and the recommendation considers the size requirements of the on-premises disk.-- If the sizing criteria is **Performance-based**, and the storage type is **Premium**, a premium disk SKU in Azure is recommended based on the IOPS, throughput, and size requirements of the on-premises disk. The same logic is used to perform disk sizing when the sizing criteria is **As on-premises** and the storage type is **Standard HDD**, **Standard SSD**, or **Premium**.
+- If the sizing criteria is **Performance-based**, and the storage type is set to **Automatic**, the IOPS and throughput values of the disk are considered when identifying the target disk type (Standard HDD, Standard SSD, Premium, or Ultra disk). A disk SKU from the disk type is then recommended, and the recommendation considers the size requirements of the on-premises disk.
+- If the sizing criteria is **Performance-based**, and the storage type is **Premium**, a premium disk SKU in Azure is recommended based on the IOPS, throughput, and size requirements of the on-premises disk. The same logic is used to perform disk sizing when the sizing criteria is **As on-premises** and the storage type is **Standard HDD**, **Standard SSD**, **Premium**, or **Ultra disk**.
As an example, if you have an on-premises disk with 32 GB of storage, but the aggregated read and write IOPS for the disk is 800 IOPS, Azure VM assessment recommends a premium disk (because of the higher IOPS requirements), and then recommends a disk SKU that can support the required IOPS and size. The nearest match in this example would be P15 (256 GB, 1100 IOPS). Even though the size required by the on-premises disk was 32 GB, Azure VM assessment recommends a larger disk because of the high IOPS requirement of the on-premises disk.
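The selection logic can be sketched as follows (a minimal sketch; the SKU table is illustrative, not the complete premium disk lineup):

```powershell
# Minimal sketch of the sizing logic described above: pick the smallest premium
# disk SKU that satisfies both the size and the IOPS requirements.
$premiumSkus = @(
    @{ Name = 'P10'; SizeGiB = 128; Iops = 500  },
    @{ Name = 'P15'; SizeGiB = 256; Iops = 1100 },
    @{ Name = 'P20'; SizeGiB = 512; Iops = 2300 }
)

$requiredSizeGiB = 32    # size of the on-premises disk
$requiredIops    = 800   # aggregated read/write IOPS discovered for the disk

$recommended = $premiumSkus |
    Where-Object { $_.SizeGiB -ge $requiredSizeGiB -and $_.Iops -ge $requiredIops } |
    Sort-Object { $_.SizeGiB } |
    Select-Object -First 1

$recommended.Name   # P15: larger than 32 GiB purely because of the 800-IOPS requirement
```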
For "Performance-based" assessment, the assessment report export says 'Percentag
> [!Note] > If any of the performance counters are missing, Azure Migrate: Discovery and assessment falls back to the allocated cores/memory on-premises and recommends a VM size accordingly. - ## Why is performance data missing for some/all SQL instances/databases in my Azure SQL assessment? To ensure performance data is collected, please check:
If any of the performance counters are missing, Azure SQL assessment recommends
The confidence rating is calculated for "Performance-based" assessments based on the percentage of [available data points](./concepts-assessment-calculation.md#ratings) needed to compute the assessment. Below are the reasons why an assessment could get a low confidence rating: - You did not profile your environment for the duration for which you are creating the assessment. For example, if you are creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you cannot wait for the duration, please change the performance duration to a smaller period and **Recalculate** the assessment.
-
- The assessment is not able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, please ensure that: - Servers are powered on for the duration of the assessment - Outbound connections on port 443 are allowed
- - For Hyper-V Servers dynamic memory is enabled
+ - For Hyper-V Servers dynamic memory is enabled
 - The connection status of agents in Azure Migrate is 'Connected'; also check the last heartbeat - For Azure SQL assessments, the Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance blade
Azure VM assessment currently considers the operating system license cost only f
Azure VM assessment continuously collects performance data of on-premises servers and uses it to recommend the VM SKU and disk SKU in Azure. [Learn how](concepts-assessment-calculation.md#calculate-sizing-performance-based) performance-based data is collected.
+## Can I migrate my disks to Ultra disk using Azure Migrate?
+
+No. Currently, neither Azure Migrate nor Azure Site Recovery supports migration to Ultra disks. Find the steps to deploy an Ultra disk [here](https://docs.microsoft.com/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal#deploy-an-ultra-disk).
+
+## Why are the provisioned IOPS and throughput in my Ultra disk more than my on-premises IOPS and throughput?
+
+According to the [official pricing page](https://azure.microsoft.com/pricing/details/managed-disks/), Ultra Disks are billed based on the provisioned size, provisioned IOPS, and provisioned throughput. For example:
+If you provision a 200-GiB Ultra Disk with 20,000 IOPS and 1,000 MB/second and delete it after 20 hours, it maps to the disk size offer of 256 GiB, and you're billed for 256 GiB, 20,000 IOPS, and 1,000 MB/second for 20 hours.
+
+The provisioned IOPS are derived from the discovered throughput, assuming a 256-KB I/O size:
+
+IOPS to be provisioned = (discovered throughput in MB/s) × 1024 / 256
+
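A worked instance of the formula (a minimal sketch; the 250-MB/s value is hypothetical):

```powershell
# Convert discovered throughput (MB/s) to IOPS to provision,
# assuming a 256-KB I/O size: MB/s * 1024 KB/MB / 256 KB per I/O.
$discoveredThroughputMBps = 250   # hypothetical discovered value
$iopsToProvision = $discoveredThroughputMBps * 1024 / 256
$iopsToProvision                  # 1000
```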
+## Does the Ultra disk recommendation consider latency?
+
+No. Currently, only disk size, total throughput, and total IOPS are used for sizing and costing.
+
+## I can see that the M series supports Ultra disks, but why does my assessment that recommended Ultra disks say "No VM found for this location"?
+
+This is possible because not all VM sizes that support Ultra disks are available in all regions that support Ultra disks. Change the target assessment region to get a VM size for this server.
+ ## Why is my assessment showing a warning that it was created with an invalid combination of Reserved Instances, VM uptime, and Discount (%)?

When you select 'Reserved instances', the 'Discount (%)' and 'VM uptime' properties are not applicable. Because your assessment was created with an invalid combination of these properties, the edit and recalculate buttons are disabled. Please create a new assessment. [Learn more](./concepts-assessment-calculation.md#whats-an-assessment).

## I do not see performance data for some network adapters on my physical servers
This can happen if the physical server has Hyper-V virtualization enabled. On th
Readiness category may be incorrectly marked as "Not Ready" for a physical server that has Hyper-V virtualization enabled. On these servers, due to a product gap, Azure Migrate currently discovers both the physical and virtual adapters, so the number of network adapters discovered is higher than the actual number. In both as-on-premises and performance-based assessments, Azure VM assessment picks an Azure VM that can support the required number of network adapters. If more than 32 network adapters are discovered (the maximum number of NICs supported on Azure VMs), the server is marked "Not ready". [Learn more](./concepts-assessment-calculation.md#calculating-sizing) about the impact of the number of NICs on sizing.

## Number of discovered NICs higher than actual for physical servers

This can happen if the physical server has Hyper-V virtualization enabled. On these servers, Azure Migrate currently discovers both the physical and virtual adapters, so the number of NICs discovered is higher than the actual number. You can list both kinds of adapters as shown in the sketch below.
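A minimal sketch (run on the physical server; `Get-NetAdapter` requires Windows Server 2012 or later) that separates the physical adapters from the virtual ones:

```powershell
# List network adapters and split them into physical and virtual, to see why
# the discovered NIC count can exceed the number of physical NICs.
$adapters = Get-NetAdapter

'Physical adapters:'
$adapters | Where-Object { -not $_.Virtual } | Select-Object Name, InterfaceDescription

'Virtual adapters (for example, Hyper-V virtual switches):'
$adapters | Where-Object { $_.Virtual } | Select-Object Name, InterfaceDescription
```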
## Capture network traffic

Collect network traffic logs as follows:
- In Microsoft Edge or Internet Explorer, right-click the errors and select **Copy all**. 7. Close Developer Tools.

## Where is the operating system data in my assessment discovered from?
-- For VMware VMs, by default, it is the operating system data provided by the vCenter.
- - For VMware linux VMs, if application discovery is enabled, the OS details are fetched from the guest VM. To check which OS details in the assessment, go to the Discovered servers view, and mouse over the value in the "Operating system" column. In the text that pops up, you would be able to see whether the OS data you see is gathered from vCenter server or from the guest VM using the VM credentials.
+- For VMware VMs, by default, it is the operating system data provided by vCenter Server.
+ - For VMware Linux VMs, if application discovery is enabled, the OS details are fetched from the guest VM. To check which OS details are shown in the assessment, go to the Discovered servers view and mouse over the value in the "Operating system" column. In the text that pops up, you can see whether the OS data is gathered from the vCenter Server or from the guest VM using the VM credentials.
- For Windows VMs, the operating system details are always fetched from the vCenter Server. - For Hyper-V VMs, the operating system data is gathered from the Hyper-V host. - For physical servers, it is fetched from the server.
+## Common web apps discovery errors
+
+Azure Migrate provides the option to assess discovered ASP.NET web apps for migration to Azure App Service, using the Azure Migrate: Discovery and assessment tool. Refer to the [assessment](tutorial-assess-webapps.md) tutorial to get started.
+
+Typical Azure App Service assessment errors are summarized in the table.
+
+| **Error** | **Cause** | **Recommended action** |
+|--|--|--|
+|**Application pool check.**|The IIS site is using the following application pools: {0}.|Azure App Service does not support more than one application pool configuration per App Service Application. Move the workloads to a single application pool and remove additional application pools.|
+|**Application pool identity check.**|The site's application pool is running as an unsupported user identity type: {0}|Azure App Service does not support using the LocalSystem or SpecificUser application pool identity types. Set the Application pool to run as ApplicationPoolIdentity.|
+|**Authorization check.**|The following unsupported authentication types were found: {0}|Azure App Service supported authentication types and configuration are different from on-premises IIS. Disable the unsupported authentication types on the site. After the migration is complete, it will be possible to configure the site using one of the Azure App Service supported authentication types.|
+|**Authorization check unknown.**|Unable to determine enabled authentication types for all of the site configuration.|Unable to determine authentication types. Fix all configuration errors and confirm that all site content locations are accessible to Administrators group.|
+|**Configuration error check.**|The following configuration errors were found: {0}|Migration readiness cannot be determined without reading all applicable configuration. Fix all configuration errors, making sure configuration is valid and accessible.|
+|**Content size check.**|The site content appears to be greater than the maximum allowed of 2 GB for successful migration.|For successful migration, site content should be less than 2 GB. Evaluate whether the site could switch to non-file-system-based storage options for static content, such as Azure Storage.|
+|**Content size check unknown.**|File content size could not be determined, which usually indicates an access issue.|Content must be accessible in order to migrate the site. Confirm that site is not using UNC shares for content and that all site content locations are accessible to Administrators group.|
+|**Global module check.**|The following unsupported global modules were detected: {0}|Azure App Service supports limited global modules. Remove the unsupported module(s) from GlobalModules section along with all associated configuration.|
+|**Isapi filter check.**|The following unsupported ISAPI filters were detected: {0}|Automatic configuration of custom ISAPI filters is not supported. Remove the unsupported ISAPI filters.|
+|**Isapi filter check unknown.**|Unable to determine ISAPI filters present for all of the site configuration.|Automatic configuration of custom ISAPI filters is not supported. Fix all configuration errors and confirm that all site content locations are accessible to Administrators group.|
+|**Location tag check.**|The following location paths were found in the applicationHost.config file: {0}|Migration method does not support moving location path configuration in applicationHost.config. Move the location path configuration to either the site's root web.config file, or to a web.config file associated with the specific application to which they apply.|
+|**Protocol check.**|Bindings were found using the following unsupported protocols: {0}|Azure App Service supports only the HTTP and HTTPS protocols. Remove the bindings with protocols that are not HTTP or HTTPS.|
+|**Virtual directory check.**|The following virtual directories are hosted on UNC shares: {0}|Migration does not support migrating site content hosted on UNC shares. Move content to a local file path or consider changing to a non-file system based storage option, such as Azure Storage. If using shared configuration, disable shared configuration for the server before modifying content paths.|
+|**Https binding check.**|The application uses HTTPS.|Additional manual steps are required for HTTPS configuration in App Service. Post migration additional steps will be required to associate certificates with the Azure App Service site.|
+|**TCP port check.**|Bindings were found on the following unsupported ports: {0}|Azure App Service supports only ports 80 and 443. Clients making requests to the site should update the port in their requests to use 80 or 443.|
+|**Framework check.**|The following non-.NET frameworks or unsupported .NET framework versions were detected as possibly in use by this site: {0}|Migration does not validate the framework for non-.NET sites. App Service supports multiple frameworks, however these have different migration options. Confirm that the non-.NET frameworks are not being used by the site, or consider using an alternate migration option.|
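Some of these checks can be pre-verified on the web server itself. For example, a minimal sketch (assuming it runs on the IIS server with the built-in WebAdministration module) that flags bindings the protocol and TCP port checks would report:

```powershell
# Flag site bindings that the App Service checks would report: protocols other
# than http/https, and ports other than 80/443.
Import-Module WebAdministration

foreach ($site in Get-Website) {
    foreach ($binding in $site.bindings.Collection) {
        # bindingInformation has the form "<ip>:<port>:<hostname>" for http/https
        $protocol = $binding.protocol
        $port     = ($binding.bindingInformation -split ':')[1]
        if ($protocol -notin @('http', 'https')) {
            Write-Output "$($site.name): unsupported protocol '$protocol'"
        }
        elseif ($port -notin @('80', '443')) {
            Write-Output "$($site.name): unsupported port $port"
        }
    }
}
```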
+ ## Next steps

[Create](how-to-create-assessment.md) or [customize](how-to-modify-assessment.md) an assessment.
migrate Troubleshoot Discovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-discovery.md
Title: Troubleshoot the ongoing server discovery, software inventory and SQL discovery
-description: Get help with server discovery,software inventory and SQL discovery
+description: Get help with server discovery, software inventory and SQL and web apps discovery
ms.
Last updated 07/01/2020
-# Troubleshoot the ongoing server discovery, software inventory and SQL discovery
+# Troubleshoot the ongoing server discovery, software inventory, SQL, and web app discovery
This article helps you troubleshoot issues with ongoing server discovery, software inventory, and discovery of SQL Server instances, databases, and web apps.
Ensure the user downloading the inventory from the portal has Contributor privil
Azure Migrate supports software inventory, using Azure Migrate: Discovery and assessment. Software inventory is currently supported for VMware only. [Learn more](how-to-discover-applications.md) about the requirements for software inventory.
-The list of software inventory errors are summarized in the table below.
+The list of software inventory errors is summarized in the table below.
> [!Note] > The same errors can also be encountered with agentless dependency analysis, because it follows the same methodology as software inventory to collect the required data.
The list of software inventory errors are summarized in the table below.
| **9000:** VMware tools status on the server cannot be detected | VMware tools might not be installed on the server or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.1 is installed and running on the server. | | **9001:** VMware tools not installed on the server. | VMware tools might not be installed on the server or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.1 is installed and running on the server. | | **9002:** VMware tools not running on the server. | VMware tools might not be installed on the server or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.0 is installed and running on the server. |
-| **9003:** Operation system type running on the server is not supported. | Operating system running on the server is neither Windows nor Linux. | Only Windows and Linux OS types are supported. If the server is indeed running Windows or Linux OS, check the operating system type specified in vCenter Server. |
+| **9003:** Operating system type running on the server is not supported. | Operating system running on the server is neither Windows nor Linux. | Only Windows and Linux OS types are supported. If the server is indeed running Windows or Linux OS, check the operating system type specified in vCenter Server. |
| **9004:** Server is not in a running state. | Server is in a powered-off state. | Ensure that the server is in a running state. |
-| **9005:** Operation system type running on the server is not supported. | Operating system running on the server is neither Windows nor Linux. | Only Windows and Linux OS types are supported. \<FetchedParameter> operating system is not supported currently. |
+| **9005:** Operating system type running on the server is not supported. | Operating system running on the server is neither Windows nor Linux. | Only Windows and Linux OS types are supported. \<FetchedParameter> operating system is not supported currently. |
| **9006:** The URL needed to download the discovery metadata file from the server is empty. | This could be a transient issue due to the discovery agent on the appliance not working as expected. | The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. | | **9007:** The process that runs the script to collect the metadata is not found in the server. | This could be a transient issue due to the discovery agent on the appliance not working as expected. | The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. | | **9008:** The status of the process running on the server to collect the metadata cannot be retrieved. | This could be a transient issue due to an internal error. | The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. |
The list of software inventory errors are summarized in the table below.
| **9012:** The file containing the discovered metadata on the server is empty. | This could be a transient issue due to an internal error. | The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. | | **9013:** A new temporary user profile is getting created on logging in the server each time. | A new temporary user profile is getting created on logging in the server each time. | Please submit a Microsoft support case to help troubleshoot this issue. | | **9014:** Unable to retrieve the file containing the discovered metadata due to an error encountered on the ESXi host. Error code: %ErrorCode; Details: %ErrorMessage | Encountered an error on the ESXi host \<HostName>. Error code: %ErrorCode; Details: %ErrorMessage | Ensure that port 443 is open on the ESXi host on which the server is running.<br/><br/> [Learn more](troubleshoot-discovery.md#error-9014-httpgetrequesttoretrievefilefailed) on how to remediate the issue.|
-| **9015:** The vCenter Server user account provided for server discovery does not have Guest operations privileges enabled. | The required privileges of Guest Operations has not been enabled on the vCenter Server user account. | Ensure that the vCenter Server user account has privileges enabled for Virtual Machines > Guest Operations, in order to interact with the server and pull the required data. <br/><br/> [Learn more](tutorial-discover-vmware.md#prepare-vmware) on how to set up the vCenter Server account with required privileges. |
-| **9016:** Unable to discover the metadata as the guest operations agent on the server is outdated. | Either the VMware tools is not installed on the server or the installed version is not up-to-date. | Ensure that the VMware tools is installed and running up-to-date on the server. The VMware Tools version must be version 10.2.1 or later. |
+| **9015:** The vCenter Server user account provided for server discovery does not have Guest operations privileges enabled. | The required privileges of Guest Operations have not been enabled on the vCenter Server user account. | Ensure that the vCenter Server user account has privileges enabled for Virtual Machines > Guest Operations, in order to interact with the server and pull the required data. <br/><br/> [Learn more](tutorial-discover-vmware.md#prepare-vmware) on how to set up the vCenter Server account with required privileges. |
+| **9016:** Unable to discover the metadata as the guest operations agent on the server is outdated. | Either VMware Tools is not installed on the server or the installed version is not up to date. | Ensure that VMware Tools is installed, up to date, and running on the server. The VMware Tools version must be version 10.2.1 or later. |
| **9017:** The file containing the discovered metadata cannot be found on the server. | This could be a transient issue due to an internal error. | Please submit a Microsoft support case to help troubleshoot this issue. | | **9018:** PowerShell is not installed on the server. | PowerShell cannot be found on the server. | Ensure that PowerShell version 2.0 or later is installed on the server. <br/><br/> [Learn more](troubleshoot-discovery.md#error-9018-powershellnotfound) on how to remediate the issue.|
-| **9019:** Unable to discover the metadata due to guest operation failures on the server. | VMware guest operations failed on the server.The issue was encountered when trying the following credentials on the server: <FriendlyNameOfCredentials>. | Ensure that the server credentials provided on the appliance are valid and username provided in the credentials is in UPN format. (find the friendly name of the credentials tried by Azure Migrate in the possible causes) |
+| **9019:** Unable to discover the metadata due to guest operation failures on the server. | VMware guest operations failed on the server. The issue was encountered when trying the following credentials on the server: <FriendlyNameOfCredentials>. | Ensure that the server credentials provided on the appliance are valid and username provided in the credentials is in UPN format. (find the friendly name of the credentials tried by Azure Migrate in the possible causes) |
| **9020:** Unable to create the file required to contain the discovered metadata on the server. | The role associated to the credentials provided on the appliance or a group policy on-premises is restricting the creation of file in the required folder. The issue was encountered when trying the following credentials on the server: <FriendlyNameOfCredentials>. | 1. Check if the credentials provided on the appliance has create file permission on the folder \<folder path/folder name> in the server. <br/>2. If the credentials provided on the appliance do not have the required permissions, either provide another set of credentials or edit an existing one. (find the friendly name of the credentials tried by Azure Migrate in the possible causes) | | **9021:** Unable to create the file required to contain the discovered metadata at right path on the server. | VMware tools is reporting an incorrect file path to create the file. | Ensure that VMware tools later than version 10.2.0 is installed and running on the server. | | **9022:** The access is denied to run the Get-WmiObject cmdlet on the server. | The role associated to the credentials provided on the appliance or a group policy on-premises is restricting access to WMI object. The issue was encountered when trying the following credentials on the server: \<FriendlyNameOfCredentials>. | 1. Check if the credentials provided on the appliance has create file Administrator privileges and has WMI enabled. <br/> 2. If the credentials provided on the appliance do not have the required permissions, either provide another set of credentials or edit an existing one. (find the friendly name of the credentials tried by Azure Migrate in the possible causes).<br/><br/> [Learn more](troubleshoot-discovery.md#error-9022-getwmiobjectaccessdenied) on how to remediate the issue.|
The list of software inventory errors are summarized in the table below.
| **9037:** The metadata collection is temporarily paused due to high response time from the server. | The server is taking too long to respond. | The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. | | **10000:** Operation system type running on the server is not supported. | Operating system running on the server is neither Windows nor Linux. | Only Windows and Linux OS types are supported. \<GuestOSName> operating system is not supported currently. | | **10001:** The script required to gather discovery metadata is not found on the server. | The script required to perform discovery may have been deleted or removed from the expected location. | Please submit a Microsoft support case to help troubleshoot this issue. |
-| **10002:** The discovery operations timed out on the server. | This could be a transient issue due to the discovery agent on the appliance not working as expected. | The issue should automatically resolve in the next cycle within 24 hours. If it isnt resolved, follow the steps [here](troubleshoot-discovery.md#error-10002-scriptexecutiontimedoutonvm) to remediate the issue. If the issue still persists, open a Microsoft support case.|
+| **10002:** The discovery operations timed out on the server. | This could be a transient issue due to the discovery agent on the appliance not working as expected. | The issue should automatically resolve in the next cycle within 24 hours. If it isn't resolved, follow the steps [here](troubleshoot-discovery.md#error-10002-scriptexecutiontimedoutonvm) to remediate the issue. If the issue still persists, open a Microsoft support case.|
| **10003:** The process executing the discovery operations exited with an error. | The process executing the discovery operations exited abruptly due to an error.| The issue should automatically resolve in the next cycle within 24 hours. If the issue persists, submit a Microsoft support case. | | **10004:** Credentials not provided on the appliance for the server OS type. | The credentials for the server OS type were not added on the appliance. | 1. Ensure that you add the credentials for the OS type of the impacted server on the appliance.<br/> 2. You can now add multiple server credentials on the appliance. | | **10005:** Credentials provided on the appliance for the server are invalid. | The credentials provided on the appliance are not valid. The issue was encountered when trying the following credentials on the server: \<FriendlyNameOfCredentials>. | 1. Ensure that the credentials provided on the appliance are valid and the server is accessible using the credentials.<br/> 2. You can now add multiple server credentials on the appliance.<br/> 3. Go back to the appliance configuration manager to either provide another set of credentials or edit an existing one. (find the friendly name of the credentials tried by Azure Migrate in the possible causes). <br/><br/> [Learn more](troubleshoot-discovery.md#error-10005-guestcredentialnotvalid) on how to remediate the issue.|
The list of software inventory errors are summarized in the table below.
| **10008:** Unable to create the file required to contain the discovered metadata on the server. | The role associated to the credentials provided on the appliance or a group policy on-premises is restricting the creation of file in the required folder. The issue was encountered when trying the following credentials on the server: <FriendlyNameOfCredentials>. | 1. Check if the credentials provided on the appliance has create file permission on the folder \<folder path/folder name> in the server.<br/> 2. If the credentials provided on the appliance do not have the required permissions, either provide another set of credentials or edit an existing one. (find the friendly name of the credentials tried by Azure Migrate in the possible causes) | | **10009:** Unable to write the discovered metadata in the file on the server. | The role associated to the credentials provided on the appliance or a group policy on-premises is restricting writing in the file on the server. The issue was encountered when trying the following credentials on the server: \<FriendlyNameOfCredentials>. | 1. Check if the credentials provided on the appliance has write file permission on the folder <folder path/folder name> in the server.<br/> 2. If the credentials provided on the appliance do not have the required permissions, either provide another set of credentials or edit an existing one. (find the friendly name of the credentials tried by Azure Migrate in the possible causes) | | **10010:** Unable to discover as the command- %CommandName; required to collect some metadata is missing on the server. | The package containing the command %CommandName; is not installed on the server. | Ensure that the package containing the command %CommandName; is installed on the server. |
-| **10011:** The credentials provided on the appliance were used to log in and log off for an interactive session. | The interactive log in and log off forces the registry keys to be unloaded in the profile of the account, being used.This condition makes the keys unavailable for future use. | Use the resolution methods documented [here](/sharepoint/troubleshoot/administration/800703fa-illegal-operation-error#resolutionus/sharepoint/troubleshoot/administration/800703fa-illegal-operation-error) |
-| **10012:** Credentials have not been provided on the appliance for the server. | Either no credentials have been provided for the server or you have provided domain credentials with incorrect domain name on the appliance.[Learn more](troubleshoot-discovery.md#error-10012-credentialnotprovided) about the cause of this error. | 1. Ensure that the credentials are provided on the appliance for the server and the server is accessible using the credentials. <br/> 2. You can now add multiple credentials on the appliance for servers.Go back to the appliance configuration manager to provide credentials for the server.|
+| **10011:** The credentials provided on the appliance were used to log in and log off for an interactive session. | The interactive log in and log-off forces the registry keys to be unloaded in the profile of the account being used. This condition makes the keys unavailable for future use. | Use the resolution methods documented [here](/sharepoint/troubleshoot/administration/800703fa-illegal-operation-error#resolutionus/sharepoint/troubleshoot/administration/800703fa-illegal-operation-error) |
+| **10012:** Credentials have not been provided on the appliance for the server. | Either no credentials have been provided for the server, or you have provided domain credentials with an incorrect domain name on the appliance. [Learn more](troubleshoot-discovery.md#error-10012-credentialnotprovided) about the cause of this error. | 1. Ensure that the credentials are provided on the appliance for the server and the server is accessible using the credentials. <br/> 2. You can now add multiple credentials on the appliance for servers. Go back to the appliance configuration manager to provide credentials for the server.|
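Several of these errors (9000-9002, 9016, 9021) come back to the VMware Tools state on the guest. A minimal sketch to check it, assuming VMware PowerCLI is installed; the vCenter and VM names are hypothetical:

```powershell
# Check the VMware Tools status and version that vCenter reports for a VM.
Connect-VIServer -Server 'vcenter.contoso.local'

Get-VM -Name 'app-server-01' | Select-Object Name,
    @{ N = 'ToolsStatus';  E = { $_.ExtensionData.Guest.ToolsStatus } },
    @{ N = 'ToolsVersion'; E = { $_.ExtensionData.Guest.ToolsVersion } }
```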
## Error 9014: HTTPGetRequestToRetrieveFileFailed
After installing the required PowerShell version, you can verify if the error wa
### Remediation

Make sure that the user account provided in the appliance has access to the WMI namespace and subnamespaces. You can set the access by following these steps:
-1. Go to the server which is reporting this error.
+1. Go to the server that is reporting this error.
2. Search and select 'Run' from the Start menu. In the 'Run' dialog box, type wmimgmt.msc in the 'Open:' text field and press Enter.
-3. The wmimgmt console will open where you can find ΓÇ£WMI Control (Local)ΓÇ¥ in the left panel. Right click on it and select ΓÇÿPropertiesΓÇÖ from the menu.
+3. The wmimgmt console will open, where you can find "WMI Control (Local)" in the left panel. Right-click on it and select 'Properties' from the menu.
4. In the 'WMI Control (Local) Properties' dialog box, select the 'Security' tab.
-5. On the Securities tab, select 'Security' button which will open ΓÇÿSecurity for ROOTΓÇÖ dialog box.
+5. On the Security tab, select the 'Security' button, which will open the 'Security for ROOT' dialog box.
7. Select the 'Advanced' button to open the 'Advanced Security Settings for Root' dialog box.
-8. Select 'Add' button which opens the 'Permission Entry for Root' dialog box.
+8. Select the 'Add' button, which opens the 'Permission Entry for Root' dialog box.
9. Click 'Select a principal' to open the 'Select Users, Computers, Service Accounts or Groups' dialog box.
10. Select the user name(s) or group(s) you want to grant access to WMI, and click 'OK'.
11. Ensure you grant execute permissions, and select "This namespace and subnamespaces" in the 'Applies to:' drop-down.
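To spot-check the result from another machine, a minimal sketch (the server name is hypothetical; use the account provided on the appliance):

```powershell
# Verify that the account can run the Get-WmiObject query that the appliance
# uses (the call that fails with error 9022 when access is denied).
# 'app-server-01' is a hypothetical server name.
$cred = Get-Credential   # the account provided on the appliance
Get-WmiObject -Class Win32_OperatingSystem -ComputerName 'app-server-01' -Credential $cred |
    Select-Object Caption, Version
```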
After getting the required access, you can verify if the error was resolved by f
## Error 9032: InvalidRequest

### Cause
-There can be multiple reasons for this issue, one of the reason is when the username provided (server credentials) on the appliance configuration manager is having invalid XML characters, which causes error in parsing the SOAP request.
+There can be multiple reasons for this issue. One reason is that the username provided (server credentials) on the appliance configuration manager contains invalid XML characters, which cause an error in parsing the SOAP request.
### Remediation

- Make sure the username of the server credentials does not contain invalid XML characters and is in the username@domain.com format, popularly known as UPN format. A quick check is sketched below.
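A minimal sketch of such a check (the username value is hypothetical):

```powershell
# Check that a username is in UPN format and free of characters that are
# invalid in XML, which can break parsing of the SOAP request.
$username = 'user@contoso.com'

if ($username -notmatch '^[^@\s]+@[^@\s]+\.[^@\s]+$') {
    Write-Warning 'Username is not in UPN (username@domain.com) format.'
}
if ($username -match '[<>&"'']') {
    Write-Warning 'Username contains characters that are invalid in XML.'
}
```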
There can be multiple reasons for this issue, one of the reason is when the user
````
3. If the commands output the result in a few seconds, you can go to the Azure Migrate project > Discovery and assessment > Overview > Manage > Appliances, select the appliance name, and select **Refresh services** to restart the discovery service.
4. If the commands are timing out without giving any output, then:
-- You need to figure out which process are consuming high CPU or memory on the server.
-- You can try and provide more cores/memory to that server and execute the commands again.
+- You need to figure out which process is consuming high CPU or memory on the server.
+- You can try to provide more cores/memory to that server and execute the commands again.
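A minimal sketch to find those processes on the affected server:

```powershell
# Show the top consumers of CPU time and memory (working set) to identify
# what is slowing the discovery commands down.
Get-Process | Sort-Object CPU -Descending |
    Select-Object -First 5 Name, CPU, @{ N = 'WorkingSetMB'; E = { [math]::Round($_.WorkingSet64 / 1MB) } }

Get-Process | Sort-Object WorkingSet64 -Descending |
    Select-Object -First 5 Name, CPU, @{ N = 'WorkingSetMB'; E = { [math]::Round($_.WorkingSet64 / 1MB) } }
```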
## Error 10005: GuestCredentialNotValid ### Remediation - Ensure the validity of credentials _(friendly name provided in the error)_ by clicking on "Revalidate credentials" on the appliance config manager.-- Ensure that you are able to login into the impacted server using the same credential provided in the appliance.-- You can try using another user account (for the same domain, in case server is domain-joined) for that server instead of Administrator account .
+- Ensure that you are able to log in to the impacted server using the same credential provided in the appliance.
+- You can try using another user account (for the same domain, in case server is domain-joined) for that server instead of Administrator account.
- The issue can happen when Global Catalog <-> Domain Controller communication is broken. You can check this by creating a new user account in the domain controller and providing the same in the appliance. This might also require restarting the Domain controller. - After taking the remediation steps, you can verify if the error was resolved by following steps [here](troubleshoot-discovery.md#mitigation-verification-using-vmware-powercli). ## Error 10012: CredentialNotProvided ### Cause
-This error occurs when you have provided a domain credential with a wrong domain name on the appliance configuration manager. For example, if you have provided a domain credentials with username user@abc.com but provided a the domain name as def.com, those credentials will not be attempted if the server is connected to def.com and you will get this error message.
+This error occurs when you have provided a domain credential with a wrong domain name on the appliance configuration manager. For example, if you have provided a domain credential with username user@abc.com but provided the domain name as def.com, those credentials will not be attempted if the server is connected to def.com and you will get this error message.
### Remediation - Go to appliance configuration manager to add a server credential or edit an existing one as explained in the cause.
After you have initiated discovery on the appliance, it may take up to 24 hours
If you have not provided Windows authentication or SQL Server authentication credentials on the appliance configuration manager, then add the credentials so that the appliance can use them to connect to respective SQL Server instances.
-Once connected, appliance gathers configuration and performance data of SQL Server instances and databases. The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds. Hence any change to the properties of the SQL Server instance and databases such as database status, compatibility level etc. can take up to 24 hours to update on the portal.
+Once connected, the appliance gathers configuration and performance data of SQL Server instances and databases. The SQL Server configuration data is updated once every 24 hours, and the performance data is captured every 30 seconds. Hence, any change to the properties of the SQL Server instance and databases, such as database status and compatibility level, can take up to 24 hours to update on the portal.
## SQL Server instance is showing up in "Not connected" state on portal
Typical SQL discovery errors are summarized in the table.
|30004: Insufficient Permissions.|This error could occur due to the lack of permissions required to scan SQL Server instances. |Grant sysadmin role to the credentials/ account provided on the appliance for discovering SQL Server instances and databases. [Learn more](/sql/t-sql/statements/grant-server-permissions-transact-sql)| [View](/sql/t-sql/statements/grant-server-permissions-transact-sql) | |30005: SQL Server login failed to connect because of a problem with its default master database.|Either the database itself is invalid or the login lacks CONNECT permission on the database.|Use ALTER LOGIN to set the default database to master database.<br/>Grant sysadmin role to the credentials/ account provided on the appliance for discovering SQL Server instances and databases. [Learn more](/sql/relational-databases/errors-events/mssqlserver-4064-database-engine-error)| [View](/sql/relational-databases/errors-events/mssqlserver-4064-database-engine-error) | |30006: SQL Server login cannot be used with Windows Authentication.|1. The login may be a SQL Server login but the server only accepts Windows Authentication.<br/>2. You are trying to connect using SQL Server Authentication but the login used does not exist on SQL Server.<br/>3. The login may use Windows Authentication but the login is an unrecognized Windows principal. An unrecognized Windows principal means that the login cannot be verified by Windows. This could be because the Windows login is from an untrusted domain.|If you are trying to connect using SQL Server Authentication, verify that SQL Server is configured in Mixed Authentication Mode and SQL Server login exists.<br/>If you are trying to connect using Windows Authentication, verify that you are properly logged into the correct domain. [Learn more](/sql/relational-databases/errors-events/mssqlserver-18452-database-engine-error)| [View](/sql/relational-databases/errors-events/mssqlserver-18452-database-engine-error) |
-|30007: Password expired.|The password of the account has expired.|The SQL Server login password may have expired, re-set the password and/ or extend the password expiration date. [Learn more](/sql/relational-databases/native-client/features/changing-passwords-programmatically)| [View](/sql/relational-databases/native-client/features/changing-passwords-programmatically) |
+|30007: Password expired.|The password of the account has expired.|The SQL Server login password may have expired; reset the password and/or extend the password expiration date. [Learn more](/sql/relational-databases/native-client/features/changing-passwords-programmatically)| [View](/sql/relational-databases/native-client/features/changing-passwords-programmatically) |
|30008: Password must be changed.|The password of the account must be changed.|Change the password of the credential provided for SQL Server discovery. [Learn more](/previous-versions/sql/sql-server-2008-r2/cc645934(v=sql.105))| [View](/previous-versions/sql/sql-server-2008-r2/cc645934(v=sql.105)) | |30009: An internal error occurred.|Internal error occurred while discovering SQL Server instances and databases. |Please contact Microsoft support if the issue persists.| - | |30010: No databases found.|Unable to find any databases from the selected server instance.|Grant sysadmin role to the credentials/ account provided on the appliance for discovering SQL databases.| - | |30011: An internal error occurred while assessing a SQL instance or database.|Internal error occurred while performing assessment.|Please contact Microsoft support if the issue persists.| - |
-|30012: SQL connection failed.|1. The firewall on the server has refused the connection.<br/>2. The SQL Server Browser service (sqlbrowser) is not started.<br/>3. SQL Server did not respond to the client request because the server is probably not started.<br/>4. The SQL Server client cannot connect to the server. This error could occur because the server is not configured to accept remote connections.<br/>5. The SQL Server client cannot connect to the server. The error could occur because either the client cannot resolve the name of the server or the name of the server is incorrect.<br/>6. The TCP, or named pipe protocols are not enabled.<br/>7. Specified SQL Server instance name is not valid.|Please use [this](https://go.microsoft.com/fwlink/?linkid=2153317) interactive user guide to troubleshoot the connectivity issue. Please wait for 24 hours after following the guide for the data to update in the service. If the issue still persists please contact Microsoft support.| [View](https://go.microsoft.com/fwlink/?linkid=2153317) |
+|30012: SQL connection failed.|1. The firewall on the server has refused the connection.<br/>2. The SQL Server Browser service (sqlbrowser) is not started.<br/>3. SQL Server did not respond to the client request because the server is probably not started.<br/>4. The SQL Server client cannot connect to the server. This error could occur because the server is not configured to accept remote connections.<br/>5. The SQL Server client cannot connect to the server. The error could occur because either the client cannot resolve the name of the server or the name of the server is incorrect.<br/>6. The TCP, or named pipe protocols are not enabled.<br/>7. Specified SQL Server instance name is not valid.|Please use [this](https://go.microsoft.com/fwlink/?linkid=2153317) interactive user guide to troubleshoot the connectivity issue. Wait for 24 hours after following the guide for the data to update in the service. If the issue still persists, contact Microsoft support.| [View](https://go.microsoft.com/fwlink/?linkid=2153317) |
|30013: An error occurred while establishing a connection to the SQL server instance.|1. SQL Server's name cannot be resolved from the appliance.<br/>2. SQL Server does not allow remote connections.|If you can ping the SQL Server from the appliance, please wait 24 hours to check if this issue auto-resolves. If it doesn't, please contact Microsoft support. [Learn more](/sql/relational-databases/errors-events/mssqlserver-53-database-engine-error)| [View](/sql/relational-databases/errors-events/mssqlserver-53-database-engine-error) | |30014: Username or password is invalid.| This error could occur because of an authentication failure that involves a bad password or username.|Please provide a credential with a valid Username and Password. [Learn more](/sql/relational-databases/errors-events/mssqlserver-18456-database-engine-error)| [View](/sql/relational-databases/errors-events/mssqlserver-18456-database-engine-error) | |30015: An internal error occurred while discovering the SQL instance.|An internal error occurred while discovering the SQL instance.|Please contact Microsoft support if the issue persists.| - |
Typical SQL discovery errors are summarized in the table.
|30018: Internal error occurred.|An internal error occurred while collecting data such as Temp DB size, File size etc. of the SQL instance.|Please wait for 24 hours and contact Microsoft support if the issue persists.| - | |30019: An internal error occurred.|An internal error occurred while collecting performance metrics such as memory utilization, etc. of a database or an instance.|Please wait for 24 hours and contact Microsoft support if the issue persists.| - |
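For the connectivity-related errors (30012 and 30013), a minimal sketch you can run from the appliance (the SQL Server name is hypothetical; `Test-NetConnection` checks TCP only, so the SQL Browser's UDP port 1434 can't be probed this way):

```powershell
# Confirm that the SQL Server name resolves from the appliance and that the
# default SQL Server TCP port is reachable.
Test-NetConnection -ComputerName 'sqlserver01.contoso.local' -Port 1433
```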
+## Common web apps discovery errors
+
+Azure Migrate supports discovery of ASP.NET web apps running on on-premises machines, using Azure Migrate: Discovery and assessment. Web apps discovery is currently supported for VMware only. Refer to the [Discovery](tutorial-discover-vmware.md) tutorial to get started.
+
+Typical web apps discovery errors are summarized in the table.
+
+| **Error** | **Cause** | **Action** |
+|--|--|--|
+| **40001:** IIS Management console feature is not enabled. | IIS web app discovery uses the management API that is included with the local version of IIS to read the IIS configuration. This API is available when the IIS Management Console feature of IIS is enabled. Either this feature is not enabled or the OS is not a supported version for IIS web app discovery.| Ensure that the Web Server (IIS) role, including the IIS Management Console feature (part of Management Tools), is enabled, and that the server OS is Windows Server 2008 R2 or later.|
+| **40002:** Unable to connect to the server from appliance. | Connection to the server failed due to invalid login credentials or unavailable machine. | Ensure that the login credentials provided for the server are correct and that the server is online and accepting WS-Management PowerShell remote connections. |
+| **40003:** Unable to connect to server due to invalid credentials. | Connection to the server failed due to invalid login credentials. | Ensure that the login credentials provided for the server are correct, and that WS-Management PowerShell remoting is enabled. |
+| **40004:** Unable to access the IIS configuration. | No or insufficient permissions. | Confirm that the user credentials provided for the server are administrator level credentials, and that the user has permissions to access files under the IIS directory (%windir%\System32\inetsrv) and IIS server configuration directory (%windir%\System32\inetsrv\config). |
+| **40005:** Unable to complete IIS discovery. | Failed to complete discovery on the VM. This may be due to issues with accessing configuration on the server. The error was: '%detailedMessage;'. | Confirm that the user credentials provided for the server are administrator level credentials, and that the user has permissions to access files under the IIS directory (%windir%\System32\inetsrv) and IIS server configuration directory (%windir%\System32\inetsrv\config). Then contact support with the error details. |
+| **40006:** Uncategorized exception. | New error scenario. | Contact Microsoft support with error details and logs. Logs can be found on the appliance server under the path C:\ProgramData\Microsoft Azure\Logs. |
+| **40007:** No web apps found for the web server. | Web server does not have any hosted applications. | Check the web server configuration. |
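For error 40001 specifically, a minimal sketch to confirm the required IIS features on the web server (Windows Server; uses the ServerManager module):

```powershell
# Check whether the Web Server role and the IIS Management Console feature
# are installed; the management console is required for web app discovery.
Import-Module ServerManager
Get-WindowsFeature -Name Web-Server, Web-Mgmt-Console |
    Select-Object Name, InstallState

# If missing, it can be installed with:
# Install-WindowsFeature -Name Web-Mgmt-Console
```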
+ ## Next steps

Set up an appliance for [VMware](how-to-set-up-appliance-vmware.md), [Hyper-V](how-to-set-up-appliance-hyper-v.md), or [physical servers](how-to-set-up-appliance-physical.md).
migrate Tutorial Assess Webapps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-assess-webapps.md
+
+ Title: Tutorial to assess web apps for migration to Azure App Service
+description: Learn how to create an assessment for Azure App Service in Azure Migrate
+++ Last updated : 07/28/2021++++
+# Tutorial: Assess ASP.NET web apps for migration to Azure App Service
+
+As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity.
+This article shows you how to assess discovered ASP.NET web apps running on IIS web servers in preparation for migration to Azure App Service, using the Azure Migrate: Discovery and assessment tool.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Run an assessment based on web apps configuration data.
+> * Review an Azure App Service assessment
+
+> [!NOTE]
+> Tutorials show the quickest path for trying out a scenario, and use default options where possible.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
+- Before you follow this tutorial to assess your web apps for migration to Azure App Service, make sure you've discovered the web apps you want to assess by using the Azure Migrate appliance. To do so, [follow this tutorial](tutorial-discover-vmware.md).
+- If you want to try out this feature in an existing project, ensure that you've completed the [prerequisites](how-to-discover-sql-existing-project.md) described in that article.
+
+## Run an assessment
+
+Run an assessment as follows:
+
+1. On the **Overview** page > **Servers, databases and web apps**, click **Discover, assess and migrate**.
+ :::image type="content" source="./media/tutorial-assess-webapps/discover-assess-migrate.png" alt-text="Overview page for Azure Migrate":::
+2. On **Azure Migrate: Discovery and assessment**, click **Assess** and choose **Azure App Service** as the assessment type.
+ :::image type="content" source="./media/tutorial-assess-webapps/assess.png" alt-text="Dropdown to choose assessment type as Azure App Service":::
+3. In **Create assessment**, the assessment type is pre-selected as **Azure App Service**, and the discovery source defaults to **Servers discovered from Azure Migrate appliance**.
+4. Click **Edit** to review the assessment properties.
+ :::image type="content" source="./media/tutorial-assess-webapps/assess-webapps.png" alt-text="Edit button from where assessment properties can be customized":::
+5. Here's what's included in Azure App Service assessment properties:
+
+ **Property** | **Details**
+ |
+ **Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify.
+ **Isolation required** | Select **Yes** if you want your web apps to run in a private, dedicated environment in an Azure datacenter, using Dv2-series VMs with faster processors, SSD storage, and double the memory-to-core ratio compared to Standard plans.
+ **Reserved instances** | Specifies reserved instances so that cost estimations in the assessment take them into account.<br/><br/> If you select a reserved instance option, you can't specify "Discount (%)".
+ **Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer.
+ **Currency** | The billing currency for your account.
+ **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
+ **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation, and takes into account the discount applicable to the subscription. <br/><br/> Leave the reserved instances and Discount (%) properties at their default settings.
+
+ :::image type="content" source="./media/tutorial-assess-webapps/webapps-assessment-properties.png" alt-text="App Service assessment properties":::
+
+1. In **Create assessment**, click **Next**.
+1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
+1. In **Select or create a group** > select **Create New** and specify a group name.
+1. Select the appliance, and select the servers you want to add to the group. Then click **Next**.
+1. In **Review + create assessment**, review the assessment details, and click **Create assessment** to create the group and run the assessment.
+1. After the assessment is created, go to **Servers, databases and web apps** > the **Azure Migrate: Discovery and assessment** tile, and refresh the tile data by selecting the **Refresh** option at the top of the tile. Wait for the data to refresh.
+ :::image type="content" source="./media/tutorial-assess-webapps/tile-refresh.png" alt-text="Refresh discovery and assessment tool data":::
+1. Click the number next to the **Azure App Service** assessment.
+ :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-navigation.png" alt-text="Navigation to created assessment":::
+1. Click the name of the assessment that you want to view.
+
+## Review an assessment
+
+**To view an assessment**:
+
+1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, click the number next to the **Azure App Service** assessment.
+2. Click the name of the assessment that you want to view.
+3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment.
+
+#### Azure App Service readiness
+
+This shows the distribution of assessed web apps by readiness. You can drill down to understand the details of migration issues and warnings that you can remediate before migrating to Azure App Service. [Learn more](concepts-azure-webapps-assessment-calculation.md).
+You can also view the recommended App Service SKU and plan for migrating to Azure App Service.
+
+#### Azure App Service cost details
+
+An [App Service plan](/azure/app-service/overview-hosting-plans) carries a [charge](https://azure.microsoft.com/pricing/details/app-service/windows/) on the compute resources it uses.
+
+### Review readiness
+
+1. Click **Azure App Service readiness**.
+ :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-readiness.png" alt-text="Azure App Service readiness details":::
+1. Review the **Azure App Service readiness** column in the table for the assessed web apps:
+ 1. If there are no compatibility issues found, the readiness is marked as **Ready** for the target deployment type.
+ 1. If there are non-critical compatibility issues, such as degraded or unsupported features that do not block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** (hyperlinked) with **warning** details and recommended remediation guidance.
+ 1. If there are any compatibility issues that may block the migration to a specific target deployment type, the readiness is marked as **Not ready** with **issue** details and recommended remediation guidance.
+ 1. If the discovery is still in progress or there are any discovery issues for a web app, the readiness is marked as **Unknown** as the assessment could not compute the readiness for that web app.
+1. Review the recommended SKU for the web apps, which is determined according to the following matrix (a sketch after these steps illustrates the mapping):
+
+ **Isolation required** | **Reserved instance** | **App Service plan/ SKU**
+ | |
+ Yes | Yes | I1
+ Yes | No | I1
+ No | Yes | P1v3
+ No | No | P1v2
+
+ **Azure App Service readiness** | **App Service SKU determined** | **Cost estimates determined**
+ | |
+ Ready | Yes | Yes
+ Ready with conditions | Yes | Yes
+ Not ready | No | No
+ Unknown | No | No
+
+1. Click the App Service plan hyperlink in the table to see the plan details, such as compute resources and other web apps that are part of the same plan.
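+
+As an illustration of the matrix above, here's a hedged sketch; the function name and parameters are hypothetical and not part of any Azure Migrate API.
+
+```powershell
+# Hypothetical helper that mirrors the SKU matrix above; illustrative only.
+function Get-RecommendedAppServiceSku {
+    param(
+        [bool]$IsolationRequired,
+        [bool]$ReservedInstance
+    )
+    if ($IsolationRequired) { return 'I1' }   # isolation maps to I1, with or without reserved instances
+    if ($ReservedInstance)  { return 'P1v3' } # reserved instances without isolation
+    return 'P1v2'                             # default recommendation
+}
+
+# Example: no isolation, reserved instances -> P1v3
+Get-RecommendedAppServiceSku -IsolationRequired $false -ReservedInstance $true
+```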
+
+### Review cost estimates
+
+The assessment summary shows the estimated monthly costs for hosting your web apps in App Service. In App Service, you pay per App Service plan, not per web app. One or more apps can be configured to run on the same compute resources, that is, in the same App Service plan; every app you place in an App Service plan runs on the compute resources that the plan defines.
+To optimize cost, the Azure Migrate assessment allocates multiple web apps to each recommended App Service plan. The number of web apps allocated to each plan instance is shown in the following table; a sketch of the allocation arithmetic follows the table.
+
+**App Service plan** | **Web apps per App Service plan**
+ |
+I1 | 8
+P1v2 | 8
+P1v3 | 16
++
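+To see how plan-level billing shapes the estimate, here's a hedged sketch of the allocation arithmetic implied by the table above; the capacities come from the table, but the function itself is illustrative and not how the service computes costs internally.
+
+```powershell
+# Illustrative only: estimate how many plan instances a set of web apps needs,
+# using the per-plan web app capacities from the table above.
+$capacity = @{ 'I1' = 8; 'P1v2' = 8; 'P1v3' = 16 }
+
+function Get-PlanInstanceCount {
+    param([string]$Plan, [int]$WebAppCount)
+    [math]::Ceiling($WebAppCount / $capacity[$Plan])
+}
+
+# Example: 20 web apps on P1v3 plans -> 2 plan instances (16 apps + 4 apps).
+Get-PlanInstanceCount -Plan 'P1v3' -WebAppCount 20
+```
+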
+## Next steps
+
+[Learn more](concepts-azure-webapps-assessment-calculation.md) about how Azure App Service assessments are calculated.
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-vmware.md
ms. Previously updated : 03/25/2021 Last updated : 07/28/2021 #Customer intent: As a VMware admin, I want to discover my on-premises servers running in a VMware environment.
-# Tutorial: Discover servers running in a VMware environment with Azure Migrate: Discovery and assessment
+# Tutorial: Discover servers running in a VMware environment with Azure Migrate
As part of your migration journey to Azure, you discover your on-premises inventory and workloads.
-This tutorial shows you how to discover the servers that are running in your VMware environment by using the Azure Migrate: Discovery and assessment tool, a lightweight Azure Migrate appliance. You deploy the appliance as a server running in your vCenter Server instance, to continuously discover servers and their performance metadata, applications that are running on servers, server dependencies, and SQL Server instances and databases.
+This tutorial shows you how to discover the servers that are running in your VMware environment by using the Azure Migrate: Discovery and assessment tool, a lightweight Azure Migrate appliance. You deploy the appliance as a server running in your vCenter Server instance, to continuously discover servers and their performance metadata, applications that are running on servers, server dependencies, ASP.NET web apps, and SQL Server instances and databases.
In this tutorial, you learn how to:
Requirement | Details
| **vCenter Server/ESXi host** | You need a server running vCenter Server version 6.7, 6.5, 6.0, or 5.5.<br /><br /> Servers must be hosted on an ESXi host running version 5.5 or later.<br /><br /> On the vCenter Server, allow inbound connections on TCP port 443 so that the appliance can collect configuration and performance metadata.<br /><br /> The appliance connects to vCenter Server on port 443 by default. If the server running vCenter Server listens on a different port, you can modify the port when you provide the vCenter Server details in the appliance configuration manager.<br /><br /> On the ESXi hosts, make sure that inbound access is allowed on TCP port 443 for discovery of installed applications and for agentless dependency analysis on servers. **Azure Migrate appliance** | vCenter Server must have these resources to allocate to a server that hosts the Azure Migrate appliance:<br /><br /> - 32 GB of RAM, 8 vCPUs, and approximately 80 GB of disk storage.<br /><br /> - An external virtual switch and internet access on the appliance server, directly or via a proxy.
-**Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless).<br /><br /> For discovery of installed applications and for agentless dependency analysis, VMware Tools (version 10.2.1 or later) must be installed and running on servers. Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover SQL Server instances and databases, check [supported SQL Server and Windows OS versions and editions](migrate-support-matrix-vmware.md#sql-server-instance-and-database-discovery-requirements) and Windows authentication mechanisms.
+**Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless).<br /><br /> For discovery of installed applications and for agentless dependency analysis, VMware Tools (version 10.2.1 or later) must be installed and running on servers. Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover SQL Server instances and databases, check [supported SQL Server and Windows OS versions and editions](migrate-support-matrix-vmware.md#sql-server-instance-and-database-discovery-requirements) and Windows authentication mechanisms.<br /><br /> To discover ASP.NET web apps running on IIS web servers, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#aspnet-web-apps-discovery-requirements).
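+
+As a quick pre-check of the port requirements above, you can verify network line of sight from the appliance server to vCenter Server. A minimal sketch; `vcenter.contoso.com` is a hypothetical host name:
+
+```powershell
+# Verify that the appliance server can reach vCenter Server on TCP port 443.
+Test-NetConnection -ComputerName 'vcenter.contoso.com' -Port 443
+```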
## Prepare an Azure user account
In VMware vSphere Web Client, set up a read-only account to use for vCenter Serv
### Create an account to access servers
-Your user account on your servers must have the required permissions to initiate discovery of installed applications, agentless dependency analysis, and discovery of SQL Server instances and databases. You can provide the user account information in the appliance configuration manager. The appliance doesn't install agents on the servers.
+Your user account on your servers must have the required permissions to initiate discovery of installed applications, agentless dependency analysis, and discovery of web apps and SQL Server instances and databases. You can provide the user account information in the appliance configuration manager. The appliance doesn't install agents on the servers.
-* For Windows servers, create an account (local or domain) that has administrator permissions on the servers. To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role. Learn how to [assign the required role to the user account](/sql/relational-databases/security/authentication-access/server-level-roles).
+* For Windows servers and web app discovery, create an account (local or domain) that has administrator permissions on the servers. To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role; a sketch after the note below shows one way to grant it. Learn how to [assign the required role to the user account](/sql/relational-databases/security/authentication-access/server-level-roles).
* For Linux servers, create an account that has root privileges. Or, you can create an account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on the /bin/netstat and /bin/ls files.

> [!NOTE]
-> You can add multiple server credentials in the Azure Migrate appliance configuration manager to initiate discovery of installed applications, agentless dependency analysis, and discovery of SQL Server instances and databases. You can add multiple domain, Windows (non-domain), Linux (non-domain), or SQL Server authentication credentials. Learn how to [add server credentials](add-server-credentials.md).
+> You can add multiple server credentials in the Azure Migrate appliance configuration manager to initiate discovery of installed applications, agentless dependency analysis, and discovery of web apps and SQL Server instances and databases. You can add multiple domain, Windows (non-domain), Linux (non-domain), or SQL Server authentication credentials. Learn how to [add server credentials](add-server-credentials.md).
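+
+If you need to grant the sysadmin role to the discovery account yourself, one way is sketched below. It assumes the SqlServer PowerShell module is installed; `SQLSRV01` and `CONTOSO\migrate-svc` are hypothetical names.
+
+```powershell
+# Assumes the SqlServer module (Install-Module SqlServer).
+# Adds a hypothetical discovery account to the sysadmin server role.
+Invoke-Sqlcmd -ServerInstance 'SQLSRV01' -Query @"
+ALTER SERVER ROLE [sysadmin] ADD MEMBER [CONTOSO\migrate-svc];
+"@
+```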
## Set up a project
To set up a new project:
1. In the Azure portal, select **All services**, and then search for **Azure Migrate**. 1. Under **Services**, select **Azure Migrate**.
-1. In **Overview**, select one of the following options, depending on your migration goals: **Windows, Linux and SQL Server**, **SQL Server (only)**, or **Explore more scenarios**.
+1. In **Overview**, select one of the following options, depending on your migration goals: **Servers, databases and web apps**, **SQL Server (only)**, or **Explore more scenarios**.
1. Select **Create project**. 1. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one. 1. In **Project Details**, specify the project name and the geography where you want to create the project. Review [supported geographies for public clouds](migrate-support-matrix.md#supported-geographies-public-cloud) and [supported geographies for government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
To set up the appliance by using an OVA template, you'll complete these steps, w
#### Generate the project key
-1. In **Migration Goals**, select **Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment** > **Discover**.
+1. In **Migration Goals**, select **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** > **Discover**.
1. In **Discover servers**, select **Are your servers virtualized?** > **Yes, with VMware vSphere hypervisor**. 1. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you'll set up to discover servers in your VMware environment. The name should be alphanumeric and 14 characters or fewer. 1. To start creating the required Azure resources, select **Generate key**. Don't close the **Discover** pane while the resources are being created.
The appliance must connect to vCenter Server to discover the configuration and p
### Provide server credentials
-In **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis and discovery of SQL Server instances and databases**, you can provide multiple server credentials. If you don't want to use any of these appliance features, you can skip this step and proceed with vCenter Server discovery. You can change this option at any time.
+In **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis, discovery of SQL Server instances and databases and discovery of ASP.NET web apps in your VMware environment**, you can provide multiple server credentials. If you don't want to use any of these appliance features, you can skip this step and proceed with vCenter Server discovery. You can change this option at any time.
:::image type="content" source="./media/tutorial-discover-vmware/appliance-server-credentials-mapping.png" alt-text="Screenshot that shows providing credentials for software inventory, dependency analysis, and s q l server discovery.":::
To add server credentials:
Select **Save**. If you choose to use domain credentials, you must also enter the FQDN for the domain. The FQDN is required to validate the authenticity of the credentials with the Active Directory instance in that domain.
-1. Review the [required permissions](add-server-credentials.md#required-permissions) on the account for discovery of installed applications, agentless dependency analysis, and discovery of SQL Server instances and databases.
+1. Review the [required permissions](add-server-credentials.md#required-permissions) on the account for discovery of installed applications, agentless dependency analysis, and discovery of web apps and SQL Server instances and databases.
+1. To add multiple credentials at once, select **Add more** to save the current credentials, and then add more. When you select **Save** or **Add more**, the appliance validates the domain credentials with the domain's Active Directory instance for authentication. Validation occurs after each addition to avoid account lockouts as the appliance iterates through credentials to map them to the respective servers. A quick way to pre-check a credential is sketched below.
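+
+Before adding a credential, you can optionally sanity-check that it can reach a target server over WS-Management. A minimal sketch, assuming PowerShell remoting is enabled on the target; `server01` is a hypothetical server name:
+
+```powershell
+# Prompt for the credential you plan to add, then verify remote access to a server.
+$cred = Get-Credential
+Invoke-Command -ComputerName 'server01' -Credential $cred -ScriptBlock { whoami }
+```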
To start vCenter Server discovery, select **Start discovery**. After the discove
* [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours. * Appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight. * Discovery of