Updates from: 01/20/2021 04:05:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-amazon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-amazon.md
@@ -10,7 +10,7 @@ ms.service: active-directory
ms.workload: identity ms.topic: how-to ms.custom: project-no-code
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.author: mimart ms.subservice: B2C zone_pivot_groups: b2c-policy-type
@@ -43,7 +43,7 @@ To enable sign-in for users with an Amazon account in Azure Active Directory B2C
::: zone pivot="b2c-user-flow"
-## Configure an Amazon account as an identity provider
+## Configure Amazon as an identity provider
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant. 1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
@@ -54,6 +54,16 @@ To enable sign-in for users with an Amazon account in Azure Active Directory B2C
1. For the **Client secret**, enter the Client Secret that you recorded. 1. Select **Save**.
+## Add Amazon identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Click the user flow that you want to add the Amazon identity provider to.
+1. Under the **Social identity providers**, select **Amazon**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
@@ -73,9 +83,9 @@ You need to store the client secret that you previously recorded in your Azure A
9. For **Key usage**, select `Signature`. 10. Click **Create**.
-## Add a claims provider
+## Configure Amazon as an identity provider
-If you want users to sign in by using an Amazon account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using an Amazon account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define an Amazon account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
@@ -89,7 +99,7 @@ You can define an Amazon account as a claims provider by adding it to the **Clai
<Domain>amazon.com</Domain> <DisplayName>Amazon</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="Amazon-OAUTH">
+ <TechnicalProfile Id="Amazon-OAuth2">
<DisplayName>Amazon</DisplayName> <Protocol Name="OAuth2" /> <Metadata>
@@ -126,77 +136,27 @@ You can define an Amazon account as a claims provider by adding it to the **Clai
4. Set **client_id** to the application ID from the application registration. 5. Save the file.
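The **client_id** setting from step 4 lives in the technical profile's **Metadata** collection. A minimal sketch, assuming a placeholder application ID:

```xml
<Metadata>
  <!-- Other metadata items for the Amazon technical profile are omitted here. -->
  <!-- Replace the placeholder with the application ID from your Amazon application registration. -->
  <Item Key="client_id">YOUR_AMAZON_APPLICATION_ID</Item>
</Metadata>
```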
-### Upload the extension file for verification
-
-By now, you have configured your policy so that Azure AD B2C knows how to communicate with your Azure AD directory. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
-
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-2. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-3. Click **Upload**.
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-## Register the claims provider
-At this point, the identity provider has been set up, but it's not available in any of the sign-up/sign-in screens. To make it available, you create a duplicate of an existing template user journey, and then modify it so that it also has the Amazon identity provider.
-
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-3. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-5. Rename the ID of the user journey. For example, `SignUpSignInAmazon`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up/sign-in screen. If you add a **ClaimsProviderSelection** element for an Amazon account, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created.
-2. Under **ClaimsProviderSelects**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `AmazonExchange`:
-
- ```xml
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
<ClaimsProviderSelection TargetClaimsExchangeId="AmazonExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with an Amazon account to receive a token.
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-2. Add the following **ClaimsExchange** element making sure that you use the same value for the ID that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="AmazonExchange" TechnicalProfileReferenceId="Amazon-OAuth" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier. For example, `Amazon-OAuth`.
-
-3. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
-
-## Add Amazon identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to add the Amazon identity provider.
-1. Under the **Social identity providers**, select **Amazon**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
-## Update and test the relying party file
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AmazonExchange" TechnicalProfileReferenceId="Amazon-OAuth2" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
-Update the relying party (RP) file that initiates the user journey that you created.
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInAmazon.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInAmazon`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example,`http://contoso.com/B2C_1A_signup_signin_amazon`
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the new user journey that you created (SignUpSignAmazon).
-1. Save your changes, upload the file, and then select the new policy in the list.
-1. Make sure that Azure AD B2C application that you created is selected in the **Select application** field, and then test it by clicking **Run now**.
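For reference, a relying party file built from the steps above might look like the following sketch. The values shown (`SignUpSignInAmazon`, the contoso.com policy URI) are only the examples used in the steps, and other required attributes and elements are elided:

```xml
<TrustFrameworkPolicy PolicyId="SignUpSignInAmazon"
                      PublicPolicyUri="http://contoso.com/B2C_1A_signup_signin_amazon">
  <!-- Other attributes and the BasePolicy reference stay as they are in SignUpOrSignIn.xml. -->
  <RelyingParty>
    <!-- Point DefaultUserJourney at the duplicated user journey created earlier. -->
    <DefaultUserJourney ReferenceId="SignUpSignInAmazon" />
    <!-- The relying party TechnicalProfile is unchanged. -->
  </RelyingParty>
</TrustFrameworkPolicy>
```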
::: zone-end
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-b2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.author: mimart ms.subservice: B2C ms.custom: fasttrack-edit, project-no-code
@@ -103,6 +103,17 @@ To create an application.
1. Select **Save**.
+## Add Azure AD B2C identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Click the user flow that you want to add the Azure AD B2C identity provider to.
+1. Under the **Social identity providers**, select **Fabrikam**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**
+1. From the sign-up or sign-in page, select *Fabrikam* to sign in with the other Azure AD B2C tenant.
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
@@ -121,9 +132,9 @@ You need to store the application key that you created earlier in your Azure AD
1. For **Key usage**, select `Signature`. 1. Select **Create**.
-## Add a claims provider
+## Configure Azure AD B2C as an identity provider
-If you want users to sign in by using the other Azure AD B2C (Fabrikam), you need to define the other Azure AD B2C as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using an account from another Azure AD B2C tenant (Fabrikam), you need to define the other Azure AD B2C as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define Azure AD B2C as a claims provider by adding Azure AD B2C to the **ClaimsProvider** element in the extension file of your policy.
@@ -135,7 +146,7 @@ You can define Azure AD B2C as a claims provider by adding Azure AD B2C to the *
<Domain>fabrikam.com</Domain> <DisplayName>Federation with Fabrikam tenant</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="Fabrikam-OpenIdConnect">
+ <TechnicalProfile Id="AzureADB2CFabrikam-OpenIdConnect">
<DisplayName>Fabrikam</DisplayName> <Protocol Name="OpenIdConnect"/> <Metadata>
@@ -184,83 +195,27 @@ You can define Azure AD B2C as a claims provider by adding Azure AD B2C to the *
|CryptographicKeys| Update the value of **StorageReferenceId** to the name of the policy key that you created earlier. For example, `B2C_1A_FabrikamAppSecret`.|
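In the technical profile, that policy key is referenced from the **CryptographicKeys** collection. A minimal sketch, assuming the example key name from the table above:

```xml
<CryptographicKeys>
  <!-- B2C_1A_FabrikamAppSecret is the example policy key name; use the name you created. -->
  <Key Id="client_secret" StorageReferenceId="B2C_1A_FabrikamAppSecret" />
</CryptographicKeys>
```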
-### Upload the extension file for verification
-
-By now, you have configured your policy so that Azure AD B2C knows how to communicate with the other Azure AD B2C tenant. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-1. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-1. Click **Upload**.
-## Register the claims provider
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
+ <ClaimsProviderSelection TargetClaimsExchangeId="AzureADB2CFabrikamExchange" />
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
-At this point, the identity provider has been set up, but it's not yet available in any of the sign-up/sign-in pages. To make it available, create a duplicate of an existing template user journey, and then modify it so that it also has the Azure AD identity provider:
-
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-1. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-1. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-1. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-1. Rename the ID of the user journey. For example, `SignUpSignInFabrikam`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up/sign-in page. If you add a **ClaimsProviderSelection** element for Azure AD B2C, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created in *TrustFrameworkExtensions.xml*.
-1. Under **ClaimsProviderSelections**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `FabrikamExchange`:
-
- ```xml
- <ClaimsProviderSelection TargetClaimsExchangeId="FabrikamExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with the other Azure AD B2C to receive a token. Link the button to an action by linking the technical profile for the Azure AD B2C claims provider:
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-1. Add the following **ClaimsExchange** element making sure that you use the same value for **Id** that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="FabrikamExchange" TechnicalProfileReferenceId="Fabrikam-OpenIdConnect" />
- ```
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AzureADB2CFabrikamExchange" TechnicalProfileReferenceId="AzureADB2CFabrikam-OpenIdConnect" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
- Update the value of **TechnicalProfileReferenceId** to the **Id** of the technical profile you created earlier. For example, `Fabrikam-OpenIdConnect`.
-
-1. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
-
-## Add Azure AD B2C identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to add the Azure AD B2C identity provider.
-1. Under the **Social identity providers**, select **Fabrikam**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-1. From the sign-up or sign-in page, select *Fabrikam* to sign in with the other Azure AD B2C tenant.
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
--
-## Update and test the relying party file
-
-Update the relying party (RP) file that initiates the user journey that you created.
-
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInFabrikam.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInFabrikam`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example, `http://contoso.com/B2C_1A_signup_signin_fabrikam`.
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the user journey that you created earlier. For example, *SignUpSignInFabrikam*.
-1. Save your changes and upload the file.
-1. Under **Custom policies**, select the new policy in the list.
-1. In the **Select application** drop-down, select the Azure AD B2C application that you created earlier. For example, *testapp1*.
-1. Select **Run now**
-1. From the sign-up or sign-in page, select *Fabrikam* to sign in with the other Azure AD B2C tenant.
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
::: zone-end
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-multi-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -28,7 +28,7 @@ zone_pivot_groups: b2c-policy-type
::: zone pivot="b2c-custom-policy"
-This article shows you how to enable sign-in for users using the multi-tenant endpoint for Azure Active Directory (Azure AD). This allows users from multiple Azure AD tenants to sign in using Azure AD B2C, without you having to configure an identity provider for each tenant. However, guest members in any of these tenants **will not** be able to sign in. For that, you need to [individually configure each tenant](identity-provider-azure-ad-single-tenant.md).
+This article shows you how to enable sign-in for users using the multi-tenant endpoint for Azure Active Directory (Azure AD). Users from multiple Azure AD tenants can then sign in using Azure AD B2C, without you having to configure an identity provider for each tenant. However, guest members in any of these tenants **will not** be able to sign in. For that, you need to [individually configure each tenant](identity-provider-azure-ad-single-tenant.md).
## Prerequisites
@@ -59,7 +59,7 @@ To enable sign-in for users with an Azure AD account in Azure Active Directory B
## Configuring optional claims
-If you want to get the `family_name` and `given_name` claims from Azure AD, you can configure optional claims for your application in the Azure portal UI or application manifest. For more information, see [How to provide optional claims to your Azure AD app](../active-directory/develop/active-directory-optional-claims.md).
+If you want to get the `family_name`, and `given_name` claims from Azure AD, you can configure optional claims for your application in the Azure portal UI or application manifest. For more information, see [How to provide optional claims to your Azure AD app](../active-directory/develop/active-directory-optional-claims.md).
1. Sign in to the [Azure portal](https://portal.azure.com). Search for and select **Azure Active Directory**. 1. From the **Manage** section, select **App registrations**.
@@ -67,7 +67,7 @@ If you want to get the `family_name` and `given_name` claims from Azure AD, you
1. From the **Manage** section, select **Token configuration**. 1. Select **Add optional claim**. 1. For the **Token type**, select **ID**.
-1. Select the optional claims to add, `family_name` and `given_name`.
+1. Select the optional claims to add, `family_name`, and `given_name`.
1. Click **Add**. ## Create a policy key
@@ -84,9 +84,9 @@ You need to store the application key that you created in your Azure AD B2C tena
1. For **Key usage**, select `Signature`. 1. Select **Create**.
-## Add a claims provider
+## Configure Azure AD as an identity provider
-If you want users to sign in by using Azure AD, you need to define Azure AD as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using an Azure AD account, you need to define Azure AD as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define Azure AD as a claims provider by adding Azure AD to the **ClaimsProvider** element in the extension file of your policy.
@@ -99,7 +99,7 @@ You can define Azure AD as a claims provider by adding Azure AD to the **ClaimsP
<Domain>commonaad</Domain> <DisplayName>Common AAD</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="Common-AAD">
+ <TechnicalProfile Id="AADCommon-OpenIdConnect">
<DisplayName>Multi-Tenant AAD</DisplayName> <Description>Login with your Contoso account</Description> <Protocol Name="OpenIdConnect"/>
@@ -149,10 +149,7 @@ You can define Azure AD as a claims provider by adding Azure AD to the **ClaimsP
### Restrict access
-> [!NOTE]
-> Using `https://login.microsoftonline.com/` as the value for **ValidTokenIssuerPrefixes** allows all Azure AD users to sign in to your application.
-
-You need to update the list of valid token issuers and restrict access to a specific list of Azure AD tenant users who can sign in.
+Using `https://login.microsoftonline.com/` as the value for **ValidTokenIssuerPrefixes** allows all Azure AD users to sign in to your application. Update the list of valid token issuers and restrict access to a specific list of Azure AD tenant users who can sign in.
To obtain the values, look at the OpenID Connect discovery metadata for each of the Azure AD tenants that you would like to have users sign in from. The format of the metadata URL is similar to `https://login.microsoftonline.com/your-tenant/v2.0/.well-known/openid-configuration`, where `your-tenant` is your Azure AD tenant name. For example:
@@ -163,67 +160,29 @@ Perform these steps for each Azure AD tenant that should be used to sign in:
1. Open your browser and go to the OpenID Connect metadata URL for the tenant. Find the **issuer** object and record its value. It should look similar to `https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/`. 1. Copy and paste the value into the **ValidTokenIssuerPrefixes** key. Separate multiple issuers with a comma. An example with two issuers appears in the previous `ClaimsProvider` XML sample.
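As an illustration, the resulting metadata item with two issuers might look like the following sketch; the GUIDs are placeholders for your own tenant IDs:

```xml
<Metadata>
  <!-- Comma-separated issuer values copied from each tenant's OpenID Connect metadata. -->
  <Item Key="ValidTokenIssuerPrefixes">https://login.microsoftonline.com/11111111-1111-1111-1111-111111111111/,https://login.microsoftonline.com/22222222-2222-2222-2222-222222222222/</Item>
</Metadata>
```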
-### Upload the extension file for verification
-
-By now, you have configured your policy so that Azure AD B2C knows how to communicate with your Azure AD directories. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
-
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-2. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-3. Select **Upload**.
-
-## Register the claims provider
-
-At this point, the identity provider has been set up, but it's not available in any of the sign-up/sign-in screens. To make it available, you create a duplicate of an existing template user journey, and then modify it so that it also has the Azure AD identity provider.
-
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-3. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-5. Rename the ID of the user journey. For example, `SignUpSignInContoso`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up/sign-in screen. If you add a **ClaimsProviderSelection** element for Azure AD, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created in *TrustFrameworkExtensions.xml*.
-1. Under **ClaimsProviderSelects**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `AzureADExchange`:
-
- ```xml
- <ClaimsProviderSelection TargetClaimsExchangeId="AzureADExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with Azure AD to receive a token. Link the button to an action by linking the technical profile for your Azure AD claims provider.
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-2. Add the following **ClaimsExchange** element making sure that you use the same value for **Id** that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="AzureADExchange" TechnicalProfileReferenceId="Common-AAD" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to the **Id** of the technical profile you created earlier. For example, `Common-AAD`.
-
-3. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-## Update and test the relying party file
-Update the relying party (RP) file that initiates the user journey that you created:
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
+ <ClaimsProviderSelection TargetClaimsExchangeId="AzureADCommonExchange" />
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignContoso.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInContoso`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example, `http://contoso.com/B2C_1A_signup_signin_contoso`.
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the user journey that you created earlier. For example, *SignUpSignInContoso*.
-1. Save your changes and upload the file.
-1. From the uploaded **Custom policies**, select the newly created policy from the list.
-1. In the **Select application** drop-down, select the Azure AD B2C application that you created earlier. For example, *testapp1*.
-1. Copy the **Run now endpoint** and open it in a private browser window, for example, Incognito Mode in Google Chrome or an InPrivate window in Microsoft Edge. Opening in a private browser window allows you to test the full user journey by not using any currently cached Azure AD credentials.
-1. Select the Azure AD sign in button, for example, *Contoso Employee*, and then enter the credentials for a user in one of your Azure AD organizational tenants. You're asked to authorize the application, and then enter information for your profile.
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AzureADCommonExchange" TechnicalProfileReferenceId="AADCommon-OpenIdConnect" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
-If the sign in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
-To test the multi-tenant sign-in capability, perform the last two steps using the credentials for a user that exists another Azure AD tenant.
+To test the multi-tenant sign-in capability, perform the last two steps using the credentials for a user that exists in another Azure AD tenant. Copy the **Run now endpoint** and open it in a private browser window, for example, Incognito Mode in Google Chrome or an InPrivate window in Microsoft Edge. Opening in a private browser window allows you to test the full user journey by not using any currently cached Azure AD credentials.
## Next steps
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-single-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.author: mimart ms.subservice: B2C ms.custom: fasttrack-edit, project-no-code
@@ -99,6 +99,16 @@ If you want to get the `family_name` and `given_name` claims from Azure AD, you
1. Select **Save**.
+## Add Azure AD identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Click the user flow that you want to add the Azure AD identity provider to.
+1. Under the **Social identity providers**, select **Contoso Azure AD**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
@@ -117,9 +127,9 @@ You need to store the application key that you created in your Azure AD B2C tena
1. For **Key usage**, select `Signature`. 1. Select **Create**.
-## Add a claims provider
+## Configure Azure AD as an identity provider
-If you want users to sign in by using Azure AD, you need to define Azure AD as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using an Azure AD account, you need to define Azure AD as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define Azure AD as a claims provider by adding Azure AD to the **ClaimsProvider** element in the extension file of your policy.
@@ -131,7 +141,7 @@ You can define Azure AD as a claims provider by adding Azure AD to the **ClaimsP
<Domain>Contoso</Domain> <DisplayName>Login using Contoso</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="OIDC-Contoso">
+ <TechnicalProfile Id="AADContoso-OpenIdConnect">
<DisplayName>Contoso Employee</DisplayName> <Description>Login with your Contoso account</Description> <Protocol Name="OpenIdConnect"/>
@@ -175,7 +185,7 @@ You can define Azure AD as a claims provider by adding Azure AD to the **ClaimsP
To get a token from the Azure AD endpoint, you need to define the protocols that Azure AD B2C should use to communicate with Azure AD. This is done inside the **TechnicalProfile** element of **ClaimsProvider**.
-1. Update the ID of the **TechnicalProfile** element. This ID is used to refer to this technical profile from other parts of the policy, for example `OIDC-Contoso`.
+1. Update the ID of the **TechnicalProfile** element. This ID is used to refer to this technical profile from other parts of the policy, for example `AADContoso-OpenIdConnect`.
1. Update the value for **DisplayName**. This value will be displayed on the sign-in button on your sign-in screen. 1. Update the value for **Description**. 1. Azure AD uses the OpenID Connect protocol, so make sure that the value for **Protocol** is `OpenIdConnect`.
@@ -183,84 +193,28 @@ To get a token from the Azure AD endpoint, you need to define the protocols that
1. Set **client_id** to the application ID from the application registration. 1. Under **CryptographicKeys**, update the value of **StorageReferenceId** to the name of the policy key that you created earlier. For example, `B2C_1A_ContosoAppSecret`.
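Taken together, a technical profile edited per these steps might look like the following sketch; the metadata URL and client ID are placeholders, and other starter-pack items are elided:

```xml
<TechnicalProfile Id="AADContoso-OpenIdConnect">
  <DisplayName>Contoso Employee</DisplayName>
  <Description>Login with your Contoso account</Description>
  <Protocol Name="OpenIdConnect"/>
  <Metadata>
    <!-- Placeholder values: use your own tenant's OpenID Connect metadata URL and application ID. -->
    <Item Key="METADATA">https://login.microsoftonline.com/your-tenant.onmicrosoft.com/v2.0/.well-known/openid-configuration</Item>
    <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
    <!-- Other metadata items from the starter pack are unchanged. -->
  </Metadata>
  <CryptographicKeys>
    <Key Id="client_secret" StorageReferenceId="B2C_1A_ContosoAppSecret" />
  </CryptographicKeys>
  <!-- OutputClaims and the remaining elements are unchanged. -->
</TechnicalProfile>
```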
-### Upload the extension file for verification
-
-By now, you have configured your policy so that Azure AD B2C knows how to communicate with your Azure AD directory. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
-
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-1. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-1. Click **Upload**.
-
-## Register the claims provider
-
-At this point, the identity provider has been set up, but it's not yet available in any of the sign-up/sign-in pages. To make it available, create a duplicate of an existing template user journey, and then modify it so that it also has the Azure AD identity provider:
-
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-1. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-1. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-1. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-1. Rename the ID of the user journey. For example, `SignUpSignInContoso`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up/sign-in page. If you add a **ClaimsProviderSelection** element for Azure AD, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created in *TrustFrameworkExtensions.xml*.
-1. Under **ClaimsProviderSelections**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `ContosoExchange`:
-
- ```xml
- <ClaimsProviderSelection TargetClaimsExchangeId="ContosoExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with Azure AD to receive a token. Link the button to an action by linking the technical profile for your Azure AD claims provider:
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-1. Add the following **ClaimsExchange** element making sure that you use the same value for **Id** that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="ContosoExchange" TechnicalProfileReferenceId="OIDC-Contoso" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to the **Id** of the technical profile you created earlier. For example, `OIDC-Contoso`.
-
-1. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
-
-## Add Azure AD identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to add the Azure AD identity provider.
-1. Under the **Social identity providers**, select **Contoso Azure AD**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-## Update and test the relying party file
-Update the relying party (RP) file that initiates the user journey that you created.
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
+ <ClaimsProviderSelection TargetClaimsExchangeId="AzureADContosoExchange" />
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInContoso.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInContoso`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example, `http://contoso.com/B2C_1A_signup_signin_contoso`.
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the user journey that you created earlier. For example, *SignUpSignInContoso*.
-1. Save your changes and upload the file.
-1. Under **Custom policies**, select the new policy in the list.
-1. In the **Select application** drop-down, select the Azure AD B2C application that you created earlier. For example, *testapp1*.
-1. Copy the **Run now endpoint** and open it in a private browser window, for example, Incognito Mode in Google Chrome or an InPrivate window in Microsoft Edge. Opening in a private browser window allows you to test the full user journey by not using any currently cached Azure AD credentials.
-1. Select the Azure AD sign in button, for example, *Contoso Employee*, and then enter the credentials for a user in your Azure AD organizational tenant. You're asked to authorize the application, and then enter information for your profile.
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="AzureADContosoExchange" TechnicalProfileReferenceId="AADContoso-OpenIdConnect" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
-If the sign in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
## Next steps
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-facebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-facebook.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -56,7 +56,7 @@ To enable sign-in for users with a Facebook account in Azure Active Directory B2
::: zone pivot="b2c-user-flow"
-## Configure a Facebook account as an identity provider
+## Configure Facebook as an identity provider
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant. 1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
@@ -67,25 +67,6 @@ To enable sign-in for users with a Facebook account in Azure Active Directory B2
1. For the **Client secret**, enter the App Secret that you recorded. 1. Select **Save**.
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
-
-## Add Facebook as an identity provider
-
-1. In the `SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`** file, replace the value of `client_id` with the Facebook application ID:
-
- ```xml
- <TechnicalProfile Id="Facebook-OAUTH">
- <Metadata>
- <!--Replace the value of client_id in this technical profile with the Facebook app ID"-->
- <Item Key="client_id">00000000000000</Item>
- ```
-
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
- ## Add Facebook identity provider to a user flow 1. In your Azure AD B2C tenant, select **User flows**.
@@ -100,6 +81,17 @@ To enable sign-in for users with a Facebook account in Azure Active Directory B2
::: zone pivot="b2c-custom-policy"
+## Configure a Facebook account as an identity provider
+
+1. In the `SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`** file, replace the value of `client_id` with the Facebook application ID:
+
+ ```xml
+ <TechnicalProfile Id="Facebook-OAUTH">
+ <Metadata>
+ <!--Replace the value of client_id in this technical profile with the Facebook app ID"-->
+ <Item Key="client_id">00000000000000</Item>
+ ```
+ ## Upload and test the policy Update the relying party (RP) file that initiates the user journey that you created.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-github.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -45,7 +45,7 @@ To enable sign-in with a GitHub account in Azure Active Directory B2C (Azure AD
::: zone pivot="b2c-user-flow"
-## Configure a GitHub account as an identity provider
+## Configure GitHub as an identity provider
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant. 1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
@@ -56,6 +56,16 @@ To enable sign-in with a GitHub account in Azure Active Directory B2C (Azure AD
1. For the **Client secret**, enter the Client Secret that you recorded. 1. Select **Save**.
+## Add GitHub identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Click the user flow that you want to add the GitHub identity provider to.
+1. Under the **Social identity providers**, select **GitHub**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
@@ -75,9 +85,9 @@ You need to store the client secret that you previously recorded in your Azure A
1. For **Key usage**, select `Signature`. 1. Click **Create**.
-## Add a claims provider
+## Configure GitHub as an identity provider
-If you want users to sign in by using a GitHub account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using a GitHub account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define a GitHub account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
@@ -90,7 +100,7 @@ You can define a GitHub account as a claims provider by adding it to the **Claim
<Domain>github.com</Domain> <DisplayName>GitHub</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="GitHub-OAUTH2">
+ <TechnicalProfile Id="GitHub-OAuth2">
<DisplayName>GitHub</DisplayName> <Protocol Name="OAuth2" /> <Metadata>
@@ -163,79 +173,26 @@ The GitHub technical profile requires the **CreateIssuerUserId** claim transform
</BuildingBlocks> ```
-### Upload the extension file for verification
-
-By now, you have configured your policy so that Azure AD B2C knows how to communicate with your GitHub account. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
-
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-2. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-3. Click **Upload**.
-
-## Register the claims provider
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-At this point, the identity provider has been set up, but it's not available in any of the sign-up/sign-in screens. To make it available, you create a duplicate of an existing template user journey, and then modify it so that it also has the GitHub identity provider.
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-3. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-5. Rename the ID of the user journey. For example, `SignUpSignInGitHub`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up/sign-in screen. If you add a **ClaimsProviderSelection** element for a GitHub account, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created.
-2. Under **ClaimsProviderSelects**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `GitHubExchange`:
-
- ```xml
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
<ClaimsProviderSelection TargetClaimsExchangeId="GitHubExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with a GitHub account to receive a token.
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-2. Add the following **ClaimsExchange** element making sure that you use the same value for ID that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="GitHubExchange" TechnicalProfileReferenceId="GitHub-OAuth" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier. For example, `GitHub-OAuth`.
-
-3. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
-
-## Add GitHub identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to add the GitHub identity provider.
-1. Under the **Social identity providers**, select **GitHub**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
-
-## Update and test the relying party file
-
-Update the relying party (RP) file that initiates the user journey that you created.
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="GitHubExchange" TechnicalProfileReferenceId="GitHub-OAuth2" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInGitHub.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInGitHub`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example,`http://contoso.com/B2C_1A_signup_signin_github`
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the new user journey that you created (SignUpSignGitHub).
-1. Save your changes, upload the file.
-1. Under **Custom policies**, select **B2C_1A_signup_signin**.
-1. For **Select Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Select **Run now** and select GitHub to sign in with GitHub and test the custom policy.
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
::: zone-end
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-google https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-google.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -38,18 +38,18 @@ To enable sign-in for users with a Google account in Azure Active Directory B2C
1. Sign in to the [Google Developers Console](https://console.developers.google.com/) with your Google account credentials. 1. In the upper-left corner of the page, select the project list, and then select **New Project**. 1. Enter a **Project Name**, select **Create**.
-1. Make sure you are using the new project by selecting the project drop-down in the top-left of the screen, select your project by name, then select **Open**.
+1. Make sure you are using the new project by selecting the project drop-down in the top-left of the screen. Select your project by name, then select **Open**.
1. Select **OAuth consent screen** in the left menu, select **External**, and then select **Create**. Enter a **Name** for your application. Enter *b2clogin.com* in the **Authorized domains** section and select **Save**. 1. Select **Credentials** in the left menu, and then select **Create credentials** > **Oauth client ID**. 1. Under **Application type**, select **Web application**.
-1. Enter a **Name** for your application, enter `https://your-tenant-name.b2clogin.com` in **Authorized JavaScript origins**, and `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp` in **Authorized redirect URIs**. Replace `your-tenant-name` with the name of your tenant. You need to use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C.
+1. Enter a **Name** for your application, enter `https://your-tenant-name.b2clogin.com` in **Authorized JavaScript origins**, and `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp` in **Authorized redirect URIs**. Replace `your-tenant-name` with the name of your tenant. Use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C.
1. Click **Create**. 1. Copy the values of **Client ID** and **Client secret**. You will need both of them to configure Google as an identity provider in your tenant. **Client secret** is an important security credential. ::: zone pivot="b2c-user-flow"
-## Configure a Google account as an identity provider
+## Configure Google as an identity provider
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant. 1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
@@ -60,6 +60,16 @@ Enter a **Name** for your application. Enter *b2clogin.com* in the **Authorized
1. For the **Client secret**, enter the Client Secret that you recorded. 1. Select **Save**.
+## Add Google identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Click the user flow that you want to add the Google identity provider to.
+1. Under the **Social identity providers**, select **Google**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
@@ -79,9 +89,9 @@ You need to store the client secret that you previously recorded in your Azure A
9. For **Key usage**, select `Signature`. 10. Click **Create**.
-## Add a claims provider
+## Configure Google as an identity provider
-If you want users to sign in by using a Google account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using a Google account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define a Google account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
@@ -94,7 +104,7 @@ You can define a Google account as a claims provider by adding it to the **Claim
<Domain>google.com</Domain> <DisplayName>Google</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="Google-OAUTH">
+ <TechnicalProfile Id="Google-OAuth2">
<DisplayName>Google</DisplayName> <Protocol Name="OAuth2" /> <Metadata>
@@ -134,80 +144,27 @@ You can define a Google account as a claims provider by adding it to the **Claim
4. Set **client_id** to the application ID from the application registration. 5. Save the file.
-### Upload the extension file for verification
-
-By now, you have configured your policy so that Azure AD B2C knows how to communicate with your Google account. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
-
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-2. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-3. Click **Upload**.
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-## Register the claims provider
-At this point, the identity provider has been set up, but it's not available in any of the sign-up/sign-in screens. To make it available, you create a duplicate of an existing template user journey, and then modify it so that it also has the Google identity provider.
-
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-3. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-5. Rename the ID of the user journey. For example, `SignUpSignInGoogle`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up/sign-in screen. If you add a **ClaimsProviderSelection** element for a Google account, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created.
-2. Under **ClaimsProviderSelects**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `GoogleExchange`:
-
- ```xml
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
<ClaimsProviderSelection TargetClaimsExchangeId="GoogleExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with a Google account to receive a token.
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-2. Add the following **ClaimsExchange** element making sure that you use the same value for ID that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="GoogleExchange" TechnicalProfileReferenceId="Google-OAuth" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier. For example, `Google-OAuth`.
-
-3. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
-
-## Add Google identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to add the Google identity provider.
-1. Under the **Social identity providers**, select **Google**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
-
-## Update and test the relying party file
-
-Update the relying party (RP) file that initiates the user journey that you created.
-
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInGoogle.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInGoogle`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example,`http://contoso.com/B2C_1A_signup_signin_google`
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the new user journey that you created (SignUpSignGoogle).
-1. Save your changes, upload the file.
-1. Under **Custom policies**, select **B2C_1A_signup_signin**.
-1. For **Select Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Select **Run now** and select Google to sign in with Google and test the custom policy.
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="GoogleExchange" TechnicalProfileReferenceId="Google-OAuth2" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
+
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
::: zone-end
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-id-me https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-id-me.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.author: mimart ms.subservice: B2C zone_pivot_groups: b2c-policy-type
@@ -41,7 +41,7 @@ To enable sign-in for users with an ID.me account in Azure Active Directory B2C
1. Select **View My Applications**, and select **Continue**. 1. Select **Create new** 1. Enter a **Name**, and **Display Name**.
- 1. In **Redirect URI** enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant.
+ 1. In the **Redirect URI**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant.
1. Click **Continue**. 1. Copy the values of **Client ID** and **Client Secret**. You need both to add the identity provider to your tenant.
@@ -60,9 +60,9 @@ You need to store the client secret that you previously recorded in your Azure A
9. For **Key usage**, select `Signature`. 10. Click **Create**.
-## Add a claims provider
+## Configure ID.me as an identity provider
-If you want users to sign in by using a ID.me account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using an ID.me account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define an ID.me account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
@@ -75,7 +75,7 @@ You can define a ID.me account as a claims provider by adding it to the **Claims
<Domain>id.me</Domain> <DisplayName>ID.me</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="IdMe-OAUTH2">
+ <TechnicalProfile Id="IdMe-OAuth2">
<DisplayName>IdMe</DisplayName> <Protocol Name="OAuth2" /> <Metadata>
@@ -137,61 +137,27 @@ Next, you need a claims transformation to create the displayName claim. Add the
</ClaimsTransformations> ```
-### Upload the extension file for verification
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-By now, you have configured your policy so that Azure AD B2C knows how to communicate with your ID.me account. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-2. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-3. Click **Upload**.
-
-## Register the claims provider
-
-At this point, the identity provider has been set up, but it's not available in any of the sign-up/sign-in screens. To make it available, you create a duplicate of an existing template user journey, and then modify it so that it also has the ID.me identity provider.
-
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-3. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-5. Rename the ID of the user journey. For example, `SignUpSignInIdMe`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up/sign-in screen. If you add a **ClaimsProviderSelection** element for a ID.me account, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created.
-2. Under **ClaimsProviderSelects**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `IdMeExchange`:
-
- ```xml
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
<ClaimsProviderSelection TargetClaimsExchangeId="IdMeExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with a ID.me account to receive a token.
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-2. Add the following **ClaimsExchange** element making sure that you use the same value for ID that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="IdMeExchange" TechnicalProfileReferenceId="IdMe-OAUTH2" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier. For example, `IdMe-OAUTH2`.
-
-3. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-
-## Update and test the relying party file
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="IdMeExchange" TechnicalProfileReferenceId="IdMe-OAuth2" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
-Update the relying party (RP) file that initiates the user journey that you created.
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInIdMe.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignIdMe`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example,`http://contoso.com/B2C_1A_signup_signin_IdMe`
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the new user journey that you created (SignUpSignIdMe).
-1. Save your changes, upload the file.
-1. Under **Custom policies**, select **B2C_1A_signup_signin**.
-1. For **Select Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Select **Run now** and select ID.me to sign in with ID.me and test the custom policy.
::: zone-end\ No newline at end of file
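The ID.me hunk above also references a claims transformation that builds the **displayName** claim. A minimal sketch of such a transformation, assuming the standard `FormatStringMultipleClaims` method (the transformation ID and input claim names are illustrative):

```xml
<ClaimsTransformation Id="CreateDisplayNameFromFirstAndLastName" TransformationMethod="FormatStringMultipleClaims">
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="givenName" TransformationClaimType="inputClaim1" />
    <InputClaim ClaimTypeReferenceId="surname" TransformationClaimType="inputClaim2" />
  </InputClaims>
  <InputParameters>
    <!-- {0} is inputClaim1 (givenName), {1} is inputClaim2 (surname). -->
    <InputParameter Id="stringFormat" DataType="string" Value="{0} {1}" />
  </InputParameters>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="displayName" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```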
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-linkedin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-linkedin.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -46,7 +46,7 @@ To enable sign-in for users with a LinkedIn account in Azure Active Directory B2
::: zone pivot="b2c-user-flow"
-## Configure a LinkedIn account as an identity provider
+## Configure LinkedIn as an identity provider
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant. 1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
@@ -57,6 +57,16 @@ To enable sign-in for users with a LinkedIn account in Azure Active Directory B2
1. For the **Client secret**, enter the Client Secret that you recorded. 1. Select **Save**.
+## Add LinkedIn identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Click the user flow to which you want to add the LinkedIn identity provider.
+1. Under **Social identity providers**, select **LinkedIn**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**.
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
@@ -76,9 +86,9 @@ You need to store the client secret that you previously recorded in your Azure A
9. For **Key usage**, select `Signature`. 10. Click **Create**.
-## Add a claims provider
+## Configure LinkedIn as an identity provider
-If you want users to sign in using a LinkedIn account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using a LinkedIn account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
Define a LinkedIn account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
@@ -91,7 +101,7 @@ Define a LinkedIn account as a claims provider by adding it to the **ClaimsProvi
<Domain>linkedin.com</Domain> <DisplayName>LinkedIn</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="LinkedIn-OAUTH">
+ <TechnicalProfile Id="LinkedIn-OAuth2">
<DisplayName>LinkedIn</DisplayName> <Protocol Name="OAuth2" /> <Metadata>
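The LinkedIn profile differs mainly in its metadata. A rough sketch of the core items under the renamed `LinkedIn-OAuth2` profile; the endpoints and scopes shown are the commonly used LinkedIn v2 values and should be treated as assumptions to confirm against the article's full snippet:

```xml
<Metadata>
  <Item Key="ProviderName">linkedin</Item>
  <Item Key="authorization_endpoint">https://www.linkedin.com/oauth/v2/authorization</Item>
  <Item Key="AccessTokenEndpoint">https://www.linkedin.com/oauth/v2/accessToken</Item>
  <Item Key="ClaimsEndpoint">https://api.linkedin.com/v2/me</Item>
  <Item Key="scope">r_emailaddress r_liteprofile</Item>
  <Item Key="HttpBinding">POST</Item>
  <!-- Client ID from your LinkedIn application. -->
  <Item Key="client_id">Your LinkedIn application ID</Item>
</Metadata>
```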
@@ -177,79 +187,27 @@ Add the **BuildingBlocks** element near the top of the *TrustFrameworkExtensions
</BuildingBlocks> ```
-### Upload the extension file for verification
-
-You now have a policy configured so that Azure AD B2C knows how to communicate with your LinkedIn account. Try uploading the extension file of your policy to confirm that it doesn't have any issues so far.
-
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-1. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-1. Click **Upload**.
-
-## Register the claims provider
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-At this point, the identity provider has been set up, but it's not available in any of the sign-up or sign-in screens. To make it available, you create a duplicate of an existing template user journey, and then modify it so that it also has the LinkedIn identity provider.
-1. Open the *TrustFrameworkBase.xml* file in the starter pack.
-1. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-1. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-1. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-1. Rename the ID of the user journey. For example, `SignUpSignInLinkedIn`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up or sign-in screen. If you add a **ClaimsProviderSelection** element for a LinkedIn account, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created.
-2. Under **ClaimsProviderSelections**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `LinkedInExchange`:
-
- ```xml
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
<ClaimsProviderSelection TargetClaimsExchangeId="LinkedInExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with a LinkedIn account to receive a token.
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-1. Add the following **ClaimsExchange** element making sure that you use the same value for the ID that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="LinkedInExchange" TechnicalProfileReferenceId="LinkedIn-OAUTH" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier. For example, `LinkedIn-OAUTH`.
-
-1. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-
-::: zone-end
-
-:: zone pivot="b2c-user-flow"
-
-## Add LinkedIn identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to add the LinkedIn identity provider.
-1. Under the **Social identity providers**, select **LinkedIn**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
-
-## Update and test the relying party file
-
-Update the relying party (RP) file that initiates the user journey that you created.
-
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInLinkedIn.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInLinkedIn`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example,`http://contoso.com/B2C_1A_signup_signin_linkedin`
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the new user journey that you created (SignUpSignLinkedIn).
-1. Save your changes, upload the file, and then select the new policy in the list.
-1. Make sure that Azure AD B2C application that you created is selected in the **Select application** field, and then test it by clicking **Run now**.
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="LinkedInExchange" TechnicalProfileReferenceId="LinkedIn-OAuth2" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
## Migration from v1.0 to v2.0
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-local.md new file mode 100644
@@ -0,0 +1,149 @@
+---
+title: Set up Azure AD B2C local account identity provider
+titleSuffix: Azure AD B2C
+description: Define the identity types users can use to sign up or sign in (email, username, phone number) in your Azure Active Directory B2C tenant.
+services: active-directory-b2c
+author: msmimart
+manager: celestedg
+
+ms.service: active-directory
+ms.workload: identity
+ms.topic: how-to
+ms.date: 01/19/2021
+ms.author: mimart
+ms.subservice: B2C
+zone_pivot_groups: b2c-policy-type
+---
+# Set up the local account identity provider
+
+[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
+
+Azure AD B2C provides several ways in which users can be authenticated. Users can sign in to a local account by using a username and password, phone verification (also known as passwordless authentication), or social identity providers. Email sign-up is enabled by default in your local account identity provider settings.
+
+This article describes how users create accounts that are local to your Azure AD B2C tenant. For social or enterprise identities, where the identity of the user is managed by a federated identity provider such as Facebook or Google, see [Add an identity provider](add-identity-provider.md).
+
+## Email sign-in
+
+With the email option, users can sign up and sign in with their email address and password:
+
+- **Sign-in**: Users are prompted to provide their email address and password.
+- **Sign-up**: Users are prompted for an email address, which is verified at sign-up (verification is optional) and becomes their login ID. The user then enters any other information requested on the sign-up page, for example, Display Name, Given Name, and Surname, and selects Continue to create the account.
+- **Password reset**: Users must enter and verify their email address, after which they can reset the password.
+
+![Email sign-up or sign-in experience](./media/identity-provider-local/local-account-email-experience.png)
+
+## Username sign-in
+
+With the username option, users can sign up and sign in with a username and password:
+
+- **Sign-in**: Users are prompted to provide their username and password.
+- **Sign-up**: Users will be prompted for a username, which will become their login ID. Users will also be prompted for an email address, which will be verified at sign-up. The email address will be used during a password reset flow. The user enters any other information requested on the sign-up page, for example, Display Name, Given Name, and Surname. The user then selects Continue to create the account.
+- **Password reset**: Users must enter their username, and associated email address. The email address must be verified, after which, the user can reset the password.
+
+![Username sign-up or sign-in experience](./media/identity-provider-local/local-account-username-experience.png)
+
+## Phone sign-in (Preview)
+
+Passwordless authentication is a type of authentication where a user doesn't need to sign in with a password. With phone sign-up and sign-in, the user can sign up for the app using a phone number as their primary login identifier. The user has the following experience during sign-up and sign-in:
+
+- **Sign-in**: If the user has an existing account with phone number as their identifier, the user enters their phone number and selects *Sign in*. They confirm the country and phone number by selecting *Continue*, and a one-time verification code is sent to their phone. The user enters the verification code and selects *Continue* to sign in.
+- **Sign-up**: If the user doesn't already have an account for your application, they can create one by clicking on the *Sign up now* link.
+ 1. A sign-up page appears, where the user selects their *Country*, enters their phone number, and selects *Send Code*.
+ 1. A one-time verification code is sent to the user's phone number. The user enters the *Verification Code* on the sign-up page, and then selects *Verify Code*. (If the user can't retrieve the code, they can select *Send New Code*).
+ 1. The user enters any other information requested on the sign-up page, for example, Display Name, Given Name, and Surname, and then selects *Continue*.
+ 1. Next, the user is asked to provide a **recovery email**. The user enters their email address, and then selects *Send verification code*. A code is sent to the user's email inbox, which they can retrieve and enter in the Verification code box. Then the user selects Verify code.
+ 1. Once the code is verified, the user selects *Create* to create their account.
+
+![Phone sign-up or sign-in experience](./media/identity-provider-local/local-account-phone-experience.png)
+
+### Pricing
+
+One-time passwords are sent to your users by using SMS text messages. Depending on your mobile network operator, you may be charged for each message sent. For pricing information, see the **Separate Charges** section of [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/).
+
+> [!NOTE]
+> Multi-factor authentication (MFA) is disabled by default when you configure a user flow with phone sign-up. You can enable MFA in user flows with phone sign-up, but because a phone number is used as the primary identifier, email one-time passcode is the only option available for the second authentication factor.
+
+### Phone recovery
+
+When you enable phone sign-up and sign-in for your user flows, it's also a good idea to enable the recovery email feature. With this feature, a user can provide an email address that can be used to recover their account when they don't have their phone. This email address is used for account recovery only. It can't be used for signing in.
+
+- When the recovery email prompt is **On**, a user signing up for the first time is prompted to verify a backup email. A user who hasn't provided a recovery email before is asked to verify a backup email during their next sign-in.
+
+- When recovery email is **Off**, a user signing up or signing in isn't shown the recovery email prompt.
+
+The following screenshots demonstrate the phone recovery flow:
+
+![Phone recovery user flow](./media/identity-provider-local/local-account-change-phone-flow.png)
+
+## Phone or email sign-in (Preview)
+
+You can choose to combine [phone sign-in](#phone-sign-in-preview) and [email sign-in](#email-sign-in). On the sign-up or sign-in page, the user can enter either a phone number or an email address. Based on the user's input, Azure AD B2C takes the user to the corresponding flow.
+
+![Phone or email sign-up or sign-in experience](./media/identity-provider-local/local-account-phone-and-email-experience.png)
+
+::: zone pivot="b2c-user-flow"
+
+## Configure local account identity provider settings
+
+You can configure the local identity providers available to be used within a User Flow by enabling or disabling the providers (email, username, or phone number). You can have more than one local identity provider enabled at the tenant level.
+
+A User Flow can only be configured to use one of the local account identity providers at any one time. Each User Flow can have a different local account identity provider set, if more than one has been enabled at the tenant level.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your Azure AD tenant.
+1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+1. Under **Manage**, select **Identity providers**.
+1. In the identity provider list, select **Local account**.
+1. On the **Configure local IDP** page, select at least one of the allowable identity types consumers can use to create their local accounts in your Azure AD B2C tenant.
+1. Select **Save**.
+
+## Configure your User Flow
+
+1. In the left menu of the Azure portal, select **Azure AD B2C**.
+1. Under **Policies**, select **User flows (policies)**.
+1. Select the user flow for which you'd like to configure the sign-up and sign-in experience.
+1. Select **Identity providers**.
+1. Under **Local accounts**, select one of the following: **Email signup**, **User ID signup**, **Phone signup**, **Phone/Email signup**, or **None**.
+
+### Enable the recovery email prompt
+
+If you choose the **Phone signup** or **Phone/Email signup** option, enable the recovery email prompt.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. In Azure AD B2C, under **Policies**, select **User flows**.
+1. Select the user flow from the list.
+1. Under **Settings**, select **Properties**.
+1. Next to **Enable recovery email prompt for phone number signup and sign in (preview)**, select:
+ - **On** to show the recovery email prompt during both sign-up and sign-in.
+ - **Off** to hide the recovery email prompt.
+1. Select **Save**.
+
+::: zone-end
+
+::: zone pivot="b2c-custom-policy"
+
+## Get the starter pack
+
+Custom policies are a set of XML files you upload to your Azure AD B2C tenant to define user journeys. We provide starter packs with several pre-built policies. Download the relevant starter-pack:
+
+- [Email sign-in](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/SocialAndLocalAccounts)
+- [Username sign-in](https://github.com/azure-ad-b2c/samples/tree/master/policies/username-signup-or-signin)
+- [Phone sign-in](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/scenarios/phone-number-passwordless). Select the [SignUpOrSignInWithPhone.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/master/scenarios/phone-number-passwordless/SignUpOrSignInWithPhone.xml) relying party policy.
+- [Phone or email sign-in](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/scenarios/phone-number-passwordless). Select the [SignUpOrSignInWithPhoneOrEmail.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/master/scenarios/phone-number-passwordless/SignUpOrSignInWithPhoneOrEmail.xml) relying party policy.
+
+After you download the starter pack, make the following changes (a rough sketch of the result follows these steps):
+
+1. In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com`.
+
+1. Complete the steps in the [Add application IDs to the custom policy](custom-policy-get-started.md#add-application-ids-to-the-custom-policy) section of [Get started with custom policies in Azure Active Directory B2C](custom-policy-get-started.md). For example, update `/phone-number-passwordless/`**`Phone_Email_Base.xml`** with the **Application (client) IDs** of the two applications you registered when completing the prerequisites, *IdentityExperienceFramework* and *ProxyIdentityExperienceFramework*.
+1. Upload the policy files.
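As a rough sketch of the edits in steps 1 and 2, assuming the common starter-pack layout (the attribute values and the `login-NonInteractive` items shown are illustrative placeholders; the phone/email pack's file structure may differ slightly):

```xml
<!-- Header of each policy file: every yourtenant occurrence becomes your tenant name. -->
<TrustFrameworkPolicy
  xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06/TrustFrameworkPolicy"
  PolicySchemaVersion="0.3.0.0"
  TenantId="contosob2c.onmicrosoft.com"
  PolicyId="B2C_1A_Phone_Email_Base"
  PublicPolicyUri="http://contosob2c.onmicrosoft.com/B2C_1A_Phone_Email_Base">
  ...
  <!-- Inside the login-NonInteractive technical profile, paste the application (client) IDs
       of the two apps registered in the prerequisites. -->
  <Item Key="client_id">ProxyIdentityExperienceFramework application ID</Item>
  <Item Key="IdTokenAudience">IdentityExperienceFramework application ID</Item>
  ...
</TrustFrameworkPolicy>
```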
+
+::: zone-end
+
+## Next steps
+
+- [Add external identity providers](tutorial-add-identity-providers.md)
+- [Create a user flow](tutorial-create-user-flows.md)
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-microsoft-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-microsoft-account.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -44,15 +44,15 @@ To enable sign-in for users with a Microsoft account in Azure Active Directory B
For more information on the different account type selections, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). 1. Under **Redirect URI (optional)**, select **Web** and enter `https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/oauth2/authresp` in the text box. Replace `<tenant-name>` with your Azure AD B2C tenant name. 1. Select **Register**
-1. Record the **Application (client) ID** shown on the application Overview page. You need this when you configure the identity provider in the next section.
+1. Record the **Application (client) ID** shown on the application Overview page. You need the client ID when you configure the identity provider in the next section.
1. Select **Certificates & secrets** 1. Click **New client secret** 1. Enter a **Description** for the secret, for example *Application password 1*, and then click **Add**.
-1. Record the application password shown in the **Value** column. You need this when you configure the identity provider in the next section.
+1. Record the application password shown in the **Value** column. You need the client secret when you configure the identity provider in the next section.
::: zone pivot="b2c-user-flow"
-## Configure a Microsoft account as an identity provider
+## Configure Microsoft as an identity provider
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant. 1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
@@ -63,6 +63,16 @@ To enable sign-in for users with a Microsoft account in Azure Active Directory B
1. For the **Client secret**, enter the client secret that you recorded. 1. Select **Save**.
+## Add Microsoft identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Click the user flow to which you want to add the Microsoft identity provider.
+1. Under **Social identity providers**, select **Microsoft Account**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**.
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
@@ -95,9 +105,9 @@ Now that you've created the application in your Azure AD tenant, you need to sto
1. For **Key usage**, select `Signature`. 1. Click **Create**.
-## Add a claims provider
+## Configure Microsoft as an identity provider
-To enable your users to sign in using a Microsoft account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using a Microsoft account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define Azure AD as a claims provider by adding the **ClaimsProvider** element in the extension file of your policy.
@@ -110,7 +120,7 @@ You can define Azure AD as a claims provider by adding the **ClaimsProvider** el
<Domain>live.com</Domain> <DisplayName>Microsoft Account</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="MSA-OIDC">
+ <TechnicalProfile Id="MSA-MicrosoftAccount-OpenIdConnect">
<DisplayName>Microsoft Account</DisplayName> <Protocol Name="OpenIdConnect" /> <Metadata>
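A minimal sketch of the renamed `MSA-MicrosoftAccount-OpenIdConnect` technical profile, assuming the usual OpenID Connect discovery setup for the Microsoft identity platform; the metadata items and the `B2C_1A_MSASecret` key name are illustrative and should be checked against the article's full snippet:

```xml
<TechnicalProfile Id="MSA-MicrosoftAccount-OpenIdConnect">
  <DisplayName>Microsoft Account</DisplayName>
  <Protocol Name="OpenIdConnect" />
  <Metadata>
    <!-- OpenID Connect discovery document for the Microsoft identity platform (common endpoint). -->
    <Item Key="METADATA">https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration</Item>
    <Item Key="response_types">code</Item>
    <Item Key="scope">openid profile email</Item>
    <Item Key="HttpBinding">POST</Item>
    <Item Key="UsePolicyInRedirectUri">false</Item>
    <!-- Application (client) ID recorded from the Azure AD app registration. -->
    <Item Key="client_id">Your application (client) ID</Item>
  </Metadata>
  <CryptographicKeys>
    <!-- Policy key created earlier to store the client secret; the name is an assumed example. -->
    <Key Id="client_secret" StorageReferenceId="B2C_1A_MSASecret" />
  </CryptographicKeys>
  ...
</TechnicalProfile>
```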
@@ -152,81 +162,26 @@ You can define Azure AD as a claims provider by adding the **ClaimsProvider** el
You've now configured your policy so that Azure AD B2C knows how to communicate with your Microsoft account application in Azure AD.
-### Upload the extension file for verification
-
-Before continuing, upload the modified policy to confirm that it doesn't have any issues so far.
-
-1. Navigate to your Azure AD B2C tenant in the Azure portal and select **Identity Experience Framework**.
-1. On the **Custom policies** page, select **Upload custom policy**.
-1. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-1. Click **Upload**.
-
-If no errors are displayed in the portal, continue to the next section.
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-## Register the claims provider
-At this point, you've set up the identity provider, but it's not yet available in any of the sign-up or sign-in screens. To make it available, create a duplicate of an existing template user journey, then modify it so that it also has the Microsoft account identity provider.
-
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-1. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-1. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-1. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-1. Rename the ID of the user journey. For example, `SignUpSignInMSA`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up or sign-in screen. If you add a **ClaimsProviderSelection** element for a Microsoft account, a new button is displayed when a user lands on the page.
-
-1. In the *TrustFrameworkExtensions.xml* file, find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created.
-1. Under **ClaimsProviderSelects**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `MicrosoftAccountExchange`:
-
- ```xml
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
<ClaimsProviderSelection TargetClaimsExchangeId="MicrosoftAccountExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with a Microsoft account to receive a token.
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-1. Add the following **ClaimsExchange** element making sure that you use the same value for the ID that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="MicrosoftAccountExchange" TechnicalProfileReferenceId="MSA-OIDC" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to match that of the `Id` value in the **TechnicalProfile** element of the claims provider you added earlier. For example, `MSA-OIDC`.
-
-1. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
-
-## Add Microsoft identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to add the Microsoft identity provider.
-1. Under the **Social identity providers**, select **Microsoft Account**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
-
-## Update and test the relying party file
-
-Update the relying party (RP) file that initiates the user journey that you created.
-
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInMSA.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInMSA`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example,`http://contoso.com/B2C_1A_signup_signin_msa`
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the user journey that you created earlier (SignUpSignInMSA).
-1. Save your changes, upload the file, and then select the new policy in the list.
-1. Make sure that Azure AD B2C application that you created in the previous section (or by completing the prerequisites, for example *webapp1* or *testapp1*) is selected in the **Select application** field, and then test it by clicking **Run now**.
-1. Select the **Microsoft Account** button and sign in.
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+  ...
+  <ClaimsExchanges>
+    <ClaimsExchange Id="MicrosoftAccountExchange" TechnicalProfileReferenceId="MSA-MicrosoftAccount-OpenIdConnect" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
+
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
::: zone-end\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-qq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-qq.md
@@ -8,7 +8,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -62,6 +62,16 @@ To enable sign-in for users with a QQ account in Azure Active Directory B2C (Azu
1. For the **Client secret**, enter the APP KEY that you recorded. 1. Select **Save**.
+## Add QQ identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Click the user flow to which you want to add the QQ identity provider.
+1. Under **Social identity providers**, select **QQ**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**.
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
@@ -81,9 +91,9 @@ You need to store the client secret that you previously recorded in your Azure A
9. For **Key usage**, select `Signature`. 10. Click **Create**.
-## Add a claims provider
+## Configure QQ as an identity provider
-If you want users to sign in by using a QQ account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using a QQ account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define a QQ account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
@@ -96,7 +106,7 @@ You can define a QQ account as a claims provider by adding it to the **ClaimsPro
<Domain>qq.com</Domain> <DisplayName>QQ (Preview)</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="QQ-OAUTH">
+ <TechnicalProfile Id="QQ-OAuth2">
<DisplayName>QQ</DisplayName> <Protocol Name="OAuth2" /> <Metadata>
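A rough sketch of how the renamed `QQ-OAuth2` profile wires up the application ID and the client-secret policy key described in the steps below; the `B2C_1A_QQSecret` key name is an illustrative assumption:

```xml
<TechnicalProfile Id="QQ-OAuth2">
  ...
  <Metadata>
    ...
    <!-- Set to the APP ID from your QQ application registration. -->
    <Item Key="client_id">Your QQ application ID</Item>
  </Metadata>
  <CryptographicKeys>
    <!-- Policy key that stores the APP KEY (client secret) created earlier. -->
    <Key Id="client_secret" StorageReferenceId="B2C_1A_QQSecret" />
  </CryptographicKeys>
</TechnicalProfile>
```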
@@ -134,79 +144,26 @@ You can define a QQ account as a claims provider by adding it to the **ClaimsPro
4. Set **client_id** to the application ID from the application registration. 5. Save the file.
-### Upload the extension file for verification
-
-By now, you have configured your policy so that Azure AD B2C knows how to communicate with your QQ account. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
-
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-2. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-3. Click **Upload**.
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-## Register the claims provider
-At this point, the identity provider has been set up, but it's not available in any of the sign-up/sign-in screens. To make it available, you create a duplicate of an existing template user journey, and then modify it so that it also has the QQ identity provider.
-
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-3. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-5. Rename the ID of the user journey. For example, `SignUpSignInQQ`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up/sign-in screen. If you add a **ClaimsProviderSelection** element for a QQ account, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created.
-2. Under **ClaimsProviderSelects**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `QQExchange`:
-
- ```xml
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
<ClaimsProviderSelection TargetClaimsExchangeId="QQExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with a QQ account to receive a token.
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-2. Add the following **ClaimsExchange** element making sure that you use the same value for ID that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="QQExchange" TechnicalProfileReferenceId="QQ-OAuth" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier. For example, `QQ-OAuth`.
-
-3. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
-
-## Add QQ identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to add the QQ identity provider.
-1. Under the **Social identity providers**, select **QQ**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
-
-## Update and test the relying party file
-
-Update the relying party (RP) file that initiates the user journey that you created.
-
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInQQ.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInQQ`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example,`http://contoso.com/B2C_1A_signup_signin_QQ`
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the new user journey that you created (SignUpSignQQ).
-1. Save your changes, upload the file.
-1. Under **Custom policies**, select **B2C_1A_signup_signin**.
-1. For **Select Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Select **Run now** and select QQ to sign in with QQ and test the custom policy.
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="QQExchange" TechnicalProfileReferenceId="QQ-OAuth2" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
+
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
::: zone-end
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-salesforce.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -55,7 +55,7 @@ To enable sign-in for users with a Salesforce account in Azure Active Directory
::: zone pivot="b2c-user-flow"
-## Configure a Salesforce account as an identity provider
+## Configure Salesforce as an identity provider
1. Make sure you're using the directory that contains Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your Azure AD B2C tenant. 1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
@@ -81,6 +81,17 @@ To enable sign-in for users with a Salesforce account in Azure Active Directory
- **Email**: *email* 1. Select **Save**.+
+## Add Salesforce identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Click the user flow to which you want to add the Salesforce identity provider.
+1. Under **Social identity providers**, select **Salesforce**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**.
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
@@ -100,9 +111,9 @@ You need to store the client secret that you previously recorded in your Azure A
9. For **Key usage**, select `Signature`. 10. Click **Create**.
-## Add a claims provider
+## Configure Salesforce as an identity provider
-If you want users to sign in by using a Salesforce account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using a Salesforce account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define a Salesforce account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
@@ -115,7 +126,7 @@ You can define a Salesforce account as a claims provider by adding it to the **C
<Domain>salesforce.com</Domain> <DisplayName>Salesforce</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="Salesforce-OIDC">
+ <TechnicalProfile Id="Salesforce-OpenIdConnect">
<DisplayName>Salesforce</DisplayName> <Protocol Name="OpenIdConnect" /> <Metadata>
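A rough sketch of the metadata for the renamed `Salesforce-OpenIdConnect` profile, assuming a standard OpenID Connect discovery document; the discovery URL and scope values are assumptions to confirm against the article's full snippet:

```xml
<TechnicalProfile Id="Salesforce-OpenIdConnect">
  <DisplayName>Salesforce</DisplayName>
  <Protocol Name="OpenIdConnect" />
  <Metadata>
    <!-- If you use a Salesforce My Domain or community URL, point METADATA at that host's
         /.well-known/openid-configuration instead. -->
    <Item Key="METADATA">https://login.salesforce.com/.well-known/openid-configuration</Item>
    <Item Key="response_types">code</Item>
    <Item Key="scope">openid profile email</Item>
    <!-- Consumer Key from the Salesforce connected app. -->
    <Item Key="client_id">Your Salesforce Consumer Key</Item>
  </Metadata>
  ...
</TechnicalProfile>
```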
@@ -155,80 +166,27 @@ You can define a Salesforce account as a claims provider by adding it to the **C
5. Set **client_id** to the application ID from the application registration. 6. Save the file.
-### Upload the extension file for verification
-
-By now, you have configured your policy so that Azure AD B2C knows how to communicate with your Salesforce account. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
-
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-2. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-3. Click **Upload**.
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-## Register the claims provider
-At this point, the identity provider has been set up, but it's not available in any of the sign-up/sign-in screens. To make it available, you create a duplicate of an existing template user journey, and then modify it so that it also has the Salesforce identity provider.
-
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-3. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-5. Rename the ID of the user journey. For example, `SignUpSignInSalesforce`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up/sign-in screen. If you add a **ClaimsProviderSelection** element for a Salesforce account, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created.
-2. Under **ClaimsProviderSelects**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `SalesforceExchange`:
-
- ```xml
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
<ClaimsProviderSelection TargetClaimsExchangeId="SalesforceExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with a Salesforce account to receive a token.
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-2. Add the following **ClaimsExchange** element making sure that you use the same value for ID that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="SalesforceExchange" TechnicalProfileReferenceId="Salesforce-OIDC" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier. For example, `Salesforce-OIDC`.
-
-3. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
-
-## Add Salesforce identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to add the Salesforce identity provider.
-1. Under the **Social identity providers**, select **Salesforce**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
-
-## Update and test the relying party file
-
-Update the relying party (RP) file that initiates the user journey that you created.
-
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInSalesforce.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInSalesforce`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example,`http://contoso.com/B2C_1A_signup_signin_Salesforce`
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the new user journey that you created (SignUpSignSalesforce).
-1. Save your changes, upload the file.
-1. Under **Custom policies**, select **B2C_1A_signup_signin**.
-1. For **Select Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Select **Run now** and select Salesforce to sign in with Salesforce and test the custom policy.
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="SalesforceExchange" TechnicalProfileReferenceId="Salesforce-OpenIdConnect" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
+
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
::: zone-end
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-twitter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-twitter.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -31,7 +31,7 @@ zone_pivot_groups: b2c-policy-type
## Create an application
-To enable sign-in for users with a Twitter account in Azure Active Directory B2C (Azure AD B2C) you need to create a Twitter application. If you don't already have a Twitter account, you can sign up at [https://twitter.com/signup](https://twitter.com/signup). You also need to [Apply for a developer account](https://developer.twitter.com/en/apply/user.html). For more information, see [Apply for access](https://developer.twitter.com/en/apply-for-access).
+To enable sign-in for users with a Twitter account in Azure AD B2C, you need to create a Twitter application. If you don't already have a Twitter account, you can sign up at [https://twitter.com/signup](https://twitter.com/signup). You also need to [Apply for a developer account](https://developer.twitter.com/en/apply/user.html). For more information, see [Apply for access](https://developer.twitter.com/en/apply-for-access).
1. Sign in to the [Twitter Developer Portal](https://developer.twitter.com/portal/projects-and-apps) with your Twitter account credentials. 1. Under **Standalone Apps**, select **+Create App**.
@@ -41,7 +41,7 @@ To enable sign-in for users with a Twitter account in Azure Active Directory B2C
1. Under **Authentication settings**, select **Edit** 1. Select **Enable 3-legged OAuth** checkbox. 1. Select **Request email address from users** checkbox.
- 1. For the **Callback URLs**, enter `https://your-tenant.b2clogin.com/your-tenant.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Replace `your-tenant` with the name of your tenant name and `your-user-flow-Id` with the identifier of your user flow. For example, `b2c_1A_signup_signin_twitter`. You need to use all lowercase letters when entering your tenant name and user flow id even if they are defined with uppercase letters in Azure AD B2C.
+ 1. For the **Callback URLs**, enter `https://your-tenant.b2clogin.com/your-tenant.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Replace `your-tenant` with the name of your tenant and `your-user-flow-Id` with the identifier of your user flow. For example, `b2c_1A_signup_signin_twitter`. Use all lowercase letters when entering your tenant name and user flow ID even if they are defined with uppercase letters in Azure AD B2C.
1. For the **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`. 1. Enter a URL for the **Terms of service**, for example `http://www.contoso.com/tos`. The policy URL is a page you maintain to provide terms and conditions for your application. 1. Enter a URL for the **Privacy policy**, for example `http://www.contoso.com/privacy`. The policy URL is a page you maintain to provide privacy information for your application.
@@ -49,7 +49,7 @@ To enable sign-in for users with a Twitter account in Azure Active Directory B2C
::: zone pivot="b2c-user-flow"
-## Configure Twitter as an identity provider in your tenant
+## Configure Twitter as an identity provider
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant. 1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
@@ -89,9 +89,9 @@ You need to store the secret key that you previously recorded in your Azure AD B
9. For **Key usage**, select `Encryption`. 10. Click **Create**.
-## Add a claims provider
+## Configure Twitter as an identity provider
-If you want users to sign in using a Twitter account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using a Twitter account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define a Twitter account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
@@ -104,7 +104,7 @@ You can define a Twitter account as a claims provider by adding it to the **Clai
<Domain>twitter.com</Domain> <DisplayName>Twitter</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="Twitter-OAUTH1">
+ <TechnicalProfile Id="Twitter-OAuth1">
<DisplayName>Twitter</DisplayName> <Protocol Name="OAuth1" /> <Metadata>
@@ -141,59 +141,27 @@ You can define a Twitter account as a claims provider by adding it to the **Clai
4. Replace the value of **client_id** with the *API key secret* that you previously recorded. 5. Save the file.
-### Upload the extension file for verification
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-By now, you have configured your policy so that Azure AD B2C knows how to communicate with your Twitter account. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-2. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-3. Click **Upload**.
-
-## Register the claims provider
-
-At this point, the identity provider has been set up, but it's not available in any of the sign-up or sign-in screens. To make it available, you create a duplicate of an existing template user journey, and then modify it so that it also has the Twitter identity provider.
-
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-3. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-5. Rename the ID of the user journey. For example, `SignUpSignInTwitter`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up or sign-in screen. If you add a **ClaimsProviderSelection** element for a Twitter account, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created.
-2. Under **ClaimsProviderSelects**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `TwitterExchange`:
-
- ```xml
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
<ClaimsProviderSelection TargetClaimsExchangeId="TwitterExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with a Twitter account to receive a token.
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-2. Add the following **ClaimsExchange** element making sure that you use the same value for the ID that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="TwitterExchange" TechnicalProfileReferenceId="Twitter-OAUTH1" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier. For example, `Twitter-OAUTH1`.
-
-3. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
-## Update and test the relying party file
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="TwitterExchange" TechnicalProfileReferenceId="Twitter-OAuth1" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
-Update the relying party (RP) file that initiates the user journey that you created.
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInTwitter.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInTwitter`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example,`http://contoso.com/B2C_1A_signup_signin_twitter`
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the new user journey that you created (SignUpSignTwitter).
-1. Save your changes, upload the file, and then select the new policy in the list.
-1. Make sure that Azure AD B2C application that you created is selected in the **Select application** field, and then test it by clicking **Run now**.
::: zone-end
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-wechat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-wechat.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -37,11 +37,11 @@ To enable sign-in for users with a WeChat account in Azure Active Directory B2C
1. Select **管理中心** (management center). 1. Follow the steps to register a new application. 1. Enter `https://your-tenant_name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp` in **授权回调域** (callback URL). For example, if your tenant name is contoso, set the URL to be `https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp`.
-1. Copy the **APP ID** and **APP KEY**. You will need these to add the identity provider to your tenant.
+1. Copy the **APP ID** and **APP KEY**. You need both of them to configure the identity provider in your tenant.
::: zone pivot="b2c-user-flow"
-## Configure WeChat as an identity provider in your tenant
+## Configure WeChat as an identity provider
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant. 1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
@@ -52,6 +52,16 @@ To enable sign-in for users with a WeChat account in Azure Active Directory B2C
1. For the **Client secret**, enter the APP KEY that you recorded. 1. Select **Save**.
+## Add WeChat identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Select the user flow to which you want to add the WeChat identity provider.
+1. Under **Social identity providers**, select **WeChat**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**.
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
@@ -71,9 +81,9 @@ You need to store the client secret that you previously recorded in your Azure A
9. For **Key usage**, select `Signature`. 10. Click **Create**.
-## Add a claims provider
+## Configure WeChat as an identity provider
-If you want users to sign in by using a WeChat account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using a WeChat account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define a WeChat account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
@@ -86,7 +96,7 @@ You can define a WeChat account as a claims provider by adding it to the **Claim
<Domain>wechat.com</Domain> <DisplayName>WeChat (Preview)</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="WeChat-OAUTH">
+ <TechnicalProfile Id="WeChat-OAuth2">
<DisplayName>WeChat</DisplayName> <Protocol Name="OAuth2" /> <Metadata>
@@ -128,79 +138,26 @@ You can define a WeChat account as a claims provider by adding it to the **Claim
4. Set **client_id** to the application ID from the application registration. 5. Save the file.
-### Upload the extension file for verification
-
-By now, you have configured your policy so that Azure AD B2C knows how to communicate with your WeChat account. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
-
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-2. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-3. Click **Upload**.
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-## Register the claims provider
-At this point, the identity provider has been set up, but it's not available in any of the sign-up/sign-in screens. To make it available, you create a duplicate of an existing template user journey, and then modify it so that it also has the WeChat identity provider.
-
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-3. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-5. Rename the ID of the user journey. For example, `SignUpSignInWeChat`.
-
-### Display the button
-
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up/sign-in screen. If you add a **ClaimsProviderSelection** element for a WeChat account, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created.
-2. Under **ClaimsProviderSelects**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `WeChatExchange`:
-
- ```xml
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
<ClaimsProviderSelection TargetClaimsExchangeId="WeChatExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with a WeChat account to receive a token.
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-2. Add the following **ClaimsExchange** element making sure that you use the same value for ID that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="WeChatExchange" TechnicalProfileReferenceId="WeChat-OAuth" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier. For example, `WeChat-OAuth`.
-
-3. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
-
-## Add WeChat identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to add the WeChat identity provider.
-1. Under the **Social identity providers**, select **WeChat**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
-
-## Update and test the relying party file
-
-Update the relying party (RP) file that initiates the user journey that you created.
-
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInWeChat.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInWeChat`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example,`http://contoso.com/B2C_1A_signup_signin_WeChat`
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the new user journey that you created (SignUpSignWeChat).
-1. Save your changes, upload the file.
-1. Under **Custom policies**, select **B2C_1A_signup_signin**.
-1. For **Select Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Select **Run now** and select WeChat to sign in with WeChat and test the custom policy.
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="WeChatExchange" TechnicalProfileReferenceId="WeChat-OAuth2" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
+
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
::: zone-end
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-weibo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-weibo.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -53,7 +53,7 @@ To enable sign-in for users with a Weibo account in Azure Active Directory B2C (
::: zone pivot="b2c-user-flow"
-## Configure a Weibo account as an identity provider
+## Configure Weibo as an identity provider
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant. 1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
@@ -64,6 +64,16 @@ To enable sign-in for users with a Weibo account in Azure Active Directory B2C (
1. For the **Client secret**, enter the App Secret that you recorded. 1. Select **Save**.
+## Add Weibo identity provider to a user flow
+
+1. In your Azure AD B2C tenant, select **User flows**.
+1. Select the user flow to which you want to add the Weibo identity provider.
+1. Under **Social identity providers**, select **Weibo**.
+1. Select **Save**.
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Run user flow**.
+ ::: zone-end ::: zone pivot="b2c-custom-policy"
@@ -83,9 +93,9 @@ You need to store the client secret that you previously recorded in your Azure A
9. For **Key usage**, select `Signature`. 10. Click **Create**.
-## Add a claims provider
+## Configure Weibo as an identity provider
-If you want users to sign in by using a Weibo account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using a Weibo account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
You can define a Weibo account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
@@ -94,50 +104,11 @@ You can define a Weibo account as a claims provider by adding it to the **Claims
3. Add a new **ClaimsProvider** as follows: ```xml
- <ClaimsProvider>
- <Domain>Weibo.com</Domain>
- <DisplayName>Weibo</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="Weibo-OAUTH">
- <DisplayName>Weibo</DisplayName>
- <Protocol Name="OAuth2" />
- <Metadata>
- <Item Key="ProviderName">Weibo</Item>
- <Item Key="authorization_endpoint">https://accounts.Weibo.com/o/oauth2/auth</Item>
- <Item Key="AccessTokenEndpoint">https://accounts.Weibo.com/o/oauth2/token</Item>
- <Item Key="ClaimsEndpoint">https://www.Weiboapis.com/oauth2/v1/userinfo</Item>
- <Item Key="scope">email profile</Item>
- <Item Key="HttpBinding">POST</Item>
- <Item Key="UsePolicyInRedirectUri">false</Item>
- <Item Key="client_id">Your Weibo application ID</Item>
- </Metadata>
- <CryptographicKeys>
- <Key Id="client_secret" StorageReferenceId="B2C_1A_WeiboSecret" />
- </CryptographicKeys>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="id" />
- <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" />
- <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" />
- <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="family_name" />
- <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" />
- <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="Weibo.com" />
- <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" />
- </OutputClaims>
- <OutputClaimsTransformations>
- <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
- <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
- <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
- <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromAlternativeSecurityId" />
- </OutputClaimsTransformations>
- <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
- </TechnicalProfile>
- </TechnicalProfiles>
- </ClaimsProvider>
<ClaimsProvider> <Domain>weibo.com</Domain> <DisplayName>Weibo (Preview)</DisplayName> <TechnicalProfiles>
- <TechnicalProfile Id="Weibo-OAUTH">
+ <TechnicalProfile Id="Weibo-OAuth2">
<DisplayName>Weibo</DisplayName> <Protocol Name="OAuth2" /> <Metadata>
@@ -208,79 +179,26 @@ The GitHub technical profile requires the **CreateIssuerUserId** claim transform
</BuildingBlocks> ```
-### Upload the extension file for verification
-
-By now, you have configured your policy so that Azure AD B2C knows how to communicate with your Weibo account. Try uploading the extension file of your policy just to confirm that it doesn't have any issues so far.
-
-1. On the **Custom Policies** page in your Azure AD B2C tenant, select **Upload Policy**.
-2. Enable **Overwrite the policy if it exists**, and then browse to and select the *TrustFrameworkExtensions.xml* file.
-3. Click **Upload**.
-
-## Register the claims provider
-
-At this point, the identity provider has been set up, but it's not available in any of the sign-up/sign-in screens. To make it available, you create a duplicate of an existing template user journey, and then modify it so that it also has the Weibo identity provider.
-
-1. Open the *TrustFrameworkBase.xml* file from the starter pack.
-2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
-3. Open the *TrustFrameworkExtensions.xml* and find the **UserJourneys** element. If the element doesn't exist, add one.
-4. Paste the entire content of the **UserJourney** element that you copied as a child of the **UserJourneys** element.
-5. Rename the ID of the user journey. For example, `SignUpSignInWeibo`.
+[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
-### Display the button
-The **ClaimsProviderSelection** element is analogous to an identity provider button on a sign-up/sign-in screen. If you add a **ClaimsProviderSelection** element for a Weibo account, a new button shows up when a user lands on the page.
-
-1. Find the **OrchestrationStep** element that includes `Order="1"` in the user journey that you created.
-2. Under **ClaimsProviderSelects**, add the following element. Set the value of **TargetClaimsExchangeId** to an appropriate value, for example `WeiboExchange`:
-
- ```xml
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
<ClaimsProviderSelection TargetClaimsExchangeId="WeiboExchange" />
- ```
-
-### Link the button to an action
-
-Now that you have a button in place, you need to link it to an action. The action, in this case, is for Azure AD B2C to communicate with a Weibo account to receive a token.
-
-1. Find the **OrchestrationStep** that includes `Order="2"` in the user journey.
-2. Add the following **ClaimsExchange** element making sure that you use the same value for ID that you used for **TargetClaimsExchangeId**:
-
- ```xml
- <ClaimsExchange Id="WeiboExchange" TechnicalProfileReferenceId="Weibo-OAuth" />
- ```
-
- Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier. For example, `Weibo-OAuth`.
-
-3. Save the *TrustFrameworkExtensions.xml* file and upload it again for verification.
-
-::: zone-end
-
-::: zone pivot="b2c-user-flow"
-
-## Add Weibo identity provider to a user flow
-
-1. In your Azure AD B2C tenant, select **User flows**.
-1. Click the user flow that you want to add the Weibo identity provider.
-1. Under the **Social identity providers**, select **Weibo**.
-1. Select **Save**.
-1. To test your policy, select **Run user flow**.
-1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Run user flow**
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
-
-## Update and test the relying party file
-
-Update the relying party (RP) file that initiates the user journey that you created.
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="WeiboExchange" TechnicalProfileReferenceId="Weibo-OAuth2" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
-1. Make a copy of *SignUpOrSignIn.xml* in your working directory, and rename it. For example, rename it to *SignUpSignInWeibo.xml*.
-1. Open the new file and update the value of the **PolicyId** attribute for **TrustFrameworkPolicy** with a unique value. For example, `SignUpSignInWeibo`.
-1. Update the value of **PublicPolicyUri** with the URI for the policy. For example,`http://contoso.com/B2C_1A_signup_signin_Weibo`
-1. Update the value of the **ReferenceId** attribute in **DefaultUserJourney** to match the ID of the new user journey that you created (SignUpSignWeibo).
-1. Save your changes, upload the file.
-1. Under **Custom policies**, select **B2C_1A_signup_signin**.
-1. For **Select Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Select **Run now** and select Weibo to sign in with Weibo and test the custom policy.
+[!INCLUDE [active-directory-b2c-create-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
::: zone-end
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/sap-successfactors-integration-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
@@ -8,7 +8,7 @@ ms.service: active-directory
ms.subservice: app-provisioning ms.topic: reference ms.workload: identity
-ms.date: 07/20/2020
+ms.date: 01/19/2021
ms.author: chmutali ---
@@ -50,21 +50,22 @@ For every user in SuccessFactors, Azure AD provisioning service retrieves the fo
| 6 | User | employmentNav/userNav | Always | | 7 | EmpJob | employmentNav/jobInfoNav | Always | | 8 | EmpEmploymentTermination | activeEmploymentsCount | Always |
-| 9 | FOCompany | employmentNav/jobInfoNav/companyNav | Only if `company` or `companyId` attribute is mapped |
-| 10 | FODepartment | employmentNav/jobInfoNav/departmentNav | Only if `department` or `departmentId` attribute is mapped |
-| 11 | FOBusinessUnit | employmentNav/jobInfoNav/businessUnitNav | Only if `businessUnit` or `businessUnitId` attribute is mapped |
-| 12 | FOCostCenter | employmentNav/jobInfoNav/costCenterNav | Only if `costCenter` or `costCenterId` attribute is mapped |
-| 13 | FODivision | employmentNav/jobInfoNav/divisionNav | Only if `division` or `divisionId` attribute is mapped |
-| 14 | FOJobCode | employmentNav/jobInfoNav/jobCodeNav | Only if `jobCode` or `jobCodeId` attribute is mapped |
-| 15 | FOPayGrade | employmentNav/jobInfoNav/payGradeNav | Only if `payGrade` attribute is mapped |
-| 16 | FOLocation | employmentNav/jobInfoNav/locationNav | Only if `location` attribute is mapped |
-| 17 | FOCorporateAddressDEFLT | employmentNav/jobInfoNav/addressNavDEFLT | If mapping contains one of the following attributes: `officeLocationAddress, officeLocationCity, officeLocationZipCode` |
-| 18 | FOEventReason | employmentNav/jobInfoNav/eventReasonNav | Only if `eventReason` attribute is mapped |
-| 19 | EmpGlobalAssignment | employmentNav/empGlobalAssignmentNav | Only if `assignmentType` is mapped |
-| 20 | EmploymentType Picklist | employmentNav/jobInfoNav/employmentTypeNav | Only if `employmentType` is mapped |
-| 21 | EmployeeClass Picklist | employmentNav/jobInfoNav/employeeClassNav | Only if `employeeClass` is mapped |
-| 22 | EmplStatus Picklist | employmentNav/jobInfoNav/emplStatusNav | Only if `emplStatus` is mapped |
-| 23 | AssignmentType Picklist | employmentNav/empGlobalAssignmentNav/assignmentTypeNav | Only if `assignmentType` is mapped |
+| 9 | User's manager | employmentNav/userNav/manager/empInfo | Always |
+| 10 | FOCompany | employmentNav/jobInfoNav/companyNav | Only if `company` or `companyId` attribute is mapped |
+| 11 | FODepartment | employmentNav/jobInfoNav/departmentNav | Only if `department` or `departmentId` attribute is mapped |
+| 12 | FOBusinessUnit | employmentNav/jobInfoNav/businessUnitNav | Only if `businessUnit` or `businessUnitId` attribute is mapped |
+| 13 | FOCostCenter | employmentNav/jobInfoNav/costCenterNav | Only if `costCenter` or `costCenterId` attribute is mapped |
+| 14 | FODivision | employmentNav/jobInfoNav/divisionNav | Only if `division` or `divisionId` attribute is mapped |
+| 15 | FOJobCode | employmentNav/jobInfoNav/jobCodeNav | Only if `jobCode` or `jobCodeId` attribute is mapped |
+| 16 | FOPayGrade | employmentNav/jobInfoNav/payGradeNav | Only if `payGrade` attribute is mapped |
+| 17 | FOLocation | employmentNav/jobInfoNav/locationNav | Only if `location` attribute is mapped |
+| 18 | FOCorporateAddressDEFLT | employmentNav/jobInfoNav/addressNavDEFLT | If mapping contains one of the following attributes: `officeLocationAddress, officeLocationCity, officeLocationZipCode` |
+| 19 | FOEventReason | employmentNav/jobInfoNav/eventReasonNav | Only if `eventReason` attribute is mapped |
+| 20 | EmpGlobalAssignment | employmentNav/empGlobalAssignmentNav | Only if `assignmentType` is mapped |
+| 21 | EmploymentType Picklist | employmentNav/jobInfoNav/employmentTypeNav | Only if `employmentType` is mapped |
+| 22 | EmployeeClass Picklist | employmentNav/jobInfoNav/employeeClassNav | Only if `employeeClass` is mapped |
+| 23 | EmplStatus Picklist | employmentNav/jobInfoNav/emplStatusNav | Only if `emplStatus` is mapped |
+| 24 | AssignmentType Picklist | employmentNav/empGlobalAssignmentNav/assignmentTypeNav | Only if `assignmentType` is mapped |
## How full sync works Based on the attribute-mapping, during full sync Azure AD provisioning service sends the following "GET" OData API query to fetch effective data of all active users.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/workday-integration-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/workday-integration-reference.md new file mode 100644
@@ -0,0 +1,453 @@
+---
+title: Azure Active Directory and Workday integration reference
+description: Technical deep dive into Workday-HR driven provisioning
+services: active-directory
+author: cmmdesai
+manager: celestedg
+ms.service: active-directory
+ms.subservice: app-provisioning
+ms.topic: reference
+ms.workload: identity
+ms.date: 01/18/2021
+ms.author: chmutali
+---
+
+# How Azure Active Directory provisioning integrates with Workday
+
+[Azure Active Directory user provisioning service](../app-provisioning/user-provisioning.md) integrates with [Workday HCM](https://www.workday.com) to manage the identity life cycle of users. Azure Active Directory offers three pre-built integrations:
+
+* [Workday to on-premises Active Directory user provisioning](../saas-apps/workday-inbound-tutorial.md)
+* [Workday to Azure Active Directory user provisioning](../saas-apps/workday-inbound-cloud-only-tutorial.md)
+* [Workday Writeback](../saas-apps/workday-writeback-tutorial.md)
+
+This article explains how the integration works and how you can customize the provisioning behavior for different HR scenarios.
+
+## Establishing connectivity
+
+### Restricting Workday API access to Azure AD endpoints
+Azure AD provisioning service uses basic authentication to connect to Workday Web Services API endpoints.
+
+To further secure the connectivity between Azure AD provisioning service and Workday, you can restrict access so that the designated integration system user only accesses the Workday APIs from allowed Azure AD IP ranges. Please engage your Workday administrator to complete the following configuration in your Workday tenant.
+
+1. Download the [latest IP Ranges](https://www.microsoft.com/download/details.aspx?id=56519) for the Azure Public Cloud.
+1. Open the file and search for tag **AzureActiveDirectory**
+
+ >[!div class="mx-imgBorder"]
+ >![Azure AD IP range](media/sap-successfactors-integration-reference/azure-active-directory-ip-range.png)
+
+1. Copy all IP address ranges listed within the *addressPrefixes* element and use them to build your IP address list.
+1. Log in to the Workday admin portal.
+1. Access the **Maintain IP Ranges** task to create a new IP range for Azure data centers. Specify the IP ranges (using CIDR notation) as a comma-separated list.
+1. Access the **Manage Authentication Policies** task to create a new authentication policy. In the authentication policy, use **Authentication Whitelist** to specify the Azure AD IP range and the security group that will be allowed access from this IP range. Save the changes.
+1. Access the **Activate All Pending Authentication Policy Changes** task to confirm changes.
+
+### Limiting access to worker data in Workday using constrained security groups
+
+The default steps to [configure the Workday integration system user](../saas-apps/workday-inbound-tutorial.md#configure-integration-system-user-in-workday) grant access to retrieve all users in your Workday tenant. In certain integration scenarios, you may want to limit that access so that only users belonging to certain supervisory organizations are returned by the Get_Workers API call and processed by the Workday Azure AD connector.
+
+You can fulfill this requirement by working with your Workday admin to configure constrained integration system security groups. For more information, see [this Workday community article](https://community.workday.com/forums/customer-questions/620393) (*Workday Community login credentials are required to access this article*).
+
+This strategy of limiting access using constrained ISSG (Integration System Security Groups) is particularly useful in the following scenarios:
+* **Phased rollout scenario**: You have a large Workday tenant and plan to perform a phased rollout of Workday to Azure AD automated provisioning. In this scenario, rather than excluding users who are not in scope of the current phase with Azure AD scoping filters, we recommend configuring constrained ISSG so that only in-scope workers are visible to Azure AD.
+* **Multiple provisioning jobs scenario**: You have a large Workday tenant and multiple AD domains each supporting a different business unit/division/company. To support this topology, you would like to run multiple Workday to Azure AD provisioning jobs with each job provisioning a specific set of workers. In this scenario, rather than using Azure AD scoping filters to exclude worker data, we recommend configuring constrained ISSG so that only the relevant worker data is visible to Azure AD.
+
+### Workday test connection query
+
+To test connectivity to Workday, Azure AD sends the following *Get_Workers* Workday Web Services request.
+
+```XML
+<!-- Test connection query tries to retrieve one record from the first page -->
+<!-- Replace version with Workday Web Services version present in your connection URL -->
+<!-- Replace timestamps below with the UTC time corresponding to the test connection event -->
+<Get_Workers_Request p1:version="v21.1" xmlns:p1="urn:com.workday/bsvc" xmlns="urn:com.workday/bsvc">
+ <p1:Request_Criteria>
+ <p1:Transaction_Log_Criteria_Data>
+ <p1:Transaction_Date_Range_Data>
+ <p1:Updated_From>2021-01-19T02:28:50.1491022Z</p1:Updated_From>
+ <p1:Updated_Through>2021-01-19T02:28:50.1491022Z</p1:Updated_Through>
+ </p1:Transaction_Date_Range_Data>
+ </p1:Transaction_Log_Criteria_Data>
+ <p1:Exclude_Employees>true</p1:Exclude_Employees>
+ <p1:Exclude_Contingent_Workers>true</p1:Exclude_Contingent_Workers>
+ <p1:Exclude_Inactive_Workers>true</p1:Exclude_Inactive_Workers>
+ </p1:Request_Criteria>
+ <p1:Response_Filter>
+ <p1:As_Of_Effective_Date>2021-01-19T02:28:50.1491022Z</p1:As_Of_Effective_Date>
+ <p1:As_Of_Entry_DateTime>2021-01-19T02:28:50.1491022Z</p1:As_Of_Entry_DateTime>
+ <p1:Page>1</p1:Page>
+ <p1:Count>1</p1:Count>
+ </p1:Response_Filter>
+ <p1:Response_Group>
+ <p1:Include_Reference>1</p1:Include_Reference>
+ <p1:Include_Personal_Information>1</p1:Include_Personal_Information>
+ </p1:Response_Group>
+</Get_Workers_Request>
+```
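
A successful test connection only needs Workday to answer this request with a well-formed response. As a rough sketch, the paging element of such a response (element names match the full sync response shown later in this article; the values here are purely illustrative) looks something like:

```xml
<!-- Illustrative only: the test query excludes all worker types, so it typically matches few or no records -->
<wd:Response_Results>
  <wd:Total_Results>0</wd:Total_Results>
  <wd:Total_Pages>1</wd:Total_Pages>
  <wd:Page_Results>0</wd:Page_Results>
  <wd:Page>1</wd:Page>
</wd:Response_Results>
```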
+
+## How full sync works
+
+**Full sync** in the context of Workday-driven provisioning refers to the process of fetching all identities from Workday and determining what provisioning rules to apply to each worker object. Full sync happens when you turn on provisioning for the first time and also when you *restart provisioning* either from the Azure portal or using Graph APIs.
+
+Azure AD sends the following *Get_Workers* Workday Web Services request to retrieve worker data. The query looks up the Workday transaction log for all effective-dated worker entries as of the time corresponding to the full sync run.
+
+```XML
+<!-- Workday full sync query -->
+<!-- Replace version with Workday Web Services version present in your connection URL -->
+<!-- Replace timestamps below with the UTC time corresponding to full sync run -->
+<!-- Count specifies the number of records to return in each page -->
+<!-- Response_Group flags derived from provisioning attribute mapping -->
+
+<Get_Workers_Request p1:version="v21.1" xmlns:p1="urn:com.workday/bsvc" xmlns="urn:com.workday/bsvc">
+ <p1:Request_Criteria>
+ <p1:Transaction_Log_Criteria_Data>
+ <p1:Transaction_Type_References>
+ <p1:Transaction_Type_Reference>
+ <p1:ID p1:type="Business_Process_Type">Hire Employee</p1:ID>
+ </p1:Transaction_Type_Reference>
+ <p1:Transaction_Type_Reference>
+ <p1:ID p1:type="Business_Process_Type">Contract Contingent Worker</p1:ID>
+ </p1:Transaction_Type_Reference>
+ </p1:Transaction_Type_References>
+ </p1:Transaction_Log_Criteria_Data>
+ </p1:Request_Criteria>
+ <p1:Response_Filter>
+ <p1:As_Of_Effective_Date>2021-01-19T02:29:16.0094202Z</p1:As_Of_Effective_Date>
+ <p1:As_Of_Entry_DateTime>2021-01-19T02:29:16.0094202Z</p1:As_Of_Entry_DateTime>
+ <p1:Count>30</p1:Count>
+ </p1:Response_Filter>
+ <p1:Response_Group>
+ <p1:Include_Reference>1</p1:Include_Reference>
+ <p1:Include_Personal_Information>1</p1:Include_Personal_Information>
+ <p1:Include_Employment_Information>1</p1:Include_Employment_Information>
+ <p1:Include_Organizations>1</p1:Include_Organizations>
+ <p1:Exclude_Organization_Support_Role_Data>1</p1:Exclude_Organization_Support_Role_Data>
+ <p1:Exclude_Location_Hierarchies>1</p1:Exclude_Location_Hierarchies>
+ <p1:Exclude_Cost_Center_Hierarchies>1</p1:Exclude_Cost_Center_Hierarchies>
+ <p1:Exclude_Company_Hierarchies>1</p1:Exclude_Company_Hierarchies>
+ <p1:Exclude_Matrix_Organizations>1</p1:Exclude_Matrix_Organizations>
+ <p1:Exclude_Pay_Groups>1</p1:Exclude_Pay_Groups>
+ <p1:Exclude_Regions>1</p1:Exclude_Regions>
+ <p1:Exclude_Region_Hierarchies>1</p1:Exclude_Region_Hierarchies>
+ <p1:Exclude_Funds>1</p1:Exclude_Funds>
+ <p1:Exclude_Fund_Hierarchies>1</p1:Exclude_Fund_Hierarchies>
+ <p1:Exclude_Grants>1</p1:Exclude_Grants>
+ <p1:Exclude_Grant_Hierarchies>1</p1:Exclude_Grant_Hierarchies>
+ <p1:Exclude_Business_Units>1</p1:Exclude_Business_Units>
+ <p1:Exclude_Business_Unit_Hierarchies>1</p1:Exclude_Business_Unit_Hierarchies>
+ <p1:Exclude_Programs>1</p1:Exclude_Programs>
+ <p1:Exclude_Program_Hierarchies>1</p1:Exclude_Program_Hierarchies>
+ <p1:Exclude_Gifts>1</p1:Exclude_Gifts>
+ <p1:Exclude_Gift_Hierarchies>1</p1:Exclude_Gift_Hierarchies>
+ <p1:Include_Management_Chain_Data>1</p1:Include_Management_Chain_Data>
+ <p1:Include_Transaction_Log_Data>1</p1:Include_Transaction_Log_Data>
+ <p1:Include_Additional_Jobs>1</p1:Include_Additional_Jobs>
+ </p1:Response_Group>
+</Get_Workers_Request>
+```
+The *Response_Group* node is used to specify which worker attributes to fetch from Workday. For a description of each flag in the *Response_Group* node, please refer to the Workday [Get_Workers API documentation](https://community.workday.com/sites/default/files/file-hosting/productionapi/Human_Resources/v35.2/Get_Workers.html#Worker_Response_GroupType).
+
+Certain flag values specified in the *Response_Group* node are calculated based on the attributes configured in the Workday Azure AD provisioning application. Refer to the *Retrieving worker data attributes* section for the criteria used to set the flag values.
+
+The *Get_Workers* response from Workday for the above query includes the number of worker records and page count.
+
+```XML
+ <wd:Response_Results>
+ <wd:Total_Results>509</wd:Total_Results>
+ <wd:Total_Pages>17</wd:Total_Pages>
+ <wd:Page_Results>30</wd:Page_Results>
+ <wd:Page>1</wd:Page>
+ </wd:Response_Results>
+```
+To retrieve the next page of the result set, the next *Get_Workers* query specifies the page number as a parameter in the *Response_Filter*.
+
+```XML
+ <p1:Response_Filter>
+ <p1:As_Of_Effective_Date>2021-01-19T02:29:16.0094202Z</p1:As_Of_Effective_Date>
+ <p1:As_Of_Entry_DateTime>2021-01-19T02:29:16.0094202Z</p1:As_Of_Entry_DateTime>
+ <p1:Page>2</p1:Page>
+ <p1:Count>30</p1:Count>
+ </p1:Response_Filter>
+```
+The Azure AD provisioning service processes each page and iterates through all effective workers during full sync.
+For each worker entry imported from Workday:
+* The [XPATH expressions](workday-attribute-reference.md) are applied to retrieve attribute values from Workday (a sketch of the worker structure these expressions navigate follows below).
+* The attribute mapping and matching rules are applied.
+* The service determines what operation to perform in the target (Azure AD/AD).
+
+Once the processing is complete, the service saves the timestamp associated with the start of the full sync as a watermark. This watermark serves as the starting point for the incremental sync cycle.
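
To make the XPATH evaluation step concrete, here is a heavily abridged sketch of one worker entry in a *Get_Workers* response. The top-level element names correspond to the data sets listed under *Retrieving worker data attributes*; the identifier values are illustrative:

```xml
<!-- Abridged sketch of a single worker entry; identifier values are illustrative -->
<wd:Worker>
  <wd:Worker_Reference>
    <wd:ID wd:type="WID">7bf6322f1ea101fd0b4433077f09cb04</wd:ID>
    <wd:ID wd:type="Employee_ID">21001</wd:ID>
  </wd:Worker_Reference>
  <wd:Worker_Data>
    <wd:Personal_Data>...</wd:Personal_Data>
    <wd:Employment_Data>...</wd:Employment_Data>
    <wd:Organization_Data>...</wd:Organization_Data>
    <wd:Management_Chain_Data>...</wd:Management_Chain_Data>
  </wd:Worker_Data>
</wd:Worker>
```

An expression such as `wd:Worker/wd:Worker_Data/wd:Personal_Data` selects the corresponding child node of each worker entry; the per-attribute expressions in the [Workday attribute reference](workday-attribute-reference.md) drill further into these nodes.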
+
+## How incremental sync works
+
+After full sync, Azure AD provisioning service maintains `LastExecutionTimestamp` and uses it to create delta queries to retrieve incremental changes. During incremental sync, Azure AD sends the following types of queries to Workday:
+
+* [Query for manual updates](#query-for-manual-updates)
+* [Query for effective-dated updates and terminations](#query-for-effective-dated-updates-and-terminations)
+* [Query for future-dated hires](#query-for-future-dated-hires)
+
+### Query for manual updates
+
+The following *Get_Workers* request queries for manual updates that happened between last execution and current execution time.
+
+```xml
+<!-- Workday incremental sync query for manual updates -->
+<!-- Replace version with Workday Web Services version present in your connection URL -->
+<!-- Replace timestamps below with the UTC time corresponding to last execution and current execution time -->
+<!-- Count specifies the number of records to return in each page -->
+<!-- Response_Group flags derived from provisioning attribute mapping -->
+
+<Get_Workers_Request p1:version="v21.1" xmlns:p1="urn:com.workday/bsvc" xmlns="urn:com.workday/bsvc">
+ <p1:Request_Criteria>
+ <p1:Transaction_Log_Criteria_Data>
+ <p1:Transaction_Date_Range_Data>
+ <p1:Updated_From>2021-01-19T02:29:16.0094202Z</p1:Updated_From>
+ <p1:Updated_Through>2021-01-19T02:49:06.290136Z</p1:Updated_Through>
+ </p1:Transaction_Date_Range_Data>
+ </p1:Transaction_Log_Criteria_Data>
+ </p1:Request_Criteria>
+ <p1:Response_Filter>
+ <p1:As_Of_Effective_Date>2021-01-19T02:49:06.290136Z</p1:As_Of_Effective_Date>
+ <p1:As_Of_Entry_DateTime>2021-01-19T02:49:06.290136Z</p1:As_Of_Entry_DateTime>
+ <p1:Count>30</p1:Count>
+ </p1:Response_Filter>
+ <p1:Response_Group>
+ <p1:Include_Reference>1</p1:Include_Reference>
+ <p1:Include_Personal_Information>1</p1:Include_Personal_Information>
+ <p1:Include_Employment_Information>1</p1:Include_Employment_Information>
+ <p1:Include_Organizations>1</p1:Include_Organizations>
+ <p1:Exclude_Organization_Support_Role_Data>1</p1:Exclude_Organization_Support_Role_Data>
+ <p1:Exclude_Location_Hierarchies>1</p1:Exclude_Location_Hierarchies>
+ <p1:Exclude_Cost_Center_Hierarchies>1</p1:Exclude_Cost_Center_Hierarchies>
+ <p1:Exclude_Company_Hierarchies>1</p1:Exclude_Company_Hierarchies>
+ <p1:Exclude_Matrix_Organizations>1</p1:Exclude_Matrix_Organizations>
+ <p1:Exclude_Pay_Groups>1</p1:Exclude_Pay_Groups>
+ <p1:Exclude_Regions>1</p1:Exclude_Regions>
+ <p1:Exclude_Region_Hierarchies>1</p1:Exclude_Region_Hierarchies>
+ <p1:Exclude_Funds>1</p1:Exclude_Funds>
+ <p1:Exclude_Fund_Hierarchies>1</p1:Exclude_Fund_Hierarchies>
+ <p1:Exclude_Grants>1</p1:Exclude_Grants>
+ <p1:Exclude_Grant_Hierarchies>1</p1:Exclude_Grant_Hierarchies>
+ <p1:Exclude_Business_Units>1</p1:Exclude_Business_Units>
+ <p1:Exclude_Business_Unit_Hierarchies>1</p1:Exclude_Business_Unit_Hierarchies>
+ <p1:Exclude_Programs>1</p1:Exclude_Programs>
+ <p1:Exclude_Program_Hierarchies>1</p1:Exclude_Program_Hierarchies>
+ <p1:Exclude_Gifts>1</p1:Exclude_Gifts>
+ <p1:Exclude_Gift_Hierarchies>1</p1:Exclude_Gift_Hierarchies>
+ <p1:Include_Management_Chain_Data>1</p1:Include_Management_Chain_Data>
+ <p1:Include_Additional_Jobs>1</p1:Include_Additional_Jobs>
+ </p1:Response_Group>
+</Get_Workers_Request>
+```
+
+### Query for effective-dated updates and terminations
+
+The following *Get_Workers* request queries for effective-dated updates that happened between last execution and current execution time.
+
+```xml
+<!-- Workday incremental sync query for effective-dated updates -->
+<!-- Replace version with Workday Web Services version present in your connection URL -->
+<!-- Replace timestamps below with the UTC time corresponding to last execution and current execution time -->
+<!-- Count specifies the number of records to return in each page -->
+<!-- Response_Group flags derived from provisioning attribute mapping -->
+
+<Get_Workers_Request p1:version="v21.1" xmlns:p1="urn:com.workday/bsvc" xmlns="urn:com.workday/bsvc">
+ <p1:Request_Criteria>
+ <p1:Transaction_Log_Criteria_Data>
+ <p1:Transaction_Date_Range_Data>
+ <p1:Effective_From>2021-01-19T02:29:16.0094202Z</p1:Effective_From>
+ <p1:Effective_Through>2021-01-19T02:49:06.290136Z</p1:Effective_Through>
+ </p1:Transaction_Date_Range_Data>
+ </p1:Transaction_Log_Criteria_Data>
+ </p1:Request_Criteria>
+ <p1:Response_Filter>
+ <p1:As_Of_Effective_Date>2021-01-19T02:49:06.290136Z</p1:As_Of_Effective_Date>
+ <p1:As_Of_Entry_DateTime>2021-01-19T02:49:06.290136Z</p1:As_Of_Entry_DateTime>
+ <p1:Page>1</p1:Page>
+ <p1:Count>30</p1:Count>
+ </p1:Response_Filter>
+ <p1:Response_Group>
+ <p1:Include_Reference>1</p1:Include_Reference>
+ <p1:Include_Personal_Information>1</p1:Include_Personal_Information>
+ <p1:Include_Employment_Information>1</p1:Include_Employment_Information>
+ <p1:Include_Organizations>1</p1:Include_Organizations>
+ <p1:Exclude_Organization_Support_Role_Data>1</p1:Exclude_Organization_Support_Role_Data>
+ <p1:Exclude_Location_Hierarchies>1</p1:Exclude_Location_Hierarchies>
+ <p1:Exclude_Cost_Center_Hierarchies>1</p1:Exclude_Cost_Center_Hierarchies>
+ <p1:Exclude_Company_Hierarchies>1</p1:Exclude_Company_Hierarchies>
+ <p1:Exclude_Matrix_Organizations>1</p1:Exclude_Matrix_Organizations>
+ <p1:Exclude_Pay_Groups>1</p1:Exclude_Pay_Groups>
+ <p1:Exclude_Regions>1</p1:Exclude_Regions>
+ <p1:Exclude_Region_Hierarchies>1</p1:Exclude_Region_Hierarchies>
+ <p1:Exclude_Funds>1</p1:Exclude_Funds>
+ <p1:Exclude_Fund_Hierarchies>1</p1:Exclude_Fund_Hierarchies>
+ <p1:Exclude_Grants>1</p1:Exclude_Grants>
+ <p1:Exclude_Grant_Hierarchies>1</p1:Exclude_Grant_Hierarchies>
+ <p1:Exclude_Business_Units>1</p1:Exclude_Business_Units>
+ <p1:Exclude_Business_Unit_Hierarchies>1</p1:Exclude_Business_Unit_Hierarchies>
+ <p1:Exclude_Programs>1</p1:Exclude_Programs>
+ <p1:Exclude_Program_Hierarchies>1</p1:Exclude_Program_Hierarchies>
+ <p1:Exclude_Gifts>1</p1:Exclude_Gifts>
+ <p1:Exclude_Gift_Hierarchies>1</p1:Exclude_Gift_Hierarchies>
+ <p1:Include_Management_Chain_Data>1</p1:Include_Management_Chain_Data>
+ <p1:Include_Additional_Jobs>1</p1:Include_Additional_Jobs>
+ </p1:Response_Group>
+</Get_Workers_Request>
+```
+
+### Query for future-dated hires
+
+If any of the above queries returns a future-dated hire, the following *Get_Workers* request is used to fetch information about the new hire. The *WID* attribute of the new hire is used to perform the lookup, and the effective date is set to the hire date and time.
+
+```xml
+<!-- Workday incremental sync query to get new hire data effective as on hire date/first day of work -->
+<!-- Replace version with Workday Web Services version present in your connection URL -->
+<!-- Replace timestamps below with the hire date/first day of work -->
+<!-- Count specifies the number of records to return in each page -->
+<!-- Response_Group flags derived from provisioning attribute mapping -->
+
+<Get_Workers_Request p1:version="v21.1" xmlns:p1="urn:com.workday/bsvc" xmlns="urn:com.workday/bsvc">
+ <p1:Request_References>
+ <p1:Worker_Reference>
+ <p1:ID p1:type="WID">7bf6322f1ea101fd0b4433077f09cb04</p1:ID>
+ </p1:Worker_Reference>
+ </p1:Request_References>
+ <p1:Response_Filter>
+ <p1:As_Of_Effective_Date>2021-02-01T08:00:00+00:00</p1:As_Of_Effective_Date>
+ <p1:As_Of_Entry_DateTime>2021-02-01T08:00:00+00:00</p1:As_Of_Entry_DateTime>
+ <p1:Count>30</p1:Count>
+ </p1:Response_Filter>
+ <p1:Response_Group>
+ <p1:Include_Reference>1</p1:Include_Reference>
+ <p1:Include_Personal_Information>1</p1:Include_Personal_Information>
+ <p1:Include_Employment_Information>1</p1:Include_Employment_Information>
+ <p1:Include_Organizations>1</p1:Include_Organizations>
+ <p1:Exclude_Organization_Support_Role_Data>1</p1:Exclude_Organization_Support_Role_Data>
+ <p1:Exclude_Location_Hierarchies>1</p1:Exclude_Location_Hierarchies>
+ <p1:Exclude_Cost_Center_Hierarchies>1</p1:Exclude_Cost_Center_Hierarchies>
+ <p1:Exclude_Company_Hierarchies>1</p1:Exclude_Company_Hierarchies>
+ <p1:Exclude_Matrix_Organizations>1</p1:Exclude_Matrix_Organizations>
+ <p1:Exclude_Pay_Groups>1</p1:Exclude_Pay_Groups>
+ <p1:Exclude_Regions>1</p1:Exclude_Regions>
+ <p1:Exclude_Region_Hierarchies>1</p1:Exclude_Region_Hierarchies>
+ <p1:Exclude_Funds>1</p1:Exclude_Funds>
+ <p1:Exclude_Fund_Hierarchies>1</p1:Exclude_Fund_Hierarchies>
+ <p1:Exclude_Grants>1</p1:Exclude_Grants>
+ <p1:Exclude_Grant_Hierarchies>1</p1:Exclude_Grant_Hierarchies>
+ <p1:Exclude_Business_Units>1</p1:Exclude_Business_Units>
+ <p1:Exclude_Business_Unit_Hierarchies>1</p1:Exclude_Business_Unit_Hierarchies>
+ <p1:Exclude_Programs>1</p1:Exclude_Programs>
+ <p1:Exclude_Program_Hierarchies>1</p1:Exclude_Program_Hierarchies>
+ <p1:Exclude_Gifts>1</p1:Exclude_Gifts>
+ <p1:Exclude_Gift_Hierarchies>1</p1:Exclude_Gift_Hierarchies>
+ <p1:Include_Management_Chain_Data>1</p1:Include_Management_Chain_Data>
+ <p1:Include_Additional_Jobs>1</p1:Include_Additional_Jobs>
+ </p1:Response_Group>
+</Get_Workers_Request>
+```
+
+### Retrieving worker data attributes
+
+The *Get_Workers* API can return different data sets associated with a worker. Depending on the [XPATH API expressions](workday-attribute-reference.md) configured in the provisioning schema, Azure AD provisioning service determines which data sets to retrieve from Workday. Accordingly, the *Response_Group* flags are set in the *Get_Workers* request.
+
+The table below provides guidance on the mapping configuration to use to retrieve a specific data set.
+
+| \# | Workday Entity | Included by default | XPATH pattern to specify in mapping to fetch non-default entities |
+|----|--------------------------------------|---------------------|-------------------------------------------------------------------------------|
+| 1 | Personal Data | Yes | wd:Worker\_Data/wd:Personal\_Data |
+| 2 | Employment Data | Yes | wd:Worker\_Data/wd:Employment\_Data |
+| 3 | Additional Job Data | Yes | wd:Worker\_Data/wd:Employment\_Data/wd:Worker\_Job\_Data\[@wd:Primary\_Job=0\]|
+| 4 | Organization Data | Yes | wd:Worker\_Data/wd:Organization\_Data |
+| 5 | Management Chain Data | Yes | wd:Worker\_Data/wd:Management\_Chain\_Data |
+| 6 | Supervisory Organization | Yes | 'SUPERVISORY' |
+| 7 | Company | Yes | 'COMPANY' |
+| 8 | Business Unit | No | 'BUSINESS\_UNIT' |
+| 9 | Business Unit Hierarchy | No | 'BUSINESS\_UNIT\_HIERARCHY' |
+| 10 | Company Hierarchy | No | 'COMPANY\_HIERARCHY' |
+| 11 | Cost Center | No | 'COST\_CENTER' |
+| 12 | Cost Center Hierarchy | No | 'COST\_CENTER\_HIERARCHY' |
+| 13 | Fund | No | 'FUND' |
+| 14 | Fund Hierarchy | No | 'FUND\_HIERARCHY' |
+| 15 | Gift | No | 'GIFT' |
+| 16 | Gift Hierarchy | No | 'GIFT\_HIERARCHY' |
+| 17 | Grant | No | 'GRANT' |
+| 18 | Grant Hierarchy | No | 'GRANT\_HIERARCHY' |
+| 19 | Business Site Hierarchy | No | 'BUSINESS\_SITE\_HIERARCHY' |
+| 20 | Matrix Organization | No | 'MATRIX' |
+| 21 | Pay Group | No | 'PAY\_GROUP' |
+| 22 | Programs | No | 'PROGRAMS' |
+| 23 | Program Hierarchy | No | 'PROGRAM\_HIERARCHY' |
+| 24 | Region | No | 'REGION\_HIERARCHY' |
+| 25 | Location Hierarchy | No | 'LOCATION\_HIERARCHY' |
+| 26 | Account Provisioning Data | No | wd:Worker\_Data/wd:Account\_Provisioning\_Data |
+| 27 | Background Check Data | No | wd:Worker\_Data/wd:Background\_Check\_Data |
+| 28 | Benefit Eligibility Data | No | wd:Worker\_Data/wd:Benefit\_Eligibility\_Data |
+| 29 | Benefit Enrollment Data | No | wd:Worker\_Data/wd:Benefit\_Enrollment\_Data |
+| 30 | Career Data | No | wd:Worker\_Data/wd:Career\_Data |
+| 31 | Compensation Data | No | wd:Worker\_Data/wd:Compensation\_Data |
+| 32 | Contingent Worker Tax Authority Data | No | wd:Worker\_Data/wd:Contingent\_Worker\_Tax\_Authority\_Form\_Type\_Data |
+| 33 | Development Item Data | No | wd:Worker\_Data/wd:Development\_Item\_Data |
+| 34 | Employee Contracts Data | No | wd:Worker\_Data/wd:Employee\_Contracts\_Data |
+| 35 | Employee Review Data | No | wd:Worker\_Data/wd:Employee\_Review\_Data |
+| 36 | Feedback Received Data | No | wd:Worker\_Data/wd:Feedback\_Received\_Data |
+| 37 | Worker Goal Data | No | wd:Worker\_Data/wd:Worker\_Goal\_Data |
+| 38 | Photo Data | No | wd:Worker\_Data/wd:Photo\_Data |
+| 39 | Qualification Data | No | wd:Worker\_Data/wd:Qualification\_Data |
+| 40 | Related Persons Data | No | wd:Worker\_Data/wd:Related\_Persons\_Data |
+| 41 | Role Data | No | wd:Worker\_Data/wd:Role\_Data |
+| 42 | Skill Data | No | wd:Worker\_Data/wd:Skill\_Data |
+| 43 | Succession Profile Data | No | wd:Worker\_Data/wd:Succession\_Profile\_Data |
+| 44 | Talent Assessment Data | No | wd:Worker\_Data/wd:Talent\_Assessment\_Data |
+| 45 | User Account Data | No | wd:Worker\_Data/wd:User\_Account\_Data |
+| 46 | Worker Document Data | No | wd:Worker\_Data/wd:Worker\_Document\_Data |
+
+Here are some examples on how you can extend the Workday integration to meet specific requirements.
+
+**Example 1**
+
+Let's say you want to retrieve the following data sets from Workday and use them in your provisioning rules:
+
+* Cost center
+* Cost center hierarchy
+* Pay group
+
+The above data sets are not included by default.
+To retrieve these data sets:
+1. Log in to the Azure portal and open your Workday to AD/Azure AD user provisioning app.
+1. In the Provisioning blade, edit the mappings and open the Workday attribute list from the advanced section.
+1. Add the following attribute definitions and mark them as "Required". These attributes are not mapped to any attribute in AD or Azure AD. They serve only as signals to the connector to retrieve the Cost Center, Cost Center Hierarchy, and Pay Group information.
+
+ > [!div class="mx-tdCol2BreakAll"]
+ >| Attribute Name | XPATH API expression |
+ >|---|---|
+ >| CostCenterHierarchyFlag | wd:Worker/wd:Worker_Data/wd:Organization_Data/wd:Worker_Organization_Data[wd:Organization_Data/wd:Organization_Type_Reference/wd:ID[@wd:type='Organization_Type_ID']='COST_CENTER_HIERARCHY']/wd:Organization_Reference/@wd:Descriptor |
+ >| CostCenterFlag | wd:Worker/wd:Worker_Data/wd:Organization_Data/wd:Worker_Organization_Data[wd:Organization_Data/wd:Organization_Type_Reference/wd:ID[@wd:type='Organization_Type_ID']='COST_CENTER']/wd:Organization_Data/wd:Organization_Code/text() |
+ >| PayGroupFlag | wd:Worker/wd:Worker_Data/wd:Organization_Data/wd:Worker_Organization_Data[wd:Organization_Data/wd:Organization_Type_Reference/wd:ID[@wd:type='Organization_Type_ID']='PAY_GROUP']/wd:Organization_Data/wd:Organization_Reference_ID/text() |
+
+1. Once the Cost Center and Pay Group data sets are available in the *Get_Workers* response, you can use the following XPATH values to retrieve the cost center name, cost center code, and pay group.
+
+ > [!div class="mx-tdCol2BreakAll"]
+ >| Attribute Name | XPATH API expression |
+ >|---|---|
+ >| CostCenterName | wd:Worker/wd:Worker_Data/wd:Organization_Data/wd:Worker_Organization_Data/wd:Organization_Data[wd:Organization_Type_Reference/@wd:Descriptor='Cost Center']/wd:Organization_Name/text() |
+ >| CostCenterCode | wd:Worker/wd:Worker_Data/wd:Organization_Data/wd:Worker_Organization_Data/wd:Organization_Data[wd:Organization_Type_Reference/@wd:Descriptor='Cost Center']/wd:Organization_Code/text() |
+ >| PayGroup | wd:Worker/wd:Worker_Data/wd:Organization_Data/wd:Worker_Organization_Data/wd:Organization_Data[wd:Organization_Type_Reference/@wd:Descriptor='Pay Group']/wd:Organization_Name/text() |
+
+**Example 2**
+
+Let's say you want to retrieve certifications associated with a user. This information is available as part of the *Qualification Data* set.
+To get this data set as part of the *Get_Workers* response, use the following XPATH:
+
+`wd:Worker/wd:Worker_Data/wd:Qualification_Data/wd:Certification/wd:Certification_Data/wd:Issuer/text()`
+
+**Example 3**
+
+Let's say you want to retrieve *Provisioning Groups* assigned to a worker. This information is available as part of the *Account Provisioning Data* set.
+To get this data set as part of the *Get_Workers* response, use the following XPATH:
+
+`wd:Worker/wd:Worker_Data/wd:Account_Provisioning_Data/wd:Provisioning_Group_Assignment_Data[wd:Status='Assigned']/wd:Provisioning_Group/text()`
+
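+If you want to validate these XPATH expressions before changing your attribute mappings, one option is to evaluate them against a *Get_Workers* response captured from your Workday tenant. The following C# sketch is illustrative only: the file name is a placeholder, and the `wd` prefix is assumed to map to the `urn:com.workday/bsvc` namespace that Workday Web Services responses typically use, so confirm both against your own payload.
+
+```csharp
+using System;
+using System.Xml;
+
+class WorkdayXPathCheck
+{
+    static void Main()
+    {
+        // Load a previously captured Get_Workers response.
+        // "get-workers-response.xml" is a placeholder file name.
+        var doc = new XmlDocument();
+        doc.Load("get-workers-response.xml");
+
+        // Register the "wd" prefix used by the XPATH expressions in this article.
+        // urn:com.workday/bsvc is the namespace typically used by Workday Web Services
+        // responses; confirm it against your own payload.
+        var ns = new XmlNamespaceManager(doc.NameTable);
+        ns.AddNamespace("wd", "urn:com.workday/bsvc");
+
+        // The CostCenterName expression from Example 1, prefixed with "//" so it
+        // matches every worker entry in the response document.
+        string costCenterName =
+            "//wd:Worker/wd:Worker_Data/wd:Organization_Data/wd:Worker_Organization_Data" +
+            "/wd:Organization_Data[wd:Organization_Type_Reference/@wd:Descriptor='Cost Center']" +
+            "/wd:Organization_Name/text()";
+
+        foreach (XmlNode match in doc.SelectNodes(costCenterName, ns))
+        {
+            Console.WriteLine(match.Value);
+        }
+    }
+}
+```
+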
+## Next steps
+
+* [Learn how to configure Workday to Active Directory provisioning](../saas-apps/workday-inbound-tutorial.md)
+* [Learn how to configure write back to Workday](../saas-apps/workday-writeback-tutorial.md)
+* [Learn more about supported Workday Attributes for inbound provisioning](workday-attribute-reference.md)
+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/howto-conditional-access-policy-location https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-policy-location.md
@@ -41,6 +41,8 @@ More information about the location condition in Conditional Access can be found
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users and groups** 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+ 1. Select **Done**.
1. Under **Cloud apps or actions** > **Include**, and select **All cloud apps**. 1. Under **Conditions** > **Location**. 1. Set **Configure** to **Yes**
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/howto-conditional-access-policy-registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-policy-registration.md
@@ -31,6 +31,8 @@ The following policy applies to all selected users, who attempt to register usin
> [!WARNING] > Users must be enabled for the [combined registration](../authentication/howto-registration-mfa-sspr-combined.md).
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+ 1. Select **Done**.
1. Under **Cloud apps or actions**, select **User actions**, check **Register security information**. 1. Under **Conditions** > **Locations**. 1. Configure **Yes**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-initializing-client-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-initializing-client-applications.md
@@ -163,3 +163,12 @@ app = PublicClientApplicationBuilder.Create(clientId)
.WithB2CAuthority("https://fabrikamb2c.b2clogin.com/tfp/{tenant}/{PolicySignInSignUp}") .Build(); ```+
+## Next steps
+
+After you've initialized the client application, your next task is to add support for user sign-in, authorized API access, or both.
+
+Our application scenario documentation provides guidance for signing in a user and acquiring an access token to access an API on behalf of that user:
+
+- [Web app that signs in users: Sign-in and sign-out](scenario-web-app-sign-user-sign-in.md)
+- [Web app that calls web APIs: Acquire a token](scenario-web-app-call-api-acquire-token.md)
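+
+As a rough illustration of that next step for a public client application, the following sketch acquires a token interactively once the application object has been built. It's a minimal example, not the pattern from the linked scenarios: the client ID and redirect URI are placeholders, and `user.read` is an illustrative Microsoft Graph scope that you'd replace with the scopes exposed by the API you call.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Identity.Client;
+
+class SignInSketch
+{
+    static async Task Main()
+    {
+        // Placeholder values: use your app registration's client ID and the
+        // redirect URI configured for it.
+        IPublicClientApplication app = PublicClientApplicationBuilder
+            .Create("11111111-2222-3333-4444-555555555555")
+            .WithRedirectUri("http://localhost")
+            .Build();
+
+        // "user.read" is an illustrative Microsoft Graph scope; replace it with
+        // the scopes required by the web API your application calls.
+        string[] scopes = { "user.read" };
+
+        // Prompt the user to sign in and consent, then return an access token.
+        AuthenticationResult result = await app
+            .AcquireTokenInteractive(scopes)
+            .ExecuteAsync();
+
+        Console.WriteLine($"Signed in as {result.Account.Username}");
+    }
+}
+```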
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-blazor-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-blazor-server.md
@@ -68,7 +68,7 @@ dotnet new blazorserver2 --auth SingleOrg --calls-graph -o {APP NAME} --client-i
Now, navigate to your new Blazor app in your editor and add the client secret to the *appsettings.json* file, replacing the text "secret-from-app-registration". ```json
-"ClientSecret": "xkAlNiG70000000_UI~d.OS4Dl.-Cy-1m3",
+"ClientSecret": "secret-from-app-registration",
``` ## Test the app
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-whatis.md
@@ -90,7 +90,7 @@ To better understand Azure AD and its documentation, we recommend reviewing the
|Azure AD account| An identity created through Azure AD or another Microsoft cloud service, such as Microsoft 365. Identities are stored in Azure AD and accessible to your organization's cloud service subscriptions. This account is also sometimes called a Work or school account.| |Account Administrator|This classic subscription administrator role is conceptually the billing owner of a subscription. This role has access to the [Azure Account Center](https://account.azure.com/Subscriptions) and enables you to manage all subscriptions in an account. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).| |Service Administrator|This classic subscription administrator role enables you to manage all Azure resources, including access. This role has the equivalent access of a user who is assigned the Owner role at the subscription scope. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).|
-|Owner|This role helps you manage all Azure resources, including access. This role is built on a newer authorization system called Azure role-base access control (Azure RBAC) that provides fine-grained access management to Azure resources. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).|
+|Owner|This role helps you manage all Azure resources, including access. This role is built on a newer authorization system called Azure role-based access control (Azure RBAC) that provides fine-grained access management to Azure resources. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).|
|Azure AD Global administrator|This administrator role is automatically assigned to whomever created the Azure AD tenant. Global administrators can do all of the administrative functions for Azure AD and any services that federate to Azure AD, such as Exchange Online, SharePoint Online, and Skype for Business Online. You can have multiple Global administrators, but only Global administrators can assign administrator roles (including assigning other Global administrators) to users. Note that this administrator role is called Global administrator in the Azure portal, but it's called **Company administrator** in the Microsoft Graph API and Azure AD PowerShell. For more information about the various administrator roles, see [Administrator role permissions in Azure Active Directory](../roles/permissions-reference.md).| |Azure subscription| Used to pay for Azure cloud services. You can have many subscriptions and they're linked to a credit card.| |Azure tenant| A dedicated and trusted instance of Azure AD that's automatically created when your organization signs up for a Microsoft cloud service subscription, such as Microsoft Azure, Microsoft Intune, or Microsoft 365. An Azure tenant represents a single organization.|
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/concept-fundamentals-security-defaults https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
@@ -74,7 +74,7 @@ We tend to think that administrator accounts are the only accounts that need ext
After these attackers gain access, they can request access to privileged information on behalf of the original account holder. They can even download the entire directory to perform a phishing attack on your whole organization.
-One common method to improve protection for all users is to require a stronger form of account verification, such as Multi-Factor Authentication, for everyone. After users complete Multi-Factor Authentication registration, they'll be prompted for additional authentication whenever necessary. This functionality protects all applications registered with Azure AD including SaaS applications.
+One common method to improve protection for all users is to require a stronger form of account verification, such as Multi-Factor Authentication, for everyone. After users complete Multi-Factor Authentication registration, they'll be prompted for additional authentication whenever necessary. Users will be prompted primarily when they authenticate using a new device or application, or when performing critical roles and tasks. This functionality protects all applications registered with Azure AD including SaaS applications.
### Blocking legacy authentication
@@ -175,4 +175,4 @@ To disable security defaults in your directory:
## Next steps
-[Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
\ No newline at end of file
+[Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/concept-provisioning-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
@@ -14,7 +14,7 @@ ms.topic: conceptual
ms.tgt_pltfrm: na ms.workload: identity ms.subservice: report-monitor
-ms.date: 12/28/2020
+ms.date: 1/19/2021
ms.author: markvi ms.reviewer: arvinh
@@ -212,8 +212,6 @@ The **summary** tab provides an overview of what happened and identifiers for th
- You can use the Change ID attribute as unique identifier. This is, for example, helpful when interacting with product support. -- There is currently no option to download provisioning data as a CSV file, but you can export the data using [Microsoft Graph](/graph/api/provisioningobjectsummary-list?tabs=http&view=graph-rest-beta).- - You may see skipped events for users that are not in scope. This is expected, especially when the sync scope is set to all users and groups. Our service will evaluate all the objects in the tenant, even the ones that are out of scope. - The provisioning logs are currently unavailable in the government cloud. If you're unable to access the provisioning logs, please use the audit logs as a temporary workaround.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/overview-sign-in-diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/overview-sign-in-diagnostics.md
@@ -1,7 +1,7 @@
---
-title: What is Sign-in Diagnostics in Azure AD? | Microsoft Docs
-description: Provides a general overview of the Sign-in Diagnostics in Azure AD.
+title: What is the sign-in diagnostic for Azure Active Directory?
+description: Provides a general overview of the sign-in diagnostic in Azure Active Directory.
services: active-directory documentationcenter: '' author: MarkusVi
@@ -23,164 +23,154 @@ ms.reviewer: tspring
ms.collection: M365-identity-device-management ---
-# What is Sign-in Diagnostic in Azure AD?
+# What is the sign-in diagnostic in Azure AD?
-Azure AD provides you with a flexible security model to control what users can do with the managed resources. Access to these resources is not only controlled by **who** you are but also by **how** you access them. Typically, flexibility comes along with a certain degree of complexity because of the number of configuration options you have. Complexity has the potential to increase the risk for errors.
+Azure Active Directory (Azure AD) provides you with a flexible security model to control what users can do with managed resources. Access to these resources is controlled not only by *who* they are, but also by *how* they access them. Typically, a flexible model comes with a certain degree of complexity because of the number of configuration options you have. Complexity has the potential to increase the risk for errors.
-As an IT admin, you need a solution that gives you the right level of insights into the activities in your system so that you can easily diagnose and solve problems when they occur. The Sign-in Diagnostic for Azure AD is an example for such a solution. Use the diagnostic to analyze what happened during a sign-in and what actions you can take to resolve problems without being required to involve Microsoft support.
+As an IT admin, you need a solution that gives you insight into the activities in your system. This visibility can let you diagnose and solve problems when they occur. The sign-in diagnostic for Azure AD is an example of such a solution. You can use the diagnostic to analyze what happened during a sign-in attempt and get recommendations for resolving problems without needing to involve Microsoft support.
This article gives you an overview of what the solution does and how you can use it. - ## Requirements
-The Sign-in Diagnostics is available in all editions of Azure AD.<br>
+The sign-in diagnostic is available in all editions of Azure AD.
+ You must be a global administrator in Azure AD to use it. ## How it works
-In Azure AD, the response to a sign-in attempt is tied to **who** you are and **how** you access your tenant. For example, as an administrator, you can typically configure all aspects of your tenant when you sign in from your corporate network. However, you might be even blocked when you sign in with the same account from an untrusted network.
-
+In Azure AD, the response to a sign-in attempt is tied to *who* signs in and *how* they access the tenant. For example, an administrator can typically configure all aspects of the tenant when they sign in from the corporate network. But the same user might be blocked when they sign in with the same account from an untrusted network.
+ Due to the greater flexibility of the system to respond to a sign-in attempt, you might end-up in scenarios where you need to troubleshoot sign-ins. The sign-in diagnostic is a feature that: -- Analyzes data from sign-ins.
+- Analyzes data from sign-in events.
+
+- Displays what happened.
+
+- Provides recommendations for how to resolve problems.
-- Displays what happened, and recommendations on how to resolve problems.
+The sign-in diagnostic for Azure AD is designed to enable self-diagnosis of sign-in errors. To complete the diagnostic process, you need to:
-The Sign-in Diagnostic for Azure AD is designed to enable self-diagnosis of sign-in errors. To complete the diagnostic process, you need to:
+![Diagram showing the sign-in diagnostic.](./media/overview-sign-in-diagnostics/process.png)
-![Sign-in diagnostics process](./media/overview-sign-in-diagnostics/process.png)
-
-1. **Define** the scope of the sign-in events you care about
+1. Define the scope of the sign-in events you care about.
-2. **Select** the sign-in you want to review
+2. Select the sign-in you want to review.
-3. **Review** the diagnostic result
+3. Review the diagnostic results.
-4. **Take** actions
+4. Take action.
-
### Define scope
-The goal of this step is to define the scope for the sign-ins you want to investigate. Your scope is either based on a user or an identifier (correlationId, requestId) and a time range. To narrow down the scope further, you can also specify an app name. Azure AD uses the scope information to locate the right events for you.
+The goal of this step is to define the scope of the sign-in events to investigate. Your scope is either based on a user or on an identifier (correlationId, requestId) and a time range. To narrow down the scope further, you can specify an app name. Azure AD uses the scope information to locate the right events for you.
### Select sign-in
-Based on your search criteria, Azure AD retrieves all matching sign-ins and presents them in an authentication summary list view.
+Based on your search criteria, Azure AD retrieves all matching sign-in events and presents them in an authentication summary list view.
+
+![Partial screenshot showing the authentication summary section.](./media/overview-sign-in-diagnostics/authentication-summary.png)
-![Authentication summary](./media/overview-sign-in-diagnostics/authentication-summary.png)
-
You can customize the columns displayed in this view.
-### Review diagnostic
+### Review diagnostic
-For the selected sign-in event, Azure AD provides you with a diagnostics result.
+For the selected sign-in event, Azure AD provides you with diagnostic results.
-![Diagnostic results](./media/overview-sign-in-diagnostics/diagnostics-results.png)
+![Partial screenshot showing the diagnostic results section.](./media/overview-sign-in-diagnostics/diagnostics-results.png)
-
-The result starts with an assessment. The assessment explains in a few sentences what happened. The explanation helps you to understand the behavior of the system.
+These results start with an assessment, which explains what happened in a few sentences. The explanation helps you to understand the behavior of the system.
-As a next step, you get a summary of the related conditional access policies that were applied to the selected sign-in. This part is completed by recommended remediation steps to resolve your issue. Because it is not always possible to resolve issues without additional help, a recommended step might be to open a support ticket.
+Next, you get a summary of the related conditional access policies that were applied to the selected sign-in event. The diagnostic results also include recommended remediation steps to resolve your issue. Because it's not always possible to resolve issues without more help, a recommended step might be to open a support ticket.
-### Take action
-At this point, you should have the information you need to fix your issue.
+### Take action
+At this point, you should have the information you need to fix your issue.
## Scenarios
-This section provides you with an overview of the covered diagnostic scenarios. The following scenarios are implemented:
-
+The following scenarios are covered by the sign-in diagnostic:
+ - Blocked by conditional access - Failed conditional access -- MFA from conditional access
+- Multifactor authentication (MFA) from conditional access
- MFA from other requirements -- MFA Proof up required--- MFA Proof up required but user sign-in attempt is not from secure location
+- MFA proof up required
-- Successful Sign-in
+- MFA proof up required (risky sign-in location)
+- Successful sign-in
-### Blocked by conditional access
+### Blocked by conditional access
-This scenario is based on a sign-in that was blocked by a conditional access policy.
+In this scenario, a sign-in attempt has been blocked by a conditional access policy.
-![Block access](./media/overview-sign-in-diagnostics/block-access.png)
-
-The diagnostic section for this scenario shows details about the user sign-in and the applied policies.
+![Screenshot showing access configuration with Block access selected.](./media/overview-sign-in-diagnostics/block-access.png)
+The diagnostic section for this scenario shows details about the user sign-in event and the applied policies.
### Failed conditional access
-This scenario is typically a result of a sign-in that failed because the requirements of a conditional access policy were not satisfied. Common examples are:
+This scenario is typically a result of a sign-in attempt that failed because the requirements of a conditional access policy weren't satisfied. Common examples are:
-![Require controls](./media/overview-sign-in-diagnostics/require-controls.png)
+![Screenshot showing access configuration with common policy examples and Grant access selected.](./media/overview-sign-in-diagnostics/require-controls.png)
- Require hybrid Azure AD joined device - Require approved client app -- Require app protection policy --
-The diagnostic section for this scenario shows details about the user sign-in and the applied policies.
+- Require app protection policy
+The diagnostic section for this scenario shows details about the user sign-in attempt and the applied policies.
### MFA from conditional access
-This scenario is based on a conditional access policy that has the requirement to sign-in using multi-factor authentication set.
-
-![Require multi-factor authentication](./media/overview-sign-in-diagnostics/require-mfa.png)
-
-The diagnostic section for this scenario shows details about the user sign-in and the applied policies.
+In this scenario, a conditional access policy has the requirement to sign in using multifactor authentication set.
+![Screenshot showing access configuration with Require multifactor authentication selected.](./media/overview-sign-in-diagnostics/require-mfa.png)
+The diagnostic section for this scenario shows details about the user sign-in attempt and the applied policies.
### MFA from other requirements
-This scenario is based on a multi-factor authentication requirement that was not enforced by a conditional access policy. For example, multi-factor authentication on a per user basis.
--
-![Require multi-factor authentication per user](./media/overview-sign-in-diagnostics/mfa-per-user.png)
+In this scenario, a multifactor authentication requirement wasn't enforced by a conditional access policy. For example, multifactor authentication on a per-user basis.
+![Screenshot showing multifactor authentication per user configuration.](./media/overview-sign-in-diagnostics/mfa-per-user.png)
The intent of this diagnostic scenario is to provide more details about: -- The source of the multi-factor authentication interrupt. -- The result of the client interaction.-
-Additionally, this section also provides you with all details about the user sign-in attempt.
+- The source of the multifactor authentication interrupt
+- The result of the client interaction
+You can also view all details of the user sign-in attempt.
### MFA proof up required
-This scenario is based on sign-ins that were interrupted by requests to set up multi-factor authentication. This setup is also known as "proof up".
+In this scenario, sign-in attempts were interrupted by requests to set up multifactor authentication. This setup is also known as proof up.
-Multi-factor authentication proof up occurs when a user is required to use multi-factor authentication but has not configured it yet, or an administrator has configured the user to configure it.
+Multifactor authentication proof up occurs when a user is required to use multifactor authentication but hasn't configured it yet, or an administrator has required the user to configure it.
-The intent of this diagnostic scenario is to provide insight that the multi-factor authentication interruption was to set it up and to provide the recommendation to have the user complete the proof up.
+The intent of this diagnostic scenario is to reveal that the multifactor authentication interruption was due to lack of user configuration. The recommended solution is for the user to complete the proof up.
-### MFA proof up required from a risky sign-in
+### MFA proof up required (risky sign-in location)
-This scenario results from sign-ins that were interrupted by a request to set up multi-factor authentication from a risky sign-on.
+In this scenario, sign-in attempts were interrupted by a request to set up multifactor authentication from a risky sign-in location.
-The intent of this diagnostic scenario is to provide insight that the multi-factor authentication interruption was to set it up, to provide the recommendation to have the user complete the proof up but to do so from a network location which does not appear risky. For example, if a corporate network is defined as a named location attempt to do the Proof up from the corporate network instead.
+The intent of this diagnostic scenario is to reveal that the multifactor authentication interruption was due to lack of user configuration. The recommended solution is for the user to complete the proof up, specifically from a network location that doesn't appear risky.
+For example, if a corporate network is defined as a named location, the user should attempt to do the proof up from the corporate network instead.
### Successful sign-in
-This scenario is based on sign-ins that were not interrupted by conditional access or multi-factor authentication.
-
-The intent of this diagnostic scenario is to provide insight into what the user supplied during the sign-in in case there was a Conditional Access policy or policies which were expected to apply, or a configured multi-factor authentication which was expected to interrupt the user sign-in.
-
+In this scenario, sign-in events weren't interrupted by conditional access or multifactor authentication.
+This diagnostic scenario provides details about user sign-in events that were expected to be interrupted due to conditional access policies or multifactor authentication.
## Next steps
-* [What are Azure Active Directory reports?](overview-reports.md)
-* [What is Azure Active Directory monitoring?](overview-monitoring.md)
+- [What are Azure Active Directory reports?](overview-reports.md)
+- [What is Azure Active Directory monitoring?](overview-monitoring.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/fortes-change-cloud-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/fortes-change-cloud-provisioning-tutorial.md new file mode 100644
@@ -0,0 +1,150 @@
+---
+title: 'Tutorial: Configure Fortes Change Cloud for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Fortes Change Cloud.
+services: active-directory
+documentationcenter: ''
+author: Zhchia
+writer: Zhchia
+manager: beatrizd
+
+ms.assetid: ef9a8f5e-0bf0-46d6-8e17-3bcf1a5b0a6b
+ms.service: active-directory
+ms.subservice: saas-app-tutorial
+ms.workload: identity
+ms.tgt_pltfrm: na
+ms.devlang: na
+ms.topic: article
+ms.date: 01/15/2021
+ms.author: Zhchia
+---
+
+# Tutorial: Configure Fortes Change Cloud for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Fortes Change Cloud and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Fortes Change Cloud](https://fortesglobal.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Fortes Change Cloud
+> * Remove users in Fortes Change Cloud when they no longer require access
+> * Keep user attributes synchronized between Azure AD and Fortes Change Cloud
+> * [Single sign-on](fortes-change-cloud-tutorial.md) to Fortes Change Cloud (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* A Fortes Change Cloud tenant.
+* A user account in Fortes Change Cloud with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and Fortes Change Cloud](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure Fortes Change Cloud to support provisioning with Azure AD
+
+1. Log in to Fortes Change Cloud with your admin account. Click the **Settings icon**, and then navigate to **SCIM Settings**.
+
+ [ ![The Fortes Change Cloud SCIM Setting](media/fortes-change-cloud-provisioning-tutorial/scim-settings.png) ](media/fortes-change-cloud-provisioning-tutorial/scim-settings.png#lightbox)
+
+2. In the new window, copy and save the **Primary token**. This value will be entered in the Secret Token field in the Provisioning tab of your Fortes Change Cloud application in the Azure portal.
+
+ [ ![The Fortes Change Cloud primary token](media/fortes-change-cloud-provisioning-tutorial/primary-token.png)](media/fortes-change-cloud-provisioning-tutorial/primary-token.png#lightbox)
+
+## Step 3. Add Fortes Change Cloud from the Azure AD application gallery
+
+Add Fortes Change Cloud from the Azure AD application gallery to start managing provisioning to Fortes Change Cloud. If you have previously set up Fortes Change Cloud for SSO, you can use the same application. However, we recommend that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to Fortes Change Cloud, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to Fortes Change Cloud
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Fortes Change Cloud based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Fortes Change Cloud in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Fortes Change Cloud**.
+
+ ![The Fortes Change Cloud link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Fortes Change Cloud Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Fortes Change Cloud. If the connection fails, ensure your Fortes Change Cloud account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Fortes Change Cloud**.
+
+9. Review the user attributes that are synchronized from Azure AD to Fortes Change Cloud in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Fortes Change Cloud for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Fortes Change Cloud API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ |---|---|---|
+ |userName|String|&check;|
+ |active|Boolean|
+ |emails[type eq "work"].value|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |name.formatted|String|
+ |externalId|String|
+ |urn:ietf:params:scim:schemas:extension:fcc:2.0:User:administrator|Boolean|
+ |urn:ietf:params:scim:schemas:extension:fcc:2.0:User:loginDisabled|Boolean|
+
+
+
+10. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+11. To enable the Azure AD provisioning service for Fortes Change Cloud, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+12. Define the users and/or groups that you would like to provision to Fortes Change Cloud by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+13. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
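+During each cycle, the provisioning service sends SCIM requests to the Fortes Change Cloud endpoint based on the attribute mappings configured earlier in this section. As a rough illustration of what a created user can look like on the wire, the following C# sketch serializes a sample payload; all values are fictitious, and the exact request shape is controlled by the provisioning service rather than by this code.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Text.Json;
+
+class ScimPayloadSketch
+{
+    static void Main()
+    {
+        // Fictitious values; the real request body is generated by the Azure AD
+        // provisioning service from the attribute mappings shown in this tutorial.
+        var payload = new Dictionary<string, object>
+        {
+            ["schemas"] = new[]
+            {
+                "urn:ietf:params:scim:schemas:core:2.0:User",
+                "urn:ietf:params:scim:schemas:extension:fcc:2.0:User"
+            },
+            ["externalId"] = "f1a2b3c4-0000-0000-0000-000000000000",
+            ["userName"] = "alex@contoso.com",
+            ["active"] = true,
+            ["name"] = new { givenName = "Alex", familyName = "Wilber", formatted = "Alex Wilber" },
+            ["emails"] = new[] { new { type = "work", value = "alex@contoso.com" } },
+            // Attributes from the fcc extension schema listed in the mapping table.
+            ["urn:ietf:params:scim:schemas:extension:fcc:2.0:User"] = new
+            {
+                administrator = false,
+                loginDisabled = false
+            }
+        };
+
+        Console.WriteLine(JsonSerializer.Serialize(
+            payload, new JsonSerializerOptions { WriteIndented = true }));
+    }
+}
+```
+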
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md
@@ -8,7 +8,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.topic: tutorial ms.workload: identity
-ms.date: 08/05/2020
+ms.date: 01/19/2021
ms.author: chmutali --- # Tutorial: Configure SAP SuccessFactors to Azure AD user provisioning
@@ -86,51 +86,61 @@ Work with your SuccessFactors admin team or implementation partner to create or
### Create an API permissions role
-* Log in to SAP SuccessFactors with a user account that has access to the Admin Center.
-* Search for *Manage Permission Roles*, then select **Manage Permission Roles** from the search results.
+1. Log in to SAP SuccessFactors with a user account that has access to the Admin Center.
+1. Search for *Manage Permission Roles*, then select **Manage Permission Roles** from the search results.
![Manage Permission Roles](./media/sap-successfactors-inbound-provisioning/manage-permission-roles.png)
-* From the Permission Role List, click **Create New**.
- > [!div class="mx-imgBorder"]
- > ![Create New Permission Role](./media/sap-successfactors-inbound-provisioning/create-new-permission-role-1.png)
-* Add a **Role Name** and **Description** for the new permission role. The name and description should indicate that the role is for API usage permissions.
- > [!div class="mx-imgBorder"]
- > ![Permission role detail](./media/sap-successfactors-inbound-provisioning/permission-role-detail.png)
-* Under Permission settings, click **Permission...**, then scroll down the permission list and click **Manage Integration Tools**. Check the box for **Allow Admin to Access to OData API through Basic Authentication**.
- > [!div class="mx-imgBorder"]
- > ![Manage integration tools](./media/sap-successfactors-inbound-provisioning/manage-integration-tools.png)
-* Scroll down in the same box and select **Employee Central API**. Add permissions as shown below to read using ODATA API and edit using ODATA API. Select the edit option if you plan to use the same account for the Writeback to SuccessFactors scenario.
- > [!div class="mx-imgBorder"]
- > ![Read write permissions](./media/sap-successfactors-inbound-provisioning/odata-read-write-perm.png)
-* Click on **Done**. Click **Save Changes**.
+1. From the Permission Role List, click **Create New**.
+ > [!div class="mx-imgBorder"]
+ > ![Create New Permission Role](./media/sap-successfactors-inbound-provisioning/create-new-permission-role-1.png)
+1. Add a **Role Name** and **Description** for the new permission role. The name and description should indicate that the role is for API usage permissions.
+ > [!div class="mx-imgBorder"]
+ > ![Permission role detail](./media/sap-successfactors-inbound-provisioning/permission-role-detail.png)
+1. Under Permission settings, click **Permission...**, then scroll down the permission list and click **Manage Integration Tools**. Check the box for **Allow Admin to Access to OData API through Basic Authentication**.
+ > [!div class="mx-imgBorder"]
+ > ![Manage integration tools](./media/sap-successfactors-inbound-provisioning/manage-integration-tools.png)
+1. Scroll down in the same box and select **Employee Central API**. Add permissions as shown below to read using ODATA API and edit using ODATA API. Select the edit option if you plan to use the same account for the Writeback to SuccessFactors scenario. A sketch showing how to verify this OData API access appears after the steps in this section.
+ > [!div class="mx-imgBorder"]
+ > ![Read write permissions](./media/sap-successfactors-inbound-provisioning/odata-read-write-perm.png)
+
+1. In the same permissions box, go to **User Permissions -> Employee Data** and review the attributes that the service account can read from the SuccessFactors tenant. For example, to retrieve the *Username* attribute from SuccessFactors, ensure that "View" permission is granted for this attribute. Similarly, review each attribute for "View" permission.
+
+ > [!div class="mx-imgBorder"]
+ > ![Employee data permissions](./media/sap-successfactors-inbound-provisioning/review-employee-data-permissions.png)
+
+
+ >[!NOTE]
+ >For the complete list of attributes retrieved by this provisioning app, please refer to [SuccessFactors Attribute Reference](../app-provisioning/sap-successfactors-attribute-reference.md)
+
+1. Click on **Done**. Click **Save Changes**.
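+
+After you grant this role to the API user (covered in the next two sections), you can optionally confirm that the account can read Employee Central data through the OData API. The following C# sketch is an illustration only: the host name is a placeholder for your SuccessFactors API server, the `User` entity is just an example query target, and the Basic authentication user name is assumed to follow the `username@companyId` convention.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Net.Http.Headers;
+using System.Text;
+using System.Threading.Tasks;
+
+class SuccessFactorsODataCheck
+{
+    static async Task Main()
+    {
+        // Placeholder values: replace the host with your SuccessFactors API server
+        // and use the API account created above. Basic authentication for the
+        // OData API is assumed to take the "username@companyId" form.
+        string apiHost = "https://api4.successfactors.com";
+        string credentials = Convert.ToBase64String(
+            Encoding.UTF8.GetBytes("sfapi.user@COMPANYID:your-password"));
+
+        using var client = new HttpClient();
+        client.DefaultRequestHeaders.Authorization =
+            new AuthenticationHeaderValue("Basic", credentials);
+
+        // Request a single record from the OData v2 API. The User entity is only an
+        // example target; a 200 response indicates the read permission is in place.
+        HttpResponseMessage response = await client.GetAsync(
+            $"{apiHost}/odata/v2/User?$top=1&$format=json");
+
+        Console.WriteLine($"Status: {(int)response.StatusCode} {response.StatusCode}");
+        Console.WriteLine(await response.Content.ReadAsStringAsync());
+    }
+}
+```
+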
### Create a Permission Group for the API user
-* In the SuccessFactors Admin Center, search for *Manage Permission Groups*, then select **Manage Permission Groups** from the search results.
- > [!div class="mx-imgBorder"]
- > ![Manage permission groups](./media/sap-successfactors-inbound-provisioning/manage-permission-groups.png)
-* From the Manage Permission Groups window, click **Create New**.
- > [!div class="mx-imgBorder"]
- > ![Add new group](./media/sap-successfactors-inbound-provisioning/create-new-group.png)
-* Add a Group Name for the new group. The group name should indicate that the group is for API users.
- > [!div class="mx-imgBorder"]
- > ![Permission group name](./media/sap-successfactors-inbound-provisioning/permission-group-name.png)
-* Add members to the group. For example, you could select **Username** from the People Pool drop-down menu and then enter the username of the API account that will be used for the integration.
- > [!div class="mx-imgBorder"]
- > ![Add group members](./media/sap-successfactors-inbound-provisioning/add-group-members.png)
-* Click **Done** to finish creating the Permission Group.
+1. In the SuccessFactors Admin Center, search for *Manage Permission Groups*, then select **Manage Permission Groups** from the search results.
+ > [!div class="mx-imgBorder"]
+ > ![Manage permission groups](./media/sap-successfactors-inbound-provisioning/manage-permission-groups.png)
+1. From the Manage Permission Groups window, click **Create New**.
+ > [!div class="mx-imgBorder"]
+ > ![Add new group](./media/sap-successfactors-inbound-provisioning/create-new-group.png)
+1. Add a Group Name for the new group. The group name should indicate that the group is for API users.
+ > [!div class="mx-imgBorder"]
+ > ![Permission group name](./media/sap-successfactors-inbound-provisioning/permission-group-name.png)
+1. Add members to the group. For example, you could select **Username** from the People Pool drop-down menu and then enter the username of the API account that will be used for the integration.
+ > [!div class="mx-imgBorder"]
+ > ![Add group members](./media/sap-successfactors-inbound-provisioning/add-group-members.png)
+1. Click **Done** to finish creating the Permission Group.
### Grant Permission Role to the Permission Group
-* In SuccessFactors Admin Center, search for *Manage Permission Roles*, then select **Manage Permission Roles** from the search results.
-* From the **Permission Role List**, select the role that you created for API usage permissions.
-* Under **Grant this role to...**, click **Add...** button.
-* Select **Permission Group...** from the drop-down menu, then click **Select...** to open the Groups window to search and select the group created above.
- > [!div class="mx-imgBorder"]
- > ![Add permission group](./media/sap-successfactors-inbound-provisioning/add-permission-group.png)
-* Review the Permission Role grant to the Permission Group.
- > [!div class="mx-imgBorder"]
- > ![Permission Role and Group detail](./media/sap-successfactors-inbound-provisioning/permission-role-group.png)
-* Click **Save Changes**.
+1. In SuccessFactors Admin Center, search for *Manage Permission Roles*, then select **Manage Permission Roles** from the search results.
+1. From the **Permission Role List**, select the role that you created for API usage permissions.
+1. Under **Grant this role to...**, click **Add...** button.
+1. Select **Permission Group...** from the drop-down menu, then click **Select...** to open the Groups window to search and select the group created above.
+ > [!div class="mx-imgBorder"]
+ > ![Add permission group](./media/sap-successfactors-inbound-provisioning/add-permission-group.png)
+1. Review the Permission Role grant to the Permission Group.
+ > [!div class="mx-imgBorder"]
+ > ![Permission Role and Group detail](./media/sap-successfactors-inbound-provisioning/permission-role-group.png)
+1. Click **Save Changes**.
## Configuring user provisioning from SuccessFactors to Azure AD
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial.md
@@ -8,7 +8,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.topic: tutorial ms.workload: identity
-ms.date: 08/05/2020
+ms.date: 01/19/2021
ms.author: chmutali --- # Tutorial: Configure SAP SuccessFactors to Active Directory user provisioning
@@ -89,55 +89,61 @@ Work with your SuccessFactors admin team or implementation partner to create or
### Create an API permissions role
-* Log in to SAP SuccessFactors with a user account that has access to the Admin Center.
-* Search for *Manage Permission Roles*, then select **Manage Permission Roles** from the search results.
+1. Log in to SAP SuccessFactors with a user account that has access to the Admin Center.
+1. Search for *Manage Permission Roles*, then select **Manage Permission Roles** from the search results.
![Manage Permission Roles](./media/sap-successfactors-inbound-provisioning/manage-permission-roles.png)
-* From the Permission Role List, click **Create New**.
- > [!div class="mx-imgBorder"]
- > ![Create New Permission Role](./media/sap-successfactors-inbound-provisioning/create-new-permission-role-1.png)
-* Add a **Role Name** and **Description** for the new permission role. The name and description should indicate that the role is for API usage permissions.
- > [!div class="mx-imgBorder"]
- > ![Permission role detail](./media/sap-successfactors-inbound-provisioning/permission-role-detail.png)
-* Under Permission settings, click **Permission...**, then scroll down the permission list and click **Manage Integration Tools**. Check the box for **Allow Admin to Access to OData API through Basic Authentication**.
- > [!div class="mx-imgBorder"]
- > ![Manage integration tools](./media/sap-successfactors-inbound-provisioning/manage-integration-tools.png)
-* Scroll down in the same box and select **Employee Central API**. Add permissions as shown below to read using ODATA API and edit using ODATA API. Select the edit option if you plan to use the same account for the Writeback to SuccessFactors scenario.
- > [!div class="mx-imgBorder"]
- > ![Read write permissions](./media/sap-successfactors-inbound-provisioning/odata-read-write-perm.png)
+1. From the Permission Role List, click **Create New**.
+ > [!div class="mx-imgBorder"]
+ > ![Create New Permission Role](./media/sap-successfactors-inbound-provisioning/create-new-permission-role-1.png)
+1. Add a **Role Name** and **Description** for the new permission role. The name and description should indicate that the role is for API usage permissions.
+ > [!div class="mx-imgBorder"]
+ > ![Permission role detail](./media/sap-successfactors-inbound-provisioning/permission-role-detail.png)
+1. Under Permission settings, click **Permission...**, then scroll down the permission list and click **Manage Integration Tools**. Check the box for **Allow Admin to Access to OData API through Basic Authentication**.
+ > [!div class="mx-imgBorder"]
+ > ![Manage integration tools](./media/sap-successfactors-inbound-provisioning/manage-integration-tools.png)
+1. Scroll down in the same box and select **Employee Central API**. Add permissions as shown below to read using ODATA API and edit using ODATA API. Select the edit option if you plan to use the same account for the Writeback to SuccessFactors scenario.
+ > [!div class="mx-imgBorder"]
+ > ![Read write permissions](./media/sap-successfactors-inbound-provisioning/odata-read-write-perm.png)
+
+1. In the same permissions box, go to **User Permissions -> Employee Data** and review the attributes that the service account can read from the SuccessFactors tenant. For example, to retrieve the *Username* attribute from SuccessFactors, ensure that "View" permission is granted for this attribute. Similarly, review each attribute for "View" permission.
+
+ > [!div class="mx-imgBorder"]
+ > ![Employee data permissions](./media/sap-successfactors-inbound-provisioning/review-employee-data-permissions.png)
+
- >[!NOTE]
- >For the complete list of attributes retrieved by this provisioning app, please refer to [SuccessFactors Attribute Reference](../app-provisioning/sap-successfactors-attribute-reference.md)
+ >[!NOTE]
+ >For the complete list of attributes retrieved by this provisioning app, please refer to [SuccessFactors Attribute Reference](../app-provisioning/sap-successfactors-attribute-reference.md)
-* Click on **Done**. Click **Save Changes**.
+1. Click on **Done**. Click **Save Changes**.
### Create a Permission Group for the API user
-* In the SuccessFactors Admin Center, search for *Manage Permission Groups*, then select **Manage Permission Groups** from the search results.
- > [!div class="mx-imgBorder"]
- > ![Manage permission groups](./media/sap-successfactors-inbound-provisioning/manage-permission-groups.png)
-* From the Manage Permission Groups window, click **Create New**.
- > [!div class="mx-imgBorder"]
- > ![Add new group](./media/sap-successfactors-inbound-provisioning/create-new-group.png)
-* Add a Group Name for the new group. The group name should indicate that the group is for API users.
- > [!div class="mx-imgBorder"]
- > ![Permission group name](./media/sap-successfactors-inbound-provisioning/permission-group-name.png)
-* Add members to the group. For example, you could select **Username** from the People Pool drop-down menu and then enter the username of the API account that will be used for the integration.
- > [!div class="mx-imgBorder"]
- > ![Add group members](./media/sap-successfactors-inbound-provisioning/add-group-members.png)
-* Click **Done** to finish creating the Permission Group.
+1. In the SuccessFactors Admin Center, search for *Manage Permission Groups*, then select **Manage Permission Groups** from the search results.
+ > [!div class="mx-imgBorder"]
+ > ![Manage permission groups](./media/sap-successfactors-inbound-provisioning/manage-permission-groups.png)
+1. From the Manage Permission Groups window, click **Create New**.
+ > [!div class="mx-imgBorder"]
+ > ![Add new group](./media/sap-successfactors-inbound-provisioning/create-new-group.png)
+1. Add a Group Name for the new group. The group name should indicate that the group is for API users.
+ > [!div class="mx-imgBorder"]
+ > ![Permission group name](./media/sap-successfactors-inbound-provisioning/permission-group-name.png)
+1. Add members to the group. For example, you could select **Username** from the People Pool drop-down menu and then enter the username of the API account that will be used for the integration.
+ > [!div class="mx-imgBorder"]
+ > ![Add group members](./media/sap-successfactors-inbound-provisioning/add-group-members.png)
+1. Click **Done** to finish creating the Permission Group.
### Grant Permission Role to the Permission Group
-* In SuccessFactors Admin Center, search for *Manage Permission Roles*, then select **Manage Permission Roles** from the search results.
-* From the **Permission Role List**, select the role that you created for API usage permissions.
-* Under **Grant this role to...**, click **Add...** button.
-* Select **Permission Group...** from the drop-down menu, then click **Select...** to open the Groups window to search and select the group created above.
- > [!div class="mx-imgBorder"]
- > ![Add permission group](./media/sap-successfactors-inbound-provisioning/add-permission-group.png)
-* Review the Permission Role grant to the Permission Group.
- > [!div class="mx-imgBorder"]
- > ![Permission Role and Group detail](./media/sap-successfactors-inbound-provisioning/permission-role-group.png)
-* Click **Save Changes**.
+1. In SuccessFactors Admin Center, search for *Manage Permission Roles*, then select **Manage Permission Roles** from the search results.
+1. From the **Permission Role List**, select the role that you created for API usage permissions.
+1. Under **Grant this role to...**, click **Add...** button.
+1. Select **Permission Group...** from the drop-down menu, then click **Select...** to open the Groups window to search and select the group created above.
+ > [!div class="mx-imgBorder"]
+ > ![Add permission group](./media/sap-successfactors-inbound-provisioning/add-permission-group.png)
+1. Review the Permission Role grant to the Permission Group.
+ > [!div class="mx-imgBorder"]
+ > ![Permission Role and Group detail](./media/sap-successfactors-inbound-provisioning/permission-role-group.png)
+1. Click **Save Changes**.
## Configuring user provisioning from SuccessFactors to Active Directory
@@ -168,68 +174,14 @@ This section provides steps for user account provisioning from SuccessFactors to
7. Change the **Provisioning** **Mode** to **Automatic** 8. Click on the information banner displayed to download the Provisioning Agent.
- > [!div class="mx-imgBorder"]
- > ![Download Agent](./media/sap-successfactors-inbound-provisioning/download-pa-agent.png "Download Agent Screen")
-
+ >[!div class="mx-imgBorder"]
+ >![Download Agent](./media/workday-inbound-tutorial/pa-download-agent.png "Download Agent Screen")
### Part 2: Install and configure on-premises Provisioning Agent(s)
-To provision to Active Directory on-premises, the Provisioning agent must be installed on a server that has .NET 4.7.1+ Framework and network access to the desired Active Directory domain(s).
-
-> [!TIP]
-> You can check the version of the .NET framework on your server using the instructions provided [here](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed).
-> If the server does not have .NET 4.7.1 or higher installed, you can download it from [here](https://support.microsoft.com/help/4033342/the-net-framework-4-7-1-offline-installer-for-windows).
-
-Transfer the downloaded agent installer to the server host and follow the steps given below to complete the agent configuration.
-
-1. Sign in to the Windows Server where you want to install the new agent.
-
-1. Launch the Provisioning Agent installer, agree to the terms, and click on the **Install** button.
-
- ![Install Screen](./media/workday-inbound-tutorial/pa_install_screen_1.png "Install Screen")
-
-1. After installation is complete, the wizard will launch and you will see the **Connect Azure AD** screen. Click on the **Authenticate** button to connect to your Azure AD instance.
-
- ![Connect Azure AD](./media/workday-inbound-tutorial/pa_install_screen_2.png "Connect Azure AD")
-
-1. Authenticate to your Azure AD instance using Global Admin Credentials.
-
- ![Admin Auth](./media/workday-inbound-tutorial/pa_install_screen_3.png "Admin Auth")
-
- > [!NOTE]
- > The Azure AD admin credentials is used only to connect to your Azure AD tenant. The agent does not store the credentials locally on the server.
+To provision to Active Directory on-premises, the Provisioning agent must be installed on a domain-joined server that has network access to the desired Active Directory domain(s).
-1. After successful authentication with Azure AD, you will see the **Connect Active Directory** screen. In this step, enter your AD domain name and click on the **Add Directory** button.
-
- ![Add Directory](./media/workday-inbound-tutorial/pa_install_screen_4.png "Add Directory")
-
-1. You will now be prompted to enter the credentials required to connect to the AD Domain. On the same screen, you can use the **Select domain controller priority** to specify domain controllers that the agent should use for sending provisioning requests.
-
- ![Domain Credentials](./media/workday-inbound-tutorial/pa_install_screen_5.png)
-
-1. After configuring the domain, the installer displays a list of configured domains. On this screen, you can repeat step #5 and #6 to add more domains or click on **Next** to proceed to agent registration.
-
- ![Configured Domains](./media/workday-inbound-tutorial/pa_install_screen_6.png "Configured Domains")
-
- > [!NOTE]
- > If you have multiple AD domains (e.g. na.contoso.com, emea.contoso.com), then please add each domain individually to the list.
- > Only adding the parent domain (e.g. contoso.com) is not sufficient. You must register each child domain with the agent.
-
-1. Review the configuration details and click on **Confirm** to register the agent.
-
- ![Confirm Screen](./media/workday-inbound-tutorial/pa_install_screen_7.png "Confirm Screen")
-
-1. The configuration wizard displays the progress of the agent registration.
-
- ![Agent Registration](./media/workday-inbound-tutorial/pa_install_screen_8.png "Agent Registration")
-
-1. Once the agent registration is successful, you can click on **Exit** to exit the Wizard.
-
- ![Exit Screen](./media/workday-inbound-tutorial/pa_install_screen_9.png "Exit Screen")
-
-1. Verify the installation of the Agent and make sure it is running by opening the "Services" Snap-In and look for the Service named "Microsoft Azure AD Connect Provisioning Agent"
-
- ![Screenshot of the Microsoft Azure AD Connect Provisioning Agent running in Services.](./media/workday-inbound-tutorial/services.png)
+Transfer the downloaded agent installer to the server host and follow the steps listed [in the install agent section](../cloud-provisioning/how-to-install.md) to complete the agent configuration.
### Part 3: In the provisioning app, configure connectivity to SuccessFactors and Active Directory In this step, we establish connectivity with SuccessFactors and Active Directory in the Azure portal.
@@ -330,24 +282,22 @@ In this section, you will configure how user data flows from SuccessFactors to A
1. To save your mappings, click **Save** at the top of the Attribute-Mapping section.
-Once your attribute mapping configuration is complete, you can now [enable and launch the user provisioning service](#enable-and-launch-user-provisioning).
+Once your attribute mapping configuration is complete, you can test provisioning for a single user using [on-demand provisioning](../app-provisioning/provision-on-demand.md) and then [enable and launch the user provisioning service](#enable-and-launch-user-provisioning).
## Enable and launch user provisioning
-Once the SuccessFactors provisioning app configurations have been completed, you can turn on the provisioning service in the Azure portal.
+Once the SuccessFactors provisioning app configurations have been completed and you have verified provisioning for a single user with [on-demand provisioning](../app-provisioning/provision-on-demand.md), you can turn on the provisioning service in the Azure portal.
> [!TIP]
-> By default when you turn on the provisioning service, it will initiate provisioning operations for all users in scope. If there are errors in the mapping or SuccessFactors data issues, then the provisioning job might fail and go into the quarantine state. To avoid this, as a best practice, we recommend configuring **Source Object Scope** filter and testing your attribute mappings with a few test users before launching the full sync for all users. Once you have verified that the mappings work and are giving you the desired results, then you can either remove the filter or gradually expand it to include more users.
-
-1. In the **Provisioning** tab, set the **Provisioning Status** to **On**.
+> By default, when you turn on the provisioning service, it initiates provisioning operations for all users in scope. If there are errors in the mapping or SuccessFactors data issues, the provisioning job might fail and go into the quarantine state. To avoid this, as a best practice, we recommend configuring a **Source Object Scope** filter and testing your attribute mappings with a few test users using [on-demand provisioning](../app-provisioning/provision-on-demand.md) before launching the full sync for all users. Once you have verified that the mappings work and give you the desired results, you can either remove the filter or gradually expand it to include more users.
-2. Click **Save**.
+1. Go to the **Provisioning** blade and click on **Start provisioning**.
-3. This operation will start the initial sync, which can take a variable number of hours depending on how many users are in the SuccessFactors tenant. You can check the progress bar to the track the progress of the sync cycle.
+1. This operation will start the initial sync, which can take a variable number of hours depending on how many users are in the SuccessFactors tenant. You can check the progress bar to track the progress of the sync cycle.
-4. At any time, check the **Audit logs** tab in the Azure portal to see what actions the provisioning service has performed. The audit logs lists all individual sync events performed by the provisioning service, such as which users are being read out of SuccessFactors and then subsequently added or updated to Active Directory.
+1. At any time, check the **Audit logs** tab in the Azure portal to see what actions the provisioning service has performed. The audit logs list all individual sync events performed by the provisioning service, such as which users are being read out of SuccessFactors and then subsequently added or updated in Active Directory.
-5. Once the initial sync is completed, it will write an audit summary report in the **Provisioning** tab, as shown below.
+1. Once the initial sync is completed, it will write an audit summary report in the **Provisioning** tab, as shown below.
> [!div class="mx-imgBorder"] > ![Provisioning progress bar](./media/sap-successfactors-inbound-provisioning/prov-progress-bar-stats.png)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/workday-inbound-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workday-inbound-tutorial.md
@@ -8,7 +8,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.topic: tutorial ms.workload: identity
-ms.date: 05/26/2020
+ms.date: 01/19/2021
ms.author: chmutali --- # Tutorial: Configure Workday for automatic user provisioning
@@ -35,12 +35,12 @@ The [Azure Active Directory user provisioning service](../app-provisioning/user-
### What's new This section captures recent Workday integration enhancements. For a list of comprehensive updates, planned changes and archives, please visit the page [What's new in Azure Active Directory?](../fundamentals/whats-new.md)
+* **Oct 2020 - Enabled provision on demand for Workday:** Using [on-demand provisioning](../app-provisioning/provision-on-demand.md) you can now test end-to-end provisioning for a specific user profile in Workday to verify your attribute mapping and expression logic.
+ * **May 2020 - Ability to writeback phone numbers to Workday:** In addition to email and username, you can now writeback work phone number and mobile phone number from Azure AD to Workday. For more details, refer to the [writeback app tutorial](workday-writeback-tutorial.md). * **April 2020 - Support for the latest version of Workday Web Services (WWS) API:** Twice a year in March and September, Workday delivers feature-rich updates that help you meet your business goals and changing workforce demands. To keep up with the new features delivered by Workday you can now directly specify the WWS API version that you would like to use in the connection URL. For details on how to specify the Workday API version, refer to the section on [configuring Workday connectivity](#part-3-in-the-provisioning-app-configure-connectivity-to-workday-and-active-directory).
-* **Jan 2020 - Ability to set AD accountExpires attribute:** Using the function [NumFromDate](../app-provisioning/functions-for-customizing-application-data.md#numfromdate) you can now map Workday date fields such as *EndContractDate* or *StatusTerminationDate*.
- ### Who is this user provisioning solution best suited for? This Workday user provisioning solution is ideally suited for:
@@ -145,51 +145,37 @@ In this step, you'll grant "domain security" policy permissions for the worker d
**To configure domain security policy permissions:**
-1. Enter **Domain Security Configuration** in the search box, and then click on the link **Domain Security Configuration Report**.
+1. Enter **Security Group Membership and Access** in the search box and click on the report link.
>[!div class="mx-imgBorder"]
- >![Screenshot that shows "domain security configuration" in the search box, with "Domain Security Configuration - Report" displayed in the results.](./media/workday-inbound-tutorial/wd_isu_06.png "Domain Security Policies")
-2. In the **Domain** text box, search for the following domains and add them to the filter one by one.
- * *External Account Provisioning*
- * *Worker Data: Workers*
- * *Worker Data: Public Worker Reports*
- * *Person Data: Work Contact Information*
- * *Worker Data: All Positions*
- * *Worker Data: Current Staffing Information*
- * *Worker Data: Business Title on Worker Profile*
- * *Workday Accounts*
+ >![Search Security Group Membership](./media/workday-inbound-tutorial/security-group-membership-access.png)
- >[!div class="mx-imgBorder"]
- >![Screenshot that shows the Domain Security Configuration report with the "External Account" in the "Domain" text box.](./media/workday-inbound-tutorial/wd_isu_07.png "Domain Security Policies")
-
- >[!div class="mx-imgBorder"]
- >![Screenshot that shows the Domain Security Configuration report with a list of domains selected.](./media/workday-inbound-tutorial/wd_isu_08.png "Domain Security Policies")
-
- Click **OK**.
-
-3. In the report that shows up, select the ellipsis (...) that appears next to **External Account Provisioning** and click on the menu option **Domain -> Edit Security Policy Permissions**
+1. Search and select the security group created in the previous step.
>[!div class="mx-imgBorder"]
- >![Domain Security Policies](./media/workday-inbound-tutorial/wd_isu_09.png "Domain Security Policies")
+ >![Select Security Group](./media/workday-inbound-tutorial/select-security-group-msft-wdad.png)
-4. On the **Edit Domain Security Policy Permissions** page, scroll down to the section **Integration Permissions**. Click on the "+" sign to add the integration system group to the list of security groups with **Get** and **Put** integration permissions.
+1. Click on the ellipsis (...) next to the group name and from the menu, select **Security Group > Maintain Domain Permissions for Security Group**.
>[!div class="mx-imgBorder"]
- >![Screenshot that shows the "Integration Permissons" section highlighted.](./media/workday-inbound-tutorial/wd_isu_10.png "Edit Permission")
+ >![Select Maintain Domain Permissions](./media/workday-inbound-tutorial/select-maintain-domain-permissions.png)
-5. Click on the "+" sign to add the integration system group to the list of security groups with **Get** and **Put** integration permissions.
+1. Under **Integration Permissions**, add the following domains to the list **Domain Security Policies permitting Put access**:
+ * *External Account Provisioning*
+ * *Worker Data: Public Worker Reports*
+ * *Person Data: Work Contact Information* (required if you plan to writeback contact data from Azure AD to Workday)
+ * *Workday Accounts* (required if you plan to writeback username/UPN from Azure AD to Workday)
- >[!div class="mx-imgBorder"]
- >![Edit Permission](./media/workday-inbound-tutorial/wd_isu_11.png "Edit Permission")
+1. Under **Integration Permissions**, add the following domains to the list **Domain Security Policies permitting Get access**:
+ * *Worker Data: Workers*
+ * *Worker Data: All Positions*
+ * *Worker Data: Current Staffing Information*
+ * *Worker Data: Business Title on Worker Profile*
+ * *Worker Data: Qualified Workers* (Optional - add this to retrieve worker qualification data for provisioning)
+ * *Worker Data: Skills and Experience* (Optional - add this to retrieve worker skills data for provisioning)
-6. Repeat steps 3-5 above for each of these remaining security policies:
+1. After completing the above steps, the permissions screen will appear as shown below:
+ >[!div class="mx-imgBorder"]
+ >![All Domain Security Permissions](./media/workday-inbound-tutorial/all-domain-security-permissions.png)
- | Operation | Domain Security Policy |
- | ---------- | ---------- |
- | Get and Put | Worker Data: Public Worker Reports |
- | Get and Put | Person Data: Work Contact Information |
- | Get | Worker Data: Workers |
- | Get | Worker Data: All Positions |
- | Get | Worker Data: Current Staffing Information |
- | Get | Worker Data: Business Title on Worker Profile |
- | Get and Put | Workday Accounts |
+1. Click **OK** and **Done** on the next screen to complete the configuration.
### Configuring business process security policy permissions
@@ -234,35 +220,9 @@ In this step, you'll grant "business process security" policy permissions for th
>[!div class="mx-imgBorder"] >![Activate Pending Security](./media/workday-inbound-tutorial/wd_isu_18.png "Activate Pending Security")
-## Configure Active Directory service account
+## Provisioning Agent installation prerequisites
-This section describes the AD service account permissions required to install and configure the Azure AD Connect Provisioning Agent.
-
-### Permissions required to run the Provisioning Agent Installer
-Once you have identified the Windows Server that will host the Provisioning Agent, login to the server host using either local admin or domain admin credentials. The agent setup process creates secure key store credential files and updates the service profile configuration on the host server. This requires admin access on the server hosting the agent.
-
-### Permissions required to configure the Provisioning Agent service
-Use the steps below to setup a service account that can be used for provisioning agent operations.
-1. On your AD Domain Controller, open *Active Directory Users and Computers* snap-in.
-2. Create a new domain user (example: *provAgentAdmin*)
-3. Right click the OU or domain name and select *Delegate Control* which will open the *Delegation of Control Wizard*.
-
-> [!NOTE]
-> If you want to limit the provisioning agent to only create and read users from a certain OU for testing purposes, then we recommend delegating the control at the appropriate OU level during test runs.
-
-4. Click **Next** on the welcome screen.
-5. On the **Select Users or Groups** screen, add the domain user you created in step 2. Click **Next**.
- >[!div class="mx-imgBorder"]
- >![Add Screen](./media/workday-inbound-tutorial/delegation-wizard-01.png "Add Screen")
-
-6. On the **Tasks to Delegate** screen, select the following tasks:
- * Create, delete and manage user accounts
- * Read all user information
-
- >[!div class="mx-imgBorder"]
- >![Tasks Screen](./media/workday-inbound-tutorial/delegation-wizard-02.png "Tasks Screen")
-
-7. Click **Next** and **Save** the configuration.
+Review the [provisioning agent installation prerequisites](../cloud-provisioning/how-to-prerequisites.md) before proceeding to the next section.
## Configuring user provisioning from Workday to Active Directory
@@ -299,72 +259,9 @@ This section provides steps for user account provisioning from Workday to each A
### Part 2: Install and configure on-premises Provisioning Agent(s)
-To provision to Active Directory on-premises, the Provisioning agent must be installed on a server that has .NET 4.7.1+ Framework and network access to the desired Active Directory domain(s).
+To provision to Active Directory on-premises, the Provisioning agent must be installed on a domain-joined server that has network access to the desired Active Directory domain(s).
-> [!TIP]
-> You can check the version of the .NET framework on your server using the instructions provided [here](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed).
-> If the server does not have .NET 4.7.1 or higher installed, you can download it from [here](https://support.microsoft.com/help/4033342/the-net-framework-4-7-1-offline-installer-for-windows).
-
-Transfer the downloaded agent installer to the server host and follow the steps given below to complete the agent configuration.
-
-1. Sign in to the Windows Server where you want to install the new agent.
-
-1. Launch the Provisioning Agent installer, agree to the terms, and click on the **Install** button.
-
- >[!div class="mx-imgBorder"]
- >![Install Screen](./media/workday-inbound-tutorial/pa_install_screen_1.png "Install Screen")
-
-1. After installation is complete, the wizard will launch and you will see the **Connect Azure AD** screen. Click on the **Authenticate** button to connect to your Azure AD instance.
-
- >[!div class="mx-imgBorder"]
- >![Connect Azure AD](./media/workday-inbound-tutorial/pa_install_screen_2.png "Connect Azure AD")
-
-1. Authenticate to your Azure AD instance using Hybrid Identity Administrator Credentials.
-
- >[!div class="mx-imgBorder"]
- >![Admin Auth](./media/workday-inbound-tutorial/pa_install_screen_3.png "Admin Auth")
-
- > [!NOTE]
- > The Azure AD admin credentials is used only to connect to your Azure AD tenant. The agent does not store the credentials locally on the server.
-
-1. After successful authentication with Azure AD, you will see the **Connect Active Directory** screen. In this step, enter your AD domain name and click on the **Add Directory** button.
-
- >[!div class="mx-imgBorder"]
- >![Add Directory](./media/workday-inbound-tutorial/pa_install_screen_4.png "Add Directory")
-
-1. You will now be prompted to enter the credentials required to connect to the AD Domain. On the same screen, you can use the **Select domain controller priority** to specify domain controllers that the agent should use for sending provisioning requests.
-
- >[!div class="mx-imgBorder"]
- >![Domain Credentials](./media/workday-inbound-tutorial/pa_install_screen_5.png)
-
-1. After configuring the domain, the installer displays a list of configured domains. On this screen, you can repeat step #5 and #6 to add more domains or click on **Next** to proceed to agent registration.
-
- >[!div class="mx-imgBorder"]
- >![Configured Domains](./media/workday-inbound-tutorial/pa_install_screen_6.png "Configured Domains")
-
- > [!NOTE]
- > If you have multiple AD domains (e.g. na.contoso.com, emea.contoso.com), then please add each domain individually to the list.
- > Only adding the parent domain (e.g. contoso.com) is not sufficient. You must register each child domain with the agent.
-
-1. Review the configuration details and click on **Confirm** to register the agent.
-
- >[!div class="mx-imgBorder"]
- >![Confirm Screen](./media/workday-inbound-tutorial/pa_install_screen_7.png "Confirm Screen")
-
-1. The configuration wizard displays the progress of the agent registration.
-
- >[!div class="mx-imgBorder"]
- >![Agent Registration](./media/workday-inbound-tutorial/pa_install_screen_8.png "Agent Registration")
-
-1. Once the agent registration is successful, you can click on **Exit** to exit the Wizard.
-
- >[!div class="mx-imgBorder"]
- >![Exit Screen](./media/workday-inbound-tutorial/pa_install_screen_9.png "Exit Screen")
-
-1. Verify the installation of the Agent and make sure it is running by opening the "Services" Snap-In and look for the Service named "Microsoft Azure AD Connect Provisioning Agent"
-
- >[!div class="mx-imgBorder"]
- >![Screenshot of the Microsoft Azure AD Connect Provisioning Agent running in Services.](./media/workday-inbound-tutorial/services.png)
+Transfer the downloaded agent installer to the server host and follow the steps listed [in the **Install agent** section](../cloud-provisioning/how-to-install.md) to complete the agent configuration.
### Part 3: In the provisioning app, configure connectivity to Workday and Active Directory In this step, we establish connectivity with Workday and Active Directory in the Azure portal.
@@ -512,24 +409,22 @@ In this section, you will configure how user data flows from Workday to Active D
| **LocalReference** | preferredLanguage | | Create + update | | **Switch(\[Municipality\], "OU=Default Users,DC=contoso,DC=com", "Dallas", "OU=Dallas,OU=Users,DC=contoso,DC=com", "Austin", "OU=Austin,OU=Users,DC=contoso,DC=com", "Seattle", "OU=Seattle,OU=Users,DC=contoso,DC=com", "London", "OU=London,OU=Users,DC=contoso,DC=com")** | parentDistinguishedName | | Create + update |
-Once your attribute mapping configuration is complete, you can now [enable and launch the user provisioning service](#enable-and-launch-user-provisioning).
+Once your attribute mapping configuration is complete, you can test provisioning for a single user using [on-demand provisioning](../app-provisioning/provision-on-demand.md) and then [enable and launch the user provisioning service](#enable-and-launch-user-provisioning).
## Enable and launch user provisioning
-Once the Workday provisioning app configurations have been completed, you can turn on the provisioning service in the Azure portal.
+Once the Workday provisioning app configurations have been completed and you have verified provisioning for a single user with [on-demand provisioning](../app-provisioning/provision-on-demand.md), you can turn on the provisioning service in the Azure portal.
> [!TIP]
-> By default when you turn on the provisioning service, it will initiate provisioning operations for all users in scope. If there are errors in the mapping or Workday data issues, then the provisioning job might fail and go into the quarantine state. To avoid this, as a best practice, we recommend configuring **Source Object Scope** filter and testing your attribute mappings with a few test users before launching the full sync for all users. Once you have verified that the mappings work and are giving you the desired results, then you can either remove the filter or gradually expand it to include more users.
-
-1. In the **Provisioning** tab, set the **Provisioning Status** to **On**.
+> By default, when you turn on the provisioning service, it initiates provisioning operations for all users in scope. If there are errors in the mapping or Workday data issues, the provisioning job might fail and go into the quarantine state. To avoid this, as a best practice, we recommend configuring a **Source Object Scope** filter and testing your attribute mappings with a few test users using [on-demand provisioning](../app-provisioning/provision-on-demand.md) before launching the full sync for all users. Once you have verified that the mappings work and give you the desired results, you can either remove the filter or gradually expand it to include more users.
-2. Click **Save**.
+1. Go to the **Provisioning** blade and click on **Start provisioning**.
-3. This operation will start the initial sync, which can take a variable number of hours depending on how many users are in the Workday tenant.
+1. This operation will start the initial sync, which can take a variable number of hours depending on how many users are in the Workday tenant. You can check the progress bar to track the progress of the sync cycle.
-4. At any time, check the **Audit logs** tab in the Azure portal to see what actions the provisioning service has performed. The audit logs lists all individual sync events performed by the provisioning service, such as which users are being read out of Workday and then subsequently added or updated to Active Directory. Refer to the Troubleshooting section for instructions on how to review the audit logs and fix provisioning errors.
+1. At any time, check the **Audit logs** tab in the Azure portal to see what actions the provisioning service has performed. The audit logs list all individual sync events performed by the provisioning service, such as which users are being read out of Workday and then subsequently added or updated in Active Directory. Refer to the Troubleshooting section for instructions on how to review the audit logs and fix provisioning errors.
-5. Once the initial sync is completed, it will write an audit summary report in the **Provisioning** tab, as shown below.
+1. Once the initial sync is completed, it will write an audit summary report in the **Provisioning** tab, as shown below.
> [!div class="mx-imgBorder"] > ![Provisioning progress bar](./media/sap-successfactors-inbound-provisioning/prov-progress-bar-stats.png)
@@ -538,12 +433,10 @@ Once the Workday provisioning app configurations have been completed, you can tu
* **Solution capability questions** * [When processing a new hire from Workday, how does the solution set the password for the new user account in Active Directory?](#when-processing-a-new-hire-from-workday-how-does-the-solution-set-the-password-for-the-new-user-account-in-active-directory) * [Does the solution support sending email notifications after provisioning operations complete?](#does-the-solution-support-sending-email-notifications-after-provisioning-operations-complete)
- * [How do I manage delivery of passwords for new hires and securely provide a mechanism to reset their password?](#how-do-i-manage-delivery-of-passwords-for-new-hires-and-securely-provide-a-mechanism-to-reset-their-password)
* [Does the solution cache Workday user profiles in the Azure AD cloud or at the provisioning agent layer?](#does-the-solution-cache-workday-user-profiles-in-the-azure-ad-cloud-or-at-the-provisioning-agent-layer) * [Does the solution support assigning on-premises AD groups to the user?](#does-the-solution-support-assigning-on-premises-ad-groups-to-the-user) * [Which Workday APIs does the solution use to query and update Workday worker profiles?](#which-workday-apis-does-the-solution-use-to-query-and-update-workday-worker-profiles) * [Can I configure my Workday HCM tenant with two Azure AD tenants?](#can-i-configure-my-workday-hcm-tenant-with-two-azure-ad-tenants)
- * [Why "Workday to Azure AD" user provisioning app is not supported if we have deployed Azure AD Connect?](#why-workday-to-azure-ad-user-provisioning-app-is-not-supported-if-we-have-deployed-azure-ad-connect)
* [How do I suggest improvements or request new features related to Workday and Azure AD integration?](#how-do-i-suggest-improvements-or-request-new-features-related-to-workday-and-azure-ad-integration) * **Provisioning Agent questions**
@@ -575,19 +468,13 @@ When the on-premises provisioning agent gets a request to create a new AD accoun
No, sending email notifications after completing provisioning operations is not supported in the current release.
-#### How do I manage delivery of passwords for new hires and securely provide a mechanism to reset their password?
-
-One of the final steps involved in new AD account provisioning is the delivery of the temporary password assigned to the user's AD account. Many enterprises still use the traditional approach of delivering the temporary password to the user's manager, who then hands over the password to the new hire/contingent worker. This process has an inherent security flaw and there is an option available to implement a better approach using Azure AD capabilities.
-
-As part of the hiring process, HR teams usually run a background check and vet the mobile number of the new hire. With the Workday to AD User Provisioning integration, you can build on top of this fact and rollout a self-service password reset capability for the user on Day 1. This is accomplished by propagating the "Mobile Number" attribute of the new hire from Workday to AD and then from AD to Azure AD using Azure AD Connect. Once the "Mobile Number" is present in Azure AD, you can enable the [Self-Service Password Reset (SSPR)](../authentication/howto-sspr-authenticationdata.md) for the user's account, so that on Day 1, a new hire can use the registered and verified mobile number for authentication.
- #### Does the solution cache Workday user profiles in the Azure AD cloud or at the provisioning agent layer? No, the solution does not maintain a cache of user profiles. The Azure AD provisioning service simply acts as a data processor, reading data from Workday and writing to the target Active Directory or Azure AD. See the section [Managing personal data](#managing-personal-data) for details related to user privacy and data retention. #### Does the solution support assigning on-premises AD groups to the user?
-This functionality is not supported currently. Recommended workaround is to deploy a PowerShell script that queries the Microsoft Graph API endpoint for [audit log data](/graph/api/resources/azure-ad-auditlog-overview?view=graph-rest-beta) and use that to trigger scenarios such as group assignment. This PowerShell script can be attached to a task scheduler and deployed on the same box running the provisioning agent.
+This functionality is not supported currently. The recommended workaround is to deploy a PowerShell script that queries the Microsoft Graph API endpoint for [audit log data](/graph/api/resources/azure-ad-auditlog-overview) and uses that data to trigger scenarios such as group assignment. This PowerShell script can be attached to a task scheduler and deployed on the same server that runs the provisioning agent.
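+
+The sketch below illustrates one way such a script could be structured. It is an illustration only, not a supported sample: the token acquisition, the audit-event filter, the mapping from the event to an on-premises account, and the *New-Hires* group name are all assumptions you would replace with your own logic. It also expects the on-premises ActiveDirectory PowerShell module on the machine that runs it.
+
+```powershell
+# Illustrative sketch: read recent Azure AD audit events via Microsoft Graph and
+# add newly created users to an on-premises AD group.
+Import-Module ActiveDirectory
+
+# Assumption: $graphToken holds a Microsoft Graph access token acquired separately
+# (for example, through an app registration with AuditLog.Read.All permission).
+$since  = (Get-Date).ToUniversalTime().AddHours(-1).ToString("yyyy-MM-ddTHH:mm:ssZ")
+$uri    = "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?`$filter=activityDateTime ge $since"
+$events = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $graphToken" }
+
+foreach ($entry in $events.value) {
+    if ($entry.activityDisplayName -eq 'Add user') {
+        # Assumption: the on-premises sAMAccountName matches the UPN prefix of the new user.
+        $upn = $entry.targetResources[0].userPrincipalName
+        if ($upn) {
+            $sam = $upn.Split('@')[0]
+            Add-ADGroupMember -Identity 'New-Hires' -Members $sam   # hypothetical group name
+        }
+    }
+}
+```
+
+In a real deployment the script would also need to remember which events it has already processed so that scheduled runs don't add the same user twice; that bookkeeping is omitted here for brevity.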
#### Which Workday APIs does the solution use to query and update Workday worker profiles?
@@ -610,10 +497,6 @@ Yes, this configuration is supported. Here are the high level steps to configure
* Based on the "Child Domains" that each Provisioning Agent will manage, configure each agent with the domain(s). One agent can handle multiple domains. * In Azure portal, setup the Workday to AD User Provisioning App in each tenant and configure it with the respective domains.
-#### Why "Workday to Azure AD" user provisioning app is not supported if we have deployed Azure AD Connect?
-
-When Azure AD is used in hybrid mode (where it contains a mix of cloud + on-premises users), it's important to have a clear definition of "source of authority". Typically hybrid scenarios require deployment of Azure AD Connect. When Azure AD Connect is deployed, on-premises AD is the source of authority. Introducing the Workday to Azure AD connector into the mix can lead to a situation where Workday attribute values could potentially overwrite the values set by Azure AD Connect. Hence use of "Workday to Azure AD" provisioning app is not supported when Azure AD Connect is enabled. In such situations, we recommend using "Workday to AD User" provisioning app for getting users into on-premises AD and then syncing them into Azure AD using Azure AD Connect.
- #### How do I suggest improvements or request new features related to Workday and Azure AD integration? Your feedback is highly valued as it helps us set the direction for the future releases and enhancements. We welcome all feedback and encourage you to submit your idea or improvement suggestion in the [feedback forum of Azure AD](https://feedback.azure.com/forums/169401-azure-active-directory). For specific feedback related to the Workday integration, select the category *SaaS Applications* and search using the keywords *Workday* to find existing feedback related to the Workday.
@@ -846,35 +729,69 @@ This section provides specific guidance on how to troubleshoot provisioning issu
This section covers the following aspects of troubleshooting:
+* [Configure provisioning agent to emit Event Viewer logs](#configure-provisioning-agent-to-emit-event-viewer-logs)
* [Setting up Windows Event Viewer for agent troubleshooting](#setting-up-windows-event-viewer-for-agent-troubleshooting) * [Setting up Azure portal Audit Logs for service troubleshooting](#setting-up-azure-portal-audit-logs-for-service-troubleshooting) * [Understanding logs for AD User Account create operations](#understanding-logs-for-ad-user-account-create-operations) * [Understanding logs for Manager update operations](#understanding-logs-for-manager-update-operations) * [Resolving commonly encountered errors](#resolving-commonly-encountered-errors)
+### Configure provisioning agent to emit Event Viewer logs
+1. Sign in to the Windows Server machine where the provisioning agent is deployed
+1. Stop the service **Microsoft Azure AD Connect Provisioning Agent**.
+1. Create a copy of the original config file: *C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\AADConnectProvisioningAgent.exe.config*.
+1. Replace the existing `<system.diagnostics>` section with the following.
+   * The listener config **etw** emits messages to the Event Viewer logs.
+ * The listener config **textWriterListener** sends trace messages to the file *ProvAgentTrace.log*. Uncomment the lines related to textWriterListener only for advanced troubleshooting.
+
+ ```xml
+ <system.diagnostics>
+ <sources>
+ <source name="AAD Connect Provisioning Agent">
+ <listeners>
+ <add name="console"/>
+ <add name="etw"/>
+ <!-- <add name="textWriterListener"/> -->
+ </listeners>
+ </source>
+ </sources>
+ <sharedListeners>
+ <add name="console" type="System.Diagnostics.ConsoleTraceListener" initializeData="false"/>
+ <add name="etw" type="System.Diagnostics.EventLogTraceListener" initializeData="Azure AD Connect Provisioning Agent">
+ <filter type="System.Diagnostics.EventTypeFilter" initializeData="All"/>
+ </add>
+ <!-- <add name="textWriterListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="C:/ProgramData/Microsoft/Azure AD Connect Provisioning Agent/Trace/ProvAgentTrace.log"/> -->
+ </sharedListeners>
+ </system.diagnostics>
+
+ ```
+1. Start the service **Microsoft Azure AD Connect Provisioning Agent**.
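+
+If you want to script the stop, backup, and restart portion of the steps above, a minimal sketch follows. It assumes the default installation path shown in step 3 and the service display name used in steps 2 and 6; adjust both if your installation differs.
+
+```powershell
+# Illustrative sketch: back up the agent configuration file and restart the service
+# after the <system.diagnostics> section has been edited manually.
+$configPath = 'C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\AADConnectProvisioningAgent.exe.config'
+
+# Stop the agent service (display name as shown in the Services snap-in).
+Get-Service -DisplayName 'Microsoft Azure AD Connect Provisioning Agent' | Stop-Service
+
+# Keep a copy of the original configuration before editing it.
+Copy-Item -Path $configPath -Destination "$configPath.bak"
+
+# ... edit the <system.diagnostics> section in $configPath here ...
+
+# Start the agent service again once the edit is complete.
+Get-Service -DisplayName 'Microsoft Azure AD Connect Provisioning Agent' | Start-Service
+```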
+ ### Setting up Windows Event Viewer for agent troubleshooting
-* Sign in to the Windows Server machine where the Provisioning Agent is deployed
-* Open **Windows Server Event Viewer** desktop app.
-* Select **Windows Logs > Application**.
-* Use the **Filter Current Log…** option to view all events logged under the source **AAD.Connect.ProvisioningAgent** and exclude events with Event ID "5", by specifying the filter "-5" as shown below.
+1. Sign in to the Windows Server machine where the Provisioning Agent is deployed
+1. Open **Windows Server Event Viewer** desktop app.
+1. Select **Windows Logs > Application**.
+1. Use the **Filter Current Log…** option to view all events logged under the source **Azure AD Connect Provisioning Agent** and exclude events with Event ID "5" by specifying the filter "-5", as shown below.
+ > [!NOTE]
+ > Event ID 5 captures agent bootstrap messages to the Azure AD cloud service and hence we filter it while analyzing the log files.
- ![Windows Event Viewer](media/workday-inbound-tutorial/wd_event_viewer_01.png))
+ ![Windows Event Viewer](media/workday-inbound-tutorial/wd_event_viewer_01.png)
-* Click **OK** and sort the result view by **Date and Time** column.
+1. Click **OK** and sort the result view by **Date and Time** column.
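+
+As an alternative to the Event Viewer UI, you can run an equivalent query from PowerShell. The sketch below assumes the provider name and the Event ID 5 exclusion described above.
+
+```powershell
+# Illustrative only: list recent Application-log events from the provisioning agent,
+# excluding the Event ID 5 bootstrap messages mentioned above.
+Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'Azure AD Connect Provisioning Agent' } -MaxEvents 200 |
+    Where-Object { $_.Id -ne 5 } |
+    Sort-Object TimeCreated |
+    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize
+```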
### Setting up Azure portal Audit Logs for service troubleshooting
-* Launch the [Azure portal](https://portal.azure.com), and navigate to the **Audit logs** section of your Workday provisioning application.
-* Use the **Columns** button on the Audit Logs page to display only the following columns in the view (Date, Activity, Status, Status Reason). This configuration ensures that you focus only on data that is relevant for troubleshooting.
+1. Launch the [Azure portal](https://portal.azure.com), and navigate to the **Audit logs** section of your Workday provisioning application.
+1. Use the **Columns** button on the Audit Logs page to display only the following columns in the view (Date, Activity, Status, Status Reason). This configuration ensures that you focus only on data that is relevant for troubleshooting.
- ![Audit log columns](media/workday-inbound-tutorial/wd_audit_logs_00.png)
+ ![Audit log columns](media/workday-inbound-tutorial/wd_audit_logs_00.png)
-* Use the **Target** and **Date Range** query parameters to filter the view.
- * Set the **Target** query parameter to the "Worker ID" or "Employee ID" of the Workday worker object.
- * Set the **Date Range** to an appropriate time period over which you want to investigate for errors or issues with the provisioning.
+1. Use the **Target** and **Date Range** query parameters to filter the view.
+ * Set the **Target** query parameter to the "Worker ID" or "Employee ID" of the Workday worker object.
+ * Set the **Date Range** to an appropriate time period over which you want to investigate for errors or issues with the provisioning.
- ![Audit log filters](media/workday-inbound-tutorial/wd_audit_logs_01.png)
+ ![Audit log filters](media/workday-inbound-tutorial/wd_audit_logs_01.png)
### Understanding logs for AD User Account create operations
aks https://docs.microsoft.com/en-us/azure/aks/use-managed-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
@@ -135,7 +135,7 @@ Update the user-assigned identity:
az aks update -g <RGName> -n <AKSName> --enable-managed-identity --assign-identity <UserAssignedIdentityResourceID> ``` > [!NOTE]
-> Once the system-assigned or user-assigned identities have been updated to managed identity, perform an `az nodepool upgrade --node-image-only` on your nodes to complete the update to managed identity.
+> Once the system-assigned or user-assigned identities have been updated to managed identity, perform an `az aks nodepool upgrade --node-image-only` on your nodes to complete the update to managed identity.
## Bring your own control plane MI A custom control plane identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as using a custom VNET or outboundType of UDR with a pre-created managed identity.
api-management https://docs.microsoft.com/en-us/azure/api-management/plan-manage-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/plan-manage-costs.md
@@ -45,7 +45,7 @@ For additional pricing and feature details, see:
### Using monetary credit with API Management
-You can pay for API Management charges with your EA monetary commitment credit. However, you can't use EA monetary commitment credit to pay for charges for third-party products and services including those from the Azure Marketplace.
+You can pay for API Management charges with your Azure Prepayment (previously called monetary commitment). However, you can't use Azure Prepayment credit to pay for charges for third-party products and services including those from the Azure Marketplace.
## Monitor costs
app-service https://docs.microsoft.com/en-us/azure/app-service/overview-manage-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-manage-costs.md
@@ -57,7 +57,7 @@ After you delete Azure App Service resources, resources from related Azure servi
### Using Monetary Credit with Azure App Service
-You can pay for Azure App Service charges with your EA monetary commitment credit. However, you can't use EA monetary commitment credit to pay for charges for third-party products and services, including those from the Azure Marketplace.
+You can pay for Azure App Service charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products and services, including those from the Azure Marketplace.
## Estimate costs
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/quick-create-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/quick-create-cli.md
@@ -6,7 +6,7 @@ services: application-gateway
author: vhorne ms.service: application-gateway ms.topic: quickstart
-ms.date: 08/27/2020
+ms.date: 01/19/2021
ms.author: victorh ms.custom: mvc, devx-track-js, devx-track-azurecli ---
@@ -15,7 +15,7 @@ ms.custom: mvc, devx-track-js, devx-track-azurecli
In this quickstart, you use Azure CLI to create an application gateway. Then you test it to make sure it works correctly.
-The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public front-end IP, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
You can also complete this quickstart using [Azure PowerShell](quick-create-powershell.md) or the [Azure portal](quick-create-portal.md).
@@ -63,7 +63,7 @@ az network public-ip create \
## Create the backend servers
-A backend can have NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you create two virtual machines to use as backend servers for the application gateway. You also install IIS on the virtual machines to test the application gateway.
+A backend can have NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you create two virtual machines to use as backend servers for the application gateway. You also install IIS on the virtual machines to test the application gateway.
#### Create two virtual machines
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/quick-create-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/quick-create-portal.md
@@ -6,7 +6,7 @@ services: application-gateway
author: vhorne ms.service: application-gateway ms.topic: quickstart
-ms.date: 12/08/2020
+ms.date: 01/19/2021
ms.author: victorh ms.custom: mvc ---
@@ -73,7 +73,7 @@ You'll create the application gateway using the tabs on the **Create an applicat
> [!NOTE] > For the Application Gateway v2 SKU, there must be a **Public** frontend IP configuration. You can still have both a Public and a Private frontend IP configuration, but Private only frontend IP configuration (Only ILB mode) is currently not enabled for the v2 SKU.
-2. Choose **Create new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**.
+2. Select **Add new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**.
![Create new application gateway: frontends](./media/application-gateway-create-gateway-portal/application-gateway-create-frontends.png)
@@ -81,9 +81,9 @@ You'll create the application gateway using the tabs on the **Create an applicat
### Backends tab
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
-1. On the **Backends** tab, select **+Add a backend pool**.
+1. On the **Backends** tab, select **Add a backend pool**.
2. In the **Add a backend pool** window that opens, enter the following values to create an empty backend pool:
@@ -100,7 +100,7 @@ The backend pool is used to route requests to the backend servers that serve the
On the **Configuration** tab, you'll connect the frontend and backend pool you created using a routing rule.
-1. Select **Add a rule** in the **Routing rules** column.
+1. Select **Add a routing rule** in the **Routing rules** column.
2. In the **Add a routing rule** window that opens, enter *myRoutingRule* for the **Rule name**.
@@ -115,7 +115,7 @@ On the **Configuration** tab, you'll connect the frontend and backend pool you c
4. On the **Backend targets** tab, select **myBackendPool** for the **Backend target**.
-5. For the **HTTP setting**, select **Create new** to create a new HTTP setting. The HTTP setting will determine the behavior of the routing rule. In the **Add an HTTP setting** window that opens, enter *myHTTPSetting* for the **HTTP setting name** and *80* for the **Backend port**. Accept the default values for the other settings in the **Add an HTTP setting** window, then select **Add** to return to the **Add a routing rule** window.
+5. For the **HTTP setting**, select **Add new** to add a new HTTP setting. The HTTP setting will determine the behavior of the routing rule. In the **Add an HTTP setting** window that opens, enter *myHTTPSetting* for the **HTTP setting name** and *80* for the **Backend port**. Accept the default values for the other settings in the **Add an HTTP setting** window, then select **Add** to return to the **Add a routing rule** window.
![Create new application gateway: HTTP setting](./media/application-gateway-create-gateway-portal/application-gateway-create-httpsetting.png)
@@ -142,7 +142,7 @@ To do this, you'll:
### Create a virtual machine 1. On the Azure portal menu or from the **Home** page, select **Create a resource**. The **New** window appears.
-2. Select **Windows Server 2016 Datacenter** in the **Popular** list. The **Create a virtual machine** page appears.<br>Application Gateway can route traffic to any type of virtual machine used in its backend pool. In this example, you use a Windows Server 2016 Datacenter.
+2. Select **Windows Server 2016 Datacenter** in the **Popular** list. The **Create a virtual machine** page appears.<br>Application Gateway can route traffic to any type of virtual machine used in its backend pool. In this example, you use a Windows Server 2016 Datacenter virtual machine.
3. Enter these values in the **Basics** tab for the following virtual machine settings: - **Resource group**: Select **myResourceGroupAG** for the resource group name.
@@ -160,9 +160,11 @@ To do this, you'll:
### Install IIS for testing
-In this example, you install IIS on the virtual machines only to verify Azure created the application gateway successfully.
+In this example, you install IIS on the virtual machines to verify that Azure created the application gateway successfully.
-1. Open Azure PowerShell. Select **Cloud Shell** from the top navigation bar of the Azure portal and then select **PowerShell** from the drop-down list.
+1. Open Azure PowerShell.
+
+ Select **Cloud Shell** from the top navigation bar of the Azure portal and then select **PowerShell** from the drop-down list.
![Install custom extension](./media/application-gateway-create-gateway-portal/application-gateway-extension.png)
@@ -203,7 +205,9 @@ In this example, you install IIS on the virtual machines only to verify Azure cr
## Test the application gateway
-Although IIS isn't required to create the application gateway, you installed it in this quickstart to verify if Azure successfully created the application gateway. Use IIS to test the application gateway:
+Although IIS isn't required to create the application gateway, you installed it in this quickstart to verify that Azure successfully created the application gateway.
+
+Use IIS to test the application gateway:
1. Find the public IP address for the application gateway on its **Overview** page.![Record application gateway public IP address](./media/application-gateway-create-gateway-portal/application-gateway-record-ag-address.png) Or, you can select **All resources**, enter *myAGPublicIPAddress* in the search box, and then select it in the search results. Azure displays the public IP address on the **Overview** page. 2. Copy the public IP address, and then paste it into the address bar of your browser to browse that IP address.
@@ -222,7 +226,7 @@ To delete the resource group:
1. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups*. 2. On the **Resource groups** page, search for **myResourceGroupAG** in the list, then select it. 3. On the **Resource group page**, select **Delete resource group**.
-4. Enter *myResourceGroupAG* for **TYPE THE RESOURCE GROUP NAME** and then select **Delete**
+4. Enter *myResourceGroupAG* under **TYPE THE RESOURCE GROUP NAME** and then select **Delete**.
## Next steps
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/quick-create-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/quick-create-powershell.md
@@ -6,7 +6,7 @@ services: application-gateway
author: vhorne ms.service: application-gateway ms.topic: quickstart
-ms.date: 08/27/2020
+ms.date: 01/19/2021
ms.author: victorh ms.custom: mvc ---
@@ -15,7 +15,7 @@ ms.custom: mvc
In this quickstart, you use Azure PowerShell to create an application gateway. Then you test it to make sure it works correctly.
-The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public front-end IP, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
You can also complete this quickstart using [Azure CLI](quick-create-cli.md) or the [Azure portal](quick-create-portal.md).
@@ -43,7 +43,7 @@ New-AzResourceGroup -Name myResourceGroupAG -Location eastus
``` ## Create network resources
-For Azure to communicate between the resources that you create, it needs a virtual network. The application gateway subnet can contain only application gateways. No other resources are allowed. You can either create a new subnet for Application Gateway or use an existing one. In this example, you create two subnets in this example: one for the application gateway, and another for the backend servers. You can configure the Frontend IP of the Application Gateway to be Public or Private as per your use case. In this example, you'll choose a Public Frontend IP.
+For Azure to communicate between the resources that you create, it needs a virtual network. The application gateway subnet can contain only application gateways. No other resources are allowed. You can either create a new subnet for Application Gateway or use an existing one. You create two subnets in this example: one for the application gateway, and another for the backend servers. You can configure the frontend IP address of the application gateway to be public or private, depending on your use case. In this example, you'll choose a public frontend IP address.
1. Create the subnet configurations using `New-AzVirtualNetworkSubnetConfig`. 2. Create the virtual network with the subnet configurations using `New-AzVirtualNetwork`.
@@ -76,7 +76,7 @@ New-AzPublicIpAddress `
### Create the IP configurations and frontend port 1. Use `New-AzApplicationGatewayIPConfiguration` to create the configuration that associates the subnet you created with the application gateway.
-2. Use `New-AzApplicationGatewayFrontendIPConfig` to create the configuration that assigns the public IP address that you previously created to the application gateway.
+2. Use `New-AzApplicationGatewayFrontendIPConfig` to create the configuration that assigns the public IP address that you previously created for the application gateway.
3. Use `New-AzApplicationGatewayFrontendPort` to assign port 80 to access the application gateway. ```azurepowershell-interactive
@@ -96,7 +96,7 @@ $frontendport = New-AzApplicationGatewayFrontendPort `
### Create the backend pool
-1. Use `New-AzApplicationGatewayBackendAddressPool` to create the backend pool for the application gateway. The backend pool will be empty for now. When you create the backend server NICs in the next section, you will add them to the backend pool.
+1. Use `New-AzApplicationGatewayBackendAddressPool` to create the backend pool for the application gateway. The backend pool is empty for now. When you create the backend server NICs in the next section, you'll add them to the backend pool.
2. Configure the settings for the backend pool with `New-AzApplicationGatewayBackendHttpSetting`. ```azurepowershell-interactive
@@ -159,7 +159,9 @@ New-AzApplicationGateway `
### Backend servers
-Now that you have created the Application Gateway, create the backend virtual machines which will host the websites. Backend can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you create two virtual machines for Azure to use as backend servers for the application gateway. You also install IIS on the virtual machines to verify that Azure successfully created the application gateway.
+Now that you have created the Application Gateway, create the backend virtual machines that will host the websites. A backend can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service.
+
+In this example, you create two virtual machines to use as backend servers for the application gateway. You also install IIS on the virtual machines to verify that Azure successfully created the application gateway.
#### Create two virtual machines
@@ -168,7 +170,7 @@ Now that you have created the Application Gateway, create the backend virtual ma
3. Create a virtual machine configuration with `New-AzVMConfig`.
4. Create the virtual machine with `New-AzVM`.
-When you run the following code sample to create the virtual machines, Azure prompts you for credentials. Enter *azureuser* for the user name and a password:
+When you run the following code sample to create the virtual machines, Azure prompts you for credentials. Enter a user name and a password:
```azurepowershell-interactive
$appgw = Get-AzApplicationGateway -ResourceGroupName myResourceGroupAG -Name myAppGateway
@@ -219,7 +221,9 @@ for ($i=1; $i -le 2; $i++)
## Test the application gateway
-Although IIS isn't required to create the application gateway, you installed it in this quickstart to verify whether Azure successfully created the application gateway. Use IIS to test the application gateway:
+Although IIS isn't required to create the application gateway, you installed it in this quickstart to verify that Azure successfully created the application gateway.
+
+Use IIS to test the application gateway:
1. Run `Get-AzPublicIPAddress` to get the public IP address of the application gateway.
2. Copy and paste the public IP address into the address bar of your browser. When you refresh the browser, you should see the name of the virtual machine. A valid response verifies that the application gateway was successfully created and can connect with the backend.
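For example, a minimal sketch of step 1, assuming the public IP address resource created earlier is named `myAGPublicIPAddress` (that name isn't shown in this excerpt):

```azurepowershell-interactive
# Retrieve the public IP address assigned to the application gateway and print it.
$pip = Get-AzPublicIPAddress -ResourceGroupName myResourceGroupAG -Name myAGPublicIPAddress
$pip.IpAddress
```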
attestation https://docs.microsoft.com/en-us/azure/attestation/azure-diagnostic-monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/azure-diagnostic-monitoring.md new file mode 100644
@@ -0,0 +1,39 @@
+---
+title: Azure diagnostic monitoring - Azure Attestation
+description: Azure diagnostic monitoring for Azure Attestation
+services: attestation
+author: msmbaldwin
+ms.service: attestation
+ms.topic: overview
+ms.date: 08/31/2020
+ms.author: mbaldwin
+---
+
+# Setting up diagnostics with Trusted Platform Module (TPM) endpoint of Azure Attestation
+
+[Platform logs](/azure/azure-monitor/platform/platform-logs-overview) in Azure, including the Azure Activity log and resource logs, provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on. [Platform metrics](/azure/azure-monitor/platform/data-platform-metrics) are collected by default and typically stored in the Azure Monitor metrics database. This article provides details on creating and configuring diagnostic settings to send platform metrics and platform logs to different destinations.
+
+The TPM endpoint service is enabled with a diagnostic setting and can be used to monitor activity. To set up [Azure Monitor](/azure/azure-monitor/overview) for the TPM service endpoint using PowerShell, follow the steps below.
+
+Set up the Azure Attestation service.
+
+[Set up Azure Attestation with Azure PowerShell](/azure/attestation/quickstart-powershell)
+
+```powershell
+
+ Connect-AzAccount
+
+ Set-AzContext -Subscription <Subscription id>
+
+ $attestationProviderName=<Name of the attestation provider>
+
+ $attestationResourceGroup=<Name of the resource Group>
+
+ $attestationProvider=Get-AzAttestation -Name $attestationProviderName -ResourceGroupName $attestationResourceGroup
+
+ $storageAccount=New-AzStorageAccount -ResourceGroupName $attestationProvider.ResourceGroupName -Name <Storage Account Name> -SkuName Standard_LRS -Location <Location>
+
+ Set-AzDiagnosticSetting -ResourceId $attestationProvider.Id -StorageAccountId $storageAccount.Id -Enabled $true
+
+```
+The activity logs can be found in the Containers section of the storage account. For more information, see [Collect resource logs from an Azure resource and analyze with Azure Monitor](/azure/azure-monitor/learn/tutorial-resource-logs).
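A minimal sketch of how you might list the exported logs with PowerShell, assuming the `$storageAccount` variable from the script above and that the diagnostic containers use the usual `insights-` prefix:

```azurepowershell-interactive
# List the blobs written by the diagnostic setting to the storage account.
$ctx = $storageAccount.Context
Get-AzStorageContainer -Context $ctx |
    Where-Object { $_.Name -like "insights-*" } |
    ForEach-Object { Get-AzStorageBlob -Container $_.Name -Context $ctx }
```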
attestation https://docs.microsoft.com/en-us/azure/attestation/basic-concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/basic-concepts.md
@@ -23,9 +23,7 @@ Below are some basic concepts related to Microsoft Azure Attestation.
## Attestation provider
-Attestation provider belongs to Azure resource provider named Microsoft.Attestation. The resource provider is a service endpoint that provides Azure Attestation REST contract and is deployed using [Azure Resource Manager](../azure-resource-manager/management/overview.md). Each attestation provider honors a specific, discoverable policy.
-
-Attestation providers get created with a default policy for each attestation type (note that VBS enclave has no default policy). See [examples of an attestation policy](policy-examples.md) for more details on the default policy for SGX.
+An attestation provider belongs to the Azure resource provider named Microsoft.Attestation. The resource provider is a service endpoint that provides the Azure Attestation REST contract and is deployed using [Azure Resource Manager](../azure-resource-manager/management/overview.md). Each attestation provider honors a specific, discoverable policy. Attestation providers are created with a default policy for each attestation type (note that VBS enclaves have no default policy). See [examples of an attestation policy](policy-examples.md) for more details on the default policy for SGX.
### Regional default provider
@@ -59,7 +57,7 @@ Attestation policy is used to process the attestation evidence and is configurab
If the default policy in the attestation provider doesn't meet the needs, customers will be able to create custom policies in any of the regions supported by Azure Attestation. Policy management is a key feature provided to customers by Azure Attestation. Policies will be attestation type specific and can be used to identify enclaves or add claims to the output token or modify claims in an output token.
-See [examples of an attestation policy](policy-examples.md) for default policy content and samples.
+See [examples of an attestation policy](policy-examples.md) for policy samples.
## Benefits of policy signing
@@ -81,26 +79,55 @@ Example of JWT generated for an SGX enclave:
```
{
- “alg”: “RS256”,
- “jku”: “https://tradewinds.us.attest.azure.net/certs”,
- “kid”: “f1lIjBlb6jUHEUp1/Nh6BNUHc6vwiUyMKKhReZeEpGc=”,
- “typ”: “JWT”
+ "alg": "RS256",
+ "jku": "https://tradewinds.us.attest.azure.net/certs",
+ "kid": <self-signed certificate reference to perform signature verification of the attestation token>,
+ "typ": "JWT"
}.{
- “maa-ehd”: <input enclave held data>,
- “exp”: 1568187398,
- “iat”: 1568158598,
- “is-debuggable”: false,
- “iss”: “https://tradewinds.us.attest.azure.net”,
- “nbf”: 1568158598,
- “product-id”: 4639,
- “sgx-mrenclave”: “”,
- “sgx-mrsigner”: “”,
- “svn”: 0,
- “tee”: “sgx”
+ "aas-ehd": <input enclave held data>,
+ "exp": 1568187398,
+ "iat": 1568158598,
+ "is-debuggable": false,
+ "iss": "https://tradewinds.us.attest.azure.net",
+ "maa-attestationcollateral":
+ {
+ "qeidcertshash": <SHA256 value of QE Identity issuing certs>,
+ "qeidcrlhash": <SHA256 value of QE Identity issuing certs CRL list>,
+ "qeidhash": <SHA256 value of the QE Identity collateral>,
+ "quotehash": <SHA256 value of the evaluated quote>,
+ "tcbinfocertshash": <SHA256 value of the TCB Info issuing certs>,
+ "tcbinfocrlhash": <SHA256 value of the TCB Info issuing certs CRL list>,
+ "tcbinfohash": <SHA256 value of the TCB Info collateral>
+ },
+ "maa-ehd": <input enclave held data>,
+ "nbf": 1568158598,
+ "product-id": 4639,
+ "sgx-mrenclave": <SGX enclave mrenclave value>,
+ "sgx-mrsigner": <SGX enclave mrsigner value>,
+ "svn": 0,
+ "tee": "sgx",
+ "x-ms-attestation-type": "sgx",
+ "x-ms-policy-hash": <>,
+ "x-ms-sgx-collateral":
+ {
+ "qeidcertshash": <SHA256 value of QE Identity issuing certs>,
+ "qeidcrlhash": <SHA256 value of QE Identity issuing certs CRL list>,
+ "qeidhash": <SHA256 value of the QE Identity collateral>,
+ "quotehash": <SHA256 value of the evaluated quote>,
+ "tcbinfocertshash": <SHA256 value of the TCB Info issuing certs>,
+ "tcbinfocrlhash": <SHA256 value of the TCB Info issuing certs CRL list>,
+ "tcbinfohash": <SHA256 value of the TCB Info collateral>
+ },
+ "x-ms-sgx-ehd": <>,
+ "x-ms-sgx-is-debuggable": true,
+ "x-ms-sgx-mrenclave": <SGX enclave mrenclave value>,
+ "x-ms-sgx-mrsigner": <SGX enclave mrsigner value>,
+ "x-ms-sgx-product-id": 1,
+ "x-ms-sgx-svn": 1,
+ "x-ms-ver": "1.0"
}.[Signature]
```
-Claims like “exp”, “iat”, “iss”, “nbf” are defined by the [JWT RFC](https://tools.ietf.org/html/rfc7517) and remaining are generated by Azure Attestation.
-See [claims issued by Azure Attestation](claim-sets.md) for more information.
+Some of the claims used above are considered deprecated but are fully supported. It is recommended that all future code and tooling use the non-deprecated claim names. See [claims issued by Azure Attestation](claim-sets.md) for more information.
## Encryption of data at rest
attestation https://docs.microsoft.com/en-us/azure/attestation/claim-sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/claim-sets.md
@@ -14,45 +14,88 @@ ms.author: mbaldwin
Claims generated in the process of attesting enclaves using Microsoft Azure Attestation can be divided into the below categories:

-- **Incoming claims**: Claims generated by Microsoft Azure Attestation after parsing the attestation evidence.
+- **Incoming claims**: Claims generated by Microsoft Azure Attestation after parsing the attestation evidence; policy authors can use them to define authorization rules in a custom policy
-- **Outgoing claims**: Claims created as an output by Azure Attestation. It contains all the claims that should end up in the attestation token.
+- **Outgoing claims**: Claims generated by Azure Attestation; they contain all the claims that end up in the attestation token
- **Property claims**: Claims created as an output by Azure Attestation. It contains all the claims that represent properties of the attestation token, such as encoding of the report, validity duration of the report, and so on.
-Below claims that are defined by the JWT RFC and used by Azure Attestation in the response object:
-
-- **"iss" (Issuer) Claim**: The "iss" (issuer) claim identifies the principal that issued the JWT. The processing of this claim is generally application-specific. The "iss" value is a case-sensitive string containing a StringOrURI value.
-- **"iat" (Issued At) Claim**: The "iat" (issued at) claim identifies the time at which the JWT was issued. This claim can be used to determine the age of the JWT. Its value MUST be a number containing a NumericDate value.
-- **"exp" (Expiration Time) Claim**: The "exp" (expiration time) claim identifies the expiration time on or after which the JWT MUST NOT be accepted for processing. The processing of the "exp" claim requires that the current date/time MUST be before the expiration date/time listed in the "exp" claim.
-
- Note: A 5-minute leeway is added to the issue time(iat), to account for clock skew.
-- **"nbf" (Not Before) Claim**: The "nbf" (not before) claim identifies the time before which the JWT WILL NOT be accepted for processing. The processing of the "nbf" claim requires that the current date/time MUST be after or equal to the not-before date/time listed in the "nbf" claim.
- Note: A 5-minute leeway is added to the issue time(iat), to account for clock skew.
-
-## Claims issued by Azure Attestation in SGX enclaves
-
-### Incoming claims
-
-- **$is-debuggable**: A Boolean, which indicates whether or not the enclave has debugging enabled or not
-- **$sgx-mrsigner**: hex encoded value of the “mrsigner” field of the quote
-- **$sgx-mrenclave**: hex encoded value of the “mrenclave” field of the quote
-- **$product-id**
-- **$svn**: security version number encoded in the quote
-- **$tee**: type of enclave
-
-### Outgoing claims
-
-- **is-debuggable**: A Boolean, which indicates whether or not the enclave has debugging enabled or not
-- **sgx-mrsigner**: hex encoded value of the “mrsigner” field of the quote
-- **sgx-mrenclave**: hex encoded value of the “mrenclave” field of the quote
-- **product-id**
-- **svn**: security version number encoded in the quote
-- **tee**: type of enclave
-- **maa-ehd**: Base64Url encoded version of the “Enclave Held Data” specified in the attestation request
-- **aas-ehd**: Base64Url encoded version of the “Enclave Held Data” specified in the attestation request
-
-## Claims issued by Azure Attestation in VBS enclaves
+### Common incoming claims across all attestation types
+
+Below claims are generated by Azure Attestation and can be used to define authorization rules in a custom policy:
+- **x-ms-ver**: JWT schema version (expected to be "1.0")
+- **x-ms-attestation-type**: String value representing attestation type
+- **x-ms-policy-hash**: Hash of the Azure Attestation evaluation policy, computed as BASE64URL(SHA256(UTF8(BASE64URL(UTF8(policy text))))); a worked sketch of this encoding follows this list
+- **x-ms-policy-signer**: JSON object with a "jwk" member representing the key a customer used to sign their policy, when a customer uploads a signed policy
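A worked sketch of the policy-hash encoding described above, assuming the policy text is stored in a hypothetical `policy.txt` file:

```azurepowershell-interactive
# Read the policy text and Base64Url-encode its UTF-8 bytes.
$policyText = Get-Content -Raw -Path .\policy.txt
$utf8 = [System.Text.Encoding]::UTF8
$innerB64Url = [Convert]::ToBase64String($utf8.GetBytes($policyText)).TrimEnd('=').Replace('+','-').Replace('/','_')

# Hash the UTF-8 bytes of the Base64Url string with SHA-256, then Base64Url-encode the hash.
$sha256 = [System.Security.Cryptography.SHA256]::Create()
$hashBytes = $sha256.ComputeHash($utf8.GetBytes($innerB64Url))
$policyHash = [Convert]::ToBase64String($hashBytes).TrimEnd('=').Replace('+','-').Replace('/','_')
$policyHash
```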
+
+Below claims are considered deprecated but are fully supported. It is recommended to use the non-deprecated claim names.
+
+Deprecated claim | Recommended claim
+--- | ---
+ver | x-ms-ver
+tee | x-ms-attestation-type
+maa-policyHash | x-ms-policy-hash
+policy_hash | x-ms-policy-hash
+policy_signer | x-ms-policy-signer
+
+### Common outgoing claims across all attestation types
+
+The following claims are defined by the [IETF JWT specification](https://tools.ietf.org/html/rfc7519) and used by Azure Attestation in the response object:
+
+- **"jti" (JWT ID) Claim**
+- **"iss" (Issuer) Claim**
+- **"iat" (Issued At) Claim**
+- **"exp" (Expiration Time) Claim**
+- **"nbf" (Not Before) Claim**
+
+The following claims are defined by the [IETF EAT draft](https://tools.ietf.org/html/draft-ietf-rats-eat-03#page-9) and used by Azure Attestation in the response object:
+- **"Nonce claim" (nonce)**
+
+## Claims specific to SGX enclaves
+
+### Incoming claims specific to SGX attestation
+
+Below claims are generated by the service for SGX attestation and can be used to define authorization rules in a custom policy:
+- **x-ms-sgx-is-debuggable**: A Boolean value that indicates whether the enclave has debugging enabled
+- **x-ms-sgx-product-id**
+- **x-ms-sgx-mrsigner**: hex encoded value of the "mrsigner" field of the quote
+- **x-ms-sgx-mrenclave**: hex encoded value of the "mrenclave" field of the quote
+- **x-ms-sgx-svn**: security version number encoded in the quote
+
+### Outgoing claims specific to SGX attestation
+
+Below claims are generated by the service and included in the response object for SGX attestation:
+- **x-ms-sgx-is-debuggable**: A Boolean value that indicates whether the enclave has debugging enabled
+- **x-ms-sgx-product-id**
+- **x-ms-ver**
+- **x-ms-sgx-mrsigner**: hex encoded value of the "mrsigner" field of the quote
+- **x-ms-sgx-mrenclave**: hex encoded value of the "mrenclave" field of the quote
+- **x-ms-sgx-svn**: security version number encoded in the quote
+- **x-ms-sgx-ehd**: enclave held data formatted as BASE64URL(enclave held data)
+- **x-ms-sgx-collateral**: JSON object describing the collateral used to perform attestation. The value for the x-ms-sgx-collateral claim is a nested JSON object with the following key/value pairs:
+ - **qeidcertshash**: SHA256 value of QE Identity issuing certs
+ - **qeidcrlhash**: SHA256 value of QE Identity issuing certs CRL list
+ - **qeidhash**: SHA256 value of the QE Identity collateral
+ - **quotehash**: SHA256 value of the evaluated quote
+ - **tcbinfocertshash**: SHA256 value of the TCB Info issuing certs
+ - **tcbinfocrlhash**: SHA256 value of the TCB Info issuing certs CRL list
+ - **tcbinfohash**: SHA256 value of the TCB Info collateral
+
+Below claims are considered deprecated but are fully supported and will continue to be included in the future. It is recommended to use the non-deprecated claim names.
+
+Deprecated claim | Recommended claim
+--- | ---
+$is-debuggable | x-ms-sgx-is-debuggable
+$sgx-mrsigner | x-ms-sgx-mrsigner
+$sgx-mrenclave | x-ms-sgx-mrenclave
+$product-id | x-ms-sgx-product-id
+$svn | x-ms-sgx-svn
+$tee | x-ms-attestation-type
+maa-ehd | x-ms-sgx-ehd
+aas-ehd | x-ms-sgx-ehd
+maa-attestationcollateral | x-ms-sgx-collateral
+
+## Claims issued specific to Trusted Platform Module (TPM) attestation
### Incoming claims (can also be used as outgoing claims)
@@ -69,17 +112,17 @@ Below claims that are defined by the JWT RFC and used by Azure Attestation in th
- **enclaveAuthorId**: String value containing the Base64Url encoded value of the enclave author id-The author identifier of the primary module for the enclave.
- **enclaveImageId**: String value containing the Base64Url encoded value of the enclave Image id-The image identifier of the primary module for the enclave.
- **enclaveOwnerId**: String value containing the Base64Url encoded value of the enclave Owner id-The identifier of the owner for the enclave.
-- **enclaveFamilyId**: String value containing the Base64Url encoded value of the enclave Family id. The family identifier of the primary module for the enclave.
+- **enclaveFamilyId**: String value containing the Base64Url encoded value of the enclave Family ID. The family identifier of the primary module for the enclave.
- **enclaveSvn**: Integer value containing the security version number of the primary module for the enclave.
- **enclavePlatformSvn**: Integer value containing the security version number of the platform that hosts the enclave.
- **enclaveFlags**: The enclaveFlags claim is an Integer value containing Flags that describe the runtime policy for the enclave.
-### Outgoing claims
+### Outgoing claims specific to TPM attestation
- **policy_hash**: String value containing SHA256 hash of the policy text computed by BASE64URL(SHA256(BASE64URL(UTF8(Policy text)))).
- **policy_signer**: Contains a JWK with the public key or the certificate chain present in the signed policy header.
- **ver (Version)**: String value containing version of the report. Currently 1.0.
-- **cnf (Confirmation) Claim**: The "cnf" claim is used to identify the proof-of-possession key. Confirmation claim as defined in RFC 7800, contains the public part of the attested enclave key represented as a JSON Web Key (JWK) object (RFC 7517).
+- **cnf (Confirmation) Claim**: The "cnf" claim is used to identify the proof-of-possession key. The confirmation claim, as defined in RFC 7800, contains the public part of the attested enclave key represented as a JSON Web Key (JWK) object (RFC 7517).
- **rp_data (relying party data)**: Relying party data, if any, specified in the request, used by the relying party as a nonce to guarantee freshness of the report.
- **"jti" (JWT ID) Claim**: The "jti" (JWT ID) claim provides a unique identifier for the JWT. The identifier value is assigned in a manner that ensures that there is a negligible probability that the same value will be accidentally assigned to a different data object.
attestation https://docs.microsoft.com/en-us/azure/attestation/policy-examples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-examples.md
@@ -14,29 +14,39 @@ ms.author: mbaldwin
Attestation policy is used to process the attestation evidence and determine whether Azure Attestation will issue an attestation token. Attestation token generation can be controlled with custom policies. Below are some examples of an attestation policy.
-## Default policy for an SGX enclave with PolicyFormat=Text
+## Default policy for an SGX enclave
```
-Version= 1.0;
+version= 1.0;
authorizationrules {
- c:[type==”$is-debuggable”] => permit();
+ c:[type=="$is-debuggable"] => permit();
};+ issuancerules {
- c:[type==”$is-debuggable”] => issue(type=”is-debuggable”, value=c.value);
- c:[type==”$sgx-mrsigner”] => issue(type=”sgx-mrsigner”, value=c.value);
- c:[type==”$sgx-mrenclave”] => issue(type=”sgx-mrenclave”, value=c.value);
- c:[type==”$product-id”] => issue(type=”product-id”, value=c.value);
- c:[type==”$svn”] => issue(type=”svn”, value=c.value);
- c:[type==”$tee”] => issue(type=”tee”, value=c.value);
+ c:[type=="$is-debuggable"] => issue(type="is-debuggable", value=c.value);
+ c:[type=="$sgx-mrsigner"] => issue(type="sgx-mrsigner", value=c.value);
+ c:[type=="$sgx-mrenclave"] => issue(type="sgx-mrenclave", value=c.value);
+ c:[type=="$product-id"] => issue(type="product-id", value=c.value);
+ c:[type=="$svn"] => issue(type="svn", value=c.value);
+ c:[type=="$tee"] => issue(type="tee", value=c.value);
};
```
-## Default policy for VBS enclave
-
-There is no default policy for VBS enclave
+## Sample custom policy for an SGX enclave
+```
+version= 1.0;
+authorizationrules
+{
+ [ type=="x-ms-sgx-is-debuggable", value==false ]
+ && [ type=="x-ms-sgx-product-id", value==<product-id> ]
+ && [ type=="x-ms-sgx-svn", value>= 0 ]
+ && [ type=="x-ms-sgx-mrsigner", value=="<mrsigner>"]
+ => permit();
+};
+```
## Unsigned Policy for an SGX enclave with PolicyFormat=JWT
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/howto-feature-filters-aspnet-core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-feature-filters-aspnet-core.md
@@ -25,7 +25,7 @@ You can also create your own feature filter that implements the [Microsoft.Featu
## Registering a feature filter
-You register a feature filter by calling the `AddFeatureFilter` method, specifying the name of the feature filter. For example, the following code registers `PercentageFilter`:
+You register a feature filter by calling the `AddFeatureFilter` method, specifying the type name of the desired feature filter. For example, the following code registers `PercentageFilter`:
```csharp public void ConfigureServices(IServiceCollection services)
@@ -50,14 +50,14 @@ You can configure these settings for feature flags defined in Azure App Configur
> [!div class="mx-imgBorder"] > ![Edit Beta feature flag](./media/edit-beta-feature-flag.png)
-1. In the **Edit** screen, select the **Enable feature flag** button if it isn't already selected. Then click the **Use feature filter** button and select **Custom**.
+1. In the **Edit** screen, check the **Enable feature flag** checkbox if it isn't already enabled. Then check the **Use feature filter** checkbox and select **Custom**.
-1. In the **Key** field, enter *Microsoft.Percentage*.
+1. In the **Name** field, select *Microsoft.Percentage*.
> [!div class="mx-imgBorder"] > ![Add feature filter](./media/feature-flag-add-filter.png)
-1. Click the context menu next to the feature filter key. Click **Edit filter parameters**.
+1. Click the context menu next to the feature filter name. Click **Edit filter parameters**.
> [!div class="mx-imgBorder"] > ![Edit feature filter parameters](./media/feature-flags-edit-filter-parameters.png)
@@ -69,10 +69,10 @@ You can configure these settings for feature flags defined in Azure App Configur
1. Click **Apply** to return to the **Edit feature flag** screen. Then click **Apply** again to save the feature flag settings.
-1. The **State** of the feature flag now appears as *Conditional*. This state indicates that the feature flag will be enabled or disabled on a per-request basis, based on the criteria enforced by the feature filter.
+1. On the **Feature manager** page, the feature flag now has a **Feature filter** value of *Custom*.
> [!div class="mx-imgBorder"]
- > ![Conditional feature flag](./media/feature-flag-filter-enabled.png)
+ > ![Feature flag listed with a Feature filter value of "Custom"](./media/feature-flag-filter-custom.png)
## Feature filters in action
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/quickstart-aspnet-core-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-aspnet-core-app.md
@@ -67,7 +67,7 @@ dotnet new mvc --no-https --output TestAppConfig
``` > [!IMPORTANT]
- > Some shells will truncate the connection string unless it's enclosed in quotes. Ensure that the output of the `dotnet user-secrets` command shows the entire connection string. If it doesn't, rerun the command, enclosing the connection string in quotes.
+ > Some shells will truncate the connection string unless it's enclosed in quotes. Ensure that the output of the `dotnet user-secrets list` command shows the entire connection string. If it doesn't, rerun the command, enclosing the connection string in quotes.
Secret Manager is used only to test the web app locally. When the app is deployed to [Azure App Service](https://azure.microsoft.com/services/app-service/web), use the **Connection Strings** application setting in App Service instead of Secret Manager to store the connection string.
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-reserved-pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-reserved-pricing.md
@@ -47,7 +47,7 @@ The following table describes required fields.
| Field | Description |
| :------------ | :------- |
-| Subscription | The subscription used to pay for the Azure Cache for Redis reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Cache for Redis reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's monetary commitment balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
+| Subscription | The subscription used to pay for the Azure Cache for Redis reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Cache for Redis reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
| Scope | The reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select: </br></br> **Shared**, the reservation discount is applied to Azure Cache for Redis instances running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.</br></br> **Single subscription**, the reservation discount is applied to Azure Cache for Redis instances in this subscription. </br></br> **Single resource group**, the reservation discount is applied to Azure Cache for Redis instances in the selected subscription and the selected resource group within that subscription.
| Region | The Azure region that's covered by the Azure Cache for Redis reserved capacity reservation.
| Pricing tier | The service tier for the Azure Cache for Redis servers.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/app-service-export-api-to-powerapps-and-flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/app-service-export-api-to-powerapps-and-flow.md deleted file mode 100644
@@ -1,167 +0,0 @@
-title: Exporting an Azure-hosted API to PowerApps and Microsoft Flow
-description: Overview of how to expose an API hosted in App Service to PowerApps and Microsoft Flow
-
-ms.topic: conceptual
-ms.date: 04/28/2020
-ms.reviewer: sunayv
-
-# Exporting an Azure-hosted API to PowerApps and Microsoft Flow
-
-[PowerApps](https://powerapps.microsoft.com/guided-learning/learning-introducing-powerapps/) is a service for building and using custom business apps that connect to your data and work across platforms. [Power Automate](/learn/modules/get-started-with-flow/index) makes it easy to automate workflows and business processes between your favorite apps and services. Both PowerApps and Microsoft Flow come with a variety of built-in connectors to data sources such as Office 365, Dynamics 365, Salesforce, and more. In some cases, app and flow builders also want to connect to data sources and APIs built by their organization.
-
-Similarly, developers that want to expose their APIs more broadly within an organization can make their APIs available to app and flow builders. This article shows you how to export an API built with [Azure Functions](../azure-functions/functions-overview.md) or [Azure App Service](../app-service/overview.md). The exported API becomes a *custom connector*, which is used in PowerApps and Microsoft Flow just like a built-in connector.
-
-> [!IMPORTANT]
-> The API definition functionality shown in this article is only supported for [version 1.x of the Azure Functions runtime](functions-versions.md#creating-1x-apps) and App Services apps. Version 2.x of Functions integrates with API Management to create and maintain OpenAPI definitions. To learn more, see [Create an OpenAPI definition for a function with Azure API Management](functions-openapi-definition.md).
-
-## Create and export an API definition
-Before exporting an API, you must describe the API using an OpenAPI definition (formerly known as a [Swagger](https://swagger.io/) file). This definition contains information about what operations are available in an API and how the request and response data for the API should be structured. PowerApps and Microsoft Flow can create custom connectors for any OpenAPI 2.0 definition. Azure Functions and Azure App Service have built-in support for creating, hosting, and managing OpenAPI definitions. For more information, see [Host a RESTful API with CORS in Azure App Service](../app-service/app-service-web-tutorial-rest-api.md).
-
-> [!NOTE]
-> You can also build custom connectors in the PowerApps and Microsoft Flow UI, without using an OpenAPI definition. For more information, see [Register and use a custom connector (PowerApps)](https://powerapps.microsoft.com/tutorials/register-custom-api/) and [Register and use a custom connector (Microsoft Flow)](/power-automate/developer/register-custom-api).
-
-To export the API definition, follow these steps:
-
-1. In the [Azure portal](https://portal.azure.com), navigate to your function app or an App Service application.
-
- From the left menu, under **API**, select **API definition**.
-
- :::image type="content" source="media/app-service-export-api-to-powerapps-and-flow/api-definition-function.png" alt-text="Azure Functions API definition":::
-
-2. The **Export to PowerApps + Microsoft Flow** button should be available (if not, you must first create an OpenAPI definition). Select this button to begin the export process.
-
- ![Export to PowerApps + Microsoft Flow button](media/app-service-export-api-to-powerapps-and-flow/export-apps-flow.png)
-
-3. Select the **Export Mode**:
-
- **Express** lets you create the custom connector from within the Azure portal. It requires that you are signed into PowerApps or Microsoft Flow and have permission to create connectors in the target environment. This approach is recommended if these two requirements can be met. If using this mode, follow the [Use express export](#express) instructions below.
-
- **Manual** lets you export the API definition, which you then import using the PowerApps or Microsoft Flow portals. This approach is recommended if the Azure user and the user with permission to create connectors are different people, or if the connector needs to be created in another Azure tenant. If using this mode, follow the [Use manual export](#manual) instructions below.
-
- ![Export mode](media/app-service-export-api-to-powerapps-and-flow/export-mode.png)
-
-> [!NOTE]
-> The custom connector uses a *copy* of the API definition, so PowerApps and Microsoft Flow will not immediately know if you make changes to the application and its API definition. If you do make changes, repeat the export steps for the new version.
-
-<a name="express"></a>
-## Use express export
-
-To complete the export in **Express** mode, follow these steps:
-
-1. Make sure you're signed in to the PowerApps or Microsoft Flow tenant to which you want to export.
-
-2. Use the settings as specified in the table.
-
- |Setting|Description|
- |--------|------------|
- |**Environment**|Select the environment that the custom connector should be saved to. For more information, see [Environments overview](https://powerapps.microsoft.com/tutorials/environments-overview/).|
- |**Custom API Name**|Enter a name, which PowerApps and Microsoft Flow builders will see in their connector list.|
- |**Prepare security configuration**|If necessary, provide the security configuration details needed to grant users access to your API. This example shows an API key. For more information, see [Specify authentication type](#auth) below.|
-
- ![Express export to PowerApps and Microsoft Flow](media/app-service-export-api-to-powerapps-and-flow/export-express.png)
-
-3. Click **OK**. The custom connector is now built and added to the environment you specified.
-
-<a name="manual"></a>
-## Use manual export
-
-To complete the export in **Manual** mode, follow these steps:
-
-1. Click **Download** and save the file, or click the copy button and save the URL. You will use the download file or the URL during import.
-
- ![Manual export to PowerApps and Microsoft Flow](media/app-service-export-api-to-powerapps-and-flow/export-manual.png)
-
-2. If your API definition includes any security definitions, these definitions are called out in step #2. During import, PowerApps and Microsoft Flow detects these definitions and prompts for security information. Gather the credentials related to each definition for use in the next section. For more information, see [Specify authentication type](#auth) below.
-
- ![Security for manual export](media/app-service-export-api-to-powerapps-and-flow/export-manual-security.png)
-
- This example shows the API key security definition that was included in the OpenAPI definition.
-
-Now that you've exported the API definition, you import it to create a custom connector in PowerApps and Microsoft Flow. Custom connectors are shared between the two services, so you only need to import the definition once.
-
-To import the API definition into PowerApps and Microsoft Flow, follow these steps:
-
-1. Go to [powerapps.com](https://web.powerapps.com) or [flow.microsoft.com](https://flow.microsoft.com).
-
-2. In the upper right corner, click the gear icon, then click **Custom connectors**.
-
- ![Gear icon in service](media/app-service-export-api-to-powerapps-and-flow/icon-gear.png)
-
-3. Click **Create custom connector**, then click **Import an OpenAPI definition**.
-
- ![Create custom connector](media/app-service-export-api-to-powerapps-and-flow/flow-apps-create-connector.png)
-
-4. Enter a name for the custom connector, then navigate to the OpenAPI definition that you exported, and click **Continue**.
-
- ![Upload OpenAPI definition](media/app-service-export-api-to-powerapps-and-flow/flow-apps-upload-definition.png)
-
-4. On the **General** tab, review the information that comes from the OpenAPI definition.
-
-5. On the **Security** tab, if you are prompted to provide authentication details, enter the values appropriate for the authentication type. Click **Continue**.
-
- ![Security tab](media/app-service-export-api-to-powerapps-and-flow/tab-security.png)
-
- This example shows the required fields for API key authentication. The fields differ depending on the authentication type.
-
-6. On the **Definitions** tab, all the operations defined in your OpenAPI file are auto-populated. If all your required operations are defined, you can go to the next step. If not, you can add and modify operations here.
-
- ![Definitions tab](media/app-service-export-api-to-powerapps-and-flow/tab-definitions.png)
-
- This example has one operation, named `CalculateCosts`. The metadata, like **Description**, all comes from the OpenAPI file.
-
-7. Click **Create connector** at the top of the page.
-
-You can now connect to the custom connector in PowerApps and Microsoft Flow. For more information on creating connectors in the PowerApps and Microsoft Flow portals, see [Register your custom connector (PowerApps)](https://powerapps.microsoft.com/tutorials/register-custom-api/#register-your-custom-connector) and [Register your custom connector (Microsoft Flow)](/power-automate/get-started-flow-dev#create-a-custom-connector).
-
-<a name="auth"></a>
-## Specify authentication type
-
-PowerApps and Microsoft Flow support a collection of identity providers that provide authentication for custom connectors. If your API requires authentication, ensure that it is captured as a _security definition_ in your OpenAPI document, like the following example:
-
-```json
-"securityDefinitions": {
- "AAD": {
- "type": "oauth2",
- "flow": "accessCode",
- "authorizationUrl": "https://login.windows.net/common/oauth2/authorize",
- "scopes": {}
- }
-}
-```
-During export, you provide configuration values that allow PowerApps and Microsoft Flow to authenticate users.
-
-This section covers the authentication types that are supported in **Express** mode: API key, Azure Active Directory, and Generic OAuth 2.0. PowerApps and Microsoft Flow also support Basic Authentication, and OAuth 2.0 for specific services like Dropbox, Facebook, and SalesForce.
-
-### API key
-When using an API key, the users of your connector are prompted to provide the key when they create a connection. You specify an API key name to help them understand which key is needed. In the earlier example, we use the name `API Key (contact meganb@contoso.com)` so people know where to get information about the API key. For Azure Functions, the key is typically one of the host keys, covering several functions within the function app.
-
-### Azure Active Directory (Azure AD)
-When using Azure AD, you need two Azure AD application registrations: one for the API itself, and one for the custom connector:
--- To configure registration for the API, use the [App Service Authentication/Authorization](../app-service/configure-authentication-provider-aad.md) feature.--- To configure registration for the connector, follow the steps in [Adding an Azure AD application](../active-directory/develop/quickstart-register-app.md). The registration must have delegated access to your API and a reply URL of `https://msmanaged-na.consent.azure-apim.net/redirect`. -
-For more information, see the Azure AD registration examples for [PowerApps](https://powerapps.microsoft.com/tutorials/customapi-azure-resource-manager-tutorial/) and [Microsoft Flow](/connectors/custom-connectors/azure-active-directory-authentication). These examples use Azure Resource Manager as the API; substitute your API if you follow the steps.
-
-The following configuration values are required:
-- **Client ID** - the client ID of your connector Azure AD registration-- **Client secret** - the client secret of your connector Azure AD registration-- **Login URL** - the base URL for Azure AD. In Azure, typically `https://login.windows.net`.-- **Tenant ID** - the ID of the tenant to be used for the login. This ID should be "common" or the ID of the tenant in which the connector is created.-- **Resource URL** - the resource URL of the Azure AD registration for your API-
-> [!IMPORTANT]
-> If someone else will import the API definition into PowerApps and Microsoft Flow as part of the manual flow, you must provide them with the client ID and client secret of the *connector registration*, as well as the resource URL of your API. Make sure that these secrets are managed securely. **Do not share the security credentials of the API itself.**
-
-### Generic OAuth 2.0
-When using generic OAuth 2.0, you can integrate with any OAuth 2.0 provider. Doing so allows you to work with custom providers that are not natively supported.
-
-The following configuration values are required:
-- **Client ID** - the OAuth 2.0 client ID-- **Client secret** - the OAuth 2.0 client secret-- **Authorization URL** - the OAuth 2.0 authorization URL-- **Token URL** - the OAuth 2.0 token URL-- **Refresh URL** - the OAuth 2.0 refresh URL
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-perf-and-scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-perf-and-scale.md
@@ -98,7 +98,7 @@ Activity functions are stateless and scaled out automatically by adding VMs. Orc
"extensions": { "durableTask": { "storageProvider": {
- "partitionCount": 3
+ "partitionCount": 3
} } }
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-event-hub-cosmos-db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-event-hub-cosmos-db.md
@@ -367,6 +367,7 @@ mvn archetype:generate --batch-mode \
-DarchetypeArtifactId=azure-functions-archetype \ -DappName=$FUNCTION_APP \ -DresourceGroup=$RESOURCE_GROUP \
+ -DappRegion=$LOCATION \
-DgroupId=com.example \ -DartifactId=telemetry-functions ```
@@ -379,6 +380,7 @@ mvn archetype:generate --batch-mode ^
-DarchetypeArtifactId=azure-functions-archetype ^ -DappName=%FUNCTION_APP% ^ -DresourceGroup=%RESOURCE_GROUP% ^
+ -DappRegion=%LOCATION% ^
-DgroupId=com.example ^ -DartifactId=telemetry-functions ```
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
@@ -80,7 +80,7 @@ When you develop a function app locally, you must maintain local copies of these
## Hosting plan type
-When you create a function app, you also create an App Service hosting plan in which the app runs. A plan can have one or more function apps. The functionality, scaling, and pricing of your functions depend on the type of plan. To learn more, see the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/).
+When you create a function app, you also create a hosting plan in which the app runs. A plan can have one or more function apps. The functionality, scaling, and pricing of your functions depend on the type of plan. To learn more, see [Azure Functions hosting options](functions-scale.md).
You can determine the type of plan being used by your function app from the Azure portal, or by using the Azure CLI or Azure PowerShell APIs.
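As a minimal PowerShell sketch of one way to look it up (using the same `<RESOURCE_GROUP>` and `<FUNCTION_APP_NAME>` placeholders; the tier mapping in the comment is an assumption about the standard plan SKUs):

```azurepowershell-interactive
# Get the function app and resolve the hosting plan it runs in.
$app = Get-AzWebApp -ResourceGroupName <RESOURCE_GROUP> -Name <FUNCTION_APP_NAME>
$plan = Get-AzResource -ResourceId $app.ServerFarmId

# Dynamic = Consumption, ElasticPremium = Premium; other tiers indicate a Dedicated (App Service) plan.
$plan.Sku.Tier
```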
@@ -127,6 +127,75 @@ In the previous example replace `<RESOURCE_GROUP>` and `<FUNCTION_APP_NAME>` wit
---
+## Plan migration
+
+You can use Azure CLI commands to migrate a function app between a Consumption plan and a Premium plan on Windows. The specific commands depend on the direction of the migration. Direct migration to a Dedicated (App Service) plan isn't currently supported.
+
+This migration isn't supported on Linux.
+
+### Consumption to Premium
+
+Use the following procedure to migrate from a Consumption plan to a Premium plan on Windows:
+
+1. Run the following command to create a new App Service plan (Elastic Premium) in the same region and resource group as your existing function app.
+
+ ```azurecli-interactive
+ az functionapp plan create --name <NEW_PREMIUM_PLAN_NAME> --resource-group <MY_RESOURCE_GROUP> --location <REGION> --sku EP1
+ ```
+
+1. Run the following command to migrate the existing function app to the new Premium plan
+
+ ```azurecli-interactive
+ az functionapp update --name <MY_APP_NAME> --resource-group <MY_RESOURCE_GROUP> --plan <NEW_PREMIUM_PLAN>
+ ```
+
+1. If you no longer need your previous Consumption function app plan, delete your original function app plan after confirming you have successfully migrated to the new one. Run the following command to get a list of all Consumption plans in your resource group.
+
+ ```azurecli-interactive
+ az functionapp plan list --resource-group <MY_RESOURCE_GROUP> --query "[?sku.family=='Y'].{PlanName:name,Sites:numberOfSites}" -o table
+ ```
+
+ You can safely delete the plan with zero sites, which is the one you migrated from.
+
+1. Run the following command to delete the Consumption plan you migrated from.
+
+ ```azurecli-interactive
+ az functionapp plan delete --name <CONSUMPTION_PLAN_NAME> --resource-group <MY_RESOURCE_GROUP>
+ ```
+
+### Premium to Consumption
+
+Use the following procedure to migrate from a Premium plan to a Consumption plan on Windows:
+
+1. Run the following command to create a new function app (Consumption) in the same region and resource group as your existing function app. This command also creates a new Consumption plan in which the function app runs.
+
+ ```azurecli-interactive
+ az functionapp create --resource-group <MY_RESOURCE_GROUP> --name <NEW_CONSUMPTION_APP_NAME> --consumption-plan-location <REGION> --runtime dotnet --functions-version 3 --storage-account <STORAGE_NAME>
+ ```
+
+1. Run the following command to migrate the existing function app to the new Consumption plan.
+
+ ```azurecli-interactive
+ az functionapp update --name <MY_APP_NAME> --resource-group <MY_RESOURCE_GROUP> --plan <NEW_CONSUMPTION_PLAN>
+ ```
+
+1. Delete the function app you created in step 1, since you only need the plan that was created to run the existing function app.
+
+ ```azurecli-interactive
+ az functionapp delete --name <NEW_CONSUMPTION_APP_NAME> --resource-group <MY_RESOURCE_GROUP>
+ ```
+
+1. If you no longer need your previous Premium function app plan, delete the original plan after confirming that you have successfully migrated to the new one. If the plan isn't deleted, you'll still be charged for it. Run the following command to get a list of all Premium plans in your resource group.
+
+ ```azurecli-interactive
+ az functionapp plan list --resource-group <MY_RESOURCE_GROUP> --query "[?sku.family=='EP'].{PlanName:name,Sites:numberOfSites}" -o table
+ ```
+
+1. Run the following command to delete the Premium plan you migrated from.
+
+ ```azurecli-interactive
+ az functionapp plan delete --name <PREMIUM_PLAN> --resource-group <MY_RESOURCE_GROUP>
+ ```
## Platform features
azure-government https://docs.microsoft.com/en-us/azure/azure-government/compare-azure-government-global-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
@@ -10,7 +10,7 @@ ms.devlang: na
ms.topic: overview ms.tgt_pltfrm: na ms.workload: azure-government
-ms.date: 09/09/2020
+ms.date: 01/18/2021
--- # Compare Azure Government and global Azure
@@ -457,7 +457,7 @@ Azure Security Center is deployed on Azure Government regions but not DoD region
### [Azure Sentinel](../sentinel/overview.md) The following **features have known limitations** in Azure Government: - Office 365 data connector
- - The Office 365 data connector can be used only for [Office 365 GCC High](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod).
+ - The Office 365 data connector can be used only for [Office 365 GCC](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc), [Office 365 GCC High](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod), and [Office 365 DoD](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod).
- AWS CloudTrail data connector - The AWS CloudTrail data connector can be used only for [AWS in the Public Sector](https://aws.amazon.com/government-education/).
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/how-to-request-elevation-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-elevation-data.md
@@ -467,9 +467,9 @@ The following sample web page shows you how to use the map control to display el
(<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
-### Get elevation data by PolyLine path
+### Get elevation data by Polyline path
-The following sample web page shows you how to use the map control to display elevation data along a path. The user defines the path by clicking on the `PolyLine` icon in the upper-left hand corner, and drawing the PolyLine on the map. The map control then renders the elevation data in colors that are specified in the key located in the upper-right hand corner.
+The following sample web page shows you how to use the map control to display elevation data along a path. The user defines the path by clicking on the `Polyline` icon in the upper-left hand corner, and drawing the Polyline on the map. The map control then renders the elevation data in colors that are specified in the key located in the upper-right hand corner.
<br/>
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/how-to-search-for-address https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-search-for-address.md
@@ -3,7 +3,7 @@ title: Search for a location using Azure Maps Search services
description: Learn about the Azure Maps Search service. See how to use this set of APIs for geocoding, reverse geocoding, fuzzy searches, and reverse cross street searches. author: anastasia-ms ms.author: v-stharr
-ms.date: 10/05/2020
+ms.date: 01/19/2021
ms.topic: how-to ms.service: azure-maps services: azure-maps
@@ -164,7 +164,7 @@ In this example, we'll search for a cross street based on the coordinates of an
:::image type="content" source="./media/how-to-search-for-address/search-address-cross.png" alt-text="Search cross street.":::
-3. Click **Send**, and review the response body. You'll notice that the response contains a `crossStreet` value of `Occidental Avenue South`.
+3. Click **Send**, and review the response body. You'll notice that the response contains a `crossStreet` value of `South Atlantic Street`.
## Next steps
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/map-show-traffic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-show-traffic.md
@@ -16,10 +16,14 @@ ms.custom: codepen, devx-track-js
There are two types of traffic data available in Azure Maps:

- Incident data - consists of point and line-based data for things such as construction, road closures, and accidents.
-- Flow data - provides metrics on the flow of traffic on the roads. Often, traffic flow data is used to color the roads. The colors are based on how much traffic is slowing down the flow, relative to the speed limit, or another metric. The traffic flow data in Azure Maps has three different metrics of measurement:
- - `relative` - is relative to the free-flow speed of the road.
- - `absolute` - is the absolute speed of all vehicles on the road.
- - `relative-delay` - displays areas that are slower than the average expected delay.
+- Flow data - provides metrics on the flow of traffic on the roads. Often, traffic flow data is used to color the roads. The colors are based on how much traffic is slowing down the flow, relative to the speed limit, or another metric. There are four values that can be passed into the traffic `flow` option of the map.
+
+ |Flow Value | Description|
+ | :-- | :-- |
+ | `none` | Doesn't display traffic data on the map |
+ | `relative` | Shows traffic data that's relative to the free-flow speed of the road |
+ | `relative-delay` | Displays areas that are slower than the average expected delay |
+ | `absolute` | Shows the absolute speed of all vehicles on the road |
The following code shows how to display traffic data on the map.
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/open-source-projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/open-source-projects.md
@@ -60,6 +60,12 @@ The following is a list of open-source projects that extend the capabilities of
| [Azure Maps .NET UWP IoT Remote Control](https://github.com/Azure-Samples/azure-maps-dotnet-webgl-uwp-iot-remote-control) | This is a sample application that shows how to build a remotely controlled map using Azure Maps and IoT hub services. | | [Implement IoT spatial analytics using Azure Maps](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing) | Tracking and capturing relevant events that occur in space and time is a common IoT scenario. |
+**Third party map control plugins**
+
+| Project Name | Description |
+|-|-|
+| [Azure Maps Leaflet plugin](https://github.com/azure-samples/azure-maps-leaflet) | A [leaflet](https://leafletjs.com/) JavaScript plugin that makes it easy to overlay tile layers from the [Azure Maps tile services](https://docs.microsoft.com/rest/api/maps/renderv2/getmaptilepreview). |
+
**Tools and resources** | Project Name | Description |
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/supported-browsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/supported-browsers.md
@@ -31,7 +31,7 @@ The Azure Maps Web SDK supports the following desktop browsers:
- Microsoft Edge (current and previous version) - Google Chrome (current and previous version) - Mozilla Firefox (current and previous version)-- Apple Safari (Mac OS X) (current and previous version)
+- Apple Safari (macOS X) (current and previous version)
See also [Target legacy browsers](#Target-Legacy-Browsers) later in this article.
@@ -58,7 +58,7 @@ The following Web SDK modules are also supported in Node.js:
## <a name="Target-Legacy-Browsers"></a>Target legacy browsers
-You might want to target older browsers that don't support WebGL or that have only limited support for it. In such cases, we recommend that you use Azure Maps services together with an open-source map control like [Leaflet](https://leafletjs.com/). Here's an example:
+You might want to target older browsers that don't support WebGL or that have only limited support for it. In such cases, we recommend that you use Azure Maps services together with an open-source map control like [Leaflet](https://leafletjs.com/). Here's an example that makes use of the open source [Azure Maps Leaflet plugin](https://github.com/azure-samples/azure-maps-leaflet).
<br/>
@@ -67,6 +67,7 @@ You might want to target older browsers that don't support WebGL or that have on
(<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+Additional code samples using Azure Maps in Leaflet can be found [here](https://azuremapscodesamples.azurewebsites.net/?search=leaflet).
## Next steps
@@ -74,4 +75,4 @@ Learn more about the Azure Maps Web SDK:
[Map control](how-to-use-map-control.md)
-[Services module](how-to-use-services-module.md)
\ No newline at end of file
+[Services module](how-to-use-services-module.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-metric-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/alerts-metric-overview.md
@@ -1,7 +1,7 @@
--- title: Understand how metric alerts work in Azure Monitor. description: Get an overview of what you can do with metric alerts and how they work in Azure Monitor.
-ms.date: 01/13/2021
+ms.date: 01/19/2021
ms.topic: conceptual ms.subservice: alerts
@@ -22,9 +22,9 @@ Let's say you have created a simple static threshold metric alert rule as follow
- Target Resource (the Azure resource you want to monitor): myVM - Metric: Percentage CPU - Condition Type: Static-- Time Aggregation (Statistic that is run over raw metric values. [Supported time aggregations](metrics-charts.md#aggregation) are Min, Max, Avg, Total, Count): Average-- Period (The look back window over which metric values are checked): Over the last 5 mins-- Frequency (The frequency with which the metric alert checks if the conditions are met): 1 min
+- Aggregation type (a statistic that is run over raw metric values. [Supported aggregation types](./metrics-aggregation-explained.md#aggregation-types) are Minimum, Maximum, Average, Total, Count): Average
+- Period (the look back window over which metric values are checked): Over the last 5 mins
+- Frequency (the frequency with which the metric alert checks if the conditions are met): 1 min
- Operator: Greater Than - Threshold: 70
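As an illustration only (this is not Azure Monitor's internal implementation, and the names in the snippet are hypothetical), the rule above amounts to the following check, executed once per Frequency over the raw values collected during the Period:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of one evaluation of the static-threshold rule above.
// Azure Monitor performs this server-side; the names here are illustrative only.
static class StaticThresholdRuleSketch
{
    // Called once per Frequency (every 1 minute).
    public static bool ShouldFire(IReadOnlyCollection<double> percentageCpuLast5Minutes)
    {
        if (percentageCpuLast5Minutes.Count == 0)
        {
            return false; // no data in the look-back window, nothing to evaluate
        }

        // Aggregation type: Average over the Period (the last 5 minutes of raw values).
        double aggregated = percentageCpuLast5Minutes.Average();

        // Operator: Greater Than, Threshold: 70 -> the alert fires when this returns true.
        return aggregated > 70;
    }
}
```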
@@ -39,9 +39,9 @@ Let's say you have created a simple Dynamic Thresholds metric alert rule as foll
- Target Resource (the Azure resource you want to monitor): myVM - Metric: Percentage CPU - Condition Type: Dynamic-- Time Aggregation (Statistic that is run over raw metric values. [Supported time aggregations](metrics-charts.md#aggregation) are Min, Max, Avg, Total, Count): Average-- Period (The look back window over which metric values are checked): Over the last 5 mins-- Frequency (The frequency with which the metric alert checks if the conditions are met): 1 min
+- Aggregation Type (a statistic that is run over raw metric values. [Supported aggregation types](./metrics-aggregation-explained.md#aggregation-types) are Minimum, Maximum, Average, Total, Count): Average
+- Period (the look back window over which metric values are checked): Over the last 5 mins
+- Frequency (the frequency with which the metric alert checks if the conditions are met): 1 min
- Operator: Greater Than - Sensitivity: Medium - Look Back Periods: 4
@@ -76,7 +76,7 @@ Say you have an App Service plan for your website. You want to monitor CPU usage
- Condition Type: Static - Dimensions - Instance = InstanceName1, InstanceName2-- Time Aggregation: Average
+- Aggregation Type: Average
- Period: Over the last 5 mins - Frequency: 1 min - Operator: GreaterThan
@@ -91,7 +91,7 @@ Say you have a web app that is seeing massive demand and you will need to add mo
- Condition Type: Static - Dimensions - Instance = *-- Time Aggregation: Average
+- Aggregation Type: Average
- Period: Over the last 5 mins - Frequency: 1 min - Operator: GreaterThan
@@ -108,7 +108,7 @@ Say you have a web app with many instances and you don't know what the most suit
- Condition Type: Dynamic - Dimensions - Instance = *-- Time Aggregation: Average
+- Aggregation Type: Average
- Period: Over the last 5 mins - Frequency: 1 min - Operator: GreaterThan
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-dashboard-errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-dashboard-errors.md
@@ -11,17 +11,17 @@ ms.date: 01/18/2021
# Errors in the connector status
-In the connector status list you can find errors that can help you to fix your ITSM connector.
+In the connector status list, you can find errors that can help you fix issues in your ITSM connector.
## Status Common Errors
-in this section you can find the common error that you can find in the status list and how you should resolve it:
+In this section, you can find the common errors presented in the connector status section and learn how to resolve them:
-* **Error**: "Unexpected response from ServiceNow along with success status code. Response: { "import_set": "{import_set_id}", "staging_table": "x_mioms_microsoft_oms_incident", "result": [ { "transform_map": "OMS Incident", "table": "incident", "status": "error", "error_message": "{Target record not found|Invalid table|Invalid staging table" }"
+* **Error**: "Unexpected response from ServiceNow along with success status code. Response: { "import_set": "{import_set_id}", "staging_table": "x_mioms_microsoft_oms_incident", "result": [ { "transform_map": "OMS Incident", "table": "incident", "status": "error", "error_message": "{Target record not found|Invalid table|Invalid staging table" }"
**Cause**: Such an error is returned from ServiceNow when:
- * A custom script deployed in ServiceNow instance causes incidents to be ignored.
- * "OMS Integrator App" code itself was modified on ServiceNow side, e.g. the onBefore script.
+ * A custom script deployed in the ServiceNow instance causes incidents to be ignored.
+ * The "OMS Integrator App" code itself was modified on the ServiceNow side, for example, the onBefore script.
**Resolution**: Disable all custom scripts or code modifications of the data import path.
@@ -39,7 +39,7 @@ in this section you can find the common error that you can find in the status l
* **Error**: "ServiceDeskHttpBadRequestException: StatusCode=429"
- **Cause**: ServiceNow rate limits are too low.
+ **Cause**: ServiceNow rate limits are set too low, so requests from the connector are throttled.
**Resolution**: Increase or cancel the rate limits in ServiceNow instance as explained [here](https://docs.servicenow.com/bundle/london-application-development/page/integrate/inbound-rest/task/investigate-rate-limit-violations.html).
@@ -53,14 +53,14 @@ in this section you can find the common error that you can find in the status l
**Cause**: ITSM Connector was deleted.
- **Resolution**: The ITSM Connector was deleted but there are still ITSM Actions defined to use it. There are 2 options to solve this issue:
+ **Resolution**: The ITSM Connector was deleted, but there are still ITSM action groups associated with it. There are a few ways to solve this issue:
* Find and disable or delete such action * [Reconfigure the action group](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts) to use an existing ITSM Connector. * [Create a new ITSM connector](./itsmc-definition.md#create-an-itsm-connection) and [reconfigure the action group to use it](itsmc-definition.md#create-itsm-work-items-from-azure-alerts). ## UI Common Errors
-* **Error**:"Something went wrong. Could not get connection details."
+* **Error**:"Something went wrong. Could not get connection details." This error presented when the customer defines ITSM action group.
**Cause**: Newly created ITSM Connector has yet to finish the initial Sync.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-dashboard.md
@@ -49,6 +49,9 @@ The dashboard is split into four parts:
![Screenshot that shows impacted computers.](media/itsmc-resync-servicenow/itsm-dashboard-impacted-comp.png) 3. Connector status: The graph and the table below contain messages about the status of the connector. By clicking on the graph or on rows in the table, you can get further details on the connector status messages. The table contains a limited number of rows; if you would like to see the whole list, click **See all**.+
+ For details about the messages in the table, see [Errors in the connector status](itsmc-dashboard-errors.md).
+ ![Screenshot that shows connector status.](media/itsmc-resync-servicenow/itsm-dashboard-connector-status.png) 4. Alert rules: The tables contain the information on the number of alert rules that were detected. By clicking on rows in the tables, you can get further details on the rules that were detected.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-service-manager-script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-service-manager-script.md
@@ -313,6 +313,11 @@ if(!$err)
} ```
+## Troubleshoot Service Manager web app deployment
+
+- If you have problems with web app deployment, ensure that you have permissions to create/deploy resources in the subscription.
+- If you get an **Object reference not set to instance of an object** error when you run the [script](itsmc-service-manager-script.md), ensure that you entered valid values in the **User Configuration** section.
+- If you fail to create the service bus relay namespace, ensure that the required resource provider is registered in the subscription. If it's not registered, manually create the service bus relay namespace from the Azure portal. You can also create it when you [create the hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection) in the Azure portal.
+ ## Next steps [Configure the Hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection).-
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-troubleshoot-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-troubleshoot-overview.md
@@ -48,11 +48,36 @@ If you're using Service Map, you can view the service desk items created in ITSM
- Ensure that the web app is successfully deployed and that the hybrid connection is created. To verify the connection is successfully established with the on-premises Service Manager computer, go to the web app URL as described in the documentation for making the [hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection). - If Log Analytics alerts fire but work items aren't created in the ITSM product, if configuration items aren't created/linked to work items, or for other information, see these resources:
- - ITSMC: The solution shows a summary of connections, work items, computers, and more. Select the tile that has the **Connector Status** label. Doing so takes you to **Log Search** with the relevant query. Look at log records with a `LogType_S` of `ERROR` for more information.
+ - ITSMC: The solution shows a [summary of connections](itsmc-dashboard.md), work items, computers, and more. Select the tile that has the **Connector Status** label. Doing so takes you to **Log Search** with the relevant query. Look at log records with a `LogType_S` of `ERROR` for more information.
+ For details about the messages in the table, see [Errors in the connector status](itsmc-dashboard-errors.md).
- **Log Search** page: View the errors and related information directly by using the query `*ServiceDeskLog_CL*`.
-### Troubleshoot Service Manager web app deployment
+## Common symptoms and resolutions
-- If you have problems with web app deployment, ensure that you have permissions to create/deploy resources in the subscription.-- If you get an **Object reference not set to instance of an object** error when you run the [script](itsmc-service-manager-script.md), ensure that you entered valid values in the **User Configuration** section.-- If you fail to create the service bus relay namespace, ensure that the required resource provider is registered in the subscription. If it's not registered, manually create the service bus relay namespace from the Azure portal. You can also create it when you [create the hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection) in the Azure portal.\ No newline at end of file
+The following list contains common symptoms and how to resolve them:
+
+* **Symptom**: Duplicate work items are created
+
+ **Cause**: The cause can be one of the following:
+ * More than one ITSM action is defined for the alert.
+ * The alert was resolved.
+
+ **Resolution**: There are two things to check:
+ * Make sure that you have a single ITSM action group per alert.
+ * The ITSM Connector doesn't support updating the status of an existing work item when an alert is resolved; a new, resolved work item is created instead.
+* **Symptom**: Work items are not created
+
+ **Cause**: There are several possible reasons for this symptom:
+ * Code modifications on the ServiceNow side
+ * Misconfigured permissions
+ * ServiceNow rate limits are set too low, so requests from the connector are throttled
+ * The refresh token has expired
+ * The ITSM Connector was deleted
+
+ **Resolution**: Check the [dashboard](itsmc-dashboard.md) and review the errors in the connector status section. Then review the [common errors](itsmc-dashboard-errors.md) to find out how to resolve them.
+
+* **Symptom**: Unable to create ITSM Action for Action Group
+
+ **Cause**: A newly created ITSM Connector has yet to finish its initial sync.
+
+ **Resolution**: Review the [common UI errors](itsmc-dashboard-errors.md#ui-common-errors) to find out how to resolve it.
\ No newline at end of file
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-solution-architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
@@ -13,7 +13,7 @@ ms.workload: storage
ms.tgt_pltfrm: na ms.devlang: na ms.topic: conceptual
-ms.date: 01/11/2021
+ms.date: 01/19/2021
ms.author: b-juche --- # Solution architectures using Azure NetApp Files
@@ -72,7 +72,12 @@ This section provides references to SAP on Azure solutions.
* [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on SUSE Linux Enterprise Server](../virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse.md) * [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on Red Hat Enterprise Linux](../virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel.md)
+### SAP AnyDB
+
+* [Deploy SAP AnyDB (Oracle 19c) with Azure NetApp Files](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-anydb-oracle-19c-with-azure-netapp-files/ba-p/2064043)
+ ### SAP IQ-NLS+ * [Deploy SAP IQ-NLS HA Solution using Azure NetApp Files on SUSE Linux Enterprise Server](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-iq-nls-ha-solution-using-azure-netapp-files-on-suse/ba-p/1651172#.X2tDfpNzBh4.linkedin) ### SAP tech community and blog posts
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/lock-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/lock-resources.md
@@ -23,7 +23,7 @@ Resource Manager locks apply only to operations that happen in the management pl
## Considerations before applying locks
-Applying locks can lead to unexpected results because some operations that don't seem to modify the resource actually require actions that are blocked by the lock. Some common examples of the operations that are blocked by locks are:
+Applying locks can lead to unexpected results because some operations that don't seem to modify the resource actually require actions that are blocked by the lock. In particular, a read-only lock blocks any operation that requires a POST request to the Azure Resource Manager API. Some common examples of the operations that are blocked by locks are:
* A read-only lock on a **storage account** prevents all users from listing the keys. The list keys operation is handled through a POST request because the returned keys are available for write operations.
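To see why this is blocked, note that listing the keys is a POST call against the Resource Manager API. The following is a rough sketch of that call with `HttpClient`; the subscription, resource group, account name, bearer token, and `api-version` value are all placeholders assumed for illustration.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Illustrative only: listing storage account keys is a POST request, so a
// read-only lock rejects it. Every value below is a placeholder.
class ListKeysSketch
{
    static async Task Main()
    {
        var listKeysUrl =
            "https://management.azure.com/subscriptions/<subscription-id>" +
            "/resourceGroups/<resource-group>/providers/Microsoft.Storage" +
            "/storageAccounts/<account-name>/listKeys?api-version=2021-04-01"; // api-version assumed

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access-token>");

        // The POST below is the operation a read-only lock blocks.
        HttpResponseMessage response = await client.PostAsync(listKeysUrl, content: null);
        Console.WriteLine((int)response.StatusCode);
    }
}
```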
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/signalr-concept-internals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-concept-internals.md
@@ -36,7 +36,7 @@ Once the application server is started,
- For ASP.NET Core SignalR, Azure SignalR Service SDK opens 5 WebSocket connections per hub to SignalR Service. - For ASP.NET SignalR, Azure SignalR Service SDK opens 5 WebSocket connections per hub to SignalR Service, and one per application WebSocket connection.
-5 WebSocket connections is the default value that can be changed in [configuration](https://github.com/Azure/azure-signalr/blob/dev/docs/use-signalr-service.md#connectioncount).
+Five WebSocket connections per hub is the default value, which can be changed in [configuration](https://github.com/Azure/azure-signalr/blob/dev/docs/run-asp-net-core.md#connectioncount).
Messages to and from clients will be multiplexed into these connections.
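For reference, here's a minimal sketch of changing that count in an ASP.NET Core application, assuming the `Microsoft.Azure.SignalR` SDK's `ServiceOptions.ConnectionCount` setting described in the linked configuration doc:

```csharp
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR()
                .AddAzureSignalR(options =>
                {
                    // Raise the number of server connections per hub from the default of 5.
                    options.ConnectionCount = 10;
                });
    }
}
```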
@@ -84,4 +84,4 @@ SignalR Service transmits data from the client to the pairing application server
SignalR Service does not save or store customer data; all customer data received is transmitted to the target server or clients in real time. As you can see, the Azure SignalR Service is essentially a logical transport layer between the application server and clients. All persistent connections are offloaded to SignalR Service.
-Application server only needs to handle the business logic in hub class, without worrying about client connections.
\ No newline at end of file
+The application server only needs to handle the business logic in the hub class, without worrying about client connections.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/cost-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/cost-management.md
@@ -63,7 +63,7 @@ Azure SQL Database (with the exception of serverless) is billed on a predictable
### Using Monetary Credit with Azure SQL Database
-You can pay for Azure SQL Database charges with your EA monetary commitment credit. However, you can't use EA monetary commitment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+You can pay for Azure SQL Database charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
## Review estimated costs in the Azure portal
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/firewall-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/firewall-configure.md
@@ -38,6 +38,9 @@ You can configure server-level IP firewall rules by using the Azure portal, Powe
- To use the portal or PowerShell, you must be the subscription owner or a subscription contributor. - To use Transact-SQL, you must connect to the *master* database as the server-level principal login or as the Azure Active Directory administrator. (A server-level IP firewall rule must first be created by a user who has Azure-level permissions.)
+> [!NOTE]
+> By default, during creation of a new logical SQL server from the Azure portal, the **Allow Azure Services and resources to access this server** setting is set to **No**.
+ ### Database-level IP firewall rules Database-level IP firewall rules enable clients to access certain (secure) databases. You create the rules for each database (including the *master* database), and they're stored in the individual database.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/reserved-capacity-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/reserved-capacity-overview.md
@@ -51,7 +51,7 @@ For example, let's suppose that you are running one general purpose, Gen5 – 16
| Field | Description| |------------|--------------|
- |Subscription|The subscription used to pay for the capacity reservation. The payment method on the subscription is charged the upfront costs for the reservation. The subscription type must be an enterprise agreement (offer number MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer number MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's monetary commitment balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.|
+ |Subscription|The subscription used to pay for the capacity reservation. The payment method on the subscription is charged the upfront costs for the reservation. The subscription type must be an enterprise agreement (offer number MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer number MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.|
|Scope |The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select <br/><br/>**Shared**, the vCore reservation discount is applied to the database or managed instance running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.<br/><br/>**Single subscription**, the vCore reservation discount is applied to the databases or managed instances in this subscription. <br/><br/>**Single resource group**, the reservation discount is applied to the instances of databases or managed instances in the selected subscription and the selected resource group within that subscription.| |Region |The Azure region that's covered by the capacity reservation.| |Deployment Type|The SQL resource type that you want to buy the reservation for.|
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/reserved-instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reserved-instance.md
@@ -49,7 +49,7 @@ These requirements apply to buying a reserved dedicated host instance:
| Field | Description | | ------------ | ------------ |
- | Subscription | The subscription used to pay for the reservation. The payment method on the subscription is charged the costs for the reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P), Microsoft Customer Agreement, or an individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P). The charges are deducted from the monetary commitment balance, if available, or charged as overage. For a subscription with pay-as-you-go rates, the charges are billed to the subscription's credit card or an invoice payment method. |
+ | Subscription | The subscription used to pay for the reservation. The payment method on the subscription is charged the costs for the reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P), Microsoft Customer Agreement, or an individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P). The charges are deducted from the Azure Prepayment (previously called monetary commitment) balance, if available, or charged as overage. For a subscription with pay-as-you-go rates, the charges are billed to the subscription's credit card or an invoice payment method. |
| Scope | The reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select:<br><ul><li><b>Single resource group scope</b> - Applies the reservation discount to the matching resources in the selected resource group only.</li><li><b>Single subscription scope</b> - Applies the reservation discount to the matching resources in the selected subscription.</li><li><b>Shared scope</b> - Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context. For EA customers, the billing context is the enrollment. For individual subscriptions with pay-as-you-go rates, the billing scope is all eligible subscriptions created by the account administrator.</li></ul> | | Region | The Azure region that's covered by the reservation. | | Host Size | AV36 |
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-backup-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-backup-faq.md
@@ -57,6 +57,10 @@ If you've already configured the backup and must move from GRS to LRS, then see
Exporting data directly from the Recovery Services vault to on-premises using Data Box is not supported. Data must be restored to a storage account, and then it can be moved to on-premises via [Data Box](../databox/data-box-overview.md) or [Import/Export](../storage/common/storage-import-export-service.md).
+### What is the difference between a geo-redundant storage (GRS) vault with and without the Cross-Region Restore (CRR) capability enabled?
+
+In the case of a [GRS](azure-backup-glossary.md#grs) vault without [CRR](azure-backup-glossary.md#cross-region-restore-crr) capability enabled, the data in the secondary region can't be accessed until Azure declares a disaster in the primary region. In such a scenario, the restore happens from the secondary region. When CRR is enabled, even if the primary region is up and running, you can trigger a restore in the secondary region.
+ ## Azure Backup agent ### Where can I find common questions about the Azure Backup agent for Azure VM backup?
@@ -228,4 +232,4 @@ The key used to encrypt the backup data is present only on your site. Microsoft
Read the other FAQs: - [Common questions](backup-azure-vm-backup-faq.md) about Azure VM backups.-- [Common questions](backup-azure-file-folder-backup-faq.md) about the Azure Backup agent\ No newline at end of file
+- [Common questions](backup-azure-file-folder-backup-faq.md) about the Azure Backup agent
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-database-postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-database-postgresql.md
@@ -130,7 +130,7 @@ The following instructions are a step-by-step guide to configuring backup on the
1. Define **Retention** settings. You can add one or more retention rules. Each retention rule assumes inputs for specific backups, and data store and retention duration for those backups.
-1. You can choose to store your backups in one of the two data stores (or tiers): **Backup data store** (hot tier) or **Archive data store** (in preview). You can choose between **two tiering options** to define when the backups are tiered across the two datastores:
+1. You can choose to store your backups in one of the two data stores (or tiers): **Backup data store** (standard tier) or **Archive data store** (in preview). You can choose between **two tiering options** to define when the backups are tiered across the two datastores:
- Choose to copy **Immediately** if you prefer to have a backup copy in both backup and archive data stores simultaneously. - Choose to move **On-expiry** if you prefer to move the backup to archive data store upon its expiry in the backup data store.
backup https://docs.microsoft.com/en-us/azure/backup/backup-managed-disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-managed-disks.md
@@ -125,6 +125,8 @@ The following prerequisites are required to configure backup of managed disks:
![Add disk snapshot contributor role](./media/backup-managed-disks/disk-snapshot-contributor-role.png)
+1. If the disk to be backed up is encrypted with [customer-managed keys (CMK)](https://docs.microsoft.com/azure/virtual-machines/disks-enable-customer-managed-keys-portal) or with [double encryption using platform-managed keys and customer-managed keys](https://docs.microsoft.com/azure/virtual-machines/disks-enable-double-encryption-at-rest-portal), then assign the **Reader** role permission to the Backup Vault's managed identity on the **Disk Encryption Set** resource.
+ 1. Verify that the backup vault's managed identity has the right set of role assignments on the source disk and resource group that serves as the snapshot datastore. 1. Go to **Backup vault - > Identity** and select **Azure role assignments**.
backup https://docs.microsoft.com/en-us/azure/backup/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/whats-new.md
@@ -13,6 +13,9 @@ You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- January 2021
+ - [Azure Disk Backup (in preview)](disk-backup-overview.md)
+ - [Encryption at rest using customer-managed keys now generally available](encryption-at-rest-with-cmk.md)
- November 2020 - [Azure Resource Manager template for Azure file share (AFS) backup](#azure-resource-manager-template-for-afs-backup) - [Incremental backups for SAP HANA databases on Azure VMs](#incremental-backups-for-sap-hana-databases)
@@ -27,6 +30,18 @@ You can learn more about the new releases by bookmarking this page or by [subscr
- [Zone redundant storage (ZRS) for backup data](#zone-redundant-storage-zrs-for-backup-data) - [Soft delete for SQL Server and SAP HANA workloads in Azure VMs](#soft-delete-for-sql-server-and-sap-hana-workloads)
+## Azure Disk Backup (in preview)
+
+Azure Disk Backup offers a turnkey solution that provides snapshot lifecycle management for [Azure Managed Disks](https://docs.microsoft.com/azure/virtual-machines/managed-disks-overview) by automating periodic creation of snapshots and retaining them for a configured duration using a backup policy. You can manage the disk snapshots with zero infrastructure cost and without the need for custom scripting or any management overhead. This is a crash-consistent backup solution that takes point-in-time backups of a managed disk using [incremental snapshots](https://docs.microsoft.com/azure/virtual-machines/windows/disks-incremental-snapshots) with support for multiple backups per day. It's also an agentless solution and doesn't impact production application performance. It supports backup and restore of both OS and data disks (including shared disks), whether or not they're currently attached to a running Azure virtual machine.
+
+For more information, see [Azure Disk Backup (in preview)](disk-backup-overview.md).
+
+## Encryption at rest using customer-managed keys
+
+Support for encryption at rest using customer-managed keys is now generally available. This gives you the ability to encrypt the backup data in your Recovery Services vaults using your own keys stored in Azure Key Vaults. The encryption key used for encrypting backups in the Recovery Services vault may be different from the ones used for encrypting the source. The data is protected using an AES 256 based data encryption key (DEK), which is, in turn, protected using your keys stored in the Key Vault. Compared to encryption using platform-managed keys (which is available by default), this gives you more control over your keys and can help you better meet your compliance needs.
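To make the envelope-encryption idea above concrete, here is a purely conceptual sketch in C#. It is not how Azure Backup or Key Vault implement the feature; an in-memory RSA key simply stands in for the customer-managed key, and the snippet requires .NET 6 or later for `EncryptCbc`.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Conceptual only: data encrypted with an AES-256 DEK, DEK wrapped by a key you control.
class EnvelopeEncryptionSketch
{
    static void Main()
    {
        byte[] backupData = Encoding.UTF8.GetBytes("example backup payload");

        // 1. Generate a random AES-256 data encryption key (DEK) and encrypt the data.
        using Aes dek = Aes.Create();
        dek.KeySize = 256;
        byte[] encryptedData = dek.EncryptCbc(backupData, dek.IV);

        // 2. Wrap the DEK with a key the customer controls (stand-in for a Key Vault key).
        using RSA customerManagedKey = RSA.Create(2048);
        byte[] wrappedDek = customerManagedKey.Encrypt(dek.Key, RSAEncryptionPadding.OaepSHA256);

        // Only the holder of the customer-managed key can unwrap the DEK and decrypt the data.
        Console.WriteLine($"Encrypted data: {encryptedData.Length} bytes, wrapped DEK: {wrappedDek.Length} bytes");
    }
}
```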
+
+For more information, see [Encryption of backup data using customer-managed keys](encryption-at-rest-with-cmk.md).
+ ## Azure Resource Manager template for AFS backup Azure Backup now supports configuring backup for existing Azure file shares using an Azure Resource Manager (ARM) template. The template configures protection for an existing Azure file share by specifying appropriate details for the Recovery Services vault and backup policy. It optionally creates a new Recovery Services vault and backup policy, and registers the storage account containing the file share to the Recovery Services vault.
cloud-services https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-guestos-msrc-releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
@@ -10,7 +10,7 @@ ms.service: cloud-services
ms.topic: article ms.tgt_pltfrm: na ms.workload: tbd
-ms.date: 1/18/2021
+ms.date: 1/19/2021
ms.author: yohaddad ---
@@ -19,7 +19,7 @@ The following tables show the Microsoft Security Response Center (MSRC) updates
## January 2021 Guest OS ">[!NOTE]
->The January Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the April Guest OS. This list is subject to change."
+>The January Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the January Guest OS. This list is subject to change."
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | --- | --- | --- | --- | --- |
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/select-domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/select-domain.md
@@ -20,10 +20,11 @@ From the settings tab of your Custom Vision project, you can select a domain for
|Domain|Purpose| |---|---|
-|__General__| Optimized for a broad range of image classification tasks. If none of the other domains are appropriate, or you're unsure of which domain to choose, select the General domain. ID: `ee85a74c-405e-4adc-bb47-ffa8ca0c9f31`|
+|__General__| Optimized for a broad range of image classification tasks. If none of the other domains are appropriate, or if you're unsure of which domain to choose, select the General domain. ID: `ee85a74c-405e-4adc-bb47-ffa8ca0c9f31`|
+|__General [A1]__| Optimized for better accuracy with comparable inference time to the General domain. Recommended for larger datasets or more difficult user scenarios. This domain requires more training time. ID: `a8e3c40f-fb4a-466f-832a-5e457ae4a344`|
|__Food__|Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the Food domain. ID: `c151d5b5-dd07-472a-acc8-15d29dea8518`| |__Landmarks__|Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph. This domain works even if the landmark is slightly obstructed by people in front of it. ID: `ca455789-012d-4b50-9fec-5bb63841c793`|
-|__Retail__|Optimized for images that are found in a shopping catalog or shopping website. If you want high precision classifying between dresses, pants, and shirts, use this domain. ID: `b30a91ae-e3c1-4f73-a81e-c270bff27c39`|
+|__Retail__|Optimized for images that are found in a shopping catalog or shopping website. If you want high-precision classifying between dresses, pants, and shirts, use this domain. ID: `b30a91ae-e3c1-4f73-a81e-c270bff27c39`|
|__Compact domains__| Optimized for the constraints of real-time classification on edge devices.| ## Object Detection
@@ -31,6 +32,7 @@ From the settings tab of your Custom Vision project, you can select a domain for
|Domain|Purpose| |---|---| |__General__| Optimized for a broad range of object detection tasks. If none of the other domains are appropriate, or you are unsure of which domain to choose, select the General domain. ID: `da2e3a8a-40a5-4171-82f4-58522f70fbc1`|
+|__General [A1]__| Optimized for better accuracy with comparable inference time to the General domain. Recommended for more accurate region location needs, larger datasets, or more difficult user scenarios. This domain requires more training time, and results are not deterministic: expect a ±1% mAP difference with the same training data provided. ID: `9c616dff-2e7d-ea11-af59-1866da359ce6`|
|__Logo__|Optimized for finding brand logos in images. ID: `1d8ffafe-ec40-4fb2-8f90-72b3b6cecea4`| |__Products on shelves__|Optimized for detecting and classifying products on shelves. ID: `3780a898-81c3-4516-81ae-3a139614e1f3`| |__Compact domains__| Optimized for the constraints of real-time object detection on edge devices.|
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Face/Face-API-How-to-Topics/how-to-mitigate-latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-mitigate-latency.md new file mode 100644
@@ -0,0 +1,88 @@
+---
+title: How to mitigate latency when using the Face service
+titleSuffix: Azure Cognitive Services
+description: Learn how to mitigate latency when using the Face service.
+services: cognitive-services
+author: v-jaswel
+manager: chrhoder
+ms.service: cognitive-services
+ms.topic: conceptual
+ms.date: 1/5/2021
+ms.author: v-jawe
+---
+
+# How to: mitigate latency when using the Face service
+
+You may encounter latency when using the Face service. Latency refers to any kind of delay that occurs when communicating over a network. In general, possible causes of latency include:
+- The physical distance each packet must travel from source to destination.
+- Problems with the transmission medium.
+- Errors in routers or switches along the transmission path.
+- The time required by antivirus applications, firewalls, and other security mechanisms to inspect packets.
+- Malfunctions in client or server applications.
+
+This topic talks about possible causes of latency specific to using the Azure Cognitive Services, and how you can mitigate these causes.
+
+> [!NOTE]
+> Azure Cognitive Services do not provide any Service Level Agreement (SLA) regarding latency.
+
+## Possible causes of latency
+
+### Slow connection between the Cognitive Service and a remote URL
+
+Some Azure Cognitive Services provide methods that obtain data from a remote URL that you provide. For example, when you call the [DetectWithUrlAsync method](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync?view=azure-dotnet#Microsoft_Azure_CognitiveServices_Vision_Face_FaceOperationsExtensions_DetectWithUrlAsync_Microsoft_Azure_CognitiveServices_Vision_Face_IFaceOperations_System_String_System_Nullable_System_Boolean__System_Nullable_System_Boolean__System_Collections_Generic_IList_System_Nullable_Microsoft_Azure_CognitiveServices_Vision_Face_Models_FaceAttributeType___System_String_System_Nullable_System_Boolean__System_String_System_Threading_CancellationToken_) of the Face service, you can specify the URL of an image in which the service tries to detect faces.
+
+```csharp
+var faces = await client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg");
+```
+
+The Face service must then download the image from the remote server. If the connection from the Face service to the remote server is slow, that will impact the response time of the Detect method.
+
+To mitigate this, consider [storing the image in Azure Premium Blob Storage](https://docs.microsoft.com/azure/storage/blobs/storage-upload-process-images?tabs=dotnet).
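A rough sketch of that mitigation, assuming the `Azure.Storage.Blobs` v12 client library; the connection string, container, blob name, and local path are placeholders. The idea is to upload the image to storage in the same region as your Face resource and then pass its URL (public or SAS-protected) to the detect call.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

// Sketch only: stage the image in blob storage close to the Face resource,
// then detect faces from that URL instead of a slow remote server.
class UploadThenDetectSketch
{
    static async Task Main()
    {
        var blob = new BlobClient("<storage-connection-string>", "images", "face.jpg");
        await blob.UploadAsync(@"C:\images\face.jpg", overwrite: true);

        // Pass blob.Uri (or a SAS URL if the container is private) to DetectWithUrlAsync.
        Console.WriteLine(blob.Uri);
    }
}
```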
+
+### Large upload size
+
+Some Azure Cognitive Services provide methods that obtain data from a file that you upload. For example, when you call the [DetectWithStreamAsync method](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync?view=azure-dotnet#Microsoft_Azure_CognitiveServices_Vision_Face_FaceOperationsExtensions_DetectWithStreamAsync_Microsoft_Azure_CognitiveServices_Vision_Face_IFaceOperations_System_IO_Stream_System_Nullable_System_Boolean__System_Nullable_System_Boolean__System_Collections_Generic_IList_System_Nullable_Microsoft_Azure_CognitiveServices_Vision_Face_Models_FaceAttributeType___System_String_System_Nullable_System_Boolean__System_String_System_Threading_CancellationToken_) of the Face service, you can upload an image in which the service tries to detect faces.
+
+```csharp
+using FileStream fs = File.OpenRead(@"C:\images\face.jpg");
+System.Collections.Generic.IList<DetectedFace> faces = await client.Face.DetectWithStreamAsync(fs, detectionModel: DetectionModel.Detection02);
+```
+
+If the file to upload is large, that will impact the response time of the `DetectWithStreamAsync` method, for the following reasons:
+- It takes longer to upload the file.
+- It takes the service longer to process the file, in proportion to the file size.
+
+Mitigations:
+- Consider [storing the image in Azure Premium Blob Storage](https://docs.microsoft.com/azure/storage/blobs/storage-upload-process-images?tabs=dotnet).
+- Consider uploading a smaller file (see the resize sketch after this list).
+ - See the guidelines regarding [input data for face detection](https://docs.microsoft.com/azure/cognitive-services/face/concepts/face-detection#input-data) and [input data for face recognition](https://docs.microsoft.com/azure/cognitive-services/face/concepts/face-recognition#input-data).
+ - For face detection, when using detection model `DetectionModel.Detection01`, reducing the image file size will increase processing speed. When using detection model `DetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080.
+ - For face recognition, reducing the face size to 200x200 pixels does not affect the accuracy of the recognition model.
+ - The performance of the `DetectWithUrlAsync` and `DetectWithStreamAsync` methods also depends on how many faces are in an image. The Face service can return up to 100 faces for an image. Faces are ranked by face rectangle size from large to small.
+ - If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. For example, if you need to detect faces in two images to perform a face comparison:
+```csharp
+var faces_1 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg");
+var faces_2 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1NDY3OTIxMzExNzM3NjE3/john-f-kennedy---debating-richard-nixon.jpg");
+Task.WaitAll (new Task<IList<DetectedFace>>[] { faces_1, faces_2 });
+IEnumerable<DetectedFace> results = faces_1.Result.Concat (faces_2.Result);
+```
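Below is a rough sketch of the "upload a smaller file" mitigation mentioned earlier in this list, assuming the third-party SixLabors.ImageSharp library (not part of the Face SDK); the file path and the 1920 x 1080 cap are illustrative.

```csharp
using System.IO;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

// Sketch only: downscale the image before upload so less data crosses the network.
using (Image image = Image.Load(@"C:\images\face.jpg"))
{
    image.Mutate(x => x.Resize(new ResizeOptions
    {
        Mode = ResizeMode.Max,        // preserve the aspect ratio
        Size = new Size(1920, 1080)   // cap the dimensions
    }));

    using var resized = new MemoryStream();
    image.SaveAsJpeg(resized);
    resized.Position = 0;

    // Pass 'resized' to client.Face.DetectWithStreamAsync(resized, ...) as in the earlier example.
}
```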
+
+### Slow connection between your compute resource and the Face service
+
+If your computer has a slow connection to the Face service, that will impact the response time of service methods.
+
+Mitigations:
+- When you create your Face subscription, make sure to choose the region closest to where your application is hosted.
+- If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. See the previous section for an example.
+
+## Next steps
+
+In this guide, you learned how to mitigate latency when using the Face service. Next, learn how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively.
+
+> [!div class="nextstepaction"]
+> [Example: Use the large-scale feature](how-to-use-large-scale.md)
+
+## Related topics
+
+- [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
+- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/client/faceapi?view=azure-dotnet)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Face/concepts/face-detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/concepts/face-detection.md
@@ -60,7 +60,9 @@ Use the following tips to make sure that your input images give the most accurat
* The supported input image formats are JPEG, PNG, GIF for the first frame, and BMP. * The image file size should be no larger than 6 MB.
-* The detectable face size range is 36 x 36 to 4096 x 4096 pixels. Faces outside of this range won't be detected.
+* The minimum detectable face size is 36 x 36 pixels in an image that is no larger than 1920 x 1080 pixels. Images larger than 1920 x 1080 pixels have a proportionally larger minimum face size; for example, in a 3840 x 2160 image the minimum detectable face size is roughly 72 x 72 pixels. Reducing the face size might cause some faces not to be detected, even if they are larger than the minimum detectable face size.
+* The maximum detectable face size is 4096 x 4096 pixels.
+* Faces outside the size range of 36 x 36 to 4096 x 4096 pixels will not be detected.
* Some faces might not be detected because of technical challenges. Extreme face angles (head pose) or face occlusion (objects such as sunglasses or hands that block part of the face) can affect detection. Frontal and near-frontal faces give the best results. If you're detecting faces from a video feed, you may be able to improve performance by adjusting certain settings on your video camera:
@@ -76,4 +78,4 @@ If you're detecting faces from a video feed, you may be able to improve performa
Now that you're familiar with face detection concepts, learn how to write a script that detects faces in a given image.
-* [Detect faces in an image](../Face-API-How-to-Topics/HowtoDetectFacesinImage.md)
\ No newline at end of file
+* [Detect faces in an image](../Face-API-How-to-Topics/HowtoDetectFacesinImage.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/direct-line-speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/direct-line-speech.md
@@ -18,7 +18,7 @@ ms.author: trbye
[Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech-to-text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text-to-speech](text-to-speech.md).
-Direct Line Speech offers the highest levels of customization and sophistication for voice assistants. It's designed for conversational scenarios that are open-ended, natural, or hybrids of the two with task completion or command-and-control use. This high degree of flexibility comes with a greater complexity, and scenarios that are scoped to well-defined tasks using natural language input may want to consider [Custom Commands (Preview)](custom-commands.md) for a streamlined solution experience.
+Direct Line Speech offers the highest levels of customization and sophistication for voice assistants. It's designed for conversational scenarios that are open-ended, natural, or hybrids of the two with task completion or command-and-control use. This high degree of flexibility comes with greater complexity, so for scenarios that are scoped to well-defined tasks using natural language input, consider [Custom Commands](custom-commands.md) for a more streamlined solution experience.
## Getting started with Direct Line Speech
@@ -40,7 +40,7 @@ We also offer quickstarts designed to have you running code and learning the API
Sample code for creating a voice assistant is available on GitHub. These samples cover the client application for connecting to your assistant in several popular programming languages.
-* [Voice assistant samples (SDK)](https://aka.ms/csspeech/samples)
+* [Voice assistant samples (SDK)](https://aka.ms/csspeech/samples/#voice-assistants-quickstarts)
* [Tutorial: Voice enable your assistant with the Speech SDK, C#](tutorial-voice-enable-your-bot-speech-sdk.md) ## Customization
@@ -62,4 +62,4 @@ Direct Line Speech and its associated functionality for voice assistants are an
* [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free) * [Get the Speech SDK](speech-sdk.md) * [Create and deploy a basic bot](/azure/bot-service/bot-builder-tutorial-basic-deploy?view=azure-bot-service-4.0)
-* [Get the Virtual Assistant Solution and Enterprise Template](https://github.com/Microsoft/AI)
\ No newline at end of file
+* [Get the Virtual Assistant Solution and Enterprise Template](https://github.com/Microsoft/AI)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/faq-voice-assistants https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-voice-assistants.md
@@ -1,7 +1,7 @@
--- title: Voice assistants frequently asked questions titleSuffix: Azure Cognitive Services
-description: Get answers to the most popular questions about voice assistants using Custom Commands (Preview) or the Direct Line Speech channel.
+description: Get answers to the most popular questions about voice assistants using Custom Commands or the Direct Line Speech channel.
services: cognitive-services author: trrwilson manager: nitinme
@@ -20,11 +20,11 @@ If you can't find answers to your questions in this document, check out [other s
**Q: What is a voice assistant?**
-**A:** Like Cortana, a voice assistant is a solution that listens to a user's spoken utterances, analyzes the contents of those utterances for meaning, performs one or more actions in response to the utterance's intent, and then provides a response to the user that often includes a spoken component. It's a "voice-in, voice-out" experience for interacting with a system. voice assistant authors create an on-device application using the `DialogServiceConnector` in the Speech SDK to communicate with an assistant created using [Custom Commands (Preview)](custom-commands.md) or the [Direct Line Speech](direct-line-speech.md) channel of the Bot Framework. These assistants can use custom keywords, custom speech, and custom voice to provide an experience tailored to your brand or product.
+**A:** Like Cortana, a voice assistant is a solution that listens to a user's spoken utterances, analyzes the contents of those utterances for meaning, performs one or more actions in response to the utterance's intent, and then provides a response to the user that often includes a spoken component. It's a "voice-in, voice-out" experience for interacting with a system. Voice assistant authors create an on-device application using the `DialogServiceConnector` in the Speech SDK to communicate with an assistant created using [Custom Commands](custom-commands.md) or the [Direct Line Speech](direct-line-speech.md) channel of the Bot Framework. These assistants can use custom keywords, custom speech, and custom voice to provide an experience tailored to your brand or product.
-**Q: Should I use Custom Commands (Preview) or Direct Line Speech? What's the difference?**
+**Q: Should I use Custom Commands or Direct Line Speech? What's the difference?**
-**A:** [Custom Commands (Preview)](custom-commands.md) is a lower-complexity set of tools to easily create and host an assistant that's well-suited to task completion scenarios. [Direct Line Speech](direct-line-speech.md) provides richer, more sophisticated capabilities that can enable robust conversational scenarios. See the [comparison of assistant solutions](voice-assistants.md#choosing-an-assistant-solution) for more information.
+**A:** [Custom Commands](custom-commands.md) is a lower-complexity set of tools to easily create and host an assistant that's well-suited to task completion scenarios. [Direct Line Speech](direct-line-speech.md) provides richer, more sophisticated capabilities that can enable robust conversational scenarios. See the [comparison of assistant solutions](voice-assistants.md#choosing-an-assistant-solution) for more information.
**Q: How do I get started?**
@@ -56,7 +56,7 @@ For a more detailed guide, please see the [tutorial section](tutorial-voice-enab
**A:** This error indicates a communication problem between your assistant and the voice assistant service. -- For Custom Commands (Preview), ensure that your Custom Commands (Preview) Application is published
+- For Custom Commands, ensure that your Custom Commands Application is published
- For Direct Line Speech, ensure that you've [connected your bot to the Direct Line Speech channel](/azure/bot-service/bot-service-channel-connect-directlinespeech), [added Streaming protocol support](/azure/bot-service/directline-speech-bot) to your bot (with the related Web Socket support), and then check that your bot is responding to incoming requests from the channel. **Q: This code still doesn't work and/or I'm getting a different error when using a `DialogServiceConnector`. What should I do?**
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-commands-setup-web-endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-web-endpoints.md
@@ -1,7 +1,7 @@
--- title: 'Set up web endpoints (Preview)' titleSuffix: Azure Cognitive Services
-description: set up web endpoints for custom commands
+description: set up web endpoints for custom commands
services: cognitive-services author: xiaojul manager: yetian
@@ -18,7 +18,7 @@ In this article, you will learn how to setup web endpoints in a Custom Commands
- Set up web endpoints in Custom Commands application - Call web endpoints in Custom Commands application-- Receive the web endpoints response
+- Receive the web endpoints response
- Integrate the web endpoints response into a custom JSON payload, send, and visualize it from a C# UWP Speech SDK client application ## Prerequisites
@@ -32,7 +32,7 @@ In this article, you will learn how to setup web endpoints in a Custom Commands
## Setup web endpoints
-1. Open the Custom Commands application you previously created.
+1. Open the Custom Commands application you previously created.
1. Go to "Web endpoints", click "New web endpoint". > [!div class="mx-imgBorder"]
@@ -58,7 +58,7 @@ In this article, you will learn how to setup web endpoints in a Custom Commands
1. Go to **TurnOnOff** command, select **ConfirmationResponse** under completion rule, then select **Add an action**. 1. Under **New Action-Type**, select **Call web endpoint** 1. In **Edit Action - Endpoints**, select **UpdateDeviceState**, which is the web endpoint we created.
-1. In **Configuration**, put the following values:
+1. In **Configuration**, put the following values:
> [!div class="mx-imgBorder"] > ![Call web endpoints action parameters](media/custom-commands/setup-web-endpoint-edit-action-parameters.png)
@@ -72,16 +72,16 @@ In this article, you will learn how to setup web endpoints in a Custom Commands
> - The suggested query parameters are only needed for the example endpoint 1. In **On Success - Action to execute**, select **Send speech response**.
-
+ In **Simple editor**, enter `{SubjectDevice} is {OnOff}`.
-
+ > [!div class="mx-imgBorder"] > ![Screenshot that shows the On Success - Action to execute screen.](media/custom-commands/setup-web-endpoint-edit-action-on-success-send-response.png) | Setting | Suggested value | Description | | ------- | --------------- | ----------- | | Action to execute | Send speech response | Action to execute if the request to web endpoint succeeds |
-
+ > [!NOTE] > - You can also directly access the fields in the http response by using `{YourWebEndpointName.FieldName}`. For example: `{UpdateDeviceState.TV}`
@@ -98,7 +98,7 @@ In this article, you will learn how to setup web endpoints in a Custom Commands
> [!NOTE] > - `{WebEndpointErrorMessage}` is optional. You are free to remove it if you don't want to expose any error message.
- > - Within our example endpoint, we send back http response with detailed error messages for common errors such as missing header parameters.
+ > - Within our example endpoint, we send back http response with detailed error messages for common errors such as missing header parameters.
### Try it out in test portal - On Success response\
@@ -116,7 +116,7 @@ In [How-to: Send activity to client application (Preview)](./how-to-custom-comma
However, in most of the cases you only want to send activity to the client application when the call to the web endpoint is successful. In this example, this is when the device's state is successfully updated. 1. Delete the **Send activity to client** action you previously added.
-1. Edit call web endpoint:
+1. Edit call web endpoint:
1. In **Configuration**, make sure **Query Parameters** is `item={SubjectDevice}&&value={OnOff}` 1. In **On Success**, change **Action to execute** to **Send activity to client** 1. Copy the JSON below to the **Activity Content**
@@ -130,7 +130,6 @@ However, in most of the cases you only want to send activity to the client appli
} } ```
-
Now you only send activity to client when the request to web endpoint is successful. ### Create visuals for syncing device state
@@ -144,7 +143,7 @@ Add the following XML to `MainPage.xaml` above the `"EnableMicrophoneButton"` bl
.........../> ```
-### Sync device state
+### Sync device state
In `MainPage.xaml.cs`, add the reference `using Windows.Web.Http;`. Add the following code to the `MainPage` class. This method will send a GET request to the example endpoint, and extract the current device state for your app. Make sure to change `<your_app_name>` to what you used in the **header** in the Custom Commands web endpoint.
@@ -154,7 +153,7 @@ private async void SyncDeviceState_ButtonClicked(object sender, RoutedEventArgs
//Create an HTTP client object var httpClient = new HttpClient();
- //Add a user-agent header to the GET request.
+ //Add a user-agent header to the GET request.
var your_app_name = "<your-app-name>"; Uri endpoint = new Uri("https://webendpointexample.azurewebsites.net/api/DeviceState");
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-voice-create-voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
@@ -47,17 +47,22 @@ The following table shows the processing states for imported datasets:
After validation is complete, you can see the total number of matched utterances for each of your datasets in the **Utterances** column. If the data type you have selected requires long-audio segmentation, this column only reflects the utterances we have segmented for you either based on your transcripts or through the speech transcription service. You can also download the validated dataset to view the detailed results of the successfully imported utterances and their mapping transcripts. Hint: long-audio segmentation can take more than an hour to complete data processing.
-For en-US and zh-CN datasets, you can further download a report to check the pronunciation scores and the noise level for each of your recordings. The pronunciation score ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and impact the generated digital voice.
+In the data detail view, you can further check the pronunciation scores and the noise level for each of your datasets. The pronunciation score ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and impact the generated digital voice.
A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You can typically reach a 50+ SNR by recording at professional studios. Audio with an SNR below 20 can result in obvious noise in your generated voice. Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, you might exclude those utterances from your dataset.
+> [!NOTE]
+> If you are using Custom Neural Voice, you must register your voice talent in the **Voice Talent** tab. When preparing your recording script, make sure you include the sentence below to obtain the voice talent's acknowledgment that their voice data will be used to create a TTS voice model and generate synthetic speech.
+“I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice.”
+This sentence will be used to verify that the recordings in your training datasets were made by the same person who gave the consent. [Read more about how your data will be processed and how voice talent verification is done here](https://aka.ms/CNV-data-privacy).
+ ## Build your custom voice model After your dataset has been validated, you can use it to build your custom voice model.
-1. Navigate to **Text-to-Speech > Custom Voice > [name of project] > Training**.
+1. Navigate to **Text-to-Speech > Custom Voice > [name of project] > Model**.
2. Click **Train model**.
@@ -67,15 +72,22 @@ After your dataset has been validated, you can use it to build your custom voice
A common use of the **Description** field is to record the names of the datasets that were used to create the model.
-4. From the **Select training data** page, choose one or multiple datasets that you would like to use for training. Check the number of utterances before you submit them. You can start with any number of utterances for en-US and zh-CN voice models. For other locales, you must select more than 2,000 utterances to be able to train a voice.
+4. From the **Select training data** page, choose one or multiple datasets that you would like to use for training. Check the number of utterances before you submit them. You can start with any number of utterances for en-US and zh-CN voice models using the "Adaptive" training method. For other locales, you must select more than 2,000 utterances to train a voice using the standard tier (the "Statistical parametric" and "Concatenative" training methods), and more than 300 utterances to train a custom neural voice.
> [!NOTE] > Duplicate audio names will be removed from the training. Make sure the datasets you select do not contain the same audio names across multiple .zip files. > [!TIP]
- > Using the datasets from the same speaker is required for quality results. When the datasets you have submitted for training contain a total number of less than 6,000 distinct utterances, you will train your voice model through the Statistical Parametric Synthesis technique. In the case where your training data exceeds a total number of 6,000 distinct utterances, you will kick off a training process with the Concatenation Synthesis technique. Normally the concatenation technology can result in more natural, and higher-fidelity voice results. [Contact the Custom Voice team](https://go.microsoft.com/fwlink/?linkid=2108737) if you want to train a model with the latest Neural TTS technology that can produce a digital voice equivalent to the publicly available [neural voices](language-support.md#neural-voices).
+ > Using the datasets from the same speaker is required for quality results. Different training methods require different training data sizes. To train a model with the "Statistical parametric" method, at least 2,000 distinct utterances are required. For the "Concatenative" method, it's 6,000 utterances, while for "Neural", the minimum data size requirement is 300 utterances.
-5. Click **Train** to begin creating your voice model.
+5. Select the **training method** in the next step.
+
+ > [!NOTE]
+ > If you would like to train a neural voice, you must specify a voice talent profile with an audio consent file in which the voice talent acknowledges that their speech data will be used to train a custom voice model. Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](https://aka.ms/gating-overview) and [apply for access here](https://aka.ms/customneural).
+
+ On this page you can also choose to upload a script for testing. The testing script must be a .txt file smaller than 1 MB. Supported encoding formats include ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, and UTF-16-BE. Each paragraph of the script results in a separate audio file. If you want to combine all sentences into one audio file, put them in a single paragraph.
+
+6. Click **Train** to begin creating your voice model.
The Training table displays a new entry that corresponds to this newly created model. The table also displays the status: Processing, Succeeded, Failed.
@@ -87,11 +99,14 @@ The status that's shown reflects the process of converting your dataset to a voi
| Succeeded | Your voice model has been created and can be deployed. | | Failed | Your voice model failed in training. This can be caused by many reasons, for example unseen data problems or network issues. |
-Training time varies depending on the volume of audio data processed. Typical times range from about 30 minutes for hundreds of utterances to 40 hours for 20,000 utterances. Once your model training is succeeded, you can start to test it.
+Training time varies depending on the volume of audio data processed and the training method you have selected. It can range from 30 minutes to 40 hours. Once your model training has succeeded, you can start to test it.
> [!NOTE] > Free subscription (F0) users can train one voice font simultaneously. Standard subscription (S0) users can train three voices simultaneously. If you reach the limit, wait until at least one of your voice fonts finishes training, and then try again.
+> [!NOTE]
+> Training of custom neural voices is not free. Check the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) here.
+ > [!NOTE] > The maximum number of voice models allowed to be trained per subscription is 10 models for free subscription (F0) users and 100 for standard subscription (S0) users.
@@ -99,33 +114,28 @@ If you are using the neural voice training capability, you can select to train a
## Test your voice model
-After your voice font is successfully built, you can test it before deploying it for use.
+Each training will automatically generate 100 sample audio files to help you test the model. After your voice model is successfully built, you can test it before deploying it for use.
-1. Navigate to **Text-to-Speech > Custom Voice > [name of project] > Testing**.
+1. Navigate to **Text-to-Speech > Custom Voice > [name of project] > Model**.
-2. Click **Add test**.
+2. Click the name of the model you would like to test.
-3. Select one or multiple models that you would like to test.
+3. On the model detail page, you can find the sample audio files under the **Testing** tab.
-4. Provide the text you want the voice(s) to speak. If you have selected to test multiple models at one time, the same text will be used for the testing for different models.
-
- > [!NOTE]
- > The language of your text must be the same as the language of your voice font. Only successfully trained models can be tested. Only plain text is supported in this step.
-
-5. Click **Create**.
-
-Once you have submitted your test request, you will return to the test page. The table now includes an entry that corresponds to your new request and the status column. It can take a few minutes to synthesize speech. When the status column says **Succeeded**, you can play the audio, or download the text input (a .txt file) and audio output (a .wav file), and further audition the latter for quality.
-
-You can also find the test results in the detail page of each models you have selected for testing. Go to the **Training** tab, and click the model name to enter the model detail page.
+The quality of the voice depends on a number of factors, including the size of the training data, the quality of the recording, the accuracy of the transcript file, how well the recorded voice in the training data matches the personality of the designed voice for your intended use case, and more. [Check here to learn more about the capabilities and limits of our technology and the best practices to improve your model quality](https://aka.ms/CNV-limits).
## Create and use a custom voice endpoint After you've successfully created and tested your voice model, you deploy it in a custom Text-to-Speech endpoint. You then use this endpoint in place of the usual endpoint when making Text-to-Speech requests through the REST API. Your custom endpoint can be called only by the subscription that you have used to deploy the font.
-To create a new custom voice endpoint, go to **Text-to-Speech > Custom Voice > Deployment**. Select **Add endpoint** and enter a **Name** and **Description** for your custom endpoint. Then select the custom voice model you would like to associate with this endpoint.
+To create a new custom voice endpoint, go to **Text-to-Speech > Custom Voice > Endpoint**. Select **Add endpoint** and enter a **Name** and **Description** for your custom endpoint. Then select the custom voice model you would like to associate with this endpoint.
After you have clicked the **Add** button, in the endpoint table, you will see an entry for your new endpoint. It may take a few minutes to instantiate a new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
+You can **Suspend** and **Resume** your endpoint if you don't use it all the time. When an endpoint is reactivated after suspension, the endpoint URL is kept the same, so you don't need to change the code in your apps.
+
+You can also update the endpoint to a new model. To change the model, make sure the new model is named the same as the one you want to update.
+ > [!NOTE] > Free subscription (F0) users can have only one model deployed. Standard subscription (S0) users can create up to 50 endpoints, each with its own custom voice.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
@@ -17,7 +17,15 @@ ms.author: erhopf
When you're ready to create a custom text-to-speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications.
-You can start with a small amount of data to create a proof of concept. However, the more data that you provide, the more natural your custom voice will sound. Before you can train your own text-to-speech voice model, you'll need audio recordings and the associated text transcriptions. On this page, we'll review data types, how they are used, and how to manage each.
+Before you can train your own text-to-speech voice model, you'll need audio recordings and the associated text transcriptions. On this page, we'll review data types, how they are used, and how to manage each.
+
+> [!NOTE]
+> If you would like to train a neural voice, you must specify a voice talent profile with an audio consent file in which the voice talent acknowledges that their speech data will be used to train a custom voice model. When preparing your recording script, make sure you include the sentence below.
+
+> “I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice.”
+This sentence will be used to verify that the training data was recorded by the same person who gave the consent. Read more about the [voice talent verification](https://aka.ms/CNV-data-privacy) here.
+
+> Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](https://aka.ms/gating-overview) and [apply for access here](https://aka.ms/customneural).
## Data types
@@ -27,22 +35,22 @@ In some cases, you may not have the right dataset ready and will want to test th
This table lists data types and how each is used to create a custom text-to-speech voice model.
-| Data type | Description | When to use | Additional service required | Quantity for training a model | Locale(s) |
-| --------- | ----------- | ----------- | --------------------------- | ----------------------------- | --------- |
-| **Individual utterances + matching transcript** | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. | No hard requirement for en-US and zh-CN. More than 2,000+ distinct utterances for other locales. | [All Custom Voice locales](language-support.md#customization) |
-| **Long audio + transcript (beta)** | A collection (.zip) of long, unsegmented audio files (longer than 20 seconds), paired with a transcript (.txt) that contains all spoken words. | You have audio files and matching transcripts, but they are not segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation where required. | No hard requirement | [All Custom Voice locales](language-support.md#customization) |
-| **Audio only (beta)** | A collection (.zip) of audio files without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation where required.| No hard requirement | [All Custom Voice locales](language-support.md#customization) |
+| Data type | Description | When to use | Additional processing required |
+| --------- | ----------- | ----------- | --------------------------- |
+| **Individual utterances + matching transcript** | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. |
+| **Long audio + transcript (beta)** | A collection (.zip) of long, unsegmented audio files (longer than 20 seconds), paired with a transcript (.txt) that contains all spoken words. | You have audio files and matching transcripts, but they are not segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation where required. |
+| **Audio only (beta)** | A collection (.zip) of audio files without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation where required.|
Files should be grouped by type into a dataset and uploaded as a zip file. Each dataset can only contain a single data type. > [!NOTE]
-> The maximum number of datasets allowed to be imported per subscription is 10 .zip files for free subscription (F0) users and 500 for standard subscription (S0) users.
+> The maximum number of datasets allowed to be imported per subscription is 10 zip files for free subscription (F0) users and 500 for standard subscription (S0) users.
## Individual utterances + matching transcript You can prepare recordings of individual utterances and the matching transcript in two ways. Either write a script and have it read by a voice talent or use publicly available audio and transcribe it to text. If you do the latter, edit disfluencies from the audio files, such as "um" and other filler sounds, stutters, mumbled words, or mispronunciations.
-To produce a good voice font, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
+To produce a good voice model, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
> [!TIP] > To create a voice for production use, we recommend you use a professional recording studio and voice talent. For more information, see [How to record voice samples for a custom voice](record-custom-voice-samples.md).
@@ -86,9 +94,6 @@ Below is an example of how the transcripts are organized utterance by utterance
``` It's important that the transcripts are 100% accurate transcriptions of the corresponding audio. Errors in the transcripts will introduce quality loss during the training.
-> [!TIP]
-> When building production text-to-speech voices, select utterances (or write scripts) that take into account both phonetic coverage and efficiency. Having trouble getting the results you want? [Contact the Custom Voice](mailto:speechsupport@microsoft.com) team to find out more about having us consult.
- ## Long audio + transcript (beta) In some cases, you may not have segmented audio available. We provide a service (beta) through the custom voice portal to help you segment long audio files and create transcriptions. Keep in mind, this service will be charged toward your speech-to-text subscription usage.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
@@ -34,10 +34,11 @@ The diagram below highlights the steps to create a custom voice model using the
## Custom Neural voices
-The neural voice customization capability is currently in public preview, limited to selected customers. Fill out this [application form](https://go.microsoft.com/fwlink/?linkid=2108737) to get started.
+Custom Voice currently supports both standard and neural tiers. Custom Neural Voice empowers users to build higher-quality voice models while requiring less data, and provides measures to help you deploy AI responsibly. We recommend that you use Custom Neural Voice to develop more realistic voices for more natural conversational interfaces, enabling your customers and end users to benefit from the latest Text-to-Speech technology in a responsible way. [Learn more about Custom Neural Voice](https://aka.ms/CNV-Transparency-Note).
> [!NOTE]
-> As part of Microsoft's commitment to designing responsible AI, our intent is to protect the rights of individuals and society, and foster transparent human-computer interactions. For this reason, Custom Neural Voice is not generally available to all customers. You may gain access to the technology only after your applications are reviewed and you have committed to using it in alignment with our ethics principles. Learn more about our [application gating process](./concepts-gating-overview.md).
+> As part of Microsoft's commitment to designing responsible AI, we have limited the use of Custom Neural Voice. You may gain access to the technology only after your applications are reviewed and you have committed to using it in alignment with our responsible AI principles. Learn more about our [limited access policy](https://aka.ms/gating-overview) and [apply here](https://aka.ms/customneural).
+> The [languages](language-support.md#customization) and [regions](regions.md#custom-voices) supported for the standard and neural versions of Custom Voice are different. Check the details before you start.
## Set up your Azure account
@@ -51,7 +52,7 @@ Once you've created an Azure account and a Speech service subscription, you'll n
4. If you'd like to switch to another Speech subscription, use the cog icon located in the top navigation. > [!NOTE]
-> You must have a F0 or a S0 key created in Azure before you can use the service.
+> You must have an F0 or an S0 Speech service key created in Azure before you can use the service. Custom Neural Voice only supports the S0 tier.
## How to create a project
@@ -66,4 +67,4 @@ To create your first project, select the **Text-to-Speech/Custom Voice** tab, th
- [Prepare Custom Voice data](how-to-custom-voice-prepare-data.md) - [Create a Custom Voice](how-to-custom-voice-create-voice.md)-- [Guide: Record your voice samples](record-custom-voice-samples.md)\ No newline at end of file
+- [Guide: Record your voice samples](record-custom-voice-samples.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/language-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
@@ -124,6 +124,8 @@ https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
Both the Microsoft Speech SDK and REST APIs support these voices, each of which supports a specific language and dialect, identified by locale. You can also get a full list of languages and voices supported for each specific region/endpoint through the [voices/list API](rest-text-to-speech.md#get-a-list-of-voices).
+To learn how you can configure and adjust speaking styles, including neural voices, see the [how-to](speech-synthesis-markup.md#adjust-speaking-styles) on Speech Synthesis Markup Language.
+ > [!IMPORTANT] > Pricing varies for standard, custom and neural voices. Please visit the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page for additional information.
@@ -282,8 +284,6 @@ Below neural voices are in public preview.
For more information about regional availability, see [regions](regions.md#standard-and-neural-voices).
-To learn how you can configure and adjust neural voices, such as Speaking Styles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).
- > [!IMPORTANT] > The `en-US-JessaNeural` voice has changed to `en-US-AriaNeural`. If you were using "Jessa" before, convert over to "Aria".
@@ -387,10 +387,30 @@ More than 75 standard voices are available in over 45 languages and locales, whi
### Customization
-Voice customization is available for `de-DE`, `en-GB`, `en-IN`, `en-US`, `es-MX`, `fr-FR`, `it-IT`, `pt-BR`, and `zh-CN`. Select the right locale that matches the training data you have to train a custom voice model. For example, if the recording data you have is spoken in English with a British accent, select `en-GB`.
+Custom Voice is available in the standard and the neural tier. The languages supported are different for these two tiers.
+
+| Language | Locale | Standard | Neural |
+|--|--|--|--|
+| Chinese (Mandarin, Simplified) | `zh-CN` | Yes | Yes |
+| Chinese (Mandarin, Simplified), English bilingual | `zh-CN` bilingual | Yes | Yes |
+| English (Australia) | `en-AU` | No | Yes |
+| English (India) | `en-IN` | Yes | Yes |
+| English (United Kingdom) | `en-GB` | Yes | Yes |
+| English (United States) | `en-US` | Yes | Yes |
+| French (Canada) | `fr-CA` | No | Yes |
+| French (France) | `fr-FR` | Yes | Yes |
+| German (Germany) | `de-DE` | Yes | Yes |
+| Italian (Italy) | `it-IT` | Yes | Yes |
+| Japanese (Japan) | `ja-JP` | No | Yes |
+| Korean (Korea) | `ko-KR` | No | Yes |
+| Portuguese (Brazil) | `pt-BR` | Yes | Yes |
+| Spanish (Mexico) | `es-MX` | Yes | Yes |
+| Spanish (Spain) | `es-ES` | No | Yes |
+
+Select the right locale that matches the training data you have to train a custom voice model. For example, if the recording data you have is spoken in English with a British accent, select `en-GB`.
> [!NOTE]
-> We do not support bi-lingual model training in Custom Voice, except for the Chinese-English bi-lingual. Select "Chinese-English bilingual" if you want to train a Chinese voice that can speak English as well. Voice training in all locales starts with a data set of 2,000+ utterances, except for the `en-US` and `zh-CN` where you can start with any size of training data.
+> We do not support bilingual model training in Custom Voice, except for Chinese-English bilingual. Select "Chinese-English bilingual" if you want to train a Chinese voice that can speak English as well. Chinese-English bilingual model training using the standard method is available in North Europe and North Central US only. Custom Neural Voice training is available in UK South and East US.
## Speech translation
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/migrate-v2-to-v3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/migrate-v2-to-v3.md
@@ -19,11 +19,51 @@ Compared to v2, the v3 version of the Speech services REST API for speech-to-tex
## Forward compatibility
-All entities from v2 can be also found in the v3 API under the same identity. Where the schema of a result has changed, (for example, transcriptions), the result of a GET in the v3 version of the API uses the v3 schema. The result of a GET in the v2 version of the API uses the same v2 schema. Newly created entities on v3 are **not** available in results from v2 APIs.
+All entities from v2 can also be found in the v3 API under the same identity. Where the schema of a result has changed, (for example, transcriptions), the result of a GET in the v3 version of the API uses the v3 schema. The result of a GET in the v2 version of the API uses the same v2 schema. Newly created entities on v3 are **not** available in responses from v2 APIs.
-## Breaking changes
+## Migration steps
+
+This is a summary list of items you need to be aware of when you are preparing for migration. Details are found in the individual links. Depending on your current use of the API, not all steps listed here may apply. Only a few changes require non-trivial changes in the calling code. Most changes just require a change to item names. A short Python sketch of the property renames follows the list.
+
+General changes:
+
+1. [Change the host name](#host-name-changes)
+
+1. [Rename the property id to self in your client code](#identity-of-an-entity)
+
+1. [Change code to iterate over collections of entities](#working-with-collections-of-entities)
+
+1. [Rename the property name to displayName in your client code](#name-of-an-entity)
+
+1. [Adjust the retrieval of the metadata of referenced entities](#accessing-referenced-entities)
+
+1. If you use Batch transcription:
+
+ * [Adjust code for creating batch transcriptions](#creating-transcriptions)
+
+ * [Adapt code to the new transcription results schema](#format-of-v3-transcription-results)
-The list of breaking changes has been sorted by the magnitude of changes required to adapt. Only a few changes require non-trivial changes in the calling code. Most changes just require a change to item names.
+ * [Adjust code for how results are retrieved](#getting-the-content-of-entities-and-the-results)
+
+1. If you use Custom model training/testing APIs:
+
+ * [Apply modifications to custom model training](#customizing-models)
+
+ * [Change how base and custom models are retrieved](#retrieving-base-and-custom-models)
+
+ * [Rename the path segment accuracytests to evaluations in your client code](#accuracy-tests)
+
+1. If you use endpoints APIs:
+
+ * [Change how endpoint logs are retrieved](#retrieving-endpoint-logs)
+
+1. Other minor changes:
+
+ * [Pass all custom properties as customProperties instead of properties in your POST requests](#using-custom-properties)
+
+ * [Read the location from response header Location instead of Operation-Location](#response-headers)
+
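A minimal Python sketch of the property renames above, assuming the v3 collection responses expose `values` and `@nextLink` fields (verify against the linked sections before relying on this):

```python
import requests

headers = {"Ocp-Apim-Subscription-Key": "<your-speech-key>"}
# v3 host name (see "Host name changes" below); region and key are placeholders.
url = "https://<region>.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"

while url:
    page = requests.get(url, headers=headers).json()
    for entity in page.get("values", []):      # assumed collection field
        # v2 clients read "id" and "name"; in v3 use "self" and "displayName".
        print(entity["self"], entity["displayName"])
    url = page.get("@nextLink")                # assumed paging field
```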
+## Breaking changes
### Host name changes
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/overview.md
@@ -31,7 +31,7 @@ The following features are part of the Speech service. Use the links in this tab
| [Text-to-Speech](text-to-speech.md) | Text-to-speech | Text-to-speech converts input text into human-like synthesized speech using [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). Choose from standard voices and neural voices (see [Language support](language-support.md)). | [Yes](./speech-sdk.md) | [Yes](#reference-docs) | | | [Create Custom Voices](#customize-your-speech-experience) | Create custom voice fonts unique to your brand or product. | No | [Yes](#reference-docs) | | [Speech Translation](speech-translation.md) | Speech translation | Speech translation enables real-time, multi-language translation of speech to your applications, tools, and devices. Use this service for speech-to-speech and speech-to-text translation. | [Yes](./speech-sdk.md) | No |
-| [Voice assistants](voice-assistants.md) | Voice assistants | Voice assistants using the Speech service empower developers to create natural, human-like conversational interfaces for their applications and experiences. The voice assistant service provides fast, reliable interaction between a device and an assistant implementation that uses the Bot Framework's Direct Line Speech channel or the integrated Custom Commands (Preview) service for task completion. | [Yes](voice-assistants.md) | No |
+| [Voice assistants](voice-assistants.md) | Voice assistants | Voice assistants using the Speech service empower developers to create natural, human-like conversational interfaces for their applications and experiences. The voice assistant service provides fast, reliable interaction between a device and an assistant implementation that uses the Bot Framework's Direct Line Speech channel or the integrated Custom Commands service for task completion. | [Yes](voice-assistants.md) | No |
| [Speaker Recognition](speaker-recognition-overview.md) | Speaker verification & identification | The Speaker Recognition service provides algorithms that verify and identify speakers by their unique voice characteristics. Speaker Recognition is used to answer the question “who is speaking?”. | Yes | [Yes](/rest/api/speakerrecognition/) |
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/quickstart-custom-commands-application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/quickstart-custom-commands-application.md
@@ -17,7 +17,7 @@ ms.custom: references_regions
# Create a voice assistant using Custom Commands
-In this quickstart, you create and test a basic Custom Commands application using Speech Studio. You will also be able to access this application from a Windows client app. **Custom Commands** makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, helping you focus on building the best solution for your voice commanding scenarios.
+In this quickstart, you create and test a basic Custom Commands application using Speech Studio. You will also be able to access this application from a Windows client app.
## Region Availability At this time, Custom Commands supports speech subscriptions created in these regions:
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/record-custom-voice-samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
@@ -20,6 +20,14 @@ Before you can make these recordings, though, you need a script: the words that
Many small but important details go into creating a professional voice recording. This guide is a roadmap for a process that will help you get good, consistent results.
+> [!NOTE]
+> If you would like to train a neural voice, you must specify a voice talent profile with an audio consent file in which the voice talent acknowledges that their speech data will be used to train a custom voice model. When preparing your recording script, make sure you include the sentence below.
+
+> “I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice.”
+This sentence will be used to verify that the training data was recorded by the same person who gave the consent. Read more about the [voice talent verification](https://aka.ms/CNV-data-privacy) here.
+
+> Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](https://aka.ms/gating-overview) and [apply for access here](https://aka.ms/customneural).
+ > [!TIP] > For the highest quality results, consider engaging Microsoft to help develop your custom voice. Microsoft has extensive experience producing high-quality voices for its own products, including Cortana and Office.
@@ -51,7 +59,7 @@ Your voice talent is the other half of the equation. They must be able to speak
Recording custom voice samples can be more fatiguing than other kinds of voice work. Most voice talent can record for two or three hours a day. Limit sessions to three or four a week, with a day off in-between if possible.
-Recordings made for a voice model should be emotionally neutral. That is, a sad utterance should not be read in a sad way. Mood can be added to the synthesized speech later through prosody controls. Work with your voice talent to develop a "persona" that defines the overall sound and emotional tone of the custom voice. In the process, you'll pinpoint what "neutral" sounds like for that persona.
+Work with your voice talent to develop a "persona" that defines the overall sound and emotional tone of the custom voice. In the process, you'll pinpoint what "neutral" sounds like for that persona. Using the Custom Neural Voice capability, you can train a model that speaks with emotions. Define the "speaking styles" and ask your voice talent to read the script in a way that resonates with the styles you want.
A persona might have, for example, a naturally upbeat personality. So "their" voice might carry a note of optimism even when they speak neutrally. However, such a personality trait should be subtle and consistent. Listen to readings by existing voices to get an idea of what you're aiming for.
@@ -206,7 +214,7 @@ Listen to each file carefully. At this stage, you can edit out small unwanted so
Convert each file to 16 bits and a sample rate of 16 kHz before saving and, if you recorded the studio chatter, remove the second channel. Save each file in WAV format, naming the files with the utterance number from your script.
-Finally, create the *transcript* that associates each WAV file with a text version of the corresponding utterance. [Creating custom voice fonts](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
+Finally, create the *transcript* that associates each WAV file with a text version of the corresponding utterance. [Creating custom voices](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
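As a rough sketch of that packaging step in Python (the tab-separated `file name<TAB>text` transcript layout is an assumption here; the linked article describes the authoritative format):

```python
import zipfile
from pathlib import Path

recordings = Path("recordings")  # 0001.wav, 0002.wav, ... named by utterance number
script_lines = {"0001": "First utterance text.", "0002": "Second utterance text."}  # from your script

# Write the transcript: one line per WAV file, tab-separated (assumed format).
with open("transcript.txt", "w", encoding="utf-8") as transcript:
    for utterance_id, text in sorted(script_lines.items()):
        transcript.write(f"{utterance_id}\t{text}\n")

# Zip the WAV files together with the transcript for upload.
with zipfile.ZipFile("custom-voice-dataset.zip", "w") as archive:
    for wav in sorted(recordings.glob("*.wav")):
        archive.write(wav, wav.name)
    archive.write("transcript.txt")
```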
Archive the original recordings in a safe place in case you need them later. Preserve your script and notes, too.
@@ -215,4 +223,4 @@ Archive the original recordings in a safe place in case you need them later. Pre
You're ready to upload your recordings and create your custom voice. > [!div class="nextstepaction"]
-> [Create custom voice fonts](./how-to-custom-voice-create-voice.md)
\ No newline at end of file
+> [Create custom voice fonts](./how-to-custom-voice-create-voice.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/rest-text-to-speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
@@ -55,9 +55,11 @@ The `voices/list` endpoint allows you to get a full list of voices for a specifi
| Korea Central | `https://koreacentral.tts.speech.microsoft.com/cognitiveservices/voices/list` | | North Central US | `https://northcentralus.tts.speech.microsoft.com/cognitiveservices/voices/list` | | North Europe | `https://northeurope.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| South Africa North | `https://southafricanorth.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| South Central US | `https://southcentralus.tts.speech.microsoft.com/cognitiveservices/voices/list` | | Southeast Asia | `https://southeastasia.tts.speech.microsoft.com/cognitiveservices/voices/list` | | UK South | `https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| West Central US | `https://westcentralus.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| West Europe | `https://westeurope.tts.speech.microsoft.com/cognitiveservices/voices/list` | | West US | `https://westus.tts.speech.microsoft.com/cognitiveservices/voices/list` | | West US 2 | `https://westus2.tts.speech.microsoft.com/cognitiveservices/voices/list` |
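For example, a minimal Python call against one of these regional endpoints, authenticating with the `Ocp-Apim-Subscription-Key` header (region and key are placeholders):

```python
import requests

region = "westeurope"  # pick the region that matches your Speech resource
url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/voices/list"

response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": "<your-speech-key>"})
response.raise_for_status()

# Each entry describes one voice; print a couple of identifying fields.
for voice in response.json():
    print(voice["ShortName"], voice["Locale"])
```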
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/speech-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-sdk.md
@@ -53,7 +53,7 @@ The Speech SDK exposes many features from the Speech service, but not all of the
### Voice assistants
-[Voice assistants](voice-assistants.md) using the Speech SDK enable developers to create natural, human-like conversational interfaces for their applications and experiences. The voice assistant service provides fast, reliable interaction between a device and an assistant. The implementation uses the Bot Framework's Direct Line Speech channel or the integrated Custom Commands (Preview) service for task completion. Additionally, voice assistants can use custom voices created in the [Custom Voice Portal](https://aka.ms/customvoice) to add a unique voice output experience.
+[Voice assistants](voice-assistants.md) using the Speech SDK enable developers to create natural, human-like conversational interfaces for their applications and experiences. The voice assistant service provides fast, reliable interaction between a device and an assistant. The implementation uses the Bot Framework's Direct Line Speech channel or the integrated Custom Commands service for task completion. Additionally, voice assistants can use custom voices created in the [Custom Voice Portal](https://aka.ms/customvoice) to add a unique voice output experience.
**Voice assistants** is available on the following platforms:
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/text-to-speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/text-to-speech.md
@@ -34,7 +34,7 @@ In this overview, you learn about the benefits and capabilities of the text-to-s
* Neural voices - Deep neural networks are used to overcome the limits of traditional speech synthesis with regards to stress and intonation in spoken language. Prosody prediction and voice synthesis are performed simultaneously, which results in more fluid and natural-sounding outputs. Neural voices can be used to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. With the human-like natural prosody and clear articulation of words, neural voices significantly reduce listening fatigue when you interact with AI systems. For a full list of neural voices, see [supported languages](language-support.md#text-to-speech).
-* Speech Synthesis Markup Language (SSML) - An XML-based markup language used to customize speech-to-text outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, speed up or slow down speaking rate, increase or decrease volume, and attribute multiple voices to a single document. See [SSML](speech-synthesis-markup.md).
+* Adjust speaking styles with SSML - Speech Synthesis Markup Language (SSML) is an XML-based markup language used to customize text-to-speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, speed up or slow down speaking rate, increase or decrease volume, and attribute multiple voices to a single document. See the [how-to](speech-synthesis-markup.md) for adjusting speaking styles.
## Get started
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Translator/translator-how-to-signup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/translator-how-to-signup.md
@@ -35,7 +35,7 @@ After you sign in to the portal, you can create a subscription to Translator as
When you sign up for Translator, you get a personalized access key unique to your subscription. This key is required on each call to the Translator. 1. Retrieve your authentication key by first selecting the appropriate subscription.
-1. Select **Keys** in the **Resource Management** section of your subscription's details.
+1. Select **Keys and Endpoint** in the **Resource Management** section of your subscription's details.
1. Copy either of the keys listed for your subscription. ## Learn, test, and get support
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/tutorial-bulk-processing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/tutorial-bulk-processing.md new file mode 100644
@@ -0,0 +1,507 @@
+---
+title: "Tutorial: Extract form data in bulk using Azure Data Factory - Form Recognizer"
+titleSuffix: Azure Cognitive Services
+description: Set up Azure Data Factory activities to trigger the training and running of Form Recognizer models to digitize a large backlog of documents.
+
+author: PatrickFarley
+manager: nitinme
+
+ms.service: cognitive-services
+ms.subservice: forms-recognizer
+ms.topic: tutorial
+ms.date: 01/04/2021
+ms.author: pafarley
+---
+
+# Tutorial: Extract form data in bulk using Azure Data Factory
+
+In this tutorial, we'll look at how to use Azure services to digitize a large backlog of forms. This tutorial will show how to automate the data ingestion from an Azure Data Lake of documents into an Azure SQL database. You'll be able to quickly train models and process new documents with a few clicks.
+
+## Business need
+
+Most organizations now recognize the value of the data they hold in different formats (PDFs, images, videos) and are looking for best practices and the most cost-effective ways to digitize those assets.
+
+Additionally, our customers often have different types of forms coming from their many clients and customers. Unlike the [quickstarts](./quickstarts/client-library.md), this tutorial shows you how to automatically train a model with new and different types of forms using a metadata-driven approach. If you don't have an existing model for the given form type, the system will create one for you and give you the model ID.
+
+By extracting the data from forms and combining it with existing operational systems and data warehouses, businesses can get insights and deliver value to their customers and business users.
+
+With Azure Form Recognizer, we help organizations harness their data, automate processes (invoice payments, tax processing, and so on), save money and time, and enjoy better data accuracy.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Set up Azure Data Lake to store your forms
+> * Use an Azure database to create a parametrization table
+> * Use Azure Key Vault to store sensitive credentials
+> * Train your Form Recognizer model in a Databricks notebook
+> * Extract your form data using a Databricks notebook
+> * Automate form training and extraction with Azure Data Factory
+
+## Prerequisites
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* Once you have your Azure subscription, <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a Form Recognizer resource" target="_blank">create a Form Recognizer resource <span class="docon docon-navigate-external x-hidden-focus"></span></a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ * You'll need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the tutorial.
+ * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+* A set of at least five forms of the same type. Ideally, this workflow is meant to support large sets of documents. See [Build a training data set](./build-training-data-set.md) for tips and options for putting together your training data set. For this tutorial, you can use the files under the **Train** folder of the [sample data set](https://go.microsoft.com/fwlink/?linkid=2128080).
+
+## Project architecture
+
+This project stands up a set of Azure Data Factory pipelines to trigger python notebooks that train, analyze, and extract data from documents in an Azure Data Lake storage account.
+
+The Form Recognizer REST API requires some parameters as input. For security reasons, some of these parameters will be stored in an Azure Key Vault, while other less sensitive parameters, like the storage blob folder name, will be stored in a parameterization table in an Azure SQL database.
+
+For each type of form to be analyzed, data engineers or data scientists will populate a row of the parameter table. Then they use Azure Data Factory to iterate over the list of detected form types and pass the relevant parameters to a Databricks notebook to train or retrain the Form Recognizer models. An Azure function could also be used here.
+
+The Azure Databricks notebook then uses the trained models to extract form data, and it exports that data to an Azure SQL database.
+
+:::image type="content" source="./media/tutorial-bulk-processing/architecture.png" alt-text="project architecture":::
++
+## Set up Azure Data Lake
+
+Your backlog of forms might be in your on-premises environment or on an (s)FTP server. This tutorial uses forms in an Azure Data Lake Gen 2 storage account. You can transfer your files there using Azure Data Factory, Azure Storage Explorer, or AzCopy. The training and scoring datasets can be in different containers, but the training datasets for all form types must be in the same container (though they can be in different folders).
+
+To create a new Data Lake, follow the instructions in [Create a storage account to use with Azure Data Lake Storage Gen2](https://docs.microsoft.com/azure/storage/blobs/create-data-lake-storage-account).
+
+## Create a parameterization table
+
+Next, we'll create a metadata table in an Azure SQL Database. This table will contain the non-sensitive data required by the Form Recognizer REST API. Whenever there is a new type of form in our dataset, we'll insert a new record in this table and trigger the training and scoring pipeline (to be implemented later).
+
+The following fields will be used in the table:
+
+* **form_description**: This field is not required as part of the training. It provides a description of the type of forms we are training the model for (for example, "client A forms," "Hotel B forms").
+* **training_container_name**: This field is the storage account container name where we have stored the training dataset. It can be the same container as **scoring_container_name**.
+* **training_blob_root_folder**: The folder within the storage account where we'll store the files for the training of the model.
+* **scoring_container_name**: This field is the storage account container name where we've stored the files we want to extract the key value pairs from. It can be the same container as **training_container_name**.
+* **scoring_input_blob_folder**: The folder in the storage account where we'll store the files to extract data from.
+* **model_id**: The ID of the model we want to retrain. For the first run, the value must be set to -1, which will cause the training notebook to create a new custom model to train. The training notebook will return the newly created model ID to the Azure Data Factory instance and, using a stored procedure activity, we'll update this value in the Azure SQL database.
+
+ Whenever you want to ingest a new form type, you'll need to manually reset the model ID to -1 before training the model.
+
+* **file_type**: The supported form types are `application/pdf`, `image/jpeg`, `image/png`, and `image/tif`.
+
+ If you have forms of different file types, you'll need to change this value and **model_id** when training a new form type.
+* **form_batch_group_id**: Over time, you might have multiple form types you train against the same model. The **form_batch_group_id** will allow you to specify all the form types that have been trained using a specific model.
+
+### Create the table
+
+[Create an Azure SQL Database](https://ms.portal.azure.com/#create/Microsoft.SQLDatabase), and then run the following SQL script in the [query editor](https://docs.microsoft.com/azure/azure-sql/database/connect-query-portal) to create the needed table.
+
+```sql
+CREATE TABLE dbo.ParamFormRecogniser(
+ form_description varchar(50) NULL,
+ training_container_name varchar(50) NOT NULL,
+ training_blob_root_folder varchar(50) NULL,
+ scoring_container_name varchar(50) NOT NULL,
+ scoring_input_blob_folder varchar(50) NOT NULL,
+ scoring_output_blob_folder varchar(50) NOT NULL,
+ model_id varchar(50) NULL,
+ file_type varchar(50) NULL,
+ form_batch_group_id varchar(50) NULL -- referenced by the update_model_id procedure below and described in the field list above
+) ON PRIMARY
+GO
+```
+
+Run the following script to create the procedure for automatically updating **model_id** once it is trained.
+
+```SQL
+CREATE PROCEDURE [dbo].[update_model_id] ( @form_batch_group_id varchar(50),@model_id varchar(50))
+AS
+BEGIN
+ UPDATE [dbo].[ParamFormRecogniser]
+ SET [model_id] = @model_id
+ WHERE form_batch_group_id =@form_batch_group_id
+END
+```
+
+## Use Azure Key Vault to store sensitive credentials
+
+For security reasons, we don't want to store certain sensitive information in the parameterization table in the Azure SQL database. We'll store sensitive parameters as Azure Key Vault secrets.
+
+### Create an Azure Key Vault
+
+[Create a Key Vault resource](https://ms.portal.azure.com/#create/Microsoft.KeyVault). Then navigate to the Key Vault resource after it's created and, in the **settings** section, select **secrets** to add the parameters.
+
+A new window will appear. Select **Generate/import**, enter the name of the parameter and its value, and click **Create**. Do this for each of the following parameters (a Python sketch for creating these secrets with the Key Vault SDK follows the list):
+
+* **CognitiveServiceEndpoint**: The endpoint URL of your Form Recognizer API.
+* **CognitiveServiceSubscriptionKey**: The access key for your Form Recognizer service.
+* **StorageAccountName**: The storage account where the training dataset and forms we want to extract key-value pairs from are stored. If these are in different accounts, enter each of their account names as separate secrets. Remember that the training datasets must be in the same container for all form types, but they can be in different folders.
+* **StorageAccountSasKey**: the shared access signature (SAS) of the storage account. To retrieve the SAS URL, go to your storage resource and select the **Storage Explorer** tab. Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself. Make sure the **Read** and **List** permissions are checked, and click **Create**. Then copy the value in the **URL** section. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
+
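If you prefer to script this instead of using the portal, here's a hedged sketch using the `azure-identity` and `azure-keyvault-secrets` Python packages; the vault URL and secret values are placeholders, and the signed-in identity needs permission to set secrets:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://<your-key-vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Secret names match the parameters listed above; values are placeholders.
secrets = {
    "CognitiveServiceEndpoint": "<form-recognizer-endpoint>",
    "CognitiveServiceSubscriptionKey": "<form-recognizer-key>",
    "StorageAccountName": "<storage-account-name>",
    "StorageAccountSasKey": "<container-sas-url>",
}

for name, value in secrets.items():
    client.set_secret(name, value)
```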
+## Train your Form Recognizer model in a Databricks notebook
+
+You'll use Azure Databricks to store and run the Python code that interacts with the Form Recognizer service.
+
+### Create a notebook in Databricks
+
+[Create an Azure Databricks resource](https://ms.portal.azure.com/#create/Microsoft.Databricks) in the Azure portal. Navigate to the resource after it has been created and launch the workspace.
+
+### Create a secret scope backed by Azure Key Vault
+
+To reference the secrets in the Azure Key Vault we created above, you'll need to create a secret scope in Databricks. Follow the steps under [Create an Azure Key Vault-backed secret scope](https://docs.microsoft.com/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope).
+
+### Create a Databricks cluster
+
+A cluster is a collection of Databricks computation resources. To create a cluster:
+
+1. In the sidebar, click the **Clusters** button.
+1. On the **Clusters** page, click **Create Cluster**.
+1. On the **Create Cluster** page, specify a cluster name and select **7.2 (Scala 2.12, Spark 3.0.0)** in the Databricks Runtime Version drop-down.
+1. Click **Create Cluster**.
+
+### Write a settings notebook
+
+Now you're ready to add Python notebooks. First, create a notebook called **Settings**; this notebook will assign the values in your parameterization table to variables in the script. The values will later be passed in as parameters by Azure Data Factory. We'll also assign values from the secrets in the Key Vault to variables.
+
+1. To create the **Settings** notebook, click the **Workspace** button. In the new tab, click the dropdown list, select **Create**, and then select **Notebook**.
+1. In the pop-up window, enter the name you want to give to the notebook and select **Python** as default language. Select your Databricks cluster, and select **Create**.
+1. In the first notebook cell, we retrieve the parameters passed by Azure Data Factory.
+
+ ```python
+ # Each value below is passed in as a pipeline parameter by Azure Data Factory.
+ dbutils.widgets.text("form_batch_group_id", "", "")
+ form_batch_group_id = getArgument("form_batch_group_id")
+
+ dbutils.widgets.text("model_id", "", "")
+ model_id = getArgument("model_id")
+
+ dbutils.widgets.text("training_container_name", "", "")
+ training_container_name = getArgument("training_container_name")
+
+ dbutils.widgets.text("training_blob_root_folder", "", "")
+ training_blob_root_folder = getArgument("training_blob_root_folder")
+
+ dbutils.widgets.text("scoring_container_name", "", "")
+ scoring_container_name = getArgument("scoring_container_name")
+
+ dbutils.widgets.text("scoring_input_blob_folder", "", "")
+ scoring_input_blob_folder = getArgument("scoring_input_blob_folder")
+
+ dbutils.widgets.text("file_type", "", "")
+ file_type = getArgument("file_type")
+
+ dbutils.widgets.text("file_to_score_name", "", "")
+ file_to_score_name = getArgument("file_to_score_name")
+ ```
+
+1. In the second cell, we retrieve secrets from Key Vault and assign them to variables.
+
+ ```python
+ cognitive_service_subscription_key = dbutils.secrets.get(scope = "FormRecognizer_SecretScope", key = "CognitiveserviceSubscriptionKey")
+ cognitive_service_endpoint = dbutils.secrets.get(scope = "FormRecognizer_SecretScope", key = "CognitiveServiceEndpoint")
+
+ training_storage_account_name = dbutils.secrets.get(scope = "FormRecognizer_SecretScope", key = "StorageAccountName")
+ storage_account_sas_key= dbutils.secrets.get(scope = "FormRecognizer_SecretScope", key = "StorageAccountSasKey")
+
+ ScoredFile = file_to_score_name + "_output.json"
+ training_storage_account_url = "https://" + training_storage_account_name + ".blob.core.windows.net/" + training_container_name + storage_account_sas_key
+ ```
+
+### Write a training notebook
+
+Now that we've completed the **Settings** notebook, we can create a notebook to train the model. As mentioned above, we'll use the files stored in a folder of an Azure Data Lake Storage Gen2 account (**training_blob_root_folder**). The folder name is passed in as a variable. Forms of the same type are stored in the same folder and, as we loop over the parameterization table, we'll train the model using all of the form types.
+
+1. Create a new notebook called **TrainFormRecognizer**.
+1. In the first cell, execute the Settings notebook:
+
+ ```python
+ %run "./Settings"
+ ```
+
+1. In the next cell, assign variables from the **Settings** notebook and dynamically train the model for each form type, applying the code in the [REST quickstart](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-train-extract.md#get-training-results).
+
+ ```python
+ import json
+ import time
+ from requests import get, post
+
+ post_url = cognitive_service_endpoint + r"/formrecognizer/v2.0/custom/models"
+ source = training_storage_account_url
+ prefix = training_blob_root_folder
+
+ includeSubFolders = True
+ useLabelFile = False
+
+ headers = {
+     # Request headers
+     'Content-Type': file_type,
+     'Ocp-Apim-Subscription-Key': cognitive_service_subscription_key,
+ }
+ body = {
+     "source": source,
+     "sourceFilter": {
+         "prefix": prefix,
+         "includeSubFolders": includeSubFolders
+     },
+ }
+
+ if model_id == "-1":
+     # No existing model to retrain: create (train) a new model and use it to extract the key-value pairs.
+     try:
+         resp = post(url=post_url, json=body, headers=headers)
+         if resp.status_code != 201:
+             print("POST model failed (%s):\n%s" % (resp.status_code, json.dumps(resp.json())))
+             quit()
+         print("POST model succeeded:\n%s" % resp.headers)
+         get_url = resp.headers["location"]
+         model_id = get_url[get_url.index('models/') + len('models/'):]
+     except Exception as e:
+         print("POST model failed:\n%s" % str(e))
+         quit()
+ else:
+     # A model already exists: reuse it and (re)train it with the new form types.
+     try:
+         get_url = post_url + r"/" + model_id
+     except Exception as e:
+         print("POST model failed:\n%s" % str(e))
+         quit()
+ ```
+
+1. The final step in the training process is to retrieve the training result in JSON format.
+
+ ```python
+ n_tries = 10
+ n_try = 0
+ wait_sec = 5
+ max_wait_sec = 5
+ while n_try < n_tries:
+     try:
+         resp = get(url=get_url, headers=headers)
+         resp_json = resp.json()
+         print(resp.status_code)
+         if resp.status_code != 200:
+             print("GET model failed (%s):\n%s" % (resp.status_code, json.dumps(resp_json)))
+             n_try += 1
+             quit()
+         model_status = resp_json["modelInfo"]["status"]
+         print(model_status)
+         if model_status == "ready":
+             print("Training succeeded:\n%s" % json.dumps(resp_json))
+             n_try += 1
+             quit()
+         if model_status == "invalid":
+             print("Training failed. Model is invalid:\n%s" % json.dumps(resp_json))
+             n_try += 1
+             quit()
+         # Training still running. Wait and retry.
+         time.sleep(wait_sec)
+         n_try += 1
+         wait_sec = min(2 * wait_sec, max_wait_sec)
+         print(n_try)
+     except Exception as e:
+         msg = "GET model failed:\n%s" % str(e)
+         print(msg)
+         quit()
+ print("Train operation did not complete within the allocated time.")
+ ```
+
+## Extract form data using a notebook
+
+### Mount the Azure Data Lake storage
+
+The next step is to score the forms using the trained model. We'll mount the Azure Data Lake Storage account in Databricks and refer to the mount point when reading the forms.
+
+Just like in the training stage, we'll use Azure Data Factory to invoke the extraction of the key-value pairs from the forms. We'll loop over the forms in the folders specified in the parameter table.
+
+1. Let's create the notebook to mount the storage account in Databricks. We'll call it **MountDataLake**.
+1. You'll need to call the **Settings** notebook first:
+
+ ```python
+ %run "./Settings"
+ ```
+
+1. In the second cell, we'll define variables for the sensitive parameters, which we'll retrieve from our Key Vault secrets.
+
+ ```python
+ cognitive_service_subscription_key = dbutils.secrets.get(scope = "FormRecognizer_SecretScope", key = "CognitiveserviceSubscriptionKey")
+ cognitive_service_endpoint = dbutils.secrets.get(scope = "FormRecognizer_SecretScope", key = "CognitiveServiceEndpoint")
+
+ scoring_storage_account_name = dbutils.secrets.get(scope = "FormRecognizer_SecretScope", key = "StorageAccountName")
+ scoring_storage_account_sas_key= dbutils.secrets.get(scope = "FormRecognizer_SecretScope", key = "StorageAccountSasKey")
+
+ scoring_mount_point = "/mnt/"+scoring_container_name
+ scoring_source_str = "wasbs://{container}@{storage_acct}.blob.core.windows.net/".format(container=scoring_container_name, storage_acct=scoring_storage_account_name)
+ scoring_conf_key = "fs.azure.sas.{container}.{storage_acct}.blob.core.windows.net".format(container=scoring_container_name, storage_acct=scoring_storage_account_name)
+
+ ```
+
+1. Next, we'll try to unmount the storage account in case it was previously mounted.
+
+ ```python
+ try:
+     dbutils.fs.unmount(scoring_mount_point) # Use this to unmount as needed
+ except:
+     print("{} already unmounted".format(scoring_mount_point))
+
+ ```
+
+1. Finally, we'll mount the storage account.
+
+ ```python
+ try:
+     dbutils.fs.mount(
+         source = scoring_source_str,
+         mount_point = scoring_mount_point,
+         extra_configs = {scoring_conf_key: scoring_storage_account_sas_key}
+     )
+ except Exception as e:
+     print("ERROR: {} already mounted. Run previous cells to unmount first".format(scoring_mount_point))
+
+ ```
+
+ > [!NOTE]
+ > We only mounted a single storage account; in this tutorial, the training files and the files we want to extract key-value pairs from are in the same storage account. If your scoring and training storage accounts are different, you will need to mount both storage accounts here, as sketched below.
+
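+If your training data does live in a separate storage account, a second mount follows the same pattern as the cell above. The following is a minimal sketch, assuming you store the training account's name and SAS in two additional Key Vault secrets; the secret names `TrainingStorageAccountName` and `TrainingStorageAccountSasKey` are illustrative and not part of this tutorial.
+
+```python
+# Minimal sketch (assumption): mount a second storage account for the training data when it
+# differs from the scoring account. The secret names below are illustrative; add them to your
+# Key Vault if you use this pattern.
+training_acct_name = dbutils.secrets.get(scope="FormRecognizer_SecretScope", key="TrainingStorageAccountName")
+training_acct_sas = dbutils.secrets.get(scope="FormRecognizer_SecretScope", key="TrainingStorageAccountSasKey")
+
+training_mount_point = "/mnt/" + training_container_name
+training_source_str = "wasbs://{container}@{storage_acct}.blob.core.windows.net/".format(
+    container=training_container_name, storage_acct=training_acct_name)
+training_conf_key = "fs.azure.sas.{container}.{storage_acct}.blob.core.windows.net".format(
+    container=training_container_name, storage_acct=training_acct_name)
+
+try:
+    dbutils.fs.mount(
+        source=training_source_str,
+        mount_point=training_mount_point,
+        extra_configs={training_conf_key: training_acct_sas}
+    )
+except Exception as e:
+    print("ERROR: {} already mounted. Unmount it first.".format(training_mount_point))
+```
+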
+### Write the scoring notebook
+
+Now we can create a scoring notebook. Similarly to the training notebook, we'll use files stored in folders in the Azure Data Lake storage account we just mounted. The folder name is passed as a variable. We'll loop over all the forms in the specified folder and extract the key-value pairs from them.
+
+1. Create a new notebook and call it **ScoreFormRecognizer**.
+1. Execute the **Settings** and the **MountDataLake** notebooks.
+
+ ```python
+ %run "./Settings"
+ %run "./MountDataLake"
+ ```
+
+1. Then add the following code, which calls the [Analyze](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm) API.
+
+ ```python
+ ########### Python Form Recognizer Async Analyze #############
+ import json
+ import os
+ import time
+ from requests import get, post
+
+ post_url = cognitive_service_endpoint + "/formrecognizer/v2.0/custom/models/%s/analyze" % model_id
+ source = r"/dbfs/mnt/" + scoring_container_name + "/" + scoring_input_blob_folder + "/" + file_to_score_name
+ output = r"/dbfs/mnt/" + scoring_container_name + "/scoringforms/ExtractionResult/" + os.path.splitext(os.path.basename(source))[0] + "_output.json"
+
+ params = {
+     "includeTextDetails": True
+ }
+
+ headers = {
+     # Request headers
+     'Content-Type': file_type,
+     'Ocp-Apim-Subscription-Key': cognitive_service_subscription_key,
+ }
+
+ with open(source, "rb") as f:
+     data_bytes = f.read()
+
+ try:
+     resp = post(url=post_url, data=data_bytes, headers=headers, params=params)
+     if resp.status_code != 202:
+         print("POST analyze failed:\n%s" % json.dumps(resp.json()))
+         quit()
+     print("POST analyze succeeded:\n%s" % resp.headers)
+     get_url = resp.headers["operation-location"]
+ except Exception as e:
+     print("POST analyze failed:\n%s" % str(e))
+     quit()
+ ```
+
+1. In the next cell, we'll get the results of the key-value pair extraction. This cell will output the result. Because we want the result in JSON format to process further into our Azure SQL Database or Cosmos DB, we'll write the result to a .json file. The output file name will be the name of the scored file, concatenated with "_output.json". The file will be stored in the same folder as the source file.
+
+ ```python
+ n_tries = 10
+ n_try = 0
+ wait_sec = 5
+ max_wait_sec = 5
+ while n_try < n_tries:
+     try:
+         resp = get(url=get_url, headers={"Ocp-Apim-Subscription-Key": cognitive_service_subscription_key})
+         resp_json = resp.json()
+         if resp.status_code != 200:
+             print("GET analyze results failed:\n%s" % json.dumps(resp_json))
+             n_try += 1
+             quit()
+         status = resp_json["status"]
+         if status == "succeeded":
+             print("Analysis succeeded:\n%s" % json.dumps(resp_json))
+             n_try += 1
+             quit()
+         if status == "failed":
+             print("Analysis failed:\n%s" % json.dumps(resp_json))
+             n_try += 1
+             quit()
+         # Analysis still running. Wait and retry.
+         time.sleep(wait_sec)
+         n_try += 1
+         wait_sec = min(2 * wait_sec, max_wait_sec)
+     except Exception as e:
+         msg = "GET analyze results failed:\n%s" % str(e)
+         print(msg)
+         n_try += 1
+ print("Analyze operation did not complete within the allocated time.")
+ quit()
+
+ ```
+1. Write the output file in a new cell:
+
+ ```python
+ import json
+
+ # Write the analysis result to the output .json file so it can be processed further.
+ with open(output, "w") as output_file:
+     json.dump(resp_json, output_file)
+ ```
+
+## Automate training and scoring with Azure Data Factory
+
+The only remaining step is to set up the Azure Data Factory (ADF) service to automate the training and scoring processes. First, follow the steps under [Create a data factory](https://docs.microsoft.com/azure/data-factory/quickstart-create-data-factory-portal#create-a-data-factory). After you create the ADF resource, you'll need to create three pipelines: one for training and two for scoring (explained below).
+
+### Training pipeline
+
+The first activity in the training pipeline is a Lookup activity that reads and returns the values in the parameterization table in the Azure SQL database. Because all the training datasets are in the same storage account and container (but potentially different folders), we'll keep the default **First row only** setting in the Lookup activity. For each form type, we'll train the model using all the files in **training_blob_root_folder**.
+
+:::image type="content" source="./media/tutorial-bulk-processing/training-pipeline.png" alt-text="training pipeline in data factory":::
+
+The stored procedure takes two parameters: **model_id** and **form_batch_group_id**. The code to return the model ID from the Databricks notebook is `dbutils.notebook.exit(model_id)` (see the sketch below), and the expression that reads the **form_batch_group_id** value from the Lookup activity output in the stored procedure activity is `@activity('GetParametersFromMetadaTable').output.firstRow.form_batch_group_id`.
+
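+The following is a minimal sketch of that final cell in the **TrainFormRecognizer** notebook; the activity name used in the ADF expression in the comment is an illustrative assumption, not part of the tutorial.
+
+```python
+# Final cell of the TrainFormRecognizer notebook: return the trained model ID to Azure Data Factory.
+# ADF can then read it from the Databricks Notebook activity output, for example with
+# @activity('TrainFormRecognizerNotebook').output.runOutput (the activity name is an assumption).
+dbutils.notebook.exit(model_id)
+```
+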
+### Scoring pipelines
+
+To extract the key-value pairs, we'll scan all the folders in the parameterization table and, for each folder, we'll extract the key-value pairs of all the files in it. As of today, ADF does not support nested ForEach loops. So instead, we'll create two pipelines. The first pipeline will do the lookup from the parameterization table and pass the folders list as a parameter to the second pipeline.
+
+:::image type="content" source="./media/tutorial-bulk-processing/scoring-pipeline-1a.png" alt-text="first scoring pipeline in data factory":::
+
+:::image type="content" source="./media/tutorial-bulk-processing/scoring-pipeline-1b.png" alt-text="first scoring pipeline in data factory, details":::
+
+The second pipeline will use a Get Metadata activity to get the list of files in the folder and pass it as a parameter to the scoring Databricks notebook.
+
+:::image type="content" source="./media/tutorial-bulk-processing/scoring-pipeline-2a.png" alt-text="second scoring pipeline in data factory":::
+
+:::image type="content" source="./media/tutorial-bulk-processing/scoring-pipeline-2b.png" alt-text="second scoring pipeline in data factory, details":::
+
+### Specify a degree of parallelism
+
+In both the training and scoring pipelines, you can specify the degree of parallelism to process multiple forms simultaneously.
+
+To set the degree of parallelism in the ADF pipeline:
+
+* Select the **ForEach** activity.
+* Uncheck the **Sequential** box.
+* Set the degree of parallelism in the **Batch count** text box. We recommend a maximum batch count of 15 for scoring.
+
+:::image type="content" source="./media/tutorial-bulk-processing/parallelism.png" alt-text="parallelism configuration for scoring activity in ADF":::
+
+## How to use
+
+You now have an automated pipeline to digitize your backlog of forms and run analytics on top of it. When you add new forms of an existing type to a storage folder you've already configured, simply re-run the scoring pipelines; they will update all of your output files, including output files for the new forms.
+
+If you add forms of a new type, you'll also need to upload a training dataset to the appropriate container. Then add a new row to the parameterization table, entering the locations of the new documents and their training dataset, and enter a value of -1 for **model_id** to indicate that a new model must be trained for these forms. When you run the training pipeline in ADF, it reads from the table, trains a model, and overwrites the model ID in the table. You can then call the scoring pipelines to start writing the output files.
+
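+If you prefer not to start the pipelines from the portal, you can also trigger a run programmatically. Here's a minimal sketch, assuming the `azure-identity` and `azure-mgmt-datafactory` Python packages; the subscription, resource group, factory, and pipeline names are placeholders for your own, and this snippet is not part of the tutorial's pipelines.
+
+```python
+# Minimal sketch: start an ADF pipeline run programmatically (placeholder names throughout).
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.datafactory import DataFactoryManagementClient
+
+adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
+
+run = adf_client.pipelines.create_run(
+    resource_group_name="<resource-group>",
+    factory_name="<data-factory-name>",
+    pipeline_name="<scoring-pipeline-name>",
+    parameters={}
+)
+print("Started pipeline run:", run.run_id)
+```
+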
+## Next steps
+
+In this tutorial, you set up Azure Data Factory pipelines to trigger the training and running of Form Recognizer models to digitize a large backlog of files. Next, explore the Form Recognizer API to see what else you can do with it.
+
+* [Form Recognizer REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeBusinessCardAsync)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/plan-manage-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/plan-manage-costs.md
@@ -56,9 +56,9 @@ After you delete QnA Maker resources, the following resources might continue to
- [Azure App Service (for the runtime)](https://azure.microsoft.com/pricing/details/app-service/) - [Azure Cognitive Search (for the data)](https://azure.microsoft.com/pricing/details/search/)
-### Using Monetary Credit with Cognitive Services
+### Using Azure Prepayment credit with Cognitive Services
-You can pay for Cognitive Services charges with your EA monetary commitment credit. However, you can't use EA monetary commitment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+You can pay for Cognitive Services charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
## Create budgets
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/includes/calling-sdk-js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/includes/calling-sdk-js.md
@@ -73,7 +73,7 @@ and a Phone Number for both callees.
Your Communication Services resource must be configured to allow PSTN calling. ```js
-const userCallee = { communicationUserId: <ACS_USER_ID> }
+const userCallee = { communicationUserId: <ACS_USER_ID> };
const pstnCallee = { phoneNumber: <PHONE_NUMBER>}; const groupCall = callAgent.call([userCallee, pstnCallee], placeCallOptions);
@@ -345,7 +345,7 @@ This will synchronously return the remote participant instance.
```js const userIdentifier = { communicationUserId: <ACS_USER_ID> };
-const pstnIdentifier = { phoneNumber: <PHONE_NUMBER>}
+const pstnIdentifier = { phoneNumber: <PHONE_NUMBER>};
const remoteParticipant = call.addParticipant(userIdentifier); const remoteParticipant = call.addParticipant(pstnIdentifier); ```
@@ -359,7 +359,7 @@ The participant will also be removed from the `remoteParticipants` collection.
```js const userIdentifier = { communicationUserId: <ACS_USER_ID> };
-const pstnIdentifier = { phoneNumber: <PHONE_NUMBER>}
+const pstnIdentifier = { phoneNumber: <PHONE_NUMBER>};
await call.removeParticipant(userIdentifier); await call.removeParticipant(pstnIdentifier); ```
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/concepts-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/concepts-limits.md
@@ -5,7 +5,7 @@ author: abhijitpai
ms.author: abpai ms.service: cosmos-db ms.topic: conceptual
-ms.date: 11/19/2020
+ms.date: 01/19/2021
--- # Azure Cosmos DB service quotas
@@ -32,7 +32,7 @@ You can provision throughput at a container-level or a database-level in terms o
| Maximum storage per container | Unlimited | | Maximum storage per database | Unlimited | | Maximum attachment size per Account (Attachment feature is being deprecated) | 2 GB |
-| Minimum RU/s required per 1 GB | 10 RU/s<br>**Note:** if your container or database contains more than 1 TB of data, your account may be eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program) |
+| Minimum RU/s required per 1 GB | 10 RU/s<br>**Note:** this minimum can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program) |
> [!NOTE] > To learn about best practices for managing workloads that have partition keys requiring higher limits for storage or throughput, see [Create a synthetic partition key](synthetic-partition-keys.md).
@@ -55,7 +55,7 @@ To estimate the minimum throughput required of a container with manual throughpu
Example: Suppose you have a container provisioned with 400 RU/s and 0 GB storage. You increase the throughput to 50,000 RU/s and import 20 GB of data. The minimum RU/s is now `MAX(400, 20 * 10 RU/s per GB, 50,000 RU/s / 100)` = 500 RU/s. Over time, the storage grows to 200 GB. The minimum RU/s is now `MAX(400, 200 * 10 RU/s per GB, 50,000 / 100)` = 2000 RU/s.
-**Note:** if your container or database contains more than 1 TB of data, your account may be eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
+**Note:** the minimum throughput of 10 RU/s per GB of storage can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
#### Minimum throughput on shared throughput database To estimate the minimum throughput required of a shared throughput database with manual throughput, find the maximum of:
@@ -67,7 +67,7 @@ To estimate the minimum throughput required of a shared throughput database with
Example: Suppose you have a database provisioned with 400 RU/s, 15 GB of storage, and 10 containers. The minimum RU/s is `MAX(400, 15 * 10 RU/s per GB, 400 / 100, 400 + 0 )` = 400 RU/s. If there were 30 containers in the database, the minimum RU/s would be `400 + MAX(30 - 25, 0) * 100 RU/s` = 900 RU/s.
-**Note:** if your container or database contains more than 1 TB of data, your account may be eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
+**Note:** the minimum throughput of 10 RU/s per GB of storage can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
In summary, here are the minimum provisioned RU limits.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/cosmos-db-reserved-capacity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmos-db-reserved-capacity.md
@@ -67,7 +67,7 @@ This recommendation to purchase a 30,000 RU/s reservation indicates that, among
|Field |Description | |---------|---------| |Scope | Option that controls how many subscriptions can use the billing benefit associated with the reservation. It also controls how the reservation is applied to specific subscriptions. <br/><br/> If you select **Shared**, the reservation discount is applied to Azure Cosmos DB instances that run in any subscription within your billing context. The billing context is based on how you signed up for Azure. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For pay-as-you-go customers, the shared scope is all individual subscriptions with pay-as-you-go rates created by the account administrator. <br/><br/> If you select **Single subscription**, the reservation discount is applied to Azure Cosmos DB instances in the selected subscription. <br/><br/> If you select **Single resource group**, the reservation discount is applied to Azure Cosmos DB instances in the selected subscription and the selected resource group within that subscription. <br/><br/> You can change the reservation scope after you buy the reserved capacity. |
- |Subscription | Subscription that's used to pay for the Azure Cosmos DB reserved capacity. The payment method on the selected subscription is used in charging the costs. The subscription must be one of the following types: <br/><br/> Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P): For an Enterprise subscription, the charges are deducted from the enrollment's monetary commitment balance or charged as overage. <br/><br/> Individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P): For an individual subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription. |
+ |Subscription | Subscription that's used to pay for the Azure Cosmos DB reserved capacity. The payment method on the selected subscription is used in charging the costs. The subscription must be one of the following types: <br/><br/> Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P): For an Enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. <br/><br/> Individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P): For an individual subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription. |
| Resource Group | Resource group to which the reserved capacity discount is applied. | |Term | One year or three years. | |Throughput Type | Throughput is provisioned as request units. You can buy a reservation for the provisioned throughput for both setups - single region writes as well as multiple region writes. The throughput type has two values to choose from: 100 RU/s per hour and 100 multi-region writes RU/s per hour.|
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/gremlin-headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/gremlin-headers.md
@@ -31,13 +31,12 @@ Keep in mind that taking dependency on these headers you are limiting portabilit
## Status codes
-Most common status codes returned by the server are listed below.
+Most common codes returned for `x-ms-status-code` status attribute by the server are listed below.
| Status | Explanation | | --- | --- | | **401** | Error message `"Unauthorized: Invalid credentials provided"` is returned when authentication password doesn't match Cosmos DB account key. Navigate to your Cosmos DB Gremlin account in the Azure portal and confirm that the key is correct.| | **404** | Concurrent operations that attempt to delete and update the same edge or vertex simultaneously. Error message `"Owner resource does not exist"` indicates that specified database or collection is incorrect in connection parameters in `/dbs/<database name>/colls/<collection or graph name>` format.|
-| **408** | `"Server timeout"` indicates that traversal took more than **30 seconds** and was canceled by the server. Optimize your traversals to run quickly by filtering vertices or edges on every hop of traversal to narrow down search scope.|
| **409** | `"Conflicting request to resource has been attempted. Retry to avoid conflicts."` This usually happens when vertex or an edge with an identifier already exists in the graph.| | **412** | Status code is complemented with error message `"PreconditionFailedException": One of the specified pre-condition is not met`. This error is indicative of an optimistic concurrency control violation between reading an edge or vertex and writing it back to the store after modification. Most common situations when this error occurs is property modification, for example `g.V('identifier').property('name','value')`. Gremlin engine would read the vertex, modify it, and write it back. If there is another traversal running in parallel trying to write the same vertex or an edge, one of them will receive this error. Application should submit traversal to the server again.| | **429** | Request was throttled and should be retried after value in **x-ms-retry-after-ms**|
@@ -48,6 +47,7 @@ Most common status codes returned by the server are listed below.
| **1004** | This status code indicates malformed graph request. Request can be malformed when it fails deserialization, non-value type is being deserialized as value type or unsupported gremlin operation requested. Application should not retry the request because it will not be successful. | | **1007** | Usually this status code is returned with error message `"Could not process request. Underlying connection has been closed."`. This situation can happen if client driver attempts to use a connection that is being closed by the server. Application should retry the traversal on a different connection. | **1008** | Cosmos DB Gremlin server can terminate connections to rebalance traffic in the cluster. Client drivers should handle this situation and use only live connections to send requests to the server. Occasionally client drivers may not detect that connection was closed. When application encounters an error, `"Connection is too busy. Please retry after sometime or open more connections."` it should retry traversal on a different connection.
+| **1009** | The operation did not complete in the allotted time and was canceled by the server. Optimize your traversals to run quickly by filtering vertices or edges on every hop of traversal to narrow search scope. Request timeout default is **60 seconds**. |
## Samples
@@ -106,4 +106,4 @@ try {
## Next steps * [HTTP status codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) * [Common Azure Cosmos DB REST response headers](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers)
-* [TinkerPop Graph Driver Provider Requirements]( http://tinkerpop.apache.org/docs/current/dev/provider/#_graph_driver_provider_requirements)
\ No newline at end of file
+* [TinkerPop Graph Driver Provider Requirements]( http://tinkerpop.apache.org/docs/current/dev/provider/#_graph_driver_provider_requirements)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/high-availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/high-availability.md
@@ -4,7 +4,7 @@ description: This article describes how Azure Cosmos DB provides high availabili
author: markjbrown ms.service: cosmos-db ms.topic: conceptual
-ms.date: 11/04/2020
+ms.date: 01/18/2021
ms.author: mjbrown ms.reviewer: sngun
@@ -15,7 +15,7 @@ ms.reviewer: sngun
Azure Cosmos DB provides high availability in two primary ways. First, Azure Cosmos DB replicates data across regions configured within a Cosmos account. Second, Azure Cosmos DB maintains 4 replicas of data within a region.
-Azure Cosmos DB is a globally distributed database service and is a foundational service in Azure. By default, is available in [all regions where Azure is available](https://azure.microsoft.com/global-infrastructure/services/?products=cosmos-db&regions=all). You can associate any number of Azure regions with your Azure Cosmos account and your data is automatically and transparently replicated. You can add or remove a region to your Azure Cosmos account at any time. Cosmos DB is available in all five distinct Azure cloud environments available to customers:
+Azure Cosmos DB is a globally distributed database service and is a foundational service available in [all regions where Azure is available](https://azure.microsoft.com/global-infrastructure/services/?products=cosmos-db&regions=all). You can associate any number of Azure regions with your Azure Cosmos account and your data is automatically and transparently replicated. You can add or remove a region to your Azure Cosmos account at any time. Cosmos DB is available in all five distinct Azure cloud environments available to customers:
* **Azure public** cloud, which is available globally.
@@ -35,15 +35,15 @@ Within a region, Azure Cosmos DB maintains four copies of your data as replicas
* A partition-set is a collection of multiple replica-sets. Within each region, every partition is protected by a replica-set with all writes replicated and durably committed by a majority of replicas. Replicas are distributed across as many as 10-20 fault domains.
-* Each partition across all the regions is replicated. Each region contains all the data partitions of an Azure Cosmos container and can accept writes and serve reads.
+* Each partition across all the regions is replicated. Each region contains all the data partitions of an Azure Cosmos container and can serve reads as well as serve writes when multi-region writes is enabled.
If your Azure Cosmos account is distributed across *N* Azure regions, there will be at least *N* x 4 copies of all your data. Having an Azure Cosmos account in more than 2 regions improves the availability of your application and provides low latency across the associated regions. ## SLAs for availability
-As a globally distributed database, Azure Cosmos DB provides comprehensive SLAs that encompass throughput, latency at the 99th percentile, consistency, and high availability. The table below shows the guarantees for high availability provided by Azure Cosmos DB for single and multi-region accounts. For high availability, always configure your Azure Cosmos accounts to have multiple write regions.
+Azure Cosmos DB provides comprehensive SLAs that encompass throughput, latency at the 99th percentile, consistency, and high availability. The table below shows the guarantees for high availability provided by Azure Cosmos DB for single and multi-region accounts. For higher write availability, configure your Azure Cosmos account to have multiple write regions.
-|Operation type | Single region |Multi-region (single region writes)|Multi-region (multi-region writes) |
+|Operation type | Single-region |Multi-region (single-region writes)|Multi-region (multi-region writes) |
|---------|---------|---------|-------| |Writes | 99.99 |99.99 |99.999| |Reads | 99.99 |99.999 |99.999|
@@ -86,37 +86,33 @@ For the rare cases of regional outage, Azure Cosmos DB makes sure your database
* Subsequent reads are redirected to the recovered region without requiring any changes to your application code. During both failover and rejoining of a previously failed region, read consistency guarantees continue to be honored by Azure Cosmos DB.
-* Even in a rare and unfortunate event when the Azure region is permanently irrecoverable, there is no data loss if your multi-region Azure Cosmos account is configured with *Strong* consistency. In the event of a permanently irrecoverable write region, a multi-region Azure Cosmos account configured with bounded-staleness consistency, the potential data loss window is restricted to the staleness window (*K* or *T*) where K=100,000 updates and T=5 minutes. For session, consistent-prefix and eventual consistency levels, the potential data loss window is restricted to a maximum of 15 minutes. For more information on RTO and RPO targets for Azure Cosmos DB, see [Consistency levels and data durability](./consistency-levels.md#rto)
+* Even in a rare and unfortunate event when the Azure region is permanently irrecoverable, there is no data loss if your multi-region Azure Cosmos account is configured with *Strong* consistency. In the event of a permanently irrecoverable write region, a multi-region Azure Cosmos account configured with bounded-staleness consistency, the potential data loss window is restricted to the staleness window (*K* or *T*) where K=100,000 updates or T=5 minutes, which ever happens first. For session, consistent-prefix and eventual consistency levels, the potential data loss window is restricted to a maximum of 15 minutes. For more information on RTO and RPO targets for Azure Cosmos DB, see [Consistency levels and data durability](./consistency-levels.md#rto)
## Availability Zone support
-In addition to cross region resiliency, you can now enable **zone redundancy** when selecting a region to associate with your Azure Cosmos database.
+In addition to cross region resiliency, Azure Cosmos DB also supports **zone redundancy** in supported regions when selecting a region to associate with your Azure Cosmos account.
-With Availability Zone support, Azure Cosmos DB will ensure replicas are placed across multiple zones within a given region to provide high availability and resiliency during zonal failures. There are no changes to latency and other SLAs in this configuration. In the event of a single zone failure, zone redundancy provides full data durability with RPO=0 and availability with RTO=0.
+With Availability Zone (AZ) support, Azure Cosmos DB will ensure replicas are placed across multiple zones within a given region to provide high availability and resiliency to zonal failures. Availability Zones provide a 99.995% availability SLA with no changes to latency. In the event of a single zone failure, zone redundancy provides full data durability with RPO=0 and availability with RTO=0. Zone redundancy is a supplemental capability to regional replication. Zone redundancy alone cannot be relied upon to achieve regional resiliency.
-Zone redundancy is a *supplemental capability* to the [replication in multi-region writes](how-to-multi-master.md) feature. Zone redundancy alone cannot be relied upon to achieve regional resiliency. For example, in the event of regional outages or low latency access across the regions, it's advised to have multiple write regions in addition to zone redundancy.
+Zone redundancy can only be configured when adding a new region to an Azure Cosmos account. For existing regions, zone redundancy can be enabled by removing the region then adding it back with the zone redundancy enabled. For a single region account, this requires adding one additional region to temporarily failover to, then removing and adding the desired region with zone redundancy enabled.
-When configuring multi-region writes for your Azure Cosmos account, you can opt into zone redundancy at no extra cost. Otherwise, please see the note below regarding the pricing for zone redundancy support. You can enable zone redundancy on an existing region of your Azure Cosmos account by removing the region and adding it back with the zone redundancy enabled. For a list of regions where availability zones are supported, see the [Availability zones](../availability-zones/az-region.md) documentation.
+When configuring multi-region writes for your Azure Cosmos account, you can opt into zone redundancy at no extra cost. Otherwise, please see the table below regarding pricing for zone redundancy support. For a list of regions where availability zones is available, see the [Availability zones](../availability-zones/az-region.md).
The following table summarizes the high availability capability of various account configurations:
-|KPI |Single region without Availability Zones (Non-AZ) |Single region with Availability Zones (AZ) |Multi-region writes with Availability Zones (AZ, 2 regions) – Most recommended setting |
-|---------|---------|---------|---------|
-|Write availability SLA | 99.99% | 99.99% | 99.999% |
-|Read availability SLA | 99.99% | 99.99% | 99.999% |
-|Price | Single region billing rate | Single region Availability Zone billing rate | Multi-region billing rate |
-|Zone failures – data loss | Data loss | No data loss | No data loss |
-|Zone failures – availability | Availability loss | No availability loss | No availability loss |
-|Read latency | Cross region | Cross region | Low |
-|Write latency | Cross region | Cross region | Low |
-|Regional outage – data loss | Data loss | Data loss | Data loss <br/><br/> When using bounded staleness consistency with multiple write regions and more than one region, data loss is limited to the bounded staleness configured on your account <br /><br />You can avoid data loss during a regional outage by configuring strong consistency with multiple regions. This option comes with trade-offs that affect availability and performance. It can be configured only on accounts that are configured for single-region writes. |
-|Regional outage – availability | Availability loss | Availability loss | No availability loss |
-|Throughput | X RU/s provisioned throughput | X RU/s provisioned throughput * 1.25 | 2X RU/s provisioned throughput <br/><br/> This configuration mode requires twice the amount of throughput when compared to a single region with Availability Zones because there are two regions. |
+|KPI|Single-region without AZs|Single-region with AZs|Multi-region, single-region writes with AZs|Multi-region, multi-region writes with AZs|
+|---------|---------|---------|---------|---------|
+|Write availability SLA | 99.99% | 99.995% | 99.995% | 99.999% |
+|Read availability SLA | 99.99% | 99.995% | 99.995% | 99.999% |
+|Zone failures – data loss | Data loss | No data loss | No data loss | No data loss |
+|Zone failures – availability | Availability loss | No availability loss | No availability loss | No availability loss |
+|Regional outage – data loss | Data loss | Data loss | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](consistency-levels-tradeoffs.md) for more information. | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](consistency-levels-tradeoffs.md) for more information.
+|Regional outage – availability | Availability loss | Availability loss | No availability loss for read region failure, temporary for write region failure | No availability loss |
+|Price (***1***) | N/A | Provisioned RU/s x 1.25 rate | Provisioned RU/s x 1.25 rate (***2***) | Multi-region write rate |
-> [!NOTE]
-> To enable Availability Zone support for a multi region Azure Cosmos account, the account must have multi-region writes enabled.
+***1*** For Serverless accounts request units (RU) are multiplied by a factor of 1.25.
-You can enable zone redundancy when adding a region to new or existing Azure Cosmos accounts. To enable zone redundancy on your Azure Cosmos account, you should set the `isZoneRedundant` flag to `true` for a specific location. You can set this flag within the locations property. For example, the following PowerShell snippet enables zone redundancy for the "Southeast Asia" region:
+***2*** 1.25 rate only applied to those regions in which AZ is enabled.
Availability Zones can be enabled via:
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/migrate-java-v4-sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/migrate-java-v4-sdk.md
@@ -85,7 +85,8 @@ In the Azure Cosmos DB Java SDK 3.x.x, the `CosmosItemProperties` object is expo
### Imports * The Azure Cosmos DB Java SDK 4.0 packages begin with `com.azure.cosmos`
- * Azure Cosmos DB Java SDK 3.x.x packages begin with `com.azure.data.cosmos`
+* Azure Cosmos DB Java SDK 3.x.x packages begin with `com.azure.data.cosmos`
+* Azure Cosmos DB Java SDK 2.x.x Sync API packages begin with `com.microsoft.azure.documentdb`
* Azure Cosmos DB Java SDK 4.0 places several classes in a nested package `com.azure.cosmos.models`. Some of these packages include:
@@ -109,7 +110,7 @@ This is different from Azure Cosmos DB Java SDK 3.x.x which exposes a fluent int
### Create resources
-The following code snippet shows the differences in how resources are created between the 4.0 and 3.x.x Async APIs:
+The following code snippet shows the differences in how resources are created between the 4.0, 3.x.x Async APIs and 2.x.x Sync APIs:
# [Java SDK 4.0 Async API](#tab/java-v4-async)
@@ -145,11 +146,38 @@ client.createDatabaseIfNotExists("YourDatabaseName")
return Mono.empty(); }).subscribe(); ```+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+```java
+ConnectionPolicy defaultPolicy = ConnectionPolicy.GetDefault();
+// Setting the preferred location to Cosmos DB Account region
+defaultPolicy.setPreferredLocations(Lists.newArrayList("Your Account Location"));
+
+// Create document client
+// <CreateDocumentClient>
+client = new DocumentClient("your.hostname", "your.masterkey", defaultPolicy, ConsistencyLevel.Eventual)
+
+// Create database with specified name
+Database databaseDefinition = new Database();
+databaseDefinition.setId("YourDatabaseName");
+ResourceResponse<Database> databaseResourceResponse = client.createDatabase(databaseDefinition, new RequestOptions());
+
+// Read database with specified name
+String databaseLink = "dbs/YourDatabaseName";
+databaseResourceResponse = client.readDatabase(databaseLink, new RequestOptions());
+Database database = databaseResourceResponse.getResource();
+
+// Create container with specified name
+DocumentCollection documentCollection = new DocumentCollection();
+documentCollection.setId("YourContainerName");
+documentCollection = client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).getResource();
+```
--- ### Item operations
-The following code snippet shows the differences in how item operations are performed between the 4.0 and 3.x.x Async APIs:
+The following code snippet shows the differences in how item operations are performed between the 4.0, 3.x.x Async APIs and 2.x.x Sync APIs:
# [Java SDK 4.0 Async API](#tab/java-v4-async)
@@ -167,11 +195,22 @@ Flux.fromIterable(docs)
.flatMap(doc -> container.createItem(doc)) .subscribe(); // ...Subscribing triggers stream execution. ```+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+```java
+// Container is created. Generate documents to insert.
+Document document = new Document();
+document.setId("YourDocumentId");
+ResourceResponse<Document> documentResourceResponse = client.createDocument(documentCollection.getSelfLink(), document,
+ new RequestOptions(), true);
+Document responseDocument = documentResourceResponse.getResource();
+```
--- ### Indexing
-The following code snippet shows the differences in how indexing is created between the 4.0 and 3.x.x Async APIs:
+The following code snippet shows the differences in how indexing is created between the 4.0, 3.x.x Async APIs and 2.x.x Sync APIs:
# [Java SDK 4.0 Async API](#tab/java-v4-async)
@@ -191,7 +230,7 @@ List<IncludedPath> includedPaths = new ArrayList<>();
IncludedPath includedPath = new IncludedPath(); includedPath.path("/*"); includedPaths.add(includedPath);
-indexingPolicy.setIncludedPaths(includedPaths);
+indexingPolicy.includedPaths(includedPaths);
// Excluded paths List<ExcludedPath> excludedPaths = new ArrayList<>();
@@ -206,11 +245,39 @@ CosmosContainer containerIfNotExists = database.createContainerIfNotExists(conta
.block() .container(); ```+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+```java
+// Custom indexing policy
+IndexingPolicy indexingPolicy = new IndexingPolicy();
+indexingPolicy.setIndexingMode(IndexingMode.Consistent); //To turn indexing off set IndexingMode.NONE
+
+// Included paths
+List<IncludedPath> includedPaths = new ArrayList<>();
+IncludedPath includedPath = new IncludedPath();
+includedPath.setPath("/*");
+includedPaths.add(includedPath);
+indexingPolicy.setIncludedPaths(includedPaths);
+
+// Excluded paths
+List<ExcludedPath> excludedPaths = new ArrayList<>();
+ExcludedPath excludedPath = new ExcludedPath();
+excludedPath.setPath("/name/*");
+excludedPaths.add(excludedPath);
+indexingPolicy.setExcludedPaths(excludedPaths);
+
+// Create container with specified name and indexing policy
+DocumentCollection documentCollection = new DocumentCollection();
+documentCollection.setId("YourContainerName");
+documentCollection.setIndexingPolicy(indexingPolicy);
+documentCollection = client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).getResource();
+```
--- ### Stored procedures
-The following code snippet shows the differences in how stored procedures are created between the 4.0 and 3.x.x Async APIs:
+The following code snippet shows the differences in how stored procedures are created between the 4.0, 3.x.x Async APIs and 2.x.x Sync APIs:
# [Java SDK 4.0 Async API](#tab/java-v4-async)
@@ -257,6 +324,45 @@ container.getScripts()
return Mono.empty(); }).block(); ```+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+```java
+logger.info("Creating stored procedure...\n");
+
+String sprocId = "createMyDocument";
+String sprocBody = "function createMyDocument() {\n" +
+ "var documentToCreate = {\"id\":\"test_doc\"}\n" +
+ "var context = getContext();\n" +
+ "var collection = context.getCollection();\n" +
+ "var accepted = collection.createDocument(collection.getSelfLink(), documentToCreate,\n" +
+ " function (err, documentCreated) {\n" +
+ "if (err) throw new Error('Error' + err.message);\n" +
+ "context.getResponse().setBody(documentCreated.id)\n" +
+ "});\n" +
+ "if (!accepted) return;\n" +
+ "}";
+StoredProcedure storedProcedureDef = new StoredProcedure();
+storedProcedureDef.setId(sprocId);
+storedProcedureDef.setBody(sprocBody);
+StoredProcedure storedProcedure = client.createStoredProcedure(documentCollection.getSelfLink(), storedProcedureDef, new RequestOptions())
+ .getResource();
+
+// ...
+
+logger.info(String.format("Executing stored procedure %s...\n\n", sprocId));
+
+RequestOptions options = new RequestOptions();
+options.setPartitionKey(new PartitionKey("test_doc"));
+
+StoredProcedureResponse storedProcedureResponse =
+ client.executeStoredProcedure(storedProcedure.getSelfLink(), options, null);
+logger.info(String.format("Stored procedure %s returned %s (HTTP %d), at cost %.3f RU.\n",
+ sprocId,
+ storedProcedureResponse.getResponseAsString(),
+ storedProcedureResponse.getStatusCode(),
+ storedProcedureResponse.getRequestCharge()));
+```
--- ### Change feed
@@ -301,11 +407,15 @@ ChangeFeedProcessor.Builder()
.subscribeOn(Schedulers.elastic()) .subscribe(); ```+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+* This feature is not supported as of Java SDK v2 sync.
--- ### Container level Time-To-Live(TTL)
-The following code snippet shows the differences in how to create time to live for data in the container using the 4.0 and 3.x.x Async APIs:
+The following code snippet shows the differences in how to create time to live for data in the container using the 4.0, 3.x.x Async APIs and 2.x.x Sync APIs:
# [Java SDK 4.0 Async API](#tab/java-v4-async)
@@ -321,11 +431,21 @@ CosmosContainerProperties containerProperties = new CosmosContainerProperties("m
containerProperties.defaultTimeToLive(90 * 60 * 60 * 24); container = database.createContainerIfNotExists(containerProperties, 400).block().container(); ```+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+```java
+DocumentCollection documentCollection;
+
+// Create a new container with TTL enabled with default expiration value
+documentCollection.setDefaultTimeToLive(90 * 60 * 60 * 24);
+documentCollection = client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).getResource();
+```
--- ### Item level Time-To-Live(TTL)
-The following code snippet shows the differences in how to create time to live for an item using the 4.0 and 3.x.x Async APIs:
+The following code snippet shows the differences in how to create time to live for an item using the 4.0, 3.x.x Async APIs and 2.x.x Sync APIs:
# [Java SDK 4.0 Async API](#tab/java-v4-async)
@@ -366,6 +486,17 @@ SalesOrder salesOrder = new SalesOrder(
60 * 60 * 24 * 30 // Expire sales orders in 30 days ); ```+
+# [Java SDK 2.x.x Sync API](#tab/java-v2-sync)
+
+```java
+Document document = new Document();
+document.setId("YourDocumentId");
+document.setTimeToLive(60 * 60 * 24 * 30 ); // Expire document in 30 days
+ResourceResponse<Document> documentResourceResponse = client.createDocument(documentCollection.getSelfLink(), document,
+ new RequestOptions(), true);
+Document responseDocument = documentResourceResponse.getResource();
+```
--- ## Next steps
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/plan-manage-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/plan-manage-costs.md
@@ -68,7 +68,7 @@ As you start using Azure Cosmos DB resources from Azure portal, you can see the
If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../cost-management-billing/manage/spending-limit.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-You can pay for Azure Cosmos DB charges with your Azure Enterprise Agreement monetary commitment credit. However, you can't use the monetary commitment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+You can pay for Azure Cosmos DB charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use the Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
## Monitor costs
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/set-throughput https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/set-throughput.md
@@ -5,7 +5,7 @@ author: markjbrown
ms.author: mjbrown ms.service: cosmos-db ms.topic: conceptual
-ms.date: 11/10/2020
+ms.date: 01/19/2021
--- # Introduction to provisioned throughput in Azure Cosmos DB
@@ -104,7 +104,7 @@ The response of those methods also contains the [minimum provisioned throughput]
The actual minimum RU/s may vary depending on your account configuration. But generally it's the maximum of: * 400 RU/s
-* Current storage in GB * 10 RU/s (unless your container or database contains more than 1 TB of data, see our [high storage / low throughput program](#high-storage-low-throughput-program))
+* Current storage in GB * 10 RU/s (this constraint can be relaxed in some cases, see our [high storage / low throughput program](#high-storage-low-throughput-program))
* Highest RU/s provisioned on the database or container / 100 ### Changing the provisioned throughput
@@ -134,7 +134,7 @@ As described in the [Current provisioned throughput](#current-provisioned-throug
This can be a concern in situations where you need to store large amounts of data, but have low throughput requirements in comparison. To better accommodate these scenarios, Azure Cosmos DB has introduced a **"high storage / low throughput" program** that decreases the RU/s per GB constraint on eligible accounts.
-You currently need to have at least 1 container or shared-throughput database containing more than 1 TB of data in your account to be eligible. To join this program and assess your full eligibility, all you have to do is to fill [this survey](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRzBPrdEMjvxPuDm8fCLUtXpUREdDU0pCR0lVVFY5T1lRVEhWNUZITUJGMC4u). The Azure Cosmos DB team will then follow up and proceed with your onboarding.
+To join this program and assess your full eligibility, all you have to do is to fill [this survey](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRzBPrdEMjvxPuDm8fCLUtXpUREdDU0pCR0lVVFY5T1lRVEhWNUZITUJGMC4u). The Azure Cosmos DB team will then follow up and proceed with your onboarding.
## Comparison of models This table shows a comparison between provisioning standard (manual) throughput on a database vs. on a container.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-keywords https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-keywords.md
@@ -5,7 +5,7 @@ author: timsander1
ms.service: cosmos-db ms.subservice: cosmosdb-sql ms.topic: conceptual
-ms.date: 07/29/2020
+ms.date: 01/20/2021
ms.author: tisande ---
@@ -103,6 +103,73 @@ Queries with an aggregate system function and a subquery with `DISTINCT` are not
SELECT COUNT(1) FROM (SELECT DISTINCT f.lastName FROM f) ```
+## LIKE
+
+Returns a Boolean value depending on whether a specific character string matches a specified pattern. A pattern can include regular characters and wildcard characters. You can write logically equivalent queries using either the `LIKE` keyword or the [RegexMatch](sql-query-regexmatch.md) system function. You'll observe the same index utilization regardless of which one you choose. Therefore, you should use `LIKE` if you prefer its syntax more than regular expressions.
+
+> [!NOTE]
+> Because `LIKE` can utilize an index, you should [create a range index](indexing-policy.md) for properties you are comparing using `LIKE`.
+
+You can use the following wildcard characters with LIKE:
+
+| Wildcard character | Description | Example |
+| -------------------- | ------------------------------------------------------------ | ------------------------------------------- |
+| % | Any string of zero or more characters | WHERE c.description LIKE "%SO%PS%" |
+| _ (underscore) | Any single character | WHERE c.description LIKE "%SO_PS%" |
+| [ ] | Any single character within the specified range ([a-f]) or set ([abcdef]). | WHERE c.description LIKE "%SO[t-z]PS%" |
+| [^] | Any single character not within the specified range ([^a-f]) or set ([^abcdef]). | WHERE c.description LIKE "%SO[^abc]PS%" |
++
+### Using LIKE with the % wildcard character
+
+The `%` character matches any string of zero or more characters. For example, by placing a `%` at the beginning and end of the pattern, the following query returns all items with a description that contains `fruit`:
+
+```sql
+SELECT *
+FROM c
+WHERE c.description LIKE "%fruit%"
+```
+
+If you only used a `%` character at the beginning of the pattern, you'd only return items with a description that started with `fruit`:
+
+```sql
+SELECT *
+FROM c
+WHERE c.description LIKE "fruit%"
+```
++
+### Using NOT LIKE
+
+The below example returns all items with a description that does not contain `fruit`:
+
+```sql
+SELECT *
+FROM c
+WHERE c.description NOT LIKE "%fruit%"
+```
+
+### Using the escape clause
+
+You can search for patterns that include one or more wildcard characters using the ESCAPE clause. For example, if you wanted to search for descriptions that contained the string `20-30%`, you wouldn't want to interpret the `%` as a wildcard character.
+
+```sql
+SELECT *
+FROM c
+WHERE c.description LIKE '%20-30!%%' ESCAPE '!'
+```
+
+### Using wildcard characters as literals
+
+You can enclose wildcard characters in brackets to treat them as literal characters. When you enclose a wildcard character in brackets, you remove any special attributes. Here are some examples:
+
+| Pattern | Meaning |
+| ----------------- | ------- |
+| LIKE "20-30[%]" | 20-30% |
+| LIKE "[_]n" | _n |
+| LIKE "[[]" | [ |
+| LIKE "]" | ] |
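As a rough sketch of how the bracketed literal from the first row could appear in a full query, a search for descriptions containing the literal string `20-30%` might look like the following, where `[%]` treats the percent sign as a literal character instead of a wildcard; the container alias `c` and the `description` property are illustrative.

```sql
SELECT *
FROM c
WHERE c.description LIKE "%20-30[%]%"
```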
+ ## IN Use the IN keyword to check whether a specified value matches any value in a list. For example, the following query returns all family items where the `id` is `WakefieldFamily` or `AndersenFamily`.
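A minimal sketch of that query, assuming the container is aliased as `Families` and each item exposes an `id` property as described, might be:

```sql
SELECT *
FROM Families
WHERE Families.id IN ('AndersenFamily', 'WakefieldFamily')
```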
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/cost-analysis-common-uses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/cost-analysis-common-uses.md
@@ -193,7 +193,7 @@ Costs are only shown for your active enrollment. If you transferred an enrollmen
1. In the Azure portal, navigate to **Cost Management + Billing** > **Overview**.
-1. Click **Breakdown** for the current month and view your monetary commitment burn down.
+1. Click **Breakdown** for the current month and view your Azure Prepayment (previously called monetary commitment) burn down.
[![EA costs overview - breakdown summary](./media/cost-analysis-common-uses/breakdown1.png)](./media/cost-analysis-common-uses/breakdown1.png#lightbox) 1. Click the **Usage and Charges** tab and view the prior month's breakdown in the chosen timespan. [![Usage and charges tab](./media/cost-analysis-common-uses/breakdown2.png)](./media/cost-analysis-common-uses/breakdown2.png#lightbox)
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
@@ -24,7 +24,7 @@ You can use the Budget API to send email alerts in a different language. For mor
## Credit alerts
-Credit alerts notify you when your Azure credit monetary commitments are consumed. Monetary commitments are for organizations with Enterprise Agreements. Credit alerts are generated automatically at 90% and at 100% of your Azure credit balance. Whenever an alert is generated, it's reflected in cost alerts and in the email sent to the account owners.
+Credit alerts notify you when your Azure Prepayment (previously called monetary commitment) is consumed. Azure Prepayment is for organizations with Enterprise Agreements. Credit alerts are generated automatically at 90% and at 100% of your Azure Prepayment credit balance. Whenever an alert is generated, it's reflected in cost alerts and in the email sent to the account owners.
## Department spending quota alerts
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/cost-mgt-best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/cost-mgt-best-practices.md
@@ -98,7 +98,7 @@ To learn more about the various options, visit [How to buy Azure](https://azure.
#### [Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/) -- Options for up-front monetary commitments
+- Options for up-front Azure Prepayment (previously called monetary commitment)
- Access to reduced Azure pricing #### [Azure in CSP](https://azure.microsoft.com/offers/ms-azr-0145p/)
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/get-started-partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/get-started-partners.md
@@ -3,7 +3,7 @@ title: Get started with Azure Cost Management for partners
description: This article explains how partners use Azure Cost Management features and how they enable Cost Management access for their customers. author: bandersmsft ms.author: banders
-ms.date: 11/16/2020
+ms.date: 01/19/2021
ms.topic: conceptual ms.service: cost-management-billing ms.subservice: cost-management
@@ -66,9 +66,9 @@ After you've onboarded your customers to a Microsoft Customer Agreement, the fol
Use the billing account scope to view pre-tax costs across all your customers and billing profiles. Invoice costs are only shown for customer's consumption-based products on the Microsoft Customer Agreement. However, invoice costs are shown for purchased-based products for customers on both the Microsoft Customer Agreement and the CSP offer. Currently, the default currency to view costs in the scope is US dollars. Budgets set for the scope are also in USD.
-Regardless of different customer-billed currencies, partners use Billing account scope to set budgets and manage costs in USD across their customers, subscriptions, resources, and resource groups.
+Regardless of different billed currencies, partners use Billing account scope to set budgets and manage costs in USD across their customers, subscriptions, resources, and resource groups.
-Partners also filter costs in a specific billing currency across customers in the cost analysis view. Select the **Actual cost** list to view costs in supported customer billing currencies.
+Partners also filter costs in a specific billing currency across customers in the cost analysis view. Select the **Actual cost** list to view costs in supported billing currencies.
![Example showing Actual cost selection for currencies](./media/get-started-partners/actual-cost-selector.png)
@@ -78,7 +78,7 @@ Use the [amortized cost view](quick-acm-cost-analysis.md#customize-cost-views) i
Use the billing profile scope to view pre-tax costs in the billing currency across all your customers for all products and subscriptions included in an invoice. You can filter costs in a billing profile for a specific invoice using the **InvoiceID** filter. The filter shows the consumption and product purchase costs for a specific invoice. You can also filter the costs for a specific customer on the invoice to see pre-tax costs.
-After you onboard customers to a Microsoft Customer Agreement, you receive an invoice that includes all charges for all products (consumption, purchases, and entitlements) for these customers on the Microsoft Customer Agreement. When billed in the same currency, these invoices also include the charges for entitlement and purchased products such as SaaS, Azure Marketplace, and reservations for customers who are still in the CSP offer.
+After you onboard customers to a Microsoft Customer Agreement, you receive an invoice that includes all charges for all products (consumption, purchases, and entitlements) for these customers on the Microsoft Customer Agreement. When billed in the same currency, these invoices also include the charges for entitlement and purchased products such as SaaS, Azure Marketplace, and reservations for customers who are still in the classic CSP offer and not on the Azure plan.
To help reconcile charges against the customer invoice, the billing profile scope enables you to see all costs that accrue for an invoice for your customers. Like the invoice, the scope shows costs for every customer in the new Microsoft Customer Agreement. The scope also shows every charge for customer entitlement products still in the current CSP offer.
@@ -86,7 +86,7 @@ The billing profile and billing account scopes are the only applicable scopes th
Billing profiles define the subscriptions that are included in an invoice. Billing profiles are the functional equivalent of an enterprise agreement enrollment. A billing profile is the scope where invoices are generated.
-Currently, the customer's billing currency is the default currency when viewing costs in the billing profile scope. Budgets set at the billing profile scope are in the billing currency.
+Currently, the billing currency is the default currency when viewing costs in the billing profile scope. Budgets set at the billing profile scope are in the billing currency.
Partners can use the scope to reconcile to invoices. And, they use the scope to set budgets in the billing currency for the following items:
@@ -215,7 +215,7 @@ The following data fields are found in usage detail files and Cost Management AP
| Quantity | Measured quantity purchased or consumed. The amount of the meter used during the billing period. | Number of units. Ensure it matches the information in your billing system during reconciliation. | | unitOfMeasure | Identifies the unit that the service is charged in. For example, GB and hours. | Identifies the unit that the service is charged in. For example, GB, hours, and 10,000 s. | | pricingCurrency | The currency defining the unit price. | The currency in the price list.|
-| billingCurrency | The currency defining the billed cost. | The currency of the customer's geographic region. |
+| billingCurrency | The currency defining the billed cost. | The currency defined as the billed currency on the invoice. |
| chargeType | Defines the type of charge that the cost represents in Azure Cost Management like purchase and refund. | The type of charge or adjustment. Not available for current activity. | | costinBillingCurrency | ExtendedCost or blended cost before tax in the billed currency. | N/A | | costinPricingCurrency | ExtendedCost or blended cost before tax in pricing currency to correlate with prices. | N/A |
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/billing-subscription-transfer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/billing-subscription-transfer.md
@@ -8,7 +8,7 @@ tags: billing,top-support-issue
ms.service: cost-management-billing ms.subservice: billing ms.topic: how-to
-ms.date: 11/11/2020
+ms.date: 01/06/2021
ms.author: banders ms.custom: contperf-fy21q1 ---
@@ -68,6 +68,18 @@ If you've accepted the billing ownership of an Azure subscription, we recommend
1. Remote Access credentials for services like Azure Virtual Machines. 1. If you're working with a partner, consider updating the partner ID on the subscription. You can update the partner ID in the [Azure portal](https://portal.azure.com). For more information, see [Link a partner ID to your Azure accounts](link-partner-id.md)
+## Cancel a transfer request
+
+Only one transfer request is active at a time. A transfer request is valid for 15 days. After the 15 days, the transfer request expires.
+
+To cancel a transfer request:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Subscriptions** > Select the subscription that you sent a transfer request for > select **Transfer billing ownership**.
+1. At the bottom of the page, select **Cancel the transfer request**.
+
+:::image type="content" source="./media/billing-subscription-transfer/transfer-billing-owership-cancel-request.png" alt-text="Example showing the Transfer billing ownership window with the Cancel the transfer request option" lightbox="./media/billing-subscription-transfer/transfer-billing-owership-cancel-request.png" :::
+ ## Troubleshooting Use the following troubleshooting information if you're having trouble transferring subscriptions.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/cost-management-automation-scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/cost-management-automation-scenarios.md
@@ -98,7 +98,7 @@ The following APIs are for Enterprise only:
### What's the difference between the Enterprise Reporting APIs and the Consumption APIs? When should I use each? These APIs have a similar set of functionality and can answer the same broad set of questions in the billing and cost management space. But they target different audiences: -- Enterprise Reporting APIs are available to customers who have signed an Enterprise Agreement with Microsoft that grants them access to negotiated monetary commitments and custom pricing. The APIs require a key that you can get from the [Enterprise Portal](https://ea.azure.com). For a description of these APIs, see [Overview of Reporting APIs for Enterprise customers](enterprise-api.md).
+- Enterprise Reporting APIs are available to customers who have signed an Enterprise Agreement with Microsoft that grants them access to negotiated Azure Prepayment (previously called monetary commitment) and custom pricing. The APIs require a key that you can get from the [Enterprise Portal](https://ea.azure.com). For a description of these APIs, see [Overview of Reporting APIs for Enterprise customers](enterprise-api.md).
- Consumption APIs are available to all customers, with a few exceptions. For more information, see [Azure consumption API overview](consumption-api-overview.md) and the [Azure Consumption API reference](/rest/api/consumption/). We recommend the provided APIs as the solution for the latest development scenarios.
@@ -107,7 +107,7 @@ These APIs provide fundamentally different data:
- The [Usage Details API](/rest/api/consumption/usagedetails) provides Azure usage and cost information per meter instance. The provided data has already passed through the cost metering system in Azure and had cost applied to it, along with other possible changes:
- - Changes to account for the use of prepaid monetary commitments
+ - Changes to account for the use of prepaid Azure Prepayment
- Changes to account for usage discrepancies discovered by Azure - The [Usage API](/previous-versions/azure/reference/mt219003(v=azure.100)) provides raw Azure usage information before it passes through the cost metering system in Azure. This data might not have any correlation with the usage or charge amount that's seen after the Azure charge metering system.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/ea-portal-agreements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-portal-agreements.md
@@ -3,7 +3,7 @@ title: Azure EA agreements and amendments
description: This article explains how Azure EA agreements and amendments affect your Azure EA portal use. author: bandersmsft ms.author: banders
-ms.date: 09/03/2020
+ms.date: 01/19/2021
ms.topic: conceptual ms.service: cost-management-billing ms.subservice: enterprise
@@ -16,7 +16,7 @@ The article describes how Azure EA agreements and amendments might affect your a
## Enrollment provisioning status
-The start date of a new Azure Prepayment is defined by the date that the regional operations center processed it. Since Azure Prepayment orders via the Azure EA portal are processed in the UTC time zone, you may experience some delay if your Azure Prepayment purchase order was processed in a different region. The coverage start date on the purchase order shows the start of the Azure Prepayment. The coverage start date is when the Azure Prepayment appears in the Azure EA portal.
+The start date of a new Azure Prepayment (previously called monetary commitment) is defined by the date that the regional operations center processed it. Since Azure Prepayment orders via the Azure EA portal are processed in the UTC time zone, you may experience some delay if your Azure Prepayment purchase order was processed in a different region. The coverage start date on the purchase order shows the start of the Azure Prepayment. The coverage start date is when the Azure Prepayment appears in the Azure EA portal.
## Support for enterprise customers
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/ea-portal-enrollment-invoices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-portal-enrollment-invoices.md
@@ -3,7 +3,7 @@ title: Azure Enterprise enrollment invoices
description: This article explains how to manage and act on your Azure Enterprise invoice. author: bandersmsft ms.author: banders
-ms.date: 12/09/2020
+ms.date: 01/19/2021
ms.topic: conceptual ms.service: cost-management-billing ms.subservice: enterprise
@@ -233,7 +233,7 @@ Refer to [Azure services](https://azure.microsoft.com/services/) and [Azure pric
### Enterprise Agreement units of measure
-The units of measure for Enterprise Agreements are often different than seen in our other programs such as the Microsoft Online Services Agreement program (MOSA). This disparity means that, for a number of services, the unit of measure is aggregated to provide the normalized pricing. The unit of measure shown in the Azure Enterprise portal's Usage Summary view is always the Enterprise measure. A full list of current units of measure and conversions for each service is provided in the [Friendly Service Names](https://azurepricing.blob.core.windows.net/supplemental/Friendly_Service_Names.xlsx) Excel file.
+The units of measure for Enterprise Agreements are often different than seen in our other programs such as the Microsoft Online Services Agreement program (MOSA). This disparity means that, for a number of services, the unit of measure is aggregated to provide the normalized pricing. The unit of measure shown in the Azure Enterprise portal's Usage Summary view is always the Enterprise measure. A full list of current units of measure and conversions for each service is provided by submitting a [support request](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
### Conversion between usage detail report and the usage summary page
@@ -320,12 +320,5 @@ The invoices will be released following the month after the billing period ends.
## Next steps -- The following Excel files provide details on Azure services and are updated on the 6th and 20th of every month:-
- | Title | Description | File name |
- | --- | --- | --- |
- | [Friendly Service Names](https://azurepricing.blob.core.windows.net/supplemental/Friendly_Service_Names.xlsx) | Lists all active services and includes: <br> <ul><li>service category</li> <li>friendly service name</li> <li>Prepayment name and part number</li> <li>consumption name and part number</li> <li>units of measure</li> <li>conversion factors between reported usage and displayed Enterprise portal usage</li></ul> | Friendly\_Service\_Names.xlsx |
- | [Service Download Fields](https://azurepricing.blob.core.windows.net/supplemental/Service_Download_Fields.xlsx) | This spreadsheet provides a listing of all possible combinations of the service-related fields in the Usage Download Report. | Service\_Download\_Fields.xlsx |
- - For information about understanding your invoice and charges, see [Understand your Azure Enterprise Agreement bill](../understand/review-enterprise-agreement-bill.md). - To start using the Azure Enterprise portal, see [Get started with the Azure EA portal](ea-portal-get-started.md).
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/ea-portal-vm-reservations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-portal-vm-reservations.md
@@ -20,7 +20,7 @@ You can exchange a reservation for another reservation of the same type. It's al
### Partial refunds
-We'll issue a partial refund when EA customers return reservations that were purchased using overage and not monetary commitment.
+We'll issue a partial refund when EA customers return reservations that were purchased using overage and not Azure Prepayment (previously called monetary commitment).
The refund will be displayed in the EA portal as a negative adjustment in the previous month and a positive adjustment in the current month. It will show up similarly to a reservations exchange. The credit memo will reference the original invoice number; therefore, to reconcile the initial purchase with the credit memo, please refer to the original invoice number.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/enterprise-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/enterprise-api.md
@@ -15,7 +15,7 @@ ms.author: banders
> [!Note] > Microsoft no longer updates the Azure Billing - Enterprise Reporting APIs. Instead, you should use [Azure Consumption](/rest/api/consumption) APIs.
-The Reporting APIs enable Enterprise Azure customers to programmatically pull consumption and billing data into preferred data analysis tools. Enterprise customers have signed an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) with Azure to make negotiated monetary commitments and gain access to custom pricing for Azure resources.
+The Reporting APIs enable Enterprise Azure customers to programmatically pull consumption and billing data into preferred data analysis tools. Enterprise customers have signed an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) with Azure to make negotiated Azure Prepayment (previously called monetary commitment) and gain access to custom pricing for Azure resources.
All date and time parameters required for APIs must be represented as combined Coordinated Universal Time (UTC) values. Values returned by APIs are shown in UTC format.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/usage-rate-card-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/usage-rate-card-overview.md
@@ -31,13 +31,13 @@ Use the Azure [Resource Usage API](/previous-versions/azure/reference/mt219003(v
* **Hourly or Daily Aggregations** - Callers can specify whether they want their Azure usage data in hourly buckets or daily buckets. The default is daily. * **Instance metadata (includes resource tags)** – Get instance-level detail like the fully qualified resource uri (/subscriptions/{subscription-id}/..), the resource group information, and resource tags. This metadata helps you deterministically and programmatically allocate usage by the tags, for use-cases like cross-charging. * **Resource metadata** - Resource details such as the meter name, meter category, meter sub category, unit, and region give the caller a better understanding of what was consumed. We're also working to align resource metadata terminology across the Azure portal, Azure usage CSV, EA billing CSV, and other public-facing experiences, to let you correlate data across experiences.
-* **Usage for different offer types** – Usage data is available for offer types like Pay-as-you-go, MSDN, Monetary commitment, Monetary credit, and EA, except [CSP](/partner-center).
+* **Usage for different offer types** – Usage data is available for offer types like Pay-as-you-go, MSDN, Azure Prepayment (previously called monetary commitment), Azure Prepayment credit, and EA, except [CSP](/partner-center).
## Azure Resource RateCard API (Preview) Use the [Azure Resource RateCard API](/previous-versions/azure/reference/mt219005(v=azure.100)) to get the list of available Azure resources and estimated pricing information for each. The API includes: * **Azure role-based access control (Azure RBAC)** - Configure your access policies on the [Azure portal](https://portal.azure.com) or through [Azure PowerShell cmdlets](/powershell/azure/) to specify which users or applications can get access to the RateCard data. Callers must use standard Azure Active Directory tokens for authentication. Add the caller to either the Reader, Owner, or Contributor role to get access to the usage data for a particular Azure subscription.
-* **Support for Pay-as-you-go, MSDN, Monetary commitment, and Monetary credit offers (EA and [CSP](/partner-center) not supported)** - This API provides Azure offer-level rate information. The caller of this API must pass in the offer information to get resource details and rates. We're currently unable to provide EA rates because EA offers have customized rates per enrollment.
+* **Support for Pay-as-you-go, MSDN, Azure Prepayment, and Azure Prepayment credit offers (EA and [CSP](/partner-center) not supported)** - This API provides Azure offer-level rate information. The caller of this API must pass in the offer information to get resource details and rates. We're currently unable to provide EA rates because EA offers have customized rates per enrollment.
## Scenarios Here are some of the scenarios that are made possible with the combination of the Usage and the RateCard APIs:
@@ -45,7 +45,7 @@ Here are some of the scenarios that are made possible with the combination of th
* **Azure spend during the month** - Use the combination of the Usage and RateCard APIs to get better insights into your cloud spend during the month. You can analyze the hourly and daily buckets of usage and charge estimates. * **Set up alerts** – Use the Usage and the RateCard APIs to get estimated cloud consumption and charges, and set up resource-based or monetary-based alerts. * **Predict bill** – Get your estimated consumption and cloud spend, and apply machine learning algorithms to predict what the bill would be at the end of the billing cycle.
-* **Pre-consumption cost analysis** – Use the RateCard API to predict how much your bill would be for your expected usage when you move your workloads to Azure. If you have existing workloads in other clouds or private clouds, you can also map your usage with the Azure rates to get a better estimate of Azure spend. This estimate gives you the ability to pivot on offer, and compare and contrast between the different offer types beyond Pay-As-You-Go, like Monetary commitment and Monetary credit. The API also gives you the ability to see cost differences by region and allows you to do a what-if cost analysis to help you make deployment decisions.
+* **Pre-consumption cost analysis** – Use the RateCard API to predict how much your bill would be for your expected usage when you move your workloads to Azure. If you have existing workloads in other clouds or private clouds, you can also map your usage with the Azure rates to get a better estimate of Azure spend. This estimate gives you the ability to pivot on offer, and compare and contrast between the different offer types beyond Pay-As-You-Go, like Azure Prepayment and Azure Prepayment credit. The API also gives you the ability to see cost differences by region and allows you to do a what-if cost analysis to help you make deployment decisions.
* **What-if analysis** - * You can determine whether it is more cost-effective to run workloads in another region, or on another configuration of the Azure resource. Azure resource costs may differ based on the Azure region you're using.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/exchange-and-refund-azure-reservations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
@@ -42,7 +42,7 @@ First, Microsoft cancels the existing reservation and refunds the pro-rated amou
### Enterprise agreement customers
-Money is added to the monetary commitment for exchanges and refunds if the original purchase was made using one. If the monetary commitment term using the reservation was purchased is no longer active, then credit is added to your current enterprise agreement monetary commitment term. The credit is valid for 90 days from the date of refund. Unused credit expires at the end of 90 days.
+Money is added to the Azure Prepayment (previously called monetary commitment) for exchanges and refunds if the original purchase was made using one. If the Azure Prepayment term under which the reservation was purchased is no longer active, then credit is added to your current enterprise agreement Azure Prepayment term. The credit is valid for 90 days from the date of refund. Unused credit expires at the end of 90 days.
If the original purchase was made as an overage, the original invoice on which the reservation was purchased and all later invoices are reopened and readjusted. Microsoft issues a credit memo for the refunds.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/prepay-app-service-isolated-stamp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/prepay-app-service-isolated-stamp.md
@@ -43,7 +43,7 @@ You can buy Isolated Stamp reserved capacity in the [Azure portal](https://porta
1. Go to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/documentation/filters/%7B%22reservedResourceType%22%3A%22AppService%22%7D). 1. Select a subscription. Use the **Subscription** list to choose the subscription that's used to pay for the reserved capacity. The payment method of the subscription is charged the costs for the reserved capacity. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (offer numbers: MS-AZR-0003P or MS-AZR-0023P) or a CSP subscription.
- - For an enterprise subscription, the charges are deducted from the enrollment's monetary commitment balance or charged as overage.
+ - For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage.
- For Pay-As-You-Go subscription, the charges are billed to the credit card or invoice payment method on the subscription. 1. Select a **Scope** to choose a subscription scope. - **Single resource group scope** – Applies the reservation discount to the matching resources in the selected resource group only.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/prepay-databricks-reserved-capacity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/prepay-databricks-reserved-capacity.md
@@ -43,7 +43,7 @@ You can buy Databricks plans in the [Azure portal](https://portal.azure.com/#bla
**To Purchase:** 1. Go to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/documentation/filters/%7B%22reservedResourceType%22%3A%22Databricks%22%7D).
-1. Select a subscription. Use the **Subscription** list to select the subscription that's used to pay for the reserved capacity. The payment method of the subscription is charged the upfront costs for the reserved capacity. Charges are deducted from the enrollment's monetary commitment balance or charged as overage.
+1. Select a subscription. Use the **Subscription** list to select the subscription that's used to pay for the reserved capacity. The payment method of the subscription is charged the upfront costs for the reserved capacity. Charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage.
1. Select a scope. Use the **Scope** list to select a subscription scope: - **Single resource group scope** – Applies the reservation discount to the matching resources in the selected resource group only. - **Single subscription scope** – Applies the reservation discount to the matching resources in the selected subscription.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/prepay-hana-large-instances-reserved-capacity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/prepay-hana-large-instances-reserved-capacity.md
@@ -223,7 +223,7 @@ The following information explains the meaning of various reservation fields.
`SapHana` **Subscription**
- The subscription used to pay for the reservation. The payment method on the subscription is charged the costs for the reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or Microsoft Customer Agreement. The charges are deducted from the monetary commitment balance, if available, or charged as overage.
+ The subscription used to pay for the reservation. The payment method on the subscription is charged the costs for the reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or Microsoft Customer Agreement. The charges are deducted from the Azure Prepayment (previously called monetary commitment) balance, if available, or charged as overage.
**Scope** The reservation's scope should be single scope.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/prepay-sql-data-warehouse-charges https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/prepay-sql-data-warehouse-charges.md
@@ -41,7 +41,7 @@ For example, assume your total consumption of Azure Synapse Analytics is DW3000c
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Select **All services** > **Reservations**. 3. Select a subscription. Use the Subscription list to choose the subscription that's used to pay for the reserved capacity. The payment method of the subscription is charged the costs for the reserved capacity. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (offer numbers: MS-AZR-0003P or MS-AZR-0023P).
- - For an enterprise subscription, the charges are deducted from the enrollment's monetary commitment balance or charged as overage.
+ - For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage.
- For Pay-As-You-Go subscription, the charges are billed to the credit card or invoice payment method on the subscription. 4. Select a scope. Use the Scope list to choose a subscription scope. - **Single resource group scope** – Applies the reservation discount to the matching resources in the selected resource group only.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/save-compute-costs-reservations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/save-compute-costs-reservations.md
@@ -51,7 +51,7 @@ For more information, seeΓÇ»[Buy a reservation](prepare-buy-reservation.md).
## How is a reservation billed?
-The reservation is charged to the payment method tied to the subscription. The reservation cost is deducted from your monetary commitment balance, if available. When your monetary commitment balance doesn't cover the cost of the reservation, you're billed the overage. If you have a subscription from an individual plan with pay-as-you-go rates, the credit card you have on your account is billed immediately for up-front purchases. Monthly payments appear on your invoice and your credit card is charged monthly. When you're billed by invoice, you see the charges on your next invoice.
+The reservation is charged to the payment method tied to the subscription. The reservation cost is deducted from your Azure Prepayment (previously called monetary commitment) balance, if available. When your Azure Prepayment balance doesn't cover the cost of the reservation, you're billed the overage. If you have a subscription from an individual plan with pay-as-you-go rates, the credit card you have on your account is billed immediately for up-front purchases. Monthly payments appear on your invoice and your credit card is charged monthly. When you're billed by invoice, you see the charges on your next invoice.
## Who can manage a reservation by default
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/understand/review-enterprise-agreement-bill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/review-enterprise-agreement-bill.md
@@ -15,7 +15,7 @@ ms.author: banders
Azure customers with an Enterprise Agreement receive an invoice when they exceed the organization's credit or use services that aren't covered by the credit.
-Your organization's credit includes your monetary commitment. The monetary commitment is the amount your organization paid upfront for usage of Azure services. You can add monetary commitment funds to your Enterprise Agreement by contacting your Microsoft account manager or reseller.
+Your organization's credit includes your Azure Prepayment (previously called monetary commitment). Azure Prepayment is the amount your organization paid upfront for usage of Azure services. You can add Azure Prepayment funds to your Enterprise Agreement by contacting your Microsoft account manager or reseller.
This tutorial applies only to Azure customers with an Azure Enterprise Agreement.
@@ -150,7 +150,7 @@ Some reasons for differences in pricing:
## Request detailed usage information
-Enterprise administrators can view a summary of their usage data, monetary commitment consumed, and charges associated with additional usage in the Azure Enterprise portal. The charges are presented at the summary level across all accounts and subscriptions.
+Enterprise administrators can view a summary of their usage data, Azure Prepayment consumed, and charges associated with additional usage in the Azure Enterprise portal. The charges are presented at the summary level across all accounts and subscriptions.
To view detailed usage in specific accounts, download the usage detail report by going to **Reports** > **Download Usage**.
@@ -161,7 +161,7 @@ For indirect enrollments, your partner needs to enable the markup function befor
## Reports
-Enterprise administrators can view a summary of their usage data, monetary commitment consumed, and charges associated with additional usage in the Azure Enterprise portal. The charges are presented at the summary level across all accounts and subscriptions.
+Enterprise administrators can view a summary of their usage data, Azure Prepayment consumed, and charges associated with additional usage in the Azure Enterprise portal. The charges are presented at the summary level across all accounts and subscriptions.
### Azure Enterprise reports
data-factory https://docs.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-rest-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-rest-api.md
@@ -6,15 +6,15 @@ documentationcenter: ''
author: linda33wj manager: shwang ms.reviewer: douglasl- ms.service: data-factory ms.workload: data-services ms.tgt_pltfrm: ms.devlang: rest-api ms.topic: quickstart
-ms.date: 06/10/2019
+ms.date: 01/18/2021
ms.author: jingwang ---+ # Quickstart: Create an Azure data factory and pipeline by using the REST API > [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
@@ -298,7 +298,7 @@ Here is the sample output:
``` ## Create pipeline
-In this example, this pipeline contains one activity and takes two parameters - input blob path and output blob path. The values for these parameters are set when the pipeline is triggered/run. The copy activity refers to the same blob dataset created in the previous step as input and output. When the dataset is used as an input dataset, input path is specified. And, when the dataset is used as an output dataset, the output path is specified.
+In this example, this pipeline contains one Copy activity. The Copy activity refers to the "InputDataset" and the "OutputDataset" created in the previous step as input and output.
```powershell $request = "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${dataFactoryName}/pipelines/Adfv2QuickStartPipeline?api-version=${apiVersion}"
@@ -378,10 +378,7 @@ Here is the sample output:
## Create pipeline run
-In this step, you set values of **inputPath** and **outputPath** parameters specified in pipeline with the actual values of source and sink blob paths, and trigger a pipeline run. The pipeline run ID returned in the response body is used in later monitoring API.
-
-Replace value of **inputPath** and **outputPath** with your source and sink blob path to copy data from and to before saving the file.
-
+In this step, you trigger a pipeline run. The pipeline run ID returned in the response body is used in later monitoring API.
```powershell $request = "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/pipelines/Adfv2QuickStartPipeline/createRun?api-version=${apiVersion}"
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-overview.md
@@ -7,7 +7,7 @@ author: alkohli
ms.service: databox ms.subservice: edge ms.topic: overview
-ms.date: 09/23/2020
+ms.date: 01/18/2021
ms.author: alkohli #Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro is and how it works so I can use it to process and transform data before sending to Azure. ---
@@ -68,7 +68,7 @@ The Azure Stack Edge Pro solution comprises of Azure Stack Edge resource, Azure
Azure Stack Edge Pro physical device, Azure resource, and target storage account to which you transfer data do not all have to be in the same region. -- **Resource availability** - For this preview release, the resource is available in East US, West EU, and South East Asia regions.
+- **Resource availability** - For this release, the resource is available in East US, West EU, and South East Asia regions.
- **Device availability** - For a list of all the countries/regions where the Azure Stack Edge Pro device is available, go to **Availability** section in the **Azure Stack Edge Pro** tab for [Azure Stack Edge Pro pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/#azureStackEdgePro).
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-technical-specifications-compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-technical-specifications-compliance.md
@@ -7,7 +7,7 @@ author: alkohli
ms.service: databox ms.subservice: edge ms.topic: conceptual
-ms.date: 10/07/2020
+ms.date: 01/19/2021
ms.author: alkohli ---
@@ -22,7 +22,7 @@ The Azure Stack Edge Pro device has the following specifications for compute and
| Specification | Value | |-------------------------|----------------------------| | CPU | 2 X Intel Xeon Silver 4214 (Cascade Lake) CPU |
-| Memory | 128 (8x16 GB) GB RAM |
+| Memory | 128 (8x16 GB) GB RAM <br> Dell Compatible 16 GB PC4-23400 DDR4-2933MHz 2Rx8 1.2v ECC Registered RDIMM |
## Compute acceleration specifications
@@ -52,7 +52,7 @@ Your Azure Stack Edge Pro device has six network interfaces, PORT1- PORT6.
| Specification | Description | |-------------------------|----------------------------|
-| Network interfaces | **2 X 1 GbE interfaces** – 1 management interface Port 1 is used for initial setup and is static by default. After the initial setup is complete, you can use the interface for data with any IP address. However, on reset, the interface reverts back to static IP. <br>The other interface Port 2 is user configurable, can be used for data transfer, and is DHCP by default. <br>**4 X 25 GbE interfaces** – These data interfaces, Port 3 through Port 6, can be configured by user as DHCP (default) or static. These can also operate as 10 GbE interfaces. |
+| Network interfaces | **2 X 1 GbE interfaces** – 1 management interface Port 1 is used for initial setup and is static by default. After the initial setup is complete, you can use the interface for data with any IP address. However, on reset, the interface reverts back to static IP. <br>The other interface Port 2 is user configurable, can be used for data transfer, and is DHCP by default. <br>**4 X 25 GbE interfaces** – These data interfaces, Port 3 through Port 6, can be configured by user as DHCP (default) or static. They can also operate as 10 GbE interfaces. |
Your Azure Stack Edge Pro device has the following network hardware:
@@ -64,7 +64,7 @@ Here are the details for the Mellanox card:
| Parameter | Description | |-------------------------|----------------------------| | Model | ConnectX®-4 Lx EN network interface card |
-| Model Description | 25GbE dual-port SFP28; PCIe3.0 x8; ROHS R6 |
+| Model Description | 25 GbE dual-port SFP28; PCIe3.0 x8; ROHS R6 |
| Device Part Number (R640) | MCX4121A-ACAT | | PSID (R640) | MT_2420110034 |
@@ -84,12 +84,10 @@ The Azure Stack Edge Pro devices have five 2.5" NVMe DC P4610 SSDs, each with a
| Boot SATA solid-state drives (SSD) | 1 | | Boot SSD capacity | 240 GB | | Total capacity | 8.0 TB |
-| Total usable capacity* | ~ 4.19 TB |
+| Total usable capacity | ~ 4.19 TB |
+| RAID configuration | Storage Spaces Direct with a combination of mirroring and parity |
| SAS controller | HBA330 12 Gbps | -
-**After parity resiliency and reserving space for internal use.*
- <!--Remove based on feedback from Ravi ## Other hardware specifications
@@ -146,7 +144,8 @@ This section lists the specifications related to the enclosure environment such
| Enclosure | Operational specifications | |-----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Airflow | System airflow is front to rear. System must be operated with a low-pressure, rear-exhaust installation. <!--Back pressure created by rack doors and obstacles should not exceed 5 pascals (0.5 mm water gauge).--> |
-| Maximum altitude, operational | 3048 meters (10,000 feet) with maximum operating temperature de-rated determined by [Operating temperature de-rating specifications](#operating-temperature-de-rating-specifications). |
+| Ingress protection (IP) | This type of rack-mounted equipment for indoor use typically isn't tested for ingress protection (protection against solids and liquids for an electrical enclosure). Manufacturer's safety assessment shows IPX0 (no ingress protection). |
+| Maximum altitude, operational | 3048 meters (10,000 feet) with maximum operating temperature de-rated determined by [Operating temperature de-rating specifications](#operating-temperature-de-rating-specifications). |
| Maximum altitude, non-operational | 12,000 meters (39,370 feet) | | Shock, operational | 6 G for 11 milliseconds in 6 orientations | | Shock, non-operational | 71 G for 2 milliseconds in 6 orientations |
databox https://docs.microsoft.com/en-us/azure/databox/data-box-disk-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-disk-limits.md
@@ -71,7 +71,7 @@ Here are the sizes of the Azure objects that can be written. Make sure that all
| Block Blob | ~ 4.75 TiB | | Page Blob | 8 TiB <br> (Every file uploaded in Page Blob format must be 512 bytes aligned, else the upload fails. <br> Both the VHD and VHDX are 512 bytes aligned.) | |Azure Files | 1 TiB <br> Max. size of share is 5 TiB |
-| Managed disks |4 TiB <br> For more information on size and limits, see: <li>[Scalability targets for managed disks](../virtual-machines/windows/disk-scalability-targets.md#managed-virtual-machine-disks)</li>|
+| Managed disks |4 TiB <br> For more information on size and limits, see: <li>[Scalability targets for managed disks](../virtual-machines/disks-scalability-targets.md#managed-virtual-machine-disks)</li>|
## Azure block blob, page blob, and file naming conventions
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/ddos-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-faq.md
@@ -30,6 +30,12 @@ Under a tenant, a single DDoS protection plan can be used across multiple subscr
See [Azure DDoS Protection Standard pricing](https://azure.microsoft.com/pricing/details/ddos-protection/) for more details.
+## Is the service zone resilient?
+Yes. Azure DDoS Protection is zone-resilient by default.
+
+## How do I configure the service to be zone-resilient?
+No customer configuration is necessary to enable zone-resiliency. Zone-resiliency for Azure DDoS Protection resources is available by default and managed by the service itself.
+ ## What about protection at the service layer (layer 7)? Customers can use Azure DDoS Protection service in combination with a Web Application Firewall (WAF) for protection both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) as well as third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/how-to-set-up-your-network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-set-up-your-network.md
@@ -218,7 +218,7 @@ In a star network, every host is connected to a central hub. In its simplest for
Here are some recommendations for deploying multiple sensors:
-| **Number **| **Meters** | **Dependency** | **Number of sensors** |
+| **Number** | **Meters** | **Dependency** | **Number of sensors** |
|--|--|--|--| | The maximum distance between switches | 80 meters | Prepared Ethernet cable | More than 1 | | Number of OT networks | More than 1 | No physical connectivity | More than 1 |
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-create-azure-function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-azure-function.md
@@ -64,25 +64,21 @@ In order to use the SDK, you'll need to include the following packages into your
You can do this by right-selecting your project and selecting _Manage NuGet Packages_ from the list. Then, in the window that opens, select the _Browse_ tab and search for the following packages. Select _Install_ and _accept_ the License agreement to install the packages. * `Azure.DigitalTwins.Core`
-* `Azure.Identity`
-
-For configuration of the Azure SDK pipeline to set up properly for Azure Functions, you will also need the following packages. Repeat the same process as above to install all the packages.
-
+* `Azure.Identity`
* `System.Net.Http`
-* `Azure.Core.Pipeline`
+* `Azure.Core`
**Option 2. Add packages using `dotnet` command-line tool:** Alternatively, you can use the following `dotnet add` commands in a command line tool:+ ```cmd/sh
+dotnet add package Azure.DigitalTwins.Core
+dotnet add package Azure.Identity
dotnet add package System.Net.Http
-dotnet add package Azure.Core.Pipeline
+dotnet add package Azure.Core
```
-Then, add two more dependencies to your project that will be needed to work with Azure Digital Twins. You can use the links below to navigate to the packages on NuGet, where you can find the console commands (including for .NET CLI) to add the latest version of each to your project.
- * [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
- * [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure.
- Next, in your Visual Studio Solution Explorer, open _function.cs_ file where you have sample code and add the following _using_ statements to your function. :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/adtIngestFunctionSample.cs" id="Function_dependencies":::
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-integrate-maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-maps.md
@@ -5,7 +5,7 @@ titleSuffix: Azure Digital Twins
description: See how to use Azure Functions to create a function that can use the twin graph and Azure Digital Twins notifications to update an Azure Maps indoor map. author: alexkarcher-msft ms.author: alkarche # Microsoft employees only
-ms.date: 6/3/2020
+ms.date: 1/19/2021
ms.topic: how-to ms.service: digital-twins
@@ -72,7 +72,7 @@ This pattern reads from the room twin directly, rather than the IoT device, whic
## Create a function to update maps
-You're going to create an *Event Grid-triggered function* inside your function app from the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md)). This function will unpack those notifications and send updates to an Azure Maps feature stateset to update the temperature of one room.
+You're going to create an **Event Grid-triggered function** inside your function app from the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md)). This function will unpack those notifications and send updates to an Azure Maps feature stateset to update the temperature of one room.
See the following document for reference info: [*Azure Event Grid trigger for Azure Functions*](../azure-functions/functions-bindings-event-grid-trigger.md).
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-integrate-time-series-insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-time-series-insights.md
@@ -5,7 +5,7 @@ titleSuffix: Azure Digital Twins
description: See how to set up event routes from Azure Digital Twins to Azure Time Series Insights. author: alexkarcher-msft ms.author: alkarche # Microsoft employees only
-ms.date: 7/14/2020
+ms.date: 1/19/2021
ms.topic: how-to ms.service: digital-twins
@@ -45,25 +45,22 @@ Azure Digital Twins instances can emit [twin update events](how-to-interpret-eve
The Azure Digital Twins [*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md) walks through a scenario where a thermometer is used to update a temperature attribute on a digital twin representing a room. This pattern relies on the twin updates, rather than forwarding telemetry from an IoT device, which gives you the flexibility to change the underlying data source without needing to update your Time Series Insights logic.
-1. First, create an event hub namespace, which will receive events from your Azure Digital Twins instance. You can either use the Azure CLI instructions below, or use the Azure portal: [*Quickstart: Create an event hub using Azure portal*](../event-hubs/event-hubs-create.md).
+1. First, create an event hub namespace that will receive events from your Azure Digital Twins instance. You can either use the Azure CLI instructions below, or use the Azure portal: [*Quickstart: Create an event hub using Azure portal*](../event-hubs/event-hubs-create.md). To see what regions support Event Hubs, visit [*Azure products available by region*](https://azure.microsoft.com/global-infrastructure/services/?products=event-hubs).
```azurecli-interactive
- # Create an Event Hubs namespace. Specify a name for the Event Hubs namespace.
- az eventhubs namespace create --name <name for your Event Hubs namespace> --resource-group <resource group name> -l <region, for example: East US>
+ az eventhubs namespace create --name <name for your Event Hubs namespace> --resource-group <resource group name> -l <region>
```
-2. Create an event hub within the namespace.
+2. Create an event hub within the namespace to receive twin change events. Specify a name for the event hub.
```azurecli-interactive
- # Create an event hub to receive twin change events. Specify a name for the event hub.
az eventhubs eventhub create --name <name for your Twins event hub> --resource-group <resource group name> --namespace-name <Event Hubs namespace from above> ```
-3. Create an [authorization rule](/cli/azure/eventhubs/eventhub/authorization-rule?view=azure-cli-latest&preserve-view=true#az-eventhubs-eventhub-authorization-rule-create) with send and receive permissions.
+3. Create an [authorization rule](/cli/azure/eventhubs/eventhub/authorization-rule?view=azure-cli-latest&preserve-view=true#az-eventhubs-eventhub-authorization-rule-create) with send and receive permissions. Specify a name for the rule.
```azurecli-interactive
- # Create an authorization rule. Specify a name for the rule.
- az eventhubs eventhub authorization-rule create --rights Listen Send --resource-group <resource group name> --namespace-name <Event Hubs namespace from above> --eventhub-name <Twins event hub name from above> --name <name for your Twins auth rule>
+ az eventhubs eventhub authorization-rule create --rights Listen Send --resource-group <resource group name> --namespace-name <Event Hubs namespace from above> --eventhub-name <Twins event hub name from above> --name <name for your Twins auth rule>
``` 4. Create an Azure Digital Twins [endpoint](concepts-route-events.md#create-an-endpoint) that links your event hub to your Azure Digital Twins instance.
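The endpoint (and a route that sends twin update events to it) can also be scripted. The following is a minimal sketch using the `az dt` commands from the azure-iot CLI extension; the endpoint, route, and filter values are illustrative placeholders rather than the article's exact steps.

```azurecli-interactive
# Requires the azure-iot extension: az extension add --name azure-iot
az dt endpoint create eventhub --dt-name <your-Azure-Digital-Twins-instance> --endpoint-name <endpoint name> --eventhub <Twins event hub name from above> --eventhub-namespace <Event Hubs namespace from above> --eventhub-policy <Twins auth rule name from above> --eventhub-resource-group <resource group name>
az dt route create --dt-name <your-Azure-Digital-Twins-instance> --endpoint-name <endpoint name> --route-name <route name> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'"
```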
@@ -87,7 +84,7 @@ Before moving on, take note of your *Event Hubs namespace* and *resource group*,
## Create a function in Azure
-Next, you'll use Azure Functions to create an Event Hubs-triggered function inside a function app. You can use the function app created in the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md)), or your own.
+Next, you'll use Azure Functions to create an **Event Hubs-triggered function** inside a function app. You can use the function app created in the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md)), or your own.
This function will convert those twin update events from their original form as JSON Patch documents to JSON objects, containing only updated and added values from your twins.
@@ -111,15 +108,15 @@ To create the second event hub, you can either use the Azure CLI instructions be
1. Prepare your *Event Hubs namespace* and *resource group* name from earlier in this article
-2. Create a new event hub
+2. Create a new event hub. Specify a name for the event hub.
+ ```azurecli-interactive
- # Create an event hub. Specify a name for the event hub.
az eventhubs eventhub create --name <name for your TSI event hub> --resource-group <resource group name from earlier> --namespace-name <Event Hubs namespace from earlier> ```
-3. Create an [authorization rule](/cli/azure/eventhubs/eventhub/authorization-rule?view=azure-cli-latest&preserve-view=true#az-eventhubs-eventhub-authorization-rule-create) with send and receive permissions
+3. Create an [authorization rule](/cli/azure/eventhubs/eventhub/authorization-rule?view=azure-cli-latest&preserve-view=true#az-eventhubs-eventhub-authorization-rule-create) with send and receive permissions. Specify a name for the rule.
+ ```azurecli-interactive
- # Create an authorization rule. Specify a name for the rule.
- az eventhubs eventhub authorization-rule create --rights Listen Send --resource-group <resource group name> --namespace-name <Event Hubs namespace from earlier> --eventhub-name <TSI event hub name from above> --name <name for your TSI auth rule>
+ az eventhubs eventhub authorization-rule create --rights Listen Send --resource-group <resource group name> --namespace-name <Event Hubs namespace from earlier> --eventhub-name <TSI event hub name from above> --name <name for your TSI auth rule>
``` ## Configure your function
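Configuring the function typically requires the connection strings for the authorization rules created above. One way to retrieve a connection string from the CLI (a sketch; the names are the placeholders used earlier):

```azurecli-interactive
az eventhubs eventhub authorization-rule keys list --resource-group <resource group name> --namespace-name <Event Hubs namespace from earlier> --eventhub-name <TSI event hub name from above> --name <TSI auth rule name from above> --query primaryConnectionString --output tsv
```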
@@ -203,4 +200,4 @@ The digital twins are stored by default as a flat hierarchy in Time Series Insig
You can write custom logic to automatically provide this information using the model and graph data already stored in Azure Digital Twins. To read more about managing, upgrading, and retrieving information from the twins graph, see the following references: * [*How-to: Manage a digital twin*](./how-to-manage-twin.md)
-* [*How-to: Query the twin graph*](./how-to-query-graph.md)
+* [*How-to: Query the twin graph*](./how-to-query-graph.md)
\ No newline at end of file
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-graph.md
@@ -138,9 +138,12 @@ The snippet uses the [*Room.json*](https://github.com/Azure-Samples/digital-twin
Before you run the sample, do the following: 1. Download the model files, place them in your project, and replace the `<path-to>` placeholders in the code below to tell your program where to find them. 2. Replace the placeholder `<your-instance-hostname>` with your Azure Digital Twins instance's hostname.
-3. Add two dependencies to your project that will be needed to work with Azure Digital Twins. You can use the links below to navigate to the packages on NuGet, where you can find the console commands (including for .NET CLI) to add the latest version of each to your project.
- * [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
- * [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure.
+3. Add two dependencies to your project that will be needed to work with Azure Digital Twins. The first is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), and the second provides tools to help with authentication against Azure.
+
+ ```cmd/sh
+ dotnet add package Azure.DigitalTwins.Core
+ dotnet add package Azure.Identity
+ ```
You'll also need to set up local credentials if you want to run the sample directly. The next section walks through this. [!INCLUDE [Azure Digital Twins: local credentials prereq (outer)](../../includes/digital-twins-local-credentials-outer.md)]
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
@@ -228,9 +228,12 @@ The snippet uses the [Room.json](https://github.com/Azure-Samples/digital-twins-
Before you run the sample, do the following: 1. Download the model file, place it in your project, and replace the `<path-to>` placeholder in the code below to tell your program where to find it. 2. Replace the placeholder `<your-instance-hostname>` with your Azure Digital Twins instance's hostname.
-3. Add two dependencies to your project that will be needed to work with Azure Digital Twins. You can use the links below to navigate to the packages on NuGet, where you can find the console commands (including for .NET CLI) to add the latest version of each to your project.
- * [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
- * [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure.
+3. Add two dependencies to your project that will be needed to work with Azure Digital Twins. The first is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), and the second provides tools to help with authentication against Azure.
+
+ ```cmd/sh
+ dotnet add package Azure.DigitalTwins.Core
+ dotnet add package Azure.Identity
+ ```
You'll also need to set up local credentials if you want to run the sample directly. The next section walks through this. [!INCLUDE [Azure Digital Twins: local credentials prereq (outer)](../../includes/digital-twins-local-credentials-outer.md)]
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/tutorial-code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-code.md
@@ -58,9 +58,12 @@ This will create several files inside your directory, including one called *Prog
Keep the command window open, as you'll continue to use it throughout the tutorial.
-Next, **add two dependencies to your project** that will be needed to work with Azure Digital Twins. You can use the links below to navigate to the packages on NuGet, where you can find the console commands (including for .NET CLI) to add the latest version of each to your project.
-* [**Azure.DigitalTwins.Core**](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
-* [**Azure.Identity**](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure.
+Next, **add two dependencies to your project** that will be needed to work with Azure Digital Twins. The first is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), and the second provides tools to help with authentication against Azure.
+
+```cmd/sh
+dotnet add package Azure.DigitalTwins.Core
+dotnet add package Azure.Identity
+```
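If you want to confirm that both packages were added, one quick check (assuming the .NET CLI is on your path) is to list the project's package references:

```cmd/sh
dotnet list package
```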
## Get started with project code
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-faqs.md
@@ -252,7 +252,7 @@ If you advertise default routes, we force traffic to services offered over Micro
### Can virtual networks linked to the same ExpressRoute circuit talk to each other?
-Yes. Virtual machines deployed in virtual networks connected to the same ExpressRoute circuit can communicate with each other.
+Yes. Virtual machines deployed in virtual networks connected to the same ExpressRoute circuit can communicate with each other. We recommend setting up [virtual network peering](https://docs.microsoft.com/azure/virtual-network/virtual-network-peering-overview) to facilitate this communication.
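As a sketch, virtual network peering can be created from the Azure CLI; the names below are placeholders, and a matching peering must also be created in the reverse direction:

```azurecli-interactive
az network vnet peering create --name <peering-name> --resource-group <resource-group> --vnet-name <first-vnet-name> --remote-vnet <second-vnet-resource-id> --allow-vnet-access
```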
### Can I use site-to-site connectivity for virtual networks in conjunction with ExpressRoute?
@@ -424,4 +424,4 @@ Your existing circuit will continue advertising the prefixes for Microsoft 365.
### Does the ExpressRoute service store customer data?
-No.
\ No newline at end of file
+No.
expressroute https://docs.microsoft.com/en-us/azure/expressroute/plan-manage-cost https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/plan-manage-cost.md
@@ -72,9 +72,9 @@ When you create an ExpressRoute circuit, you might choose to create an ExpressRo
If you have an ExpressRoute gateway after deleting the ExpressRoute circuit, you'll still be charged for the cost until you delete it.
-### Using Monetary Credit
+### Using Azure Prepayment credit
-You can pay for ExpressRoute charges with your EA monetary commitment credit. However, you can't use EA monetary commitment credit to pay for charges for third-party products and services including those from the Azure Marketplace.
+You can pay for ExpressRoute charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products and services including those from the Azure Marketplace.
## Monitor costs
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/tutorial-nested-iot-edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/tutorial-nested-iot-edge.md
@@ -420,6 +420,7 @@ In the [Azure portal](https://ms.portal.azure.com/):
"env": { "REGISTRY_PROXY_REMOTEURL": { "value": "https://mcr.microsoft.com"
+ }
}, "status": "running", "restartPolicy": "always"
@@ -448,7 +449,7 @@ In the [Azure portal](https://ms.portal.azure.com/):
}, "runtime": { "settings": {
- "minDockerVersion": "v1.25",
+ "minDockerVersion": "v1.25"
}, "type": "docker" },
@@ -570,7 +571,7 @@ In the [Azure portal](https://ms.portal.azure.com/):
}, "runtime": { "settings": {
- "minDockerVersion": "v1.25",
+ "minDockerVersion": "v1.25"
}, "type": "docker" },
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-control-device-android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-control-device-android.md
@@ -65,7 +65,7 @@ A device must be registered with your IoT hub before it can connect. In this qui
**YourIoTHubName**: Replace this placeholder below with the name you choose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string \
+ az iot hub device-identity connection-string show \
--hub-name {YourIoTHubName} \ --device-id MyAndroidDevice \ --output table
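    # Note (hedged): the `connection-string show` syntax comes from the azure-iot CLI extension.
    # If the command isn't found, install or update the extension first:
    az extension add --name azure-iot
    az extension update --name azure-iot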
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-control-device-dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-control-device-dotnet.md
@@ -74,7 +74,7 @@ A device must be registered with your IoT hub before it can connect. In this qui
**YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string \
+ az iot hub device-identity connection-string show \
--hub-name {YourIoTHubName} \ --device-id MyDotnetDevice \ --output table
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-control-device-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-control-device-java.md
@@ -75,7 +75,7 @@ A device must be registered with your IoT hub before it can connect. In this qui
**YourIoTHubName**: Replace this placeholder below with the name you choose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string \
+ az iot hub device-identity connection-string show \
--hub-name {YourIoTHubName} \ --device-id MyJavaDevice \ --output table
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-control-device-node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-control-device-node.md
@@ -67,7 +67,7 @@ A device must be registered with your IoT hub before it can connect. In this qui
**YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string \
+ az iot hub device-identity connection-string show \
--hub-name {YourIoTHubName} \ --device-id MyNodeDevice \ --output table
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-control-device-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-control-device-python.md
@@ -60,7 +60,7 @@ A device must be registered with your IoT hub before it can connect. In this qui
**YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name {YourIoTHubName} --device-id MyPythonDevice --output table
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyPythonDevice --output table
``` Make a note of the device connection string, which looks like:
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-device-streams-echo-c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-echo-c.md
@@ -127,7 +127,7 @@ You must register a device with your IoT hub before it can connect. In this sect
> Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name {YourIoTHubName} --device-id MyDevice --output table
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyDevice --output table
``` Note the returned device connection string for later use in this quickstart. It looks like the following example:
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-device-streams-echo-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-echo-csharp.md
@@ -69,7 +69,7 @@ A device must be registered with your IoT hub before it can connect. In this sec
> Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name {YourIoTHubName} --device-id MyDevice --output table
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyDevice --output table
``` Note the returned device connection string for later use in this quickstart. It looks like the following example:
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-device-streams-echo-nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-echo-nodejs.md
@@ -74,7 +74,7 @@ A device must be registered with your IoT hub before it can connect. In this qui
**YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub show-connection-string --policy-name service --name {YourIoTHubName} --output table
+ az iot hub connection-string show --policy-name service --name {YourIoTHubName} --output table
``` Note the returned service connection string for later use in this quickstart. It looks like the following example:
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-device-streams-proxy-c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-proxy-c.md
@@ -133,7 +133,7 @@ A device must be registered with your IoT hub before it can connect. In this sec
> Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name {YourIoTHubName} --device-id MyDevice --output table
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyDevice --output table
``` Note the returned device connection string for later use in this quickstart. It looks like the following example:
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-device-streams-proxy-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-proxy-csharp.md
@@ -91,7 +91,7 @@ A device must be registered with your IoT hub before it can connect. In this qui
> Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity show-connection-string --hub-name {YourIoTHubName} --device-id MyDevice --output table
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyDevice --output table
``` Note the returned device connection string for later use in this quickstart. It looks like the following example:
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/soft-delete-change https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/soft-delete-change.md
@@ -1,6 +1,6 @@
---
-title: Enable soft delete on all Azure Key Vaults | Microsoft Docs
-description: Use this document to adopt soft-delete for all key vaults.
+title: Enable soft-delete on all key vault objects - Azure Key Vault | Microsoft Docs
+description: Use this document to adopt soft-delete for all key vaults and to make application and administration changes to avoid conflict errors.
services: key-vault author: ShaneBala-keyvault manager: ravijan
@@ -17,110 +17,109 @@ ms.author: sudbalas
# Soft-delete will be enabled on all key vaults > [!WARNING]
-> **Breaking Change**: The ability to opt out of soft-delete will be deprecated soon. Azure Key Vault users and administrators should enable soft-delete on their key vaults immediately.
+> Breaking change: the ability to opt out of soft-delete will be deprecated soon. Azure Key Vault users and administrators should enable soft-delete on their key vaults immediately.
>
-> For Managed HSM, soft-delete is enabled by default and cannot be disabled.
+> For Azure Key Vault Managed HSM, soft-delete is enabled by default and can't be disabled.
-When a secret is deleted from a key vault without soft-delete protection, the secret is permanently deleted. Users can currently opt out of soft-delete during key vault creation but, to protect your secrets from accidental or malicious deletion by a user, Microsoft will soon enable soft-delete protection on **all** key vaults, and users will no longer have the option to opt out or turn soft-delete off.
+When a secret is deleted from a key vault without soft-delete protection, the secret is permanently deleted. Users can currently opt out of soft-delete during key vault creation. However, Microsoft will soon enable soft-delete protection on all key vaults to protect secrets from accidental or malicious deletion by a user. Users will no longer be able to opt out of or turn off soft-delete.
-:::image type="content" source="../media/softdeletediagram.png" alt-text="<alt text>":::
+:::image type="content" source="../media/softdeletediagram.png" alt-text="Diagram showing how a key vault is deleted with soft-delete protection versus without soft-delete protection.":::
For full details on the soft-delete functionality, see [Azure Key Vault soft-delete overview](soft-delete-overview.md).
-## Can my application work with soft delete enabled?
+## Can my application work with soft-delete enabled?
> [!Important]
-> **Please review the following information carefully before turning on soft delete for your key vaults**
+> Review the following information carefully before turning on soft-delete for your key vaults.
-Key Vault names are globally unique. The names of secrets stored in a key vault are also unique. You will not be able to reuse the name of a key vault or key vault object that exists in the soft deleted state.
+Key vault names are globally unique. The names of secrets stored in a key vault are also unique. You won't be able to reuse the name of a key vault or key vault object that exists in the soft-deleted state.
-**Example #1** If your application programmatically creates a key vault named 'Vault A' and later deletes 'Vault A'. The key vault will be moved to the soft deleted state. Your application will not be able to recreate another key vault named 'Vault A' until the key vault is purged from the soft deleted state.
+For example, if your application programmatically creates a key vault named "Vault A" and later deletes "Vault A," the key vault will be moved to the soft-deleted state. Your application won't be able to re-create another key vault named "Vault A" until the key vault is purged from the soft-deleted state.
-**Example #2** If your application creates a key named `test key` in key vault A, and later deletes the key from vault A, your application will not be able to create a new key named `test key` in key vault A until the `test key` object is purged from the soft deleted state.
+Also, if your application creates a key named `test key` in "Vault A" and later deletes that key, your application won't be able to create a new key named `test key` in "Vault A" until the `test key` object is purged from the soft-deleted state.
-This may result in conflict errors if you attempt to delete a key vault object and recreate it with the same name without purging it from the soft-deleted state first. This may cause your applications or automation to fail. Consult your dev team prior to making the required application and administration changes below.
+Attempting to delete a key vault object and re-create it with the same name without purging it from the soft-deleted state first can cause conflict errors. These errors might cause your applications or automation to fail. Consult your dev team before you make the following required application and administration changes.
### Application changes
-If your application assumes that soft-delete is not enabled and expects that deleted secret or key vault names are available for immediate reuse, your application logic will need to make the following changes in order to adopt this change.
+If your application assumes that soft-delete isn't enabled and expects that deleted secret or key vault names are available for immediate reuse, you'll need to make the following changes to your application logic.
-1. Delete the original key vault or secret
-2. Purge the key vault or secret in the soft-deleted state.
-3. Wait – immediate recreate may result in a conflict.
-4. Re-create the key vault with the same name.
-5. Implement re-try if the create operation still results in a name conflict error, it may take up to 10 minutes for DNS records to update in the worst-case scenario.
+1. Delete the original key vault or secret.
+1. Purge the key vault or secret in the soft-deleted state.
+1. Wait for the purge to complete. Immediate re-creation might result in a conflict.
+1. Re-create the key vault with the same name.
+1. If the create operation still results in a name conflict error, retry the create operation after a short wait. Azure DNS records might take up to 10 minutes to update in the worst-case scenario.
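The preceding steps map roughly to the following Azure CLI sketch (placeholder names; how long to wait between purge and re-create depends on your environment):

```azurecli-interactive
az keyvault delete --name <vault-name> --resource-group <resource-group>
az keyvault purge --name <vault-name> --location <region>
# Wait for the purge to complete, then re-create the vault with the same name.
az keyvault create --name <vault-name> --resource-group <resource-group> --location <region>
```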
### Administration changes
-Security principals that need access to permanently delete secrets must be granted additional access policy permissions to purge these secrets and the key vault.
+Security principals that need access to permanently delete secrets must be granted more access policy permissions to purge these secrets and the key vault.
-If you have an Azure Policy on your key vaults that mandates that soft-delete is turned off, this policy will need to be disabled. You may need to escalate this issue to an administrator that controls Azure Policies applied to your environment. If this policy is not disabled, you may lose the ability to create new key vaults in the scope of the applied policy.
+Disable any Azure policy on your key vaults that mandates that soft-delete is turned off. You might need to escalate this issue to an administrator who controls Azure policies applied to your environment. If this policy isn't disabled, you might lose the ability to create new key vaults in the scope of the applied policy.
-If your organization is subject to legal compliance requirements and cannot allow deleted key vaults and secrets to remain in a recoverable state, for an extended period of time, you will have to adjust the retention period of soft-delete, which is configurable between 7 – 90 days, to meet your organization's standards.
+If your organization is subject to legal compliance requirements and can't allow deleted key vaults and secrets to remain in a recoverable state for an extended period of time, you'll have to adjust the retention period of soft-delete to meet your organization's standards. You can configure the retention period to last from 7 to 90 days.
## Procedures ### Audit your key vaults to check if soft-delete is enabled
-1. Log in to the Azure portal.
-2. Search for "Azure Policy".
-3. Select "Definitions".
-4. Under Category, select "Key Vault" in the filter.
-5. Select the "Key Vault should have soft delete enabled" policy.
-6. Click "Assign".
-7. Set the scope to your subscription.
-8. Make sure the effect of the policy is set to "Audit".
-9. Select "Review + Create".
-10. In can take up to 24 hours for a full scan of your environment to complete.
-11. In the Azure Policy Blade, click "Compliance".
-12. Select the policy you applied.
-
-You should now be able to filter and see which of your key vaults have soft-delete enabled (compliant resources) and which key vaults do not have soft-delete enabled (non-compliant resources).
-
-### Turn on Soft Delete for an existing key vault
-
-1. Log in to the Azure portal.
-2. Search for your Key Vault.
-3. Select "Properties" under settings.
-4. Under Soft-Delete, select the radio button corresponding to "Enable recovery of this vault and its objects".
-5. Set the retention period for soft-delete.
-6. Select "Save".
+1. Sign in to the Azure portal.
+1. Search for **Azure Policy**.
+1. Select **Definitions**.
+1. Under **Category**, select **Key Vault** in the filter.
+1. Select the **Key Vault should have soft-delete enabled** policy.
+1. Select **Assign**.
+1. Set the scope to your subscription.
+1. Make sure the effect of the policy is set to **Audit**.
+1. Select **Review + Create**. A full scan of your environment might take up to 24 hours to complete.
+1. In the **Azure Policy** pane, select **Compliance**.
+1. Select the policy you applied.
+
+You can now filter and see which key vaults have soft-delete enabled (compliant resources) and which key vaults don't have soft-delete enabled (non-compliant resources).
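You can also spot-check a single vault from the Azure CLI. For example, the following query (a sketch) returns whether soft-delete is enabled:

```azurecli-interactive
az keyvault show --name <vault-name> --query "properties.enableSoftDelete"
```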
+
+### Turn on soft-delete for an existing key vault
+
+1. Sign in to the Azure portal.
+1. Search for your key vault.
+1. Select **Properties** under **Settings**.
+1. Under **Soft-Delete**, select the **Enable recovery of this vault and its objects** option.
+1. Set the retention period for soft-delete.
+1. Select **Save**.
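If you prefer the command line, CLI versions available around the time of this change could also turn the property on; verify that the parameter is still supported in your CLI version before relying on it:

```azurecli-interactive
az keyvault update --name <vault-name> --enable-soft-delete true
```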
### Grant purge access policy permissions to a security principal
-1. Log in to the Azure portal.
-2. Search for your Key Vault.
-3. Select "Access Policies" under settings.
-4. Select the service principal you would like to grant access to.
-5. For each dropdown under key, secret, and certificate permissions scroll down to "Privileged Operations" and select the "Purge" permission.
+1. Sign in to the Azure portal.
+1. Search for your key vault.
+1. Select **Access Policies** under **Settings**.
+1. Select the service principal you'd like to grant access to.
+1. Move through each drop-down menu under **Key**, **Secret**, and **Certificate permissions** until you see **Privileged Operations**. Select the **Purge** permission.
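A CLI sketch of the same grant is shown below. Note that `az keyvault set-policy` replaces the listed permission categories for that principal, so include any permissions the principal already needs; the user principal name is a placeholder.

```azurecli-interactive
az keyvault set-policy --name <vault-name> --upn <user@contoso.com> --key-permissions purge --secret-permissions purge --certificate-permissions purge
```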
## Frequently asked questions ### Does this change affect me?
-If you already have soft-delete turned on or if you do not delete and recreate key vault objects with the same name, you likely will not notice any change in the behavior of key vault.
+If you already have soft-delete turned on or if you don't delete and re-create key vault objects with the same name, you likely won't notice any change in the behavior of the key vault.
-If you have an application that deletes and recreates key vault objects with the same naming conventions frequently, you will have to make changes in your application logic to maintain expected behavior. Please see the "How do I respond to breaking changes?" section above.
+If you have an application that deletes and re-creates key vault objects with the same naming conventions frequently, you'll have to make changes in your application logic to maintain expected behavior. See the [Application changes](#application-changes) section in this article.
### How do I benefit from this change?
-Soft delete protection will provide your organization with an additional layer of protection against accidental or malicious deletion. As a key vault administrator, you can restrict access to both recover permissions and purge permissions.
+Soft-delete protection provides your organization with another layer of protection against accidental or malicious deletion. As a key vault administrator, you can restrict access to both recover permissions and purge permissions.
-If a user accidentally deletes a key vault or secret, you can grant them access permissions to recover the secret themselves without creating a risk that they permanently delete the secret or key vault. This self-serve process will minimize down-time in your environment and guarantee the availability of your secrets.
+If a user accidentally deletes a key vault or secret, you can grant them access permissions to recover the secret themselves without creating the risk that they permanently delete the secret or key vault. This self-serve process will minimize downtime in your environment and guarantee the availability of your secrets.
### How do I find out if I need to take action?
-Please follow the steps above in the section titled "Procedure to Audit Your Key Vaults to Check If Soft-Delete Is On". Any key vault that does not have soft-delete turned on will be affected by this change. Additional tools to help audit will be available soon, and this document will be updated.
+Follow the steps in the [Audit your key vaults to check if soft-delete is enabled](#audit-your-key-vaults-to-check-if-soft-delete-is-enabled) section in this article. This change will affect any key vault that doesn't have soft-delete turned on.
### What action do I need to take?
-Make sure that you do not have to make changes to your application logic. Once you have confirmed that, turn on soft-delete on all your key vaults.
+After you've confirmed that you don't have to make changes to your application logic, turn on soft-delete on all of your key vaults.
-### By when do I need to take action?
+### When do I need to take action?
-To make sure that your applications are not affected, turn on soft-delete on your key vaults as soon as possible.
+To make sure that your applications aren't affected, turn on soft-delete on your key vaults as soon as possible.
## Next steps -- Contact us with any questions regarding this change at [akvsoftdelete@microsoft.com](mailto:akvsoftdelete@microsoft.com).-- Read the [Soft-delete overview](soft-delete-overview.md)
+- Contact us with any questions about this change at [akvsoftdelete@microsoft.com](mailto:akvsoftdelete@microsoft.com).
+- Read the [Soft-delete overview](soft-delete-overview.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-plan-manage-cost https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-plan-manage-cost.md
@@ -76,9 +76,9 @@ ws.delete(delete_dependent_resources=True)
If you create Azure Kubernetes Service (AKS) in your workspace, or if you attach any compute resources to your workspace you must delete them separately in [Azure portal](https://portal.azure.com).
-### Using Monetary Credit with Azure Machine Learning
+### Using Azure Prepayment credit with Azure Machine Learning
-You can pay for Azure Machine Learning charges with your EA monetary commitment credit. However, you can't use EA monetary commitment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+You can pay for Azure Machine Learning charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment to pay for charges for third party products and services including those from the Azure Marketplace.
## Create budgets
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-auto-train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
@@ -60,7 +60,7 @@ automl_config = AutoMLConfig(task = "classification")
Automated machine learning supports data that resides on your local desktop or in the cloud such as Azure Blob Storage. The data can be read into a **Pandas DataFrame** or an **Azure Machine Learning TabularDataset**. [Learn more about datasets](how-to-create-register-datasets.md).
-Requirements for training data:
+Requirements for training data in machine learning:
- Data must be in tabular form. - The value to predict, target column, must be in the data.
@@ -91,9 +91,9 @@ dataset = Dataset.Tabular.from_delimited_files(data)
## Training, validation, and test data
-You can specify separate **training and validation sets** directly in the `AutoMLConfig` constructor. Learn more about [how to configure data splits and cross validation](how-to-configure-cross-validation-data-splits.md) for your AutoML experiments.
+You can specify separate **training data and validation data sets** directly in the `AutoMLConfig` constructor. Learn more about [how to configure data splits and cross validation](how-to-configure-cross-validation-data-splits.md) for your AutoML experiments.
-If you do not explicitly specify a `validation_data` or `n_cross_validation` parameter, AutoML applies default techniques to determine how validation is performed. This determination depends on the number of rows in the dataset assigned to your `training_data` parameter.
+If you do not explicitly specify a `validation_data` or `n_cross_validation` parameter, automated ML applies default techniques to determine how validation is performed. This determination depends on the number of rows in the dataset assigned to your `training_data` parameter.
|Training&nbsp;data&nbsp;size| Validation technique | |---|-----|
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-cross-validation-data-splits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-cross-validation-data-splits.md
@@ -16,16 +16,16 @@ ms.date: 06/16/2020
# Configure data splits and cross-validation in automated machine learning
-In this article, you learn the different options for configuring training/validation data splits and cross-validation for your automated machine learning, automated ML, experiments.
+In this article, you learn the different options for configuring training data and validation data splits along with cross-validation settings for your automated machine learning, automated ML, experiments.
-In Azure Machine Learning, when you use automated ML to build multiple ML models, each child run needs to validate the related model by calculating the quality metrics for that model, such as accuracy or AUC weighted. These metrics are calculated by comparing the predictions made with each model with real labels from past observations in the validation data.
+In Azure Machine Learning, when you use automated ML to build multiple ML models, each child run needs to validate the related model by calculating the quality metrics for that model, such as accuracy or AUC weighted. These metrics are calculated by comparing the predictions made with each model with real labels from past observations in the validation data. [Learn more about how metrics are calculated based on validation type](#metric-calculation-for-cross-validation-in-machine-learning).
Automated ML experiments perform model validation automatically. The following sections describe how you can further customize validation settings with the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/?preserve-view=true&view=azure-ml-py). For a low-code or no-code experience, see [Create your automated machine learning experiments in Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md). > [!NOTE]
-> The studio currently supports training/validation data splits and cross-validation options, but it does not support specifying individual data files for your validation set.
+> The studio currently supports training and validation data splits as well as cross-validation options, but it does not support specifying individual data files for your validation set.
## Prerequisites
@@ -37,11 +37,11 @@ For this article you need,
* An understanding of train/validation data splits and cross-validation as machine learning concepts. For a high-level explanation,
- * [About Train, Validation and Test Sets in Machine Learning](https://towardsdatascience.com/train-validation-and-test-sets-72cb40cba9e7)
+ * [About training, validation and test data in machine learning](https://towardsdatascience.com/train-validation-and-test-sets-72cb40cba9e7)
- * [Understand Cross Validation in machine learning](https://towardsdatascience.com/understanding-cross-validation-419dbd47e9bd)
+ * [Understand Cross Validation in machine learning](https://towardsdatascience.com/understanding-cross-validation-419dbd47e9bd)
-## Default data splits and cross-validation
+## Default data splits and cross-validation in machine learning
Use the [AutoMLConfig](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?preserve-view=true&view=azure-ml-py) object to define your experiment and training settings. In the following code snippet, notice that only the required parameters are defined, that is, the parameters for `n_cross_validation` or `validation_data` are **not** included.
@@ -58,7 +58,7 @@ automl_config = AutoMLConfig(compute_target = aml_remote_compute,
) ```
-If you do not explicitly specify either a `validation_data` or `n_cross_validation` parameter, AutoML applies default techniques depending on the number of rows in the single dataset `training_data` provided:
+If you do not explicitly specify either a `validation_data` or `n_cross_validation` parameter, automated ML applies default techniques depending on the number of rows provided in the single dataset `training_data`:
|Training&nbsp;data&nbsp;size| Validation technique | |---|-----|
@@ -67,7 +67,7 @@ If you do not explicitly specify either a `validation_data` or `n_cross_validati
## Provide validation data
-In this case, you can either start with a single data file and split it into training and validation sets or you can provide a separate data file for the validation set. Either way, the `validation_data` parameter in your `AutoMLConfig` object assigns which data to use as your validation set. This parameter only accepts data sets in the form of an [Azure Machine Learning dataset](how-to-create-register-datasets.md) or pandas dataframe.
+In this case, you can either start with a single data file and split it into training data and validation data sets or you can provide a separate data file for the validation set. Either way, the `validation_data` parameter in your `AutoMLConfig` object assigns which data to use as your validation set. This parameter only accepts data sets in the form of an [Azure Machine Learning dataset](how-to-create-register-datasets.md) or pandas dataframe.
The following code example explicitly defines which portion of the provided data in `dataset` to use for training and validation.
@@ -152,8 +152,15 @@ automl_config = AutoMLConfig(compute_target = aml_remote_compute,
> [!NOTE] > To use `cv_split_column_names` with `training_data` and `label_column_name`, please upgrade your Azure Machine Learning Python SDK to version 1.6.0 or later. For previous SDK versions, please refer to using `cv_splits_indices`, but note that it is used with `X` and `y` dataset input only. +
+## Metric calculation for cross validation in machine learning
+
+When either k-fold or Monte Carlo cross validation is used, metrics are computed on each validation fold and then aggregated. The aggregation operation is an average for scalar metrics and a sum for charts. Metrics computed during cross validation are based on all folds and therefore all samples from the training set. [Learn more about metrics in automated machine learning](how-to-understand-automated-ml.md).
+
+When either a custom validation set or an automatically selected validation set is used, model evaluation metrics are computed from only that validation set, not the training data.
+ ## Next steps * [Prevent imbalanced data and overfitting](concept-manage-ml-pitfalls.md). * [Tutorial: Use automated machine learning to predict taxi fares - Split data section](tutorial-auto-train-models.md#split-the-data-into-train-and-test-sets).
-* How to [Auto-train a time-series forecast model](how-to-auto-train-forecast.md).
\ No newline at end of file
+* How to [Auto-train a time-series forecast model](how-to-auto-train-forecast.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-run-jupyter-notebooks.md
@@ -1,7 +1,7 @@
---
-title: How to run Jupyter Notebooks in your workspace
+title: How to run Jupyter notebooks in your workspace
titleSuffix: Azure Machine Learning
-description: Learn how run a Jupyter Notebook without leaving your workspace in Azure Machine Learning studio.
+description: Learn how to run a Jupyter notebook without leaving your workspace in Azure Machine Learning studio.
services: machine-learning author: abeomor ms.author: osomorog
@@ -10,21 +10,13 @@ ms.service: machine-learning
ms.subservice: core ms.topic: conceptual ms.custom: how-to
-ms.date: 06/27/2020
+ms.date: 01/19/2021
# As a data scientist, I want to run Jupyter notebooks in my workspace in Azure Machine Learning studio --- # How to run Jupyter Notebooks in your workspace -
-Learn how to run your Jupyter Notebooks directly in your workspace in Azure Machine Learning studio. While you can launch [Jupyter](https://jupyter.org/) or [JupyterLab](https://jupyterlab.readthedocs.io), you can also edit and run your notebooks without leaving the workspace.
-
-See how you can:
-
-* Create Jupyter Notebooks in your workspace
-* Run an experiment from a notebook
-* Change the notebook environment
-* Find details of the compute instances used to run your notebooks
+Learn how to run your Jupyter notebooks directly in your workspace in Azure Machine Learning studio. While you can launch [Jupyter](https://jupyter.org/) or [JupyterLab](https://jupyterlab.readthedocs.io), you can also edit and run your notebooks without leaving the workspace.
## Prerequisites
@@ -44,7 +36,7 @@ To create a new notebook:
:::image type="content" source="media/how-to-run-jupyter-notebooks/create-new-file.png" alt-text="Create new file"::: 1. Name the file.
-1. For Jupyter Notebook Files, select **Notebook** as the file type.
+1. For Jupyter notebook files, select **Notebook** as the file type.
1. Select a file directory. 1. Select **Create**.
@@ -101,9 +93,9 @@ To edit a notebook, open any notebook located in the **User files** section of y
You can edit the notebook without connecting to a compute instance. When you want to run the cells in the notebook, select or create a compute instance. If you select a stopped compute instance, it will automatically start when you run the first cell.
-When a compute instance is running, you can also use code completion, powered by [Intellisense](https://code.visualstudio.com/docs/editor/intellisense), in any Python Notebook.
+When a compute instance is running, you can also use code completion, powered by [Intellisense](https://code.visualstudio.com/docs/editor/intellisense), in any Python notebook.
-You can also launch Jupyter or JupyterLab from the Notebook toolbar. Azure Machine Learning does not provide updates and fix bugs from Jupyter or JupyterLab as they are Open Source products outside of the boundary of Microsoft Support.
+You can also launch Jupyter or JupyterLab from the notebook toolbar. Azure Machine Learning doesn't provide updates or bug fixes for Jupyter or JupyterLab because they're open-source products outside the boundary of Microsoft support.
### Focus mode
@@ -150,18 +142,6 @@ Every notebook is autosaved every 30 seconds. Autosave updates only th
Select **Checkpoints** in the notebook menu to create a named checkpoint and to revert the notebook to a saved checkpoint. -
-### Useful keyboard shortcuts
-
-|Keyboard |Action |
-|---------|---------|
-|Shift+Enter | Run a cell |
-|Ctrl+Space | Activate IntelliSense |
-|Ctrl+M(Windows) | Enable/disable tab trapping in notebook. |
-|Ctrl+Shift+M(Mac & Linux) | Enable/disable tab trapping in notebook. |
-|Tab (when tab trap enabled) | Add a '\t' character (indent)
-|Tab (when tab trap disabled) | Change focus to next focusable item (delete cell button, run button, etc.)
- ## Delete a notebook You *can't* delete the **Samples** notebooks. These notebooks are part of the studio and are updated each time a new SDK is published.
@@ -169,27 +149,45 @@ You *can't* delete the **Samples** notebooks. These notebooks are part of the s
You *can* delete **User files** notebooks in any of these ways: * In the studio, select the **...** at the end of a folder or file. Make sure to use a supported browser (Microsoft Edge, Chrome, or Firefox).
-* From any Notebook toolbar, select [**Open terminal**](#terminal) to access the terminal window for the compute instance.
+* From any notebook toolbar, select [**Open terminal**](#terminal) to access the terminal window for the compute instance.
* In either Jupyter or JupyterLab with their tools.
-## Run an experiment
+## Run a notebook or Python script
-To run an experiment from a Notebook, you first connect to a running [compute instance](concept-compute-instance.md). If you don't have a compute instance, use these steps to create one:
+To run a notebook or a Python script, you first connect to a running [compute instance](concept-compute-instance.md). If you don't have a compute instance, use these steps to create one:
-1. Select **+** in the Notebook toolbar.
+1. Select **+** in the notebook or script toolbar.
2. Name the Compute and choose a **Virtual Machine Size**. 3. Select **Create**.
-4. The compute instance is connected to the Notebook automatically and you can now run your cells.
+4. The compute instance is connected to the file automatically. You can now run the notebook cells or the Python script using the tool to the left of the compute instance.
Only you can see and use the compute instances you create. Your **User files** are stored separately from the VM and are shared among all compute instances in the workspace. ### View logs and output
-Use [Notebook widgets](/python/api/azureml-widgets/azureml.widgets?preserve-view=true&view=azure-ml-py) to view the progress of the run and logs. A widget is asynchronous and provides updates until training finishes. Azure Machine Learning widgets are also supported in Jupyter and JupterLab.
+Use [notebook widgets](/python/api/azureml-widgets/azureml.widgets?preserve-view=true&view=azure-ml-py) to view the progress of the run and logs. A widget is asynchronous and provides updates until training finishes. Azure Machine Learning widgets are also supported in Jupyter and JupyterLab.
+
+:::image type="content" source="media/how-to-run-jupyter-notebooks/jupyter-widget.png" alt-text="Screenshot: Jupyter notebook widget ":::
+
+## Explore variables in the notebook
+
+On the notebook toolbar, use the **Variable explorer** tool to show the name, type, length, and sample values for all variables that have been created in your notebook.
+
+:::image type="content" source="media/how-to-run-jupyter-notebooks/variable-explorer.png" alt-text="Screenshot: Variable explorer tool":::
+
+Select the tool to show the variable explorer window.
+
+:::image type="content" source="media/how-to-run-jupyter-notebooks/variable-explorer-window.png" alt-text="Screenshot: Variable explorer window":::
+
+## Navigate with a TOC
+
+On the notebook toolbar, use the **Table of contents** tool to display or hide the table of contents. Start a markdown cell with a heading to add it to the table of contents. Click on an entry in the table to scroll to that cell in the notebook.
+
+:::image type="content" source="media/how-to-run-jupyter-notebooks/table-of-contents.png" alt-text="Screenshot: Table of contents in the notebook":::
## Change the notebook environment
-The Notebook toolbar allows you to change the environment on which your Notebook runs.
+The notebook toolbar allows you to change the environment on which your notebook runs.
These actions will not change the notebook state or the values of any variables in the notebook:
@@ -210,9 +208,9 @@ These actions will reset the notebook state and will reset all variables in the
### Add new kernels
-The Notebook will automatically find all Jupyter kernels installed on the connected compute instance. To add a kernel to the compute instance:
+The notebook will automatically find all Jupyter kernels installed on the connected compute instance. To add a kernel to the compute instance:
-1. Select [**Open terminal**](#terminal) in the Notebook toolbar.
+1. Select [**Open terminal**](#terminal) in the notebook toolbar.
1. Use the terminal window to create a new environment. For example, the code below creates `newenv`: ```shell conda create -y --name newenv
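   # A hedged continuation (assuming the new environment is named newenv): install ipykernel in it
   # and register it so it appears in the notebook's kernel list.
   conda activate newenv
   pip install ipykernel
   python -m ipykernel install --user --name newenv --display-name "Python (newenv)"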
@@ -254,7 +252,90 @@ An indicator next to the **Kernel** dropdown shows its status.
| Green |Kernel connected, idle, busy| | Gray |Kernel not connected |
-## Find compute details
+## Shortcut keys
+Similar to Jupyter notebooks, Azure Machine Learning studio notebooks have a modal user interface. The keyboard does different things depending on which mode the notebook cell is in. Azure Machine Learning studio notebooks support the following two modes for a given code cell: command mode and edit mode.
+
+### Command mode shortcuts
+
+A cell is in command mode when there is no text cursor prompting you to type. When a cell is in command mode, you can edit the notebook as a whole but not type into individual cells. Enter command mode by pressing `ESC` or using the mouse to select outside of a cell's editor area. The left border of the active cell is blue and solid, and its **Run** button is blue.
+
+ :::image type="content" source="media/how-to-run-jupyter-notebooks/command-mode.png" alt-text="Notebook cell in command mode ":::
+
+| Shortcut | Description |
+| ----------------------------- | ------------------------------------|
+| Enter | Enter edit mode |
+| Shift + Enter | Run cell, select below |
+| Control/Command + Enter | Run cell |
+| Alt + Enter | Run cell, insert code cell below |
+| Control/Command + Alt + Enter | Run cell, insert markdown cell below|
+| Alt + R | Run all |
+| Y | Convert cell to code |
+| M | Convert cell to markdown |
+| Up/K | Select cell above |
+| Down/J | Select cell below |
+| A | Insert code cell above |
+| B | Insert code cell below |
+| Control/Command + Shift + A | Insert markdown cell above |
+| Control/Command + Shift + B | Insert markdown cell below |
+| X | Cut selected cell |
+| C | Copy selected cell |
+| Shift + V | Paste selected cell above |
+| V | Paste selected cell below |
+| D D | Delete selected cell|
+| O | Toggle output |
+| Shift + O | Toggle output scrolling |
+| I I | Interrupt kernel |
+| 0 0 | Restart kernel |
+| Shift + Space | Scroll up |
+| Space | Scroll down|
+| Tab | Change focus to next focusable item (when tab trap disabled)|
+| Control/Command + S | Save notebook |
+| 1 | Change to h1|
+| 2 | Change to h2|
+| 3 | Change to h3|
+| 4 | Change to h4 |
+| 5 | Change to h5 |
+| 6 | Change to h6 |
+
+### Edit mode shortcuts
+
+Edit mode is indicated by a text cursor prompting you to type in the editor area. When a cell is in edit mode, you can type into the cell. Enter edit mode by pressing `Enter` or using the mouse to select a cell's editor area. The left border of the active cell is green and hatched, and its **Run** button is green. You also see the cursor prompt in the cell in edit mode.
+
+ :::image type="content" source="media/how-to-run-jupyter-notebooks/edit-mode.png" alt-text="Notebook cell in edit mode":::
+
+Using the following keystroke shortcuts, you can more easily navigate and run code in Azure Machine Learning notebooks when in edit mode.
+
+| Shortcut | Description|
+| ----------------------------- | ----------------------------------------------- |
+| Escape | Enter command mode|
+| Control/Command + Space | Activate IntelliSense |
+| Shift + Enter | Run cell, select below |
+| Control/Command + Enter | Run cell |
+| Alt + Enter | Run cell, insert code cell below |
+| Control/Command + Alt + Enter | Run cell, insert markdown cell below |
+| Alt + R | Run all cells |
+| Up | Move cursor up or previous cell |
+| Down | Move cursor down or next cell |
+| Control/Command + S | Save notebook |
+| Control/Command + Up | Go to cell start |
+| Control/Command + Down | Go to cell end |
+| Tab | Code completion or indent (if tab trap enabled) |
+| Control/Command + M | Enable/disable tab trap |
+| Control/Command + ] | Indent |
+| Control/Command + [ | Dedent |
+| Control/Command + A | Select all|
+| Control/Command + Z | Undo |
+| Control/Command + Shift + Z | Redo |
+| Control/Command + Y | Redo |
+| Control/Command + Home | Go to cell start|
+| Control/Command + End | Go to cell end |
+| Control/Command + Left | Go one word left |
+| Control/Command + Right | Go one word right |
+| Control/Command + Backspace | Delete word before |
+| Control/Command + Delete | Delete word after |
+| Control/Command + / | Toggle comment on current line |
+
+## Find compute details
Find details about your compute instances on the **Compute** page in [studio](https://ml.azure.com).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-train-with-custom-image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-custom-image.md
@@ -97,6 +97,12 @@ fastai_env.docker.base_image = None
fastai_env.docker.base_dockerfile = "./Dockerfile" ```
+>[!IMPORTANT]
+> Azure Machine Learning only supports Docker images that provide the following software:
+> * Ubuntu 16.04 or greater.
+> * Conda 4.5.# or greater.
+> * Python 3.5+.
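As a quick sanity check before building, you can inspect a candidate base image locally. This sketch assumes Docker is installed and that `conda` and `python` are on the image's default path:

```cmd/sh
docker run --rm <your-base-image> /bin/bash -c "cat /etc/os-release && conda --version && python --version"
```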
+ For more information about creating and managing Azure Machine Learning environments, see [Create and use software environments](how-to-use-environments.md). ### Create or attach a compute target
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-troubleshoot-environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-environments.md
@@ -12,38 +12,40 @@ ms.topic: troubleshooting
ms.custom: devx-track-python --- # Troubleshoot environment image builds+ Learn how to troubleshoot issues with Docker environment image builds and package installations. ## Prerequisites
-* An **Azure subscription**. Try the [free or paid version of Azure Machine Learning](https://aka.ms/AMLFree).
+* An Azure subscription. Try the [free or paid version of Azure Machine Learning](https://aka.ms/AMLFree).
* The [Azure Machine Learning SDK](/python/api/overview/azure/ml/install?preserve-view=true&view=azure-ml-py). * The [Azure CLI](/cli/azure/install-azure-cli?preserve-view=true&view=azure-cli-latest). * The [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md). * To debug locally, you must have a working Docker installation on your local system.
-## Docker Image Build Failures
+## Docker image build failures
-For the majority of image build failures, the root cause can be found in the image build log.
-You can find the image build log from the Azure Machine Learning portal (20\_image\_build\_log.txt) or from your ACR tasks runs logs
+For most image build failures, you'll find the root cause in the image build log.
+Find the image build log from the Azure Machine Learning portal (20\_image\_build\_log.txt) or from your Azure Container Registry task run logs.
-In most cases, it is easier to reproduce errors locally. Check the kind of error and try one of the following `setuptools`:
+It's usually easier to reproduce errors locally. Check the kind of error and try one of the following approaches:
-- Install conda dependency locally `conda install suspicious-dependency==X.Y.Z`
-- Install pip dependency locally `pip install suspicious-dependency==X.Y.Z`
-- Try to materialize the entire environment `conda create -f conda-specification.yml`
+- Install a conda dependency locally: `conda install suspicious-dependency==X.Y.Z`.
+- Install a pip dependency locally: `pip install suspicious-dependency==X.Y.Z`.
+- Try to materialize the entire environment: `conda create -f conda-specification.yml`.
> [!IMPORTANT]
-> Make sure that platform and interpreter on your local compute match the ones on the remote.
+> Make sure that the platform and interpreter on your local compute cluster match the ones on the remote compute cluster.
### Timeout
-Timeout issues can happen for various network issues:
+The following network issues can cause timeout errors:
+
- Low internet bandwidth
- Server issues
-- Large dependency that can't be downloaded with the given conda or pip timeout settings
+- Large dependencies that can't be downloaded with the given conda or pip timeout settings
-Messages similar to the below will indicate the issue:
+Messages similar to the following examples will indicate the issue:
``` ('Connection broken: OSError("(104, \'ECONNRESET\')")', OSError("(104, 'ECONNRESET')"))
@@ -52,60 +54,63 @@ Messages similar to the below will indicate the issue:
ReadTimeoutError("HTTPSConnectionPool(host='****', port=443): Read timed out. (read timeout=15)",) ```
-Possible solutions:
+If you get an error message, try one of the following possible solutions:
-- Try a different source for the dependency if available such as mirrors, blob storage, or other python feeds.
-- Update conda or pip. If a custom docker file is used, update the timeout settings.
-- Some pip versions have known issues. Consider adding a specific version of pip into environment dependencies.
+- Try a different source, such as mirrors, Azure Blob Storage, or other Python feeds, for the dependency.
+- Update conda or pip. If you're using a custom Docker file, update the timeout settings.
+- Some pip versions have known issues. Consider adding a specific version of pip to the environment dependencies.
+
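If the failures are pip download timeouts, one possible mitigation is to pass a longer timeout through the environment's pip options. This is a hedged sketch using the AzureML Python SDK; the 120-second value and package names are illustrative assumptions, not values from this article:

```python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

# Sketch only: give pip more time to download large wheels before it times out.
deps = CondaDependencies.create(pip_packages=["azureml-defaults", "some-large-package"])
deps.set_pip_option("--timeout 120")

env = Environment(name="timeout-tolerant-env")
env.python.conda_dependencies = deps
```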
+### Package not found
-### Package Not Found
+The following errors are most common for image build failures:
-This is the most common case for image build failures.
+- Conda package couldn't be found:
-- Conda package could not be found
- ```
- ResolvePackageNotFound:
- - not-existing-conda-package
- ```
+ ```
+ ResolvePackageNotFound:
+ - not-existing-conda-package
+ ```
-- Specified pip package or version could not be found
- ```
- ERROR: Could not find a version that satisfies the requirement invalid-pip-package (from versions: none)
- ERROR: No matching distribution found for invalid-pip-package
- ```
+- Specified pip package or version couldn't be found:
-- Bad nested pip dependency
- ```
- ERROR: No matching distribution found for bad-backage==0.0 (from good-package==1.0)
- ```
+ ```
+ ERROR: Could not find a version that satisfies the requirement invalid-pip-package (from versions: none)
+ ERROR: No matching distribution found for invalid-pip-package
+ ```
-Check the package exists on the specified sources. Use [pip search](https://pip.pypa.io/en/stable/reference/pip_search/) to verify pip dependencies.
+- Bad nested pip dependency:
-`pip search azureml-core`
+ ```
+ ERROR: No matching distribution found for bad-package==0.0 (from good-package==1.0)
+ ```
-For conda dependencies, use [conda search](https://docs.conda.io/projects/conda/en/latest/commands/search.html).
+Check that the package exists on the specified sources. Use [pip search](https://pip.pypa.io/en/stable/reference/pip_search/) to verify pip dependencies:
-`conda search conda-forge::numpy`
+- `pip search azureml-core`
-For more options:
+For conda dependencies, use [conda search](https://docs.conda.io/projects/conda/en/latest/commands/search.html):
+
+- `conda search conda-forge::numpy`
+
+For more options, try:
- `pip search -h`
- `conda search -h`
-#### Installer Notes
+#### Installer notes
Make sure that the required distribution exists for the specified platform and Python interpreter version.
-For pip dependencies, navigate to `https://pypi.org/project/[PROJECT NAME]/[VERSION]/#files` to see if required version is available. For example, https://pypi.org/project/azureml-core/1.11.0/#files
+For pip dependencies, go to `https://pypi.org/project/[PROJECT NAME]/[VERSION]/#files` to see if the required version is available. Go to https://pypi.org/project/azureml-core/1.11.0/#files to see an example.
-For conda dependencies, check package on the channel repository.
-For channels maintained by Anaconda, Inc., check [here](https://repo.anaconda.com/pkgs/).
+For conda dependencies, check the package on the channel repository.
+For channels maintained by Anaconda, Inc., check the [Anaconda Packages page](https://repo.anaconda.com/pkgs/).
-### Pip Package Update
+### Pip package update
-During install or update of a pip package the resolver may need to update an already installed package to satisfy the new requirements.
-Uninstall can fail for various reasons related to pip version or the way the dependency was installed.
-The most common scenario is that a dependency installed by conda could not be uninstalled by pip.
-For this scenario, consider uninstalling the dependency using `conda remove mypackage`.
+During an installation or an update of a pip package, the resolver might need to update an already-installed package to satisfy the new requirements.
+Uninstallation can fail for various reasons related to the pip version or the way the dependency was installed.
+The most common scenario is that a dependency installed by conda couldn't be uninstalled by pip.
+For this scenario, consider uninstalling the dependency by using `conda remove mypackage`.
``` Attempting uninstall: mypackage
@@ -116,62 +121,74 @@ ERROR: Cannot uninstall 'mypackage'. It is a distutils installed project and thu
Certain installer versions have issues in the package resolvers that can lead to a build failure.
-If a custom base image or dockerfile is used, we recommend using conda version 4.5.4 or higher.
+If you're using a custom base image or Dockerfile, we recommend using conda version 4.5.4 or later.
+
+A pip package is required to install pip dependencies. If a version isn't specified in the environment, the latest version will be used.
+We recommend using a known version of pip to avoid transient issues or breaking changes that the latest version of the tool might cause.
-Pip package is required to install pip dependencies and if a version is not specified in the environment the latest version will be used.
-We recommend using a known version of pip to avoid transient issues or breaking changes that can be caused by the latest version of the tool.
+Consider pinning the pip version in your environment if you see the following message:
-Consider pinning the pip version in your environment if you see any of the messages below:
+ ```
+ Warning: you have pip-installed dependencies in your environment file, but you do not list pip itself as one of your conda dependencies. Conda may not use the correct pip to install your packages, and they may end up in the wrong place. Please add an explicit pip dependency. I'm adding one for you, but still nagging you.
+ ```
-`Warning: you have pip-installed dependencies in your environment file, but you do not list pip itself as one of your conda dependencies. Conda may not use the correct pip to install your packages, and they may end up in the wrong place. Please add an explicit pip dependency. I'm adding one for you, but still nagging you.`
+Pip subprocess error:
+ ```
+ ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, update the hashes as well. Otherwise, examine the package contents carefully; someone may have tampered with them.
+ ```
-`Pip subprocess error:
-ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, update the hashes as well. Otherwise, examine the package contents carefully; someone may have tampered with them.`
+Pip installation can be stuck in an infinite loop if there are unresolvable conflicts in the dependencies.
+If you're working locally, downgrade the pip version to < 20.3.
+In a conda environment created from a YAML file, you'll see this issue only if conda-forge is the highest-priority channel. To mitigate the issue, explicitly specify pip<20.3 (for example, !=20.3, or pin to another version such as =20.2.4) as a conda dependency in the conda specification file.
-In addition, pip installation can be stuck in an infinite loop if there are unresolvable conflicts in the dependencies.
-If working locally, downgrade the pip version to < 20.3.
-In a conda environment created from a YAML file, this issue will only be seen if conda-forge is highest priority channel. To mitigate the issue, explicitly specify pip < 20.3 (!=20.3 or =20.2.4 pin to other version) as a conda dependency in the conda specification file.
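A minimal sketch of pinning pip through the Python SDK, assuming an illustrative environment name, the =20.2.4 pin mentioned above, and placeholder package names:

```python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

# Sketch only: pin pip as a conda dependency to avoid the resolver issues seen with pip >= 20.3.
deps = CondaDependencies.create(
    conda_packages=["pip=20.2.4"],
    pip_packages=["azureml-defaults"],
)
env = Environment(name="pinned-pip-env")
env.python.conda_dependencies = deps
```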
+## Service-side failures
-## Service Side Failures
+See the following scenarios to troubleshoot possible service-side failures.
+
+### You're unable to pull an image from a container registry, or the address couldn't be resolved for a container registry
-### Unable to pull image from MCR/Address could not be resolved for Container Registry.
Possible issues:
-- Path name to container registry may not be resolving correctly. Check that image names use double slashes and the direction of slashes on Linux vs Windows hosts is correct.
-- If the ACR behind a Vnet is using a private endpoint in [an unsupported region](https://docs.microsoft.com/azure/private-link/private-link-overview#availability), configure the ACR behind a VNet using the service endpoint (Public access) from the portal and retry.
-- After putting the ACR behind a VNet, ensure that the [ARM template](https://docs.microsoft.com/azure/machine-learning/how-to-enable-virtual-network#azure-container-registry) is run. This enables the workspace to communicate with the ACR instance.
+- The path name to the container registry might not be resolving correctly. Check that image names use double slashes and the direction of slashes on Linux versus Windows hosts is correct.
+- If a container registry behind a virtual network is using a private endpoint in [an unsupported region](https://docs.microsoft.com/azure/private-link/private-link-overview#availability), configure the container registry by using the service endpoint (public access) from the portal and retry.
+- After you put the container registry behind a virtual network, run the [Azure Resource Manager template](https://docs.microsoft.com/azure/machine-learning/how-to-enable-virtual-network#azure-container-registry) so the workspace can communicate with the container registry instance.
+
+### You get a 401 error from a workspace container registry
+
+Resynchronize storage keys by using [ws.sync_keys()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py#sync-keys--).
+
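A minimal sketch of resynchronizing the keys, assuming the workspace configuration is available locally as a config.json file:

```python
from azureml.core import Workspace

# Sketch only: load the workspace from a local config.json and resync its keys.
ws = Workspace.from_config()
ws.sync_keys()   # resynchronizes keys for the workspace's storage account and container registry
```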
+### The environment keeps throwing a "Waiting for other conda operations to finish…" error
+
+When an image build is ongoing, conda is locked by the SDK client. If the process crashed or was canceled incorrectly by the user, conda stays in the locked state. To resolve this issue, manually delete the lock file.
+
+### Your custom Docker image isn't in the registry
-### 401 error from workspace ACR
-Resynchronize storage keys using [ws.sync_keys()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py#sync-keys--)
+Check if the [correct tag](https://docs.microsoft.com/azure/machine-learning/how-to-use-environments#create-an-environment) is used and that `user_managed_dependencies = True`. `Environment.python.user_managed_dependencies = True` disables conda and uses the user's installed packages.
-### Environment keeps throwing "Waiting for other Conda operations to finish…" Error
-When an image build is ongoing, conda is locked by the SDK client. If the process crashed or was canceled incorrectly by the user - conda stays in the locked state. To resolve this, manually delete the lock file.
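A minimal sketch of referencing a prebuilt image with user-managed dependencies; the registry, repository, and tag below are placeholders, not values from this article:

```python
from azureml.core import Environment

# Sketch only: point the environment at an existing image and skip conda management.
env = Environment(name="prebuilt-image-env")
env.docker.base_image = "myregistry.azurecr.io/my-image:my-tag"
env.python.user_managed_dependencies = True   # use the packages already baked into the image
```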
+### You get one of the following common virtual network issues
-### Custom docker image not in registry
-Check if the [correct tag](https://docs.microsoft.com/azure/machine-learning/how-to-use-environments#create-an-environment) is used and that `user_managed_dependencies = True`. `Environment.python.user_managed_dependencies = True` disables Conda and uses the user's installed packages.
+- Check that the storage account, compute cluster, and container registry are all in the same subnet of the virtual network.
+- When your container registry is behind a virtual network, it can't directly be used to build images. You'll need to use the compute cluster to build images.
+- Storage might need to be placed behind a virtual network if you:
+ - Use inferencing or private wheel.
+ - See 403 (not authorized) service errors.
+ - Can't get image details from Azure Container Registry.
-### Common VNet issues
+### The image build fails when you're trying to access network protected storage
-1. Check that the storage account, compute cluster, and Azure Container Registry are all in the same subnet of the virtual network.
-2. When ACR is behind a VNet, it can't directly be used to build images. The compute cluster needs to be used to build images.
-3. Storage may need to be placed behind a VNet when:
- - Using inferencing or private wheel
- - Seeing 403 (not authorized) service errors
- - Can't get image details from ACR/MCR
+- Azure Container Registry tasks don't work behind a virtual network. If the user has their container registry behind a virtual network, they need to use the compute cluster to build an image.
+- Storage should be behind a virtual network in order to pull dependencies from it.
-### Image build fails when trying to access network protected storage
-- ACR tasks do not work behind the VNet. If the user has their ACR behind the VNet, they need to use the compute cluster to build an image.
-- Storage should be behind VNet in order to be able to pull dependencies from it.
+### You can't run experiments when storage has network security enabled
-### Cannot run experiments when storage has network security enabled
-When using default Docker images and enabling user-managed dependencies, you must use the MicrosoftContainerRegistry and AzureFrontDoor.FirstParty [service tags](https://docs.microsoft.com/azure/machine-learning/how-to-enable-virtual-network) to allowlist MCR and its dependencies.
+If you're using default Docker images and enabling user-managed dependencies, use the MicrosoftContainerRegistry and AzureFrontDoor.FirstParty [service tags](https://docs.microsoft.com/azure/machine-learning/how-to-enable-virtual-network) to allowlist Azure Container Registry and its dependencies.
- See [enabling VNET](https://docs.microsoft.com/azure/machine-learning/how-to-enable-virtual-network#azure-container-registry) for more.
+ For more information, see [Enabling virtual networks](https://docs.microsoft.com/azure/machine-learning/how-to-enable-virtual-network#azure-container-registry).
-### Creating an ICM
+### You need to create an ICM
-When creating/assigning an ICM to Metastore, please include the CSS support ticket so that we can better understand the issue.
+When you're creating/assigning an ICM to Metastore, include the CSS support ticket so that we can better understand the issue.
## Next steps
- [Train a machine learning model to categorize flowers](how-to-train-scikit-learn.md)
-- [Train a machine learning model using a custom Docker image](how-to-train-with-custom-image.md)
\ No newline at end of file
+- [Train a machine learning model by using a custom Docker image](how-to-train-with-custom-image.md)
\ No newline at end of file
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-environments.md
@@ -130,6 +130,8 @@ myenv.docker.base_image_registry="your_registry_location"
You can also specify a custom Dockerfile. It's simplest to start from one of Azure Machine Learning base images using Docker ```FROM``` command, and then add your own custom steps. Use this approach if you need to install non-Python packages as dependencies. Remember to set the base image to None.
+Python is an implicit dependency in Azure Machine Learning, so a custom Dockerfile must have Python installed.
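
A minimal sketch of a Dockerfile string that satisfies this implicit requirement; the base image, install commands, and environment name are assumptions, not the article's own example:

```python
from azureml.core import Environment

# Sketch only: a custom Dockerfile must install Python explicitly,
# because Azure ML depends on it even in fully custom images.
dockerfile = r"""
FROM ubuntu:18.04
RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    ln -s /usr/bin/python3 /usr/bin/python
"""

env = Environment(name="custom-dockerfile-env")
env.docker.base_image = None
env.docker.base_dockerfile = dockerfile
```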
+ ```python # Specify docker steps as a string. dockerfile = r"""
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-bring-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-bring-data.md
@@ -96,6 +96,20 @@ tutorial
If you didn't run `train.py` locally in the previous tutorial, you won't have the `data/` directory. In this case, run the `torchvision.datasets.CIFAR10` method locally with `download=True` in your `train.py` script.
+Also, to run locally, make sure you exit the tutorial environment and activate the new conda environment:
+
+```bash
+conda deactivate # If you are still using the tutorial environment, exit it
+```
+
+```bash
+conda env create -f .azureml/pytorch-env.yml # create the new conda environment with updated dependencies
+```
+
+```bash
+conda activate pytorch-aml-env # activate new conda environment
+```
+ To run the modified training script locally, call: ```bash
mariadb https://docs.microsoft.com/en-us/azure/mariadb/concept-reserved-pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/concept-reserved-pricing.md
@@ -45,7 +45,7 @@ The following table describes required fields.
| Field | Description | | :------------ | :------- |
-| Subscription | The subscription used to pay for the Azure Database for MariaDB reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for MariaDB reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's monetary commitment balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
+| Subscription | The subscription used to pay for the Azure Database for MariaDB reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for MariaDB reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select: </br></br> **Shared**, the vCore reservation discount is applied to Azure Database for MariaDB servers running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.</br></br> **Single subscription**, the vCore reservation discount is applied to Azure Database for MariaDB servers in this subscription. </br></br> **Single resource group**, the reservation discount is applied to Azure Database for MariaDB servers in the selected subscription and the selected resource group within that subscription. | Region | The Azure region that's covered by the Azure Database for MariaDB reserved capacity reservation. | Deployment Type | The Azure Database for MariaDB resource type that you want to buy the reservation for.
marketplace https://docs.microsoft.com/en-us/azure/marketplace/azure-consumption-commitment-benefit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-consumption-commitment-benefit.md
@@ -19,7 +19,7 @@ A select set of Microsoft commercial marketplace offers will contribute towards
We validate all offers that participate in this program to ensure you receive high-quality solutions.
-To take advantage of this benefit, simply purchase a qualifying offer on Azure Marketplace using a subscription that's related to your Azure agreement. Azure prepayment and monetary commitments are not eligible for this benefit.
+To take advantage of this benefit, simply purchase a qualifying offer on Azure Marketplace using a subscription that's related to your Azure agreement. Azure Prepayment (previously called monetary commitment) is not eligible for this benefit.
> [!IMPORTANT] > Exclusions may apply to CtC agreements signed prior to this marketplace benefit. If you have questions about eligibility, contact your Microsoft account executive.
marketplace https://docs.microsoft.com/en-us/azure/marketplace/azure-vm-create-certification-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-certification-faq.md
@@ -316,7 +316,7 @@ Refer to the following table for any issues that arise when you download the VM
|Invalid VHD name|Check to see whether any special characters, such as a percent sign `%` or quotation marks `"`, exist in the VHD name.|Rename the VHD file by removing the special characters.| |
-## First 1 MB (2048 sectors, each sector of 512 bytes) partition
+## First partition starts at 1 MB (2048 Sectors)
If you are [building your own image](azure-vm-create-using-own-image.md), ensure the first 2048 sectors (1 MB) of the OS disk is empty. Otherwise, your publishing will fail. This requirement is applicable to the OS disk only (not data disks). If you are building your image [from an approved base](azure-vm-create-using-approved-base.md), you can skip this requirement.
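As a rough way to check this layout on a raw, MBR-partitioned image, the sketch below reads the partition table and confirms the first partition starts at sector 2048 or later. The file name is a placeholder, and GPT disks would need the protective MBR and GPT headers parsed instead:

```python
import struct

# Sketch only: read the MBR of a raw OS disk image and report where the first partition starts.
def first_partition_start_sector(path: str) -> int:
    with open(path, "rb") as disk:
        mbr = disk.read(512)
    # The first MBR partition entry begins at offset 446; its starting LBA is a
    # little-endian 32-bit value at offset 8 within the 16-byte entry.
    return struct.unpack_from("<I", mbr, 446 + 8)[0]

start = first_partition_start_sector("osdisk.raw")
print("OK" if start >= 2048 else f"First partition starts at sector {start}; repartition the disk")
```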
marketplace https://docs.microsoft.com/en-us/azure/marketplace/marketplace-commercial-transaction-capabilities-and-considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-commercial-transaction-capabilities-and-considerations.md
@@ -155,7 +155,7 @@ In this scenario, Microsoft bills $100.00 for your software license and pays out
**Credit cards and monthly invoice** – Customers can also pay using a credit card and a monthly invoice. In this case, your software license fees will be billed just like the Enterprise Agreement scenario, as an itemized cost, separate from any Azure-specific usage costs.
-**Free credits and monetary commitment** – Some customers elect to prepay Azure with a monetary commitment in the Enterprise Agreement or have been provided free credits for use with Azure. Although these credits can be used to pay for Azure usage, they can't be used to pay for publisher software license fees.
+**Free credits and Azure Prepayment** – Some customers elect to prepay Azure with Azure Prepayment (previously called monetary commitment) in the Enterprise Agreement or have been provided free credits for use with Azure. Although these credits can be used to pay for Azure usage, they can't be used to pay for publisher software license fees.
**Billing and collections** – Publisher software license billing is presented using the customer-selected method of invoicing and follows the invoicing timeline. Customers without an Enterprise Agreement in place are billed monthly for marketplace software licenses. Customers with an Enterprise Agreement are billed monthly via an invoice that is presented quarterly.
marketplace https://docs.microsoft.com/en-us/azure/marketplace/marketplace-virtual-machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-virtual-machines.md
@@ -68,7 +68,7 @@ VM offers require at least one plan. A plan defines the solution scope and limit
VMs are fully commerce-enabled, using pay-as-you-go or bring-your-own-license (BYOL) licensing models. Microsoft hosts the commerce transaction and bills your customer on your behalf. You get the benefit of using the preferred payment relationship between your customer and Microsoft, including any Enterprise Agreements. For more information, see [Commercial marketplace transact capabilities](./marketplace-commercial-transaction-capabilities-and-considerations.md). > [!NOTE]
-> The monetary commitments associated with an Enterprise Agreement can be used against the Azure usage of your VM, but not against your software licensing fees.
+> The Azure Prepayment (previously called monetary commitment) associated with an Enterprise Agreement can be used against the Azure usage of your VM, but not against your software licensing fees.
### Licensing options
marketplace https://docs.microsoft.com/en-us/azure/marketplace/support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/support.md
@@ -1,12 +1,12 @@
---
-title: Get support for the commercial marketplace portal in Partner Center
-description: Learn about your support options in Partner Center, including how to file a support request.
+title: Get support for the commercial marketplace program in Partner Center
+description: Learn about your support options for the commercial marketplace program in Partner Center, including how to file a support request.
ms.service: marketplace ms.subservice: partnercenter-marketplace-publisher ms.topic: conceptual author: navits09 ms.author: navits
-ms.date: 01/14/2020
+ms.date: 01/19/2020
--- # Support for the commercial marketplace program in Partner Center
@@ -17,60 +17,43 @@ Microsoft provides support for a wide variety of products and services. Finding
- If you're a publisher and have detected a security issue with an application running on Azure, see [How to log a security event support ticket](/azure/security/fundamentals/event-support-ticket). Publishers must report suspected security events, including security incidents and vulnerabilities of their Azure Marketplace software and service offerings, at the earliest opportunity. - If you're a publisher and have a question relating to your app or service, review the following support options.
-## Support options for publishers
+## Get help or open a support ticket
-1. Sign in to the [commercial marketplace program on Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview) with your work account (if you have not yet done so, you will need to [create a Partner Center account](partner-center-portal/create-account.md).
+1. Sign in with your work account. If you have not yet done so, you will need to [create a Partner Center account](partner-center-portal/create-account.md).
-2. In the upper menu on the right side of the page, select the **Support** icon.
+1. In the menu on the upper-right of the page, select the **Support** icon. The **Help and support** pane appears on the right side of the page.
-3. The **Help and support** pane will appear from the right-hand side of the page.
+1. For help with the commercial marketplace, select **Commercial Marketplace**.
![Support drop-down menu](./media/support/commercial-marketplace-support-pane.png)
- Or go to the **Home page** pane and select **Help and support**.
+1. In the **Problem summary** box, enter a brief description of the issue.
- ![Help and support from Home page](./media/support/homepage-help-support.png)
+1. In the **Problem type** box, do one of the following:
-4. Select **[Documentation](../index.yml)** to review comprehensive answers to questions and resources.
+ - **Option 1**: Enter keywords such as: Marketplace, Azure app, SaaS offer, account management, lead management, deployment issue, payout, or co-sell offer migration. Then select a problem type from the recommended list that appears.
-5. Select **[Marketplace Partner community forum](https://www.microsoftpartnercommunity.com/t5/Azure-Marketplace-and-AppSource/bd-p/2222)** to answer your questions by leveraging the knowledge of other Microsoft publishers.
+ - **Option 2**: Select **Browse topics** from the **Category** list and then select **Commercial Marketplace**. Then select the appropriate **Topic** and **Subtopic**.
-6. Select **[Additional help](https://aka.ms/marketplacepublishersupport)** to open a **New support request** ticket.
+1. After you have found the topic of your choice, select **Review Solutions**.
-## How to open a support ticket
+ ![Next step](./media/support/next-step.png)
-Now you're ready to open a support ticket on the **Help and Support** screen.
+The following options are shown:
-![Help and support](./media/support/help-and-support.png)
+- To select a different topic, click **Select a different issue**.
+- To help solve the issue, review the recommended steps and documents, if available.
->[!Note]
->If you are logged in Partner Center, you will receive better experience with support.
-
-**Option 1:** Enter keywords such as: Marketplace, Azure app, SaaS offer, account management, lead management, deployment issue, payout, etc.
-
-**Option 2:** Browse topics -> select **Category** = commercial marketplace -> select the appropriate **Topic** then **Subtopic**.
-
-Once you have found the topic of your choice, select **Review Solutions**.
-
-![Next step](./media/support/next-step.png)
-
-The following options will become available:
-
-- To select a different topic, select a different topic link under **selected issue**.
-- Review the description for this issue, if available. It is the text shown above the **recommended steps**.
-- Review **recommended steps**, if available.
-- Review **recommended documents**, if available.
-
-![Recommended solutions](./media/support/recommended-solutions.png)
+ ![Recommended solutions](./media/support/recommended-solutions.png)
-If you cannot find your answer in **recommended solutions**, select **provide issue details**. Complete all required fields to speed up the resolution process, then select **submit**.
+If you cannot find your answer in the self help, select **Provide issue details**. Complete all required fields to speed up the resolution process, then select **Submit**.
>[!Note]
->If you have not logged in Partner Center and the topic requires authentication, you will be requested to log in before you can proceed. For public topics, authentication is not required.
+>If you have not signed in to Partner Center, you may be required to sign in before you can create a ticket.
## Track your existing support requests
-To review all of your open and closed tickets, go to **Commercial Marketplace** on the left navigation bar, and then select **support**.
+To review your open and closed tickets, in the left-navigation menu, select **Commercial Marketplace** > **Support**.
## Record issue details with a HAR file
mysql https://docs.microsoft.com/en-us/azure/mysql/concept-reserved-pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concept-reserved-pricing.md
@@ -45,7 +45,7 @@ The following table describes required fields.
| Field | Description | | :------------ | :------- |
-| Subscription | The subscription used to pay for the Azure Database for MySQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for MySQL reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's monetary commitment balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
+| Subscription | The subscription used to pay for the Azure Database for MySQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for MySQL reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select: </br></br> **Shared**, the vCore reservation discount is applied to Azure Database for MySQL servers running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.</br></br> **Single subscription**, the vCore reservation discount is applied to Azure Database for MySQL servers in this subscription. </br></br> **Single resource group**, the reservation discount is applied to Azure Database for MySQL servers in the selected subscription and the selected resource group within that subscription. | Region | The Azure region that's covered by the Azure Database for MySQL reserved capacity reservation. | Deployment Type | The Azure Database for MySQL resource type that you want to buy the reservation for.
mysql https://docs.microsoft.com/en-us/azure/mysql/flexible-server/connect-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/connect-csharp.md new file mode 100644
@@ -0,0 +1,307 @@
+---
+title: 'Quickstart: Connect using C# - Azure Database for MySQL Flexible Server'
+description: This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for MySQL Flexible Server.
+author: mksuni
+ms.author: sumuth
+ms.service: mysql
+ms.custom: "mvc, devx-track-csharp"
+ms.devlang: csharp
+ms.topic: quickstart
+ms.date: 01/16/2021
+---
+
+# Quickstart: Use .NET (C#) to connect and query data in Azure Database for MySQL - Flexible Server
+
+This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database.
+
+## Prerequisites
+For this quickstart you need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Database for MySQL flexible server. Create one by using the [Azure portal](./quickstart-create-server-portal.md) or the [Azure CLI](./quickstart-create-server-cli.md) if you do not have one.
+- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity.
+- [Create a database and non-admin user](../howto-create-users.md)
+
+[Having issues? Let us know](https://github.com/MicrosoftDocs/azure-docs/issues)
+
+## Create a C# project
+At a command prompt, run:
+
+```
+mkdir AzureMySqlExample
+cd AzureMySqlExample
+dotnet new console
+dotnet add package MySqlConnector
+```
+
+## Get connection information
+Get the connection information needed to connect to the Azure Database for MySQL. You need the fully qualified server name and login credentials.
+
+1. Log in to the [Azure portal](https://portal.azure.com/).
+2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Select the server name.
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-csharp/server-overview-name-login.png" alt-text="Azure Database for MySQL server name":::
+
+## Step 1: Connect and insert data
+Use the following code to connect and load the data by using `CREATE TABLE` and `INSERT INTO` SQL statements. The code uses the methods of the `MySqlConnection` class:
+- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
+- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
+- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
+
+Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using MySqlConnector;
+
+namespace AzureMySqlExample
+{
+ class MySqlCreate
+ {
+ static async Task Main(string[] args)
+ {
+ var builder = new MySqlConnectionStringBuilder
+ {
+ Server = "YOUR-SERVER.mysql.database.azure.com",
+ Database = "YOUR-DATABASE",
+ UserID = "USER",
+ Password = "PASSWORD",
+ SslMode = MySqlSslMode.Required,
+ };
+
+ using (var conn = new MySqlConnection(builder.ConnectionString))
+ {
+ Console.WriteLine("Opening connection");
+ await conn.OpenAsync();
+
+ using (var command = conn.CreateCommand())
+ {
+ command.CommandText = "DROP TABLE IF EXISTS inventory;";
+ await command.ExecuteNonQueryAsync();
+ Console.WriteLine("Finished dropping table (if existed)");
+
+ command.CommandText = "CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);";
+ await command.ExecuteNonQueryAsync();
+ Console.WriteLine("Finished creating table");
+
+ command.CommandText = @"INSERT INTO inventory (name, quantity) VALUES (@name1, @quantity1),
+ (@name2, @quantity2), (@name3, @quantity3);";
+ command.Parameters.AddWithValue("@name1", "banana");
+ command.Parameters.AddWithValue("@quantity1", 150);
+ command.Parameters.AddWithValue("@name2", "orange");
+ command.Parameters.AddWithValue("@quantity2", 154);
+ command.Parameters.AddWithValue("@name3", "apple");
+ command.Parameters.AddWithValue("@quantity3", 100);
+
+ int rowCount = await command.ExecuteNonQueryAsync();
+ Console.WriteLine(String.Format("Number of rows inserted={0}", rowCount));
+ }
+
+ // connection will be closed by the 'using' block
+ Console.WriteLine("Closing connection");
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues)
+
+## Step 2: Read data
+
+Use the following code to connect and read the data by using a `SELECT` SQL statement. The code uses the `MySqlConnection` class with methods:
+- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
+- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
+- [ExecuteReaderAsync()](/dotnet/api/system.data.common.dbcommand.executereaderasync) to run the database commands.
+- [ReadAsync()](/dotnet/api/system.data.common.dbdatareader.readasync#System_Data_Common_DbDataReader_ReadAsync) to advance to the records in the results. Then the code uses GetInt32 and GetString to parse the values in the record.
+
+Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using MySqlConnector;
+
+namespace AzureMySqlExample
+{
+ class MySqlRead
+ {
+ static async Task Main(string[] args)
+ {
+ var builder = new MySqlConnectionStringBuilder
+ {
+ Server = "YOUR-SERVER.mysql.database.azure.com",
+ Database = "YOUR-DATABASE",
+ UserID = "USER@YOUR-SERVER",
+ Password = "PASSWORD",
+ SslMode = MySqlSslMode.Required,
+ };
+
+ using (var conn = new MySqlConnection(builder.ConnectionString))
+ {
+ Console.WriteLine("Opening connection");
+ await conn.OpenAsync();
+
+ using (var command = conn.CreateCommand())
+ {
+ command.CommandText = "SELECT * FROM inventory;";
+
+ using (var reader = await command.ExecuteReaderAsync())
+ {
+ while (await reader.ReadAsync())
+ {
+ Console.WriteLine(string.Format(
+ "Reading from table=({0}, {1}, {2})",
+ reader.GetInt32(0),
+ reader.GetString(1),
+ reader.GetInt32(2)));
+ }
+ }
+ }
+
+ Console.WriteLine("Closing connection");
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+[Having issues? Let us know](https://github.com/MicrosoftDocs/azure-docs/issues)
+
+## Step 3: Update data
+Use the following code to connect and update the data by using an `UPDATE` SQL statement. The code uses the `MySqlConnection` class with the following methods:
+- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
+- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property
+- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
+
+Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using MySqlConnector;
+
+namespace AzureMySqlExample
+{
+ class MySqlUpdate
+ {
+ static async Task Main(string[] args)
+ {
+ var builder = new MySqlConnectionStringBuilder
+ {
+ Server = "YOUR-SERVER.mysql.database.azure.com",
+ Database = "YOUR-DATABASE",
+ UserID = "USER",
+ Password = "PASSWORD",
+ SslMode = MySqlSslMode.Required,
+ };
+
+ using (var conn = new MySqlConnection(builder.ConnectionString))
+ {
+ Console.WriteLine("Opening connection");
+ await conn.OpenAsync();
+
+ using (var command = conn.CreateCommand())
+ {
+ command.CommandText = "UPDATE inventory SET quantity = @quantity WHERE name = @name;";
+ command.Parameters.AddWithValue("@quantity", 200);
+ command.Parameters.AddWithValue("@name", "banana");
+
+ int rowCount = await command.ExecuteNonQueryAsync();
+ Console.WriteLine(String.Format("Number of rows updated={0}", rowCount));
+ }
+
+ Console.WriteLine("Closing connection");
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+## Step 4: Delete data
+Use the following code to connect and delete the data by using a `DELETE` SQL statement.
+
+The code uses the `MySqlConnection` class with the following methods:
+- [OpenAsync()](/dotnet/api/system.data.common.dbconnection.openasync#System_Data_Common_DbConnection_OpenAsync) to establish a connection to MySQL.
+- [CreateCommand()](/dotnet/api/system.data.common.dbconnection.createcommand) to set the CommandText property.
+- [ExecuteNonQueryAsync()](/dotnet/api/system.data.common.dbcommand.executenonqueryasync) to run the database commands.
+
+Replace the `Server`, `Database`, `UserID`, and `Password` parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using MySqlConnector;
+
+namespace AzureMySqlExample
+{
+ class MySqlDelete
+ {
+ static async Task Main(string[] args)
+ {
+ var builder = new MySqlConnectionStringBuilder
+ {
+ Server = "YOUR-SERVER.mysql.database.azure.com",
+ Database = "YOUR-DATABASE",
+ UserID = "USER",
+ Password = "PASSWORD",
+ SslMode = MySqlSslMode.Required,
+ };
+
+ using (var conn = new MySqlConnection(builder.ConnectionString))
+ {
+ Console.WriteLine("Opening connection");
+ await conn.OpenAsync();
+
+ using (var command = conn.CreateCommand())
+ {
+ command.CommandText = "DELETE FROM inventory WHERE name = @name;";
+ command.Parameters.AddWithValue("@name", "orange");
+
+ int rowCount = await command.ExecuteNonQueryAsync();
+ Console.WriteLine(String.Format("Number of rows deleted={0}", rowCount));
+ }
+
+ Console.WriteLine("Closing connection");
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Manage Azure Database for MySQL server using Portal](./how-to-manage-server-portal.md)<br/>
+
+> [!div class="nextstepaction"]
+> [Manage Azure Database for MySQL server using CLI](./how-to-manage-server-cli.md)
+
mysql https://docs.microsoft.com/en-us/azure/mysql/flexible-server/connect-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/connect-java.md new file mode 100644
@@ -0,0 +1,485 @@
+---
+title: 'Quickstart: Use Java and JDBC with Azure Database for MySQL Flexible Server'
+description: Learn how to use Java and JDBC with an Azure Database for MySQL Flexible Server database.
+author: mksuni
+ms.author: sumuth
+ms.service: mysql
+ms.custom: mvc, devcenter, devx-track-azurecli
+ms.topic: quickstart
+ms.devlang: java
+ms.date: 01/16/2021
+---
+
+# Quickstart: Use Java and JDBC with Azure Database for MySQL Flexible Server
+
+This topic demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for MySQL Flexible Server](./index.yml).
+
+## Prerequisites
+
+- An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/).
+- [Azure Cloud Shell](../../cloud-shell/quickstart.md) or [Azure CLI](/cli/azure/install-azure-cli). We recommend Azure Cloud Shell so you'll be logged in automatically and have access to all the tools you'll need.
+- A supported [Java Development Kit](/azure/developer/java/fundamentals/java-jdk-long-term-support), version 8 (included in Azure Cloud Shell).
+- The [Apache Maven](https://maven.apache.org/) build tool.
+
+## Prepare the working environment
+
+We are going to use environment variables to limit typing mistakes, and to make it easier for you to customize the following configuration for your specific needs.
+
+Set up those environment variables by using the following commands:
+
+```bash
+AZ_RESOURCE_GROUP=database-workshop
+AZ_DATABASE_NAME=flexibleserverdb
+AZ_LOCATION=<YOUR_AZURE_REGION>
+AZ_MYSQL_USERNAME=demo
+AZ_MYSQL_PASSWORD=<YOUR_MYSQL_PASSWORD>
+AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
+```
+
+Replace the placeholders with the following values, which are used throughout this article:
+
+- `<YOUR_DATABASE_NAME>`: The name of your MySQL server. It should be unique across Azure.
+- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can have the full list of available regions by entering `az account list-locations`.
+- `<YOUR_MYSQL_PASSWORD>`: The password of your MySQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
+- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to point your browser to [whatismyip.akamai.com](http://whatismyip.akamai.com/).
+
+Next, create a resource group:
+
+```azurecli
+az group create \
+ --name $AZ_RESOURCE_GROUP \
+ --location $AZ_LOCATION \
+ | jq
+```
+
+> [!NOTE]
+> We use the `jq` utility, which is installed by default on [Azure Cloud Shell](https://shell.azure.com/) to display JSON data and make it more readable.
+> If you don't like that utility, you can safely remove the `| jq` part of all the commands we'll use.
+
+## Create an Azure Database for MySQL instance
+
+The first thing we'll create is a managed MySQL server.
+
+> [!NOTE]
+> You can read more detailed information about creating MySQL servers in [Create an Azure Database for MySQL server by using the Azure portal](./quickstart-create-server-portal.md).
+
+In [Azure Cloud Shell](https://shell.azure.com/), run the following script:
+
+```azurecli
+az mysql flexible-server create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name $AZ_DATABASE_NAME \
+ --location $AZ_LOCATION \
+ --sku-name Standard_B1ms \
+ --storage-size 5120 \
+ --admin-user $AZ_MYSQL_USERNAME \
+ --admin-password $AZ_MYSQL_PASSWORD \
+    --public-access $AZ_LOCAL_IP_ADDRESS \
+ | jq
+```
+
+Make sure you enter `<YOUR_LOCAL_IP_ADDRESS>` so that you can access the server from your local machine. This command creates a Burstable tier MySQL flexible server suitable for development.
+
+The MySQL server that you created has an empty database called **flexibleserverdb**. We will use this database for this article.
+
+[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues)
+
+### Create a new Java project
+
+Using your favorite IDE, create a new Java project, and add a `pom.xml` file in its root directory:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>com.example</groupId>
+ <artifactId>demo</artifactId>
+ <version>0.0.1-SNAPSHOT</version>
+ <name>demo</name>
+
+ <properties>
+ <java.version>1.8</java.version>
+ <maven.compiler.source>1.8</maven.compiler.source>
+ <maven.compiler.target>1.8</maven.compiler.target>
+ </properties>
+
+ <dependencies>
+ <dependency>
+ <groupId>mysql</groupId>
+ <artifactId>mysql-connector-java</artifactId>
+ <version>8.0.20</version>
+ </dependency>
+ </dependencies>
+</project>
+```
+
+This file is an [Apache Maven](https://maven.apache.org/) project file that configures our project to use:
+
+- Java 8
+- A recent MySQL driver for Java
+
+### Prepare a configuration file to connect to Azure Database for MySQL
+
+Create a *src/main/resources/application.properties* file, and add:
+
+```properties
+url=jdbc:mysql://$AZ_DATABASE_NAME.mysql.database.azure.com:3306/demo?serverTimezone=UTC
+user=demo
+password=$AZ_MYSQL_PASSWORD
+```
+
+- Replace the two `$AZ_DATABASE_NAME` variables with the value that you configured at the beginning of this article.
+- Replace the `$AZ_MYSQL_PASSWORD` variable with the value that you configured at the beginning of this article.
+
+> [!NOTE]
+> We append `?serverTimezone=UTC` to the configuration property `url`, to tell the JDBC driver to use the UTC date format (or Coordinated Universal Time) when connecting to the database. Otherwise, our Java server would not use the same date format as the database, which would result in an error.
+
+### Create an SQL file to generate the database schema
+
+We will use a *src/main/resources/schema.sql* file to create a database schema. Create that file with the following content:
+
+```sql
+DROP TABLE IF EXISTS todo;
+CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BOOLEAN);
+```
+
+## Code the application
+
+### Connect to the database
+
+Next, add the Java code that will use JDBC to store and retrieve data from your MySQL server.
+
+Create a *src/main/java/DemoApplication.java* file, that contains:
+
+```java
+package com.example.demo;
+
+import com.mysql.cj.jdbc.AbandonedConnectionCleanupThread;
+
+import java.sql.*;
+import java.util.*;
+import java.util.logging.Logger;
+
+public class DemoApplication {
+
+ private static final Logger log;
+
+ static {
+ System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
+ log =Logger.getLogger(DemoApplication.class.getName());
+ }
+
+ public static void main(String[] args) throws Exception {
+ log.info("Loading application properties");
+ Properties properties = new Properties();
+ properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));
+
+ log.info("Connecting to the database");
+ Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties);
+ log.info("Database connection test: " + connection.getCatalog());
+
+ log.info("Create database schema");
+ Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql"));
+ Statement statement = connection.createStatement();
+ while (scanner.hasNextLine()) {
+ statement.execute(scanner.nextLine());
+ }
+
+ /*
+ Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
+ insertData(todo, connection);
+ todo = readData(connection);
+ todo.setDetails("congratulations, you have updated data!");
+ updateData(todo, connection);
+ deleteData(todo, connection);
+ */
+
+ log.info("Closing database connection");
+ connection.close();
+ AbandonedConnectionCleanupThread.uncheckedShutdown();
+ }
+}
+```
+
+[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues)
+
+This Java code will use the *application.properties* and the *schema.sql* files that we created earlier, in order to connect to the MySQL server and create a schema that will store our data.
+
+In this file, you can see that we commented out methods to insert, read, update, and delete data. We will code those methods in the rest of this article, and you will be able to uncomment them one after the other.
+
+> [!NOTE]
+> The database credentials are stored in the *user* and *password* properties of the *application.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
+
+> [!NOTE]
+> The `AbandonedConnectionCleanupThread.uncheckedShutdown();` line at the end is a MySQL driver specific command to destroy an internal thread when shutting down the application.
+> It can be safely ignored.
+
+You can now execute this main class with your favorite tool:
+
+- Using your IDE, you should be able to right-click on the *DemoApplication* class and execute it.
+- Using Maven, you can run the application by executing: `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`.
+
+The application should connect to the Azure Database for MySQL, create a database schema, and then close the connection, as you should see in the console logs:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Closing database connection
+```
+
+### Create a domain class
+
+Create a new `Todo` Java class, next to the `DemoApplication` class, and add the following code:
+
+```java
+package com.example.demo;
+
+public class Todo {
+
+ private Long id;
+ private String description;
+ private String details;
+ private boolean done;
+
+ public Todo() {
+ }
+
+ public Todo(Long id, String description, String details, boolean done) {
+ this.id = id;
+ this.description = description;
+ this.details = details;
+ this.done = done;
+ }
+
+ public Long getId() {
+ return id;
+ }
+
+ public void setId(Long id) {
+ this.id = id;
+ }
+
+ public String getDescription() {
+ return description;
+ }
+
+ public void setDescription(String description) {
+ this.description = description;
+ }
+
+ public String getDetails() {
+ return details;
+ }
+
+ public void setDetails(String details) {
+ this.details = details;
+ }
+
+ public boolean isDone() {
+ return done;
+ }
+
+ public void setDone(boolean done) {
+ this.done = done;
+ }
+
+ @Override
+ public String toString() {
+ return "Todo{" +
+ "id=" + id +
+ ", description='" + description + '\'' +
+ ", details='" + details + '\'' +
+ ", done=" + done +
+ '}';
+ }
+}
+```
+
+This class is a domain model mapped to the `todo` table that you created when executing the *schema.sql* script.
+
+### Insert data into Azure Database for MySQL
+
+In the *src/main/java/DemoApplication.java* file, after the `main` method, add the following method to insert data into the database:
+
+```java
+private static void insertData(Todo todo, Connection connection) throws SQLException {
+ log.info("Insert data");
+ PreparedStatement insertStatement = connection
+ .prepareStatement("INSERT INTO todo (id, description, details, done) VALUES (?, ?, ?, ?);");
+
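+    // JDBC parameter indexes are 1-based and match the order of the ? placeholders above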
+ insertStatement.setLong(1, todo.getId());
+ insertStatement.setString(2, todo.getDescription());
+ insertStatement.setString(3, todo.getDetails());
+ insertStatement.setBoolean(4, todo.isDone());
+ insertStatement.executeUpdate();
+}
+```
+
+You can now uncomment the following two lines in the `main` method:
+
+```java
+Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
+insertData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Closing database connection
+```
+
+### Read data from Azure Database for MySQL
+
+Let's read the data we previously inserted, to validate that our code works correctly.
+
+In the *src/main/java/DemoApplication.java* file, after the `insertData` method, add the following method to read data from the database:
+
+```java
+private static Todo readData(Connection connection) throws SQLException {
+ log.info("Read data");
+ PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM todo;");
+ ResultSet resultSet = readStatement.executeQuery();
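+    // resultSet.next() moves to the first row and returns false when the result set is empty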
+ if (!resultSet.next()) {
+ log.info("There is no data in the database!");
+ return null;
+ }
+ Todo todo = new Todo();
+ todo.setId(resultSet.getLong("id"));
+ todo.setDescription(resultSet.getString("description"));
+ todo.setDetails(resultSet.getString("details"));
+ todo.setDone(resultSet.getBoolean("done"));
+ log.info("Data read from the database: " + todo.toString());
+ return todo;
+}
+```
+
+You can now uncomment the following line in the `main` method:
+
+```java
+todo = readData(connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
+[INFO ] Closing database connection
+```
+
+[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues)
+
+### Update data in Azure Database for MySQL
+
+Let's update the data we previously inserted.
+
+Still in the *src/main/java/DemoApplication.java* file, after the `readData` method, add the following method to update data in the database:
+
+```java
+private static void updateData(Todo todo, Connection connection) throws SQLException {
+ log.info("Update data");
+ PreparedStatement updateStatement = connection
+ .prepareStatement("UPDATE todo SET description = ?, details = ?, done = ? WHERE id = ?;");
+
+ updateStatement.setString(1, todo.getDescription());
+ updateStatement.setString(2, todo.getDetails());
+ updateStatement.setBoolean(3, todo.isDone());
+ updateStatement.setLong(4, todo.getId());
+ updateStatement.executeUpdate();
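+    // Read the row back so the updated values appear in the console logs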
+ readData(connection);
+}
+```
+
+You can now uncomment the following two lines in the `main` method:
+
+```java
+todo.setDetails("congratulations, you have updated data!");
+updateData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
+[INFO ] Closing database connection
+```
+
+### Delete data in Azure Database for MySQL
+
+Finally, let's delete the data we previously inserted.
+
+Still in the *src/main/java/DemoApplication.java* file, after the `updateData` method, add the following method to delete data in the database:
+
+```java
+private static void deleteData(Todo todo, Connection connection) throws SQLException {
+ log.info("Delete data");
+ PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM todo WHERE id = ?;");
+ deleteStatement.setLong(1, todo.getId());
+ deleteStatement.executeUpdate();
+ readData(connection);
+}
+```
+
+You can now uncomment the following line in the `main` method:
+
+```java
+deleteData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: demo
+[INFO ] Create database schema
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have set up JDBC correctly!', done=true}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you have updated data!', done=true}
+[INFO ] Delete data
+[INFO ] Read data
+[INFO ] There is no data in the database!
+[INFO ] Closing database connection
+```
+
+## Clean up resources
+
+Congratulations! You've created a Java application that uses JDBC to store and retrieve data from Azure Database for MySQL.
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Migrate your MySQL database to Azure Database for MySQL using dump and restore](../concepts-migrate-dump-restore.md)
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-troubleshoot-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-troubleshoot-overview.md
@@ -204,8 +204,10 @@ Elapsed Time 330 sec
``` ## Considerations
+* Only one troubleshoot operation can be run at a time per subscription. To run another troubleshoot operation, wait for the previous one to complete. Triggering more operations while a previous one hasn't completed will cause subsequent operations to fail.
* CLI Bug: If you are using Azure CLI to run the command, the VPN Gateway and the Storage account need to be in the same resource group. Customers with the resources in different resource groups can use PowerShell or the Azure portal instead. + ## Next steps To learn how to diagnose a problem with a gateway or gateway connection, see [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md).
notebooks https://docs.microsoft.com/en-us/azure/notebooks/quickstart-export-jupyter-notebook-project https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notebooks/quickstart-export-jupyter-notebook-project.md new file mode 100644
@@ -0,0 +1,102 @@
+---
+title: Export a Jupyter Notebook project from the Azure Notebooks Preview
+description: Quickly export a Jupyter Notebook project.
+ms.topic: quickstart
+ms.date: 06/29/2020
+---
+
+# Quickstart: Export a Jupyter Notebook project in Azure Notebooks Preview
+
+[!INCLUDE [notebooks-status](../../includes/notebooks-status.md)]
+
+In this quickstart, you will download an Azure Notebooks project for use in other Jupyter Notebook solutions.
+
+## Prerequisites
+
+An existing Azure Notebooks project.
+
+## Export an Azure Notebooks project
+
+1. Go to [Azure Notebooks](https://notebooks.azure.com) and sign in. For details, see [Quickstart - Sign in to Azure Notebooks](quickstart-sign-in-azure-notebooks.md).
+
+1. From your public profile page, select **My Projects** at the top of the page:
+
+ ![My Projects link on the top of the browser window](media/quickstarts/my-projects-link.png)
+
+1. Select a project.
+1. Click the **Download** button to trigger a zip file download that contains all of your project files.
+1. Alternatively, from a specific project page, click the **Download Project** button to download all the files of that project.
+
+After downloading your project files, you can use them with other Jupyter Notebook solutions. Some options described in the sections below include:
+- [Visual Studio Code](#use-notebooks-in-visual-studio-code)
+- [GitHub Codespaces](#use-notebooks-in-github-codespaces)
+- [Azure Machine Learning](#use-notebooks-with-azure-machine-learning)
+- [Azure Lab Services](#use-azure-lab-services)
+- [GitHub](#use-github)
+
+## Create an environment for notebooks
+
+If you'd like to create an environment that matches that of the Azure Notebooks Preview, you can use the script file provided in GitHub.
+
+1. Navigate to the Azure Notebooks [GitHub repository](https://github.com/microsoft/AzureNotebooks) or [directly access the environment folder](https://aka.ms/aznbrequirementstxt).
+1. From a command prompt, navigate to the directory you want to use for your projects.
+1. Download the environment folder contents and follow the README instructions to install the Azure Notebooks package dependencies.
+
+## Use Notebooks in Visual Studio Code
+
+[VS Code](https://code.visualstudio.com/) is a free code editor that you can use locally or connected to remote compute. Combined with the Python extension, it offers a full environment for Python development including a rich native experience for working with Jupyter Notebooks.
+
+![VS Code Jupyter Notebook support](media/vs-code-jupyter-notebook.png)
+
+After [downloading](#export-an-azure-notebooks-project) your project files you can use them with VS Code. For guidance using VS Code with Jupyter Notebooks, see the [Working with Jupyter Notebooks in Visual Studio Code](https://code.visualstudio.com/docs/python/jupyter-support) and [Data Science in Visual Studio Code](https://code.visualstudio.com/docs/python/data-science-tutorial) tutorials.
+
+You can also use the [Azure Notebooks environment script](#create-an-environment-for-notebooks) with Visual Studio Code to create an environment that matches the Azure Notebooks Preview.
+
+## Use Notebooks in GitHub Codespaces
+
+GitHub Codespaces provides cloud-hosted environments where you can edit your notebooks using Visual Studio Code or in your web browser. It offers the same great Jupyter experience as VS Code, but without needing to install anything on your device. If you don't want to set up a local environment and prefer a cloud-backed solution, then creating a codespace is a great option. To get started:
+1. [Download](#export-an-azure-notebooks-project) your project files.
+1. [Create a GitHub repository](https://help.github.com/github/getting-started-with-github/create-a-repo) for storing your notebooks.
+1. [Add your files](https://help.github.com/github/managing-files-in-a-repository/adding-a-file-to-a-repository) to the repository.
+1. [Request Access to the GitHub Codespaces Preview](https://github.com/features/codespaces)
+
+## Use Notebooks with Azure Machine Learning
+
+Azure Machine Learning provides an end-to-end machine learning platform that enables users to build and deploy models faster on Azure. Azure ML allows you to run Jupyter Notebooks on a VM or a shared cluster computing environment. If you need a cloud-based solution for your ML workload, with experiment tracking, dataset management, and more, we recommend Azure Machine Learning. To get started with Azure ML:
+
+1. [Download](#export-an-azure-notebooks-project) your project files.
+1. [Create a Workspace](../machine-learning/how-to-manage-workspace.md) in the Azure portal.
+
+ ![Create a Workspace](../machine-learning/media/how-to-manage-workspace/create-workspace.gif)
+
+1. Open [Azure Machine Learning studio (preview)](https://ml.azure.com/).
+1. Using the left-side navigation bar, select **Notebooks**.
+1. Click on the **Upload files** button and upload the project files that you downloaded from Azure Notebooks.
+
+For additional information about Azure ML and running Jupyter Notebooks, you can review the [documentation](../machine-learning/how-to-run-jupyter-notebooks.md) or try the [Intro to Machine Learning](/learn/modules/intro-to-azure-machine-learning-service/) module on Microsoft Learn.
+
+## Use Azure Lab Services
+
+[Azure Lab Services](https://azure.microsoft.com/services/lab-services/) lets educators easily set up and provide on-demand access to preconfigured VMs for an entire classroom. If you're looking for a way to work with Jupyter Notebooks and cloud compute in a tailored classroom environment, Lab Services is a great option.
+
+![New lab button in Azure Lab Services](../lab-services/media/tutorial-setup-classroom-lab/new-lab-button.png)
+
+After [downloading](#export-an-azure-notebooks-project) your project files, you can use them with Azure Lab Services. For guidance about setting up a lab, see [Set up a lab to teach data science with Python and Jupyter Notebooks](../lab-services/class-type-jupyter-notebook.md).
+
+## Use GitHub
+
+GitHub provides a free, source-control-backed way to store notebooks (and other files), share your notebooks with others, and work collaboratively. If you're looking for a way to share your projects and collaborate with others, GitHub is a great option and can be combined with [GitHub Codespaces](#use-notebooks-in-github-codespaces) for a great development experience. To get started with GitHub:
+
+1. [Download](#export-an-azure-notebooks-project) your project files.
+1. [Create a GitHub repository](https://help.github.com/github/getting-started-with-github/create-a-repo) for storing your notebooks.
+1. [Add your files](https://help.github.com/github/managing-files-in-a-repository/adding-a-file-to-a-repository) to the repository.
+
+## Next steps
+
+- [Learn about Python in Visual Studio Code](https://code.visualstudio.com/docs/python/python-tutorial)
+- [Learn about Azure Machine Learning and Jupyter Notebooks](../machine-learning/how-to-run-jupyter-notebooks.md)
+- [Learn about GitHub Codespaces](https://github.com/features/codespaces)
+- [Learn about Azure Lab Services](https://azure.microsoft.com/services/lab-services/)
+- [Learn about GitHub](https://help.github.com/github/getting-started-with-github/)
\ No newline at end of file
partner-solutions https://docs.microsoft.com/en-us/azure/partner-solutions/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/partner-solutions/overview.md
@@ -4,13 +4,13 @@ description: Learn about solutions offered by partners on Azure.
author: tfitzmac ms.topic: conceptual ms.service: partner-services
-ms.date: 01/15/2021
+ms.date: 01/19/2021
ms.author: tomfitz --- # Extend Azure with solutions from partners
-Partner organizations offer solutions that you can use in Azure to enhance your cloud infrastructure. These solutions are fully integrated into Azure. You work with these solutions in much the same way you would work with solutions from Microsoft. You use a resource provider, resource types, and an API to manage the solution.
+Partner organizations offer solutions that you can use in Azure to enhance your cloud infrastructure. These solutions are fully integrated into Azure. You work with these solutions in much the same way you would work with solutions from Microsoft. You use a resource provider, resource types, and SDKs to manage the solution.
Partner solutions are available through the Marketplace.
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concept-reserved-pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concept-reserved-pricing.md
@@ -48,7 +48,7 @@ The following table describes required fields.
| Field | Description | | :------------ | :------- |
-| Subscription | The subscription used to pay for the Azure Database for PostgreSQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for PostgreSQL reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's monetary commitment balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
+| Subscription | The subscription used to pay for the Azure Database for PostgreSQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for PostgreSQL reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select: </br></br> **Shared**, the vCore reservation discount is applied to Azure Database for PostgreSQL servers running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.</br></br> **Single subscription**, the vCore reservation discount is applied to Azure Database for PostgreSQL servers in this subscription. </br></br> **Single resource group**, the reservation discount is applied to Azure Database for PostgreSQL servers in the selected subscription and the selected resource group within that subscription. | Region | The Azure region that's covered by the Azure Database for PostgreSQL reserved capacity reservation. | Deployment Type | The Azure Database for PostgreSQL resource type that you want to buy the reservation for.
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-hyperscale-reserved-pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-reserved-pricing.md
@@ -54,7 +54,7 @@ The following table describes required fields.
| Field | Description | |--------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Subscription | The subscription used to pay for the Azure Database for PostgreSQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for PostgreSQL reserved capacity reservation. The subscription type must be an Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an Enterprise Agreement subscription, the charges are deducted from the enrollment's monetary commitment balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription. |
+| Subscription | The subscription used to pay for the Azure Database for PostgreSQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for PostgreSQL reserved capacity reservation. The subscription type must be an Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an Enterprise Agreement subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription. |
| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select **Shared**, the vCore reservation discount is applied to Hyperscale (Citus) server groups running in any subscriptions within your billing context. For Enterprise Agreement customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For pay-as-you-go customers, the shared scope is all pay-as-you-go subscriptions created by the account administrator. If you select **Single subscription**, the vCore reservation discount is applied to Hyperscale (Citus) server groups in this subscription. If you select **Single resource group**, the reservation discount is applied to Hyperscale (Citus) server groups in the selected subscription and the selected resource group within that subscription. | | Region | The Azure region that's covered by the Azure Database for PostgreSQL - Hyperscale (Citus) reserved capacity reservation. | | Term | One year or three years. |
postgresql https://docs.microsoft.com/en-us/azure/postgresql/flexible-server/connect-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/connect-csharp.md new file mode 100644
@@ -0,0 +1,329 @@
+---
+title: 'Quickstart: Connect with C# - Azure Database for PostgreSQL - Flexible Server'
+description: This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for PostgreSQL - Flexible Server.
+author: mksuni
+ms.author: sumuth
+ms.service: postgresql
+ms.custom: "mvc, devcenter, devx-track-csharp"
+ms.devlang: csharp
+ms.topic: quickstart
+ms.date: 01/16/2021
+---
+
+# Quickstart: Use .NET (C#) to connect and query data in Azure Database for PostgreSQL - Flexible Server
+
+This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using C#, and that you are new to working with Azure Database for PostgreSQL.
+
+## Prerequisites
+For this quickstart you need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Database for PostgreSQL Flexible Server instance. Create one using the [Azure portal](./quickstart-create-server-portal.md) or the [Azure CLI](./quickstart-create-server-cli.md) if you do not have one.
+- Use the empty *postgres* database available on the server or create a [new database](./quickstart-create-server-portal.md#connect-to-the-postgresql-database-using-psql).
+- Install the [.NET SDK](https://www.microsoft.com/net/download) for your platform (Windows, Ubuntu Linux, or macOS).
+- Install [Visual Studio](https://www.visualstudio.com/downloads/) to build your project.
+- Install [Npgsql](https://www.nuget.org/packages/Npgsql/) NuGet package in Visual Studio.
+
+## Get connection information
+Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.
+
+1. Log in to the [Azure portal](https://portal.azure.com/).
+2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Click the server name.
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
+ :::image type="content" source="./media/connect-csharp/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
+
+## Step 1: Connect and insert data
+Use the following code to connect and load the data using **CREATE TABLE** and **INSERT INTO** SQL statements. The code uses the NpgsqlCommand class with the following methods:
+- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the PostgreSQL database.
+- [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) to set the CommandText property.
+- [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
+
+> [!IMPORTANT]
+> Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using Npgsql;
+
+namespace Driver
+{
+ public class AzurePostgresCreate
+ {
+ // Obtain connection string information from the portal
+ //
+ private static string Host = "mydemoserver.postgres.database.azure.com";
+ private static string User = "mylogin";
+ private static string DBname = "postgres";
+ private static string Password = "<server_admin_password>";
+ private static string Port = "5432";
+
+ static void Main(string[] args)
+ {
+ // Build connection string using parameters from portal
+ //
+ string connString =
+ String.Format(
+ "Server={0};Username={1};Database={2};Port={3};Password={4};SSLMode=Prefer",
+ Host,
+ User,
+ DBname,
+ Port,
+ Password);
+
+ using (var conn = new NpgsqlConnection(connString))
+
+ {
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+
+ using (var command = new NpgsqlCommand("DROP TABLE IF EXISTS inventory", conn))
+ {
+ command.ExecuteNonQuery();
+ Console.Out.WriteLine("Finished dropping table (if existed)");
+
+ }
+
+ using (var command = new NpgsqlCommand("CREATE TABLE inventory(id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER)", conn))
+ {
+ command.ExecuteNonQuery();
+ Console.Out.WriteLine("Finished creating table");
+ }
+
+ using (var command = new NpgsqlCommand("INSERT INTO inventory (name, quantity) VALUES (@n1, @q1), (@n2, @q2), (@n3, @q3)", conn))
+ {
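+                // Bind values to the named parameters (@n1, @q1, ...) used in the INSERT statement above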
+ command.Parameters.AddWithValue("n1", "banana");
+ command.Parameters.AddWithValue("q1", 150);
+ command.Parameters.AddWithValue("n2", "orange");
+ command.Parameters.AddWithValue("q2", 154);
+ command.Parameters.AddWithValue("n3", "apple");
+ command.Parameters.AddWithValue("q3", 100);
+
+ int nRows = command.ExecuteNonQuery();
+ Console.Out.WriteLine(String.Format("Number of rows inserted={0}", nRows));
+ }
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues)
+
+## Step 2: Read data
+Use the following code to connect and read the data using a **SELECT** SQL statement. The code uses the NpgsqlCommand class with the following methods:
+- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL.
+- [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) and [ExecuteReader()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteReader) to run the database commands.
+- [Read()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_Read) to advance to the next record in the results.
+- [GetInt32()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_GetInt32_System_Int32_) and [GetString()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_GetString_System_Int32_) to parse the values in the record.
+
+> [!IMPORTANT]
+> Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using Npgsql;
+
+namespace Driver
+{
+ public class AzurePostgresRead
+ {
+ // Obtain connection string information from the portal
+ //
+ private static string Host = "mydemoserver.postgres.database.azure.com";
+ private static string User = "mylogin";
+ private static string DBname = "postgres";
+ private static string Password = "<server_admin_password>";
+ private static string Port = "5432";
+
+ static void Main(string[] args)
+ {
+ // Build connection string using parameters from portal
+ //
+ string connString =
+ String.Format(
+ "Server={0}; User Id={1}; Database={2}; Port={3}; Password={4};SSLMode=Prefer",
+ Host,
+ User,
+ DBname,
+ Port,
+ Password);
+
+ using (var conn = new NpgsqlConnection(connString))
+ {
+
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+
+ using (var command = new NpgsqlCommand("SELECT * FROM inventory", conn))
+ {
+
+ var reader = command.ExecuteReader();
+ while (reader.Read())
+ {
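+                        // Columns are read by ordinal: 0 = id, 1 = name, 2 = quantity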
+ Console.WriteLine(
+ string.Format(
+ "Reading from table=({0}, {1}, {2})",
+ reader.GetInt32(0).ToString(),
+ reader.GetString(1),
+ reader.GetInt32(2).ToString()
+ )
+ );
+ }
+ reader.Close();
+ }
+ }
+
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues)
+
+## Step 3: Update data
+Use the following code to connect and update the data using an **UPDATE** SQL statement. The code uses the NpgsqlCommand class with the following methods:
+- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL.
+- [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) to set the CommandText property.
+- [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
+
+> [!IMPORTANT]
+> Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
+
+```csharp
+using System;
+using Npgsql;
+
+namespace Driver
+{
+ public class AzurePostgresUpdate
+ {
+ // Obtain connection string information from the portal
+ //
+ private static string Host = "mydemoserver.postgres.database.azure.com";
+ private static string User = "mylogin";
+ private static string DBname = "postgres";
+ private static string Password = "<server_admin_password>";
+ private static string Port = "5432";
+
+ static void Main(string[] args)
+ {
+ // Build connection string using parameters from portal
+ //
+ string connString =
+ String.Format(
+ "Server={0}; User Id={1}; Database={2}; Port={3}; Password={4};SSLMode=Prefer",
+ Host,
+ User,
+ DBname,
+ Port,
+ Password);
+
+ using (var conn = new NpgsqlConnection(connString))
+ {
+
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+
+ using (var command = new NpgsqlCommand("UPDATE inventory SET quantity = @q WHERE