Updates from: 01/29/2021 04:08:51
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/company-branding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/company-branding.md deleted file mode 100644
@@ -1,131 +0,0 @@
- Title: Add branding to your organization's sign-in page
-description: Learn how to add your organization's branding to the Azure Active Directory B2C pages.
- Previously updated : 12/10/2020
-zone_pivot_groups: b2c-policy-type
--
-# Add branding to your organization's Azure Active Directory B2C pages
-
-[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-
-You can customize your Azure AD B2C pages with a banner logo, background image, and background color by using Azure Active Directory [Company branding](../active-directory/fundamentals/customize-branding.md). Company branding applies to the sign-up, sign-in, profile editing, and password reset experiences.
-
-> [!TIP]
-> To customize other aspects of your user flow pages beyond the banner logo, background image, or background color, see how to [customize the UI with HTML template](customize-ui-with-html.md).
-
-## Prerequisites
-
-[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
--
-To customize your user flow pages, you first configure company branding in Azure Active Directory, then you enable it in the page layouts of your user flows in Azure AD B2C.
-
-## Configure company branding
-
-Start by setting the banner logo, background image, and background color within **Company branding**.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
-1. In the Azure portal, search for and select **Azure AD B2C**.
-1. Under **Manage**, select **Company branding**.
-1. Follow the steps in [Add branding to your organization's Azure Active Directory sign-in page](../active-directory/fundamentals/customize-branding.md).
-
-Keep these things in mind when you configure company branding in Azure AD B2C:
-
-* Company branding in Azure AD B2C is currently limited to **background image**, **banner logo**, and **background color** customization. The other properties in the company branding pane, for example those in **Advanced settings**, are *not supported*.
-* In your user flow pages, the background color is shown before the background image is loaded. We recommend choosing a background color that closely matches the colors in your background image for a smoother loading experience.
-* The banner logo appears in the verification emails sent to your users when they initiate a sign-up user flow.
-
-::: zone pivot="b2c-user-flow"
-
-## Enable branding in user flow pages
-
-Once you've configured company branding, enable it in your user flows.
-
-1. In the left menu of the Azure portal, select **Azure AD B2C**.
-1. Under **Policies**, select **User flows (policies)**.
-1. Select the user flow for which you'd like to enable company branding. Company branding is **not supported** for the standard *Sign in* and standard *Profile editing* user flow types.
-1. Under **Customize**, select **Page layouts**, and then select the layout you'd like to brand. For example, select **Unified sign up or sign in page**.
-1. For the **Page Layout Version (Preview)**, choose version **1.2.0** or above.
-1. Select **Save**.
-
-If you'd like to brand all pages in the user flow, set the page layout version for each page layout in the user flow.
-
-![Page layout selection in Azure AD B2C in the Azure portal](media/company-branding/portal-02-page-layout-select.png)
-
-::: zone-end
-
-::: zone pivot="b2c-custom-policy"
-
-## Select a page layout
-
-To enable company branding, you need to [define a page layout version](contentdefinitions.md#migrating-to-page-layout) with a page `contract` version for *all* of the content definitions in your custom policy. The format of the value must contain the word `contract`: _urn:com:microsoft:aad:b2c:elements:**contract**:page-name:version_. If your custom policies use an old **DataUri** value, learn how to [migrate to page layout](contentdefinitions.md#migrating-to-page-layout) with page version.
-
-
-The following example shows the content definition identifiers and the corresponding **DataUri** with page contract:
-
-```xml
-<ContentDefinitions>
- <ContentDefinition Id="api.error">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:globalexception:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.idpselections">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:providerselection:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.idpselections.signup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:providerselection:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.signuporsignin">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.selfasserted">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.selfasserted.profileupdate">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.localaccountsignup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.localaccountpasswordreset">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.phonefactor">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:multifactor:1.2.0</DataUri>
- </ContentDefinition>
-</ContentDefinitions>
-```
-
-::: zone-end
-
-The following example shows a custom banner logo and background image on a *Sign up and sign in* user flow page that uses the Ocean Blue template:
-
-![Branded sign-up/sign-in page served by Azure AD B2C](media/company-branding/template-ocean-blue-branded.png)
-
-## Use company branding assets in custom HTML
-
-To use your company branding assets in [custom HTML](customize-ui-with-html.md), add the following tags outside the `<div id="api">` tag:
-
-```HTML
-<img data-tenant-branding-background="true" />
-<img data-tenant-branding-logo="true" alt="Company Logo" />
-```
-
-The image source is replaced with that of the background image and banner logo.
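A minimal sketch of where these tags sit in a custom page template (hypothetical page; Azure AD B2C injects its UI into the `api` element and swaps in the branded image sources at load time):

```HTML
<!DOCTYPE html>
<html>
<head>
  <title>Sign in</title>
</head>
<body>
  <!-- Branding tags must sit outside the api element -->
  <img data-tenant-branding-background="true" />
  <img data-tenant-branding-logo="true" alt="Company Logo" />
  <!-- Azure AD B2C renders the sign-in UI here -->
  <div id="api"></div>
</body>
</html>
```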
-
-## Next steps
-
-Find more information about how you can customize the user interface of your applications in [Customize the user interface of your application in Azure Active Directory B2C](customize-ui-with-html.md).
\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/customize-ui-with-html https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/customize-ui-with-html.md
@@ -1,7 +1,7 @@
Title: Customize the user interface
+ Title: Customize the user interface with HTML templates
-description: Learn how to customize the user interface for your applications that use Azure Active Directory B2C.
+description: Learn how to customize the user interface with HTML templates for your applications that use Azure Active Directory B2C.
@@ -9,21 +9,21 @@
Previously updated : 12/10/2020 Last updated : 01/28/2021 zone_pivot_groups: b2c-policy-type
-# Customize the user interface in Azure Active Directory B2C
+# Customize the user interface with HTML templates in Azure Active Directory B2C
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] Branding and customizing the user interface that Azure Active Directory B2C (Azure AD B2C) displays to your customers helps provide a seamless user experience in your application. These experiences include signing up, signing in, profile editing, and password resetting. This article introduces the methods of user interface (UI) customization. > [!TIP]
-> If you want to modify only the banner logo, background image, and background color of your user flow pages, you can try the [Company branding](company-branding.md) feature.
+> If you want to modify only the banner logo, background image, and background color of your user flow pages, you can try the [Company branding](customize-ui.md) feature.
## Custom HTML and CSS overview
@@ -383,7 +383,15 @@ To use the sample:
1. Now modify the policy, pointing to your HTML file, as mentioned previously. 1. If you see missing fonts, images, or CSS, check your references in the extensions policy and the \*.html files.
-## Next steps
+## Use company branding assets in custom HTML
+
+To use [company branding](customize-ui.md#configure-company-branding) assets in custom HTML, add the following tags outside the `<div id="api">` tag. The image source is replaced with that of the background image and banner logo.
-Learn how to enable [client-side JavaScript code](javascript-and-page-layout.md).
+```HTML
+<img data-tenant-branding-background="true" />
+<img data-tenant-branding-logo="true" alt="Company Logo" />
+```
+
+## Next steps
+Learn how to enable [client-side JavaScript code](javascript-and-page-layout.md).
\ No newline at end of file
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/customize-ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/customize-ui.md new file mode 100644 /dev/null
@@ -0,0 +1,277 @@
+
+ Title: Customize the user interface
+
+description: Learn how to customize the user interface for your applications that use Azure Active Directory B2C.
+ Last updated : 01/28/2021
+zone_pivot_groups: b2c-policy-type
++
+# Customize the user interface in Azure Active Directory B2C
+
+[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
+
+Branding and customizing the user interface that Azure Active Directory B2C (Azure AD B2C) displays to your customers helps provide a seamless user experience in your application. These experiences include signing up, signing in, profile editing, and password resetting. In this article, you customize your Azure AD B2C pages by using page templates and company branding.
+
+> [!TIP]
+> To customize other aspects of your user flow pages beyond the page template, banner logo, background image, or background color, see how to [customize the UI with HTML template](customize-ui-with-html.md).
+
+## Prerequisites
+
+[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
+
+## Overview
+
+Azure AD B2C provides several built-in templates you can choose from to give your user experience pages a professional look. These page templates can also serve as a starting point for your own customization, using the [company branding](#company-branding) feature.
+
+### Ocean Blue
+
+Example of the Ocean Blue template rendered on the sign-up and sign-in page:
+
+![Ocean Blue template screenshot](media/customize-ui/template-ocean-blue.png)
+
+### Slate Gray
+
+Example of the Slate Gray template rendered on the sign-up and sign-in page:
+
+![Slate Gray template screenshot](media/customize-ui/template-slate-gray.png)
+
+### Classic
+
+Example of the Classic template rendered on the sign-up and sign-in page:
+
+![Classic template screenshot](media/customize-ui/template-classic.png)
+
+### Company branding
+
+You can customize your Azure AD B2C pages with a banner logo, background image, and background color by using Azure Active Directory [Company branding](../active-directory/fundamentals/customize-branding.md). Company branding applies to the sign-up, sign-in, profile editing, and password reset experiences.
+
+The following example shows a *Sign up and sign in* page with a custom logo and background image, using the Ocean Blue template:
+
+![Branded sign-up/sign-in page served by Azure AD B2C](media/customize-ui/template-ocean-blue-branded.png)
++
+## Select a page template
+
+::: zone pivot="b2c-user-flow"
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **User flows**.
+1. Select a user flow you want to customize.
+1. Under **Customize** in the left menu, select **Page layouts** and then select a **Template**.
+ ![Template selection drop-down in user flow page of Azure portal](./media/customize-ui/select-page-template.png)
+
+When you choose a template, the selected template is applied to all pages in your user flow. The URI for each page is visible in the **Custom page URI** field.
+
+::: zone-end
+
+::: zone pivot="b2c-custom-policy"
+
+To select a page template, set the `LoadUri` element of the [content definitions](contentdefinitions.md). The following example shows the content definition identifiers and the corresponding `LoadUri`.
+
+Ocean Blue:
+
+```xml
+<ContentDefinitions>
+ <ContentDefinition Id="api.error">
+ <LoadUri>~/tenant/templates/AzureBlue/exception.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.idpselections">
+ <LoadUri>~/tenant/templates/AzureBlue/idpSelector.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.idpselections.signup">
+ <LoadUri>~/tenant/templates/AzureBlue/idpSelector.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.signuporsignin">
+ <LoadUri>~/tenant/templates/AzureBlue/unified.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.selfasserted">
+ <LoadUri>~/tenant/templates/AzureBlue/selfAsserted.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.selfasserted.profileupdate">
+ <LoadUri>~/tenant/templates/AzureBlue/selfAsserted.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountsignup">
+ <LoadUri>~/tenant/templates/AzureBlue/selfAsserted.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountpasswordreset">
+ <LoadUri>~/tenant/templates/AzureBlue/selfAsserted.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.phonefactor">
+ <LoadUri>~/tenant/templates/AzureBlue/multifactor-1.0.0.cshtml</LoadUri>
+ </ContentDefinition>
+</ContentDefinitions>
+```
+
+Slate Gray:
+
+```xml
+<ContentDefinitions>
+ <ContentDefinition Id="api.error">
+ <LoadUri>~/tenant/templates/MSA/exception.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.idpselections">
+ <LoadUri>~/tenant/templates/MSA/idpSelector.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.idpselections.signup">
+ <LoadUri>~/tenant/templates/MSA/idpSelector.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.signuporsignin">
+ <LoadUri>~/tenant/templates/MSA/unified.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.selfasserted">
+ <LoadUri>~/tenant/templates/MSA/selfAsserted.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.selfasserted.profileupdate">
+ <LoadUri>~/tenant/templates/MSA/selfAsserted.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountsignup">
+ <LoadUri>~/tenant/templates/MSA/selfAsserted.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountpasswordreset">
+ <LoadUri>~/tenant/templates/MSA/selfAsserted.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.phonefactor">
+ <LoadUri>~/tenant/templates/MSA/multifactor-1.0.0.cshtml</LoadUri>
+ </ContentDefinition>
+</ContentDefinitions>
+```
+
+Classic:
+
+```xml
+<ContentDefinitions>
+ <ContentDefinition Id="api.error">
+ <LoadUri>~/tenant/templates/default/exception.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.idpselections">
+ <LoadUri>~/tenant/templates/default/idpSelector.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.idpselections.signup">
+ <LoadUri>~/tenant/templates/default/idpSelector.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.signuporsignin">
+ <LoadUri>~/tenant/templates/default/unified.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.selfasserted">
+ <LoadUri>~/tenant/templates/default/selfAsserted.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.selfasserted.profileupdate">
+ <LoadUri>~/tenant/templates/default/selfAsserted.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountsignup">
+ <LoadUri>~/tenant/templates/default/selfAsserted.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountpasswordreset">
+ <LoadUri>~/tenant/templates/default/selfAsserted.cshtml</LoadUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.phonefactor">
+ <LoadUri>~/tenant/templates/default/multifactor-1.0.0.cshtml</LoadUri>
+ </ContentDefinition>
+</ContentDefinitions>
+```
+::: zone-end
++
+## Configure company branding
+
+To customize your user flow pages, you first configure company branding in Azure Active Directory, then you enable it in your user flows in Azure AD B2C.
+
+Start by setting the banner logo, background image, and background color within **Company branding**.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Under **Manage**, select **Company branding**.
+1. Follow the steps in [Add branding to your organization's Azure Active Directory sign-in page](../active-directory/fundamentals/customize-branding.md).
+
+Keep these things in mind when you configure company branding in Azure AD B2C:
+
+* Company branding in Azure AD B2C is currently limited to **background image**, **banner logo**, and **background color** customization. The other properties in the company branding pane, for example, **Advanced settings**, are *not supported*.
+* In your user flow pages, the background color is shown before the background image is loaded. We recommend choosing a background color that closely matches the colors in your background image for a smoother loading experience.
+* The banner logo appears in the verification emails sent to your users when they initiate a sign-up user flow.
+++
+## Enable company branding in user flow pages
+
+::: zone pivot="b2c-user-flow"
+
+Once you've configured company branding, enable it in your user flows.
+
+1. In the left menu of the Azure portal, select **Azure AD B2C**.
+1. Under **Policies**, select **User flows (policies)**.
+1. Select the user flow for which you'd like to enable company branding. Company branding is **not supported** for the standard *Sign in* and standard *Profile editing* user flow types.
+1. Under **Customize**, select **Page layouts**, and then select the page you'd like to brand. For example, select **Unified sign up or sign in page**.
+1. For the **Page Layout Version (Preview)**, choose version **1.2.0** or above.
+1. Select **Save**.
+
+If you'd like to brand all pages in the user flow, set the page layout version for each page layout in the user flow.
+
+![Page layout selection in Azure AD B2C in the Azure portal](media/customize-ui/portal-02-page-layout-select.png)
+
+::: zone-end
+
+::: zone pivot="b2c-custom-policy"
+
+Once you've configured company branding, enable it in your custom policy. Configure the [page layout version](contentdefinitions.md#migrating-to-page-layout) with a page `contract` version for *all* of the content definitions in your custom policy. The format of the value must contain the word `contract`: _urn:com:microsoft:aad:b2c:elements:**contract**:page-name:version_. If your custom policies use an old **DataUri** value, learn how to [migrate to page layout](contentdefinitions.md#migrating-to-page-layout) with page version.
+
+The following example shows the content definitions with their corresponding page contract, using the *Ocean Blue* page template:
+
+```xml
+<ContentDefinitions>
+ <ContentDefinition Id="api.error">
+ <LoadUri>~/tenant/templates/AzureBlue/exception.cshtml</LoadUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:globalexception:1.2.0</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.idpselections">
+ <LoadUri>~/tenant/templates/AzureBlue/idpSelector.cshtml</LoadUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:providerselection:1.2.0</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.idpselections.signup">
+ <LoadUri>~/tenant/templates/AzureBlue/idpSelector.cshtml</LoadUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:providerselection:1.2.0</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.signuporsignin">
+ <LoadUri>~/tenant/templates/AzureBlue/unified.cshtml</LoadUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:1.2.0</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.selfasserted">
+ <LoadUri>~/tenant/templates/AzureBlue/selfAsserted.cshtml</LoadUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.selfasserted.profileupdate">
+ <LoadUri>~/tenant/templates/AzureBlue/selfAsserted.cshtml</LoadUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountsignup">
+ <LoadUri>~/tenant/templates/AzureBlue/selfAsserted.cshtml</LoadUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountpasswordreset">
+ <LoadUri>~/tenant/templates/AzureBlue/selfAsserted.cshtml</LoadUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.phonefactor">
+ <LoadUri>~/tenant/templates/AzureBlue/multifactor-1.0.0.cshtml</LoadUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:multifactor:1.2.0</DataUri>
+ </ContentDefinition>
+</ContentDefinitions>
+```
+
+::: zone-end
+
+## Next steps
+
+Find more information about how you can customize the user interface of your applications in [Customize the user interface of your application in Azure Active Directory B2C](customize-ui-with-html.md).
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-multi-tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
@@ -192,4 +192,4 @@ When working with custom policies, you might sometimes need additional informati
To help diagnose issues, you can temporarily put the policy into "developer mode" and collect logs with Azure Application Insights. Find out how in [Azure Active Directory B2C: Collecting Logs](troubleshoot-with-application-insights.md).
-::: zone-end
\ No newline at end of file
+::: zone-end
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-facebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-facebook.md
@@ -81,6 +81,21 @@ To enable sign-in for users with a Facebook account in Azure Active Directory B2
::: zone pivot="b2c-custom-policy"
+## Create a policy key
+
+You need to store the App Secret that you previously recorded in your Azure AD B2C tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
+3. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+4. On the Overview page, select **Identity Experience Framework**.
+5. Select **Policy Keys** and then select **Add**.
+6. For **Options**, choose `Manual`.
+7. Enter a **Name** for the policy key. For example, `FacebookSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
+8. In **Secret**, enter your App Secret that you previously recorded.
+9. For **Key usage**, select `Signature`.
+10. Click **Create**.
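In the custom policy, the new key is then referenced from the Facebook technical profile's cryptographic keys. A sketch, assuming the key was named `FacebookSecret` as above (the `StorageReferenceId` must include the auto-added `B2C_1A_` prefix):

```xml
<CryptographicKeys>
  <!-- References the policy key B2C_1A_FacebookSecret created in the steps above -->
  <Key Id="client_secret" StorageReferenceId="B2C_1A_FacebookSecret" />
</CryptographicKeys>
```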
+ ## Configure a Facebook account as an identity provider 1. In the `SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`** file, replace the value of `client_id` with the Facebook application ID:
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-twitter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-twitter.md
@@ -41,7 +41,7 @@ To enable sign-in for users with a Twitter account in Azure AD B2C, you need to
1. Under **Authentication settings**, select **Edit** 1. Select **Enable 3-legged OAuth** checkbox. 1. Select **Request email address from users** checkbox.
- 1. For the **Callback URLs**, enter `https://your-tenant.b2clogin.com/your-tenant.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Replace `your-tenant` with the name of your tenant name and `your-user-flow-Id` with the identifier of your user flow. For example, `b2c_1A_signup_signin_twitter`. Use all lowercase letters when entering your tenant name and user flow id even if they are defined with uppercase letters in Azure AD B2C.
+ 1. For the **Callback URLs**, enter `https://your-tenant.b2clogin.com/your-tenant.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Replace `your-tenant` with the name of your tenant and `your-user-flow-Id` with the identifier of your user flow. For example, `b2c_1a_signup_signin_twitter`. Use all lowercase letters when entering your tenant name and user flow ID, even if they are defined with uppercase letters in Azure AD B2C.
1. For the **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`. 1. Enter a URL for the **Terms of service**, for example `http://www.contoso.com/tos`. The policy URL is a page you maintain to provide terms and conditions for your application. 1. Enter a URL for the **Privacy policy**, for example `http://www.contoso.com/privacy`. The policy URL is a page you maintain to provide privacy information for your application.
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/microsoft-graph-operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/microsoft-graph-operations.md
@@ -9,7 +9,7 @@
Previously updated : 01/27/2021 Last updated : 01/28/2021
@@ -40,9 +40,13 @@ A phone number that can be used by a user to sign-in using [SMS or voice calls](
- [Update](/graph/api/phoneauthenticationmethod-update) - [Delete](/graph/api/phoneauthenticationmethod-delete)
+Note that the [list](/graph/api/authentication-list-phonemethods) operation returns only enabled phone numbers. A phone number must be enabled, as shown below, for the list operation to return it.
+
+![Enable phone sign-in](./media/microsoft-graph-operations/enable-phone-sign-in.png)
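The list operation is a plain Microsoft Graph GET request against the user's phone methods endpoint. A minimal sketch that only builds the request URL (token acquisition and the HTTP call itself are omitted; the path follows the linked authentication methods API):

```python
GRAPH_BASE = "https://graph.microsoft.com/beta"

def phone_methods_url(user_id: str) -> str:
    """Build the Graph URL that lists a user's phone authentication methods."""
    # The service returns only phone numbers that are enabled for sign-in.
    return f"{GRAPH_BASE}/users/{user_id}/authentication/phoneMethods"
```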
+ ## Self-service password reset email address (beta)
-An email address that can be used by a user to rest the password for [Username sign-in account](identity-provider-local.md#username-sign-in). For more information, see [Azure AD authentication methods API](/graph/api/resources/emailauthenticationmethod).
+An email address that can be used by a [username sign-in account](identity-provider-local.md#username-sign-in) to reset the password. For more information, see [Azure AD authentication methods API](/graph/api/resources/emailauthenticationmethod).
- [Add](/graph/api/emailauthenticationmethod-post) - [List](/graph/api/emailauthenticationmethod-list)
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/oauth2-technical-profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/oauth2-technical-profile.md
@@ -86,6 +86,7 @@ The technical profile also returns claims that aren't returned by the identity p
| ClaimsEndpointAccessTokenName | No | The name of the access token query string parameter. Some identity providers' claims endpoints support GET HTTP request. In this case, the bearer token is sent by using a query string parameter instead of the authorization header. Default value: `access_token`. | | ClaimsEndpointFormatName | No | The name of the format query string parameter. For example, you can set the name as `format` in this LinkedIn claims endpoint `https://api.linkedin.com/v1/people/~?format=json`. | | ClaimsEndpointFormat | No | The value of the format query string parameter. For example, you can set the value as `json` in this LinkedIn claims endpoint `https://api.linkedin.com/v1/people/~?format=json`. |
+| BearerTokenTransmissionMethod | No | Specifies how the token is sent. The default method is a query string. To send the token as a request header, set to `AuthorizationHeader`. |
| ProviderName | No | The name of the identity provider. | | response_mode | No | The method that the identity provider uses to send the result back to Azure AD B2C. Possible values: `query`, `form_post` (default), or `fragment`. | | scope | No | The scope of the request that is defined according to the OAuth2 identity provider specification. Such as `openid`, `profile`, and `email`. |
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/service-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/service-limits.md new file mode 100644 /dev/null
@@ -0,0 +1,67 @@
+
+ Title: Azure AD B2C service limits and restrictions
+
+description: Reference for service limits and restrictions for Azure Active Directory B2C service.
+ Last updated : 01/29/2021
+# Azure Active Directory B2C service limits and restrictions
+
+This article contains the usage constraints and other service limits for the Azure Active Directory B2C (Azure AD B2C) service.
+
+## End user/consumption related limits
+
+The following end-user related service limits apply to all authentication and authorization protocols supported by Azure AD B2C, including SAML, Open ID Connect, OAuth2, and ROPC.
+
+|Category |Limit |
+|||
+|Number of requests per IP address per Azure AD B2C tenant |6,000/5 min |
+|Total number of requests per Azure AD B2C tenant |12,000/min |
+
+The number of requests can vary depending on the number of directory reads and writes that occur during the Azure AD B2C user journey. For example, a simple sign-in journey that reads from the directory counts as one request. If the sign-in journey must also update the directory, the update is counted as an additional request.
+
+## Azure AD B2C configuration limits
+
+The following table lists the administrative configuration limits in the Azure AD B2C service.
+
+|Category |Limit |
+|||
+|Number of applications per Azure AD B2C tenant |250 |
+|Number of scopes per application |1,000 |
+|Number of [custom attributes](user-profile-attributes.md#extension-attributes) per user <sup>1</sup> |100 |
+|Number of redirect URLs per application |100 |
+|Number of sign-out URLs per application |1 |
+|String limit per attribute |250 characters |
+|Number of B2C tenants per subscription |20 |
+|Levels of [inheritance](custom-policy-overview.md#inheritance-model) in custom policies |10 |
+|Number of policies per Azure AD B2C tenant |200 |
+|Maximum policy file size |400 KB |
+
+<sup>1</sup> See also [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md).
+
+## Next steps
+
+- Learn about [Microsoft Graph's throttling guidance](/graph/throttling.md)
+- Learn about the [validation differences for Azure AD B2C applications](../active-directory/develop/supported-accounts-validation.md)
+++++++++++++
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/plan-auto-user-provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
@@ -318,7 +318,7 @@ Refer to the following links to troubleshoot any issues that may turn up during
* [Keep up to date on what's new with Azure AD](https://azure.microsoft.com/updates/?product=active-directory)
-* [Stack overflow Azure AD forum](https://stackoverflow.com/questions/tagged/azure-active-directory)
+* [Microsoft Q&A Azure AD forum](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
## Next steps

* [Configure Automatic User Provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
@@ -50,7 +50,7 @@ Every application requires different attributes to create a user or group. Start
|--|--|--|
|loginName|userName|userPrincipalName|
|firstName|name.givenName|givenName|
-|lastName|name.lastName|lastName|
+|lastName|name.familyName|surName|
|workMail|emails[type eq "work"].value|Mail|
|manager|manager|manager|
|tag|urn:ietf:params:scim:schemas:extension:2.0:CustomExtension:tag|extensionAttribute1|
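The attribute table above can be exercised with a small translation helper. This is an illustrative sketch (the function and the app-side field names `loginName`, `firstName`, `lastName`, `workMail` come from the table; nothing here is a Microsoft API):

```python
# Illustrative mapping of a hypothetical app user record to a SCIM 2.0
# /Users payload, following the attribute table above.
def to_scim_user(app_user):
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": app_user["loginName"],
        "name": {
            "givenName": app_user["firstName"],
            "familyName": app_user["lastName"],
        },
        "emails": [
            {"type": "work", "value": app_user["workMail"], "primary": True}
        ],
    }
```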
@@ -1195,7 +1195,7 @@ The SCIM spec does not define a SCIM-specific scheme for authentication and auth
|--|--|--|--|
|Username and password (not recommended or supported by Azure AD)|Easy to implement|Insecure - [Your Pa$$word doesn't matter](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/your-pa-word-doesn-t-matter/ba-p/731984)|Supported on a case-by-case basis for gallery apps. Not supported for non-gallery apps.|
|Long-lived bearer token|Long-lived tokens do not require a user to be present. They are easy for admins to use when setting up provisioning.|Long-lived tokens can be hard to share with an admin without using insecure methods such as email.|Supported for gallery and non-gallery apps.|
-|OAuth authorization code grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens do not have. A real user must be present during initial authorization, adding a level of accountability. |Requires a user to be present. If the user leaves the organization, the token is invalid and authorization will need to be completed again.|Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth code grant on non-gallery is in our backlog.|
+|OAuth authorization code grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens do not have. A real user must be present during initial authorization, adding a level of accountability. |Requires a user to be present. If the user leaves the organization, the token is invalid and authorization will need to be completed again.|Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth code grant on non-gallery is in our backlog, in addition to support for configurable auth / token URLs on the gallery app.|
|OAuth client credentials grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens do not have. Both the authorization code grant and the client credentials grant create the same type of access token, so moving between these methods is transparent to the API. Provisioning can be completely automated, and new tokens can be silently requested without user interaction. ||Not supported for gallery and non-gallery apps. Support is in our backlog.|

> [!NOTE]
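For the client credentials grant described in the table, the token request is a simple form POST against the tenant's token endpoint. The following sketch only builds the request (no network call is made); the tenant, client ID, secret, and scope values are placeholders:

```python
from urllib.parse import urlencode

# Builds (but does not send) an OAuth 2.0 client credentials token request
# for the Azure AD v2.0 token endpoint. All parameter values are
# placeholders for illustration.
def build_client_credentials_request(tenant, client_id, client_secret, scope):
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })
    return url, body
```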
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/workday-integration-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/workday-integration-reference.md
@@ -38,7 +38,7 @@ To further secure the connectivity between Azure AD provisioning service and Wor
1. Copy all IP address ranges listed within the element *addressPrefixes* and use the range to build your IP address list.
1. Log in to the Workday admin portal.
1. Access the **Maintain IP Ranges** task to create a new IP range for Azure data centers. Specify the IP ranges (using CIDR notation) as a comma-separated list.
-1. Access the **Manage Authentication Policies** task to create a new authentication policy. In the authentication policy, use **Authentication Whitelist** to specify the Azure AD IP range and the security group that will be allowed access from this IP range. Save the changes.
+1. Access the **Manage Authentication Policies** task to create a new authentication policy. In the authentication policy, use the authentication allow list to specify the Azure AD IP range and the security group that will be allowed access from this IP range. Save the changes.
1. Access the **Activate All Pending Authentication Policy Changes** task to confirm changes.

### Limiting access to worker data in Workday using constrained security groups
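The **Maintain IP Ranges** step above expects the Azure prefixes as a comma-separated CIDR list. A minimal sketch using the Python standard library's `ipaddress` module to validate and format them (the helper name is invented for this example):

```python
import ipaddress

# Illustrative helper: validate the addressPrefixes pulled from the Azure
# IP ranges file and format them as the comma-separated CIDR list that
# Workday's "Maintain IP Ranges" task expects.
def format_ip_ranges(address_prefixes):
    # ip_network raises ValueError for malformed prefixes, so bad input
    # fails fast instead of reaching Workday.
    validated = [str(ipaddress.ip_network(p)) for p in address_prefixes]
    return ",".join(validated)
```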
@@ -344,7 +344,7 @@ If any of the above queries returns a future-dated hire, then the following *Get
</Get_Workers_Request>
```
-### Retrieving worker data attributes
+## Retrieving worker data attributes
The *Get_Workers* API can return different data sets associated with a worker. Depending on the [XPATH API expressions](workday-attribute-reference.md) configured in the provisioning schema, the Azure AD provisioning service determines which data sets to retrieve from Workday. Accordingly, the *Response_Group* flags are set in the *Get_Workers* request.
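The flag-selection logic described above can be pictured as a prefix match over the configured XPATH expressions. This is a hypothetical sketch; the prefix-to-flag map below is illustrative and does not reproduce the service's actual table:

```python
# Hypothetical sketch of scanning configured XPATH expressions to decide
# which Get_Workers Response_Group flags to request. The mapping is
# illustrative, not the provisioning service's real table.
FLAG_BY_PREFIX = {
    "wd:Worker_Data/wd:User_Account_Data": "Include_User_Account",
    "wd:Worker_Data/wd:Worker_Document_Data": "Include_Worker_Documents",
}

def response_group_flags(configured_xpaths):
    flags = set()
    for xpath in configured_xpaths:
        for prefix, flag in FLAG_BY_PREFIX.items():
            if xpath.startswith(prefix):
                flags.add(flag)
    return flags
```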
@@ -399,6 +399,9 @@ The table below provides guidance on mapping configuration to use to retrieve a
| 45 | User Account Data | No | wd:Worker\_Data/wd:User\_Account\_Data |
| 46 | Worker Document Data | No | wd:Worker\_Data/wd:Worker\_Document\_Data |
+>[!NOTE]
+>Each Workday entity listed in the table is protected by a **Domain Security Policy** in Workday. If you are unable to retrieve any attribute associated with the entity after setting the right XPATH, check with your Workday admin to ensure that the appropriate domain security policy is configured for the integration system user associated with the provisioning app. For example, to retrieve *Skill data*, *Get* access is required on the Workday domain *Worker Data: Skills and Experience*.
+
Here are some examples of how you can extend the Workday integration to meet specific requirements.

**Example 1**
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-authentication-oath-tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-oath-tokens.md
@@ -36,25 +36,25 @@ OATH TOTP hardware tokens typically come with a secret key, or seed, pre-program
Programmable OATH TOTP hardware tokens that can be reseeded can also be set up with Azure AD in the software token setup flow.
-OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
![Uploading OATH tokens to the MFA OATH tokens blade](media/concept-authentication-methods/mfa-server-oath-tokens-azure-ad.png)
-Once tokens are acquired they must be uploaded in a comma-separated values (CSV) file format including the UPN, serial number, secret key, time interval, manufacturer, and model as shown in the following example:
+Once tokens are acquired they must be uploaded in a comma-separated values (CSV) file format including the UPN, serial number, secret key, time interval, manufacturer, and model, as shown in the following example:
```csv
upn,serial number,secret key,time interval,manufacturer,model
Helga@contoso.com,1234567,1234567abcdef1234567abcdef,60,Contoso,HardwareKey
-```
+```
> [!NOTE]
> Make sure you include the header row in your CSV file.

Once properly formatted as a CSV file, an administrator can then sign in to the Azure portal, navigate to **Azure Active Directory > Security > MFA > OATH tokens**, and upload the resulting CSV file.
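A quick local pre-flight check against the format shown above catches a missing header or ragged rows before upload. This is an illustrative script, not a Microsoft tool:

```python
import csv
import io

# Checks that an OATH token CSV carries the header row shown above and
# that every data row has the same column count. Illustrative only.
REQUIRED_HEADER = ["upn", "serial number", "secret key",
                   "time interval", "manufacturer", "model"]

def validate_oath_csv(text):
    rows = list(csv.reader(io.StringIO(text)))
    if not rows or rows[0] != REQUIRED_HEADER:
        return False, "missing or malformed header row"
    bad = [i for i, row in enumerate(rows[1:], start=2)
           if len(row) != len(REQUIRED_HEADER)]
    return (not bad), (f"rows with wrong column count: {bad}" if bad else "ok")
```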
-Depending on the size of the CSV file, it may take a few minutes to process. Select the **Refresh** button to get the current status. If there are any errors in the file, you can download a CSV file that lists any errors for you to resolve. The field names in the downloaded CSV file are different than the uploaded version.
+Depending on the size of the CSV file, it may take a few minutes to process. Select the **Refresh** button to get the current status. If there are any errors in the file, you can download a CSV file that lists any errors for you to resolve. The field names in the downloaded CSV file are different than the uploaded version.
-Once any errors have been addressed, the administrator then can activate each key by selecting **Activate** for the token and entering the OTP displayed on the token.
+Once any errors have been addressed, the administrator can then activate each key by selecting **Activate** for the token and entering the OTP displayed on the token. You can activate a maximum of 200 OATH tokens every 5 minutes.
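During activation, the OTP shown on the token is a time-based one-time password. A minimal RFC 6238 sketch, assuming the hex secret key and 60-second interval from the CSV example above (real hardware tokens compute this on-device; this is only for illustration):

```python
import hashlib
import hmac
import struct
import time

# Minimal RFC 6238 TOTP sketch using the hex secret and 60-second time
# interval from the CSV example. Illustration only.
def totp(hex_secret, interval=60, digits=6, at_time=None):
    key = bytes.fromhex(hex_secret)
    counter = int((time.time() if at_time is None else at_time) // interval)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```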
Users may have a combination of up to five OATH hardware tokens or authenticator applications, such as the Microsoft Authenticator app, configured for use at any time.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/azuread-dev/howto-get-appsource-certified https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/azuread-dev/howto-get-appsource-certified.md
@@ -109,9 +109,9 @@ For more information about the AppSource trial experience, see [this video](http
## Get support
-For Azure AD integration, we use [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-active-directory+appsource) with the community to provide support.
+For Azure AD integration, we use [Microsoft Q&A](https://docs.microsoft.com/answers/products/) with the community to provide support.
-We highly recommend you ask your questions on Stack Overflow first and browse existing issues to see if someone has asked your question before. Make sure that your questions or comments are tagged with [`[azure-active-directory]` and `[appsource]`](https://stackoverflow.com/questions/tagged/azure-active-directory+appsource).
+We highly recommend you ask your questions on Microsoft Q&A first and browse existing issues to see if someone has asked your question before. Make sure that your questions or comments are tagged with [`[azure-active-directory]`](https://docs.microsoft.com/answers/topics/azure-active-directory.html).
Use the following comments section to provide feedback and help us refine and shape our content.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/configure-token-lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/configure-token-lifetimes.md
@@ -33,7 +33,7 @@ To get started, do the following steps:
1. To see all policies that have been created in your organization, run the [Get-AzureADPolicy](/powershell/module/azuread/get-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet. Any results with defined property values that differ from the defaults listed above are in scope of the retirement.

   ```powershell
- Get-AzureADPolicy -All
+ Get-AzureADPolicy -All $true
   ```
1. To see which apps and service principals are linked to a specific policy you identified, run the following [Get-AzureADPolicyAppliedObject](/powershell/module/azuread/get-azureadpolicyappliedobject?view=azureadps-2.0-preview&preserve-view=true) cmdlet by replacing **1a37dad8-5da7-4cc8-87c7-efbc0326cf20** with any of your policy IDs. Then you can decide whether to configure Conditional Access sign-in frequency or remain with the Azure AD defaults.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/consent-framework-links https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/consent-framework-links.md
@@ -27,4 +27,4 @@ This article is to help you learn more about how the Azure AD consent framework
- For more depth, learn [how consent is supported at the OAuth 2.0 protocol layer during the authorization code grant flow.](../azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code)

## Next steps
-[AzureAD StackOverflow](https://stackoverflow.com/questions/tagged/azure-active-directory)
\ No newline at end of file
+[AzureAD Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/delegated-and-app-perms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/delegated-and-app-perms.md
@@ -24,4 +24,4 @@
- For more depth, learn how resource applications expose [scopes](developer-glossary.md#scopes) and [application roles](developer-glossary.md#roles) to client applications, which manifest as delegated and application permissions respectively in the Azure portal.

## Next steps
-[AzureAD StackOverflow](https://stackoverflow.com/questions/tagged/azure-active-directory)
+[AzureAD Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/developer-support-help-options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/developer-support-help-options.md
@@ -22,20 +22,20 @@ If you're just starting to integrate with Azure Active Directory (Azure AD), Mic
> [!div class="checklist"]
> * How to search for an existing answer from the community, or for documentation that already covers the feature you're trying to implement
> * In some cases, you just want to use our support tools to help you debug a specific problem
-> * If you can't find the answer that you need, you may want to ask a question on *Stack Overflow*
+> * If you can't find the answer that you need, you may want to ask a question on *Microsoft Q&A*
> * If you find an issue with one of our authentication libraries, raise a *GitHub* issue
> * Finally, if you need to talk to someone, you might want to open a support request

## Search
-If you have a development-related question, you may be able to find the answer in the documentation, [GitHub samples](https://github.com/azure-samples), or answers to [Stack Overflow](https://www.stackoverflow.com) questions.
+If you have a development-related question, you may be able to find the answer in the documentation, [GitHub samples](https://github.com/azure-samples), or answers to [Microsoft Q&A](https://docs.microsoft.com/answers/products/) questions.
### Scoped search
-For faster results, scope your search to Stack Overflow, the documentation, and the code samples by using the following query in your favorite search engine:
+For faster results, scope your search to Microsoft Q&A, the documentation, and the code samples by using the following query in your favorite search engine:
```
-{Your Search Terms} (site:stackoverflow.com OR site:docs.microsoft.com OR site:github.com/azure-samples OR site:cloudidentity.com OR site:developer.microsoft.com/graph)
+{Your Search Terms} (site:docs.microsoft.com/answers OR site:docs.microsoft.com OR site:github.com/azure-samples OR site:cloudidentity.com OR site:developer.microsoft.com/graph)
```

Where *{Your Search Terms}* correspond to your search keywords.
@@ -47,26 +47,26 @@ Where *{Your Search Terms}* correspond to your search keywords.
| [jwt.ms](https://jwt.ms) | Paste an ID or access token to decode the claims names and values. |
| [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer)| Tool that lets you make requests and see responses against the Microsoft Graph API. |
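What jwt.ms does for claims can be reproduced locally: the claims segment of a JWT is just base64url-encoded JSON. A minimal sketch (no signature validation is performed; the helper name is invented for this example):

```python
import base64
import json

# Decodes the claims (payload) segment of a JWT, the way jwt.ms displays
# them. No signature validation - illustration only.
def decode_claims(token):
    payload = token.split(".")[1]
    # Restore the base64url padding stripped by the JWT format.
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```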
-## Post a question to Stack Overflow
+## Post a question to Microsoft Q&A
-Stack Overflow is the preferred channel for development-related questions. Here, members of the developer community and Microsoft team members are directly involved in helping you to solve your problems.
+Microsoft Q&A is the preferred channel for development-related questions. Here, members of the developer community and Microsoft team members are directly involved in helping you to solve your problems.
-If you can't find an answer to your question through search, submit a new question to Stack Overflow. Use one of the following tags when asking questions to help the community identify and answer your question more quickly:
+If you can't find an answer to your question through search, submit a new question to Microsoft Q&A. Use one of the following tags when asking questions to help the community identify and answer your question more quickly:
|Component/area | Tags |
|---|---|
-| ADAL library | [[adal]](https://stackoverflow.com/questions/tagged/adal) |
-| MSAL library | [[msal]](https://stackoverflow.com/questions/tagged/msal) |
-| OWIN middleware | [[azure-active-directory]](https://stackoverflow.com/questions/tagged/azure-active-directory) |
-| [Azure B2B](../external-identities/what-is-b2b.md) | [[azure-ad-b2b]](https://stackoverflow.com/questions/tagged/azure-ad-b2b) |
-| [Azure B2C](https://azure.microsoft.com/services/active-directory-b2c/) | [[azure-ad-b2c]](https://stackoverflow.com/questions/tagged/azure-ad-b2c) |
-| [Microsoft Graph API](https://developer.microsoft.com/graph/) | [[microsoft-graph]](https://stackoverflow.com/questions/tagged/microsoft-graph) |
-| Any other area related to authentication or authorization topics | [[azure-active-directory]](https://stackoverflow.com/questions/tagged/azure-active-directory) |
+| ADAL library | [[adal]](https://docs.microsoft.com/answers/topics/azure-ad-adal-deprecation.html) |
+| MSAL library | [[msal]](https://docs.microsoft.com/answers/topics/azure-ad-msal.html) |
+| OWIN middleware | [[azure-active-directory]](https://docs.microsoft.com/answers/topics/azure-active-directory.html) |
+| [Azure B2B](../external-identities/what-is-b2b.md) | [[azure-ad-b2b]](https://docs.microsoft.com/answers/topics/azure-ad-b2b.html) |
+| [Azure B2C](https://azure.microsoft.com/services/active-directory-b2c/) | [[azure-ad-b2c]](https://docs.microsoft.com/answers/topics/azure-ad-b2c.html) |
+| [Microsoft Graph API](https://developer.microsoft.com/graph/) | [[azure-ad-graph]](https://docs.microsoft.com/answers/topics/azure-ad-graph.html) |
+| Any other area related to authentication or authorization topics | [[azure-active-directory]](https://docs.microsoft.com/answers/topics/azure-active-directory.html) |
-The following posts from Stack Overflow contain tips on how to ask questions and how to add source code. Follow these guidelines to increase the chances for community members to assess and respond to your question quickly:
+The following posts from Microsoft Q&A contain tips on how to ask questions and how to add source code. Follow these guidelines to increase the chances for community members to assess and respond to your question quickly:
-* [How do I ask a good question](https://stackoverflow.com/help/how-to-ask)
-* [How to create a minimal, complete, and verifiable example](https://stackoverflow.com/help/mcve)
+* [How do I ask a good question](https://docs.microsoft.com/answers/articles/24951/how-to-write-a-quality-question.html)
+* [How to create a minimal, complete, and verifiable example](https://docs.microsoft.com/answers/articles/24907/how-to-write-a-quality-answer.html)
## Create a GitHub issue
@@ -83,6 +83,6 @@ If you need to talk to someone, you can open a support request. If you are an Az
* If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest)
-* If you are not an Azure customer, you can also open a support request with Microsoft via [our commercial support](https://support.microsoft.com/en-us/gp/contactus81?Audience=Commercial).
+* If you are not an Azure customer, you can also open a support request with Microsoft via [our commercial support](https://support.serviceshub.microsoft.com/supportforbusiness).
You can also try a [virtual agent](https://support.microsoft.com/contactus/?ws=support) to obtain support or ask questions.
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-logging-android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-logging-android.md
@@ -63,4 +63,4 @@ Logger.getInstance().setEnableLogcatLog(true);
## Next steps
-For more code samples, refer to [Microsoft identity platform code samples)](sample-v2-code.md).
\ No newline at end of file
+For more code samples, refer to [Microsoft identity platform code samples](sample-v2-code.md).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-logging-dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-logging-dotnet.md
@@ -60,4 +60,4 @@ class Program
## Next steps
-For more code samples, refer to [Microsoft identity platform code samples)](sample-v2-code.md).
\ No newline at end of file
+For more code samples, refer to [Microsoft identity platform code samples](sample-v2-code.md).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-logging-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-logging-java.md
@@ -69,4 +69,4 @@ PublicClientApplication app2 = PublicClientApplication.builder(PUBLIC_CLIENT_ID)
## Next steps
-For more code samples, refer to [Microsoft identity platform code samples)](sample-v2-code.md).
\ No newline at end of file
+For more code samples, refer to [Microsoft identity platform code samples](sample-v2-code.md).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-logging-js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-logging-js.md
@@ -53,4 +53,4 @@ var UserAgentApplication = new Msal.UserAgentApplication(msalConfig);
## Next steps
-For more code samples, refer to [Microsoft identity platform code samples)](sample-v2-code.md).
\ No newline at end of file
+For more code samples, refer to [Microsoft identity platform code samples](sample-v2-code.md).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-logging-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-logging-python.md
@@ -49,6 +49,6 @@ For more information about logging in Python, please refer to Python's [Logging
## Next steps
-For more code samples, refer to [Microsoft identity platform code samples)](sample-v2-code.md).
+For more code samples, refer to [Microsoft identity platform code samples](sample-v2-code.md).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-migration.md
@@ -69,7 +69,7 @@ __Q: How does MSAL work with AD FS?__
A: MSAL.NET supports certain scenarios to authenticate against AD FS 2019. If your app needs to acquire tokens directly from earlier version of AD FS, you should remain on ADAL. [Learn more](msal-net-adfs-support.md). __Q: How do I get help migrating my application?__
-A: See the [Migration guidance](#migration-guidance) section of this article. If, after reading the guide for your app's platform, you have additional questions, you can post on Stack Overflow with the tag `[adal-deprecation]` or open an issue in library's GitHub repository. See the [Languages and frameworks](msal-overview.md#languages-and-frameworks) section of the MSAL overview article for links to each library's repo.
+A: See the [Migration guidance](#migration-guidance) section of this article. If, after reading the guide for your app's platform, you have additional questions, you can post on Microsoft Q&A with the tag `[azure-ad-adal-deprecation]` or open an issue in the library's GitHub repository. See the [Languages and frameworks](msal-overview.md#languages-and-frameworks) section of the MSAL overview article for links to each library's repo.
## Next steps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/perms-for-given-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/perms-for-given-api.md
@@ -25,4 +25,4 @@
## Next steps
-[AzureAD StackOverflow](https://stackoverflow.com/questions/tagged/azure-active-directory)
\ No newline at end of file
+[AzureAD Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/registration-config-sso-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/registration-config-sso-how-to.md
@@ -39,4 +39,4 @@ For iOS, see [Enabling Cross App SSO in iOS](../azuread-dev/howto-v1-enable-sso-
[Permissions and consent in the Microsoft identity platform](./v2-permissions-and-consent.md)<br>
-[AzureAD StackOverflow](https://stackoverflow.com/questions/tagged/azure-active-directory)
\ No newline at end of file
+[AzureAD Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-spa-production https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-spa-production.md
@@ -26,14 +26,17 @@ Now that you know how to acquire a token to call web APIs, here are some things
Check out a [deployment sample](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-aspnet-webapi-multitenant/tree/master/Chapter3) for learning how to deploy your SPA and Web API projects with Azure Storage and Azure App Services, respectively.
-## Next steps
+## Code samples
+
+These code samples demonstrate several key operations for a single-page app.
+- [SPA with an ASP.NET back-end](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-aspnetcore-webapi): How to get tokens for your own back-end web API (ASP.NET Core) by using **MSAL.js**.
-- Deep dive of the quickstart sample, which explains the code for how to sign in users and get an access token to call the **Microsoft Graph API** by using **MSAL.js**: [JavaScript SPA tutorial](./tutorial-v2-javascript-spa.md).
+- [Node.js Web API (Azure AD)](https://github.com/Azure-Samples/active-directory-javascript-nodejs-webapi-v2): How to validate access tokens for your back-end web API (Node.js) by using **passport-azure-ad**.
-- Sample that demonstrates how to get tokens for your own back-end web API (ASP.NET Core) by using **MSAL.js**: [SPA with an ASP.NET back-end](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-aspnetcore-webapi).
+- [SPA with Azure AD B2C](https://github.com/Azure-Samples/active-directory-b2c-javascript-msal-singlepageapp): How to use **MSAL.js** to sign in users in an app that's registered with **Azure Active Directory B2C** (Azure AD B2C).
-- Sample that demonstrates how to validate access tokens for your back-end web API (Node.js) by using **passport-azure-ad**: [Node.js Web API (Azure AD](https://github.com/Azure-Samples/active-directory-javascript-nodejs-webapi-v2).
+- [Node.js Web API (Azure AD B2C)](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi): How to use **passport-azure-ad** to validate access tokens for apps registered with **Azure Active Directory B2C** (Azure AD B2C).
-- Sample that shows how to use **MSAL.js** to sign in users in an app that's registered with **Azure Active Directory B2C** (Azure AD B2C): [SPA with Azure AD B2C](https://github.com/Azure-Samples/active-directory-b2c-javascript-msal-singlepageapp).
+## Next steps
-- Sample that shows how to use **passport-azure-ad** to validate access tokens for apps registered with **Azure Active Directory B2C** (Azure AD B2C): [Node.js Web API (Azure AD B2C)](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi).
+- [JavaScript SPA tutorial](./tutorial-v2-javascript-spa.md): Deep dive into how to sign in users and get an access token to call the **Microsoft Graph API** by using **MSAL.js**.
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/setup-multi-tenant-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/setup-multi-tenant-app.md
@@ -25,4 +25,4 @@ Here is a list of recommended topics to learn more about multi-tenant applicatio
- For more depth, learn [how a multi-tenant application is configured and coded end-to-end](./howto-convert-app-to-be-multi-tenant.md), including how to register, use the "common" endpoint, implement "user" and "admin" consent, and how to implement more advanced multi-tier scenarios.

## Next steps
-[AzureAD StackOverflow](https://stackoverflow.com/questions/tagged/azure-active-directory)
\ No newline at end of file
+[AzureAD Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/troubleshoot-publisher-verification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/troubleshoot-publisher-verification.md
@@ -9,7 +9,7 @@
Previously updated : 05/08/2020 Last updated : 01/28/2021
@@ -145,88 +145,96 @@ HTTP/1.1 200 OK
The following is a list of the potential error codes you may receive, either when troubleshooting with Microsoft Graph or going through the process in the app registration portal.
-### MPNAccountNotFoundOrNoAccess
+### MPNAccountNotFoundOrNoAccess
-The MPN ID you provided (<MPNID>) does not exist, or you do not have access to it. Provide a valid MPN ID and try again.
+The MPN ID you provided (`MPNID`) does not exist, or you do not have access to it. Provide a valid MPN ID and try again.
Most commonly caused by the signed-in user not being a member of the proper role for the MPN account in Partner Center; see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. Can also be caused by the tenant the app is registered in not being added to the MPN account, or an invalid MPN ID.
-### MPNGlobalAccountNotFound
+### MPNGlobalAccountNotFound
-The MPN ID you provided (<MPNID>) is not valid. Provide a valid MPN ID and try again.
+The MPN ID you provided (`MPNID`) is not valid. Provide a valid MPN ID and try again.
Most commonly caused when an MPN ID is provided that corresponds to a Partner Location Account (PLA). Only Partner Global Accounts are supported. See [Partner Center account structure](/partner-center/account-structure) for more details.
-### MPNAccountInvalid
+### MPNAccountInvalid
-The MPN ID you provided (<MPNID>) is not valid. Provide a valid MPN ID and try again.
+The MPN ID you provided (`MPNID`) is not valid. Provide a valid MPN ID and try again.
Most commonly caused by the wrong MPN ID being provided.
-### MPNAccountNotVetted
+### MPNAccountNotVetted
-The MPN ID (<MPNID>) you provided has not completed the vetting process. Complete this process in Partner Center and try again.
+The MPN ID (`MPNID`) you provided has not completed the vetting process. Complete this process in Partner Center and try again.
Most commonly caused when the MPN account has not completed the [verification](/partner-center/verification-responses) process.
-### NoPublisherIdOnAssociatedMPNAccount
+### NoPublisherIdOnAssociatedMPNAccount
-The MPN ID you provided (<MPNID>) is not valid. Provide a valid MPN ID and try again.
+The MPN ID you provided (`MPNID`) is not valid. Provide a valid MPN ID and try again.
Most commonly caused by the wrong MPN ID being provided.
-### MPNIdDoesNotMatchAssociatedMPNAccount
+### MPNIdDoesNotMatchAssociatedMPNAccount
-The MPN ID you provided (<MPNID>) is not valid. Provide a valid MPN ID and try again.
+The MPN ID you provided (`MPNID`) is not valid. Provide a valid MPN ID and try again.
Most commonly caused by the wrong MPN ID being provided.
-### ApplicationNotFound
+### ApplicationNotFound
-The target application (<AppId>) cannot be found. Provide a valid application ID and try again.
+The target application (`AppId`) cannot be found. Provide a valid application ID and try again.
Most commonly caused when verification is being performed via the Graph API and the id of the application provided is incorrect. Note: the object id of the application must be provided, not the AppId/ClientId.
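The object id can be looked up from the AppId with a standard Microsoft Graph filter query. A minimal sketch (the GUID below is a placeholder):

```python
# Hedged sketch: resolving the application *object id* when only the
# AppId/ClientId is known, via a Microsoft Graph applications filter query.
app_id = "00000000-0000-0000-0000-000000000000"  # appId (client id), placeholder

# The "id" field of the application object returned by this query is the
# object id that the publisher verification endpoints expect.
lookup_url = (
    "https://graph.microsoft.com/v1.0/applications"
    f"?$filter=appId eq '{app_id}'"
)
print(lookup_url)
```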
-### B2CTenantNotAllowed
+### B2CTenantNotAllowed
-This capability is not supported in an Azure AD B2C tenant.
+This capability is not supported in an Azure AD B2C tenant.
-### EmailVerifiedTenantNotAllowed
+### EmailVerifiedTenantNotAllowed
-This capability is not supported in an email verified tenant.
+This capability is not supported in an email verified tenant.
-### NoPublisherDomainOnApplication
+### NoPublisherDomainOnApplication
-The target application (\<AppId\>) must have a Publisher Domain set. Set a Publisher Domain and try again.
+The target application (`AppId`) must have a Publisher Domain set. Set a Publisher Domain and try again.
Occurs when a [Publisher Domain](howto-configure-publisher-domain.md) is not configured on the app.
-### PublisherDomainMismatch
+### PublisherDomainMismatch
-The target application's Publisher Domain (<publisherDomain>) does not match the domain used to perform email verification in Partner Center (<pcDomain>). Ensure these domains match and try again.
+The target application's Publisher Domain (`publisherDomain`) does not match the domain used to perform email verification in Partner Center (`pcDomain`). Ensure these domains match and try again.
Occurs when neither the app's [Publisher Domain](howto-configure-publisher-domain.md) nor one of the [custom domains](../fundamentals/add-custom-domain.md) added to the Azure AD tenant match the domain used to perform email verification in Partner Center.
-### NotAuthorizedToVerifyPublisher
+### NotAuthorizedToVerifyPublisher
-You are not authorized to set the verified publisher property on application (<AppId>)
+You are not authorized to set the verified publisher property on application (`AppId`)
Most commonly caused by the signed-in user not being a member of the proper role for the MPN account in Azure AD; see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information.
-### MPNIdWasNotProvided
+### MPNIdWasNotProvided
-The MPN ID was not provided in the request body or the request content type was not "application/json".
+The MPN ID was not provided in the request body or the request content type was not "application/json".
-### MSANotSupported
+### MSANotSupported
This feature is not supported for Microsoft consumer accounts. Only applications registered in Azure AD by an Azure AD user are supported.

### InteractionRequired
-Occurs when multi-factor authentication has not been performed before attempting to add a verified publisher to the app. See [common issues](#common-issues) for more information. Note: MFA must be performed in the same session when attempting to add a verified publisher. If MFA is enabled but not required to be performed in the session, the request will fail.
+Occurs when multi-factor authentication has not been performed before attempting to add a verified publisher to the app. See [common issues](#common-issues) for more information. Note: MFA must be performed in the same session when attempting to add a verified publisher. If MFA is enabled but not required to be performed in the session, the request will fail.
The error message displayed will be: "Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to proceed."
+### UnableToAddPublisher
+
+The error message displayed is: "A verified publisher cannot be added to this application. Please contact your administrator for assistance."
+
+First, verify you've met the [publisher verification requirements](publisher-verification-overview.md#requirements).
+
+When a request to add a verified publisher is made, a number of signals are used to make a security risk assessment. If the request is determined to be risky, an error will be returned. For security reasons, Microsoft does not disclose the specific criteria used to determine whether a request is risky.
+## Next steps
+
+If you have reviewed all of the previous information and are still receiving an error from Microsoft Graph, gather as much of the following information as possible related to the failing request and [contact Microsoft support](developer-support-help-options.md#open-a-support-request).
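For reference when troubleshooting via Microsoft Graph, the request whose failures surface the error codes above can be sketched as follows. The ids are placeholders; the action name and body shape follow the documented `setVerifiedPublisher` Graph action:

```python
import json

# Hedged sketch of the Microsoft Graph publisher verification request.
object_id = "11111111-1111-1111-1111-111111111111"  # application *object* id, not AppId (placeholder)
mpn_id = "1234567"                                  # Partner Global Account MPN ID (placeholder)

url = f"https://graph.microsoft.com/v1.0/applications/{object_id}/setVerifiedPublisher"
body = json.dumps({"verifiedPublisherId": mpn_id})  # request content type must be application/json

print(url)
print(body)
```

Omitting the body or sending a different content type produces `MPNIdWasNotProvided`; a wrong `object_id` produces `ApplicationNotFound`.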
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-aspnet-daemon-web-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-aspnet-daemon-web-app.md
@@ -234,9 +234,9 @@ When no longer needed, delete the app object that you created in the [Register y
## Get help
-Use [Stack Overflow](http://stackoverflow.com/questions/tagged/msal) to get support from the community.
-Ask your questions on Stack Overflow first, and browse existing issues to see if someone has asked your question before.
-Make sure that your questions or comments are tagged with "adal," "msal," and "dotnet."
+Use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-ad-msal.html) to get support from the community.
+Ask your questions on Microsoft Q&A first, and browse existing issues to see if someone has asked your question before.
+Make sure that your questions or comments are tagged with "azure-ad-adal-deprecation," "azure-ad-msal," and "dotnet-standard."
If you find a bug in the sample, please raise the issue on [GitHub Issues](https://github.com/Azure-Samples/ms-identity-aspnet-daemon-webapp/issues).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-howto-get-appsource-certified https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-howto-get-appsource-certified.md
@@ -100,9 +100,9 @@ For more information about the AppSource trial experience, see [this video](http
## Get support
-For Azure AD integration, we use [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-active-directory+appsource) with the community to provide support.
+For Azure AD integration, we use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html) with the community to provide support.
-We highly recommend you ask your questions on Stack Overflow first and browse existing issues to see if someone has asked your question before. Make sure that your questions or comments are tagged with [`[azure-active-directory]` and `[appsource]`](https://stackoverflow.com/questions/tagged/azure-active-directory+appsource).
+We highly recommend you ask your questions on Microsoft Q&A first and browse existing issues to see if someone has asked your question before. Make sure that your questions or comments are tagged with [`[azure-active-directory]`](https://docs.microsoft.com/answers/topics/azure-active-directory.html).
Use the following comments section to provide feedback and help us refine and shape our content.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-deployment-plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-deployment-plans.md
@@ -35,7 +35,7 @@ From any of the plan pages, use your browser's Print to PDF capability to create
| Capability | Description |
| - | - |
| [Single sign-on](../manage-apps/plan-sso-deployment.md)| Single sign-on helps your users access the apps and resources they need to do business while signing in only once. After they've signed in, they can go from Microsoft Office to SalesForce to Box to internal applications without being required to enter credentials a second time. |
-| [Access panel](../manage-apps/access-panel-deployment-plan.md)| Offer your users a simple hub to discover and access all their applications. Enable them to be more productive with self-service capabilities, like requesting access to apps and groups, or managing access to resources on behalf of others. |
+| [My Apps](../manage-apps/my-apps-deployment-plan.md)| Offer your users a simple hub to discover and access all their applications. Enable them to be more productive with self-service capabilities, like requesting access to apps and groups, or managing access to resources on behalf of others. |
| [Devices](../devices/plan-device-deployment.md) | This article helps you evaluate the methods to integrate your device with Azure AD, choose the implementation plan, and provides key links to supported device management tools. |
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-get-started-premium https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-get-started-premium.md
@@ -19,9 +19,6 @@
# Sign up for Azure Active Directory Premium editions

You can purchase and associate Azure Active Directory (Azure AD) Premium editions with your Azure subscription. If you need to create a new Azure subscription, you'll also need to activate your licensing plan and Azure AD service access.
-> [!NOTE]
->Azure AD Premium and Basic editions are available for customers in China using the worldwide instance of Azure Active Directory. Azure AD Premium and Basic editions aren't currently supported in the Azure service operated by 21Vianet in China. For more information, talk to us using the [Azure Active Directory Forum](https://feedback.azure.com/forums/169401-azure-active-directory/).
- Before you sign up for Active Directory Premium 1 or Premium 2, you must first determine which of your existing subscriptions or plans to use:

- Through your existing Azure or Microsoft 365 subscription
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/access-panel-deployment-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/access-panel-deployment-plan.md deleted file mode 100644
@@ -1,310 +0,0 @@
- Title: Plan an Azure Active Directory My Apps deployment
-description: Guidance on deploying Azure Active Directory My Apps
------- Previously updated : 12/31/2020---
-# Plan an Azure Active Directory My Apps deployment
-
-Azure Active Directory (Azure AD) My Apps is a web-based portal that helps lower support costs, increase productivity and security, and reduce user frustration. The system includes detailed reporting that tracks when you access the system and notifies administrators of misuse or abuse. To learn about using My Apps from an end-user perspective, see [My Apps portal help](../user-help/my-apps-portal-end-user-access.md).
-
-By using Azure AD My Apps, you can:
-
-* Discover and access all of their company's Azure AD-connected resources, such as applications
-* Request access to new apps and groups
-* Manage access to these resources for others
-* Manage self-service password resets and Azure AD Multi-Factor Authentication settings
-* Manage their devices
-
-It also allows administrators to manage:
-
-* Terms of service
-* Organizations
-* Access reviews
--
-## Benefits of Azure AD My Apps integration
-
-Azure AD My Apps benefits businesses in the following ways:
-
-**Provides intuitive user experience**: My Apps provides you with a single platform for all of your Azure single sign-on (SSO)-connected applications. You have a unified portal to find existing settings and new capabilities, like group management and self-service password reset, as they're added. The intuitive experience allows users to return to work faster and be more productive, while reducing their frustration.
-
-**Increases productivity**: All user applications in My Apps have SSO enabled. Enabling SSO across enterprise applications and Microsoft 365 creates a superior sign-in experience by reducing or eliminating additional sign-in prompts. My Apps uses self-service and dynamic membership and improves the overall security of your identity system. My Apps ensures that the right people manage access to the applications. My Apps serves as a coherent landing page for you to quickly find resources and continue work tasks.
-
-**Manages cost**: Enabling My Apps with Azure AD can help with the divestment of on-premises infrastructures. It reduces support costs by providing you with a consistent portal to find all of your apps, request access to resources, and manage accounts.
-
-**Increases flexibility and security**: My Apps gives you access to the security and flexibility that a cloud platform provides. Administrators can easily change settings to applications and resources and can accommodate new security requirements without affecting users.
-
-**Enables robust auditing and usage tracking**: Auditing and usage tracking for all user capabilities let you know when users are using their resources and ensures that you can assess security.
-
-### Licensing considerations
-
-My Apps is free and requires no licenses to use at a basic level. However, the number of objects in your directory and the additional features you want to deploy can require additional licenses. Some common Azure AD scenarios that have licensing requirements include the following security features:
-
-* [Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md)
-* [Group-based membership](../fundamentals/active-directory-manage-groups.md)
-* [Self-service password reset](../authentication/tutorial-enable-sspr.md)
-* [Azure Active Directory Identity Protection](../identity-protection/overview-identity-protection.md)
-
-See the [full licensing guide for Azure AD](https://azure.microsoft.com/pricing/details/active-directory/).
-
-### Prerequisites for deploying Azure AD My Apps
-
-Complete the following prerequisites before you begin this project:
-
-* [Integrate application SSO](./plan-sso-deployment.md)
-* [Manage Azure AD user and group infrastructure](../fundamentals/active-directory-manage-groups.md)
-
-## Plan Azure AD My Apps deployment
-
-The following table outlines the key use cases for a My Apps deployment:
-
-| Area| Description |
-| - | - |
-| Access| My Apps portal is accessible from corporate and personal devices within the corporate network. |
-|Access | My Apps portal is accessible from corporate devices outside of the corporate network. |
-| Auditing| Usage data is downloaded into corporate systems at least every 29 days. |
-| Governance| Life cycle of the user assignments to Azure AD-connected applications and groups is defined and monitored. |
-| Security| Access to resources is controlled via user and group assignments. Only authorized users can manage resource access. |
-| Performance| Access assignment propagation timelines are documented and monitored. |
-| User Experience| Users are aware of My Apps capabilities and how to use them.|
-| User Experience| Users can manage their access to applications and groups.|
-| User Experience| Users can manage their accounts. |
-| User Experience| Users are aware of browser compatibility. |
-| Support| Users can find support for My Apps issues. |
--
-> [!TIP]
-> My Apps can be used with internal company URLs while remote using Application Proxy. To learn more, see [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](application-proxy-add-on-premises-application.md).
-
-### Best practices for deploying Azure AD My Apps
-
-The functionality of My Apps can be enabled gradually. We recommend the following order of deployment:
-
-1. My Apps
- * App launcher
- * Self-service app management
- * Microsoft 365 integration
-
-1. Self-service app discovery
- * Self-service password reset
- * Multi-Factor Authentication settings
- * Device management
- * Terms of use
- * Manage organizations
-
-1. My Groups
- * Self-service group management
-1. Access reviews
- * Access review management
-
-Starting with My Apps introduces users to the portal as a common place for accessing resources. The addition of self-service application discovery builds on the My Apps experience. My Groups and access reviews build on the self-service capabilities.
-
-### Plan configurations for Azure My Apps
-
-The following table lists several important My Apps configurations and the typical values you might use:
-
-| Configuration| Typical values |
-| - | - |
-| Determine the pilot groups| Identify the Azure AD security group to be used and ensure all pilot members are a part of the group. |
-| Determine the group or groups to be enabled for production.| Identify the Azure AD security groups, or the Active Directory groups synced to Azure AD, to be used. Ensure all pilot members are a part of the group. |
-| Allow users to use SSO to certain types of applications| Federated SSO, OAuth, Password SSO, App Proxy |
-| Allow users to use self-service password reset | Yes |
-| Allow users to use Multi-Factor Authentication| Yes |
-| Allow users to use self-service group management for certain types of groups| Security groups, Microsoft 365 groups |
-| Allow users to use self-service app management| Yes |
-| Allow users to use access reviews| Yes |
-
-### Plan consent strategy
-
-Users or administrators must consent to any application's terms of use and privacy policies. If possible, given your business rules, use administrator consent, which gives users a better experience.
-
-To use administrator consent, you must be a global administrator of the organization, and the applications must be either:
-
-* Registered in your organization
-* Registered in another Azure AD organization and previously consented by at least one user
-
-For more information, see [Configure the way end users consent to an application in Azure Active Directory](configure-user-consent.md).
-
-### Engage the right stakeholders
-
-When technology projects fail, they typically do so because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md) and that stakeholder roles in the project are well understood.
-
-### Plan communications
-
-Communication is critical to the success of any new service. Proactively inform your users how and when their experience will change and how to gain support if needed.
-
-Although My Apps doesn't typically create user issues, it's important to prepare. Create guides and a list of all resources for your support personnel before your launch.
-
-#### Communications templates
-
-Microsoft provides [customizable templates for emails and other communications](https://aka.ms/APTemplates) for My Apps. You can adapt these assets for use in other communications channels as appropriate for your corporate culture.
-
-## Plan your SSO configuration
-
-When a user signs in to an application, they go through an authentication process and are required to prove who they are. Without SSO, a password is stored in the application, and the user is required to know this password. With SSO, users' credentials are passed to the application, so they don't need to reenter passwords for each application.
-
-To launch applications in My Apps, SSO must be enabled. Azure AD supports multiple SSO options. To learn more, see [Single sign-on options in Azure AD](sso-options.md).
-
-> [!NOTE]
-> To learn more about using Azure AD as an identity provider for an app, see the [Quickstart Series on Application Management](view-applications-portal.md).
-
-For the best experience with the My Apps page, start with the integration of cloud applications that are available for federated SSO. Federated SSO allows users to have a consistent one-click experience across their app launching surfaces and tends to be more robust in configuration control.
-
-Use Federated SSO with Azure AD (OpenID Connect/SAML) when an application supports it, instead of password-based SSO and ADFS.
-
-For more information on how to deploy and configure your SaaS applications, see the [SaaS SSO deployment plan](./plan-sso-deployment.md).
-
-#### Plan to deploy the My Apps browser extension
-
-When users sign in to password-based SSO applications, they need to install and use the My Apps secure sign-in extension. The extension executes a script that transmits the password into the application's sign-in form. Users are prompted to install the extension when they first launch the password-based SSO application. More information about the extension can be found in this documentation on [installing My Apps browser extension]().
-
-If you must integrate password-based SSO applications, you should define a mechanism to deploy the extension at scale with [supported browsers](../user-help/my-apps-portal-end-user-access.md). Options include:
-
-* [Group Policy for Internet Explorer]()
-* [Configuration Manager for Internet Explorer](/configmgr/core/clients/deploy/deploy-clients-to-windows-computers)
-* [User-driven download and configuration for Chrome, Firefox, Microsoft Edge, or IE](../user-help/my-apps-portal-end-user-access.md)
-
-Users who don't use password-based SSO applications also benefit from the extension. These benefits include the ability to launch any app from its search bar, finding access to recently used applications, and having a link to the My Apps page.
-
-#### Plan for mobile access
-
-A browser protected with Intune policy (Microsoft Edge or Intune Managed Browser) is necessary for mobile users launching password-based SSO applications. A policy-protected browser enables the transfer of the password saved for the application. Microsoft Edge or the managed browser provides a set of web data protection features. You can also use Microsoft Edge for enterprise scenarios on iOS and Android devices. Microsoft Edge supports the same management scenarios as the Intune Managed Browser and improves the user experience. Learn more: [Manage web access using a Microsoft Intune policy-protected browser](/intune/app-configuration-managed-browser).
-
-## Plan your My Apps Deployment
-
-The foundation of My Apps is the application launcher portal, which users access at [https://myapps.microsoft.com](https://myapps.microsoft.com/). The My Apps page gives users a single place to start their work and get to their necessary applications. Here, users find a list of all the applications they have single sign-on access to.
-
-> [!NOTE]
-> The same applications will be shown in the Microsoft 365 app launcher.
-
-Plan the order in which you'll add applications to the My Apps launcher, and decide whether you'll roll them out gradually or all at once. To do so, create an application inventory listing the type of authentication and any existing SSO integrations for each application.
-
-#### Add applications to the My Apps panel
-
-Any Azure AD SSO-enabled application can be added to the My Apps launcher. Other applications are added by using the Linked SSO option. You can configure an application tile that links to the URL of your existing web application. Linked SSO allows you to start directing users to the My Apps portal without migrating all the applications to Azure AD SSO. You can gradually move to Azure AD SSO-configured applications without disrupting the users' experience.
-
-#### Use My Apps collections
-
-By default, all applications are listed together on a single page. But you can use collections to group together related applications and present them on a separate tab, making them easier to find. For example, you can use collections to create logical groupings of applications for specific job roles, tasks, projects, and so on. For information, see [How to use My Apps collections](access-panel-collections.md).
-
-#### Plan whether to use My Apps or an existing portal
-
-Your users might already have an application or portal other than My Apps. If so, decide whether to support both portals or only use one.
-
-If an existing portal is already being used as a starting point for users, you can integrate My Apps functionality by using user access URLs. User access URLs function as direct links to the applications available in the My Apps portal. These URLs can be embedded within any existing website. When a user selects the link, it opens the application from the My Apps portal.
-
-You can find the user access URL property in the **Properties** area of the application in the Azure portal.
-
-![A screenshot of the apps panel](media/access-panel-deployment-plan/ap-dp-user-access-url.png)
--
-## Plan self-service application discovery and access
-
-Once a core set of applications is deployed to a user's My Apps page, you should enable self-service app management features. Self-service app discovery enables users to:
-
-* Find new apps to add to their My Apps.
-* Add optional apps that they might not know they need during setup.
-
-Approval workflows are available for explicit approval to access applications. Users who are approvers will receive notifications within the My Apps portal when there are pending requests for access to the application.
-
-## Plan self-service group membership
-
-You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure AD. The owner of the group can approve or deny membership requests and delegate control of group membership. Self-service group management features aren't available for mail-enabled security groups or distribution lists.
-
-To plan for self-service group membership, determine if you'll allow all users in your organization to create and manage groups or only a subset of users. If you're allowing a subset of users, you'll need to set up a group to which those people are added. See [Set up self-service group management in Azure Active Directory](../enterprise-users/groups-self-service-management.md) for details on enabling these scenarios.
-
-## Plan reporting and auditing
-
-Azure AD provides [reports that offer technical and business insights](../reports-monitoring/overview-reports.md). Work with your business and technical application owners to assume ownership of these reports and to consume them on a regular basis. The following table provides some examples of typical reporting scenarios.
-
-| Example | Manage risk| Increase productivity| Governance and compliance |
-| - |- | - | - |
-| Report types| Application permissions and usage| Account provisioning activity| Review who is accessing the applications |
-| Potential actions| Audit access; revoke permissions| Remediate any provisioning errors| Revoke access |
-
-Azure AD keeps most auditing data for 30 days. The data is available via Azure Admin Portal or API for you to download into your analysis systems.
-
-#### Auditing
-
-Audit logs for application access are available for 30 days. If security auditing within your enterprise requires longer retention, the logs need to be exported into a Security Information Event and Management (SIEM) tool, such as Splunk or ArcSight.
-
-For auditing, reporting, and disaster recovery backups, document the required frequency of download, what the target system is, and who's responsible for managing each backup. You might not need separate auditing and reporting backups. Your disaster recovery backup should be a separate entity.
-
-## Deploy applications to users' My Apps panel
-
-After an application has been configured for SSO, groups are assigned access. Users in the assigned groups will have access, and they will see the application in their My Apps and the Microsoft 365 app launcher.
-
-See [Assign users and groups to an application in Active Directory](./assign-user-or-group-access-portal.md).
-
-If during testing or deployment you want to add the groups but not yet allow the applications to show in My Apps, see [Hide an application from user's experience in Azure Active Directory](hide-application-from-user-portal.md).
-
-### Deploy Microsoft 365 applications to My Apps
-
-For Microsoft 365 applications, users receive a copy of Office based on licenses assigned to them. A prerequisite for access to Office applications is for users to be assigned the correct licenses tied to the Office applications. When you assign a user a license, they'll automatically see the applications that are associated with the license in their My Apps page and in the Microsoft 365 app launcher.
-
-If you want to hide a set of Office applications from users, there's an option to hide apps from the My Apps portal, while still allowing access from the Microsoft 365 portal. Learn more: [Hide an application from user's experience in Azure Active Directory](hide-application-from-user-portal.md).
-
-### Deploy application self-service capabilities
-
-Self-service application access allows users to self-discover and request access to applications. Users have the freedom to access the apps they need without going through an IT group each time to request access. When a user requests access and is approved, either automatically or manually by an app owner, they're added to a group on the back end. Reporting is enabled on who has requested, approved, or removed access, and it gives you control over managing the assigned roles.
-
-You can delegate approval of application access requests to business approvers. The business approver can set the app access passwords from the business approver's My Apps page.
-
-Learn more: [How to use self-service application access](access-panel-manage-self-service-access.md).
-
-## Validate your deployment
-
-Ensure your My Apps deployment is thoroughly tested and a rollback plan is in place.
-
-The following tests should be conducted with both corporate-owned devices and personal devices. These test cases should also reflect your business use cases. Following are a few cases based on the sample business requirements in this document and on typical technical scenarios. Add others specific to your needs.
-
-#### Application SSO access test case examples:
--
-| Business case| Expected result |
-| - | -|
-| User signs in to the My Apps portal| User can sign in and see their applications |
-| User launches a federated SSO application| User is automatically signed in to the application |
-| User launches a password SSO application for the first time| User needs to install the My Apps extension |
-| User launches a password SSO application a subsequent time| User is automatically signed in to the application |
-| User launches an app from Microsoft 365 Portal| User is automatically signed in to the application |
-| User launches an app from the Managed Browser| User is automatically signed in to the application |
--
-#### Application self-service capabilities test case examples
-
-| Business case| Expected result |
-| - | - |
-| User can manage membership to the application| User can add/remove members who have access to the app |
-| User can edit the application| User can edit the application's description and credentials for password SSO applications |
-
-### Rollback steps
-
-It's important to plan what to do if your deployment doesn't go as planned. If SSO configuration fails during deployment, you must understand how to [troubleshoot SSO issues](../hybrid/tshoot-connect-sso.md) and reduce impact to your users. In extreme circumstances, you might need to [roll back SSO](../manage-apps/plan-sso-deployment.md#rollback-process).
--
-## Manage your implementation
-
-Use the least privileged role to accomplish a required task within Azure Active Directory. [Review the different roles that are available](../roles/permissions-reference.md) and choose the right one to solve your needs for each persona for this application. Some roles might need to be applied temporarily and removed after the deployment is completed.
-
-| Personas| Roles| Azure AD role |
-| - | -| -|
-| Helpdesk admin| Tier 1 support| None |
-| Identity admin| Configure and debug when issues impact Azure AD| Global admin |
-| Application admin| User attestation in application, configuration on users with permissions| None |
-| Infrastructure admins| Cert rollover owner| Global admin |
-| Business owner/stakeholder| User attestation in application, configuration on users with permissions| None |
-
-You can use [Privileged Identity Management](../privileged-identity-management/pim-configure.md) to manage your roles to provide additional auditing, control, and access review for users with directory permissions.
-
-## Next steps
-[Plan a deployment of Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-management-fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-management-fundamentals.md
@@ -30,7 +30,7 @@ This article contains recommendations and best practices for managing applicatio
| Use federated SAML-based SSO | When an application supports it, use Federated, SAML-based SSO with Azure AD instead of password-based SSO and ADFS. | | Use SHA-256 for certificate signing | Azure AD uses the SHA-256 algorithm by default to sign the SAML response. Use SHA-256 unless the application requires SHA-1 (see [Certificate signing options](certificate-signing-options.md) and [Application sign-in problem](application-sign-in-problem-application-error.md).) | | Require user assignment | By default, users can access your enterprise applications without being assigned to them. However, if the application exposes roles, or if you want the application to appear on a user's My Apps, require user assignment. |
-| Deploy My Apps to your users | [My Apps](end-user-experiences.md) at `https://myapps.microsoft.com` is a web-based portal that provides users with a single point of entry for their assigned cloud-based applications. As additional capabilities like group management and self-service password reset are added, users can find them in My Apps. See [Plan My Apps deployment](access-panel-deployment-plan.md).
+| Deploy My Apps to your users | [My Apps](end-user-experiences.md) at `https://myapps.microsoft.com` is a web-based portal that provides users with a single point of entry for their assigned cloud-based applications. As additional capabilities like group management and self-service password reset are added, users can find them in My Apps. See [Plan My Apps deployment](my-apps-deployment-plan.md).
| Use group assignment | If included in your subscription, assign groups to an application so you can delegate ongoing access management to the group owner. | | Establish a process for managing certificates | The maximum lifetime of a signing certificate is three years. To prevent or minimize outage due to a certificate expiring, use roles and email distribution lists to ensure that certificate-related change notifications are closely monitored. |
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-connector-installation-problem https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-connector-installation-problem.md
@@ -8,7 +8,7 @@
Previously updated : 05/21/2018 Last updated : 01/28/2021
@@ -19,7 +19,7 @@ Microsoft Azure Active Directory Application Proxy Connector is an internal doma
## General Problem Areas with Connector installation
-When the installation of a connector fails, the root cause is usually one of the following areas:
+When the installation of a connector fails, the root cause is usually one of the following areas. **As a precursor to any troubleshooting, be sure to reboot the connector.**
 1. **Connectivity** – to complete a successful installation, the new connector needs to register and establish future trust properties. This is done by connecting to the Azure Active Directory Application Proxy cloud service.
@@ -109,4 +109,4 @@ Connect to `https://login.microsoftonline.com` and use the same credentials. Mak
Select your user account, then "Directory Role" in the resulting menu. Verify that the selected role is "Application Administrator". If you are unable to access any of the pages along these steps, you do not have the required role. ## Next steps
-[Understand Azure AD Application Proxy connectors](application-proxy-connectors.md)
\ No newline at end of file
+[Understand Azure AD Application Proxy connectors](application-proxy-connectors.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-sign-in-problem-federated-sso-gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-problem-federated-sso-gallery.md
@@ -17,7 +17,7 @@
# Problems signing in to SAML-based single sign-on configured apps To troubleshoot the sign-in issues below, we recommend the following to better diagnose and automate the resolution steps: -- Install the [My Apps Secure Browser Extension](./access-panel-deployment-plan.md) to help Azure Active Directory (Azure AD) to provide better diagnosis and resolutions when using the testing experience in the Azure portal.
+- Install the [My Apps Secure Browser Extension](my-apps-deployment-plan.md) to help Azure Active Directory (Azure AD) to provide better diagnosis and resolutions when using the testing experience in the Azure portal.
- Reproduce the error using the testing experience in the app configuration page in the Azure portal. Learn more on [Debug SAML-based single sign-on applications](./debug-saml-sso-issues.md) If you use the [testing experience](./debug-saml-sso-issues.md) in the Azure portal with the My Apps Secure Browser Extension, you don't need to manually follow the steps below to open the SAML-based single sign-on configuration page.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/assign-user-or-group-access-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
@@ -143,7 +143,7 @@ This example assigns the user Britta Simon to the [Microsoft Workplace Analytics
## Related articles - [Learn more about end-user access to applications](end-user-experiences.md)-- [Plan an Azure AD My Apps deployment](access-panel-deployment-plan.md)
+- [Plan an Azure AD My Apps deployment](my-apps-deployment-plan.md)
- [Managing access to apps](what-is-access-management.md) ## Next steps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/common-scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/common-scenarios.md
@@ -38,7 +38,7 @@ No more managing password. Securely access all the resources you need with your
|Feature | Description | Recommendation | |||| |SSO|Standards-based federated SSO using trusted industry standards.|Always use [SAML / OIDC](../develop/v2-howto-app-gallery-listing.md) to enable SSO when your application supports it.|
-|My Apps|Offer your users a simple hub to discover and access all their applications. Enable them to be more productive with self-service capabilities, like requesting access to apps and groups, or managing access to resources on behalf of others.| Deploy [My Apps](./access-panel-deployment-plan.md) in your organization once you've integrated your apps with Azure AD for SSO.|
+|My Apps|Offer your users a simple hub to discover and access all their applications. Enable them to be more productive with self-service capabilities, like requesting access to apps and groups, or managing access to resources on behalf of others.| Deploy [My Apps](my-apps-deployment-plan.md) in your organization once you've integrated your apps with Azure AD for SSO.|
## Scenario 2: Automate provisioning and deprovisioning
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-admin-consent-workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
@@ -154,4 +154,4 @@ For more information on consenting to applications, see [Azure Active Directory
[Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md)
-[Azure AD on StackOverflow](https://stackoverflow.com/questions/tagged/azure-active-directory)
\ No newline at end of file
+[Azure AD on Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
@@ -49,7 +49,7 @@ Using Azure AD as your Identity Provider (IdP) and configuring single sign-on (S
In the [quickstart series](view-applications-portal.md), you learned how to add an app to your tenant, which lets Azure AD knows it's being used as the Identity Provider (IdP) for the app. Some apps are already pre-configured and they show in the Azure AD gallery. Other apps are not in the gallery and you have to create a generic app and configure it manually. Depending on the app, the password-based SSO option might not be available. If you don't see the Password-based option list on the single sign-on page for the app, then it is not available. > [!IMPORTANT]
-> The My Apps browser extension is required for password-based SSO. To learn more, see [Plan a My Apps deployment](access-panel-deployment-plan.md).
+> The My Apps browser extension is required for password-based SSO. To learn more, see [Plan a My Apps deployment](my-apps-deployment-plan.md).
The configuration page for password-based SSO is simple. It includes only the URL of the sign-on page that the app uses. This string must be the page that includes the username input field.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-permission-classifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-permission-classifications.md
@@ -121,4 +121,4 @@ To learn more:
* [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md) To get help or find answers to your questions:
-* [Azure AD on StackOverflow](https://stackoverflow.com/questions/tagged/azure-active-directory)
\ No newline at end of file
+* [Azure AD on Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-user-consent-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-user-consent-groups.md
@@ -119,4 +119,4 @@ To learn more:
* [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md) To get help or find answers to your questions:
-* [Azure AD on StackOverflow](https://stackoverflow.com/questions/tagged/azure-active-directory)
\ No newline at end of file
+* [Azure AD on Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-user-consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-user-consent.md
@@ -180,4 +180,4 @@ To learn more:
* [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md) To get help or find answers to your questions:
-* [Azure AD on StackOverflow](https://stackoverflow.com/questions/tagged/azure-active-directory)
\ No newline at end of file
+[Azure AD on Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/grant-admin-consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/grant-admin-consent.md
@@ -96,4 +96,4 @@ As always, carefully review the permissions an application requests before grant
[Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md)
-[Azure AD on StackOverflow](https://stackoverflow.com/questions/tagged/azure-active-directory)
\ No newline at end of file
+[Azure AD on Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/manage-app-consent-policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-app-consent-policies.md
@@ -147,4 +147,4 @@ To learn more:
* [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md) To get help or find answers to your questions:
-* [Azure AD on StackOverflow](https://stackoverflow.com/questions/tagged/azure-active-directory)
\ No newline at end of file
+* [Azure AD on Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-active-directory.html)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/manage-application-permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-application-permissions.md
@@ -2,7 +2,7 @@
Title: Manage user and admin permissions - Azure Active Directory | Microsoft Docs description: Learn how to review and manage permissions for the application on Azure AD. For example, revoke all permissions granted to an application. -+
@@ -102,69 +102,69 @@ Retrieve the service principal object ID.
4. Select **Properties**, and then copy the object ID. ```powershell
- $sp = Get-AzureADServicePrincipal -Filter "displayName eq '$app_name'"
- $sp.ObjectId
+$sp = Get-AzureADServicePrincipal -Filter "displayName eq '$app_name'"
+$sp.ObjectId
``` Remove all users who are assigned to the application. ```powershell
- Connect-AzureAD
+Connect-AzureAD
- # Get Service Principal using objectId
- $sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>"
+# Get Service Principal using objectId
+$sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>"
- # Get Azure AD App role assignments using objectId of the Service Principal
- $assignments = Get-AzureADServiceAppRoleAssignment -ObjectId $sp.ObjectId -All $true
+# Get Azure AD App role assignments using objectId of the Service Principal
+$assignments = Get-AzureADServiceAppRoleAssignment -ObjectId $sp.ObjectId -All $true
- # Remove all users and groups assigned to the application
- $assignments | ForEach-Object {
- if ($_.PrincipalType -eq "User") {
- Remove-AzureADUserAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.ObjectId
- } elseif ($_.PrincipalType -eq "Group") {
- Remove-AzureADGroupAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.ObjectId
- }
+# Remove all users and groups assigned to the application
+$assignments | ForEach-Object {
+ if ($_.PrincipalType -eq "User") {
+ Remove-AzureADUserAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.ObjectId
+ } elseif ($_.PrincipalType -eq "Group") {
+ Remove-AzureADGroupAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.ObjectId
}
+}
``` Revoke permissions granted to the application. ```powershell
- Connect-AzureAD
+Connect-AzureAD
- # Get Service Principal using objectId
- $sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>"
+# Get Service Principal using objectId
+$sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>"
- # Get all delegated permissions for the service principal
- $spOAuth2PermissionsGrants = Get-AzureADOAuth2PermissionGrant -All $true| Where-Object { $_.clientId -eq $sp.ObjectId }
+# Get all delegated permissions for the service principal
+$spOAuth2PermissionsGrants = Get-AzureADOAuth2PermissionGrant -All $true| Where-Object { $_.clientId -eq $sp.ObjectId }
- # Remove all delegated permissions
- $spOAuth2PermissionsGrants | ForEach-Object {
- Remove-AzureADOAuth2PermissionGrant -ObjectId $_.ObjectId
- }
+# Remove all delegated permissions
+$spOAuth2PermissionsGrants | ForEach-Object {
+ Remove-AzureADOAuth2PermissionGrant -ObjectId $_.ObjectId
+}
- # Get all application permissions for the service principal
- $spApplicationPermissions = Get-AzureADServiceAppRoleAssignedTo -ObjectId $sp.ObjectId -All $true | Where-Object { $_.PrincipalType -eq "ServicePrincipal" }
+# Get all application permissions for the service principal
+$spApplicationPermissions = Get-AzureADServiceAppRoleAssignedTo -ObjectId $sp.ObjectId -All $true | Where-Object { $_.PrincipalType -eq "ServicePrincipal" }
- # Remove all delegated permissions
- $spApplicationPermissions | ForEach-Object {
- Remove-AzureADServiceAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.objectId
- }
+# Remove all delegated permissions
+$spApplicationPermissions | ForEach-Object {
+ Remove-AzureADServiceAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.objectId
+}
``` Invalidate the refresh tokens. ```powershell
- Connect-AzureAD
+Connect-AzureAD
- # Get Service Principal using objectId
- $sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>"
+# Get Service Principal using objectId
+$sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>"
- # Get Azure AD App role assignments using objectID of the Service Principal
- $assignments = Get-AzureADServiceAppRoleAssignment -ObjectId $sp.ObjectId -All $true | Where-Object {$_.PrincipalType -eq "User"}
+# Get Azure AD App role assignments using objectID of the Service Principal
+$assignments = Get-AzureADServiceAppRoleAssignment -ObjectId $sp.ObjectId -All $true | Where-Object {$_.PrincipalType -eq "User"}
- # Revoke refresh token for all users assigned to the application
- $assignments | ForEach-Object {
- Revoke-AzureADUserAllRefreshToken -ObjectId $_.PrincipalId
- }
+# Revoke refresh token for all users assigned to the application
+$assignments | ForEach-Object {
+ Revoke-AzureADUserAllRefreshToken -ObjectId $_.PrincipalId
+}
``` ## Next steps - [Manage consent to applications and evaluate consent request](manage-consent-requests.md) - [Configure user consent](configure-user-consent.md)-- [Configure admin consent workflow](configure-admin-consent-workflow.md)\ No newline at end of file
+- [Configure admin consent workflow](configure-admin-consent-workflow.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/my-apps-deployment-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/my-apps-deployment-plan.md new file mode 100644 /dev/null
@@ -0,0 +1,269 @@
+
+ Title: Plan Azure Active Directory My Apps configuration
+description: Planning guide to effectively use My Apps in your organization.
+++++++ Last updated : 02/29/2020++++
+# Plan Azure Active Directory My Apps configuration
+
+> [!NOTE]
+> This article is designed for IT professionals who need to plan the configuration of their organization's My Apps portal. For end-user documentation about how to use My Apps and collections, see [Sign in and start apps from the My Apps portal](../user-help/my-apps-portal-end-user-access.md).
+
+Azure Active Directory (Azure AD) My Apps is a web-based portal for launching and managing apps. The My Apps page gives users a single place to start their work and find all the applications to which they have access. Users access My Apps at [https://myapps.microsoft.com](https://myapps.microsoft.com/).
+
+> [!VIDEO https://www.youtube.com/embed/atj6Ivn5m0k]
+
+## Why configure My Apps
+
+The My Apps portal is available to users by default and cannot be turned off. It's important to configure it so that users have the best possible experience and the portal stays useful.
+
+Any application in the Azure Active Directory enterprise applications list appears when both of the following conditions are met:
+
+* The visibility property for the app is set to true.
+
+* The app is assigned to at least one user or group. It appears only for the assigned users.
+
+Configuring the portal ensures that the right people can easily find the right apps.
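The two visibility conditions above can be spot-checked from PowerShell. This is a minimal, hypothetical sketch assuming the AzureAD module is installed and that the `HideApp` tag on a service principal is what suppresses My Apps visibility; treat it as illustrative, not authoritative:

```powershell
# Sketch: list enterprise apps that should appear in My Apps --
# not tagged "HideApp" and assigned to at least one user or group.
Connect-AzureAD

Get-AzureADServicePrincipal -All $true |
    Where-Object { $_.Tags -notcontains "HideApp" } |
    Where-Object {
        # @() forces an array so .Count works even for a single assignment
        @(Get-AzureADServiceAppRoleAssignment -ObjectId $_.ObjectId -Top 1).Count -gt 0
    } |
    Select-Object DisplayName, ObjectId
```

Running this before launch gives you a quick inventory of what each audience will actually see.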
+
+
+### How is the My Apps portal used?
+
+Users access the My Apps portal to:
+
+* Discover and access all their organization's Azure AD-connected applications to which they have access.
+
+ * It's best to ensure apps are configured for single sign-on (SSO) to provide users the best experience.
+
+* Request access to new apps that are configured for self-service.
+
+* Create personal collections of apps.
+
+* Manage access to apps for others when assigned the role of group owner or delegated control for the group used to grant access to the application(s).
+
+Administrators can configure:
+
+* [Consent experiences](../manage-apps/configure-user-consent.md) including terms of service.
+
+* [Self-service application discovery and access requests](../manage-apps/access-panel-manage-self-service-access.md).
+
+* [Collections of applications](../manage-apps/access-panel-collections.md).
+
+* Assignment of icons to applications.
+
+* User-friendly names for applications.
+
+* Company branding shown on My Apps.
+
+
+
+## Plan consent configuration
+
+There are two types of consent: user consent and consent for apps accessing data.
+
+![Screenshot of consent configuration](./media/my-apps-deployment-plan/my-apps-consent.png)
+
+### User consent for applications
+
+Users or administrators must consent to any application's terms of use and privacy policies. You must decide if users or only administrators can consent to applications. **We recommend that, if your business rules allow, you use administrator consent to maintain control of the applications in your tenant**.
+
+To use administrator consent, you must be a global administrator of the organization, and the applications must be either:
+
+* Registered in your organization.
+
+* Registered in another Azure AD organization and previously consented to by at least one user.
+
+If you want to allow users to consent, you must decide if you want them to consent to any app, or only under specific circumstances.
+
+For more information, see [Configure the way end users consent to an application in Azure Active Directory.](../manage-apps/configure-user-consent.md)
+
+### Group owner consent for apps accessing data
+
+Determine whether owners of Azure AD security groups or Microsoft 365 groups can consent to applications accessing data for the groups they own. You can disallow consent, allow all group owners to consent, or allow only a subset of group owners.
+
+For more information, see [Configure group consent permissions](../manage-apps/configure-user-consent-groups.md).
+
+Then, configure your [User and group owner consent settings](https://portal.azure.com/) in the Azure portal.
+
+### Plan communications
+
+Communication is critical to the success of any new service. Proactively inform your users how and when their experience will change and how to gain support if needed.
+
+Although My Apps doesn't typically create user issues, it's important to prepare. Create guides and a list of all resources for your support personnel before your launch.
+
+#### Communications templates
+
+Microsoft provides [customizable templates for emails and other communications](https://aka.ms/APTemplates) for My Apps. You can adapt these assets for use in other communications channels as appropriate for your corporate culture.
+
+
+
+## Plan your SSO configuration
+
+It's best if SSO is enabled for all apps in the My Apps portal so that users have a seamless experience without the need to enter their credentials.
+
+Azure AD supports multiple SSO options.
+
+* To learn more, see [Single sign-on options in Azure AD](sso-options.md).
+
+* To learn more about using Azure AD as an identity provider for an app, see the [Quickstart Series on Application Management](../manage-apps/view-applications-portal.md).
+
+### Use federated SSO if possible
+
+For the best user experience with the My Apps page, start with the integration of cloud applications that are available for federated SSO (OpenID Connect or SAML). Federated SSO allows users to have a consistent one-click experience across app launching surfaces and tends to be more robust in configuration control.
+
+For more information on how to configure your software as a service (SaaS) applications for SSO, see the [SaaS SSO deployment plan](plan-sso-deployment.md).
+
+### Considerations for special SSO circumstances
+
+> [!TIP]
+> For a better user experience, use Federated SSO with Azure AD (OpenID Connect/SAML) when an application supports it, instead of password-based SSO and ADFS.
+
+To sign in to password-based SSO applications, or to applications that are accessed by Azure AD Application Proxy, users need to install and use the My Apps secure sign-in extension. Users are prompted to install the extension when they first launch the password-based SSO or Application Proxy application.
+
+![Screenshot of the prompt to install the My Apps extension](./media/my-apps-deployment-plan/ap-dp-install-myapps.png)
+
+For detailed information on the extension, see [Installing My Apps browser extension](../user-help/my-apps-portal-end-user-access.md).
+
+If you must integrate these applications, you should define a mechanism to deploy the extension at scale with [supported browsers](../user-help/my-apps-portal-end-user-access.md). Options include:
+
+* [User-driven download and configuration for Chrome, Firefox, Microsoft Edge, or IE](../user-help/my-apps-portal-end-user-access.md)
+
+* [Configuration Manager for Internet Explorer](https://docs.microsoft.com/mem/configmgr/core/clients/deploy/deploy-clients-to-windows-computers)
+
+The extension allows users to launch any app from its search bar, find recently used applications, and open the My Apps page.
+
+![Screenshot of my apps extension search](./media/my-apps-deployment-plan/my-app-extsension.png)
+
+#### Plan for mobile access
+
+For applications that use password-based SSO or are accessed by using [Microsoft Azure AD Application Proxy](../manage-apps/application-proxy.md), you must use Microsoft Edge mobile. For other applications, any mobile browser can be used.
+
+### Linked SSO
+
+Applications can be added by using the Linked SSO option. You can configure an application tile that links to the URL of your existing web application. Linked SSO allows you to start directing users to the My Apps portal without migrating all the applications to Azure AD SSO. You can gradually move to Azure AD SSO-configured applications without disrupting the users' experience.
+
+## Plan the user experience
+
+By default, all applications to which the user has access and all applications configured for self-service discovery appear in the user's My Apps panel. For many organizations, this can be a very large list, which can become burdensome if not organized.
+
+### Plan My Apps collections
+
+Every Azure AD application to which a user has access will appear on My Apps in the All Apps collection. Use collections to group related applications and present them on a separate tab, making them easier to find. For example, you can use collections to create logical groupings of applications for specific job roles, tasks, projects, and so on.
+
+End users can also customize their experience by:
+
+* Creating their own app collections.
+
+* [Hiding and reordering app collections](access-panel-collections.md).
+
+![Screenshot of self-service configuration](./media/my-apps-deployment-plan/collections.png)
+
+There's an option to hide apps from the My Apps portal, while still allowing access from other locations, such as the Microsoft 365 portal. Learn more: [Hide an application from user's experience in Azure Active Directory](hide-application-from-user-portal.md).
+
+> [!IMPORTANT]
+> A user can access at most 950 apps through My Apps. This limit includes apps hidden by either the user or the administrator.
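Hiding an app can also be scripted. A minimal sketch using the AzureAD PowerShell module, assuming the `HideApp` tag controls My Apps visibility and using a placeholder display name (see the linked hide-an-application article for the authoritative steps):

```powershell
# Sketch: hide one app from My Apps while leaving its assignments
# and Microsoft 365 portal access intact. 'Contoso Expenses' is a placeholder.
Connect-AzureAD

$sp = Get-AzureADServicePrincipal -Filter "displayName eq 'Contoso Expenses'"
$tags = $sp.Tags + "HideApp"          # keep existing tags, add the hide flag
Set-AzureADServicePrincipal -ObjectId $sp.ObjectId -Tags $tags
```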
+
+### Plan self-service group management membership
+
+You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure AD. The owner of the group can approve or deny membership requests and delegate control of group membership. Self-service group management features aren't available for mail-enabled security groups or distribution lists.
+
+To plan for self-service group membership, determine whether you'll allow all users in your organization to create and manage groups, or only a subset of users. If you're allowing a subset of users, you'll need to set up a group to which those people are added.
+
+See [Set up self-service group management in Azure Active Directory](../enterprise-users/groups-self-service-management.md) for details on enabling these scenarios.
+
+### Plan self-service application access
+
+You can enable users to discover and request access to applications via the My Apps panel. To do so, you must first:
+
+* Enable self-service group management.
+
+* Enable the app for SSO.
+
+* Create a group for application access.
+
+![Screen shot of My Apps self service configuration](./media/my-apps-deployment-plan/my-apps-self-service.png)
+
+When users request access, they're requesting access to the underlying group, and group owners can be delegated permission to manage the group membership and thus application access. Approval workflows are available for explicit approval to access applications. Users who are approvers receive notifications within the My Apps portal when there are pending requests for access to the application.
+
+## Plan reporting and auditing
+
+Azure AD provides [reports that offer technical and business insights](../reports-monitoring/overview-reports.md). Work with your business and technical application owners to assume ownership of these reports and to consume them regularly. The following table provides some examples of typical reporting scenarios.
+
+| Example| Manage risk| Increase productivity| Governance and compliance |
+| - | - | - | -|
+| Report types| Application permissions and usage| Account provisioning activity| Review who is accessing the applications |
+| Potential actions| Audit access; revoke permissions| Remediate any provisioning errors| Revoke access |
+
+Azure AD keeps most auditing data for 30 days. The data is available via the Azure portal or APIs for you to download into your analysis systems.
+
+#### Auditing
+
+Audit logs for application access are available for 30 days. If your organization requires longer retention, export the logs to a security information and event management (SIEM) tool, such as Splunk or ArcSight.
+
+For auditing, reporting, and disaster recovery backups, document the required frequency of download, what the target system is, and who's responsible for managing each backup. You might not need separate auditing and reporting backups. Your disaster recovery backup should be a separate entity.
+
+## Validate your deployment
+
+Ensure your My Apps deployment is thoroughly tested and a rollback plan is in place.
+
+Conduct the following tests with both corporate-owned devices and personal devices. These test cases should also reflect your business use cases. Following are a few cases based on typical technical scenarios. Add others specific to your needs.
+
+#### Application SSO access test case examples
+
+| Business case| Expected result |
+| - | - |
+| User signs in to the My Apps portal| User can sign in and see their applications |
+| User launches a federated SSO application| User is automatically signed in to the application |
+| User launches a password SSO application for the first time| User needs to install the My Apps extension |
+| User launches a password SSO application a subsequent time| User is automatically signed in to the application |
+| User launches an app from the Microsoft 365 portal| User is automatically signed in to the application |
+| User launches an app from the Managed Browser| User is automatically signed in to the application |
+
+#### Application self-service capabilities test case examples
+
+| Business case| Expected result |
+| - | - |
+| User can manage membership of the application| User can add/remove members who have access to the app |
+| User can edit the application| User can edit the application's description and credentials for password SSO applications |
+
+### Rollback steps
+
+It's important to plan what to do if your deployment doesn't go as planned. If SSO configuration fails during deployment, you must understand how to [troubleshoot SSO issues](../hybrid/tshoot-connect-sso.md) and reduce impact to your users. In extreme circumstances, you might need to [roll back SSO](plan-sso-deployment.md).
+
+## Manage your implementation
+
+Use the least privileged role to accomplish a required task within Azure Active Directory. [Review the different roles that are available](../roles/permissions-reference.md) and choose the right one for each persona for this application. Some roles might need to be applied temporarily and removed after the deployment is completed.
+
+| Personas| Roles| Azure AD role |
+| - | - | - |
+| Helpdesk admin| Tier 1 support| None |
+| Identity admin| Configure and debug when issues impact Azure AD| Global admin |
+| Application admin| User attestation in application, configuration on users with permissions| None |
+| Infrastructure admins| Cert rollover owner| Global admin |
+| Business owner/stakeholder| User attestation in application, configuration on users with permissions| None |
+
+You can use [Privileged Identity Management](../privileged-identity-management/pim-configure.md) to manage your roles to provide additional auditing, control, and access review for users with directory permissions.
+
+## Next steps
+
+[Plan a deployment of Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)
+
+[Plan an Application Proxy deployment](application-proxy-deployment-plan.md)
+
+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/plan-sso-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/plan-sso-deployment.md
@@ -58,7 +58,7 @@ There are two primary ways in which you can enable your users to single sign-on
Using Azure AD for password-based SSO requires deploying a browser extension that will securely retrieve the credentials and fill out the login forms. Define a mechanism to deploy the extension at scale with [supported browsers](../user-help/my-apps-portal-end-user-access.md). Options include: -- [Group Policy for Internet Explorer](./access-panel-deployment-plan.md)
+- [Group Policy for Internet Explorer](my-apps-deployment-plan.md)
- [Configuration Manager for Internet Explorer](/configmgr/core/clients/deploy/deploy-clients-to-windows-computers) - [User driven download and configuration for Chrome, Firefox, Microsoft Edge, or IE](../user-help/my-apps-portal-end-user-access.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-extended https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-extended.md
@@ -15,7 +15,9 @@
# Get all Application Proxy apps and list extended information
-This PowerShell script example lists information about all Azure Active Directory (Azure AD) Application Proxy applications, including the application ID (AppId), name (DisplayName), external URL (ExternalUrl), internal URL (InternalUrl), and authentication type (ExternalAuthenticationType).
+This PowerShell script example lists information about all Azure Active Directory (Azure AD) Application Proxy applications, including the application ID (AppId), name (DisplayName), external URL (ExternalUrl), internal URL (InternalUrl), authentication type (ExternalAuthenticationType), SSO mode, and further settings.
+
+Changing the value of the $ssoMode variable filters the output by SSO mode. Further details are documented in the script.
[!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
@@ -23,7 +25,7 @@ This PowerShell script example lists information about all Azure Active Director
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD).
## Sample script
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/troubleshoot-password-based-sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
@@ -17,7 +17,7 @@
To use password-based single sign-on (SSO) in My Apps, the browser extension must be installed. The extension downloads automatically when you select an app that's configured for password-based SSO. To learn about using My Apps from an end-user perspective, see [My Apps portal help](../user-help/my-apps-portal-end-user-access.md). ## My Apps browser extension not installed
-Make sure the browser extension is installed. To learn more, see [Plan an Azure Active Directory My Apps deployment](access-panel-deployment-plan.md).
+Make sure the browser extension is installed. To learn more, see [Plan an Azure Active Directory My Apps deployment](my-apps-deployment-plan.md).
## Single sign-on not configured Make sure password-based single sign-on is configured. To learn more, see [Configure password-based single sign-on](configure-password-single-sign-on-non-gallery-applications.md).
@@ -243,4 +243,4 @@ The following information explains what each notification item means and provide
## Next steps * [Quickstart Series on Application Management](view-applications-portal.md)
-* [Plan a My Apps deployment](access-panel-deployment-plan.md)
+* [Plan a My Apps deployment](my-apps-deployment-plan.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/whats-new-docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
@@ -85,7 +85,7 @@ Welcome to what's new in Azure Active Directory application management documenta
- [How to use self-service application access](access-panel-manage-self-service-access.md) - [Troubleshoot problems signing in to an application from Azure AD My Apps](application-sign-in-other-problem-access-panel.md) - [Troubleshoot password-based single sign-on in Azure AD](troubleshoot-password-based-sso.md)-- [Plan an Azure Active Directory My Apps deployment](access-panel-deployment-plan.md)
+- [Plan an Azure Active Directory My Apps deployment](my-apps-deployment-plan.md)
- [What is single sign-on (SSO)?](what-is-single-sign-on.md) - [Take action on overprivileged or suspicious applications in Azure Active Directory](manage-application-permissions.md) - [Quickstart: Configure properties for an application in your Azure Active Directory (Azure AD) tenant](add-application-portal-configure.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/services-support-managed-identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
@@ -4,7 +4,7 @@ description: List of services that support managed identities for Azure resource
Previously updated : 10/07/2020 Last updated : 01/28/2021
@@ -108,6 +108,14 @@ Managed identity type | All Generally Available<br>Global Azure Regions | Azure
| User assigned | Not available | Not available | Not available | Not available |
+### Azure Communication Services
+
+Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
+| | :-: | :-: | :-: | :-: |
+| System assigned | ![Available][check] | Not available | Not available | Not available |
+| User assigned | ![Available][check] | Not available | Not available | Not available |
++ ### Azure Container Instances Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
@@ -154,6 +162,17 @@ Refer to the following list to configure managed identity for Azure Data Factory
- [REST](~/articles/data-factory/data-factory-service-identity.md#generate-managed-identity-using-rest-api) - [SDK](~/articles/data-factory/data-factory-service-identity.md#generate-managed-identity-using-sdk)
+### Azure Digital Twins
+
+Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
+| | :-: | :-: | :-: | :-: |
+| System assigned | ![Available][check] | Not available | Not available | Not available |
+| User assigned | Not available | Not available | Not available | Not available |
+
+Refer to the following list to configure managed identity for Azure Digital Twins (in regions where available):
+
+- [Azure portal](~/articles/digital-twins/how-to-enable-managed-identities.md)
+ ### Azure Event Grid Managed identity type |All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
@@ -427,4 +446,4 @@ Refer to the following list to configure access to Azure Resource
> Microsoft Power BI also [supports managed identities](../../stream-analytics/powerbi-output-managed-identity.md).
-[check]: media/services-support-managed-identities/check.png "Available"
\ No newline at end of file
+[check]: media/services-support-managed-identities/check.png "Available"
active-directory https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/concept-provisioning-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
@@ -1,6 +1,6 @@
Title: Provisioning logs in the Azure Active Directory portal (preview) | Microsoft Docs
-description: Introduction to provisioning logs reports in the Azure Active Directory portal
+ Title: Overview of provisioning logs in the Azure portal (preview) | Microsoft Docs
+description: Get an introduction to provisioning log reports in Azure Active Directory through the Azure portal.
documentationcenter: ''
@@ -20,20 +20,20 @@
-# Provisioning reports in the Azure Active Directory portal (preview)
+# Overview of provisioning logs in the Azure portal (preview)
The reporting architecture in Azure Active Directory (Azure AD) consists of the following components: -- **Activity**
- - **Sign-ins** – Information about the usage of managed applications, and user sign-in activities.
- - **Audit logs** - [Audit logs](concept-audit-logs.md) provide system activity information about users and group management, managed applications and directory activities.
- - **Provisioning logs** - Provide system activity about user, groups, and roles that are provisioned by the Azure AD provisioning service.
+- Activity:
+ - **Sign-ins**: Information about the usage of managed applications and user sign-in activities.
+ - [Audit logs](concept-audit-logs.md): System activity information about user and group management, managed applications, and directory activities.
+ - **Provisioning logs**: System activity about users, groups, and roles that are provisioned by the Azure AD provisioning service.
-- **Security**
- - **Risky sign-ins** - A [risky sign-in](../identity-protection/overview-identity-protection.md) is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account.
- - **Users flagged for risk** - A [risky user](../identity-protection/overview-identity-protection.md) is an indicator for a user account that might have been compromised.
+- Security:
+ - **Risky sign-ins**: A [risky sign-in](../identity-protection/overview-identity-protection.md) is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account.
+ - **Users flagged for risk**: A [risky user](../identity-protection/overview-identity-protection.md) is an indicator for a user account that might have been compromised.
-This topic gives you an overview of the provisioning logs. They provide answers to questions such as:
+This topic gives you an overview of the provisioning logs. The logs provide answers to questions such as:
* What groups were successfully created in ServiceNow? * What users were successfully removed from Adobe?
@@ -41,30 +41,29 @@ This topic gives you an overview of the provisioning logs. They provide answers
## Prerequisites
-### Who can access the data?
-* Application owners can view logs for applications they own
+These users can access the data in provisioning logs:
+
+* Application owners (logs for their own applications)
* Users in the Security Administrator, Security Reader, Report Reader, Application Administrator, and Cloud Application Administrator roles * Users in a custom role with the [provisioningLogs permission](../roles/custom-enterprise-app-permissions.md#full-list-of-permissions)
-* Global Administrators
-
+* Global administrators
-### What Azure AD license do you need to access provisioning activities?
-Your tenant must have an Azure AD Premium license associated with it to see the all up provisioning activity report. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition.
+For you to view the provisioning activity report, your tenant must have an Azure AD Premium license associated with it. To upgrade your Azure AD edition, see [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md).
## Ways of interacting with the provisioning logs
-Customers have four ways of interacting with the provisioning logs:
+Customers can interact with the provisioning logs in four ways:
-1. Accessing the logs from the Azure portal as described below.
-1. Streaming the provisioning logs into [Azure Monitor](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-log-analytics), allowing for extended data retention, building custom dashboard, alerts, and queries.
-1. Querying the [Microsoft Graph API](https://docs.microsoft.com/graph/api/resources/provisioningobjectsummary?view=graph-rest-beta) for the provisioning logs.
-1. Downloading the provisioning logs as a CSV file or json.
+- Accessing the logs from the Azure portal, as described in the next section.
+- Streaming the provisioning logs into [Azure Monitor](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-log-analytics). This method allows for extended data retention and building custom dashboards, alerts, and queries.
+- Querying the [Microsoft Graph API](https://docs.microsoft.com/graph/api/resources/provisioningobjectsummary?view=graph-rest-beta) for the provisioning logs.
+- Downloading the provisioning logs as a CSV or JSON file.
## Access the logs from the Azure portal
-You can access the provisioning logs by selecting **Provisioning Logs** in the **Monitoring** section of the **Azure Active Directory** blade in the [Azure portal](https://portal.azure.com). It can take up to two hours for some provisioning records to show up in the portal.
+You can access the provisioning logs by selecting **Provisioning Logs** in the **Monitoring** section of the **Azure Active Directory** pane in the [Azure portal](https://portal.azure.com). It can take up to two hours for some provisioning records to appear in the portal.
-![Provisioning logs](./media/concept-provisioning-logs/access-provisioning-logs.png "Provisioning logs")
+![Screenshot that shows selections for accessing provisioning logs.](./media/concept-provisioning-logs/access-provisioning-logs.png "Provisioning logs")
A provisioning log has a default list view that shows:
@@ -77,39 +76,45 @@ A provisioning log has a default list view that shows:
- The date
-![Default columns](./media/concept-provisioning-logs/default-columns.png "Default columns")
+![Screenshot that shows default columns in a provisioning log.](./media/concept-provisioning-logs/default-columns.png "Default columns")
-You can customize the list view by clicking **Columns** in the toolbar.
+You can customize the list view by selecting **Columns** on the toolbar.
-![Column chooser](./media/concept-provisioning-logs/column-chooser.png "Column chooser")
+![Screenshot that shows the button for customizing columns.](./media/concept-provisioning-logs/column-chooser.png "Column chooser")
-This enables you to display additional fields or remove fields that are already displayed.
+This area enables you to display additional fields or remove fields that are already displayed.
-![Available columns](./media/concept-provisioning-logs/available-columns.png "Available columns")
+![Screenshot that shows available columns with some selected.](./media/concept-provisioning-logs/available-columns.png "Available columns")
Select an item in the list view to get more detailed information.
-![Detailed information](./media/concept-provisioning-logs/steps.png "Filter")
+![Screenshot that shows detailed information.](./media/concept-provisioning-logs/steps.png "Filter")
## Filter provisioning activities
-You can filter your provisioning data. Some filter values are dynamically populated based on your tenant. If, for example, you don't have any create events in your tenant, there won't be a filter option for create.
+You can filter your provisioning data. Some filter values are dynamically populated based on your tenant. If, for example, you don't have any "create" events in your tenant, there won't be a **Create** filter option.
+ In the default view, you can select the following filters: -- Identity-- Date-- Status-- Action
+- **Identity**
+- **Date**
+- **Status**
+- **Action**
++
+![Screenshot that shows filter values.](./media/concept-provisioning-logs/default-filter.png "Filter")
+The **Identity** filter enables you to specify the name or the identity that you care about. This identity might be a user, group, role, or other object.
-![Add filters](./media/concept-provisioning-logs/default-filter.png "Filter")
+You can search by the name or ID of the object. The ID varies by scenario. For example, when you're provisioning an object from Azure AD to Salesforce, the source ID is the object ID of the user in Azure AD.
+The target ID is the ID of the user in Salesforce. When you're provisioning from Workday to Active Directory, the source ID is the Workday worker employee ID.
-The **Identity** filter enables you to specify the name or the identity that you care about. This identity could be a user, group, role, or other object. You can search by the name or ID of the object. The ID varies by scenario. For example, when provisioning an object from Azure AD to SalesForce, the Source ID is the object ID of the user in Azure AD while the TargetID is the ID of the user in Salesforce. When provisioning from Workday to Active Directory, the Source ID is the Workday worker employee ID. Note that the Name of the user may not always be present in the Identity column. There will always be one ID.
+> [!NOTE]
+> The name of the user might not always be present in the **Identity** column. There will always be one ID.
-The **Date** filter enables to you to define a timeframe for the returned data.
-Possible values are:
+The **Date** filter enables you to define a timeframe for the returned data. Possible values are:
- 1 month - 7 days
@@ -119,186 +124,146 @@ Possible values are:
When you select a custom time frame, you can configure a start date and an end date. - The **Status** filter enables you to select: -- All-- Success-- Failure-- Skipped---
-The **Action** filter enables you to filter the:
+- **All**
+- **Success**
+- **Failure**
+- **Skipped**
-- Create -- Update-- Delete-- Disable-- Other
+The **Action** filter enables you to filter these actions:
-In addition, to the filters of the default view, you can also set the following filters:
+- **Create**
+- **Update**
+- **Delete**
+- **Disable**
+- **Other**
-- Job ID-- Cycle ID-- Change ID-- Source ID-- Target ID-- Application
+In addition to the filters of the default view, you can set the following filters.
+![Screenshot that shows fields that you can add as filters.](./media/concept-provisioning-logs/add-filter.png "Pick a field")
-![Pick a field](./media/concept-provisioning-logs/add-filter.png "Pick a field")
+- **Job ID**: A unique job ID is associated with each application that you've enabled provisioning for.
+- **Cycle ID**: The cycle ID uniquely identifies the provisioning cycle. You can share this ID with product support to look up the cycle in which this event occurred.
-- **Job ID** - A unique Job ID is associated with each application that you have enabled provisioning for.
+- **Change ID**: The change ID is a unique identifier for the provisioning event. You can share this ID with product support to look up the provisioning event.
-- **Cycle ID** - Uniquely identifies the provisioning cycle. You can share this ID to support to look up the cycle in which this event occurred.
+- **Source System**: You can specify where the identity is getting provisioned from. For example, when you're provisioning an object from Azure AD to ServiceNow, the source system is Azure AD.
-- **Change ID** - Unique identifier for the provisioning event. You can share this ID to support to look up the provisioning event.
+- **Target System**: You can specify where the identity is getting provisioned to. For example, when you're provisioning an object from Azure AD to ServiceNow, the target system is ServiceNow.
--- **Source System** - Enables you to specify where the identity is getting provisioned from. For example, when provisioning an object from Azure AD to ServiceNow, the Source system is Azure AD. --- **Target System** - Enables you to specify where the identity is getting provisioned to. For example, when provisioning an object from Azure AD to ServiceNow, the Target System is ServiceNow. --- **Application** - Enables you to show only records of applications with a display name that contains a specific string.-
-
+- **Application**: You can show only records of applications with a display name that contains a specific string.
## Provisioning details
-When you select an item in the provisioning list view, you get more details about this item.
-The details are grouped based on the following categories:
--- Steps--- Troubleshoot and recommendations--- Modified properties--- Summary
+When you select an item in the provisioning list view, you get more details about this item. The details are grouped into the following tabs.
+![Screenshot that shows four tabs that contain provisioning details.](./media/concept-provisioning-logs/provisioning-tabs.png "Tabs")
-![Provisioning details](./media/concept-provisioning-logs/provisioning-tabs.png "Tabs")
+- **Steps**: Outlines the steps taken to provision an object. Provisioning an object can consist of four steps:
+
+ 1. Import the object.
+ 1. Determine if the object is in scope.
+ 1. Match the object between source and target.
+ 1. Provision the object (create, update, delete, or disable).
+ ![Screenshot shows the provisioning steps on the Steps tab.](./media/concept-provisioning-logs/steps.png "Filter")
+- **Troubleshooting & Recommendations**: Provides the error code and reason. The error information is available only if a failure happens.
-### Steps
+- **Modified Properties**: Shows the old value and the new value. If there is no old value, that column is blank.
-The **Steps** tab outlines the steps taken to provision an object. Provisioning an object can consist of four steps:
--- Import object-- Determine if the object is in scope-- Match object between source and target-- Provision object (take action - this could be a create, update, delete, or disable)---
-![Screenshot shows the Steps tab, which shows the provisioning steps.](./media/concept-provisioning-logs/steps.png "Filter")
--
-### Troubleshoot and recommendations
--
-The **troubleshoot and recommendations** tab provides the error code and reason. The error information is only available in the case of a failure.
--
-### Modified properties
-
-The **modified properties** shows the old value and new value. In cases where there is no old value the old value column is blank.
-
-### Summary
-
-The **summary** tab provides an overview of what happened and identifiers for the object in the source and target system.
+- **Summary**: Provides an overview of what happened and identifiers for the object in the source and target systems.
## Download logs as CSV or JSON
-You can download the provisioning logs for use later by navigating to the logs in the Azure portal and clicking download. The file will be filtered based on the filter criteria you have selected. You may want to make the filters as specific as possible to reduce the time it takes to download and the size of the download. The CSV download is broken up into three files:
-
-* ProvisioningLogs: Downloads all the logs, except the provisioning steps and modified properties.
-* ProvisioningLogs_ProvisioningSteps: Contains the provisioning steps and the change ID. The change ID can be used to join the event with the other two files.
-* ProvisioningLogs_ModifiedProperties: Contains the attributes that were changed and the change ID. The change ID can be used to join the event with the other two files.
+You can download the provisioning logs for later use by going to the logs in the Azure portal and selecting **Download**. The file will be filtered based on the filter criteria you've selected. Make the filters as specific as possible to reduce the size and time of the download.
-#### Opening the JSON file
-To open the Json file, use a text editor such as [Microsoft Visual Studio Code](https://aka.ms/vscode). Visual Studio Code makes it easier to read by providing syntax highlighting. The json file can also be opened using browsers in a non editable format e.g. [Microsoft Edge](https://aka.ms/msedge)
+The CSV download includes three files:
-#### Prettifying the JSON file
-The JSON file is downloaded in minified format to reduce the size of the download. This, in turn, can make the payload difficult to read. Check out two options to prettify the file:
+* **ProvisioningLogs**: Downloads all the logs, except the provisioning steps and modified properties.
+* **ProvisioningLogs_ProvisioningSteps**: Contains the provisioning steps and the change ID. You can use the change ID to join the event with the other two files.
+* **ProvisioningLogs_ModifiedProperties**: Contains the attributes that were changed and the change ID. You can use the change ID to join the event with the other two files.
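Because all three files carry the change ID, you can stitch an event back together from the separate downloads. A minimal sketch in Python (the inline sample data and the column names `changeId`, `action`, `status`, and `step` are assumptions for illustration; check the headers of your actual CSV downloads):

```python
import csv
import io

# Inline sample data standing in for the downloaded CSV files;
# real column names may differ -- check the actual CSV headers.
logs_csv = "changeId,action,status\nc1,Create,Success\nc2,Update,Failure\n"
steps_csv = "changeId,step\nc1,Import object\nc1,Provision object\nc2,Import object\n"

# Index the main log by change ID.
logs = {row["changeId"]: row for row in csv.DictReader(io.StringIO(logs_csv))}

# Join: attach each provisioning step to its parent event via the change ID.
for step in csv.DictReader(io.StringIO(steps_csv)):
    logs[step["changeId"]].setdefault("steps", []).append(step["step"])

print(logs["c1"]["steps"])  # ['Import object', 'Provision object']
```

The same join works for the modified-properties file, since it shares the change ID key.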
-1. Use Visual Studio Code to format the JSON
+#### Open the JSON file
+To open the JSON file, use a text editor such as [Microsoft Visual Studio Code](https://aka.ms/vscode). Visual Studio Code makes the file easier to read by providing syntax highlighting. You can also open the JSON file by using browsers in an uneditable format, such as [Microsoft Edge](https://aka.ms/msedge).
-Follow the instructions defined [here](https://code.visualstudio.com/docs/languages/json#_formatting) to format the JSON file using Visual Studio Code.
+#### Prettify the JSON file
+The JSON file is downloaded in minified format to reduce the size of the download. This format can make the payload hard to read. Check out two options to prettify the file:
-2. Use PowerShell to format the JSON
+- Use [Visual Studio Code to format the JSON](https://code.visualstudio.com/docs/languages/json#_formatting).
-This script will output the json in a prettified format with tabs and spaces.
+- Use PowerShell to format the JSON. This script will output the JSON in a format that includes tabs and spaces:
-` $JSONContent = Get-Content -Path "<PATH TO THE PROVISIONING LOGS FILE>" | ConvertFrom-JSON`
+ ` $JSONContent = Get-Content -Path "<PATH TO THE PROVISIONING LOGS FILE>" | ConvertFrom-JSON`
-`$JSONContent | ConvertTo-Json > <PATH TO OUTPUT THE JSON FILE>`
+ `$JSONContent | ConvertTo-Json > <PATH TO OUTPUT THE JSON FILE>`
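If PowerShell isn't at hand, the same prettifying can be done with Python's standard library (a sketch; the inline string stands in for reading the downloaded file):

```python
import json

# The downloaded file is minified; re-emit it with indentation.
# The inline string is a stand-in for the actual file content.
minified = '{"jobId": "j1", "action": "Create"}'
pretty = json.dumps(json.loads(minified), indent=4)
print(pretty)
```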
-#### Parsing the JSON file
+#### Parse the JSON file
-Here are some sample commands to work with the JSON file using PowerShell. You can use any programming language that you are comfortable with.
+Here are some sample commands to work with the JSON file by using PowerShell. You can use any programming language that you're comfortable with.
-First, [read the JSON file](https://docs.microsoft.com/powershell/module/microsoft.powershell.utility/convertfrom-json?view=powershell-7.1) by running:
+First, [read the JSON file](https://docs.microsoft.com/powershell/module/microsoft.powershell.utility/convertfrom-json?view=powershell-7.1) by running this command:
` $JSONContent = Get-Content -Path "<PATH TO THE PROVISIONING LOGS FILE>" | ConvertFrom-JSON`
-Now you can parse the data per your scenario. Here are a couple examples:
+Now you can parse the data according to your scenario. Here are a couple of examples:
-1. Output all jobIDs in the JsonFile
+- Output all job IDs in the JSON file:
-`foreach ($provitem in $JSONContent) { $provitem.jobId }`
+ `foreach ($provitem in $JSONContent) { $provitem.jobId }`
-2. Output all changeIds for events where the action was "create"
+- Output all change IDs for events where the action was "create":
-`foreach ($provitem in $JSONContent) { `
-` if ($provItem.action -eq 'Create') {`
-` $provitem.changeId `
-` }`
-`}`
+ `foreach ($provitem in $JSONContent) { `
+ ` if ($provItem.action -eq 'Create') {`
+ ` $provitem.changeId `
+ ` }`
+ `}`
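As the section notes, any language works for this. Here's an equivalent sketch in Python, assuming the export is a JSON array of event objects with `jobId`, `action`, and `changeId` fields (the same shape the PowerShell loops above assume); the sample records are hypothetical:

```python
import json

# Hypothetical events in the shape the PowerShell examples assume.
events = json.loads("""[
  {"jobId": "job-1", "action": "Create", "changeId": "c-1"},
  {"jobId": "job-2", "action": "Update", "changeId": "c-2"},
  {"jobId": "job-1", "action": "Create", "changeId": "c-3"}
]""")

# Output all job IDs (mirrors the first foreach loop).
job_ids = [e["jobId"] for e in events]

# Output all change IDs where the action was "create" (second loop).
create_change_ids = [e["changeId"] for e in events
                     if e["action"].lower() == "create"]

print(job_ids)
print(create_change_ids)
```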
## What you should know
-- The Azure portal stores reported provisioning data for 30 days if you have a premium edition and 7 days if you have a free edition.The provisioning logs can be published to [log analytics](../app-provisioning/application-provisioning-log-analytics.md) for retention beyond 30 days.
+Here are some tips and considerations for provisioning reports:
+
+- The Azure portal stores reported provisioning data for 30 days if you have a premium edition and 7 days if you have a free edition. You can publish the provisioning logs to [Log Analytics](../app-provisioning/application-provisioning-log-analytics.md) for retention beyond 30 days.
-- You can use the Change ID attribute as unique identifier. This is, for example, helpful when interacting with product support.
+- You can use the change ID attribute as a unique identifier. This is helpful when you're interacting with product support, for example.
-- You may see skipped events for users that are not in scope. This is expected, especially when the sync scope is set to all users and groups. Our service will evaluate all the objects in the tenant, even the ones that are out of scope.
+- You might see skipped events for users who are not in scope. This is expected, especially when the sync scope is set to all users and groups. The service will evaluate all the objects in the tenant, even the ones that are out of scope.
-- The provisioning logs are currently unavailable in the government cloud. If you're unable to access the provisioning logs, please use the audit logs as a temporary workaround.
+- The provisioning logs are currently unavailable in the government cloud. If you can't access the provisioning logs, use the audit logs as a temporary workaround.
-- The provisioning logs do not show role imports (applies to AWS, SalesForce, and ZenDesk). The logs for role imports can be found in the audit logs.
+- The provisioning logs don't show role imports (applies to AWS, Salesforce, and Zendesk). You can find the logs for role imports in the audit logs.
-## Error Codes
+## Error codes
-Use the table below to better understand how to resolve errors you may find in the provisioning logs. For any error codes that are missing, provide feedback using the link at the bottom of this page.
+Use the following table to better understand how to resolve errors that you find in the provisioning logs. For any error codes that are missing, provide feedback by using the link at the bottom of this page.
-|Error Code|Description|
+|Error code|Description|
|||
-|Conflict, EntryConflict|Correct the conflicting attribute values in either Azure AD or the application, or review your matching attribute configuration if the conflicting user account was supposed to be matched and taken over. Review the following [documentation](../app-provisioning/customize-application-attributes.md) for more information on configuring matching attributes.|
-|TooManyRequests|The target app rejected this attempt to update the user because it is overloaded and receiving too many requests. There is nothing to do. This attempt will automatically be retired. Microsoft has also been notified of this issue.|
-|InternalServerError |The target app returned an unexpected error. There may be a service issue with the target application that is preventing this from working. This attempt will automatically be retired in 40 minutes.|
-|InsufficientRights, MethodNotAllowed, NotPermitted, Unauthorized| Azure AD was able to authenticate with the target application, but was not authorized to perform the update. Please review any instructions provided by the target application as well as the respective application [tutorial](../saas-apps/tutorial-list.md).|
-|UnprocessableEntity|The target application returned an unexpected response. The configuration of the target application may not be correct, or there may be a service issue with the target application that is preventing this from working.|
-|WebExceptionProtocolError |An HTTP protocol error occurred while connecting to the target application. There is nothing to do. This attempt will automatically be retired in 40 minutes.|
-|InvalidAnchor|A user that was previously created or matched by the provisioning service no longer exists. Check to ensure the user exists. To force a re-match of all users, use the MS Graph API to [restart job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta). Restarting provisioning will trigger an initial cycle, which can take time to complete. It also deletes the cache the provisioning service uses to operate, meaning that all users and groups in the tenant will have to be evaluated again and certain provisioning events could be dropped.|
-|NotImplemented | The target app returned an unexpected response. The configuration of the app may not be correct, or there may be a service issue with the target app that is preventing this from working. Please review any instructions provided by the target application and the respective application [tutorial](../saas-apps/tutorial-list.md). |
-|MandatoryFieldsMissing, MissingValues |The user could not be created because required values are missing. Correct the missing attribute values in the source record, or review your matching attribute configuration to ensure the required fields are not omitted. [Learn more](../app-provisioning/customize-application-attributes.md) about configuring matching attributes.|
-|SchemaAttributeNotFound |Could not perform the operation because an attribute was specified that does not exist in the target application. See the [documentation](../app-provisioning/customize-application-attributes.md) on attribute customization and ensure your configuration is correct.|
+|Conflict, EntryConflict|Correct the conflicting attribute values in either Azure AD or the application. Or, review your matching attribute configuration if the conflicting user account was supposed to be matched and taken over. Review the [documentation](../app-provisioning/customize-application-attributes.md) for more information on configuring matching attributes.|
+|TooManyRequests|The target app rejected this attempt to update the user because it's overloaded and receiving too many requests. There's nothing to do. This attempt will automatically be retried. Microsoft has also been notified of this issue.|
+|InternalServerError |The target app returned an unexpected error. A service issue with the target application might be preventing this from working. This attempt will automatically be retried in 40 minutes.|
+|InsufficientRights, MethodNotAllowed, NotPermitted, Unauthorized| Azure AD authenticated with the target application but was not authorized to perform the update. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md).|
+|UnprocessableEntity|The target application returned an unexpected response. The configuration of the target application might not be correct, or a service issue with the target application might be preventing this from working.|
+|WebExceptionProtocolError |An HTTP protocol error occurred in connecting to the target application. There is nothing to do. This attempt will automatically be retried in 40 minutes.|
+|InvalidAnchor|A user that was previously created or matched by the provisioning service no longer exists. Ensure that the user exists. To force a new matching of all users, use the Microsoft Graph API to [restart the job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta). <br><br>Restarting provisioning will trigger an initial cycle, which can take time to complete. Restarting provisioning also deletes the cache that the provisioning service uses to operate. That means all users and groups in the tenant will have to be evaluated again, and certain provisioning events might be dropped.|
+|NotImplemented | The target app returned an unexpected response. The configuration of the app might not be correct, or a service issue with the target app might be preventing this from working. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md). |
+|MandatoryFieldsMissing, MissingValues |The user could not be created because required values are missing. Correct the missing attribute values in the source record, or review your matching attribute configuration to ensure that the required fields are not omitted. [Learn more](../app-provisioning/customize-application-attributes.md) about configuring matching attributes.|
+|SchemaAttributeNotFound |The operation couldn't be performed because an attribute was specified that does not exist in the target application. See the [documentation](../app-provisioning/customize-application-attributes.md) on attribute customization and ensure that your configuration is correct.|
|InternalError |An internal service error occurred within the Azure AD provisioning service. There is nothing to do. This attempt will automatically be retried in 40 minutes.|
-|InvalidDomain |The operation could not be performed due to an attribute value containing an invalid domain name. Update the domain name on the user or add it to the permitted list in the target application. |
-|Timeout |The operation could not be completed because the target application took too long to respond. There is nothing to do. This attempt will automatically be retried in 40 minutes.|
-|LicenseLimitExceeded|The user could not be created in the target application because there are no available licenses for this user. Either procure more licenses for the target application, or review your user assignments and attribute mapping configuration to ensure that the correct users are assigned with the correct attributes.|
-|DuplicateTargetEntries |The operation could not be completed because more than one user in the target application was found with the configured matching attributes. Either remove the duplicate user from the target application, or reconfigure your attribute mappings as described [here](../app-provisioning/customize-application-attributes.md).|
-|DuplicateSourceEntries | The operation could not be completed because more than one user was found with the configured matching attributes. Either remove the duplicate user, or reconfigure your attribute mappings as described [here](../app-provisioning/customize-application-attributes.md).|
-|ImportSkipped | When each user is evaluated, we attempt to import the user from the source system. This error commonly occurs when the user being imported is missing the matching property defined in your attribute mappings. Without a value present on the user object for the matching attribute, we cannot evaluate scoping, matching, or export changes. Note, presence of this error does not indicate that the user is in scope as we have not yet evaluated scoping for the user.|
-|EntrySynchronizationSkipped | The provisioning service has successfully queried the source system and identified the user. No further action was taken on the user and they were skipped. The skip could be due to the user being out of scope or the user already existing in the target system with no further changes required.|
-|SystemForCrossDomainIdentityManagementMultipleEntriesInResponse| When performing a GET request to retrieve a user or group, we received multiple users or groups in the response. We expected to receive only one user or group in the response. If, [for example](../app-provisioning/use-scim-to-provision-users-and-groups.md#get-group), we do a GET request to retrieve a group and provide a filter to exclude members and your SCIM endpoint returns the members, we will throw this error.|
+|InvalidDomain |The operation couldn't be performed because an attribute value contains an invalid domain name. Update the domain name on the user or add it to the permitted list in the target application. |
+|Timeout |The operation couldn't be completed because the target application took too long to respond. There is nothing to do. This attempt will automatically be retried in 40 minutes.|
+|LicenseLimitExceeded|The user couldn't be created in the target application because there are no available licenses for this user. Procure more licenses for the target application. Or, review your user assignments and attribute mapping configuration to ensure that the correct users are assigned with the correct attributes.|
+|DuplicateTargetEntries |The operation couldn't be completed because more than one user in the target application was found with the configured matching attributes. Remove the duplicate user from the target application, or [reconfigure your attribute mappings](../app-provisioning/customize-application-attributes.md).|
+|DuplicateSourceEntries | The operation couldn't be completed because more than one user was found with the configured matching attributes. Remove the duplicate user, or [reconfigure your attribute mappings](../app-provisioning/customize-application-attributes.md).|
+|ImportSkipped | When each user is evaluated, the system tries to import the user from the source system. This error commonly occurs when the user who's being imported is missing the matching property defined in your attribute mappings. Without a value present on the user object for the matching attribute, the system can't evaluate scoping, matching, or export changes. Note that the presence of this error does not indicate that the user is in scope, because the system hasn't yet evaluated scoping for the user.|
+|EntrySynchronizationSkipped | The provisioning service has successfully queried the source system and identified the user. No further action was taken on the user and they were skipped. The user might have been out of scope, or the user might have already existed in the target system with no further changes required.|
+|SystemForCrossDomainIdentityManagementMultipleEntriesInResponse| A GET request to retrieve a user or group received multiple users or groups in the response. The system expects to receive only one user or group in the response. [For example](../app-provisioning/use-scim-to-provision-users-and-groups.md#get-group), if you do a GET request to retrieve a group and provide a filter to exclude members, and your System for Cross-Domain Identity Management (SCIM) endpoint returns the members, you'll get this error.|
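The codes in the table differ in whether the service retries automatically or you need to act. When triaging a downloaded log, a short filter can separate the two. This sketch assumes each record exposes an `errorCode` field — that field name is an assumption for illustration; check a sample record from your own export for the actual property name:

```python
import json

# Hypothetical records; real exports may nest the code differently.
events = json.loads("""[
  {"changeId": "c-1", "errorCode": "TooManyRequests"},
  {"changeId": "c-2", "errorCode": "InvalidAnchor"},
  {"changeId": "c-3", "errorCode": "TooManyRequests"}
]""")

# Codes the table says the service retries on its own.
retryable = {"TooManyRequests", "InternalServerError",
             "WebExceptionProtocolError", "InternalError", "Timeout"}

# Everything else needs investigation per the table's guidance.
needs_action = [e["changeId"] for e in events
                if e["errorCode"] not in retryable]
print(needs_action)
```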
## Next steps

* [Check the status of user provisioning](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md)
* [Problem configuring user provisioning to an Azure AD Gallery application](../app-provisioning/application-provisioning-config-problem.md)
-* [Provisioning logs graph API](/graph/api/resources/provisioningobjectsummary?view=graph-rest-beta)
\ No newline at end of file
+* [Graph API for provisioning logs](/graph/api/resources/provisioningobjectsummary?view=graph-rest-beta)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/bluejeans-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bluejeans-provisioning-tutorial.md
@@ -15,68 +15,66 @@
# Tutorial: Configure BlueJeans for automatic user provisioning
-The objective of this tutorial is to demonstrate the steps to be performed in BlueJeans and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to BlueJeans.
+This tutorial describes the steps you need to perform in both BlueJeans and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [BlueJeans](https://www.bluejeans.com/pricing) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-> [!NOTE]
-> This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in BlueJeans
+> * Remove users in BlueJeans when they no longer require access
+> * Keep user attributes synchronized between Azure AD and BlueJeans
+> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/bluejeans-tutorial) to BlueJeans (recommended)
## Prerequisites
-The scenario outlined in this tutorial assumes that you already have the following:
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* An Azure AD tenant
-* A BlueJeans tenant with the [My Company](https://www.BlueJeans.com/pricing) plan or better enabled
-* A user account in BlueJeans with Admin permissions
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* A BlueJeans tenant with [My Company](https://www.bluejeans.com/pricing) plan or better enabled.
+* A user account in BlueJeans with Admin permissions.
+* SCIM provisioning enabled in BlueJeans Enterprise.
> [!NOTE]
> The Azure AD provisioning integration relies on the [BlueJeans API](https://BlueJeans.github.io/developer), which is available to BlueJeans teams on the Standard plan or better.
-## Adding BlueJeans from the gallery
-
-Before configuring BlueJeans for automatic user provisioning with Azure AD, you need to add BlueJeans from the Azure AD application gallery to your list of managed SaaS applications.
-
-**To add BlueJeans from the Azure AD application gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, in the left navigation panel, select **Azure Active Directory**.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Go to **Enterprise applications**, and then select **All applications**.
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and BlueJeans](../app-provisioning/customize-application-attributes.md).
- ![The Enterprise applications blade](common/enterprise-applications.png)
+## Step 2. Configure BlueJeans to support provisioning with Azure AD
-3. To add a new application, select the **New application** button at the top of the pane.
+1. Log in to the BlueJeans admin console. Navigate to **Group Settings** > **Security**.
+2. Select **Single Sign On** and **Configure Now**.
- ![The New application button](common/add-new-app.png)
+ ![Screenshot of the Configure Now option.](./media/bluejeans-provisioning-tutorial/configure.png)
-4. In the search box, enter **BlueJeans**, select **BlueJeans** in the results panel, and then select the **Add** button to add the application.
+3. In the **Provision & Integration** window, select **Create and manage user accounts through IDP** and click **GENERATE TOKEN**.
- ![BlueJeans in the results list](common/search-new-app.png)
+ ![Screenshot of the Generate Token option.](./media/bluejeans-provisioning-tutorial/token.png)
+
+4. Copy and save the Token.
+5. The BlueJeans Tenant URL is `https://api.bluejeans.com/v2/scim`. The **Tenant URL** and the **Secret Token** from the previous step will be entered in the Provisioning tab of your BlueJeans application in the Azure portal.
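Before you enter these values in the Azure portal, you can sanity-check the Tenant URL and token with a standard SCIM request. The sketch below only constructs the request; the `/Users` path, `count` parameter, and `Bearer` scheme follow the general SCIM convention and are assumptions here, not documented BlueJeans specifics:

```python
from urllib.request import Request

# Tenant URL from step 5; the token is a placeholder from step 4.
TENANT_URL = "https://api.bluejeans.com/v2/scim"
token = "<SECRET TOKEN FROM STEP 4>"

# A SCIM service conventionally serves user resources under /Users.
req = Request(f"{TENANT_URL}/Users?count=1",
              headers={"Authorization": f"Bearer {token}",
                       "Accept": "application/scim+json"})

print(req.full_url)
# To actually send it: urllib.request.urlopen(req) -- requires a valid token.
```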
-## Assigning users to BlueJeans
+## Step 3. Add BlueJeans from the Azure AD application gallery
-Azure Active Directory uses a concept called "assignments" to determine which users should receive access to selected apps. In the context of automatic user provisioning, only the users and/or groups that have been "assigned" to an application in Azure AD are synchronized.
+Add BlueJeans from the Azure AD application gallery to start managing provisioning to BlueJeans. If you have previously set up BlueJeans for SSO, you can use the same application. However, we recommend that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-Before configuring and enabling automatic user provisioning, you should decide which users and/or groups in Azure AD need access to BlueJeans. Once decided, you can assign these users and/or groups to BlueJeans by following the instructions here:
+## Step 4. Define who will be in scope for provisioning
-* [Assign a user or group to an enterprise app](../manage-apps/assign-user-or-group-access-portal.md)
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-### Important tips for assigning users to BlueJeans
+* When assigning users to BlueJeans, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-* It is recommended that a single Azure AD user is assigned to BlueJeans to test the automatic user provisioning configuration. Additional users and/or groups may be assigned later.
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning a user to BlueJeans, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning.
+## Step 5. Configure automatic user provisioning to BlueJeans
-## Configuring automatic user provisioning to BlueJeans
-
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in BlueJeans based on user and/or group assignments in Azure AD.
-
-> [!TIP]
-> You may also choose to enable SAML-based single sign-on for BlueJeans, following the instructions provided in the [BlueJeans single sign-on tutorial](bluejeans-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features compliment each other.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in BlueJeans based on user assignments in Azure AD.
### To configure automatic user provisioning for BlueJeans in Azure AD:
-1. Sign in to the [Azure portal](https://portal.azure.com) and select **Enterprise Applications**, select **All applications**, then select **BlueJeans**.
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
![Enterprise applications blade](common/enterprise-applications.png)
@@ -86,26 +84,26 @@ This section guides you through the steps to configure the Azure AD provisioning
3. Select the **Provisioning** tab.
- ![Provisioning tab](common/provisioning.png)
+ ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
4. Set the **Provisioning Mode** to **Automatic**.

 ![Provisioning tab automatic](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input your BlueJeans Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to BlueJeans. If the connection fails, ensure your BlueJeans account has Admin permissions and try again.
+5. Under the **Admin Credentials** section, input your BlueJeans Tenant URL and Secret Token retrieved in Step 2. Click **Test Connection** to ensure Azure AD can connect to BlueJeans. If the connection fails, ensure your BlueJeans account has Admin permissions and try again.
![Token](common/provisioning-testconnection-tenanturltoken.png)
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and check the checkbox - **Send an email notification when a failure occurs**.
+6. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications, and select the **Send an email notification when a failure occurs** check box.
![Notification Email](common/provisioning-notification-email.png)
-7. Click **Save**.
+7. Select **Save**.
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to BlueJeans**.
-9. Review the user attributes that are synchronized from Azure AD to BlueJeans in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in BlueJeans for update operations. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to BlueJeans in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in BlueJeans for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the BlueJeans API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for filtering|
||||
@@ -124,7 +122,7 @@ This section guides you through the steps to configure the Azure AD provisioning
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-12. Define the users and/or groups that you would like to provision to BlueJeans by choosing the desired values in **Scope** in the **Settings** section.
+12. Define the users that you would like to provision to BlueJeans by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
@@ -132,9 +130,14 @@ This section guides you through the steps to configure the Azure AD provisioning
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity report, which describes all actions performed by the Azure AD provisioning service on BlueJeans.
+This operation starts the initial synchronization of all users defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
-For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Connector limitations
@@ -147,10 +150,4 @@ For more information on how to read the Azure AD provisioning logs, see [Reporti
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
-
-<!--Image references-->
-
-[1]: ./media/bluejeans-provisioning-tutorial/tutorial_general_01.png
-[2]: ./media/bluejeans-tutorial/tutorial_general_02.png
-[3]: ./media/bluejeans-tutorial/tutorial_general_03.png
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/cloudknox-permissions-management-platform-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cloudknox-permissions-management-platform-tutorial.md new file mode 100644 /dev/null
@@ -0,0 +1,150 @@
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with CloudKnox Permissions Management Platform | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and CloudKnox Permissions Management Platform.
++++++++ Last updated : 01/27/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with CloudKnox Permissions Management Platform
+
+In this tutorial, you'll learn how to integrate CloudKnox Permissions Management Platform with Azure Active Directory (Azure AD). When you integrate CloudKnox Permissions Management Platform with Azure AD, you can:
+
+* Control in Azure AD who has access to CloudKnox Permissions Management Platform.
+* Enable your users to be automatically signed-in to CloudKnox Permissions Management Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* CloudKnox Permissions Management Platform single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* CloudKnox Permissions Management Platform supports **IDP**-initiated SSO.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Adding CloudKnox Permissions Management Platform from the gallery
+
+To configure the integration of CloudKnox Permissions Management Platform into Azure AD, you need to add CloudKnox Permissions Management Platform from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **CloudKnox Permissions Management Platform** in the search box.
+1. Select **CloudKnox Permissions Management Platform** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for CloudKnox Permissions Management Platform
+
+Configure and test Azure AD SSO with CloudKnox Permissions Management Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in CloudKnox Permissions Management Platform.
+
+To configure and test Azure AD SSO with CloudKnox Permissions Management Platform, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure CloudKnox Permissions Management Platform SSO](#configure-cloudknox-permissions-management-platform-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create CloudKnox Permissions Management Platform test user](#create-cloudknox-permissions-management-platform-test-user)** - to have a counterpart of B.Simon in CloudKnox Permissions Management Platform that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **CloudKnox Permissions Management Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+
+ In the **Reply URL** text box, type a URL using the following pattern:
+ `https://app.cloudknox.io/saml/<ID>`
+
+ > [!NOTE]
+ > The Reply URL value is not real. Update the value with the actual Reply URL. Contact [CloudKnox Permissions Management Platform Client support team](mailto:support@cloudknox.io) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The CloudKnox Permissions Management Platform application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the CloudKnox Permissions Management Platform application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them according to your requirements.
+
+ | Name | Source attribute|
+ | | |
+ | First_Name | user.givenname |
+ | Groups | user.groups |
+ | Last_Name | user.surname |
+ | Email_Address | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. In the **Set up CloudKnox Permissions Management Platform** section, copy the appropriate URL(s) based on your requirements.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
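The downloaded **Federation Metadata XML** carries the token-signing certificate that the application side needs. As an illustration only — the metadata fragment below is invented, though the element names follow the SAML 2.0 metadata schema — the certificate can be pulled out of the file like this:

```python
import xml.etree.ElementTree as ET

# Invented metadata fragment for illustration; the real Federation Metadata
# XML downloaded from the portal is much larger but uses the same structure.
METADATA = """\
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
                  entityID="https://sts.windows.net/placeholder-tenant/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <ds:KeyInfo>
        <ds:X509Data>
          <ds:X509Certificate>MIIFakeCertBody</ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>
"""

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

def signing_certificate(metadata_xml: str) -> str:
    """Return the Base64 token-signing certificate from IdP metadata."""
    root = ET.fromstring(metadata_xml)
    cert = root.find(
        ".//md:KeyDescriptor[@use='signing']/ds:KeyInfo/ds:X509Data/ds:X509Certificate",
        NS,
    )
    return cert.text.strip()
```

The `entityID` and certificate body here are placeholders; with a real file, `signing_certificate(open("metadata.xml").read())` returns the Base64 text you would hand to the service provider.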
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure AD single sign-on by granting access to CloudKnox Permissions Management Platform.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **CloudKnox Permissions Management Platform**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure CloudKnox Permissions Management Platform SSO
+
+To configure single sign-on on the **CloudKnox Permissions Management Platform** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [CloudKnox Permissions Management Platform support team](mailto:support@cloudknox.io). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
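Once the support team wires up the connection, the custom claims from the attribute mapping table earlier (First_Name, Groups, Last_Name, Email_Address) travel inside the SAML assertion. Purely as an illustration — the XML fragment below is invented, with placeholder values — a captured attribute statement can be inspected like this:

```python
import xml.etree.ElementTree as ET

# Invented AttributeStatement fragment showing how the mapped claims from
# the attribute table travel in the SAML response; values are placeholders.
ASSERTION = """\
<saml:AttributeStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Attribute Name="First_Name">
    <saml:AttributeValue>Britta</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="Groups">
    <saml:AttributeValue>Administrators</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="Last_Name">
    <saml:AttributeValue>Simon</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="Email_Address">
    <saml:AttributeValue>B.Simon@contoso.com</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
"""

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def read_attributes(xml_text: str) -> dict:
    """Collect each Attribute's Name and its first AttributeValue."""
    root = ET.fromstring(xml_text)
    return {
        attr.get("Name"): attr.find("saml:AttributeValue", NS).text
        for attr in root.findall("saml:Attribute", NS)
    }
```

`read_attributes(ASSERTION)` returns a plain dict keyed by claim name, which can be handy when troubleshooting a mismatch between what Azure AD sends and what the application expects.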
+### Create CloudKnox Permissions Management Platform test user
+
+In this section, you create a user called Britta Simon in CloudKnox Permissions Management Platform. Work with the [CloudKnox Permissions Management Platform support team](mailto:support@cloudknox.io) to add the users in CloudKnox Permissions Management Platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the CloudKnox Permissions Management Platform instance for which you set up SSO.
+
+* You can use Microsoft My Apps. When you click the CloudKnox Permissions Management Platform tile in My Apps, you should be automatically signed in to the CloudKnox Permissions Management Platform instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure CloudKnox Permissions Management Platform, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/olfeo-saas-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/olfeo-saas-tutorial.md new file mode 100644 /dev/null
@@ -0,0 +1,136 @@
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Olfeo SAAS | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Olfeo SAAS.
+ Last updated: 01/27/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Olfeo SAAS
+
+In this tutorial, you'll learn how to integrate Olfeo SAAS with Azure Active Directory (Azure AD). When you integrate Olfeo SAAS with Azure AD, you can:
+
+* Control in Azure AD who has access to Olfeo SAAS.
+* Enable your users to be automatically signed in to Olfeo SAAS with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Olfeo SAAS single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Olfeo SAAS supports **SP**-initiated SSO.
+
+## Adding Olfeo SAAS from the gallery
+
+To configure the integration of Olfeo SAAS into Azure AD, you need to add Olfeo SAAS from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Olfeo SAAS** in the search box.
+1. Select **Olfeo SAAS** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Olfeo SAAS
+
+Configure and test Azure AD SSO with Olfeo SAAS using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Olfeo SAAS.
+
+To configure and test Azure AD SSO with Olfeo SAAS, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Olfeo SAAS SSO](#configure-olfeo-saas-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Olfeo SAAS test user](#create-olfeo-saas-test-user)** - to have a counterpart of B.Simon in Olfeo SAAS that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Olfeo SAAS** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.olfeo.com/api/sso/saml/<ID>/login`
+
+ b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.olfeo.com/api/sso/saml/<ID>/login`
+
+ c. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.olfeo.com/api/sso/saml/<ID>/acs`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Sign on URL, Identifier and Reply URL. Contact [Olfeo SAAS Client support team](mailto:equipe-rd@olfeo.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
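Before saving, you can sanity-check the Sign on URL, Identifier, and Reply URL values you received against the documented patterns. A small sketch — the `contoso` subdomain and `1234` ID below are placeholders, not real values:

```python
import re

# Regular expressions derived from the URL patterns documented above.
# Note the Sign on URL and Identifier share the same /login pattern.
PATTERNS = {
    "sign_on":    r"^https://[\w-]+\.olfeo\.com/api/sso/saml/[\w-]+/login$",
    "identifier": r"^https://[\w-]+\.olfeo\.com/api/sso/saml/[\w-]+/login$",
    "reply":      r"^https://[\w-]+\.olfeo\.com/api/sso/saml/[\w-]+/acs$",
}

def matches(kind: str, url: str) -> bool:
    """Check whether a URL fits the documented pattern for its field."""
    return re.match(PATTERNS[kind], url) is not None
```

For example, `matches("reply", "https://contoso.olfeo.com/api/sso/saml/1234/acs")` is true, while a `/login` URL pasted into the Reply URL field would be flagged.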
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure AD single sign-on by granting access to Olfeo SAAS.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Olfeo SAAS**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Olfeo SAAS SSO
+
+To configure single sign-on on the **Olfeo SAAS** side, you need to send the **App Federation Metadata Url** to the [Olfeo SAAS support team](mailto:equipe-rd@olfeo.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Olfeo SAAS test user
+
+In this section, you create a user called Britta Simon in Olfeo SAAS. Work with the [Olfeo SAAS support team](mailto:equipe-rd@olfeo.com) to add the users in the Olfeo SAAS platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Olfeo SAAS sign-on URL, where you can initiate the login flow.
+
+* Go to the Olfeo SAAS sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Olfeo SAAS tile in My Apps, you will be redirected to the Olfeo SAAS sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Olfeo SAAS, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/penji-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/penji-tutorial.md new file mode 100644 /dev/null
@@ -0,0 +1,149 @@
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Penji | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Penji.
+ Last updated: 01/27/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Penji
+
+In this tutorial, you'll learn how to integrate Penji with Azure Active Directory (Azure AD). When you integrate Penji with Azure AD, you can:
+
+* Control in Azure AD who has access to Penji.
+* Enable your users to be automatically signed in to Penji with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Penji single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Penji supports **SP**-initiated SSO.
+
+## Adding Penji from the gallery
+
+To configure the integration of Penji into Azure AD, you need to add Penji from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Penji** in the search box.
+1. Select **Penji** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Penji
+
+Configure and test Azure AD SSO with Penji using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Penji.
+
+To configure and test Azure AD SSO with Penji, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Penji SSO](#configure-penji-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Penji test user](#create-penji-test-user)** - to have a counterpart of B.Simon in Penji that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Penji** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Sign on URL** text box, type the URL:
+ `https://web.penjiapp.com/login`
+
+ b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://cloud.penjiapp.com/saml/<ID>/sp/metadata`
+
+ c. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://cloud.penjiapp.com/saml/<ID>/login/callback`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Penji Client support team](mailto:support@penjiapp.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Penji application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Penji application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them according to your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | urn:oid:0.9.2342.19200300.100.1.1 | user.userprincipalname |
+ | urn:oid:0.9.2342.19200300.100.1.3 | user.mail |
++
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
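
The two claim names in the attribute table above are standard LDAP attribute OIDs: `urn:oid:0.9.2342.19200300.100.1.1` is `uid` and `urn:oid:0.9.2342.19200300.100.1.3` is `mail`. A small lookup table like the following can make captured SAML responses easier to read when troubleshooting; it is illustrative only:

```python
# Friendly names for the OID-style claim names Penji expects.
# uid and mail are the standard LDAP attribute names behind these OIDs.
OID_FRIENDLY_NAMES = {
    "urn:oid:0.9.2342.19200300.100.1.1": "uid",   # mapped from user.userprincipalname
    "urn:oid:0.9.2342.19200300.100.1.3": "mail",  # mapped from user.mail
}

def friendly(claim_name: str) -> str:
    """Translate an OID-style claim name to its common LDAP name, if known."""
    return OID_FRIENDLY_NAMES.get(claim_name, claim_name)
```

For example, `friendly("urn:oid:0.9.2342.19200300.100.1.3")` returns `"mail"`, while an unrecognized claim name passes through unchanged.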
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure AD single sign-on by granting access to Penji.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Penji**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Penji SSO
+
+To configure single sign-on on the **Penji** side, you need to send the **App Federation Metadata Url** to the [Penji support team](mailto:support@penjiapp.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Penji test user
+
+In this section, you create a user called Britta Simon in Penji. Work with the [Penji support team](mailto:support@penjiapp.com) to add the users in the Penji platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Penji sign-on URL, where you can initiate the login flow.
+
+* Go to the Penji sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Penji tile in My Apps, you will be redirected to the Penji sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Penji, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md
@@ -219,9 +219,8 @@ In this section, you will configure how user data flows from SuccessFactors to A
1. In the **Attribute mappings** section, you can define how individual SuccessFactors attributes map to Active Directory attributes.
- >[!NOTE]
- >For the complete list of SuccessFactors attribute supported by the application, please refer to [SuccessFactors Attribute Reference](../app-provisioning/sap-successfactors-attribute-reference.md)
-
+ >[!NOTE]
+ >For the complete list of SuccessFactors attribute supported by the application, please refer to [SuccessFactors Attribute Reference](../app-provisioning/sap-successfactors-attribute-reference.md)
1. Click on an existing attribute mapping to update it, or click **Add new mapping** at the bottom of the screen to add new mappings. An individual attribute mapping supports these properties:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial.md
@@ -181,7 +181,7 @@ This section provides steps for user account provisioning from SuccessFactors to
To provision to Active Directory on-premises, the Provisioning agent must be installed on a domain-joined server that has network access to the desired Active Directory domain(s).
-Transfer the downloaded agent installer to the server host and follow the steps listed [in the install agent section](../cloud-provisioning/how-to-install.md) to complete the agent configuration.
+Transfer the downloaded agent installer to the server host and follow the steps listed [in the install agent section](../cloud-sync/how-to-install.md) to complete the agent configuration.
### Part 3: In the provisioning app, configure connectivity to SuccessFactors and Active Directory In this step, we establish connectivity with SuccessFactors and Active Directory in the Azure portal.
@@ -204,12 +204,12 @@ In this step, we establish connectivity with SuccessFactors and Active Directory
> This setting only comes into play for user account creations if the *parentDistinguishedName* attribute is not configured in the attribute mappings. This setting is not used for user search or update operations. The entire domain sub tree falls in the scope of the search operation. * **Notification Email ΓÇô** Enter your email address, and check the "send email if failure occurs" checkbox.
- > [!NOTE]
- > The Azure AD Provisioning Service sends email notification if the provisioning job goes into a [quarantine](../app-provisioning/application-provisioning-quarantine-status.md) state.
+ > [!NOTE]
+ > The Azure AD Provisioning Service sends email notification if the provisioning job goes into a [quarantine](../app-provisioning/application-provisioning-quarantine-status.md) state.
* Click the **Test Connection** button. If the connection test succeeds, click the **Save** button at the top. If it fails, double-check that the SuccessFactors credentials and the AD credentials configured on the agent setup are valid.
- >[!div class="mx-imgBorder"]
- >![Azure portal](./media/sap-successfactors-inbound-provisioning/sf2ad-provisioning-creds.png)
+ >[!div class="mx-imgBorder"]
+ >![Azure portal](./media/sap-successfactors-inbound-provisioning/sf2ad-provisioning-creds.png)
* Once the credentials are saved successfully, the **Mappings** section will display the default mapping **Synchronize SuccessFactors Users to On Premises Active Directory**
@@ -246,9 +246,8 @@ In this section, you will configure how user data flows from SuccessFactors to A
1. In the **Attribute mappings** section, you can define how individual SuccessFactors attributes map to Active Directory attributes.
- >[!NOTE]
- >For the complete list of SuccessFactors attribute supported by the application, please refer to [SuccessFactors Attribute Reference](../app-provisioning/sap-successfactors-attribute-reference.md)
-
+ >[!NOTE]
+ >For the complete list of SuccessFactors attribute supported by the application, please refer to [SuccessFactors Attribute Reference](../app-provisioning/sap-successfactors-attribute-reference.md)
1. Click on an existing attribute mapping to update it, or click **Add new mapping** at the bottom of the screen to add new mappings. An individual attribute mapping supports these properties:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/sigma-computing-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sigma-computing-tutorial.md new file mode 100644 /dev/null
@@ -0,0 +1,168 @@
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Sigma Computing | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Sigma Computing.
+ Last updated: 01/27/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Sigma Computing
+
+In this tutorial, you'll learn how to integrate Sigma Computing with Azure Active Directory (Azure AD). When you integrate Sigma Computing with Azure AD, you can:
+
+* Control in Azure AD who has access to Sigma Computing.
+* Enable your users to be automatically signed in to Sigma Computing with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Sigma Computing single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Sigma Computing supports **SP and IDP**-initiated SSO.
+* Sigma Computing supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Adding Sigma Computing from the gallery
+
+To configure the integration of Sigma Computing into Azure AD, you need to add Sigma Computing from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Sigma Computing** in the search box.
+1. Select **Sigma Computing** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Sigma Computing
+
+Configure and test Azure AD SSO with Sigma Computing using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Sigma Computing.
+
+To configure and test Azure AD SSO with Sigma Computing, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Sigma Computing SSO](#configure-sigma-computing-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Sigma Computing test user](#create-sigma-computing-test-user)** - to have a counterpart of B.Simon in Sigma Computing that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Sigma Computing** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | Sign-on URL |
+ ||
+ | `https://app.sigmacomputing.com/<CustomerOrg>`|
+ | `https://aws.sigmacomputing.com/<CustomerOrg>`|
+
+ > [!NOTE]
+   > These values are not real. Update them with the actual Sign-on URL. Contact the [Sigma Computing Client support team](mailto:support@sigmacomputing.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
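   If you script this setup, a quick shell check of the candidate Sign-on URL against the two documented patterns can catch typos before you save. A minimal sketch (the org name `contoso` is a hypothetical example, not a real value):

   ```shell
   # Check a candidate Sign-on URL against the documented Sigma Computing patterns.
   SIGNON_URL="https://app.sigmacomputing.com/contoso"   # hypothetical example value

   if echo "$SIGNON_URL" | grep -Eq '^https://(app|aws)\.sigmacomputing\.com/[^/]+$'; then
     echo "URL matches a documented pattern"
   else
     echo "URL does not match; double-check the value" >&2
   fi
   ```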
+
+1. Click **Save**.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
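   Before uploading the certificate to Sigma Computing, you can optionally inspect it with `openssl` to confirm its subject and expiry date. A sketch (the self-signed certificate generated here is only a stand-in so the commands run end to end; substitute the path of the file you actually downloaded):

   ```shell
   # Generate a throwaway self-signed certificate as a stand-in for the
   # downloaded Certificate (Base64) file.
   openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
     -keyout /tmp/demo-key.pem -out /tmp/demo-cert.cer -days 1 2>/dev/null

   # Print the subject and expiry date of the certificate.
   openssl x509 -in /tmp/demo-cert.cer -noout -subject -enddate
   ```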
+
+1. In the **Set up Sigma Computing** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Sigma Computing.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Sigma Computing**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Sigma Computing SSO
+
+1. Log in to your Sigma account.
+
+1. Navigate to the **Admin Portal** by selecting **Administration** from the user menu.
+
+1. Perform the following steps on the page shown below.
+
+ ![Configure Sigma Computing SSO section](./media/sigma-computing-tutorial/authentication.png)
+
+ a. Select the **Authentication** page from the left panel.
+
+ b. Under **Authentication Method**, select **SAML** or **SAML or password**.
+
+ c. In the **Identity provider login URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+    d. Open the **Certificate (Base64)** that you downloaded from the Azure portal in Notepad and paste its content into the **Identity Provider X509 certificate** textbox.
+
+ e. Click on **Save**.
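   Step d pastes the Base64 certificate body into Sigma. If you prefer to extract and sanity-check that body from a terminal rather than Notepad, here is a sketch (the stand-in file is not a real certificate; check Sigma's Authentication page for whether it expects the BEGIN/END lines to be included):

   ```shell
   # Create a tiny stand-in PEM file (not a real certificate) so the commands run end to end.
   printf '%s\n' '-----BEGIN CERTIFICATE-----' 'TUlJQw==' '-----END CERTIFICATE-----' > /tmp/demo.cer

   # Strip the BEGIN/END lines and line breaks, leaving only the Base64 body.
   BODY=$(grep -v 'CERTIFICATE' /tmp/demo.cer | tr -d '\n')

   # Validate that the body is well-formed Base64 (base64 -d fails otherwise).
   echo "$BODY" | base64 -d > /dev/null && echo "valid Base64"
   ```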
+
+### Create Sigma Computing test user
+
+In this section, a user called Britta Simon is created in Sigma Computing. Sigma Computing supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Sigma Computing, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Sigma Computing Sign-on URL, where you can initiate the login flow.
+
+* Go to the Sigma Computing Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Sigma Computing instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Sigma Computing tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Sigma Computing instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure Sigma Computing you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/timeclock-365-saml-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/timeclock-365-saml-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 12/16/2020 Last updated : 01/28/2021
@@ -66,7 +66,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Timeclock 365 SAML** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -106,7 +106,15 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Timeclock 365 SAML SSO
-1. Open a new tab in your browser, and sign in to your Timeclock 365 SAML company site as an administrator.
+1. To automate the configuration within Timeclock 365 SAML, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+2. After adding extension to the browser, click on **Set up Timeclock 365 SAML** will direct you to the Timeclock 365 SAML application. From there, provide the admin credentials to sign into Timeclock 365 SAML. The browser extension will automatically configure the application for you and automate steps 3-4.
+
+ ![Setup configuration](common/setup-sso.png)
+
+3. If you want to set up Timeclock 365 SAML manually, in a different web browser window, sign in to your Timeclock 365 SAML company site as an administrator.
1. Perform the below mentioned steps.
@@ -145,4 +153,4 @@ In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Timeclock 365 SAML you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
+Once you configure Timeclock 365 SAML you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/workday-inbound-cloud-only-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workday-inbound-cloud-only-tutorial.md
@@ -210,11 +210,11 @@ Once the Workday provisioning app configurations have been completed, you can tu
## Next steps
+* [Learn more about Azure AD and Workday integration scenarios and web service calls](../app-provisioning/workday-integration-reference.md)
* [Learn more about supported Workday Attributes for inbound provisioning](../app-provisioning/workday-attribute-reference.md) * [Learn how to configure Workday Writeback](workday-writeback-tutorial.md) * [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) * [Learn how to configure single sign-on between Workday and Azure Active Directory](workday-tutorial.md)
-* [Learn how to integrate other SaaS applications with Azure Active Directory](tutorial-list.md)
* [Learn how to export and import your provisioning configurations](../app-provisioning/export-import-provisioning-configuration.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/workday-inbound-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workday-inbound-tutorial.md
@@ -151,7 +151,7 @@ In this step, you'll grant "domain security" policy permissions for the worker d
1. Search and select the security group created in the previous step. >[!div class="mx-imgBorder"]
- >![Select Security Group](./media/workday-inbound-tutorial/select-security-group-msft-wdad.png)
+ >![Select Security Group](./media/workday-inbound-tutorial/select-security-group-workday.png)
1. Click on the ellipsis (...) next to the group name and from the menu, select **Security Group > Maintain Domain Permissions for Security Group** >[!div class="mx-imgBorder"]
@@ -222,7 +222,7 @@ In this step, you'll grant "business process security" policy permissions for th
## Provisioning Agent installation prerequisites
-Review the [provisioning agent installation prerequisites](../cloud-provisioning/how-to-prerequisites.md) before proceeding to the next section.
+Review the [provisioning agent installation prerequisites](../cloud-sync/how-to-prerequisites.md) before proceeding to the next section.
## Configuring user provisioning from Workday to Active Directory
@@ -261,7 +261,7 @@ This section provides steps for user account provisioning from Workday to each A
To provision to Active Directory on-premises, the Provisioning agent must be installed on a domain-joined server that has network access to the desired Active Directory domain(s).
-Transfer the downloaded agent installer to the server host and follow the steps listed [in the **Install agent** section](../cloud-provisioning/how-to-install.md) to complete the agent configuration.
+Transfer the downloaded agent installer to the server host and follow the steps listed [in the **Install agent** section](../cloud-sync/how-to-install.md) to complete the agent configuration.
### Part 3: In the provisioning app, configure connectivity to Workday and Active Directory In this step, we establish connectivity with Workday and Active Directory in the Azure portal.
@@ -331,7 +331,7 @@ In this section, you will configure how user data flows from Workday to Active D
* Operator: IS NOT NULL > [!TIP]
- > When you are configuring the provisioning app for the first time, you will need to test and verify your attribute mappings and expressions to make sure that it is giving you the desired result. Microsoft recommends using the scoping filters under **Source Object Scope** to test your mappings with a few test users from Workday. Once you have verified that the mappings work, then you can either remove the filter or gradually expand it to include more users.
+ > When you are configuring the provisioning app for the first time, you will need to test and verify your attribute mappings and expressions to make sure that it is giving you the desired result. Microsoft recommends using [scoping filters](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md) under **Source Object Scope** and [on-demand provisioning](../app-provisioning/provision-on-demand.md) to test your mappings with a few test users from Workday. Once you have verified that the mappings work, then you can either remove the filter or gradually expand it to include more users.
> [!CAUTION] > The default behavior of the provisioning engine is to disable/delete users that go out of scope. This may not be desirable in your Workday to AD integration. To override this default behavior refer to the article [Skip deletion of user accounts that go out of scope](../app-provisioning/skip-out-of-scope-deletions.md)
@@ -1066,7 +1066,8 @@ With respect to data retention, the Azure AD provisioning service does not gener
## Next steps
+* [Learn more about Azure AD and Workday integration scenarios and web service calls](../app-provisioning/workday-integration-reference.md)
* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) * [Learn how to configure single sign-on between Workday and Azure Active Directory](workday-tutorial.md)
-* [Learn how to integrate other SaaS applications with Azure Active Directory](tutorial-list.md)
+* [Learn how to configure Workday Writeback](workday-writeback-tutorial.md)
* [Learn how to use Microsoft Graph APIs to manage provisioning configurations](/graph/api/resources/synchronization-overview)\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/workday-writeback-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workday-writeback-tutorial.md
@@ -170,6 +170,7 @@ Once the Workday provisioning app configurations have been completed, you can tu
## Next steps
+* [Learn more about Azure AD and Workday integration scenarios and web service calls](../app-provisioning/workday-integration-reference.md)
* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) * [Learn how to configure single sign-on between Workday and Azure Active Directory](workday-tutorial.md) * [Learn how to integrate other SaaS applications with Azure Active Directory](tutorial-list.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/auth-app-android-china https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/auth-app-android-china.md
@@ -10,7 +10,7 @@
Previously updated : 12/08/2020 Last updated : 01/27/2021
@@ -18,7 +18,6 @@
The Microsoft Authenticator app for Android is available for download in China. The Google Play Store isn't available in China, so the app must be downloaded from other Chinese app marketplaces. The Microsoft Authenticator app for Android is currently available in the following stores in China: -- [Baidu](https://shouji.baidu.com/software/26638379.html) - [Lenovo](https://www.lenovomm.com/appdetail/com.azure.authenticator/20197724) - [Huawei](https://appgallery.cloud.huawei.com/uowap/index.html#/detailApp/C100262999?source=appshare&subsource=C100262999&shareTo=weixin&locale=zh_CN) - [Samsung Galaxy Store](http://apps.samsung.com/appquery/appDetail.as?appId=com.azure.authenticator)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/user-help-auth-app-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/user-help-auth-app-faq.md
@@ -10,7 +10,7 @@
Previously updated : 01/15/2020 Last updated : 01/28/2021
@@ -63,7 +63,11 @@ The Microsoft Authenticator app replaced the Azure Authenticator app, and it's t
**Q**: What data does the Authenticator store on my behalf and how can I delete it?
-**A**: The Authenticator app collects three types of information:<ul><li>Account info you provide when you add your account. This data can be removed by removing your account.</li><li>Diagnostic log data that stays only in the app until you select **Send Logs** the app's **Help** menu to send logs to Microsoft. These logs can contain personal data such as email addresses, server addresses, or IP addresses. They also can contain device data such as device name and operating system version. Any personal data collected is limited to info needed to help troubleshoot app issues. You can browse these log files in the app at any time to see the info being gathered. If you send your log files, Authentication app engineers will use them only to troubleshoot customer-reported issues.</li><li>Non-personally identifiable usage data, such "started add account flow/successfully added account," or "notification approved." This data is an integral part of our engineering decisions. Your usage helps us determine where we can improve the apps in ways that are important to you. You see a notification of this data collection when you use the app for the first time. It informs you then that it can be turned off on the app's **Settings** page. You can turn this setting on or off at any time.</li></ul>
+**A**: The Authenticator app collects three types of information:
+
+- Account info you provide when you add your account. This data can be removed by removing your account.
+- Diagnostic log data that stays only in the app until you **Send feedback** in the app's top menu to send logs to Microsoft. These logs can contain personal data such as email addresses, server addresses, or IP addresses. They also can contain device data such as device name and operating system version. Any personal data collected is limited to info needed to help troubleshoot app issues. You can browse these log files in the app at any time to see the info being gathered. If you send your log files, Authentication app engineers will use them only to troubleshoot customer-reported issues.
+- Non-personally identifiable usage data, such as "started add account flow/successfully added account," or "notification approved." This data is an integral part of our engineering decisions. Your usage helps us determine where we can improve the apps in ways that are important to you. You see a notification of this data collection when you use the app for the first time. It informs you that it can be turned off on the app's **Settings** page. You can turn this setting on or off at any time.
### Codes in the app
@@ -98,7 +102,7 @@ The Microsoft Authenticator app replaced the Azure Authenticator app, and it's t
**Q**: Why do I only get notifications when the app is open? When the app is closed, I don't get notifications.
-**A**: If you're getting notifications, but not an alert, even with your ringer on, you should check your app settings. Make sure the app is turned on to use sound or to vibrate for notifications. If you don't get notifications at all, you should check the following conditions:<ul><li>Is your phone in Do Not Disturb or Quiet mode? These modes can prevent apps from sending notifications.</li><li>Can you get notifications from other apps? If not, it could be a problem with the network connections on your phone, or the notifications channel from Android or Apple. You can try to resolve your network connections through your phone settings. You might need to talk to your service provider to help with the Android or Apple notifications channel.</li><li>Can you get notifications for some accounts on the app, but not others? If yes, remove the problematic account from your app, add it again allowing notifications, and see if that fixes the problem.</li></ul>If you tried all of these steps and are still having issues, we recommend sending your log files for diagnostics. Open the app, go to **Help**, and then select **Send logs**. After that, go to the [Microsoft Authenticator app forum](https://social.technet.microsoft.com/Forums/en-US/home?forum=MicrosoftAuthenticatorApp) and tell us the problem you're seeing and the steps you tried.
+**A**: If you're getting notifications, but not an alert, even with your ringer on, you should check your app settings. Make sure the app is turned on to use sound or to vibrate for notifications. If you don't get notifications at all, you should check the following conditions:<ul><li>Is your phone in Do Not Disturb or Quiet mode? These modes can prevent apps from sending notifications.</li><li>Can you get notifications from other apps? If not, it could be a problem with the network connections on your phone, or the notifications channel from Android or Apple. You can try to resolve your network connections through your phone settings. You might need to talk to your service provider to help with the Android or Apple notifications channel.</li><li>Can you get notifications for some accounts on the app, but not others? If yes, remove the problematic account from your app, add it again allowing notifications, and see if that fixes the problem.</li></ul>If you tried all of these steps and are still having issues, we recommend sending your log files for diagnostics. Open the app, go to app’s top-level menu, and then select **Send feedback**. After that, go to the [Microsoft Authenticator app forum](https://social.technet.microsoft.com/Forums/en-US/home?forum=MicrosoftAuthenticatorApp) and tell Microsoft the problem you're seeing and the steps you tried.
### Switch to push notifications
@@ -200,13 +204,13 @@ The Microsoft Authenticator app replaced the Azure Authenticator app, and it's t
**Q**: My Apple Watch companion app crashed. Can I send you my crash logs so you can investigate?
-**A**: You first have to make sure you've chosen to share your analytics with us. If you're a TestFlight user, you're already signed up. Otherwise, you can go to **Settings > Privacy > Analytics** and select both the **Share iPhone & Watch analytics** and the **Share with App Developers** options.<br>After you sign up, you can try to reproduce your crash so your crash logs are automatically sent to us for investigation. However, if you can't reproduce your crash, you can manually copy your log files and send them to us.<ol><li>Open the Watch app on your phone, go to **Settings > General**, and then click **Copy Watch Analytics**.</li><li>Find the corresponding crash under **Settings > Privacy > Analytics > Analytics Data**, and then manually copy the entire text.</li><li>Open Authenticator on your phone and paste that copied text into the **Share with App Developers** text box on the **Send logs** page.</li></ol>
+**A**: You first have to make sure you've chosen to share your analytics with us. If you're a TestFlight user, you're already signed up. Otherwise, you can go to **Settings > Privacy > Analytics** and select both the **Share iPhone & Watch analytics** and the **Share with App Developers** options.<br>After you sign up, you can try to reproduce your crash so your crash logs are automatically sent to us for investigation. However, if you can't reproduce your crash, you can manually copy your log files and send them to us.<ol><li>Open the Watch app on your phone, go to **Settings > General**, and then click **Copy Watch Analytics**.</li><li>Find the corresponding crash under **Settings > Privacy > Analytics > Analytics Data**, and then manually copy the entire text.</li><li>Open Authenticator on your phone and paste that copied text into the **Describe the issue you are facing** box under **Having trouble?** on the **Send feedback** page.</li></ol>
-## Autofill for consumers
+## Autofill with Authenticator
-**Q**: What is Autofill in Authenticator?
+**Q**: What is Autofill with Authenticator?
-**A**: The Authenticator app now securely stores and autofills passwords on apps and websites you visit on your phone. You can use Autofill to sync and autofill your passwords on your iOS and Android devices. After setting up the Authenticator app as an autofill provider on your phone, it offers to save your passwords when you enter them on a site or app sign-in page. The passwords are saved as part of [your Microsoft account](https://account.microsoft.com/account) and are also available when you sign in to Microsoft Edge with your Microsoft account.
+**A**: The Authenticator app now securely stores and autofills passwords on apps and websites you visit on your phone. You can use Autofill to sync and autofill your passwords on your iOS and Android devices. After setting up the Authenticator app as an autofill provider on your phone, it offers to save your passwords when you enter them on a site or in an app sign-in page. The passwords are saved as part of [your personal Microsoft account](https://account.microsoft.com/account) and are also available when you sign in to Microsoft Edge with your personal Microsoft account.
**Q**: What information can Authenticator autofill for me?
@@ -217,23 +221,22 @@ The Microsoft Authenticator app replaced the Azure Authenticator app, and it's t
**A**: Follow these steps: 1. Open the Authenticator app.
-1. In **Settings** under **Beta**, turn on **Autofill**.
1. On the **Passwords** tab in Authenticator, select **Sign in with Microsoft** and sign in using [your Microsoft account](https://account.microsoft.com/account). This feature currently supports only Microsoft accounts and doesn't yet support work or school accounts. **Q**: How do I make Authenticator the default autofill provider on my phone? **A**: Follow these steps:
-1. Open Authenticator **Settings**, and under **Beta** turn on **Autofill**.
-1. On the **Passwords** tab inside the app, sign in using [your Microsoft account](https://account.microsoft.com/account).
+1. Open the Authenticator app.
+1. On the **Passwords** tab inside the app, select **Sign in with Microsoft** and sign in using [your Microsoft account](https://account.microsoft.com/account).
1. Do one of the following: - On iOS, under **Settings**, select **How to turn on Autofill** in the Autofill settings section to learn how to set Authenticator as the default autofill provider.
- - On Android, under **Settings**, select **Set as Autofill provider** in the Autofill settings section to set Authenticator as the default autofill provider.
+ - On Android, under **Settings**, select **Set as Autofill provider** in the Autofill settings section.
-**Q**: What if **Autofill** switch is grayed out for me in Settings?
+**Q**: What if **Autofill** switch is not available for me in Settings?
-**A**: Autofill is currently in beta and has not yet been enabled for all organizations or account types. If the **Autofill** switch in **Settings** is grayed out for you, it is likely because you are using Authenticator app with your work account. You can use this feature on a device where your work account isn't added. If your organization works with Microsoft, the **Autofill** switch will be enabled even when a work account is added to Authenticator.
+**A**: If Autofill is not available for you in Authenticator, it might be because autofill has not yet been allowed for your organization or account type. You can use this feature on a device where your work or school account isn't added. To learn more on how to allow Autofill for your organization, see [Autofill for IT admins](#autofill-for-it-admins).
**Q**: How do I stop syncing passwords?
@@ -243,28 +246,24 @@ The Microsoft Authenticator app replaced the Azure Authenticator app, and it's t
**A**: Authenticator app already provides a high level of security for multi-factor authentication and account management, and the same high security bar is also extended to managing your passwords. -- **Strong authentication is needed by Authenticator app**: Signing into Authenticator requires a second factor. This means that your passwords inside Authenticator app can't be accessed even if someone has your Microsoft account password.-- **Autofill data is protected with biometrics and passcode**: Before you can autofill password on an app or site, Authenticator requires biometric or device passcode. This ensures that even if someone else has access to your device, they cannot fill or see your password, as theyΓÇÖd be unable to provide the biometric or device PIN. Furthermore, a user cannot open the Passwords page unless they provide biometric or PIN, even if they turn off App Lock in app settings.-- **Encrypted Passwords on the device**: Passwords on device are encrypted, and encryption/decryption keys are never stored and always generated on-the-fly. Passwords are only decrypted when user wants to, that is, during autofill or when user wants to see the password, both of which require biometric or PIN.-- **Cloud and network security**: Your passwords on the cloud are encrypted and decrypted only when they reach your device. Passwords are synced over an SSL-protected HTTPS connection, which ensures no attacker can eavesdrop on sensitive data when it is being synced. We also ensure we check the sanity of data being synced over network using cryptographic hashed functions (specifically, hash-based message authentication code).
+- **Strong authentication is needed by Authenticator app**: Signing into Authenticator requires a second factor. This means that your passwords inside Authenticator app are protected even if someone has your Microsoft account password.
+- **Autofill data is protected with biometrics and passcode**: Before you can autofill password on an app or site, Authenticator requires biometric or device passcode. This helps add extra security so that even if someone else has access to your device, they can't fill or see your password, because they're unable to provide the biometric or device PIN input. Also, a user cannot open the Passwords page unless they provide biometric or PIN, even if they turn off App Lock in app settings.
+- **Encrypted Passwords on the device**: Passwords on device are encrypted, and encryption/decryption keys are never stored and always generated when needed. Passwords are only decrypted when user wants to, that is, during autofill or when user wants to see the password, both of which require biometric or PIN.
+- **Cloud and network security**: Your passwords on the cloud are encrypted and decrypted only when they reach your device. Passwords are synced over an SSL-protected HTTPS connection, which helps prevent an attacker from eavesdropping on sensitive data when it is being synced. We also ensure we check the sanity of data being synced over network using cryptographic hashed functions (specifically, hash-based message authentication code).
## Autofill for IT admins **Q**: Will my employees or students get to use password autofill in Authenticator app?
-**A**: No. Autofill feature is currently in beta and has not yet been enabled for all organizations or account types. If your employee or student has added their work or school account into Microsoft Authenticator app, passwords autofill will not be accessible to them. The one exception to this restriction is when your employee or student adds their work or school account into Microsoft cloud-based multi-factor authentication as an [external or third-party account](user-help-auth-app-add-non-ms-account.md).
-
-**Q**: Can I make autofill feature available to my employees (or students)?
-
-**A**: Yes. To enable your employees or students your enterprise or school can be added to an allow list. Reach out to your support or Microsoft contact to get added to the allow list. Additionally, if you're an IT administrator for your organization, you can also fill out a form to express your interest in joining at [Allow-list enterprise for Autofill in Authenticator](https://aka.ms/RequestAutofillInAuthenticator).
+**A**: Yes, Autofill now works for most enterprise users even when a work or school account is added to the Authenticator app. You can fill out a form to configure (allow or deny) Autofill for your organization and [send it to the Authenticator team](https://aka.ms/ConfigureAutofillInAuthenticator).
**Q**: Will my users' work or school account password get automatically synced? **A**: No. Password autofill won't sync work or school account passwords for your users. When users visit a site or an app, Authenticator will offer to save the password for that site or app, and the password is saved only when the user chooses to.
-**Q**: Can I allow-list only certain users of my organization for Autofill?
+**Q**: Can I allowlist only certain users of my organization for Autofill?
-**A**: No. Enterprises can only enable passwords autofill for all or none of their employees at this time. We will gradually expand these controls.
+**A**: No. Enterprises can only enable passwords autofill for all or none of their employees at this time.
**Q**: What if my employee or student has multiple work or school accounts? For example, my employee has accounts from multiple enterprises or schools in their Microsoft Authenticator.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/user-help-authenticator-app-import-passwords https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/user-help-authenticator-app-import-passwords.md new file mode 100644 /dev/null
@@ -0,0 +1,231 @@
+
+ Title: Import passwords into the Microsoft Authenticator app - Azure AD
+description: How to import passwords into the Microsoft Authentication app from popular password managers.
++++++++ Last updated : 01/28/2021++++
+# Import passwords into the Microsoft Authenticator app
+
+Microsoft Authenticator supports importing passwords from Google Chrome, Firefox, LastPass, Bitwarden, and Roboform. If Microsoft doesn't currently support your existing password manager, you can [manually enter sign-in credentials into our template CSV](https://go.microsoft.com/fwlink/?linkid=2134938). To import your existing passwords and manage them in the Authenticator app, just export your passwords from your existing password manager into our comma-separated values (CSV) format. Then, import the exported CSV to Authenticator in our Chrome browser extension or directly into the Authenticator app (Android and iOS).
+
+## Import from Google Chrome or Android Smart Lock
+
+You can import your passwords from Google Chrome or Android Smart Lock to Authenticator on either your smartphone or your desktop computer. You can:
+
+- [Import from Chrome on Android and iOS](#import-from-chrome-on-android-and-ios)
+- [Import from Chrome desktop browser](#import-from-chrome-desktop-browser)
+
+### Import from Chrome on Android and iOS
+
+Google Chrome users on Android and Apple phones can import their passwords directly from their phone with a few simple steps.
+
+1. Install Authenticator app on your phone and open the **Passwords** tab.
+
+1. Sign in to Google Chrome on your phone.
+
+1. Tap the ![Google Chrome ellipsis menu](./media/user-help-authenticator-app-import-passwords/ellipsis-chrome.png) at the top right for Android phones or at bottom right for iOS devices, and then tap **Settings.**
+
+ &nbsp; | &nbsp;
+ - | --
+ Android | ![Google Chrome Settings menu location](./media/user-help-authenticator-app-import-passwords/android-settings-menu.png)
+ iOS | ![Google Chrome Settings menu icon](./media/user-help-authenticator-app-import-passwords/apple-settings-menu.png)
+
+1. In **Settings**, open **Passwords**.
+
+ &nbsp; | &nbsp;
+ - | --
   Android | ![Android Chrome Passwords command location](./media/user-help-authenticator-app-import-passwords/android-passwords-location.png)
+ iOS | ![Apple Chrome Passwords command location](./media/user-help-authenticator-app-import-passwords/apple-passwords-location.png)
+
+1. Tap the ![Google Chrome ellipsis menu](./media/user-help-authenticator-app-import-passwords/ellipsis-chrome.png) at the top right for Android phones, or at the bottom right for iOS devices, and then tap **Export passwords**.
+
+ &nbsp; | &nbsp;
+ - | --
+ Android | ![Android Chrome Export passwords location](./media/user-help-authenticator-app-import-passwords/android-export-passwords-location.png)
+ iOS | ![Apple Chrome Export passwords location](./media/user-help-authenticator-app-import-passwords/apple-export-passwords-location.png)
+
+ You must provide a PIN, fingerprint, or facial recognition. Confirm your identity and tap **Export passwords** again to start exporting.
+
+1. After the passwords are exported, Chrome prompts you to choose which app you're importing into. Select **Authenticator** to start importing passwords. You'll be informed about the import status when it's complete.
+
+ &nbsp; | &nbsp;
+ - | --
+ Android | ![Android Chrome import passwords location](./media/user-help-authenticator-app-import-passwords/android-chrome-import.png)
+ iOS | ![Apple Chrome import passwords location](./media/user-help-authenticator-app-import-passwords/apple-chrome-import.png)
+
+### Import from Chrome desktop browser
+
+Before you begin, you must install and sign in to the [Microsoft Autofill extension](https://chrome.google.com/webstore/detail/microsoft-autofill/fiedbfgcleddlbcmgdigjgdfcggjcion) on your Chrome browser.
+
+1. Open [Google Password Manager](https://passwords.google.com) in any browser. If you haven't already, sign in to your Google account.
+
+1. Select the gear icon ![Desktop password manager gear icon](./media/user-help-authenticator-app-import-passwords/desktop-password-manager-gear.png) to open the Password settings page.
+
+1. Select **Export**, then on the next page select **Export** again to start exporting your passwords. Provide your Google password when prompted to confirm your identity. You'll be informed about the export status when it's complete.
+
+ ![Desktop Chrome browser export passwords command location](./media/user-help-authenticator-app-import-passwords/desktop-chrome-export-passwords-location.png)
+
+1. Open the Autofill Chrome Extension and select **Settings**.
+
+ ![Desktop Chrome browser Autofill Extension settings location](./media/user-help-authenticator-app-import-passwords/desktop-chrome-autofill-settings.png)
+
+1. Select **Import data** to open a dialog. Then, select **Choose File** to locate and import the CSV file.
+
+ ![Desktop Chrome browser Import data CSV location](./media/user-help-authenticator-app-import-passwords/desktop-chrome-import-csv.png)
+
+## Import from Firefox
+
+Firefox allows exporting of passwords from the desktop browser only, so ensure that you have access to the Firefox desktop browser before importing passwords from Firefox.
+
+1. Sign in to the latest version of Firefox on your desktop and select the ![Firefox "hamburger" menu](./media/user-help-authenticator-app-import-passwords/desktop-firefox-ellipsis-icon.png) menu from the top right of screen.
+
+1. Select **Logins and Passwords**.
+
+ ![Desktop Firefox browser Logins and passwords location](./media/user-help-authenticator-app-import-passwords/desktop-firefox-passwords-location.png)
+
+1. From the Firefox Lockwise page, select the ![Firefox ellipsis menu](./media/user-help-authenticator-app-import-passwords/desktop-firefox-ellipsis-icon.png) menu, select **Export Logins**, and then confirm your intent by selecting **Export**. You are prompted to identify yourself by entering your PIN or device password, or by scanning your fingerprint. Once successfully identified, Firefox exports your passwords in CSV format to the selected location.
+
+ ![Desktop Firefox browser export passwords location](./media/user-help-authenticator-app-import-passwords/desktop-firefox-export-passwords-location.png)
+
+1. You can import your passwords into Authenticator from a desktop browser or on iOS or Android phones. To import to the Authenticator app on your phone:
+
+   1. Transfer the exported CSV file to your Android or iOS phone using a safe method of your choice, and then download it. Next, share the CSV file with the Authenticator app to start the import.
+
+ &nbsp; | &nbsp;
+ - | --
+ Android | ![Android Chrome import passwords location](./media/user-help-authenticator-app-import-passwords/android-chrome-import.png)
+ iOS | ![Apple Chrome import passwords location](./media/user-help-authenticator-app-import-passwords/apple-chrome-import.png)
+
+   1. After successfully importing your passwords into Authenticator, delete the CSV file from your desktop or mobile phone.
+
+## Import from LastPass
+
+LastPass supports exporting passwords from a desktop browser only, so ensure you have access to a desktop browser before you start to import passwords.
+
+1. Sign in to [the LastPass web site](https://lastpass.com) and select **Advanced Options**, and then select **Export**.
+
+ ![Desktop LastPass export passwords location](./media/user-help-authenticator-app-import-passwords/desktop-lastpass-export-passwords-location.png)
+
+1. Identify yourself when prompted by providing your master password. After that, you'll see the exported passwords on the webpage.
+
+1. Copy the contents of the webpage.
+
+1. Open Notepad (or your favorite text editor) and paste the copied content.
+
+1. Save this notepad file by selecting **File** &gt; **Save as**. Provide a name that ends with ".csv" (such as LastPass.csv), and save the file to a safe location on your desktop.
+
+ ![Desktop LastPass save CSV file](./media/user-help-authenticator-app-import-passwords/desktop-lastpass-save-import-file.png)
+
+1. You can import your passwords into Authenticator in a desktop browser or on iOS or Android phones. To import to the Authenticator app on your phone:
+
+   1. Transfer the exported CSV file to your smartphone using a safe method of your choice, and then download it. Then share the CSV file with the Authenticator app to start the import.
+
+ &nbsp; | &nbsp;
+ - | --
+ Android | ![Android LastPass import passwords location](./media/user-help-authenticator-app-import-passwords/android-chrome-import.png)
+ iOS | ![Apple LastPass import passwords location](./media/user-help-authenticator-app-import-passwords/apple-chrome-import.png)
+
+   1. After successfully importing your passwords into Authenticator, delete the CSV file from your desktop or mobile phone.
+
+## Import from Bitwarden
+
+Bitwarden supports exporting passwords from a desktop browser only, so ensure you have access to a desktop browser before you start to import passwords.
+
+1. Sign in to https://vault.bitwarden.com/ and select **Tools** &gt; **Export vault**. Choose CSV as the file format, provide your master password, and then select **Export vault** to start exporting.
+
+ ![Bitwarden Export vault location](./media/user-help-authenticator-app-import-passwords/desktop-bitwarden-export-command-location.png)
+
+1. You can import your passwords into Authenticator in a desktop browser or on iOS or Android phones. To import to the Authenticator app on your phone:
+
+   1. Transfer the exported CSV file to your smartphone using a safe method of your choice, and then download it. Then share the CSV file with the Authenticator app to start the import.
+
+ &nbsp; | &nbsp;
+ - | --
+ Android | ![Android Bitwarden import passwords location](./media/user-help-authenticator-app-import-passwords/android-chrome-import.png)
+ iOS | ![Apple Bitwarden import passwords location](./media/user-help-authenticator-app-import-passwords/apple-chrome-import.png)
+
+   1. After successfully importing your passwords into Authenticator, delete the CSV file from your desktop or mobile phone.
+
+## Import from Roboform
+
+Roboform allows exporting of passwords from its desktop app only, so ensure you have access to the Roboform app on a desktop before starting the import.
+
+1. Start RoboForm from your desktop client and log in to your account.
+
+1. Select **Options** from the **Roboform** menu.
+
+ ![Desktop Roboform options menu](./media/user-help-authenticator-app-import-passwords/desktop-roboform-options.png)
+
+1. Select **Account & Data** &gt; **Export**.
+
+ ![Desktop Roboform export command location](./media/user-help-authenticator-app-import-passwords/desktop-roboform-accounts-data.png)
+
+1. Choose a safe location to save your exported file. Select **Logins** as the **Data** type and select the CSV file as the format, and then select **Export**.
+
+ ![Desktop Roboform export dialog box](./media/user-help-authenticator-app-import-passwords/desktop-roboform-export-dialog.png)
+
+1. Confirm your intent and the CSV file is then exported to the selected location.
+
+ ![Desktop Roboform export confirmation dialog box](./media/user-help-authenticator-app-import-passwords/desktop-roboform-confirmation.png)
+
+1. You can import your passwords into Authenticator in a desktop browser or on iOS or Android phones. To import to the Authenticator app on your phone:
+
+   1. Transfer the exported CSV file to your smartphone using a safe method of your choice, and then download it. Then share the CSV file with the Authenticator app to start the import.
+
+ &nbsp; | &nbsp;
+ - | --
+ Android | ![Android Roboform import passwords location](./media/user-help-authenticator-app-import-passwords/android-chrome-import.png)
+ iOS | ![Apple Roboform import passwords location](./media/user-help-authenticator-app-import-passwords/apple-chrome-import.png)
+
+   1. After successfully importing your passwords into Authenticator, delete the CSV file from your desktop or mobile phone.
+
+## Import by creating a CSV
+
+If steps to import passwords from your password manager aren't listed in this article, you can create a CSV that you can use to import your passwords into Authenticator. Microsoft recommends that you follow these steps on a desktop for ease of formatting.
+
+1. On your desktop, [download and open our import template](https://go.microsoft.com/fwlink/?linkid=2134938). If you use an Apple iPhone with Safari and Keychain, you can skip to step 4.
+
+1. Export your passwords from your existing password manager in a nonencrypted CSV file.
+
+1. Copy the relevant columns from your exported CSV to the template CSV and then save.
+
+1. If you don't have an exported CSV, you can copy each login from your existing password manager to the template CSV. Don't remove or change the header row. When you finish, verify the integrity of your data before you begin the next step.
+
+1. You can import your passwords into Authenticator in a desktop browser or on iOS or Android phones. To import to the Authenticator app on your phone:
+
+   1. Transfer the exported CSV file to your smartphone using a safe method of your choice, and then download it. Then share the CSV file with the Authenticator app to start the import.
+
+ &nbsp; | &nbsp;
+ - | --
+ Android | ![Android CSV import passwords location](./media/user-help-authenticator-app-import-passwords/android-chrome-import.png)
+ iOS | ![Apple CSV import passwords location](./media/user-help-authenticator-app-import-passwords/apple-chrome-import.png)
+
+   1. After successfully importing your passwords into Authenticator, delete the CSV file from your desktop or mobile phone.
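As a rough sketch of how a template-format file can be assembled programmatically, Python's standard csv module keeps the header and quoting intact. The sample logins below are hypothetical; only the url, username, password header comes from the template described in this article:

```python
import csv
import io

# Hypothetical sample logins; replace with rows from your own export.
logins = [
    {"url": "https://example.com", "username": "user@example.com", "password": "p@ssw0rd"},
    {"url": "https://contoso.com", "username": "alice", "password": "hunter2"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["url", "username", "password"])
writer.writeheader()   # keep the header row exactly: url,username,password
writer.writerows(logins)

template_csv = buffer.getvalue()
print(template_csv)
```

Writing the file with a CSV library rather than by hand avoids broken quoting when passwords contain commas or quotation marks.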
+
+## Troubleshooting steps
+
+The most common cause of failed imports is incorrect formatting in the CSV file. You can try the following steps to troubleshoot the issue.
+
+- Check this article to see if we already support importing passwords from your current password manager. If we do, retry the import by following the steps for your provider.
+
+- If we don't currently support importing the format of your password manager, you can retry by [creating your CSV file manually](#import-by-creating-a-csv).
+
+- You can verify the integrity of CSV data with the following suggestions:
+
+  - The first row must contain a header with three columns: **url**, **username**, and **password**.
+
+  - Each row must contain a value in the **url** and **password** columns.
+
+- You can recreate the CSV by pasting your content in the [CSV template file](https://go.microsoft.com/fwlink/?linkid=2134938).
+
+- If nothing else works, please report your issue using the **Send Feedback** link from Authenticator app settings.
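The header and required-value checks above can be sketched as a small validation script. The expected header comes from this article; the function name and messages are hypothetical and only illustrate the checks:

```python
import csv
import io

EXPECTED_HEADER = ["url", "username", "password"]

def check_csv(text: str) -> list:
    # Return a list of problems found in a template-format CSV.
    problems = []
    rows = list(csv.reader(io.StringIO(text)))
    if not rows or rows[0] != EXPECTED_HEADER:
        return ["first row must be the header: url,username,password"]
    for line_no, row in enumerate(rows[1:], start=2):
        # url (column 1) and password (column 3) are required in every row.
        if len(row) != 3 or not row[0] or not row[2]:
            problems.append(f"row {line_no}: url and password are required")
    return problems

good = "url,username,password\nhttps://example.com,user,secret\n"
bad = "url,username,password\nhttps://example.com,user,\n"
assert check_csv(good) == []
assert check_csv(bad) == ["row 2: url and password are required"]
```

Running a check like this before importing catches the most common formatting mistakes without uploading the file anywhere.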
aks https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-container-registry-integration.md
@@ -146,6 +146,7 @@ nginx0-deployment-669dfc4d4b-xdpd6 1/1 Running 0 20s
``` ### Troubleshooting
+* Run the [az aks check-acr](/cli/azure/aks#az_aks_check_acr) command to validate that the registry is accessible from the AKS cluster.
* Learn more about [ACR Diagnostics](../container-registry/container-registry-diagnostics-audit-logs.md) * Learn more about [ACR Health](../container-registry/container-registry-check-health.md)
analysis-services https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-database-users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-database-users.md
@@ -25,6 +25,8 @@ When creating a tabular model project, you create roles and add users or groups
When adding a **security group**, use `obj:groupid@tenantid`.
+When adding a **service principal**, use `app:appid@tenantid`.
+ ## To add or manage roles and users in Visual Studio 1. In **Tabular Model Explorer**, right-click **Roles**.
@@ -146,4 +148,4 @@ Row filters apply to the specified rows and related rows. When a table has multi
[Manage server administrators](analysis-services-server-admins.md) [Manage Azure Analysis Services with PowerShell](analysis-services-powershell.md)
- [Tabular Model Scripting Language (TMSL) Reference](/analysis-services/tmsl/tabular-model-scripting-language-tmsl-reference)
\ No newline at end of file
+ [Tabular Model Scripting Language (TMSL) Reference](/analysis-services/tmsl/tabular-model-scripting-language-tmsl-reference)
application-gateway https://docs.microsoft.com/en-us/azure/application-gateway/create-ssl-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/create-ssl-portal.md
@@ -5,7 +5,7 @@
Previously updated : 08/14/2020 Last updated : 01/28/2021 #Customer intent: As an IT administrator, I want to use the Azure portal to configure Application Gateway with TLS termination so I can secure my application traffic.
@@ -102,7 +102,7 @@ Export-PfxCertificate `
> [!NOTE] > For the Application Gateway v2 SKU, you can only choose **Public** frontend IP configuration. Private frontend IP configuration is currently not enabled for this v2 SKU.
-2. Choose **Create new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**.
+2. Choose **Add new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**.
![Create new application gateway: frontends](./media/application-gateway-create-gateway-portal/application-gateway-create-frontends.png)
@@ -112,7 +112,7 @@ Export-PfxCertificate `
The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
-1. On the **Backends** tab, select **+Add a backend pool**.
+1. On the **Backends** tab, select **Add a backend pool**.
2. In the **Add a backend pool** window that opens, enter the following values to create an empty backend pool:
@@ -129,7 +129,7 @@ The backend pool is used to route requests to the backend servers that serve the
On the **Configuration** tab, you'll connect the frontend and backend pool you created using a routing rule.
-1. Select **Add a rule** in the **Routing rules** column.
+1. Select **Add a routing rule** in the **Routing rules** column.
2. In the **Add a routing rule** window that opens, enter *myRoutingRule* for the **Rule name**.
@@ -140,11 +140,12 @@ On the **Configuration** tab, you'll connect the frontend and backend pool you c
- **Protocol**: Select **HTTPS**. - **Port**: Verify 443 is entered for the port.
- Under **HTTPS Certificate**:
+ Under **HTTPS Settings**:
+ - **Choose a certificate** - Select **Upload a certificate**.
- **PFX certificate file** - Browse to and select the c:\appgwcert.pfx file that you create earlier. - **Certificate name** - Type *mycert1* for the name of the certificate.
- - **Password** - Type your password.
+ - **Password** - Type the password you used to create the certificate.
Accept the default values for the other settings on the **Listener** tab, then select the **Backend targets** tab to configure the rest of the routing rule.
@@ -152,7 +153,7 @@ On the **Configuration** tab, you'll connect the frontend and backend pool you c
4. On the **Backend targets** tab, select **myBackendPool** for the **Backend target**.
-5. For the **HTTP setting**, select **Create new** to create a new HTTP setting. The HTTP setting will determine the behavior of the routing rule. In the **Add an HTTP setting** window that opens, enter *myHTTPSetting* for the **HTTP setting name**. Accept the default values for the other settings in the **Add an HTTP setting** window, then select **Add** to return to the **Add a routing rule** window.
+5. For the **HTTP setting**, select **Add new** to create a new HTTP setting. The HTTP setting will determine the behavior of the routing rule. In the **Add a HTTP setting** window that opens, enter *myHTTPSetting* for the **HTTP setting name**. Accept the default values for the other settings in the **Add a HTTP setting** window, then select **Add** to return to the **Add a routing rule** window.
:::image type="content" source="./media/create-ssl-portal/application-gateway-create-httpsetting.png" alt-text="Create new application gateway: HTTP setting":::
@@ -187,14 +188,14 @@ To do this, you'll:
- **Resource group**: Select **myResourceGroupAG** for the resource group name. - **Virtual machine name**: Enter *myVM* for the name of the virtual machine.
- - **Username**: Enter *azureuser* for the administrator user name.
+ - **Username**: Enter a name for the administrator user name.
- **Password**: Enter a password for the administrator account. 1. Accept the other defaults and then select **Next: Disks**. 2. Accept the **Disks** tab defaults and then select **Next: Networking**. 3. On the **Networking** tab, verify that **myVNet** is selected for the **Virtual network** and the **Subnet** is set to **myBackendSubnet**. Accept the other defaults and then select **Next: Management**. Application Gateway can communicate with instances outside of the virtual network that it is in, but you need to ensure there's IP connectivity.
-1. On the **Management** tab, set **Boot diagnostics** to **Off**. Accept the other defaults and then select **Review + create**.
+1. On the **Management** tab, set **Boot diagnostics** to **Disable**. Accept the other defaults and then select **Review + create**.
2. On the **Review + create** tab, review the settings, correct any validation errors, and then select **Create**. 3. Wait for the deployment to complete before continuing.
@@ -206,7 +207,7 @@ In this example, you install IIS on the virtual machines only to verify Azure cr
![Install custom extension](./media/application-gateway-create-gateway-portal/application-gateway-extension.png)
-2. Run the following command to install IIS on the virtual machine:
+2. Change the location setting for your environment, and then run the following command to install IIS on the virtual machine:
```azurepowershell-interactive Set-AzVMExtension `
@@ -217,7 +218,7 @@ In this example, you install IIS on the virtual machines only to verify Azure cr
-ExtensionType CustomScriptExtension ` -TypeHandlerVersion 1.4 ` -SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' `
- -Location EastUS
+ -Location <location>
``` 3. Create a second virtual machine and install IIS by using the steps that you previously completed. Use *myVM2* for the virtual machine name and for the **VMName** setting of the **Set-AzVMExtension** cmdlet.
@@ -230,9 +231,11 @@ In this example, you install IIS on the virtual machines only to verify Azure cr
3. Select **myBackendPool**.
-4. Under **Targets**, select **Virtual machine** from the drop-down list.
+4. Under **Target type**, select **Virtual machine** from the drop-down list.
-5. Under **VIRTUAL MACHINE** and **NETWORK INTERFACES**, select the **myVM** and **myVM2** virtual machines and their associated network interfaces from the drop-down lists.
+5. Under **Target**, select the network interface under **myVM** from the drop-down list.
+
+6. Repeat to add the network interface for **myVM2**.
![Add backend servers](./media/application-gateway-create-gateway-portal/application-gateway-backend.png)
automation https://docs.microsoft.com/en-us/azure/automation/automation-connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-connections.md
@@ -30,7 +30,7 @@ Azure Automation makes the following built-in connection types available:
* `AzureServicePrincipal` - Represents a connection used by the Azure Run As account. * `AzureClassicCertificate` - Represents a connection used by the classic Azure Run As account.
-In most cases, you don't need to create a connection resource because it is created when you create a [Run As account](manage-runas-account.md).
+In most cases, you don't need to create a connection resource because it is created when you create a [Run As account](automation-security-overview.md).
## PowerShell cmdlets to access connections
automation https://docs.microsoft.com/en-us/azure/automation/automation-create-alert-triggered-runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-create-alert-triggered-runbook.md
@@ -6,6 +6,7 @@
Last updated 04/29/2019 + # Use an alert to trigger an Azure Automation runbook You can use [Azure Monitor](../azure-monitor/overview.md) to monitor base-level metrics and logs for most services in Azure. You can call Azure Automation runbooks by using [action groups](../azure-monitor/platform/action-groups.md) or by using classic alerts to automate tasks based on alerts. This article shows you how to configure and run a runbook by using alerts.
@@ -39,7 +40,7 @@ As described in the preceding section, each type of alert has a different schema
This example uses an alert from a VM. It retrieves the VM data from the payload, and then uses that information to stop the VM. The connection must be set up in the Automation account where the runbook is run. When using alerts to trigger runbooks, it is important to check the alert status in the runbook that is triggered. The runbook triggers each time the alert changes state. Alerts have multiple states, with the two most common being Activated and Resolved. Check for state in your runbook logic to ensure that the runbook does not run more than once. The example in this article shows how to look for alerts with state Activated only.
-The runbook uses the connection asset `AzureRunAsConnection` [Run As account](./manage-runas-account.md) to authenticate with Azure to perform the management action against the VM.
+The runbook uses the connection asset `AzureRunAsConnection` [Run As account](./automation-security-overview.md) to authenticate with Azure to perform the management action against the VM.
Use this example to create a runbook called **Stop-AzureVmInResponsetoVMAlert**. You can modify the PowerShell script, and use it with many different resources.
automation https://docs.microsoft.com/en-us/azure/automation/automation-create-standalone-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-create-standalone-account.md
@@ -3,7 +3,7 @@ Title: Create a standalone Azure Automation account
description: This article tells how to create a standalone Azure Automation account and a Classic Run As account. Previously updated : 01/15/2019 Last updated : 01/07/2021 # Create a standalone Azure Automation account
@@ -87,7 +87,7 @@ When the Automation account is successfully created, several resources are autom
## Create a Classic Run As account
-Classic Run As accounts are no longer created by default when you create an Azure Automation account. If you still require a Classic Run As account:
+Classic Run As accounts are not created by default when you create an Azure Automation account. If you require a Classic Run As account to manage Azure classic resources, perform the following steps:
1. From your Automation account, select **Run As Accounts** under **Account Settings**. 2. Select **Azure Classic Run As Account**.
@@ -98,5 +98,5 @@ Classic Run As accounts are no longer created by default when you create an Azur
* To learn more about graphical authoring, see [Author graphical runbooks in Azure Automation](automation-graphical-authoring-intro.md). * To get started with PowerShell runbooks, see [Tutorial: Create a PowerShell runbook](learn/automation-tutorial-runbook-textual-powershell.md). * To get started with PowerShell Workflow runbooks, see [Tutorial: Create a PowerShell workflow runbook](learn/automation-tutorial-runbook-textual.md).
-* To get started with Python 2 runbooks, see [Tutorial: Create a Python 2 runbook](learn/automation-tutorial-runbook-textual-python2.md).
-* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation/?view=azps-3.7.0&preserve-view=true#automation).
+* To get started with Python 3 runbooks, see [Tutorial: Create a Python 3 runbook](learn/automation-tutorial-runbook-textual-python-3.md).
+* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation&preserve-view=true#automation).
automation https://docs.microsoft.com/en-us/azure/automation/automation-deploy-template-runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-deploy-template-runbook.md
@@ -18,7 +18,7 @@ In this article, we create a PowerShell runbook that uses a Resource Manager tem
## Prerequisites * Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or [sign up for a free account](https://azure.microsoft.com/free/).
-* [Automation account](./manage-runas-account.md) to hold the runbook and authenticate to Azure resources. This account must have permission to start and stop the virtual machine.
+* [Automation account](./automation-security-overview.md) to hold the runbook and authenticate to Azure resources. This account must have permission to start and stop the virtual machine.
* [Azure Storage account](../storage/common/storage-account-create.md) in which to store the Resource Manager template. * Azure PowerShell installed on a local machine. See [Install the Azure PowerShell Module](/powershell/azure/install-az-ps) for information about how to get Azure PowerShell.
automation https://docs.microsoft.com/en-us/azure/automation/automation-dsc-getting-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-getting-started.md
@@ -2,14 +2,11 @@
Title: Get started with Azure Automation State Configuration description: This article tells how to do the most common tasks in Azure Automation State Configuration. - -- Last updated 04/15/2019 - + # Get started with Azure Automation State Configuration This article provides a step-by-step guide for doing the most common tasks with Azure Automation State Configuration, such as creating, importing, and compiling configurations, enabling machines to manage, and viewing reports. For an overview State Configuration, see [State Configuration overview](automation-dsc-overview.md). For Desired State Configuration (DSC) documentation, see [Windows PowerShell Desired State Configuration Overview](/powershell/scripting/dsc/overview/overview).
@@ -21,7 +18,7 @@ article, you can use the [Azure Automation Managed Node template](https://github
To complete the examples in this article, the following are required: -- An Azure Automation account. For instructions on creating an Azure Automation Run As account, see [Azure Run As Account](./manage-runas-account.md).
+- An Azure Automation account. To learn more about an Automation account and its requirements, see [Automation Account authentication overview](./automation-security-overview.md).
- An Azure Resource Manager VM (not Classic) running a [supported operating system](automation-dsc-overview.md#operating-system-requirements). For instructions on creating a VM, see [Create your first Windows virtual machine in the Azure portal](../virtual-machines/windows/quick-create-portal.md) ## Create a DSC configuration
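The "Create a DSC configuration" section that follows walks through authoring a configuration; as a hedged sketch (node and resource names here are illustrative, not the tutorial's exact values), a minimal DSC configuration looks like this:

```powershell
# Minimal DSC configuration sketch (names are illustrative).
# Ensures the IIS Web-Server feature is present on the target node.
Configuration TestConfig {
    Node WebServer {
        WindowsFeature WebServerFeature {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}
```

Running the configuration block compiles it into a `.mof` document that State Configuration can then assign to managed nodes.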
automation https://docs.microsoft.com/en-us/azure/automation/automation-graphical-authoring-intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-graphical-authoring-intro.md
@@ -6,6 +6,7 @@
Last updated 03/16/2018 + # Author graphical runbooks in Azure Automation All runbooks in Azure Automation are Windows PowerShell workflows. Graphical runbooks and graphical PowerShell Workflow runbooks generate PowerShell code that the Automation workers run but that you cannot view or modify. You can convert a graphical runbook to a graphical PowerShell Workflow runbook, and vice versa. However, you can't convert these runbooks to a textual runbook. Additionally, the Automation graphical editor can't import a textual runbook.
@@ -367,7 +368,7 @@ The following example uses output from an activity called `Get Twitter Connectio
## Authenticate to Azure resources
-Runbooks in Azure Automation that manage Azure resources require authentication to Azure. The [Run As account](./manage-runas-account.md), also referred to as a service principal, is the default mechanism that an Automation runbook uses to access Azure Resource Manager resources in your subscription. You can add this functionality to a graphical runbook by adding the `AzureRunAsConnection` connection asset, which uses the PowerShell [Get-AutomationConnection](/system-center/smlet. This scenario is illustrated in the following example.
+Runbooks in Azure Automation that manage Azure resources require authentication to Azure. The [Run As account](./automation-security-overview.md), also referred to as a service principal, is the default mechanism that an Automation runbook uses to access Azure Resource Manager resources in your subscription. You can add this functionality to a graphical runbook by adding the `AzureRunAsConnection` connection asset, which uses the PowerShell [Get-AutomationConnection](/system-center/sma/get-automationconnection) cmdlet. This scenario is illustrated in the following example.
![Run As Authentication Activities](media/automation-graphical-authoring-intro/authenticate-run-as-account.png)
automation https://docs.microsoft.com/en-us/azure/automation/automation-runbook-execution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-execution.md
@@ -108,7 +108,7 @@ The logs available for the Log Analytics agent and the **nxautomation** account
## Runbook permissions
-A runbook needs permissions for authentication to Azure, through credentials. See [Manage Azure Automation Run As accounts](manage-runas-account.md).
+A runbook needs permissions for authentication to Azure, through credentials. See [Azure Automation authentication overview](automation-security-overview.md).
## Modules
@@ -135,6 +135,7 @@ The following table describes the statuses that are possible for a job. You can
| Status | Description | |: |: |
+| Activating |The job is being activated. |
| Completed |The job completed successfully. | | Failed |A graphical or PowerShell Workflow runbook failed to compile. A PowerShell runbook failed to start or the job had an exception. See [Azure Automation runbook types](automation-runbook-types.md).| | Failed, waiting for resources |The job failed because it reached the [fair share](#fair-share) limit three times and started from the same checkpoint or from the start of the runbook each time. |
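The statuses in the table can also be queried programmatically. As a hedged sketch using the Az.Automation module (the resource group and account names are placeholders, and an authenticated Azure session is assumed):

```powershell
# Sketch: list failed jobs for an Automation account (names are placeholders).
# Assumes Connect-AzAccount has already been run.
Get-AzAutomationJob -ResourceGroupName 'MyResourceGroup' `
    -AutomationAccountName 'MyAutomationAccount' -Status 'Failed' |
    Select-Object RunbookName, Status, StartTime
```

Substitute any status from the table (for example `Activating` or `Completed`) to filter on a different job state.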
@@ -235,4 +236,4 @@ Using child runbooks decreases the total amount of time for the parent runbook t
* To get started with a PowerShell runbook, see [Tutorial: Create a PowerShell runbook](learn/automation-tutorial-runbook-textual-powershell.md). * To work with runbooks, see [Manage runbooks in Azure Automation](manage-runbooks.md). * For details of PowerShell, see [PowerShell Docs](/powershell/scripting/overview).
-* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation#automation).
\ No newline at end of file
+* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation#automation).
automation https://docs.microsoft.com/en-us/azure/automation/automation-security-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-security-overview.md
@@ -4,9 +4,10 @@ description: This article provides an overview of Azure Automation account authe
keywords: automation security, secure automation; automation authentication Previously updated : 09/28/2020 Last updated : 01/21/2021 + # Automation account authentication overview Azure Automation allows you to automate tasks against resources in Azure, on-premises, and with other cloud providers such as Amazon Web Services (AWS). You can use runbooks to automate your tasks, or a Hybrid Runbook Worker if you have business or operational processes to manage outside of Azure. Working in any one of these environments requires permissions to securely access the resources with the minimal rights required.
@@ -15,7 +16,9 @@ This article covers authentication scenarios supported by Azure Automation and t
## Automation account
-When you start Azure Automation for the first time, you must create at least one Automation account. Automation accounts allow you to isolate your Automation resources, runbooks, assets, configurations, from the resources of other accounts. You can use Automation accounts to separate resources into separate logical environments. For example, you might use one account for development, another for production, and another for your on-premises environment. An Azure Automation account is different from your Microsoft account or accounts created in your Azure subscription. For an introduction to creating an Automation account, see [Create an Automation account](automation-quickstart-create-account.md).
+When you start Azure Automation for the first time, you must create at least one Automation account. Automation accounts allow you to isolate your Automation resources, runbooks, assets, and configurations from the resources of other accounts. You can use Automation accounts to separate resources into separate logical environments or delegated responsibilities. For example, you might use one account for development, another for production, and another for your on-premises environment. Or you might dedicate an Automation account to manage operating system updates across all of your machines with [Update Management](update-management/overview.md).
+
+An Azure Automation account is different from your Microsoft account or accounts created in your Azure subscription. For an introduction to creating an Automation account, see [Create an Automation account](automation-quickstart-create-account.md).
## Automation resources
@@ -27,31 +30,32 @@ All tasks that you create against resources using Azure Resource Manager and the
Run As accounts in Azure Automation provide authentication for managing Azure Resource Manager resources or resources deployed on the classic deployment model. There are two types of Run As accounts in Azure Automation:
-* Azure Run As account
-* Azure Classic Run As account
+* Azure Run As account: Allows you to manage Azure resources based on the Azure Resource Manager deployment and management service for Azure.
+* Azure Classic Run As account: Allows you to manage Azure classic resources based on the Classic deployment model.
-To learn more about these two deployment models, see [Resource Manager and classic deployment](../azure-resource-manager/management/deployment-models.md).
+To learn more about the Azure Resource Manager and Classic deployment models, see [Resource Manager and classic deployment](../azure-resource-manager/management/deployment-models.md).
>[!NOTE] >Azure Cloud Solution Provider (CSP) subscriptions support only the Azure Resource Manager model. Non-Azure Resource Manager services are not available in the program. When you are using a CSP subscription, the Azure Classic Run As account is not created, but the Azure Run As account is created. To learn more about CSP subscriptions, see [Available services in CSP subscriptions](/azure/cloud-solution-provider/overview/azure-csp-available-services).
-### Run As account
+When you create an Automation account, the Run As account is created by default at the same time. If you chose not to create it along with the Automation account, you can create it individually later. An Azure Classic Run As account is optional, and is created separately if you need to manage classic resources.
-The Azure Run As account manages Azure resources based on the Azure Resource Manager deployment and management service for Azure.
+### Run As account
When you create a Run As account, it performs the following tasks:
-* Creates an Azure AD application with a self-signed certificate, creates a service principal account for the application in Azure AD, and assigns the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role for the account in your current subscription. You can change the certificate setting to Owner or any other role. For more information, see [Role-based access control in Azure Automation](automation-role-based-access-control.md).
+* Creates an Azure AD application with a self-signed certificate, creates a service principal account for the application in Azure AD, and assigns the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role for the account in your current subscription. You can change the certificate setting to [Reader](../role-based-access-control/built-in-roles.md#reader) or any other role. For more information, see [Role-based access control in Azure Automation](automation-role-based-access-control.md).
* Creates an Automation certificate asset named `AzureRunAsCertificate` in the specified Automation account. The certificate asset holds the certificate private key that the Azure AD application uses. * Creates an Automation connection asset named `AzureRunAsConnection` in the specified Automation account. The connection asset holds the application ID, tenant ID, subscription ID, and certificate thumbprint.
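These two assets are what a runbook uses to sign in. As a hedged sketch of the commonly documented pattern (assuming the Az modules are imported into the Automation account):

```powershell
# Sketch: authenticate inside a runbook with the AzureRunAsConnection asset.
# The connection asset supplies the tenant, application, and certificate details.
$conn = Get-AutomationConnection -Name 'AzureRunAsConnection'
Connect-AzAccount -ServicePrincipal `
    -TenantId $conn.TenantId `
    -ApplicationId $conn.ApplicationId `
    -CertificateThumbprint $conn.CertificateThumbprint
```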
-### Azure Classic Run As Account
+### Azure Classic Run As account
-The Azure Classic Run As account manages Azure classic resources based on the Classic deployment model. You must be a co-administrator on the subscription to create or renew this type of Run As account.
+When you create an Azure Classic Run As account, it performs the following tasks:
-When you create an Azure Classic Run As account, it performs the following tasks.
+> [!NOTE]
+> You must be a co-administrator on the subscription to create or renew this type of Run As account.
* Creates a management certificate in the subscription.
@@ -59,17 +63,45 @@ When you create an Azure Classic Run As account, it performs the following tasks
* Creates an Automation connection asset named `AzureClassicRunAsConnection` in the specified Automation account. The connection asset holds the subscription name, subscription ID, and certificate asset name.
->[!NOTE]
->Azure Classic Run As account is not created by default at the same time when you create an Automation account. This account is created individually following the steps described in the [Manage Run As account](manage-runas-account.md#create-a-run-as-account-in-azure-portal) article.
- ## Service principal for Run As account The service principal for a Run As account does not have permissions to read Azure AD by default. If you want to add permissions to read or manage Azure AD, you must grant the permissions on the service principal under **API permissions**. To learn more, see [Add permissions to access your web API](../active-directory/develop/quickstart-configure-app-access-web-apis.md#add-permissions-to-access-your-web-api).
+## <a name="permissions"></a>Run As account permissions
+
+This section defines permissions for both regular Run As accounts and Classic Run As accounts.
+
+* To create or update a Run As account, an Application Administrator in Azure Active Directory and an Owner on the subscription can complete all the tasks.
+* To configure or renew Classic Run As accounts, you must have the Co-administrator role at the subscription level. To learn more about classic subscription permissions, see [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md#add-a-co-administrator).
+
+In a situation where you have separation of duties, the following table lists the tasks, the equivalent cmdlets, and the minimum permissions needed:
+
+|Task|Cmdlet |Minimum Permissions |Where you set the permissions|
+|---|---|---|---|
+|Create Azure AD Application|[New-AzADApplication](/powershell/module/az.resources/new-azadapplication) | Application Developer role<sup>1</sup> |[Azure AD](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app)</br>Home > Azure AD > App Registrations |
+|Add a credential to the application.|[New-AzADAppCredential](/powershell/module/az.resources/new-azadappcredential) | Application Administrator or Global Administrator<sup>1</sup> |[Azure AD](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app)</br>Home > Azure AD > App Registrations|
+|Create and get an Azure AD service principal|[New-AzADServicePrincipal](/powershell/module/az.resources/new-azadserviceprincipal)</br>[Get-AzADServicePrincipal](/powershell/module/az.resources/get-azadserviceprincipal) | Application Administrator or Global Administrator<sup>1</sup> |[Azure AD](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app)</br>Home > Azure AD > App Registrations|
+|Assign or get the Azure role for the specified principal|[New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment)</br>[Get-AzRoleAssignment](/powershell/module/Az.Resources/Get-AzRoleAssignment) | User Access Administrator or Owner, or have the following permissions:</br></br><code>Microsoft.Authorization/Operations/read</br>Microsoft.Authorization/permissions/read</br>Microsoft.Authorization/roleDefinitions/read</br>Microsoft.Authorization/roleAssignments/write</br>Microsoft.Authorization/roleAssignments/read</br>Microsoft.Authorization/roleAssignments/delete</code></br></br> | [Subscription](../role-based-access-control/role-assignments-portal.md)</br>Home > Subscriptions > \<subscription name\> - Access Control (IAM)|
+|Create or remove an Automation certificate|[New-AzAutomationCertificate](/powershell/module/Az.Automation/New-AzAutomationCertificate)</br>[Remove-AzAutomationCertificate](/powershell/module/az.automation/remove-azautomationcertificate) | Contributor on resource group |Automation account resource group|
+|Create or remove an Automation connection|[New-AzAutomationConnection](/powershell/module/az.automation/new-azautomationconnection)</br>[Remove-AzAutomationConnection](/powershell/module/az.automation/remove-azautomationconnection)|Contributor on resource group |Automation account resource group|
+
+<sup>1</sup> Non-administrator users in your Azure AD tenant can [register AD applications](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app) if the Azure AD tenant's **Users can register applications** option on the **User settings** page is set to **Yes**. If the application registration setting is **No**, the user performing this action must hold one of the roles defined in this table.
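The separation-of-duties flow in the table can be sketched end to end in PowerShell. This is an illustrative outline, not the portal's exact implementation; the display name, identifier URI, role, and scope placeholder are all assumptions, and each step requires the permissions listed above:

```powershell
# Sketch: create the AD application, service principal, and role assignment
# that a Run As account depends on (values are illustrative placeholders).
$app = New-AzADApplication -DisplayName 'MyRunAsApp' `
    -IdentifierUris 'http://MyRunAsApp'
$sp  = New-AzADServicePrincipal -ApplicationId $app.ApplicationId
New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName 'Contributor' `
    -Scope "/subscriptions/<SubscriptionId>"
```

In practice the certificate and connection assets would then be created with `New-AzAutomationCertificate` and `New-AzAutomationConnection`, as the last two table rows indicate.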
+
+If you aren't a member of the subscription's Active Directory instance before you're added to the Global Administrator role of the subscription, you're added as a guest. In this situation, you receive a `You do not have permissions to create…` warning on the **Add Automation Account** page.
+
+To verify that the situation producing the error message has been remedied:
+
+1. From the Azure Active Directory pane in the Azure portal, select **Users and groups**.
+2. Select **All users**.
+3. Choose your name, then select **Profile**.
+4. Ensure that the value of the **User type** attribute under your user's profile is not set to **Guest**.
+ ## Role-based access control Role-based access control is available with Azure Resource Manager to grant permitted actions to an Azure AD user account and Run As account, and authenticate the service principal. Read [Role-based access control in Azure Automation article](automation-role-based-access-control.md) for further information to help develop your model for managing Automation permissions.
+If you have strict security controls for permission assignment in resource groups, you need to assign the Run As account membership to the **Contributor** role in the resource group.
+ ## Runbook authentication with Hybrid Runbook Worker Runbooks running on a Hybrid Runbook Worker in your datacenter, or against computing services in other cloud environments like AWS, cannot use the same method that is typically used for runbooks authenticating to Azure resources. This is because those resources run outside of Azure and therefore require their own security credentials, defined in Automation, to authenticate to the resources that they access locally. For more information about runbook authentication with runbook workers, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
automation https://docs.microsoft.com/en-us/azure/automation/automation-send-email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-send-email.md
@@ -16,7 +16,7 @@ You can send an email from a runbook with [SendGrid](https://sendgrid.com/soluti
* Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * [A SendGrid account](../sendgrid-dotnet-how-to-send-email.md#create-a-sendgrid-account). * [Automation account](./index.yml) with **Az** modules.
-* [Run As account](./manage-runas-account.md) to store and execute the runbook.
+* [Run As account](./automation-security-overview.md#run-as-accounts) to store and execute the runbook.
## Create an Azure Key Vault
@@ -69,7 +69,7 @@ For instructions, see [Import Az modules](shared-resources/modules.md#import-az-
## Create the runbook to send an email
-After you have created a Key Vault and stored your `SendGrid` API key, it's time to create the runbook that retrieves the API key and sends an email. Let's use a runbook that uses `AzureRunAsConnection` as a [Run As account](./manage-runas-account.md) to
+After you have created a Key Vault and stored your `SendGrid` API key, it's time to create the runbook that retrieves the API key and sends an email. Let's use a runbook that uses `AzureRunAsConnection` as a [Run As account](./automation-security-overview.md#run-as-accounts) to
authenticate with Azure to retrieve the secret from Azure Key Vault. We'll call the runbook **Send-GridMailMessage**. You can modify the PowerShell script used for example purposes, and reuse it for different scenarios. 1. Go to your Azure Automation account.
automation https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-solution-vm-management.md
@@ -6,6 +6,7 @@
 Last updated 09/22/2020 + # Start/Stop VMs during off-hours overview The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/platform/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.
@@ -31,7 +32,7 @@ The following are limitations with the current feature:
## Prerequisites -- The runbooks for the Start/Stop VMs during off hours feature work with an [Azure Run As account](./manage-runas-account.md). The Run As account is the preferred authentication method because it uses certificate authentication instead of a password that might expire or change frequently.
+- The runbooks for the Start/Stop VMs during off hours feature work with an [Azure Run As account](./automation-security-overview.md#run-as-accounts). The Run As account is the preferred authentication method because it uses certificate authentication instead of a password that might expire or change frequently.
- The linked Automation account and Log Analytics workspace need to be in the same resource group.
@@ -73,7 +74,7 @@ To enable VMs for the Start/Stop VMs during off-hours feature using an existing
You can enable VMs for the Start/Stop VMs during off-hours feature using a new Automation account and Log Analytics workspace. In this case, you need the permissions defined in the preceding section as well as the permissions defined in this section. You also require the following roles: - Co-Administrator on subscription. This role is required to create the Classic Run As account if you are going to manage classic VMs. [Classic Run As accounts](automation-create-standalone-account.md#create-a-classic-run-as-account) are no longer created by default.-- Membership in the [Azure AD](../active-directory/roles/permissions-reference.md) Application Developer role. For more information on configuring Run As Accounts, see [Permissions to configure Run As accounts](manage-runas-account.md#permissions).
+- Membership in the [Azure AD](../active-directory/roles/permissions-reference.md) Application Developer role. For more information on configuring Run As Accounts, see [Permissions to configure Run As accounts](automation-security-overview.md#permissions).
- Contributor on the subscription or the following permissions. | Permission |Scope|
automation https://docs.microsoft.com/en-us/azure/automation/automation-update-azure-modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-update-azure-modules.md
@@ -18,7 +18,7 @@ The most common PowerShell modules are provided by default in each Automation ac
To avoid impacting your runbooks and the processes they automate, be sure to test and validate as you make updates. If you don't have a dedicated Automation account intended for this purpose, consider creating one so that you can test many different scenarios during the development of your runbooks. This testing should include iterative changes, such as updating the PowerShell modules.
-Make sure that your Automation account has an [Azure Run As account credential](manage-runas-account.md) created.
+Make sure that your Automation account has an [Azure Run As account](automation-security-overview.md#run-as-accounts) created.
If you develop your scripts locally, it's recommended to have the same module versions locally that you have in your Automation account when testing to ensure that you receive the same results. After the results are validated and you've applied any changes required, you can move the changes to production.
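One way to keep local and cloud module versions aligned is to compare them before testing. A hedged sketch (the account, resource group, and module names are placeholders; `Get-InstalledModule` comes from PowerShellGet):

```powershell
# Sketch: compare a module's version in the Automation account with the
# locally installed version (names are placeholders).
$cloud = Get-AzAutomationModule -ResourceGroupName 'MyResourceGroup' `
    -AutomationAccountName 'MyAutomationAccount' -Name 'Az.Accounts'
$local = Get-InstalledModule -Name 'Az.Accounts' -ErrorAction SilentlyContinue
"{0} in account: {1}; local: {2}" -f $cloud.Name, $cloud.Version, $local.Version
```

If the versions differ, update the lower side before validating runbook changes.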
automation https://docs.microsoft.com/en-us/azure/automation/change-tracking/enable-from-automation-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/change-tracking/enable-from-automation-account.md
@@ -6,6 +6,7 @@
Last updated 10/14/2020 + # Enable Change Tracking and Inventory from an Automation account This article describes how you can use your Automation account to enable [Change Tracking and Inventory](overview.md) for VMs in your environment. To enable Azure VMs at scale, you must enable an existing VM using Change Tracking and Inventory.
@@ -16,7 +17,7 @@ This article describes how you can use your Automation account to enable [Change
## Prerequisites * Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [Automation account](../index.yml) to manage machines.
+* [Automation account](../automation-security-overview.md) to manage machines.
* A [virtual machine](../../virtual-machines/windows/quick-create-portal.md). ## Sign in to Azure
automation https://docs.microsoft.com/en-us/azure/automation/change-tracking/enable-from-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/change-tracking/enable-from-portal.md
@@ -6,6 +6,7 @@
Last updated 10/14/2020 + # Enable Change Tracking and Inventory from Azure portal This article describes how you can enable [Change Tracking and Inventory](overview.md) for one or more Azure VMs in the Azure portal. To enable Azure VMs at scale, you must enable an existing VM using Change Tracking and Inventory.
@@ -18,7 +19,7 @@ The number of resource groups that you can use for managing your VMs is limited
## Prerequisites * Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [Automation account](../index.yml) to manage machines.
+* [Automation account](../automation-security-overview.md) to manage machines.
* A [virtual machine](../../virtual-machines/windows/quick-create-portal.md). ## Sign in to Azure
automation https://docs.microsoft.com/en-us/azure/automation/change-tracking/enable-from-runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/change-tracking/enable-from-runbook.md
@@ -6,6 +6,7 @@
Last updated 10/14/2020 + # Enable Change Tracking and Inventory from a runbook This article describes how you can use a runbook to enable [Change Tracking and Inventory](overview.md) for VMs in your environment. To enable Azure VMs at scale, you must enable an existing VM using Change Tracking and Inventory.
automation https://docs.microsoft.com/en-us/azure/automation/change-tracking/enable-from-vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/change-tracking/enable-from-vm.md
@@ -17,7 +17,7 @@ This article describes how you can use an Azure VM to enable [Change Tracking an
## Prerequisites * Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [Automation account](../index.yml) to manage machines.
+* [Automation account](../automation-security-overview.md) to manage machines.
* A [virtual machine](../../virtual-machines/windows/quick-create-portal.md). ## Sign in to Azure
automation https://docs.microsoft.com/en-us/azure/automation/create-run-as-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/create-run-as-account.md new file mode 100644 /dev/null
@@ -0,0 +1,108 @@
+
+ Title: Create an Azure Automation Run As account
+description: This article tells how to create a Run As account with PowerShell or from the Azure portal.
++ Last updated : 01/06/2021+++
+# How to create an Azure Automation Run As account
+
+Run As accounts in Azure Automation provide authentication for managing resources on the Azure Resource Manager or Azure Classic deployment model using Automation runbooks and other Automation features. This article describes how to create a Run As or Classic Run As account from the Azure portal or Azure PowerShell.
+
+## Create account in Azure portal
+
+Perform the following steps to update your Azure Automation account in the Azure portal. The Run As and Classic Run As accounts are created separately. If you don't need to manage classic resources, you can just create the Azure Run As account.
+
+1. Sign in to the Azure portal with an account that is a member of the Subscription Admins role and co-administrator of the subscription.
+
+2. Search for and select **Automation Accounts**.
+
+3. On the **Automation Accounts** page, select your Automation account from the list.
+
+4. In the left pane, select **Run As Accounts** in the **Account Settings** section.
+
+ :::image type="content" source="media/create-run-as-account/automation-account-properties-pane.png" alt-text="Select the Run As Account option.":::
+
+5. Depending on the account you require, use the **+ Azure Run As Account** or **+ Azure Classic Run As Account** pane. After reviewing the overview information, click **Create**.
+
+ :::image type="content" source="media/create-run-as-account/automation-account-create-run-as.png" alt-text="Select the option to create a Run As Account":::
+
+6. While Azure creates the Run As account, you can track the progress under **Notifications** from the menu. A banner is also displayed stating that the account is being created. The process can take a few minutes to complete.
+
+## Create account using PowerShell
+
+The following list provides the requirements to create a Run As account in PowerShell using a provided script. These requirements apply to both types of Run As accounts.
+
+* Windows 10 or Windows Server 2016 with Azure Resource Manager modules 3.4.1 and later. The PowerShell script doesn't support earlier versions of Windows.
+* Azure PowerShell 6.2.4 or later. For information, see [How to install and configure Azure PowerShell](/powershell/azure/install-az-ps).
+* An Automation account, which is referenced as the value for the `AutomationAccountName` and `ApplicationDisplayName` parameters.
+* Permissions equivalent to the ones listed in [Required permissions to configure Run As accounts](automation-security-overview.md#permissions).
+
+To get the values for `AutomationAccountName`, `SubscriptionId`, and `ResourceGroupName`, which are required parameters for the PowerShell script, complete the following steps.
+
+1. Sign in to the Azure portal.
+
+1. Search for and select **Automation Accounts**.
+
+1. On the Automation Accounts page, select your Automation account from the list.
+
+1. In the left pane, select **Properties**.
+
+1. Note the values for **Name**, **Subscription ID**, and **Resource Group** on the **Properties** page.
+
+ ![Automation account properties page](media/create-run-as-account/automation-account-properties.png)
+
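+You can also read these values with Azure PowerShell instead of the portal. This is a sketch only; the resource group and account names are placeholders, and it assumes you've already signed in with `Connect-AzAccount`.
+
+```powershell
+# Placeholder names; replace with your own values.
+$account = Get-AzAutomationAccount -ResourceGroupName "ContosoRG" -Name "ContosoAutomation"
+$account.AutomationAccountName   # value for the AutomationAccountName parameter
+$account.ResourceGroupName       # value for the ResourceGroupName parameter
+$account.SubscriptionId          # value for the SubscriptionId parameter
+```
+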
+### PowerShell script to create a Run As account
+
+The PowerShell script includes support for several configurations.
+
+* Create a Run As account by using a self-signed certificate.
+* Create a Run As account, a Classic Run As account, or both by using a self-signed certificate.
+* Create a Run As account, a Classic Run As account, or both by using a certificate issued by your enterprise certification authority (CA).
+* Create a Run As account, a Classic Run As account, or both by using a self-signed certificate in the Azure Government cloud.
+
+1. Download and save the script to a local folder using the following command.
+
+ ```powershell
+ wget https://raw.githubusercontent.com/azureautomation/runbooks/master/Utility/AzRunAs/Create-RunAsAccount.ps1 -outfile Create-RunAsAccount.ps1
+ ```
+
+2. Start PowerShell with elevated user rights and navigate to the folder that contains the script.
+
+3. Run one of the following commands to create a Run As account, a Classic Run As account, or both, based on your requirements.
+
+ * Create a Run As account using a self-signed certificate.
+
+ ```powershell
+ .\Create-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <NameofAutomationAccount> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <DisplayNameofAADApplication> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $false
+ ```
+
+ * Create a Run As account and a Classic Run As account by using a self-signed certificate.
+
+ ```powershell
+ .\Create-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <NameofAutomationAccount> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <DisplayNameofAADApplication> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $true
+ ```
+
+ * Create a Run As account and a Classic Run As account by using an enterprise certificate.
+
+ ```powershell
+ .\Create-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <NameofAutomationAccount> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <DisplayNameofAADApplication> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $true -EnterpriseCertPathForRunAsAccount <EnterpriseCertPfxPathForRunAsAccount> -EnterpriseCertPlainPasswordForRunAsAccount <StrongPassword> -EnterpriseCertPathForClassicRunAsAccount <EnterpriseCertPfxPathForClassicRunAsAccount> -EnterpriseCertPlainPasswordForClassicRunAsAccount <StrongPassword>
+ ```
+
+    If you create a Classic Run As account with an enterprise public certificate (.cer file), the script creates and saves that certificate to the temporary files folder on your computer (`%USERPROFILE%\AppData\Local\Temp` for the user profile that ran the PowerShell session). Use it when [uploading a management API certificate to the Azure portal](../cloud-services/cloud-services-configure-ssl-certificate-portal.md).
+
+    * Create a Run As account and a Classic Run As account by using a self-signed certificate in the Azure Government cloud.
+
+ ```powershell
+ .\Create-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <NameofAutomationAccount> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <DisplayNameofAADApplication> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $true -EnvironmentName AzureUSGovernment
+ ```
+
+4. After the script has executed, you're prompted to authenticate with Azure. Sign in with an account that's a member of the subscription administrators role. If you are creating a Classic Run As account, your account must be a co-administrator of the subscription.
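+
+After the script completes, runbooks authenticate with the Run As account through the `AzureRunAsConnection` connection asset. The following sketch shows the standard sign-in pattern; it assumes the Az modules are available in the Automation account.
+
+```powershell
+# Retrieve the Run As connection asset and sign in with its service principal.
+$connection = Get-AutomationConnection -Name "AzureRunAsConnection"
+Connect-AzAccount -ServicePrincipal `
+    -Tenant $connection.TenantId `
+    -ApplicationId $connection.ApplicationId `
+    -CertificateThumbprint $connection.CertificateThumbprint
+```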
+
+## Next steps
+
+* To learn more about graphical authoring, see [Author graphical runbooks in Azure Automation](automation-graphical-authoring-intro.md).
+* To get started with PowerShell runbooks, see [Tutorial: Create a PowerShell runbook](learn/automation-tutorial-runbook-textual-powershell.md).
+* To get started with a Python 3 runbook, see [Tutorial: Create a Python 3 runbook](learn/automation-tutorial-runbook-textual-python-3.md).
\ No newline at end of file
automation https://docs.microsoft.com/en-us/azure/automation/delete-run-as-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/delete-run-as-account.md new file mode 100644 /dev/null
@@ -0,0 +1,30 @@
+
+ Title: Delete an Azure Automation Run As account
+description: This article tells how to delete a Run As account with PowerShell or from the Azure portal.
++ Last updated : 01/06/2021+++
+# Delete an Azure Automation Run As account
+
+Run As accounts in Azure Automation provide authentication for managing resources in the Azure Resource Manager and Azure Classic deployment models by using Automation runbooks and other Automation features. This article describes how to delete a Run As or Classic Run As account. When you perform this action, the Automation account is retained. After you delete the Run As account, you can re-create it in the Azure portal or with the provided PowerShell script.
+
+## Delete a Run As or Classic Run As account
+
+1. In the Azure portal, open the Automation account.
+
+2. In the left pane, select **Run As Accounts** in the **Account Settings** section.
+
+3. On the Run As Accounts properties page, select either the Run As account or Classic Run As account that you want to delete.
+
+4. On the Properties pane for the selected account, click **Delete**.
+
+ ![Delete Run As account](media/delete-run-as-account/automation-account-delete-run-as.png)
+
+5. While the account is being deleted, you can track the progress under **Notifications** from the menu.
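+
+You can also remove the underlying pieces with PowerShell. A Run As account consists of an Azure AD application, a certificate asset, and a connection asset, so deleting it means removing each of those. The following is a sketch only; the resource group, account, and display names are placeholders, and the exact cleanup your environment needs may differ.
+
+```powershell
+# Placeholder names; replace with your own values.
+$rg = "ContosoRG"; $aa = "ContosoAutomation"
+
+# Remove the connection and certificate assets from the Automation account.
+Remove-AzAutomationConnection -ResourceGroupName $rg -AutomationAccountName $aa -Name "AzureRunAsConnection" -Force
+Remove-AzAutomationCertificate -ResourceGroupName $rg -AutomationAccountName $aa -Name "AzureRunAsCertificate"
+
+# Remove the Azure AD application (this also removes its service principal).
+Get-AzADApplication -DisplayName "<RunAsAppDisplayName>" | Remove-AzADApplication
+```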
+
+## Next steps
+
+To re-create your Run As or Classic Run As account, see [Create Run As accounts](create-run-as-account.md).
\ No newline at end of file
automation https://docs.microsoft.com/en-us/azure/automation/how-to/move-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/how-to/move-account.md
@@ -2,14 +2,11 @@
Title: Move your Azure Automation account to another subscription description: This article tells how to move your Automation account to another subscription. - -- Previously updated : 03/11/2019 Last updated : 01/07/2021 - + # Move your Azure Automation account to another subscription Azure Automation allows you to move some resources to a new resource group or subscription. You can move resources through the Azure portal, PowerShell, the Azure CLI, or the REST API. To learn more about the process, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
@@ -102,7 +99,7 @@ You can now move your Automation account and its runbooks.
## Re-create Run As accounts
-[Run As accounts](../manage-runas-account.md) create a service principal in Azure Active Directory to authenticate with Azure resources. When you change subscriptions, the Automation account no longer uses the existing Run As account. To re-create the Run As accounts:
+[Run As accounts](../automation-security-overview.md#run-as-accounts) create a service principal in Azure Active Directory to authenticate with Azure resources. When you change subscriptions, the Automation account no longer uses the existing Run As account. To re-create the Run As accounts:
1. Go to your Automation account in the new subscription, and select **Run as accounts** under **Account Settings**. You'll see that the Run As accounts show as incomplete now.
@@ -111,7 +108,7 @@ You can now move your Automation account and its runbooks.
2. Delete the Run As accounts, one at a time, by selecting **Delete** on the **Properties** page. > [!NOTE]
- > If you don't have permissions to create or view the Run As accounts, you see the following message: `You do not have permissions to create an Azure Run As account (service principal) and grant the Contributor role to the service principal.` For more information, see [Permissions required to configure Run As accounts](../manage-runas-account.md#permissions).
+ > If you don't have permissions to create or view the Run As accounts, you see the following message: `You do not have permissions to create an Azure Run As account (service principal) and grant the Contributor role to the service principal.` For more information, see [Permissions required to configure Run As accounts](../automation-security-overview.md#permissions).
3. After you've deleted the Run As accounts, select **Create** under **Azure Run As account**.
automation https://docs.microsoft.com/en-us/azure/automation/manage-runas-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/manage-runas-account.md
@@ -2,8 +2,8 @@
Title: Manage an Azure Automation Run As account description: This article tells how to manage your Run As account with PowerShell or from the Azure portal. - Previously updated : 09/28/2020+ Last updated : 01/19/2021
@@ -13,142 +13,6 @@ Run As accounts in Azure Automation provide authentication for managing resource
To learn more about Azure Automation account authentication and guidance related to process automation scenarios, see [Automation Account authentication overview](automation-security-overview.md).
-## <a name="permissions"></a>Run As account permissions
-
-This section defines permissions for both regular Run As accounts and Classic Run As accounts.
-
-To create or update a Run As account, you must have specific privileges and permissions. An Application administrator in Azure Active Directory and an Owner in a subscription can complete all the tasks. In a situation where you have separation of duties, the following table shows a listing of the tasks, the equivalent cmdlet, and permissions needed:
-
-|Task|Cmdlet |Minimum Permissions |Where you set the permissions|
-|||||
-|Create Azure AD Application|[New-AzADApplication](/powershell/module/az.resources/new-azadapplication) | Application Developer role<sup>1</sup> |[Azure AD](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app)</br>Home > Azure AD > App Registrations |
-|Add a credential to the application.|[New-AzADAppCredential](/powershell/module/az.resources/new-azadappcredential) | Application Administrator or Global Administrator<sup>1</sup> |[Azure AD](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app)</br>Home > Azure AD > App Registrations|
-|Create and get an Azure AD service principal|[New-AzADServicePrincipal](/powershell/module/az.resources/new-azadserviceprincipal)</br>[Get-AzADServicePrincipal](/powershell/module/az.resources/get-azadserviceprincipal) | Application Administrator or Global Administrator<sup>1</sup> |[Azure AD](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app)</br>Home > Azure AD > App Registrations|
-|Assign or get the Azure role for the specified principal|[New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment)</br>[Get-AzRoleAssignment](/powershell/module/Az.Resources/Get-AzRoleAssignment) | User Access Administrator or Owner, or have the following permissions:</br></br><code>Microsoft.Authorization/Operations/read</br>Microsoft.Authorization/permissions/read</br>Microsoft.Authorization/roleDefinitions/read</br>Microsoft.Authorization/roleAssignments/write</br>Microsoft.Authorization/roleAssignments/read</br>Microsoft.Authorization/roleAssignments/delete</code></br></br> | [Subscription](../role-based-access-control/role-assignments-portal.md)</br>Home > Subscriptions > \<subscription name\> - Access Control (IAM)|
-|Create or remove an Automation certificate|[New-AzAutomationCertificate](/powershell/module/Az.Automation/New-AzAutomationCertificate)</br>[Remove-AzAutomationCertificate](/powershell/module/az.automation/remove-azautomationcertificate) | Contributor on resource group |Automation account resource group|
-|Create or remove an Automation connection|[New-AzAutomationConnection](/powershell/module/az.automation/new-azautomationconnection)</br>[Remove-AzAutomationConnection](/powershell/module/az.automation/remove-azautomationconnection)|Contributor on resource group |Automation account resource group|
-
-<sup>1</sup> Non-administrator users in your Azure AD tenant can [register AD applications](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app) if the Azure AD tenant's **Users can register applications** option on the User settings page is set to **Yes**. If the application registration setting is **No**, the user performing this action must be as defined in this table.
-
-If you aren't a member of the subscription's Active Directory instance before you're added to the Global Administrator role of the subscription, you're added as a guest. In this situation, you receive a `You do not have permissions to create…` warning on the **Add Automation Account** page.
-
-If you are a member of the subscription's Active Directory instance where the Global Administrator role is assigned, you can also receive a `You do not have permissions to create…` warning on the **Add Automation Account** page. In this case, you can request removal from the subscription's Active Directory instance and then request to be re-added, so that you become a full user in Active Directory.
-
-To verify that the situation producing the error message has been remedied:
-
-1. From the Azure Active Directory pane in the Azure portal, select **Users and groups**.
-2. Select **All users**.
-3. Choose your name, then select **Profile**.
-4. Ensure that the value of the **User type** attribute under your user's profile is not set to **Guest**.
-
-### <a name="permissions-classic"></a>Permissions required to create or manage Classic Run As accounts
-
-To configure or renew Classic Run As accounts, you must have the Co-administrator role at the subscription level. To learn more about classic subscription permissions, see [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md#add-a-co-administrator).
-
-## Create a Run As account in Azure portal
-
-Perform the following steps to update your Azure Automation account in the Azure portal. Create the Run As and Classic Run As accounts individually. If you don't need to manage classic resources, you can just create the Azure Run As account.
-
-1. Log in to the Azure portal with an account that is a member of the Subscription Admins role and co-administrator of the subscription.
-
-2. Search for and select **Automation Accounts**.
-
-3. On the Automation Accounts page, select your Automation account from the list.
-
-4. In the left pane, select **Run As Accounts** in the **Account Settings** section.
-
- :::image type="content" source="media/manage-runas-account/automation-account-properties-pane.png" alt-text="Select the Run As Account option.":::
-
-5. Depending on the account you require, use the **+ Azure Run As Account** or **+ Azure Classic Run As Account** pane. After reviewing the overview information, click **Create**.
-
- :::image type="content" source="media/manage-runas-account/automation-account-create-runas.png" alt-text="Select the option to create a Run As Account":::
-
-6. While Azure creates the Run As account, you can track the progress under **Notifications** from the menu. A banner is also displayed stating that the account is being created. The process can take a few minutes to complete.
-
-## Create a Run As account using PowerShell
-
-The following list provides the requirements to create a Run As account in PowerShell using a provided script. These requirements apply to both types of Run As accounts.
-
-* Windows 10 or Windows Server 2016 with Azure Resource Manager modules 3.4.1 and later. The PowerShell script doesn't support earlier versions of Windows.
-* Azure PowerShell 6.2.4 or later. For information, see [How to install and configure Azure PowerShell](/powershell/azure/install-az-ps).
-* An Automation account, which is referenced as the value for the `AutomationAccountName` and `ApplicationDisplayName` parameters.
-* Permissions equivalent to the ones listed in [Required permissions to configure Run As accounts](#permissions).
-
-To get the values for `AutomationAccountName`, `SubscriptionId`, and `ResourceGroupName`, which are required parameters for the PowerShell script, complete the following steps.
-
-1. In the Azure portal, select **Automation Accounts**.
-
-1. On the Automation Accounts page, select your Automation account.
-
-1. In the account settings section, select **Properties**.
-
-1. Note the values for **Name**, **Subscription ID**, and **Resource Group** on the **Properties** page.
-
- ![Automation account properties page](media/manage-runas-account/automation-account-properties.png)
-
-### PowerShell script to create a Run As account
-
-The PowerShell script includes support for several configurations.
-
-* Create a Run As account by using a self-signed certificate.
-* Create a Run As account and a Classic Run As account by using a self-signed certificate.
-* Create a Run As account and a Classic Run As account by using a certificate issued by your enterprise certification authority (CA).
-* Create a Run As account and a Classic Run As account by using a self-signed certificate in the Azure Government cloud.
-
-1. Download and save the script to a local folder using the following command.
-
- ```powershell
- wget https://raw.githubusercontent.com/azureautomation/runbooks/master/Utility/AzRunAs/Create-RunAsAccount.ps1 -outfile Create-RunAsAccount.ps1
- ```
-
-2. Start PowerShell with elevated user rights and navigate to the folder that contains the script.
-
-3. Run one of the following commands to create a Run As and/or Classic Run As account based on your requirements.
-
- * Create a Run As account using a self-signed certificate.
-
- ```powershell
- .\Create-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <NameofAutomationAccount> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <DisplayNameofAADApplication> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $false
- ```
-
- * Create a Run As account and a Classic Run As account by using a self-signed certificate.
-
- ```powershell
- .\Create-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <NameofAutomationAccount> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <DisplayNameofAADApplication> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $true
- ```
-
- * Create a Run As account and a Classic Run As account by using an enterprise certificate.
-
- ```powershell
- .\Create-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <NameofAutomationAccount> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <DisplayNameofAADApplication> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $true -EnterpriseCertPathForRunAsAccount <EnterpriseCertPfxPathForRunAsAccount> -EnterpriseCertPlainPasswordForRunAsAccount <StrongPassword> -EnterpriseCertPathForClassicRunAsAccount <EnterpriseCertPfxPathForClassicRunAsAccount> -EnterpriseCertPlainPasswordForClassicRunAsAccount <StrongPassword>
- ```
-
- If you've created a Classic Run As account with an enterprise public certificate (.cer file), use this certificate. The script creates and saves it to the temporary files folder on your computer, under the user profile `%USERPROFILE%\AppData\Local\Temp` you used to execute the PowerShell session. See [Uploading a management API certificate to the Azure portal](../cloud-services/cloud-services-configure-ssl-certificate-portal.md).
-
- * Create a Run As account and a Classic Run As account by using a self-signed certificate in the Azure Government cloud
-
- ```powershell
- .\Create-RunAsAccount.ps1 -ResourceGroup <ResourceGroupName> -AutomationAccountName <NameofAutomationAccount> -SubscriptionId <SubscriptionId> -ApplicationDisplayName <DisplayNameofAADApplication> -SelfSignedCertPlainPassword <StrongPassword> -CreateClassicRunAsAccount $true -EnvironmentName AzureUSGovernment
- ```
-
-4. After the script has executed, you're prompted to authenticate with Azure. Sign in with an account that's a member of the subscription administrators role. If you are creating a Classic Run As account, your account must be a co-administrator of the subscription.
-
-## Delete a Run As or Classic Run As account
-
-This section describes how to delete a Run As or Classic Run As account. When you perform this action, the Automation account is retained. After you delete the Run As account, you can re-create it in the Azure portal or with the provided PowerShell script.
-
-1. In the Azure portal, open the Automation account.
-
-2. In the left pane, select **Run As Accounts** in the account settings section.
-
-3. On the Run As Accounts properties page, select either the Run As account or Classic Run As account that you want to delete.
-
-4. On the Properties pane for the selected account, click **Delete**.
-
- ![Delete Run As account](media/manage-runas-account/automation-account-delete-runas.png)
-
-5. While the account is being deleted, you can track the progress under **Notifications** from the menu.
- ## <a name="cert-renewal"></a>Renew a self-signed certificate The self-signed certificate that you have created for the Run As account expires one year from the date of creation. At some point before your Run As account expires, you must renew the certificate. You can renew it any time before it expires.
@@ -163,20 +27,45 @@ When you renew the self-signed certificate, the current valid certificate is ret
Use the following steps to renew the self-signed certificate.
-1. In the Azure portal, open the Automation account.
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **Run As Accounts** in the account settings section.
+1. Go to your Automation account and select **Run As Accounts** in the account settings section.
- ![Automation account properties pane](media/manage-runas-account/automation-account-properties-pane.png)
+ :::image type="content" source="media/manage-runas-account/automation-account-properties-pane.png" alt-text="Automation account properties pane.":::
-1. On the Run As Accounts properties page, select either the Run As account or the Classic Run As account for which to renew the certificate.
+1. On the **Run As Accounts** properties page, select either **Run As Account** or **Classic Run As Account**, depending on which account's certificate you need to renew.
-1. On the properties pane for the selected account, click **Renew certificate**.
+1. On the **Properties** page for the selected account, select **Renew certificate**.
- ![Renew certificate for Run As account](media/manage-runas-account/automation-account-renew-runas-certificate.png)
+ :::image type="content" source="media/manage-runas-account/automation-account-renew-runas-certificate.png" alt-text="Renew certificate for Run As account.":::
1. While the certificate is being renewed, you can track the progress under **Notifications** from the menu.
+## Grant Run As account permissions in other subscriptions
+
+Azure Automation supports using a single Automation account in one subscription to execute runbooks against Azure Resource Manager resources across multiple subscriptions. This configuration doesn't support the Azure Classic deployment model.
+
+Assign the Run As account's service principal the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role, or more restrictive permissions, in the other subscription. For more information, see [Role-based access control](automation-role-based-access-control.md) in Azure Automation. To assign the Run As account to a role in the other subscription, the user account performing the assignment must have the **Owner** role in that subscription.
+
+> [!NOTE]
+> This configuration is supported only for multiple subscriptions of an organization that use a common Azure AD tenant.
+
+Before granting the Run As account permissions, first note the display name of the service principal to assign.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. From your Automation account, select **Run As Accounts** under **Account Settings**.
+1. Select **Azure Run As Account**.
+1. Copy or note the value for **Display Name** on the **Azure Run As Account** page.
+
+For detailed steps on adding role assignments, see the following articles, depending on the method you want to use.
+
+* [Add Azure role assignment from the Azure portal](../role-based-access-control/role-assignments-portal.md)
+* [Add Azure role assignment using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
+* [Add Azure role assignment using the Azure CLI](../role-based-access-control/role-assignments-cli.md)
+* [Add Azure role assignment using the REST API](../role-based-access-control/role-assignments-rest.md)
+
+After you assign the Run As account to the role, specify `Set-AzContext -SubscriptionId "xxxx-xxxx-xxxx-xxxx"` in your runbook to set the subscription context to use. For more information, see [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
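+
+For example, a runbook that operates on the other subscription might follow this sketch (the subscription ID is a placeholder, and the runbook is assumed to have already authenticated through its Run As connection):
+
+```powershell
+# Switch the session context to the other subscription (placeholder ID).
+Set-AzContext -SubscriptionId "xxxx-xxxx-xxxx-xxxx"
+
+# Subsequent Az cmdlets now target that subscription.
+Get-AzVM | Select-Object Name, ResourceGroupName
+```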
+ ## Limit Run As account permissions To control the targeting of Automation against resources in Azure, you can run the [Update-AutomationRunAsAccountRoleAssignments.ps1](https://aka.ms/AA5hug8) script. This script changes your existing Run As account service principal to create and use a custom role definition. The role has permissions for all resources except [Key Vault](../key-vault/index.yml).
@@ -184,7 +73,7 @@ To control the targeting of Automation against resources in Azure, you can run t
>[!IMPORTANT] >After you run the **Update-AutomationRunAsAccountRoleAssignments.ps1** script, runbooks that access Key Vault through the use of Run As accounts no longer work. Before running the script, you should review runbooks in your account for calls to Azure Key Vault. To enable access to Key Vault from Azure Automation runbooks, you must [add the Run As account to Key Vault's permissions](#add-permissions-to-key-vault).
-If you need to restrict, further what the Run As service principal can do, you can add other resource types to the `NotActions` element of the custom role definition. The following example restricts access to `Microsoft.Compute/*`. If you add this resource type to `NotActions` for the role definition, the role will not be able to access any Compute resource. To learn more about role definitions, see [Understand role definitions for Azure resources](../role-based-access-control/role-definitions.md).
+If you need to further restrict what the Run As service principal can do, you can add other resource types to the `NotActions` element of the custom role definition. The following example restricts access to `Microsoft.Compute/*`. If you add this resource type to `NotActions` for the role definition, the role will not be able to access any Compute resource. To learn more about role definitions, see [Understand role definitions for Azure resources](../role-based-access-control/role-definitions.md).
```powershell $roleDefinition = Get-AzRoleDefinition -Name 'Automation RunAs Contributor'
@@ -192,11 +81,12 @@ $roleDefinition.NotActions.Add("Microsoft.Compute/*")
$roleDefinition | Set-AzRoleDefinition ```
-You can determine if the service principal used by your Run As account is in the Contributor role definition or a custom one.
+You can determine whether the service principal used by your Run As account is assigned the **Contributor** role or a custom one.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Go to your Automation account and select **Run As Accounts** in the account settings section.
-2. Select **Azure Run As Account**.
-3. Select **Role** to locate the role definition that is being used.
+1. Select **Azure Run As Account**.
+1. Select **Role** to locate the role definition that is being used.
:::image type="content" source="media/manage-runas-account/verify-role.png" alt-text="Verify the Run As Account role." lightbox="media/manage-runas-account/verify-role-expanded.png":::
@@ -209,7 +99,7 @@ You can allow Azure Automation to verify if Key Vault and your Run As account se
* Grant permissions to Key Vault. * Set the access policy.
-You can use the [Extend-AutomationRunAsAccountRoleAssignmentToKeyVault.ps1](https://aka.ms/AA5hugb) script in the PowerShell Gallery to give your Run As account permissions to Key Vault. See [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-powershell.md) for more details on setting permissions on Key Vault.
+You can use the [Extend-AutomationRunAsAccountRoleAssignmentToKeyVault.ps1](https://aka.ms/AA5hugb) script in the PowerShell Gallery to grant your Run As account permissions to Key Vault. See [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-powershell.md) for more details on setting permissions on Key Vault.
## Resolve misconfiguration issues for Run As accounts
@@ -222,7 +112,7 @@ Some configuration items necessary for a Run As or Classic Run As account might
For such misconfiguration instances, the Automation account detects the changes and displays a status of *Incomplete* on the Run As Accounts properties pane for the account.
-![Incomplete Run As account configuration status](media/manage-runas-account/automation-account-runas-config-incomplete.png)
+:::image type="content" source="media/manage-runas-account/automation-account-runas-config-incomplete.png" alt-text="Incomplete Run As account configuration.":::
When you select the Run As account, the account properties pane displays the following error message:
@@ -230,9 +120,11 @@ When you select the Run As account, the account properties pane displays the fol
The Run As account is incomplete. Either one of these was deleted or not created - Azure Active Directory Application, Service Principal, Role, Automation Certificate asset, Automation Connect asset - or the Thumbprint is not identical between Certificate and Connection. Please delete and then re-create the Run As Account. ```
-You can quickly resolve these Run As account issues by deleting and re-creating the Run As account.
+You can quickly resolve these Run As account issues by [deleting](delete-run-as-account.md) and [re-creating](create-run-as-account.md) the Run As account.
## Next steps * [Application Objects and Service Principal Objects](../active-directory/develop/app-objects-and-service-principals.md).
-* [Certificates overview for Azure Cloud Services](../cloud-services/cloud-services-certs-create.md).
\ No newline at end of file
+* [Certificates overview for Azure Cloud Services](../cloud-services/cloud-services-certs-create.md).
+* To create or re-create a Run As account, see [Create a Run As account](create-run-as-account.md).
+* If you no longer need to use a Run As account, see [Delete a Run As account](delete-run-as-account.md).
\ No newline at end of file
automation https://docs.microsoft.com/en-us/azure/automation/quickstart-create-automation-account-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/quickstart-create-automation-account-template.md
@@ -3,17 +3,12 @@ Title: "Quickstart: Create an Automation account - Azure template"
description: This quickstart shows how to create an Automation account by using the Azure Resource Manager template. - Customer intent: I want to create an Automation account by using an Azure Resource Manager template so that I can automate processes with runbooks.- Previously updated : 07/23/2020 Last updated : 01/07/2021 - # Quickstart: Create an Automation account by using ARM template
@@ -40,7 +35,7 @@ This sample template performs the following:
* Adds sample Automation runbooks to the account.

>[!NOTE]
->Creation of the Automation Run As account is not supported when you're using an ARM template. To create a Run As account manually from the portal or with PowerShell, see [Manage Run As accounts](manage-runas-account.md).
+>Creation of the Automation Run As account is not supported when you're using an ARM template. To create a Run As account manually from the portal or with PowerShell, see [Create Run As account](create-run-as-account.md).
After you complete these steps, you need to [configure diagnostic settings](automation-manage-send-joblogs-log-analytics.md) for your Automation account to send runbook job status and job streams to the linked Log Analytics workspace.
automation https://docs.microsoft.com/en-us/azure/automation/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-baseline.md
@@ -4,7 +4,7 @@ description: Azure security baseline for Automation
Previously updated : 06/22/2020 Last updated : 01/07/2021
@@ -334,11 +334,11 @@ However, when using the Hybrid Runbook Worker feature, Azure Security Center pro
**Guidance**: Use Azure Active Directory built-in administrator roles, which can be explicitly assigned and can be queried. Use the Azure AD PowerShell module to perform ad hoc queries to discover accounts that are members of administrative groups. Whenever using Automation Account Run As accounts for your runbooks, ensure these service principals are also tracked in your inventory since they often have elevated permissions. Delete any unused Run As accounts to minimize your exposed attack surface.
-* [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0)
+* [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole)
-* [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0)
+* [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember)
-* [Delete a Run As or Classic Run As account](./manage-runas-account.md#delete-a-run-as-or-classic-run-as-account)
+* [Delete a Run As or Classic Run As account](./delete-run-as-account.md)
* [Manage an Azure Automation Run As account](./manage-runas-account.md)
@@ -362,7 +362,7 @@ You can also enable a Just-In-Time / Just-Enough-Access by using Azure AD Privil
* [Learn more about Privileged Identity Management](../active-directory/privileged-identity-management/index.yml)
-* [Delete a Run As or Classic Run As account](./manage-runas-account.md#delete-a-run-as-or-classic-run-as-account)
+* [Delete a Run As or Classic Run As account](./delete-run-as-account.md)
* [Manage an Azure Automation Run As account](./manage-runas-account.md)
@@ -452,7 +452,7 @@ You can also enable a Just-In-Time / Just-Enough-Access by using Azure AD Privil
* [How to use Azure identity access reviews](../active-directory/governance/access-reviews-overview.md)
-* [Delete a Run As or Classic Run As account](./manage-runas-account.md#delete-a-run-as-or-classic-run-as-account)
+* [Delete a Run As or Classic Run As account](./delete-run-as-account.md)
* [Manage an Azure Automation Run As account](./manage-runas-account.md)
@@ -694,7 +694,7 @@ If you are using Hybrid Runbook Workers backed by Azure virtual machines, then y
* [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md)
-* [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?view=azps-3.0.0)
+* [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription)
* [Understand Azure RBAC](../role-based-access-control/overview.md)
@@ -722,7 +722,7 @@ If you are using Hybrid Runbook Workers backed by Azure virtual machines, then y
* [How to create and use Tags](../azure-resource-manager/management/tag-resources.md)
-* [Delete a Run As or Classic Run As account](./manage-runas-account.md#delete-a-run-as-or-classic-run-as-account)
+* [Delete a Run As or Classic Run As account](./delete-run-as-account.md)
* [Manage an Azure Automation Run As account](./manage-runas-account.md)
@@ -834,7 +834,7 @@ Adaptive application control is an intelligent, automated, end-to-end solution f
**Guidance**: When using the Hybrid Runbook Worker feature, and depending on the type of scripts, you may use operating system specific configurations or third-party resources to limit users' ability to execute scripts within Azure compute resources. You can also leverage Azure Security Center Adaptive Application Controls to ensure that only authorized software executes and all unauthorized software is blocked from executing on Azure Virtual Machines.
-* [How to control PowerShell script execution in Windows Environments](/powershell/module/microsoft.powershell.security/set-executionpolicy?view=powershell-6)
+* [How to control PowerShell script execution in Windows Environments](/powershell/module/microsoft.powershell.security/set-executionpolicy)
* [How to use Azure Security Center Adaptive Application Controls](../security-center/security-center-adaptive-application.md)
@@ -876,7 +876,7 @@ Also, Azure Resource Manager has the ability to export the template in JavaScrip
You may also use recommendations from Azure Security Center as a secure configuration baseline for your Azure resources.
-* [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias?view=azps-3.3.0)
+* [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias)
* [Tutorial: Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md)
@@ -940,7 +940,7 @@ For most scenarios, the Microsoft base VM templates combined with the Azure Auto
* [Information on creating ARM templates](../virtual-machines/windows/ps-template.md)
-* [How to upload a custom VM VHD to Azure](/azure-stack/operator/azure-stack-add-vm-image?view=azs-1910)
+* [How to upload a custom VM VHD to Azure](/azure-stack/operator/azure-stack-add-vm-image)
**Azure Security Center monitoring**: Not applicable
@@ -950,7 +950,7 @@ For most scenarios, the Microsoft base VM templates combined with the Azure Auto
**Guidance**: Use Azure DevOps to securely store and manage your code like custom Azure policies, Azure Resource Manager templates, and Desired State Configuration scripts. To access the resources you manage in Azure DevOps, you can grant or deny permissions to specific users, built-in security groups, or groups defined in Azure Active Directory if integrated with Azure DevOps, or Active Directory if integrated with TFS. Use the source control integration feature to keep your runbooks in your Automation account up to date with scripts in your source control repository.
-* [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?view=azure-devops)
+* [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow)
* [About permissions and groups in Azure DevOps](/azure/devops/organizations/security/about-permissions)
@@ -1132,7 +1132,7 @@ Use the source control integration feature to keep your runbooks in your Automat
* [Introduction to Azure Automation](./automation-intro.md)
-* [How to backup key vault keys in Azure](/powershell/module/azurerm.keyvault/backup-azurekeyvaultkey?view=azurermps-6.13.0)
+* [How to backup key vault keys in Azure](/powershell/module/azurerm.keyvault/backup-azurekeyvaultkey)
* [Use of customer-managed keys for an Automation account](./automation-secure-asset-encryption.md#use-of-customer-managed-keys-for-an-automation-account)
@@ -1158,7 +1158,7 @@ Use the source control integration feature to keep your runbooks in your Automat
* [Introduction to Azure Automation](./automation-intro.md)
-* [How to backup key vault keys in Azure](/powershell/module/azurerm.keyvault/backup-azurekeyvaultkey?view=azurermps-6.13.0)
+* [How to backup key vault keys in Azure](/powershell/module/azurerm.keyvault/backup-azurekeyvaultkey)
* [Use of customer-managed keys for an Automation account](./automation-secure-asset-encryption.md#use-of-customer-managed-keys-for-an-automation-account)
@@ -1174,7 +1174,7 @@ Use the source control integration feature to keep your runbooks in your Automat
* [Deploy resources with ARM templates and Azure portal](../azure-resource-manager/templates/deploy-portal.md)
-* [How to restore key vault keys in Azure](/powershell/module/azurerm.keyvault/restore-azurekeyvaultkey?view=azurermps-6.13.0)
+* [How to restore key vault keys in Azure](/powershell/module/azurerm.keyvault/restore-azurekeyvaultkey)
* [Use of customer-managed keys for an Automation account](./automation-secure-asset-encryption.md#use-of-customer-managed-keys-for-an-automation-account)
@@ -1188,7 +1188,7 @@ Use the source control integration feature to keep your runbooks in your Automat
Use the source control integration feature to keep your runbooks in your Automation account up to date with scripts in your source control repository.
-* [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?view=azure-devops)
+* [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow)
* [About permissions and groups in Azure DevOps](/azure/devops/organizations/security/about-permissions)
automation https://docs.microsoft.com/en-us/azure/automation/shared-resources/credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/credentials.md
@@ -110,7 +110,7 @@ $securePassword = $myCredential.Password
$password = $myCredential.GetNetworkCredential().Password ```
-You can also use a credential to authenticate to Azure with [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount). Under most circumstances, you should use a [Run As account](../manage-runas-account.md) and retrieve the connection with [Get-AzAutomationConnection](../automation-connections.md).
+You can also use a credential to authenticate to Azure with [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount). Under most circumstances, you should use a [Run As account](../automation-security-overview.md#run-as-accounts) and retrieve the connection with [Get-AzAutomationConnection](../automation-connections.md).
```powershell
$myCred = Get-AutomationPSCredential -Name 'MyCredential'
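# The Run As pattern mentioned above typically looks like the following inside
# a runbook. This is a hedged sketch, not part of the original article snippet;
# 'AzureRunAsConnection' is the default connection name created with the account.
$conn = Get-AutomationConnection -Name 'AzureRunAsConnection'
Connect-AzAccount -ServicePrincipal -Tenant $conn.TenantId `
    -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint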
automation https://docs.microsoft.com/en-us/azure/automation/source-control-integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/source-control-integration.md
@@ -6,6 +6,7 @@
Last updated 11/12/2020 + # Use source control integration Source control integration in Azure Automation supports single-direction synchronization from your source control repository. Source control allows you to keep your runbooks in your Automation account up to date with scripts in your GitHub or Azure Repos source control repository. This feature makes it easy to promote code that has been tested in your development environment to your production Automation account.
@@ -23,7 +24,7 @@ Azure Automation supports three types of source control:
## Prerequisites * A source control repository (GitHub or Azure Repos)
-* A [Run As account](manage-runas-account.md)
+* A [Run As account](automation-security-overview.md#run-as-accounts)
* The [latest Azure modules](automation-update-azure-modules.md) in your Automation account, including the `Az.Accounts` module (Az module equivalent of `AzureRM.Profile`)

> [!NOTE]
@@ -203,4 +204,4 @@ Currently, you can't use the Azure portal to update the PAT in source control. W
## Next steps * For integrating source control in Azure Automation, see [Azure Automation: Source Control Integration in Azure Automation](https://azure.microsoft.com/blog/azure-automation-source-control-13/).
-* For integrating runbook source control with Visual Studio Online, see [Azure Automation: Integrating Runbook Source Control using Visual Studio Online](https://azure.microsoft.com/blog/azure-automation-integrating-runbook-source-control-using-visual-studio-online/).
+* For integrating runbook source control with Visual Studio Codespaces, see [Azure Automation: Integrating Runbook Source Control using Visual Studio Codespaces](https://azure.microsoft.com/blog/azure-automation-integrating-runbook-source-control-using-visual-studio-online/).
automation https://docs.microsoft.com/en-us/azure/automation/troubleshoot/hybrid-runbook-worker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/hybrid-runbook-worker.md
@@ -101,7 +101,7 @@ At line:3 char:1
``` #### Cause
-This error occurs when you attempt to use a [Run As account](../manage-runas-account.md) in a runbook that runs on a Hybrid Runbook Worker where the Run As account certificate isn't present. Hybrid Runbook Workers don't have the certificate asset locally by default. The Run As account requires this asset to operate properly.
+This error occurs when you attempt to use a [Run As account](../automation-security-overview.md#run-as-accounts) in a runbook that runs on a Hybrid Runbook Worker where the Run As account certificate isn't present. Hybrid Runbook Workers don't have the certificate asset locally by default. The Run As account requires this asset to operate properly.
#### Resolution
automation https://docs.microsoft.com/en-us/azure/automation/troubleshoot/runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/runbooks.md
@@ -129,7 +129,7 @@ Run Login-AzureRMAccount to login.
### Cause
-This error can occur when you're not using a Run As account or the Run As account has expired. For more information, see [Manage Azure Automation Run As accounts](../manage-runas-account.md).
+This error can occur when you're not using a Run As account or the Run As account has expired. For more information, see [Azure Automation Run As accounts overview](../automation-security-overview.md#run-as-accounts).
This error has two primary causes:
automation https://docs.microsoft.com/en-us/azure/automation/troubleshoot/shared-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/shared-resources.md
@@ -3,7 +3,7 @@ Title: Troubleshoot Azure Automation shared resource issues
description: This article tells how to troubleshoot and resolve issues with Azure Automation shared resources. Previously updated : 03/12/2019 Last updated : 01/27/2021
@@ -126,7 +126,7 @@ You don't have the permissions that you need to create or update the Run As acco
#### Resolution
-To create or update a Run As account, you must have appropriate [permissions](../manage-runas-account.md#permissions) to the various resources used by the Run As account.
+To create or update a Run As account, you must have appropriate [permissions](../automation-security-overview.md#permissions) to the various resources used by the Run As account.
If the problem is because of a lock, verify that the lock can be removed. Then go to the resource that is locked in Azure portal, right-click the lock, and select **Delete**.
@@ -142,7 +142,7 @@ Unable to find an entry point named 'GetPerAdapterInfo' in DLL 'iplpapi.dll'
#### Cause
-This error is most likely caused by an incorrectly configured [Run As account](../manage-runas-account.md).
+This error is most likely caused by an incorrectly configured [Run As account](../automation-security-overview.md).
#### Resolution
@@ -160,5 +160,4 @@ If this article doesn't resolve your issue, try one of the following channels fo
* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).
* Connect with [@AzureSupport](https://twitter.com/azuresupport). This is the official Microsoft Azure account for connecting the Azure community to the right resources: answers, support, and experts.
-* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
-
+* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
\ No newline at end of file
automation https://docs.microsoft.com/en-us/azure/automation/troubleshoot/start-stop-vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/start-stop-vm.md
@@ -104,7 +104,7 @@ Review the following list for potential resolutions:
* **ScheduledStartStop_Parent**
* **SequencedStartStop_Parent**
-* Verify that your [Run As account](../manage-runas-account.md) has proper permissions to the VMs you're trying to start or stop. To learn how to check the permissions on a resource, see [Quickstart: View roles assigned to a user using the Azure portal](../../role-based-access-control/check-access.md). You'll need to provide the application ID for the service principal used by the Run As account. You can retrieve this value by going to your Automation account in the Azure portal. Select **Run as accounts** under **Account Settings**, and select the appropriate Run As account.
+* Verify that your [Run As account](../automation-security-overview.md#run-as-accounts) has proper permissions to the VMs you're trying to start or stop. To learn how to check the permissions on a resource, see [Quickstart: View roles assigned to a user using the Azure portal](../../role-based-access-control/check-access.md). You'll need to provide the application ID for the service principal used by the Run As account. You can retrieve this value by going to your Automation account in the Azure portal. Select **Run as accounts** under **Account Settings**, and select the appropriate Run As account.
* VMs might not be started or stopped if they're being explicitly excluded. Excluded VMs are set in the `External_ExcludeVMNames` variable in the Automation account to which the feature is deployed. The following example shows how you can query that value with PowerShell.
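One way to query that variable with PowerShell (a hedged sketch; the resource group and account names are placeholders, not values from the article):

```powershell
# Read the exclusion-list variable from the Automation account.
$excludeList = Get-AzAutomationVariable `
    -ResourceGroupName '<resource_group_name>' `
    -AutomationAccountName '<automation_account_name>' `
    -Name 'External_ExcludeVMNames'
$excludeList.Value
```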
@@ -196,7 +196,7 @@ This issue can be caused by an improperly configured or expired Run As account.
To verify that your Run As account is properly configured, go to your Automation account in the Azure portal and select **Run as accounts** under **Account Settings**. If a Run As account is improperly configured or expired, the status shows the condition.
-If your Run As account is misconfigured, delete and re-create your Run As account. For more information, see [Manage Azure Automation Run As accounts](../manage-runas-account.md).
+If your Run As account is misconfigured, delete and re-create your Run As account. For more information, see [Azure Automation Run As accounts](../automation-security-overview.md#run-as-accounts).
If the certificate is expired for your Run As account, follow the steps in [Self-signed certificate renewal](../manage-runas-account.md#cert-renewal) to renew the certificate.
automation https://docs.microsoft.com/en-us/azure/automation/tutorial-configure-servers-desired-state https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/tutorial-configure-servers-desired-state.md
@@ -6,6 +6,7 @@
Last updated 08/08/2018 + # Configure machines to a desired state Azure Automation State Configuration allows you to specify configurations for your servers and ensure that those servers are in the specified state over time.
@@ -21,7 +22,7 @@ For this tutorial, we use a simple [DSC configuration](/powershell/scripting/dsc
## Prerequisites -- An Azure Automation account. For instructions on creating an Azure Automation Run As account, see [Azure Run As Account](./manage-runas-account.md).
+- An Azure Automation account. To learn more about an Automation account and its requirements, see [Automation Account authentication overview](./automation-security-overview.md).
- An Azure Resource Manager VM (not classic) running Windows Server 2008 R2 or later. For instructions on creating a VM, see [Create your first Windows virtual machine in the Azure portal](../virtual-machines/windows/quick-create-portal.md).
- Azure PowerShell module version 3.6 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/azurerm/install-azurerm-ps).
@@ -47,7 +48,6 @@ Connect-AzAccount
## Create and upload a configuration to Azure Automation - In a text editor, type the following and save it locally as **TestConfig.ps1**. ```powershell
automation https://docs.microsoft.com/en-us/azure/automation/update-management/enable-from-automation-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/enable-from-automation-account.md
@@ -2,10 +2,12 @@
Title: Enable Azure Automation Update Management from Automation account description: This article tells how to enable Update Management from an Automation account. + Last updated 11/09/2020 + # Enable Update Management from an Automation account This article describes how you can use your Automation account to enable the [Update Management](overview.md) feature for VMs in your environment, including machines or servers registered with [Azure Arc enabled servers](../../azure-arc/servers/overview.md). To enable Azure VMs at scale, you must enable an existing Azure VM using Update Management.
@@ -16,7 +18,7 @@ This article describes how you can use your Automation account to enable the [Up
## Prerequisites * Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [Automation account](../index.yml) to manage machines.
+* [Automation account](../automation-security-overview.md) to manage machines.
* An [Azure virtual machine](../../virtual-machines/windows/quick-create-portal.md), or VM or server registered with Arc enabled servers. Non-Azure VMs or servers need to have the [Log Analytics agent](../../azure-monitor/platform/log-analytics-agent.md) for Windows or Linux installed and reporting to the workspace linked to the Automation account Update Management is enabled in. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative. ## Sign in to Azure
automation https://docs.microsoft.com/en-us/azure/automation/update-management/enable-from-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/enable-from-portal.md
@@ -2,10 +2,12 @@
Title: Enable Azure Automation Update Management from the Azure portal description: This article tells how to enable Update Management from the Azure portal. Previously updated : 04/11/2019-+ Last updated : 01/07/2021+ + # Enable Update Management from the Azure portal This article describes how you can enable the [Update Management](overview.md) feature for VMs by browsing the Azure portal. To enable Azure VMs at scale, you must enable an existing Azure VM using Update Management.
@@ -18,7 +20,7 @@ The number of resource groups that you can use for managing your VMs is limited
## Prerequisites * Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [Automation account](../index.yml) to manage machines.
+* [Automation account](../automation-security-overview.md) to manage machines.
* A [virtual machine](../../virtual-machines/windows/quick-create-portal.md). ## Sign in to Azure
automation https://docs.microsoft.com/en-us/azure/automation/update-management/enable-from-runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/enable-from-runbook.md
@@ -2,10 +2,12 @@
Title: Enable Azure Automation Update Management from runbook description: This article tells how to enable Update Management from a runbook. + Last updated 11/24/2020 + # Enable Update Management from a runbook This article describes how you can use a runbook to enable the [Update Management](overview.md) feature for VMs in your environment. To enable Azure VMs at scale, you must enable an existing VM with Update Management.
automation https://docs.microsoft.com/en-us/azure/automation/update-management/enable-from-template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/enable-from-template.md
@@ -1,11 +1,9 @@
Title: Enable Update Management using Azure Resource Manager template | Microsoft Docs
+ Title: Enable Update Management using Azure Resource Manager template
description: This article tells how to use an Azure Resource Manager template to enable Update Management.-+ -- Last updated 09/18/2020
@@ -24,7 +22,7 @@ The template does not automate enabling Update Management on one or more Azure o
If you already have a Log Analytics workspace and Automation account deployed in a supported region in your subscription, they are not linked. Using this template successfully creates the link and deploys Update Management. >[!NOTE]
->Creation of the Automation Run As account is not supported when you're using an ARM template. To create a Run As account manually from the portal or with PowerShell, see [Manage Run As accounts](../manage-runas-account.md).
+>Creation of the Automation Run As account is not supported when you're using an ARM template. To create a Run As account manually from the portal or with PowerShell, see [Create Run As account](../create-run-as-account.md).
After you complete these steps, you need to [configure diagnostic settings](../automation-manage-send-joblogs-log-analytics.md) for your Automation account to send runbook job status and job streams to the linked Log Analytics workspace.
automation https://docs.microsoft.com/en-us/azure/automation/update-management/enable-from-vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/enable-from-vm.md
@@ -2,6 +2,7 @@
Title: Enable Azure Automation Update Management from an Azure VM description: This article tells how to enable Update Management from an Azure VM. + Last updated 11/04/2020
@@ -17,7 +18,7 @@ This article describes how you can enable the [Update Management](overview.md) f
## Prerequisites * Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [Automation account](../index.yml) to manage machines.
+* [Automation account](../automation-security-overview.md) to manage machines.
* A [virtual machine](../../virtual-machines/windows/quick-create-portal.md). ## Sign in to Azure
availability-zones https://docs.microsoft.com/en-us/azure/availability-zones/az-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-overview.md
@@ -1,11 +1,12 @@
Title: Regions and Availability Zones in Azure description: Learn about regions and Availability Zones in Azure to meet your technical and regulatory requirements.-+ -+ Last updated 01/26/2021-++
availability-zones https://docs.microsoft.com/en-us/azure/availability-zones/az-region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-region.md
@@ -1,11 +1,12 @@
Title: Azure Services that support Availability Zones description: To create highly available and resilient applications in Azure, Availability Zones provide physically separate locations you can use to run your resources.-+ -+ Last updated 01/26/2021-++
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-controls-policy.md new file mode 100644 /dev/null
@@ -0,0 +1,27 @@
+
+ Title: Azure Policy Regulatory Compliance controls for Azure App Configuration
+description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Last updated : 01/27/2021++++++
+# Azure Policy Regulatory Compliance controls for Azure App Configuration
+
+[Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md)
+provides Microsoft created and managed initiative definitions, known as _built-ins_, for the
+**compliance domains** and **security controls** related to different compliance standards. This
+page lists the **compliance domains** and **security controls** for Azure App Configuration. You can
+assign the built-ins for a **security control** individually to help make your Azure resources
+compliant with the specific standard.
+
+[!INCLUDE [azure-policy-compliancecontrols-introwarning](../../includes/policy/standards/intro-warning.md)]
+
+[!INCLUDE [azure-policy-compliancecontrols-appconfig](../../includes/policy/standards/byrp/microsoft.appconfiguration.md)]
+
+## Next steps
+
+- Learn more about [Azure Policy Regulatory Compliance](../governance/policy/concepts/regulatory-compliance.md).
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-event-grid-quickstart-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-event-grid-quickstart-cli.md new file mode 100644 /dev/null
@@ -0,0 +1,133 @@
+
+ Title: 'Quickstart: Route Azure Cache for Redis events to web endpoint with Azure CLI'
+description: Use Azure Event Grid to subscribe to Azure Cache for Redis events, send the events to a Webhook, and handle the events in a web application.
++ Last updated : 1/5/2021++++
+# Quickstart: Route Azure Cache for Redis events to web endpoint with Azure CLI
+
+Azure Event Grid is an eventing service for the cloud. In this quickstart, you'll use the Azure CLI to subscribe to Azure Cache for Redis events, trigger an event, and view the results.
+
+Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this quickstart, you'll send events to a web app that will collect and display the messages. When you complete the steps described in this quickstart, you'll see that the event data has been sent to the web app.
+
+[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)]
+
+[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
+
+If you choose to install and use the CLI locally, this quickstart requires that you're running the latest version of Azure CLI (2.0.70 or later). To find the version, run `az --version`. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+If you aren't using Cloud Shell, you must first sign in using `az login`.
+
+## Create a resource group
+
+Event Grid topics are deployed as individual Azure resources and must be provisioned under an Azure resource group. A resource group is a logical collection into which Azure resources are deployed and managed.
+
+Create a resource group with the [az group create](/cli/azure/group) command.
+
+The following example creates a resource group named `<resource_group_name>` in the *westcentralus* location. Replace `<resource_group_name>` with a unique name for your resource group.
+
+```azurecli-interactive
+az group create --name <resource_group_name> --location westcentralus
+```
+
+## Create a cache instance
+
+```azurecli-interactive
+#!/bin/bash
+
+# Create a Basic C0 (250 MB) Azure Cache for Redis instance
+az redis create --name <cache_name> --resource-group <resource_group_name> --location westcentralus --sku Basic --vm-size C0
+```
++
+## Create a message endpoint
+
+Before subscribing to the topic, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
+
+Replace `<your-site-name>` with a unique name for your web app. The web app name must be unique because it's part of the DNS entry.
+
+```azurecli-interactive
+sitename=<your-site-name>
+
+az group deployment create \
+ --resource-group <resource_group_name> \
+ --template-uri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/master/azuredeploy.json" \
+ --parameters siteName=$sitename hostingPlanName=viewerhost
+```
+
+The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to: `https://<your-site-name>.azurewebsites.net`
+
+You should see the site with no messages currently displayed.
+
+[!INCLUDE [event-grid-register-provider-cli.md](../../includes/event-grid-register-provider-cli.md)]
+
+## Subscribe to your Azure Cache for Redis instance
+
+In this step, you'll subscribe to a topic to tell Event Grid which events you want to track and where to send those events. The following example subscribes to the Azure Cache for Redis instance you created, and passes the URL from your web app as the endpoint for event notification. Replace `<event_subscription_name>` with a name for your event subscription. For `<resource_group_name>` and `<cache_name>`, use the values you created earlier.
+
+The endpoint for your web app must include the suffix `/api/updates/`.
+
+```azurecli-interactive
+cacheId=$(az redis show --name <cache_name> --resource-group <resource_group_name> --subscription <subscription_id> --query id --output tsv)
+endpoint=https://$sitename.azurewebsites.net/api/updates
+
+az eventgrid event-subscription create \
+ --source-resource-id $cacheId \
+ --name <event_subscription_name> \
+ --endpoint $endpoint
+```
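The `/api/updates` suffix requirement can be guarded before calling `az eventgrid event-subscription create`, so a misconfigured endpoint fails fast instead of failing the subscription handshake. A hedged sketch; the `require_updates_suffix` helper is illustrative:

```shell
# Fail fast if the viewer endpoint is missing the required suffix.
require_updates_suffix() {
  case "$1" in
    */api/updates|*/api/updates/) return 0 ;;
    *) echo "endpoint must end with /api/updates" >&2; return 1 ;;
  esac
}

sitename="mysite"
endpoint="https://$sitename.azurewebsites.net/api/updates"
require_updates_suffix "$endpoint" && echo "endpoint ok"
```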
+
+View your web app again, and notice that a subscription validation event has been sent to it. Select the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can verify that it wants to receive event data. The web app includes code to validate the subscription.
+
+ :::image type="content" source="media/cache-event-grid-portal/subscription-event.png" alt-text="Azure Event Grid Viewer.":::
+
+## Trigger an event from Azure Cache for Redis
+
+Now, let's trigger an event to see how Event Grid distributes the message to your endpoint. Let's export the data stored in your Azure Cache for Redis instance. Again, use the values for `{cache_name}` and `{resource_group_name}` you created earlier.
+
+```azurecli-interactive
+az redis export  --ids '/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Cache/Redis/{cache_name}' \
+ --prefix '<prefix_for_exported_files>' \
+ --container '<SAS_url>'  
+```
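The `--ids` argument above follows the standard Azure resource ID pattern for a cache: subscription, resource group, then the `Microsoft.Cache/Redis` provider path. A sketch that assembles it from its parts (the helper name and placeholder values are illustrative):

```shell
# Build an Azure Cache for Redis resource ID from its components.
redis_resource_id() {
  printf '/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Cache/Redis/%s' \
    "$1" "$2" "$3"
}

id=$(redis_resource_id "00000000-0000-0000-0000-000000000000" "myGroup" "myCache")
printf '%s\n' "$id"
```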
+
+You've triggered the event, and Event Grid sent the message to the endpoint you configured when subscribing. View your web app to see the event you just sent.
++
+```json
+[{
+"id": "e1ceb52d-575c-4ce4-8056-115dec723cff",
+ "eventType": "Microsoft.Cache.ExportRDBCompleted",
+ "topic": "/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Cache/Redis/{cache_name}",
+ "data": {
+ "name": "ExportRDBCompleted",
+ "timestamp": "2020-12-10T18:07:54.4937063+00:00",
+ "status": "Succeeded"
+ },
+ "subject": "ExportRDBCompleted",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2020-12-10T18:07:54.4937063+00:00"
+}]
+
+```
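A handler can pull the fields it needs from a payload shaped like the example above. This is a toy sketch using only `sed` (a real handler should use a proper JSON parser; it assumes each `"key": "value"` pair sits on one line):

```shell
# Extract the value of a string field from JSON on stdin.
json_field() {
  sed -n "s/.*\"$1\"[[:space:]]*:[[:space:]]*\"\([^\"]*\)\".*/\1/p" | head -n 1
}

event='{ "eventType": "Microsoft.Cache.ExportRDBCompleted", "subject": "ExportRDBCompleted" }'
printf '%s\n' "$event" | json_field eventType
```

For the example payload, this prints `Microsoft.Cache.ExportRDBCompleted`.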
+
+## Clean up resources
+If you plan to continue working with this Azure Cache for Redis instance and event subscription, do not clean up the resources created in this quickstart. If you do not plan to continue, use the following command to delete the resources you created in this quickstart.
+
+Replace `<resource_group_name>` with the resource group you created above.
+
+```azurecli-interactive
+az group delete --name <resource_group_name>
+```
+
+## Next steps
+
+Now that you know how to create topics and event subscriptions, learn more about Azure Cache for Redis Events and what Event Grid can help you do:
+
+- [Reacting to Azure Cache for Redis events](cache-event-grid.md)
+- [About Event Grid](../event-grid/overview.md)
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-event-grid-quickstart-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-event-grid-quickstart-portal.md new file mode 100644 /dev/null
@@ -0,0 +1,121 @@
+
+ Title: 'Quickstart: Route Azure Cache for Redis events to web endpoint with the Azure portal'
+description: Use Azure Event Grid to subscribe to Azure Cache for Redis events, send the events to a Webhook, and handle the events in a web application
Last updated : 1/5/2021
+# Quickstart: Route Azure Cache for Redis events to web endpoint with the Azure portal
+
+Azure Event Grid is an eventing service for the cloud. In this quickstart, you'll use the Azure portal to create an Azure Cache for Redis instance, subscribe to events for that instance, trigger an event, and view the results. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this quickstart, you'll send events to a web app that will collect and display the messages.
+
+[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)]
+
+When you're finished, you'll see that the event data has been sent to the web app.
+
+:::image type="content" source="media/cache-event-grid-portal/event-grid-scaling.png" alt-text="Azure Event Grid Viewer scaling in JSON format.":::
+
+## Create an Azure Cache for Redis cache instance
+
+[!INCLUDE [redis-cache-create](../../includes/redis-cache-create.md)]
+
+## Create a message endpoint
+
+Before subscribing to the events for the cache instance, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you'll deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
+
+1. Select **Deploy to Azure** in the GitHub README to deploy the solution to your subscription.
+
+:::image type="content" source="media/cache-event-grid-portal/deploy-to-azure.png" alt-text="Deploy to Azure button.":::
+
+2. On the **Custom deployment** page, do the following steps:
+ 1. For **Resource group**, select the resource group that you created when creating the cache instance. Using the same resource group makes cleanup easier: when you finish the tutorial, you can delete everything at once by deleting the resource group.
+ 2. For **Site Name**, enter a name for the web app.
+ 3. For **Hosting plan name**, enter a name for the App Service plan to use for hosting the web app.
+ 4. Select the check box for **I agree to the terms and conditions stated above**.
+ 5. Select **Purchase**.
+
+ | Setting | Suggested value | Description |
+ | ------- | --------------- | ----------- |
+ | **Subscription** | Drop down and select your subscription. | The subscription under which to create this web app. |
+ | **Resource group** | Drop down and select a resource group, or select **Create new** and enter a new resource group name. | By putting all your app resources in one resource group, you can easily manage or delete them together. |
+ | **Site Name** | Enter a name for your web app. | This value cannot be empty. |
+ | **Hosting plan name** | Enter a name for the App Service plan to use for hosting the web app. | This value cannot be empty. |
+
+3. Select **Alerts** (bell icon) in the portal, and then select **Go to resource group**.
+
+ :::image type="content" source="media/cache-event-grid-portal/deployment-notification.png" alt-text="Azure portal deployment notification.":::
+
+4. On the **Resource group** page, in the list of resources, select the web app that you created. You'll also see the App Service plan and the cache instance in this list.
+
+5. On the **App Service** page for your web app, select the URL to navigate to the web site. The URL should be in this format: `https://<your-site-name>.azurewebsites.net`.
+
+6. Confirm that you see the site but no events have been posted to it yet.
+
+ :::image type="content" source="media/cache-event-grid-portal/blank-event-grid-viewer.png" alt-text="Empty Event Grid Viewer site.":::
+
+[!INCLUDE [event-grid-register-provider-portal.md](../../includes/event-grid-register-provider-portal.md)]
+
+## Subscribe to the Azure Cache for Redis instance
+
+In this step, you'll subscribe to a topic to tell Event Grid which events you want to track, and where to send the events.
+
+1. In the portal, navigate to your cache instance that you created earlier.
+2. On the **Azure Cache for Redis** page, select **Events** on the left menu.
+3. Select **Web Hook**. You're sending events to your viewer app by using a webhook as the endpoint.
+
+ :::image type="content" source="media/cache-event-grid-portal/event-grid-web-hook.png" alt-text="Azure portal Events page.":::
+
+4. On the **Create Event Subscription** page, enter the following:
+
+ | Setting | Suggested value | Description |
+ | ------- | --------------- | ----------- |
+ | **Name** | Enter a name for the event subscription. | The value must be between 3 and 64 characters long. It can only contain letters, numbers, and dashes. |
+ | **Event Types** | Drop down and select which event type(s) you want to get pushed to your destination. For this quickstart, we'll be scaling our cache instance. | Patching, scaling, import and export are the available options. |
+ | **Endpoint Type** | Select **Web Hook**. | Event handler to receive your events. |
+ | **Endpoint** | Click **Select an endpoint**, and enter the URL of your web app and add `api/updates` to the home page URL (for example: `https://cache.azurewebsites.net/api/updates`), and then select **Confirm Selection**. | This is the URL of your web app that you created earlier. |
+
+5. Now, on the **Create Event Subscription** page, select **Create** to create the event subscription.
+
+6. View your web app again, and notice that a subscription validation event has been sent to it. Select the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can verify that it wants to receive event data. The web app includes code to validate the subscription.
+
+ :::image type="content" source="media/cache-event-grid-portal/subscription-event.png" alt-text="Azure Event Grid Viewer.":::
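The **Name** constraint listed earlier (3 to 64 characters; letters, numbers, and dashes only) can be validated up front when the subscription is created from a script. A hedged sketch; `valid_subscription_name` is an illustrative helper, not an Azure command:

```shell
# Validate an event subscription name: 3-64 characters,
# letters, digits, and dashes only.
valid_subscription_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9-]{3,64}$'
}

valid_subscription_name "cache-scaling-events" && echo "valid"
```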
+
+## Send an event to your endpoint
+
+Now, let's trigger an event to see how Event Grid distributes the message to your endpoint. We'll be scaling your Azure Cache for Redis instance.
+
+1. In the Azure portal, navigate to your Azure Cache for Redis instance and select **Scale** on the left menu.
+
+1. Select the desired pricing tier from the **Scale** page and click **Select**.
+
+ You can scale to a different pricing tier with the following restrictions:
+
+ * You can't scale from a higher pricing tier to a lower pricing tier.
+ * You can't scale from a **Premium** cache down to a **Standard** or a **Basic** cache.
+ * You can't scale from a **Standard** cache down to a **Basic** cache.
+ * You can scale from a **Basic** cache to a **Standard** cache but you can't change the size at the same time. If you need a different size, you can do a subsequent scaling operation to the desired size.
+ * You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in a subsequent scaling operation.
+ * You can't scale from a larger size down to the **C0 (250 MB)** size.
+
+ While the cache is scaling to the new pricing tier, a **Scaling** status is displayed in the **Azure Cache for Redis** blade. When scaling is complete, the status changes from **Scaling** to **Running**.
+
+1. You've triggered the event, and Event Grid sent the message to the endpoint you configured when subscribing. The message is in the JSON format and it contains an array with one or more events. In the following example, the JSON message contains an array with one event. View your web app and notice that a **ScalingCompleted** event was received.
+
+ :::image type="content" source="media/cache-event-grid-portal/event-grid-scaling.png" alt-text="Azure Event Grid Viewer scaling in JSON format.":::
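The scaling restrictions listed earlier reduce to an ordering rule: Basic < Standard < Premium, no downgrades, and no more than one tier per operation. A hedged sketch encoding just the tier rule (cache sizes are not modeled; `can_scale` is an illustrative helper, not an Azure command):

```shell
# Map a pricing tier to a rank so ordering comparisons work.
tier_rank() {
  case "$1" in
    Basic) echo 0 ;;
    Standard) echo 1 ;;
    Premium) echo 2 ;;
  esac
}

# Succeed only for a single allowed scaling step between tiers.
can_scale() {
  from=$(tier_rank "$1"); to=$(tier_rank "$2")
  [ "$to" -gt "$from" ] || return 1   # no downgrades
  [ $((to - from)) -le 1 ]            # no Basic -> Premium jump
}

can_scale Basic Standard && echo "allowed"
can_scale Basic Premium || echo "denied: go via Standard"
```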
+
+## Clean up resources
+
+If you plan to continue working with this event, don't clean up the resources created in this quickstart. Otherwise, delete the resources you created in this quickstart.
+
+Select the resource group, and select **Delete resource group**.
+
+## Next steps
+
+Now that you know how to create custom topics and event subscriptions, learn more about what Event Grid can help you do:
+
+- [Reacting to Azure Cache for Redis events](cache-event-grid.md)
+- [About Event Grid](../event-grid/overview.md)
+
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-event-grid-quickstart-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-event-grid-quickstart-powershell.md new file mode 100644 /dev/null
@@ -0,0 +1,164 @@
+
+ Title: 'Quickstart: Route Azure Cache for Redis events to web endpoint with PowerShell'
+description: Use Azure Event Grid to subscribe to Azure Cache for Redis events, send the events to a Webhook, and handle the events in a web application.
Last updated : 1/5/2021
+# Quickstart: Route Azure Cache for Redis events to web endpoint with PowerShell
+
+Azure Event Grid is an eventing service for the cloud. In this quickstart, you'll use Azure PowerShell to subscribe to Azure Cache for Redis events, trigger an event, and view the results.
+
+Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this quickstart, you'll send events to a web app that will collect and display the messages. When you complete the steps described in this quickstart, you'll see that the event data has been sent to the web app.
+
+## Setup
+
+This quickstart requires that you're running the latest version of Azure PowerShell. If you need to install or upgrade, see [Install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+
+## Sign in to Azure
+
+Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions to authenticate.
+
+```powershell
+Connect-AzAccount
+```
+
+This example uses **westus2** and stores the selection in a variable for use throughout.
+
+```powershell
+$location = "westus2"
+```
+
+## Create a resource group
+
+Event Grid topics are deployed as individual Azure resources and must be provisioned under an Azure resource group. A resource group is a logical collection into which Azure resources are deployed and managed.
+
+Create a resource group with the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command.
+
+The following example creates a resource group named **gridResourceGroup** in the **westus2** location.
+
+```powershell
+$resourceGroup = "gridResourceGroup"
+New-AzResourceGroup -Name $resourceGroup -Location $location
+```
+
+## Create an Azure Cache for Redis instance
+
+Create a cache instance with the `New-AzRedisCache` cmdlet. The full parameter syntax is:
+
+```powershell
+New-AzRedisCache
+ -ResourceGroupName <String>
+ -Name <String>
+ -Location <String>
+ [-Size <String>]
+ [-Sku <String>]
+ [-RedisConfiguration <Hashtable>]
+ [-EnableNonSslPort <Boolean>]
+ [-TenantSettings <Hashtable>]
+ [-ShardCount <Int32>]
+ [-MinimumTlsVersion <String>]
+ [-SubnetId <String>]
+ [-StaticIP <String>]
+ [-Tag <Hashtable>]
+ [-Zone <String[]>]
+ [-DefaultProfile <IAzureContextContainer>]
+ [-WhatIf]
+ [-Confirm]
+ [<CommonParameters>]
+```
+For more information on creating a cache instance in PowerShell, see the [Azure PowerShell reference](https://docs.microsoft.com/powershell/module/az.rediscache/new-azrediscache?view=azps-5.2.0).
+
+## Create a message endpoint
+
+Before subscribing to the topic, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
+
+Replace `<your-site-name>` with a unique name for your web app. The web app name must be unique because it's part of the DNS entry.
+
+```powershell
+$sitename="<your-site-name>"
+
+New-AzResourceGroupDeployment `
+ -ResourceGroupName $resourceGroup `
+ -TemplateUri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/master/azuredeploy.json" `
+ -siteName $sitename `
+ -hostingPlanName viewerhost
+```
+
+The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to: `https://<your-site-name>.azurewebsites.net`
+
+You should see the site with no messages currently displayed.
+
+:::image type="content" source="media/cache-event-grid-portal/blank-event-grid-viewer.png" alt-text="Empty Event Grid Viewer site.":::
+
+## Subscribe to your Azure Cache for Redis event
+
+In this step, you'll subscribe to a topic to tell Event Grid which events you want to track. The following example subscribes to the cache instance you created, and passes the URL from your web app as the endpoint for event notification. Set `$cacheName` to the name of your cache instance, and replace `<event_subscription_name>` with a name for your event subscription. The endpoint for your web app must include the suffix `/api/updates/`.
+
+```powershell
+$cacheId = (Get-AzRedisCache -ResourceGroupName $resourceGroup -Name $cacheName).Id
+$endpoint="https://$sitename.azurewebsites.net/api/updates"
+
+New-AzEventGridSubscription `
+ -EventSubscriptionName <event_subscription_name> `
+ -Endpoint $endpoint `
+ -ResourceId $cacheId
+```
+
+View your web app again, and notice that a subscription validation event has been sent to it. Select the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can verify that it wants to receive event data. The web app includes code to validate the subscription.
+
+ :::image type="content" source="media/cache-event-grid-portal/subscription-event.png" alt-text="Azure Event Grid Viewer.":::
+
+## Trigger an event from Azure Cache for Redis
+
+Now, let's trigger an event to see how Event Grid distributes the message to your endpoint. Let's import data into your cache instance with the `Import-AzRedisCache` cmdlet. The full parameter syntax is:
+
+```powershell
+Import-AzRedisCache
+ [-ResourceGroupName <String>]
+ -Name <String>
+ -Files <String[]>
+ [-Format <String>]
+ [-Force]
+ [-PassThru]
+ [-DefaultProfile <IAzureContextContainer>]
+ [-WhatIf]
+ [-Confirm]
+ [<CommonParameters>]
+```
+For more information on importing in PowerShell, see the [Azure PowerShell reference](https://docs.microsoft.com/powershell/module/az.rediscache/import-azrediscache?view=azps-5.2.0).
+
+You've triggered the event, and Event Grid sent the message to the endpoint you configured when subscribing. View your web app to see the event you just sent.
+
+```json
+[{
+"id": "e1ceb52d-575c-4ce4-8056-115dec723cff",
+ "eventType": "Microsoft.Cache.ImportRDBCompleted",
+ "topic": "/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Cache/Redis/{cache_name}",
+ "data": {
+ "name": "ImportRDBCompleted",
+ "timestamp": "2020-12-10T18:07:54.4937063+00:00",
+ "status": "Succeeded"
+ },
+ "subject": "ImportRDBCompleted",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2020-12-10T18:07:54.4937063+00:00"
+}]
+
+```
+
+## Clean up resources
+If you plan to continue working with this Azure Cache for Redis instance and event subscription, don't clean up the resources created in this quickstart. If you don't plan to continue, use the following command to delete the resources you created in this quickstart.
+
+```powershell
+Remove-AzResourceGroup -Name $resourceGroup
+```
+
+## Next steps
+
+Now that you know how to create topics and event subscriptions, learn more about Azure Cache for Redis events and what Event Grid can help you do:
+
+- [Reacting to Azure Cache for Redis events](cache-event-grid.md)
+- [About Event Grid](../event-grid/overview.md)
\ No newline at end of file
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/cache-event-grid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-event-grid.md new file mode 100644 /dev/null
@@ -0,0 +1,58 @@
+
+ Title: Azure Cache for Redis Event Grid Overview
+description: Use Azure Event Grid to publish Azure Cache for Redis events.
+Last updated : 12/21/2020
+# Azure Cache for Redis Event Grid Overview
+
+Azure Cache for Redis events, such as patching, scaling, and import/export (RDB) events, are pushed through [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) to subscribers such as Azure Functions, Azure Logic Apps, or even your own HTTP listener. Event Grid provides reliable event delivery to your applications through rich retry policies and dead-lettering.
+
+See the [Azure Cache for Redis events schema](../event-grid/event-schema-azure-cache.md) article to view the full list of the events that Azure Cache for Redis supports.
+
+If you want to try Azure Cache for Redis events, see any of these quickstarts:
+
+|If you want to use this tool: |See this quickstart: |
+|--|-|
+|Azure portal |[Quickstart: Route Azure Cache for Redis events to web endpoint with the Azure portal](cache-event-grid-quickstart-portal.md)|
+|PowerShell |[Quickstart: Route Azure Cache for Redis events to web endpoint with PowerShell](cache-event-grid-quickstart-powershell.md)|
+|Azure CLI |[Quickstart: Route Azure Cache for Redis events to web endpoint with Azure CLI](cache-event-grid-quickstart-cli.md)|
+
+## The event model
+
+Event Grid uses [event subscriptions](../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. This image illustrates the relationship between event publishers, event subscriptions, and event handlers.
+
+:::image type="content" source="media/cache-event-grid/event-grid-model.png" alt-text="Event grid model.":::
+
+First, subscribe an endpoint to an event. Then, when an event is triggered, the Event Grid service will send data about that event to the endpoint.
+
+See the [Azure Cache for Redis events schema](../event-grid/event-schema-azure-cache.md) article to view:
+
+> [!div class="checklist"]
+> * A complete list of Azure Cache for Redis events and how each event is triggered.
+> * An example of the data the Event Grid would send for each of these events.
+> * The purpose of each key value pair that appears in the data.
++
+## Best practices for consuming events
+
+Applications that handle Azure Cache for Redis events should follow a few recommended practices:
+> [!div class="checklist"]
+> * Multiple subscriptions can route events to the same event handler, so don't assume events come from a particular source. Instead, check the topic of the message to make sure it comes from the Azure Cache for Redis instance you expect.
+> * Similarly, check that the `eventType` is one you're prepared to process, and don't assume that all events you receive are the types you expect.
+> * Azure Cache for Redis events guarantee at-least-once delivery to subscribers, which ensures all messages are delivered. However, due to retries or subscription availability, duplicate messages may occasionally occur. To learn more about message delivery and retry, see [Event Grid message delivery and retry](../event-grid/delivery-and-retry.md).
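The first and third practices above can be sketched together: check the event's topic against the resource ID you expect, and drop events whose `id` has already been seen, since at-least-once delivery means duplicates are possible. A hedged sketch with the handler reduced to `echo`; names and values are illustrative:

```shell
# seen_ids accumulates processed event ids; handle() ignores events
# from unexpected topics and duplicate deliveries.
expected_topic="/subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Cache/Redis/cache1"
seen_ids=""

handle() {
  id="$1"; topic="$2"
  [ "$topic" = "$expected_topic" ] || { echo "skip: wrong topic"; return; }
  case " $seen_ids " in
    *" $id "*) echo "skip: duplicate $id"; return ;;
  esac
  seen_ids="$seen_ids $id"
  echo "handled $id"
}

handle e1 "$expected_topic"   # handled e1
handle e1 "$expected_topic"   # skip: duplicate e1
handle e2 "/other/topic"      # skip: wrong topic
```

A real handler would parse `id`, `topic`, and `eventType` out of the JSON payload and use durable storage for deduplication rather than a shell variable.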
++
+## Next steps
+
+Learn more about Event Grid and give Azure Cache for Redis events a try:
+
+- [About Event Grid](../event-grid/overview.md)
+- [Azure Cache for Redis events schema](../event-grid/event-schema-azure-cache.md)
+- [Route Azure Cache for Redis events to web endpoint with Azure CLI](cache-event-grid-quickstart-cli.md)
+- [Route Azure Cache for Redis events to web endpoint with the Azure portal](cache-event-grid-quickstart-portal.md)
+- [Route Azure Cache for Redis events to web endpoint with PowerShell](cache-event-grid-quickstart-powershell.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-best-practices.md
@@ -109,7 +109,7 @@ For C# functions, you can change the type to a strongly-typed array. For exampl
The `host.json` file in the function app allows for configuration of host runtime and trigger behaviors. In addition to batching behaviors, you can manage concurrency for a number of triggers. Often adjusting the values in these options can help each instance scale appropriately for the demands of the invoked functions.
-Settings in the host.json file apply across all functions within the app, within a *single instance* of the function. For example, if you had a function app with two HTTP functions and [`maxConcurrentRequests`](functions-bindings-http-webhook-output.md#hostjson-settings) requests set to 25, a request to either HTTP trigger would count towards the shared 25 concurrent requests. When that function app is scaled to 10 instances, the two functions effectively allow 250 concurrent requests (10 instances * 25 concurrent requests per instance).
+Settings in the host.json file apply across all functions within the app, within a *single instance* of the function. For example, if you had a function app with two HTTP functions and [`maxConcurrentRequests`](functions-bindings-http-webhook-output.md#hostjson-settings) requests set to 25, a request to either HTTP trigger would count towards the shared 25 concurrent requests. When that function app is scaled to 10 instances, the ten functions effectively allow 250 concurrent requests (10 instances * 25 concurrent requests per instance).
Other host configuration options are found in the [host.json configuration article](functions-host-json.md).
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook-trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-http-webhook-trigger.md
@@ -444,7 +444,7 @@ Here's the *function.json* file:
{ "type": "http", "direction": "out",
- "name": "res"
+ "name": "$return"
} ] }
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-dotnet-dependency-injection.md
@@ -5,7 +5,7 @@
Previously updated : 08/15/2020 Last updated : 01/27/2021
@@ -252,6 +252,24 @@ public class HttpTrigger
Refer to [Options pattern in ASP.NET Core](/aspnet/core/fundamentals/configuration/options) for more details regarding working with options.
+## Using ASP.NET Core user secrets
+
+When developing locally, ASP.NET Core provides a [Secret Manager tool](/aspnet/core/security/app-secrets#secret-manager) that allows you to store secret information outside the project root. It makes it less likely that secrets are accidentally committed to source control. Azure Functions Core Tools (version 3.0.3233 or later) automatically reads secrets created by the ASP.NET Core Secret Manager.
+
+To configure a .NET Azure Functions project to use user secrets, run the following command in the project root.
+
+```bash
+dotnet user-secrets init
+```
+
+Then use the `dotnet user-secrets set` command to create or update secrets.
+
+```bash
+dotnet user-secrets set MySecret "my secret value"
+```
+
+To access user secrets values in your function app code, use `IConfiguration` or `IOptions`.
+ ## Customizing configuration sources > [!NOTE]
azure-government https://docs.microsoft.com/en-us/azure/azure-government/documentation-government-impact-level-5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-impact-level-5.md
@@ -274,10 +274,6 @@ You can encrypt disks that support virtual machine scale sets by using Azure Dis
- [Encrypt disks in virtual machine scale sets](../virtual-machine-scale-sets/disk-encryption-powershell.md)
-### [Windows Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/)
-
-Windows Virtual Desktop supports Impact Level 5 workloads in Azure Government with no extra configuration required.
- ## Containers For Containers services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=openshift,app-service-linux,container-registry,service-fabric,container-instances,kubernetes-service&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
azure-government https://docs.microsoft.com/en-us/azure/azure-government/documentation-government-plan-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-plan-security.md
@@ -4,7 +4,7 @@ description: Customer guidance and best practices for securing their workloads.
Previously updated : 11/19/2020 Last updated : 1/27/2021
@@ -18,7 +18,7 @@ The following diagram shows the Azure defense-in-depth model. For example, Micro
:::image type="content" source="./media/azure-government-Defenseindepth.png" alt-text="Azure defense-in-depth model" border="false":::
-This article outlines the foundational principles for securing your services and applications, providing guidance and best practices on how to apply these principles, for example, how customers should make smart use of Azure Government to meet the obligations and responsibilities that are required for a solution that handles information subject to the [International Traffic in Arms Regulations](./documentation-government-overview-itar.md#itar) (ITAR). For additional security recommendations and implementation details to help customers improve the security posture with respect to Azure resources, see the [Azure Security Benchmark](../security/benchmarks/index.yml).
+This article outlines the foundational principles for securing your services and applications. It provides guidance and best practices on how to apply these principles. For example, how customers should make smart use of Azure Government to meet requirements for a solution that handles information subject to the [International Traffic in Arms Regulations](./documentation-government-overview-itar.md#itar) (ITAR). For extra security recommendations and implementation details to help customers improve the security posture with respect to Azure resources, see the [Azure Security Benchmark](../security/benchmarks/index.yml).
The overarching principles for securing customer data are:
@@ -37,7 +37,7 @@ Data encryption provides isolation assurances that are tied directly to encrypti
### Encryption at rest
-Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help customers safeguard their data and meet their compliance needs using both Microsoft-managed encryption keys, as well as customer-managed encryption keys. This process relies on multiple encryption keys, as well as services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage Service Encryption and Azure Disk Encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
+Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help customers safeguard their data and meet their compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage Service Encryption and Azure Disk Encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
### Encryption in transit
@@ -48,7 +48,7 @@ The basic encryption available for connectivity to Azure Government supports Tra
### Best practices for encryption

- IaaS VMs: Use Azure Disk Encryption. Turn on Storage Service Encryption to encrypt the VHD files that are used to back up those disks in Azure Storage. This approach only encrypts newly written data, which means that, if you create a VM and then enable Storage Service Encryption on the storage account that holds the VHD file, only the changes will be encrypted, not the original VHD file.
-- Client-side encryption: This is the most secure method for encrypting your data, because it encrypts it before transit, and encrypts the data at rest. However, it does require that you add code to your applications using storage, which you might not want to do. In those cases, you can use HTTPs for your data in transit, and Storage Service Encryption to encrypt the data at rest. Client-side encryption also involves more load on the client that you have to account for in your scalability plans, especially if you are encrypting and transferring a lot of data.
+- Client-side encryption: Represents the most secure method for encrypting your data, because it encrypts it before transit, and encrypts the data at rest. However, it does require that you add code to your applications using storage, which you might not want to do. In those cases, you can use HTTPS for your data in transit, and Storage Service Encryption to encrypt the data at rest. Client-side encryption also involves more load on the client that you have to account for in your scalability plans, especially if you are encrypting and transferring large amounts of data.
## Managing secrets
@@ -76,24 +76,24 @@ The isolation of the Azure Government environment is achieved through a series o
- Specific credentials and multi-factor authentication for logical access
- Infrastructure for Azure Government is located within the United States
-Within the Azure Government network, internal network system components are isolated from other system components through implementation of separate subnets and access control policies on management interfaces. Azure Government does not directly peer with the public internet or with the Microsoft corporate network. Azure Government directly peers to the commercial Microsoft Azure network which has routing and transport capabilities to the Internet and the Microsoft Corporate network. Azure Government limits its exposed surface area by leveraging additional protections and communications capabilities of our commercial Azure network. In addition, Azure Government ExpressRoute (ER) leverages peering with our customer's networks over non-Internet private circuits to route ER customer "DMZ" networks using specific Border Gateway Protocol (BGP)/AS peering as a trust boundary for application routing and associated policy enforcement.
+Within the Azure Government network, internal network system components are isolated from other system components through implementation of separate subnets and access control policies on management interfaces. Azure Government does not directly peer with the public internet or with the Microsoft corporate network. Azure Government directly peers to the commercial Microsoft Azure network, which has routing and transport capabilities to the Internet and the Microsoft Corporate network. Azure Government limits its exposed surface area by applying extra protections and communications capabilities of our commercial Azure network. In addition, Azure Government ExpressRoute (ER) uses peering with our customer's networks over non-Internet private circuits to route ER customer "DMZ" networks using specific Border Gateway Protocol (BGP)/AS peering as a trust boundary for application routing and associated policy enforcement.
-Azure Government maintains a FedRAMP High P-ATO issued by the FedRAMP Joint Authorization Board (JAB), as well as DoD SRG IL4 and IL5 provisional authorizations.
+Azure Government maintains a FedRAMP High P-ATO issued by the FedRAMP Joint Authorization Board (JAB), and DoD SRG IL4 and IL5 provisional authorizations.
### Tenant isolation

Separation between customers/tenants is an essential security mechanism for the entire Azure Government multi-tenant cloud platform. Azure Government provides baseline per-customer or tenant isolation controls including isolation of Hypervisor, Root OS, and Guest VMs, isolation of Fabric Controllers, packet filtering, and VLAN isolation. For more information, see [compute isolation](./azure-secure-isolation-guidance.md#compute-isolation).
-Customer/tenants can manage their isolation posture to meet individual requirements through network access control and segregation through virtual machines, virtual networks, VLAN isolation, ACLs, load balancers and IP filters. Additionally, customers/tenants can further manage isolation levels for their resources across subscriptions, resource groups, virtual networks, and subnets. The customer/tenant logical isolation controls help prevent one tenant from interfering with the operations of any other customer/tenant.
+Customer/tenants can manage their isolation posture to meet individual requirements through network access control and segregation through virtual machines, virtual networks, VLAN isolation, ACLs, load balancers, and IP filters. Additionally, customers/tenants can further manage isolation levels for their resources across subscriptions, resource groups, virtual networks, and subnets. The customer/tenant logical isolation controls help prevent one tenant from interfering with the operations of any other customer/tenant.
## Screening
-All Azure and Azure Government employees in the United States are subject to Microsoft background checks, as outlined in the table below. Personnel with the ability to access customer data for troubleshooting purposes in Azure Government are additionally subject to the verification of U.S. citizenship, as well as additional screening requirements where appropriate.
+All Azure and Azure Government employees in the United States are subject to Microsoft background checks, as outlined in the table below. Personnel with the ability to access customer data for troubleshooting purposes in Azure Government are additionally subject to the verification of U.S. citizenship and extra screening requirements where appropriate.
-We are now screening all our operators at National Agency Check with Law and Credit (NACLC) as defined in DoD SRG [Section 5.6.2.2](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/index.html#5.6PhysicalFacilitiesandPersonnelRequirements):
+We are now screening all our operators at a Tier 3 Investigation (formerly National Agency Check with Law and Credit, NACLC) as defined in DoD SRG [Section 5.6.2.2](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/index.html#5.6PhysicalFacilitiesandPersonnelRequirements):
> [!NOTE]
-> The minimum background investigation required for CSP personnel having access to Level 4 and 5 information based on a "noncritical-sensitive" (e.g., DoD's ADP-2) is a National Agency Check with Law and Credit (NACLC) (for "noncritical-sensitive" contractors), or a Moderate Risk Background Investigation (MBI) for a "moderate risk" position designation.
+> The minimum background investigation required for CSP personnel having access to Level 4 and 5 information based on a "noncritical-sensitive" (e.g., DoD's ADP-2) is a Tier 3 Investigation (for "noncritical-sensitive" contractors), or a Moderate Risk Background Investigation (MBI) for a "moderate risk" position designation.
>

|Applicable screening and background check|Environment|Frequency|Description|
@@ -102,11 +102,11 @@ We are now screening all our operators at National Agency Check with Law and Cre
|||Every 2 years|- Social Security Number search </br>- Criminal history check (7-yr history) </br>- Office of Foreign Assets Control (OFAC) list </br>- Bureau of Industry and Security (BIS) list </br>- Office of Defense Trade Controls (DDTC) debarred list|
|U.S. citizenship|Azure Gov|Upon employment|- Verification of U.S. citizenship|
|Criminal Justice Information Services (CJIS)|Azure Gov|Upon signed CJIS agreement with State|- Adds fingerprint background check against FBI database </br>- Criminal records check and credit check|
-|National Agency Check with Law and Credit (NACLC)|Azure Gov|Upon signed contract with sponsoring agency|- Detailed background and criminal history investigation (Form SF 86 required)|
+|Tier 3 Investigation|Azure Gov|Upon signed contract with sponsoring agency|- Detailed background and criminal history investigation (Form SF 86 required)|
For Azure operations personnel, the following access principles apply:

-- Duties are clearly defined, with separate responsibilities for requesting, approving and deploying changes.
+- Duties are clearly defined, with separate responsibilities for requesting, approving, and deploying changes.
- Access is through defined interfaces that have specific functionality.
- Access is just-in-time (JIT), and is granted on a per-incident basis or for a specific maintenance event, and for a limited duration.
- Access is rule-based, with defined roles that are only assigned the permissions required for troubleshooting.
@@ -114,5 +114,5 @@ For Azure operations personnel, the following access principles apply:
Screening standards include the validation of US citizenship of all Microsoft support and operational staff before access is granted to Azure Government-hosted systems. Support personnel who need to transfer data use the secure capabilities within Azure Government. Secure data transfer requires a separate set of authentication credentials to gain access.

## Next steps
-For supplemental information and updates please subscribe to the
+For supplemental information and updates, subscribe to the
<a href="https://devblogs.microsoft.com/azuregov/">Microsoft Azure Government Blog. </a>\ No newline at end of file
azure-maps https://docs.microsoft.com/en-us/azure/azure-maps/how-to-use-best-practices-for-routing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-best-practices-for-routing.md
@@ -54,10 +54,10 @@ Here is a comparison to show some capabilities of the Route Directions and Matri
| Azure Maps API | Max number of queries in the request | Avoid areas | Truck and electric vehicle routing | Waypoints and Traveling Salesman optimization | Supporting points |
| :--: | :--: | :--: | :--: | :--: | :--: |
-| Get Route Directions | 1 | | X | X | |
-| Post Route Directions | 1 | X | X | X | X |
-| Post Route Directions Batch | 700 | | X | X | |
-| Post Route Matrix | 700 | | X | | |
+| Get Route Directions | 1 | | ✔ | ✔ | |
+| Post Route Directions | 1 | ✔ | ✔ | ✔ | ✔ |
+| Post Route Directions Batch | 700 | | ✔ | ✔ | |
+| Post Route Matrix | 700 | | ✔ | | |
To learn more about electric vehicle routing capabilities, see our tutorial on how to [route electric vehicles using Azure Notebooks with Python](tutorial-ev-routing.md).
@@ -285,4 +285,4 @@ To learn more, please see:
> [Show route on the map](./map-route.md)

> [!div class="nextstepaction"]
-> [Azure Maps NPM Package](https://www.npmjs.com/package/azure-maps-rest )
\ No newline at end of file
+> [Azure Maps NPM Package](https://www.npmjs.com/package/azure-maps-rest )
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/create-new-resource.md
@@ -32,6 +32,7 @@ Sign in to the [Azure portal](https://portal.azure.com), and create an Applicati
Enter the appropriate values into the required fields, and then select **Review + create**.
+[!div class="mx-imgBorder"]
![Enter values into required fields, and then select "review + create".](./media/create-new-resource/review-create.png)

When your app has been created, a new pane opens. This pane is where you see performance and usage data about your monitored application.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-workspace-resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/create-workspace-resource.md
@@ -29,6 +29,7 @@ Workspace-based Application Insights allows you to take advantage of the latest
Sign in to the [Azure portal](https://portal.azure.com), and create an Application Insights resource:
+[!div class="mx-imgBorder"]
![Workspace-based Application Insights resource](./media/create-workspace-resource/create-workspace-based.png)

If you don't already have an existing Log Analytics Workspace, [consult the Log Analytics workspace creation documentation](../learn/quick-create-workspace.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/ip-addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ip-addresses.md
@@ -23,7 +23,7 @@ You need to open some outgoing ports in your server's firewall to allow the Appl
| Purpose | URL | IP | Ports |
| | | | |
-| Telemetry |dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com |40.114.241.141<br/>104.45.136.42<br/>40.84.189.107<br/>168.63.242.221<br/>52.167.221.184<br/>52.169.64.244<br/>40.85.218.175<br/>104.211.92.54<br/>52.175.198.74<br/>51.140.6.23<br/>40.71.12.231<br/>13.69.65.22<br/>13.78.108.165<br/>13.70.72.233<br/>20.44.8.7<br/>13.86.218.248<br/>40.79.138.41<br/>52.231.18.241<br/>13.75.38.7<br/>102.133.155.50<br/>52.162.110.67<br/>191.233.204.248<br/>13.69.66.140<br/>13.77.52.29<br/>51.107.59.180<br/>40.71.12.235<br/>20.44.8.10<br/>40.71.13.169<br/>13.66.141.156<br/>40.71.13.170<br/>13.69.65.23<br/>20.44.17.0<br/>20.36.114.207 <br/>51.116.155.246 <br/>51.107.155.178 <br/>51.140.212.64 <br/>13.86.218.255 <br/>20.37.74.240 <br/>65.52.250.236 <br/>13.69.229.240 <br/>52.236.186.210<br/>52.167.107.65<br/>40.71.12.237<br/>40.78.229.32<br/>40.78.229.33<br/>51.105.67.161<br/>40.124.64.192<br/>20.44.12.194<br/>20.189.172.0 | 443 |
+| Telemetry |dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com |40.114.241.141<br/>104.45.136.42<br/>40.84.189.107<br/>168.63.242.221<br/>52.167.221.184<br/>52.169.64.244<br/>40.85.218.175<br/>104.211.92.54<br/>52.175.198.74<br/>51.140.6.23<br/>40.71.12.231<br/>13.69.65.22<br/>13.78.108.165<br/>13.70.72.233<br/>20.44.8.7<br/>13.86.218.248<br/>40.79.138.41<br/>52.231.18.241<br/>13.75.38.7<br/>102.133.155.50<br/>52.162.110.67<br/>191.233.204.248<br/>13.69.66.140<br/>13.77.52.29<br/>51.107.59.180<br/>40.71.12.235<br/>20.44.8.10<br/>40.71.13.169<br/>13.66.141.156<br/>40.71.13.170<br/>13.69.65.23<br/>20.44.17.0<br/>20.36.114.207 <br/>51.116.155.246 <br/>51.107.155.178 <br/>51.140.212.64 <br/>13.86.218.255 <br/>20.37.74.240 <br/>65.52.250.236 <br/>13.69.229.240 <br/>52.236.186.210<br/>52.167.107.65<br/>40.71.12.237<br/>40.78.229.32<br/>40.78.229.33<br/>51.105.67.161<br/>40.124.64.192<br/>20.44.12.194<br/>20.189.172.0<br/>13.69.106.208<br/>40.78.253.199<br/>40.78.253.198<br/>40.78.243.19 | 443 |
| Live Metrics Stream | live.applicationinsights.azure.com<br/>rt.applicationinsights.microsoft.com<br/>rt.services.visualstudio.com|23.96.28.38<br/>13.92.40.198<br/>40.112.49.101<br/>40.117.80.207<br/>157.55.177.6<br/>104.44.140.84<br/>104.215.81.124<br/>23.100.122.113| 443 |

## Status Monitor
@@ -241,7 +241,35 @@ Note: *.loganalytics.io domain is owned by the Log Analytics team.
| Purpose | IP | Ports |
| | |
-| Alerting | 13.72.19.232 <br/>13.106.57.181<br/>13.106.54.3<br/>13.106.54.19<br/>13.106.38.142<br/>13.106.38.148<br/>13.106.57.196<br/>13.106.57.197<br/>52.244.68.117<br/>52.244.65.137<br/>52.183.31.0<br/>52.184.145.166<br/>51.4.138.199<br/>51.5.148.86<br/>51.5.149.19 | 443 |
+| Alerting | 13.66.60.119/32<br/>13.66.143.220/30<br/>13.66.202.14/32<br/>13.66.248.225/32<br/>13.66.249.211/32<br/>13.67.10.124/30<br/>13.69.109.132/30<br/>13.71.199.112/30<br/>13.77.53.216/30<br/>13.77.172.102/32<br/>13.77.183.209/32<br/>13.78.109.156/30<br/>13.84.49.247/32<br/>13.84.51.172/32<br/>13.84.52.58/32<br/>13.86.221.220/30<br/>13.106.38.142/32<br/>13.106.38.148/32<br/>13.106.54.3/32<br/>13.106.54.19/32<br/>13.106.57.181/32<br/>13.106.57.196/31<br/>20.38.149.132/30<br/>20.42.64.36/30<br/>20.43.121.124/30<br/>20.44.17.220/30<br/>20.45.123.236/30<br/>20.72.27.152/30<br/>20.150.172.228/30<br/>20.192.238.124/30<br/>20.193.202.4/30<br/>40.68.195.137/32<br/>40.68.201.58/32<br/>40.68.201.65/32<br/>40.68.201.206/32<br/>40.68.201.211/32<br/>40.68.204.18/32<br/>40.115.37.106/32<br/>40.121.219.215/32<br/>40.121.221.62/32<br/>40.121.222.201/32<br/>40.121.223.186/32<br/>51.104.9.100/30<br/>52.183.20.244/32<br/>52.183.31.0/32<br/>52.183.94.59/32<br/>52.184.145.166/32<br/>191.233.50.4/30<br/>191.233.207.64/26<br/>2603:1000:4:402::178/125<br/>2603:1000:104:402::178/125<br/>2603:1010:6:402::178/125<br/>2603:1010:101:402::178/125<br/>2603:1010:304:402::178/125<br/>2603:1010:404:402::178/125<br/>2603:1020:5:402::178/125<br/>2603:1020:206:402::178/125<br/>2603:1020:305:402::178/125<br/>2603:1020:405:402::178/125<br/>2603:1020:605:402::178/125<br/>2603:1020:705:402::178/125<br/>2603:1020:805:402::178/125<br/>2603:1020:905:402::178/125<br/>2603:1020:a04:402::178/125<br/>2603:1020:b04:402::178/125<br/>2603:1020:c04:402::178/125<br/>2603:1020:d04:402::178/125<br/>2603:1020:e04:402::178/125<br/>2603:1020:f04:402::178/125<br/>2603:1020:1004:800::f8/125<br/>2603:1020:1104:400::178/125<br/>2603:1030:f:400::978/125<br/>2603:1030:10:402::178/125<br/>2603:1030:104:402::178/125<br/>2603:1030:107:400::f0/125<br/>2603:1030:210:402::178/125<br/>2603:1030:40b:400::978/125<br/>2603:1030:40c:402::178/125<br/>2603:1030:504:802::f8/125<br/>2603:1030:608:402::178/125<br/>2603:1030:807:402::178/
125<br/>2603:1030:a07:402::8f8/125<br/>2603:1030:b04:402::178/125<br/>2603:1030:c06:400::978/125<br/>2603:1030:f05:402::178/125<br/>2603:1030:1005:402::178/125<br/>2603:1040:5:402::178/125<br/>2603:1040:207:402::178/125<br/>2603:1040:407:402::178/125<br/>2603:1040:606:402::178/125<br/>2603:1040:806:402::178/125<br/>2603:1040:904:402::178/125<br/>2603:1040:a06:402::178/125<br/>2603:1040:b04:402::178/125<br/>2603:1040:c06:402::178/125<br/>2603:1040:d04:800::f8/125<br/>2603:1040:f05:402::178/125<br/>2603:1040:1104:400::178/125<br/>2603:1050:6:402::178/125<br/>2603:1050:403:400::1f8/125<br/> | 443 |
+
+To receive updates about changes to these IP addresses, we recommend you configure a Service Health alert, which monitors for Informational notifications about the Action Groups service.
+
+### Action Groups Service Tag
+Managing changes to Source IP addresses can be quite time-consuming. Using **Service Tags** eliminates the need to update your configuration. A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the IP addresses and automatically updates the service tag as addresses change, eliminating the need to update network security rules for an Action Group.
+
+1. In the Azure portal, under **Azure Services**, search for *Network Security Group*.
+2. Click on **Add** and create a Network Security Group.
+
+ 1. Add the Resource Group Name and then enter *Instance Details*.
+ 1. Click on **Review + Create** and then click *Create*.
+
   :::image type="content" source="../platform/media/action-groups/action-group-create-security-group.png" alt-text="Example on how to create a Network Security Group." border="true":::
+
+3. Go to the Resource Group, and then click on the *Network Security Group* you have created.
+
+ 1. Select *Inbound Security Rules*.
+ 1. Click on **Add**.
+
   :::image type="content" source="../platform/media/action-groups/action-group-add-service-tag.png" alt-text="Example on how to add a service tag." border="true":::
+
+4. A new window will open in the right pane.
+ 1. Select Source: **Service Tag**
+ 1. Source Service Tag: **ActionGroup**
+ 1. Click **Add**.
+
   :::image type="content" source="../platform/media/action-groups/action-group-service-tag.png" alt-text="Example on how to add a service tag." border="true":::
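The portal steps above can also be expressed declaratively. Below is a minimal ARM template fragment (a sketch, not from the article) for an inbound rule that allows the **ActionGroup** service tag; the NSG name, rule name, priority, and API version are placeholder assumptions:

```json
{
  "type": "Microsoft.Network/networkSecurityGroups/securityRules",
  "apiVersion": "2020-06-01",
  "name": "my-nsg/Allow-ActionGroup-Inbound",
  "properties": {
    "priority": 100,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "ActionGroup",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "*",
    "destinationPortRange": "443"
  }
}
```

Because `sourceAddressPrefix` names the service tag rather than literal addresses, the rule keeps working as Microsoft updates the underlying IP ranges.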
+ ## Profiler
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/logs-dedicated-clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/log-query/logs-dedicated-clusters.md
@@ -76,10 +76,12 @@ The following properties must be specified:
After you create your *Cluster* resource, you can edit additional properties such as *sku*, *keyVaultProperties*, or *billingType*. See more details below.
+You can have up to 2 active clusters per subscription per region. If a cluster is deleted, it is still reserved for 14 days. You can have up to 4 reserved clusters per subscription per region (active or recently deleted).
+
> [!WARNING]
> Cluster creation triggers resource allocation and provisioning. This operation can take up to an hour to complete. It is recommended to run it asynchronously.
-The user account that creates the clusters must have the standard Azure resource creation permission: `Microsoft.Resources/deployments/*` and cluster write permission `(Microsoft.OperationalInsights/clusters/write)`.
+The user account that creates the clusters must have the standard Azure resource creation permission `Microsoft.Resources/deployments/*` and the cluster write permission `Microsoft.OperationalInsights/clusters/write`, granted through a role assignment that includes that specific action, `Microsoft.OperationalInsights/*`, or `*/write`.
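The permission requirement above can be sketched as an Azure custom role definition. This is an illustrative fragment; the role name, description, and subscription ID are placeholders:

```json
{
  "Name": "Log Analytics Cluster Creator (example)",
  "IsCustom": true,
  "Description": "Minimal actions for creating dedicated clusters.",
  "Actions": [
    "Microsoft.Resources/deployments/*",
    "Microsoft.OperationalInsights/clusters/write"
  ],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
```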
### Create
@@ -499,7 +501,9 @@ Use the following REST call to delete a cluster:
## Limits and constraints

-- The max number of cluster per region and subscription is 2
+- The max number of active clusters per region and subscription is 2
+
+- The max number of reserved clusters (active or recently deleted) per region and subscription is 4
- The maximum number of workspaces linked to a cluster is 1000
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/action-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/action-groups.md
@@ -3,7 +3,7 @@ Title: Create and manage action groups in the Azure portal
description: Learn how to create and manage action groups in the Azure portal. Previously updated : 07/28/2020 Last updated : 01/28/2021
@@ -323,139 +323,8 @@ Webhooks are processed using the following rules
- The second and third attempts will wait 30 seconds for a response.
- After the 3 attempts to call the webhook have failed, no action group will call the endpoint for 15 minutes.
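The retry rules above can be modeled in a short sketch. This is an illustrative model of the documented behavior, not Azure code; the 10-second first-attempt timeout is an assumption:

```python
from datetime import datetime, timedelta

# Illustrative model of the documented webhook retry rules: the second
# and third attempts wait 30 seconds each (the 10-second first-attempt
# timeout is an assumption, not stated in this excerpt).
ATTEMPT_TIMEOUTS = [10, 30, 30]        # seconds allowed per attempt
BLACKOUT = timedelta(minutes=15)       # applied after 3 failed attempts

def next_allowed_call(failed_attempts: int, last_failure: datetime) -> datetime:
    """Return the earliest time the endpoint may be called again."""
    if failed_attempts >= len(ATTEMPT_TIMEOUTS):
        # After 3 failures, no action group calls the endpoint for 15 minutes.
        return last_failure + BLACKOUT
    return last_failure
```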
-Source IP address ranges:
--
-To receive updates about changes to these IP addresses, we recommend you configure a Service Health alert, which monitors for Informational notifications about the Action Groups service.
-
-You may have a limited number of Webhook actions in an Action Group.
-
-Frequent Updates to Source IP addresses can be quite time consuming in Webhook. Using **Service Tag** for *ActionGroup* helps with minimizing the complexity of frequent updates to IP addresses manually. Source IP addresses range prefixes shared above is auto managed by Microsoft encompassed by **Service Tag**.
-
-#### Service Tag
-A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules for an ActionGroup.
-
-1. In Azure portal under Azure Services search for *Network Security Group*.
-2. Click on **Add** and create a Network Security Group.
-
- 1. Add the Resource Group Name and then enter *Instance Details*.
- 1. Click on **Review + Create** and then click *Create*.
-
- :::image type="content" source="media/action-groups/action-group-create-security-group.png" alt-text="Example on how to create a Network Security Group."border="true":::
-
-3. Go to Resource Group and then click on *Network Security Group* you have created.
-
- 1. Select *Inbound Security Rules*.
- 1. Click on **Add**.
-
- :::image type="content" source="media/action-groups/action-group-add-service-tag.png" alt-text="Example on how to add a service tag."border="true":::
+See [Action Group IP Addresses](../app/ip-addresses.md) for source IP address ranges.
-4. A new window will open in right pane.
- 1. Select Source: **Service Tag**
- 1. Source Service Tag: **ActionGroup**
- 1. Click **Add**.
-
- :::image type="content" source="media/action-groups/action-group-service-tag.png" alt-text="Example on how to add service tag."border="true":::
## Next steps

* Learn more about [SMS alert behavior](./alerts-sms-behavior.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/autoscale-get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/autoscale-get-started.md
@@ -1,6 +1,6 @@
Title: Get started with autoscale in Azure
-description: "Learn how to scale your resource Web App, Cloud Service, Virtual Machine or Virtual Machine Scale set in Azure."
+description: "Learn how to scale your resource Web App, Cloud Service, Virtual Machine or Virtual Machine scale set in Azure."
Last updated 07/07/2017
@@ -8,7 +8,7 @@
# Get started with Autoscale in Azure

This article describes how to set up your Autoscale settings for your resource in the Microsoft Azure portal.
-Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md).
+Azure Monitor autoscale applies only to [Virtual Machine scale sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md).
## Discover the Autoscale settings in your subscription
@@ -109,36 +109,9 @@ You can always return to Autoscale by clicking **Enable autoscale** and then **S
## Route traffic to healthy instances (App Service)
-When you are scaled out to multiple instances, App Service can perform health checks on your instances to route traffic only to the healthy instances. To do so, open the Portal to your App Service, then select **Health check** under **Monitoring**. Select **Enable** and provide a valid URL path on your application, such as `/health` or `/api/health`. Click **Save**.
+<a id="health-check-path"></a>
-To enable the feature with ARM templates, set the `healthcheckpath` property of the `Microsoft.Web/sites` resource to the health check path on your site, for example: `"/api/health/"`. To disable the feature, set the property back to the empty string, `""`.
-
-### Health check path
-
-The path must respond within one minute with a status code between 200 and 299 (inclusive). If the path does not respond within one minute, or returns a status code outside the range, then the instance is considered "unhealthy". App Service does not follow 30x (301, 302, 307, etc.) redirects on the health check path--these status codes are considered **unhealthy**. Health Check integrates with App Service's authentication and authorization features, the system will reach the endpoint even if these security features are enabled. If you are using your own authentication system, the health check path must allow anonymous access. If the site has HTTP**S**-Only enabled, the healthcheck request will be sent via HTTP**S**.
-
-The health check path should check the critical components of your application. For example, if your application depends on a database and a messaging system, the health check endpoint should connect to those components. If the application cannot connect to a critical component, then the path should return a 500-level response code to indicate that the app is unhealthy.
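The removed guidance above (check critical components; return a 500-level code when one fails) can be sketched as a small handler. The function and the dependency checks passed to it are hypothetical:

```python
def health_status(*component_checks) -> int:
    """Return an HTTP status for a health endpoint: 200 when every
    critical component check (database, messaging system, ...) passes,
    500 when any check fails or raises."""
    try:
        healthy = all(check() for check in component_checks)
    except Exception:
        # A failing dependency probe means the app is unhealthy.
        healthy = False
    return 200 if healthy else 500
```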
-
-#### Security
-
-Development teams at large enterprises often need to adhere to security requirements for their exposed APIs. To secure the healthcheck endpoint, you should first use features such as [IP restrictions](../../app-service/app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](../../app-service/app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict access to the application. You can secure the healthcheck endpoint itself by requiring that the `User-Agent` of the incoming request matches `ReadyForRequest/1.0`. The User-Agent cannot be spoofed since the request was already secured by the prior security features.
-
-### Behavior
-
-When the health check path is provided, App Service will ping the path on all instances. If a successful response code is not received after 5 pings, that instance is considered "unhealthy". Unhealthy instance(s) will be excluded from the load balancer rotation if you are scaled out to 2 or more instances and using [basic tier](../../app-service/overview-hosting-plans.md) or higher. You can configure the required number of failed pings with the `WEBSITE_HEALTHCHECK_MAXPINGFAILURES` app setting. This app setting can be set to any integer between 2 and 10. For example, if this is set to `2`, your instances will be removed from the load balancer after two failed pings. Furthermore, when you are scaling up or out, App Service will ping the health check path to ensure that the new instances are ready for requests before being added to the load balancer.
-
-> [!NOTE]
-> Remember that your App Service Plan must be scaled out to 2 or more instances and be **Basic tier or higher** for the load balancer exclusion to occur. If you only have 1 instance, it will not be removed from the load balancer even if it is unhealthy.
-
-Additionally, the health check path is pinged when instances are added or restarted, such as during scale out operations, manual restarts, or deploying code through the SCM site. If the health check fails during these operations, the failing instances will not be added to the load balancer. This prevents these operations from negatively impacting your application's availability.
-
-When using healthcheck, your remaining healthy instances may experience increased load. To avoid overwhelming the remaining instances, no more than half of your instances will be excluded. For example, if an App Service Plan is scaled out to 4 instances and 3 of which are unhealthy, at most 2 will be excluded from the loadbalancer rotation. The other 2 instances (1 healthy and 1 unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded.If you would like to override this behavior, you can set the `WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` app setting to a value between `0` and `100`. Setting this to a higher value means more unhealthy instances will be removed (the default value is 50).
-
-If the health checks fail for all apps on an instance for one hour, the instance will be replaced. At most one instance will be replaced per hour, with a maximum of three instances per day per App Service Plan.
-
-### Monitoring
-
-After providing your application's health check path, you can monitor the health of your site using Azure Monitor. From the **Health check** blade in the portal, select **Metrics** in the top toolbar. This opens a new blade where you can see the site's historical health status and create a new alert rule. For more information on monitoring your sites, [see the guide on Azure Monitor](../../app-service/web-sites-monitor.md).
+When your Azure web app is scaled out to multiple instances, App Service can perform health checks on your instances to route traffic to the healthy instances. To learn more, see [this article on App Service Health check](../../app-service/monitor-instances-health-check.md).
## Moving Autoscale to a different region

This section describes how to move Azure autoscale to another region under the same subscription and resource group. You can use the REST API to move autoscale settings.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-connections-servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-connections-servicenow.md
@@ -118,7 +118,7 @@ Use the following procedure to create a ServiceNow connection.
| | |
| **Connection Name** | Enter a name for the ServiceNow instance that you want to connect with ITSMC. You use this name later in Log Analytics when you configure ITSM work items and view detailed analytics. |
| **Partner Type** | Select **ServiceNow**. |
- | **Server Url** | Enter the URL of the ServiceNow instance that you want to connect to ITSMC. The URL should point to a supported SaaS version with the suffix *.servicenow.com*.|
+ | **Server Url** | Enter the URL of the ServiceNow instance that you want to connect to ITSMC. The URL should point to a supported SaaS version with the suffix *.service-now.com* (for example, `https://XXXXX.service-now.com/`).|
| **Username** | Enter the integration username that you created in the ServiceNow app to support the connection to ITSMC.|
| **Password** | Enter the password associated with this username. **Note**: The username and password are used for generating authentication tokens only. They're not stored anywhere within the ITSMC service. |
| **Client Id** | Enter the client ID that you want to use for OAuth2 authentication, which you generated earlier. For more information on generating a client ID and a secret, see [Set up OAuth](https://wiki.servicenow.com/index.php?title=OAuth_Setup). |
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-dashboard-errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-dashboard-errors.md
@@ -62,6 +62,10 @@ In this section you can find the common errors that presented in the connector s
* **Error**: "Something went wrong. Could not get connection details." This error is presented when the customer defines an ITSM action group.
- **Cause**: Newly created ITSM Connector has yet to finish the initial Sync.
+ **Cause**: This error is displayed when:
+ * A newly created ITSM Connector has not yet finished its initial sync.
+ * The connector was not defined correctly.
- **Resolution**: When a new ITSM connector is created, ITSM Connector starts syncing information from ITSM system, such as work item templates and work items. Sync the ITSM Connector to generate a new refresh token as explained [here](./itsmc-resync-servicenow.md).
+ **Resolution**:
+ * When a new ITSM connector is created, it starts syncing information from the ITSM system, such as work item templates and work items. Sync the ITSM Connector to generate a new refresh token as explained [here](./itsmc-resync-servicenow.md).
+ * Review your connection details in the ITSM connector as explained [here](./itsmc-connections-servicenow.md#create-a-connection) and check that your ITSM connector can successfully [sync](./itsmc-resync-servicenow.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/samples/resource-manager-data-collection-rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/samples/resource-manager-data-collection-rules.md
@@ -76,7 +76,7 @@ The following sample installs the Azure Monitor agent on a Windows Azure virtual
} ```
-## Create association ith Azure Arc
+## Create association with Azure Arc
The following sample installs the Azure Monitor agent on a Windows Azure virtual machine. An association is created between an Azure Arc-enabled server machine and a data collection rule.
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/create-volumes-dual-protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
@@ -13,7 +13,7 @@
na ms.devlang: na Previously updated : 01/22/2020 Last updated : 01/28/2020 # Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files
@@ -46,6 +46,7 @@ Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3
| `Unix` | NFS | NFSv3 mode bits | UNIX | NFS and Windows |
| `Ntfs` | Windows | NTFS ACLs | NTFS | NFS and Windows |

* UNIX users mounting the NTFS security style volume using NFS will be authenticated as Windows user `root` for UNIX `root` and `pcuser` for all other users. Make sure that these user accounts exist in your Active Directory prior to mounting the volume when using NFS.
+* If you have large topologies, and you use the `Unix` security style with a dual-protocol volume or LDAP with extended groups, Azure NetApp Files might not be able to access all servers in your topologies. If this situation occurs, contact your account team for assistance. <!-- NFSAAS-15123 -->
* You don't need a server root CA certificate for creating a dual-protocol volume. It is required only if LDAP over TLS is enabled.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/sql-vulnerability-assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-vulnerability-assessment.md
@@ -52,7 +52,7 @@ The following steps implement the vulnerability assessment:
1. Under the **Security** heading, select **Security center**.
-1. Then click **Select Storage** on the **Vulnerability Assessment** pane to open the Vulnerability Assessment settings pane for either the entire server or managed instance.
+1. Then click **Configure** on the **Vulnerability Assessment** pane to open the Vulnerability Assessment settings pane for either the entire server or managed instance.
> [!NOTE]
> For more information about storing Vulnerability Assessment scans behind firewalls and VNets, see [Store Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets](sql-database-vulnerability-assessment-storage.md).
@@ -225,4 +225,4 @@ To handle Boolean types as true/false, set the baseline result with binary input
- Learn more about [Azure Defender for SQL](azure-defender-for-sql.md).
- Learn more about [data discovery and classification](data-discovery-and-classification-overview.md).
-- Learn about [Storing Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets](sql-database-vulnerability-assessment-storage.md).
\ No newline at end of file
+- Learn about [Storing Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets](sql-database-vulnerability-assessment-storage.md).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/availability-group-manually-configure-prerequisites-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-manually-configure-prerequisites-tutorial.md
@@ -89,9 +89,9 @@ To create the virtual network in the Azure portal:
| **Field** | Value |
| | |
| **Name** |autoHAVNET |
- | **Address space** |10.33.0.0/24 |
+ | **Address space** |10.0.0.0/24 |
| **Subnet name** |Admin |
- | **Subnet address range** |10.33.0.0/29 |
+ | **Subnet address range** |10.0.0.0/29 |
| **Subscription** |Specify the subscription that you intend to use. **Subscription** is blank if you only have one subscription. |
| **Resource group** |Choose **Use existing** and pick the name of the resource group. |
| **Location** |Specify the Azure location. |
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-dpm-introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-dpm-introduction.md
@@ -42,7 +42,7 @@ Supported file types | These file types can be backed up with Azure Backup:<br>
Unsupported file types | <li>Servers on case-sensitive file systems<li> hard links (skipped)<li> reparse points (skipped)<li> encrypted and compressed (skipped)<li> encrypted and sparse (skipped)<li> Compressed stream<li> parse stream
Local storage | Each machine you want to back up must have local free storage that's at least 5% of the size of the data that's being backed up. For example, backing up 100 GB of data requires a minimum of 5 GB of free space in the scratch location.
Vault storage | There's no limit to the amount of data you can back up to an Azure Backup vault, but the size of a data source (for example a virtual machine or database) shouldn't exceed 54,400 GB.
-Azure ExpressRoute | You can back up your data over Azure ExpressRoute with public peering (available for old circuits) and Microsoft peering. Backup over private peering isn't supported.<br/><br/> **With public peering**: Ensure access to the following domains/addresses:<br/><br/>- `http://www.msftncsi.com/ncsi.txt` <br/><br/>- `microsoft.com` <br/><br/>-`.WindowsAzure.com`<br/><br/>-`.microsoftonline.com`<br/><br/>-`.windows.net`<br/><br/> **With Microsoft peering**, select the following services/regions and relevant community values:<br/><br/>- Azure Active Directory (12076:5060)<br/><br/>- Microsoft Azure Region (according to the location of your Recovery Services vault)<br/><br/>- Azure Storage (according to the location of your Recovery Services vault)<br/><br/>For more information, see [ExpressRoute routing requirements](../expressroute/expressroute-routing.md).<br/><br/>**Note**: Public peering is deprecated for new circuits.
+Azure ExpressRoute | You can back up your data over Azure ExpressRoute with public peering (available for old circuits) and Microsoft peering. Backup over private peering isn't supported.<br/><br/> **With public peering**: Ensure access to the following domains/addresses:<br/><br/> URLs:<br> `www.msftncsi.com` <br> .Microsoft.com <br> .WindowsAzure.com <br> .microsoftonline.com <br> .windows.net <br>`www.msftconnecttest.com`<br><br>IP addresses<br> 20.190.128.0/18 <br> 40.126.0.0/18<br> <br/>**With Microsoft peering**, select the following services/regions and relevant community values:<br/><br/>- Azure Active Directory (12076:5060)<br/><br/>- Microsoft Azure Region (according to the location of your Recovery Services vault)<br/><br/>- Azure Storage (according to the location of your Recovery Services vault)<br/><br/>For more information, see [ExpressRoute routing requirements](../expressroute/expressroute-routing.md).<br/><br/>**Note**: Public peering is deprecated for new circuits.
Azure Backup agent | If DPM is running on System Center 2012 SP1, install Rollup 2 or later for DPM SP1. This is required for agent installation.<br/><br/> This article describes how to deploy the latest version of the Azure Backup agent, also known as the Microsoft Azure Recovery Service (MARS) agent. If you have an earlier version deployed, update to the latest version to ensure that backup works as expected.

Before you start, you need an Azure account with the Azure Backup feature enabled. If you don't have an account, you can create a free trial account in just a couple of minutes. Read about [Azure Backup pricing](https://azure.microsoft.com/pricing/details/backup/).
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-mars-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-mars-troubleshoot.md
@@ -37,7 +37,7 @@ We recommend that you check the following before you start troubleshooting Micro
| Cause | Recommended actions |
| | |
| **Vault credentials aren't valid** <br/> <br/> Vault credential files might be corrupt, might have expired, or they might have a different file extension than *.vaultCredentials*. (For example, they might have been downloaded more than 48 hours before the time of registration.)| [Download new credentials](backup-azure-file-folder-backup-faq.md#where-can-i-download-the-vault-credentials-file) from the Recovery Services vault on the Azure portal. Then take these steps, as appropriate: <ul><li> If you've already installed and registered MARS, open the Microsoft Azure Backup Agent MMC console. Then select **Register Server** in the **Actions** pane to complete the registration with the new credentials. <br/> <li> If the new installation fails, try reinstalling with the new credentials.</ul> **Note**: If multiple vault credential files have been downloaded, only the latest file is valid for the next 48 hours. We recommend that you download a new vault credential file. |
-| **Proxy server/firewall is blocking registration** <br/>or <br/>**No internet connectivity** <br/><br/> If your machine or proxy server has limited internet connectivity and you don't ensure access for the necessary URLs, the registration will fail.| Take these steps:<br/> <ul><li> Work with your IT team to ensure the system has internet connectivity.<li> If you don't have a proxy server, ensure the proxy option isn't selected when you register the agent. [Check your proxy settings](#verifying-proxy-settings-for-windows).<li> If you do have a firewall/proxy server, work with your networking team to ensure these URLs and IP addresses have access:<br/> <br> **URLs**<br> `www.msftncsi.com` <br> .Microsoft.com <br> .WindowsAzure.com <br> .microsoftonline.com <br> .windows.net <br>**IP addresses**<br> 20.190.128.0/18 <br> 40.126.0.0/18 <br/></ul></ul>Try registering again after you complete the preceding troubleshooting steps.<br></br> If your connection is via Azure ExpressRoute, make sure the settings are configured as described in [Azure ExpressRoute support](backup-support-matrix-mars-agent.md#azure-expressroute-support).
+| **Proxy server/firewall is blocking registration** <br/>or <br/>**No internet connectivity** <br/><br/> If your machine or proxy server has limited internet connectivity and you don't ensure access for the necessary URLs, the registration will fail.| Take these steps:<br/> <ul><li> Work with your IT team to ensure the system has internet connectivity.<li> If you don't have a proxy server, ensure the proxy option isn't selected when you register the agent. [Check your proxy settings](#verifying-proxy-settings-for-windows).<li> If you do have a firewall/proxy server, work with your networking team to ensure these URLs and IP addresses have access:<br/> <br> **URLs**<br> `www.msftncsi.com` <br> .Microsoft.com <br> .WindowsAzure.com <br> .microsoftonline.com <br> .windows.net <br>`www.msftconnecttest.com`<br><br>**IP addresses**<br> 20.190.128.0/18 <br> 40.126.0.0/18<br> <br/></ul></ul>Try registering again after you complete the preceding troubleshooting steps.<br></br> If your connection is via Azure ExpressRoute, make sure the settings are configured as described in [Azure ExpressRoute support](backup-support-matrix-mars-agent.md#azure-expressroute-support).
| **Antivirus software is blocking registration** | If you have antivirus software installed on the server, add necessary exclusion rules to the antivirus scan for these files and folders: <br/><ul> <li> CBengine.exe <li> CSC.exe<li> The scratch folder. Its default location is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch. <li> The bin folder at C:\Program Files\Microsoft Azure Recovery Services Agent\Bin. |

### Additional recommendations
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-microsoft-azure-backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-microsoft-azure-backup.md
@@ -299,13 +299,18 @@ Once you know the state of the Azure connectivity and of the Azure subscription,
### Recovering from loss of connectivity
-If you have a firewall or a proxy that are preventing access to Azure, you need to allow the following domain addresses in the firewall/proxy profile:
-
-* `http://www.msftncsi.com/ncsi.txt`
-* \*.Microsoft.com
-* \*.WindowsAzure.com
-* \*.microsoftonline.com
-* \*.windows.net
+If your machine has limited internet access, ensure that firewall settings on the machine or proxy allow the following URLs and IP addresses:
+
+* URLs
+ * `www.msftncsi.com`
+ * `*.Microsoft.com`
+ * `*.WindowsAzure.com`
+ * `*.microsoftonline.com`
+ * `*.windows.net`
+ * `www.msftconnecttest.com`
+* IP addresses
+ * 20.190.128.0/18
+ * 40.126.0.0/18
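When auditing proxy or firewall logs against an allowlist like the one above, the wildcard domains and CIDR ranges can be matched with the Python standard library. A quick illustrative sketch; the hostnames and IPs tested are examples, not values from this article:

```python
import fnmatch
import ipaddress

# Patterns and ranges from the allowlist above (lowercased for matching).
ALLOWED_URLS = ["www.msftncsi.com", "*.microsoft.com", "*.windowsazure.com",
                "*.microsoftonline.com", "*.windows.net", "www.msftconnecttest.com"]
ALLOWED_RANGES = [ipaddress.ip_network("20.190.128.0/18"),
                  ipaddress.ip_network("40.126.0.0/18")]

def host_allowed(host):
    """True if the hostname matches one of the allowlisted URL patterns."""
    return any(fnmatch.fnmatch(host.lower(), p) for p in ALLOWED_URLS)

def ip_allowed(ip):
    """True if the IP address falls inside one of the allowlisted CIDR ranges."""
    return any(ipaddress.ip_address(ip) in net for net in ALLOWED_RANGES)

print(host_allowed("myvault.blob.core.windows.net"))  # True
print(ip_allowed("20.190.130.5"))                     # True
```

A hostname or IP that fails both checks would need its own firewall rule or is not covered by this list.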
If you're using ExpressRoute Microsoft peering, select the following services/regions:
backup https://docs.microsoft.com/en-us/azure/backup/backup-mabs-install-azure-stack https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-install-azure-stack.md
@@ -329,13 +329,19 @@ Once you know the state of the Azure connectivity and of the Azure subscription,
### Recovering from loss of connectivity
-If a firewall or a proxy is preventing access to Azure, add the following domain addresses in the firewall/proxy profile allow list:
+If your machine has limited internet access, ensure that firewall settings on the machine or proxy allow the following URLs and IP addresses:
+
+* URLs
+ * `www.msftncsi.com`
+ * `*.Microsoft.com`
+ * `*.WindowsAzure.com`
+ * `*.microsoftonline.com`
+ * `*.windows.net`
+ * `www.msftconnecttest.com`
+* IP addresses
+ * 20.190.128.0/18
+ * 40.126.0.0/18
-- `http://www.msftncsi.com/ncsi.txt`
-- \*.Microsoft.com
-- \*.WindowsAzure.com
-- \*.microsoftonline.com
-- \*.windows.net

Once connectivity to Azure is restored to the Azure Backup Server, the Azure subscription state determines the operations that can be performed. Once the server is **Connected**, use the table in [Network connectivity](backup-mabs-install-azure-stack.md#network-connectivity) to see the available operations.
backup https://docs.microsoft.com/en-us/azure/backup/backup-mabs-protection-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-protection-matrix.md
@@ -60,11 +60,17 @@ You can back up your data over Azure ExpressRoute with public peering (available
With public peering: Ensure access to the following domains/addresses:
-* `http://www.msftncsi.com/ncsi.txt`
-* `microsoft.com`
-* `.WindowsAzure.com`
-* `.microsoftonline.com`
-* `.windows.net`
+* URLs
+ * `www.msftncsi.com`
+ * `*.Microsoft.com`
+ * `*.WindowsAzure.com`
+ * `*.microsoftonline.com`
+ * `*.windows.net`
+ * `www.msftconnecttest.com`
+* IP addresses
+ * 20.190.128.0/18
+ * 40.126.0.0/18
+ With Microsoft peering, select the following services/regions and relevant community values:
backup https://docs.microsoft.com/en-us/azure/backup/backup-sql-server-database-azure-vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-sql-server-database-azure-vms.md
@@ -82,11 +82,11 @@ If you choose to allow access service IPs, refer to the IP ranges in the JSON fi
You can also use the following FQDNs to allow access to the required services from your servers:
-| Service | Domain names to be accessed |
-| -- | |
-| Azure Backup | `*.backup.windowsazure.com` |
-| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` |
-| Azure AD | Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online) |
+| Service | Domain names to be accessed | Ports |
+| -- | -- | -- |
+| Azure Backup | `*.backup.windowsazure.com` | 443 |
+| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` | 443 |
+| Azure AD | Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online) | As applicable |
#### Use an HTTP proxy server to route traffic
backup https://docs.microsoft.com/en-us/azure/backup/backup-support-matrix-mabs-dpm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-mabs-dpm.md
@@ -106,13 +106,18 @@ You can deploy MABS on an Azure Stack VM so that you can manage backup of Azure
### URL access
-The DPM server/MABS needs access to these URLs:
-
-- `http://www.msftncsi.com/ncsi.txt`
-- `*.Microsoft.com`
-- `*.WindowsAzure.com`
-- `*.microsoftonline.com`
-- `*.windows.net`
+The DPM server/MABS server needs access to these URLs and IP addresses:
+
+* URLs
+ * `www.msftncsi.com`
+ * `*.Microsoft.com`
+ * `*.WindowsAzure.com`
+ * `*.microsoftonline.com`
+ * `*.windows.net`
+ * `www.msftconnecttest.com`
+* IP addresses
+ * 20.190.128.0/18
+ * 40.126.0.0/18
### Azure ExpressRoute support
@@ -120,11 +125,16 @@ You can back up your data over Azure ExpressRoute with public peering (available
With public peering: Ensure access to the following domains/addresses:

-- `http://www.msftncsi.com/ncsi.txt`
-- `microsoft.com`
-- `.WindowsAzure.com`
-- `.microsoftonline.com`
-- `.windows.net`
+* URLs
+ * `www.msftncsi.com`
+ * `*.Microsoft.com`
+ * `*.WindowsAzure.com`
+ * `*.microsoftonline.com`
+ * `*.windows.net`
+ * `www.msftconnecttest.com`
+* IP addresses
+ * 20.190.128.0/18
+ * 40.126.0.0/18
With Microsoft peering, select the following services/regions and relevant community values:
backup https://docs.microsoft.com/en-us/azure/backup/backup-support-matrix-mars-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-mars-agent.md
@@ -54,6 +54,7 @@ The MARS agent needs access to these URLs:
- *.WindowsAzure.com
- *.MicrosoftOnline.com
- *.Windows.net
+- `www.msftconnecttest.com`
And to these IP addresses:
@@ -77,11 +78,16 @@ You can back up your data over Azure ExpressRoute with public peering (available
With public peering: Ensure access to the following domains/addresses:

-- `http://www.msftncsi.com/ncsi.txt`
-- `microsoft.com`
-- `.WindowsAzure.com`
-- `.microsoftonline.com`
-- `.windows.net`
+* URLs
+ * `www.msftncsi.com`
+ * `*.Microsoft.com`
+ * `*.WindowsAzure.com`
+ * `*.microsoftonline.com`
+ * `*.windows.net`
+ * `www.msftconnecttest.com`
+* IP addresses
+ * 20.190.128.0/18
+ * 40.126.0.0/18
With Microsoft peering, select the following services/regions and relevant community values:
backup https://docs.microsoft.com/en-us/azure/backup/disk-backup-support-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-support-matrix.md
@@ -17,7 +17,7 @@ You can use [Azure Backup](./backup-overview.md) to protect Azure Disks. This ar
## Supported regions
-Azure Disk Backup is available in preview in the following regions: West Central US, East US2, Korea Central, Korea South, Japan West, UAE North.
+Azure Disk Backup is available in preview in the following regions: West US, West Central US, East US2, Korea Central, Korea South, Japan West, East Asia, UAE North.
More regions will be announced when they become available.
@@ -61,6 +61,8 @@ More regions will be announced when they become available.
- [Private Links](../virtual-machines/disks-enable-private-links-for-import-export-portal.md) support for managed disks allows you to restrict the export and import of managed disks so that it only occurs within your Azure virtual network. Azure Disk Backup supports backup of disks that have private endpoints enabled. This doesn't include the backup data or snapshots to be accessible through the private endpoint.
+- During the preview, you can't disable the backup, so the option **stop backup and retain backup data** is not supported. You can delete a backup instance, which will not only stop the backup but also delete all the backup data.
+
## Next steps

- [Back up Azure Managed Disks](backup-managed-disks.md)
backup https://docs.microsoft.com/en-us/azure/backup/install-mars-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/install-mars-agent.md
@@ -84,6 +84,9 @@ To use public peering, first ensure access to the following domains and addresse
* `.WindowsAzure.com`
* `.microsoftonline.com`
* `.windows.net`
+* IP addresses
+ * 20.190.128.0/18
+ * 40.126.0.0/18
To use Microsoft peering, select the following services, regions, and relevant community values:
backup https://docs.microsoft.com/en-us/azure/backup/microsoft-azure-backup-server-protection-v3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/microsoft-azure-backup-server-protection-v3.md
@@ -81,11 +81,16 @@ You can back up your data over Azure ExpressRoute with public peering (available
With public peering: Ensure access to the following domains/addresses:
-* `http://www.msftncsi.com/ncsi.txt`
-* `microsoft.com`
-* `.WindowsAzure.com`
-* `.microsoftonline.com`
-* `.windows.net`
+* URLs
+ * `www.msftncsi.com`
+ * `*.Microsoft.com`
+ * `*.WindowsAzure.com`
+ * `*.microsoftonline.com`
+ * `*.windows.net`
+ * `www.msftconnecttest.com`
+* IP addresses
+ * 20.190.128.0/18
+ * 40.126.0.0/18
With Microsoft peering, select the following services/regions and relevant community values:
backup https://docs.microsoft.com/en-us/azure/backup/private-endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/private-endpoints.md
@@ -27,7 +27,7 @@ This article will help you understand the process of creating private endpoints
While private endpoints are enabled for the vault, they're used for backup and restore of SQL and SAP HANA workloads in an Azure VM and MARS agent backup only. You can use the vault for backup of other workloads as well (they won't require private endpoints though). In addition to backup of SQL and SAP HANA workloads and backup using the MARS agent, private endpoints are also used to perform file recovery for Azure VM backup. For more information, see the following table:
-| Backup of workloads in Azure VM (SQL, SAP HANA), Backup using MARS Agent | Use of private endpoints is recommended to allow backup and restore without needing to allow-list any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks. |
+| Backup of workloads in Azure VM (SQL, SAP HANA), Backup using MARS Agent | Use of private endpoints is recommended to allow backup and restore without needing to allowlist any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks. In that scenario, ensure that VMs that host SQL databases can reach Azure AD IPs or FQDNs. |
| | |
| **Azure VM backup** | VM backup doesn't require you to allow access to any IPs or FQDNs. So it doesn't require private endpoints for backup and restore of disks. <br><br> However, file recovery from a vault containing private endpoints would be restricted to virtual networks that contain a private endpoint for the vault. <br><br> When using ACL'ed unmanaged disks, ensure the storage account containing the disks allows access to **trusted Microsoft services** if it's ACL'ed. |
| **Azure Files backup** | Azure Files backups are stored in the local storage account. So it doesn't require private endpoints for backup and restore. |
@@ -143,7 +143,7 @@ Once the private endpoints created for the vault in your VNet have been approved
Once the private endpoint is created and approved, no additional changes are required from the client side to use the private endpoint. All communication and data transfer from your secured network to the vault will be performed through the private endpoint. However, if you remove private endpoints for the vault after a server (SQL/SAP HANA) has been registered to it, you'll need to re-register the container with the vault. You don't need to stop protection for them.
-### Backup and restore through MARS Agent
+### Backup and restore through MARS agent
When using the MARS agent to back up your on-premises resources, make sure your on-premises network (containing your resources to be backed up) is peered with the Azure VNet that contains a private endpoint for the vault, so you can use it. You can then continue to install the MARS agent and configure backup as detailed here. You must, however, ensure all communication for backup happens through the peered network only.
@@ -385,7 +385,7 @@ $privateEndpoint = New-AzPrivateEndpoint `
#### Create DNS zones for custom DNS servers
-You need to create three private DNS zones and link them to your virtual network.
+You need to create three private DNS zones and link them to your virtual network. Keep in mind that, unlike the Blob and Queue services, the Backup service's public URLs aren't registered in Azure public DNS to redirect to the Private Link DNS zones.
| **Zone** | **Service** |
| | -- |
batch https://docs.microsoft.com/en-us/azure/batch/batch-quota-limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-quota-limit.md
@@ -2,7 +2,7 @@
Title: Service quotas and limits description: Learn about default Azure Batch quotas, limits, and constraints, and how to request quota increases Previously updated : 12/29/2020 Last updated : 01/28/2021
@@ -26,21 +26,9 @@ Also note that quotas are not guaranteed values. Quotas can vary based on change
## Core quotas
-### Cores quotas in batch service mode
+### Cores quotas in Batch service mode
-The enforcement of dedicated core quotas is being improved, with the changes being made available in stages and completed for all Batch accounts by the end of January 2021.
-
-Core quotas exist for each VM series supported by Batch and are displayed on the **Quotas** page in the portal. VM series quota limits can be updated with a support request, as detailed below.
-
-With the existing mechanism being phased out, quota limits for VM series are not checked, only the total quota limit for the account is enforced. This means that it may be possible to allocate more cores for a VM series than is indicated by the VM series quota, up to the total account quota limit.
-
-The updated mechanism will enforce the VM series quotas, in addition to the total account quota. As part of the transition to the new mechanism, the VM series quota values may be updated to avoid allocation failures - any VM series used in recent months will have its VM series quota updated to match the total account quota. This change will not enable the use of any more capacity than was already available.
-
-It is possible to determine if VM series quota enforcement has been enabled for a Batch account by checking:
-
-* The Batch account [dedicatedCoreQuotaPerVMFamilyEnforced](/rest/api/batchmanagement/batchaccount/get#batchaccount) API property.
-
-* Text on the Batch account **Quotas** page in the portal.
+Core quotas exist for each VM series supported by Batch and are displayed on the **Quotas** page in the portal. VM series quota limits can be updated with a support request, as detailed below. For dedicated nodes, Batch enforces a core quota limit for each VM series as well as a total core quota limit for the entire Batch account. For low priority nodes, Batch enforces only a total core quota for the Batch account without any distinction between different VM series.
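The dual quota check for dedicated nodes described above can be pictured as follows. This is a hypothetical sketch of the rule, not a Batch SDK API; the family names and quota values are illustrative:

```python
def dedicated_allocation_allowed(request_cores, vm_family, usage_by_family,
                                 family_quotas, total_quota):
    """Sketch of the dual check Batch applies for dedicated nodes: a request
    must fit under the VM-series quota *and* the account-wide total quota."""
    family_usage = usage_by_family.get(vm_family, 0)
    total_usage = sum(usage_by_family.values())
    return (family_usage + request_cores <= family_quotas.get(vm_family, 0)
            and total_usage + request_cores <= total_quota)
```

A request can therefore fail the total-account check even when the VM-series quota alone would allow it, and vice versa; for low priority nodes only the total check would apply.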
### Cores quotas in user subscription mode
batch https://docs.microsoft.com/en-us/azure/batch/create-pool-availability-zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/create-pool-availability-zones.md new file mode 100644 /dev/null
@@ -0,0 +1,81 @@
+
+ Title: Create a pool across availability zones
+description: Learn how to create a Batch pool with zonal policy to help protect against failures.
+ Last updated : 01/28/2021++
+# Create an Azure Batch pool across Availability Zones
+
+Azure regions which support [Availability Zones](https://azure.microsoft.com/global-infrastructure/availability-zones/) have a minimum of three separate zones, each with their own independent power source, network, and cooling system. When you create an Azure Batch pool using Virtual Machine Configuration, you can choose to provision your Batch pool across Availability Zones. Creating your pool with this zonal policy helps protect your Batch compute nodes from Azure datacenter-level failures.
+
+For example, you could create your pool with zonal policy in an Azure region which supports three Availability Zones. If an Azure datacenter in one Availability Zone has an infrastructure failure, your Batch pool will still have healthy nodes in the other two Availability Zones, so the pool will remain available for task scheduling.
+
+## Regional support and other requirements
+
+Batch maintains parity with Azure on supporting Availability Zones. To use the zonal option, your pool must be created in a [supported Azure region](../availability-zones/az-region.md).
+
+In order for your Batch pool to be allocated across availability zones, the Azure region in which the pool is created must support the requested VM SKU in more than one zone. You can validate this by calling the [Resource Skus List API](/rest/api/compute/resourceskus/list) and checking the **locationInfo** field of [resourceSku](/rest/api/compute/resourceskus/list#resourcesku). Be sure that more than one zone is supported for the requested VM SKU.
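That validation can be sketched as follows. The response below is a trimmed, hypothetical example of the Resource SKUs List payload (real responses contain many more fields); only the `locationInfo.zones` check matters here:

```python
import json

# Trimmed, hypothetical Resource SKUs List response.
sample_response = json.loads("""
{
  "value": [
    {
      "name": "Standard_D2s_v3",
      "resourceType": "virtualMachines",
      "locationInfo": [{"location": "eastus", "zones": ["1", "2", "3"]}]
    },
    {
      "name": "Standard_NV6",
      "resourceType": "virtualMachines",
      "locationInfo": [{"location": "eastus", "zones": ["2"]}]
    }
  ]
}
""")

def supports_multiple_zones(skus, sku_name, region):
    """Return True if the SKU reports more than one zone for the region."""
    for sku in skus["value"]:
        if sku["name"] != sku_name:
            continue
        for info in sku.get("locationInfo", []):
            if info["location"] == region and len(info.get("zones", [])) > 1:
                return True
    return False

print(supports_multiple_zones(sample_response, "Standard_D2s_v3", "eastus"))  # True
print(supports_multiple_zones(sample_response, "Standard_NV6", "eastus"))    # False
```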
+
+For [user subscription mode Batch accounts](accounts.md#batch-accounts), make sure that the subscription in which you're creating your pool doesn't have a zone offer restriction on the requested VM SKU. To confirm this, call the [Resource Skus List API](/rest/api/compute/resourceskus/list) and check the [ResourceSkuRestrictions](/rest/api/compute/resourceskus/list#resourceskurestrictions). If a zone restriction exists, you can submit a [support ticket](../azure-portal/supportability/sku-series-unavailable.md) to remove the zone restriction.
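A similar sketch applies to the restriction check. The SKU entry below is hypothetical, shaped like the `ResourceSkuRestrictions` element of the Resource SKUs List response:

```python
# Hypothetical SKU entry carrying a zone restriction for this subscription.
sku = {
    "name": "Standard_D2s_v3",
    "restrictions": [
        {
            "type": "Zone",
            "restrictionInfo": {"locations": ["eastus"], "zones": ["3"]},
            "reasonCode": "NotAvailableForSubscription",
        }
    ],
}

def restricted_zones(sku_entry, region):
    """Collect zones this subscription can't use for the SKU in a region."""
    zones = []
    for r in sku_entry.get("restrictions", []):
        info = r.get("restrictionInfo", {})
        if r.get("type") == "Zone" and region in info.get("locations", []):
            zones.extend(info.get("zones", []))
    return zones

print(restricted_zones(sku, "eastus"))   # ['3'] -> a support ticket would be needed
print(restricted_zones(sku, "westus2")) # []
```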
+
+Also note that you can't create a pool with a zonal policy if it has inter-node communication enabled and uses a [VM SKU that supports InfiniBand](../virtual-machines/workloads/hpc/enable-infiniband.md).
+
+## Create a Batch pool across Availability Zones
+
+The following examples show how to create a Batch pool across Availability Zones.
+
+> [!NOTE]
+> When creating your pool with a zonal policy, the Batch service will try to allocate your pool across all Availability Zones in the selected region; you can't specify a particular allocation across the zones.
+
+### Batch Management Client .NET SDK
+
+```csharp
+pool.DeploymentConfiguration.VirtualMachineConfiguration.NodePlacementConfiguration = new NodePlacementConfiguration()
+{
+    Policy = NodePlacementPolicyType.Zonal
+};
+```
+
+### Batch REST API
+
+REST API URL
+
+```
+POST {batchURL}/pools?api-version=2021-01-01.13.0
+client-request-id: 00000000-0000-0000-0000-000000000000
+```
+
+Request body
+
+```
+"pool": {
+ "id": "pool2",
+ "vmSize": "standard_a1",
+ "virtualMachineConfiguration": {
+ "imageReference": {
+ "publisher": "Canonical",
+ "offer": "UbuntuServer",
+ "sku": "16.04-LTS"
+ },
+ "nodePlacementConfiguration": {
+ "policy": "Zonal"
+ },
+ "nodeAgentSKUId": "batch.node.ubuntu 16.04"
+ },
+ "resizeTimeout": "PT15M",
+ "targetDedicatedNodes": 5,
+ "targetLowPriorityNodes": 0,
+ "maxTasksPerNode": 3,
+ "enableAutoScale": false,
+ "enableInterNodeCommunication": false
+}
+```
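The request body above can also be built programmatically. A minimal sketch (field names follow the body shown in this article; no request is actually sent, and `16.04-LTS` is used as the valid UbuntuServer SKU name):

```python
import json

# Pool definition mirroring the REST request body in this article.
pool = {
    "id": "pool2",
    "vmSize": "standard_a1",
    "virtualMachineConfiguration": {
        "imageReference": {
            "publisher": "Canonical",
            "offer": "UbuntuServer",
            "sku": "16.04-LTS",  # valid UbuntuServer SKU name
        },
        "nodePlacementConfiguration": {"policy": "Zonal"},
        "nodeAgentSKUId": "batch.node.ubuntu 16.04",
    },
    "resizeTimeout": "PT15M",
    "targetDedicatedNodes": 5,
    "targetLowPriorityNodes": 0,
    "maxTasksPerNode": 3,
    "enableAutoScale": False,
    "enableInterNodeCommunication": False,
}

# Serializing guarantees the body is valid JSON before it is ever sent.
body = json.dumps({"pool": pool}, indent=2)
print(json.loads(body)["pool"]["virtualMachineConfiguration"]
      ["nodePlacementConfiguration"]["policy"])  # Zonal
```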
+
+## Next steps
+
+- Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.
+- Learn about [creating a pool in a subnet of an Azure virtual network](batch-virtual-network.md).
+- Learn about [creating an Azure Batch pool without public IP addresses](./batch-pool-no-public-ip-address.md).
+
batch https://docs.microsoft.com/en-us/azure/batch/disk-encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/disk-encryption.md
@@ -3,17 +3,22 @@ Title: Create a pool with disk encryption enabled
description: Learn how to use disk encryption configuration to encrypt nodes with a platform-managed key. Previously updated : 10/08/2020 Last updated : 01/27/2021 # Create a pool with disk encryption enabled
-When you create an Azure Batch pool using virtual machine configuration, you can encrypt compute nodes in the pool with a platform-managed key by specifying the disk encryption configuration.
+When you create an Azure Batch pool using [Virtual Machine Configuration](nodes-and-pools.md#virtual-machine-configuration), you can encrypt compute nodes in the pool with a platform-managed key by specifying the disk encryption configuration.
This article explains how to create a Batch pool with disk encryption enabled.
+> [!IMPORTANT]
+> Support for encryption at host using a platform-managed key in Azure Batch is currently in public preview for the East US, West US 2, South Central US, US Gov Virginia, and US Gov Arizona regions.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ ## Why use a pool with disk encryption configuration? With a Batch pool, you can access and store data on the OS and temporary disks of the compute node. Encrypting the server-side disk with a platform-managed key will safeguard this data with low overhead and convenience.
@@ -24,13 +29,11 @@ Batch will apply one of these disk encryption technologies on compute nodes, bas
- [Encryption at host using a platform-managed Key](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data) - [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md)
-> [!IMPORTANT]
-> Support for encryption at host using a platform-managed key in Azure Batch is currently in public preview for the East US, West US 2, South Central US, US Gov Virginia, and US Gov Arizona regions.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- You won't be able to specify which encryption method will be applied to the nodes in your pool. Instead, you provide the target disks you want to encrypt on their nodes, and Batch can choose the appropriate encryption method, ensuring the specified disks are encrypted on the compute node.
+> [!IMPORTANT]
+> If you are creating your pool with a [custom image](batch-sig-images.md), you can enable disk encryption only if using Windows VMs.
+ ## Azure portal When creating a Batch pool in the Azure portal, select either **TemporaryDisk** or **OsAndTemporaryDisk** under **Disk Encryption Configuration**.
@@ -56,11 +59,14 @@ pool.VirtualMachineConfiguration.DiskEncryptionConfiguration = new DiskEncryptio
### Batch REST API REST API URL:+ ``` POST {batchURL}/pools?api-version=2020-03-01.11.0 client-request-id: 00000000-0000-0000-0000-000000000000 ```+ Request body:+ ``` "pool": { "id": "pool2",
@@ -103,4 +109,4 @@ az batch pool create \
## Next steps - Learn more about [server-side encryption of Azure Disk Storage](../virtual-machines/disk-encryption.md).-- For an in-depth overview of Batch, see [Batch service workflow and resources](batch-service-workflow-features.md).\ No newline at end of file
+- For an in-depth overview of Batch, see [Batch service workflow and resources](batch-service-workflow-features.md).
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/schema-change-prediction-runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/schema-change-prediction-runtime.md
@@ -1,6 +1,6 @@
Title: Extend app at runtime - LUIS
-description:
+description: Learn how to extend an already published prediction endpoint to pass new information.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/Concepts/role-based-access-control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/role-based-access-control.md
@@ -1,6 +1,6 @@
Title: Collaborate with others - QnA Maker
-description:
+description: Learn how to collaborate with other authors and editors using Azure role-based access control.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/reference-private-endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/reference-private-endpoint.md new file mode 100644 /dev/null
@@ -0,0 +1,43 @@
+
+ Title: How to set up a Private Endpoint - QnA Maker
+description: Understand Private Endpoint creation available in QnA Maker managed.
+++ Last updated : 01/12/2021++
+# Private Endpoints
+
+Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. QnA Maker now supports creating private endpoints to the Azure Search Service. This functionality is available in QnA Maker managed.
+
+Private endpoints are provided by [Azure Private Link](https://docs.microsoft.com/azure/private-link/private-link-overview) as a separate service. For more information about costs, see the [pricing page](https://azure.microsoft.com/pricing/details/private-link/).
+
+## Prerequisites
+> [!div class="checklist"]
+> * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+> * A QnA Maker [managed resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) created in the Azure portal. Remember the Azure Active Directory ID, subscription, and QnA resource name that you selected when you created the resource.
+
+## Steps to enable private endpoint
+1. Assign the *Contributor* role to the QnA Maker managed service in the Azure Search Service instance. This operation requires *Owner* access to the subscription. Go to the **Identity** tab in the service resource to get the identity.
+![Managed service Identity](../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-identity.png)
+
+2. Add the above identity as *Contributor* by going to the Azure Search Service IAM tab.
+![Managed service IAM](../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-access-control.png)
+
+3. Select *Add role assignments*, add the identity, and then select *Save*.
+![Managed role assignment](../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-role-assignment.png)
+
+4. Now, go to the *Networking* tab in the Azure Search Service instance and switch Endpoint connectivity data from *Public* to *Private*. This operation is a long-running process and can take up to 30 minutes to complete.
+![Managed Azure search networking](../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking.png)
+
+5. Go to the *Networking* tab of the QnA Maker managed service, and under *Allow access from*, select the *Selected Networks and private endpoints* option and then select *Save*.
+![Managed QnA Maker networking](../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-2.png)
+
+This establishes a private endpoint connection between the QnA Maker service and the Azure Cognitive Search service instance. You can verify the private endpoint connection on the *Networking* tab of the Azure Search Service instance. Once the whole operation is complete, you're ready to use your QnA Maker service.
+![Managed Networking Service](../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-3.png)
++
+## Support details
+ * We don't support changing the Azure Search Service once you enable private access to your QnA Maker service. If you change the Azure Search Service via the 'Configuration' tab after you have enabled private access, the QnA Maker service will become unusable.
+ * After establishing a Private Endpoint Connection, if you switch the Azure Search Service networking to 'Public', you won't be able to use the QnA Maker service. The Azure Search Service networking needs to be 'Private' for the Private Endpoint Connection to work.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/get-speech-sdk-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/get-speech-sdk-java.md
@@ -8,7 +8,7 @@
:::row::: :::column span="3":::
- The Java SDK for Android is packaged as an <a href="https://developer.android.com/studio/projects/android-library" target="_blank">AAR (Android Library) <span class="docon docon-navigate-external x-hidden-focus"></span></a>, which includes the necessary libraries and required Android permissions. It's hosted in a Maven repository at `https://csspeechstorage.blob.core.windows.net/maven/` as package `com.microsoft.cognitiveservices.speech:client-sdk:1.14.0`.
+ The Java SDK for Android is packaged as an <a href="https://developer.android.com/studio/projects/android-library" target="_blank">AAR (Android Library) <span class="docon docon-navigate-external x-hidden-focus"></span></a>, which includes the necessary libraries and required Android permissions. It's hosted in a Maven repository at `https://csspeechstorage.blob.core.windows.net/maven/` as package `com.microsoft.cognitiveservices.speech:client-sdk:1.15.0`.
:::column-end::: :::column::: <br>
@@ -27,7 +27,7 @@ To consume the package from your Android Studio project, make the following chan
2. In the module-level *build.gradle* file, add the following to the `dependencies` section: ```gradle
- implementation 'com.microsoft.cognitiveservices.speech:client-sdk:1.14.0'
+ implementation 'com.microsoft.cognitiveservices.speech:client-sdk:1.15.0'
``` The Java SDK is also part of the [Speech Devices SDK](../speech-devices-sdk.md).
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/get-speech-sdk-macos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/get-speech-sdk-macos.md
@@ -38,7 +38,7 @@ platform :ios, '9.3'
use_frameworks! target 'MyApp' do
- pod 'MicrosoftCognitiveServicesSpeech', '~> 1.14.0'
+ pod 'MicrosoftCognitiveServicesSpeech', '~> 1.15.0'
end ```
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/how-to/speech-to-text-basics/speech-to-text-basics-go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/how-to/speech-to-text-basics/speech-to-text-basics-go.md
@@ -108,7 +108,7 @@ go build
go run quickstart ```
-See the reference docs for detailed information on the [`SpeechConfig`](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go@v1.14.0/speech#SpeechConfig) and [`SpeechRecognizer`](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go@v1.14.0/speech#SpeechRecognizer) classes.
+See the reference docs for detailed information on the [`SpeechConfig`](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go@v1.15.0/speech#SpeechConfig) and [`SpeechRecognizer`](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go@v1.15.0/speech#SpeechRecognizer) classes.
## Speech-to-text from audio file
@@ -188,4 +188,4 @@ go build
go run quickstart ```
-See the reference docs for detailed information on the [`SpeechConfig`](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go@v1.14.0/speech#SpeechConfig) and [`SpeechRecognizer`](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go@v1.14.0/speech#SpeechRecognizer) classes.
\ No newline at end of file
+See the reference docs for detailed information on the [`SpeechConfig`](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go@v1.15.0/speech#SpeechConfig) and [`SpeechRecognizer`](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go@v1.15.0/speech#SpeechRecognizer) classes.
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/from-microphone/go/go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/from-microphone/go/go.md
@@ -20,7 +20,7 @@ Before you get started:
Update the go.mod file with the latest SDK version by adding this line ```sh require (
- github.com/Microsoft/cognitive-services-speech-sdk-go v1.14.0
+ github.com/Microsoft/cognitive-services-speech-sdk-go v1.15.0
) ```
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/quickstarts/voice-assistants/go/go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/quickstarts/voice-assistants/go/go.md
@@ -24,7 +24,7 @@ Before you get started:
Update the go.mod file with the latest SDK version by adding this line ```sh require (
- github.com/Microsoft/cognitive-services-speech-sdk-go v1.14.0
+ github.com/Microsoft/cognitive-services-speech-sdk-go v1.15.0
) ```
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/speech-devices-sdk-android-quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/speech-devices-sdk-android-quickstart.md
@@ -91,7 +91,7 @@ To validate your development kit setup, build and install the sample application
Update the **build.gradle(Module:app)** by adding this line to the dependencies section. ```xml
- implementation'com.microsoft.cognitiveservices.speech:client-sdk:1.14.0'
+ implementation'com.microsoft.cognitiveservices.speech:client-sdk:1.15.0'
``` 1. Add your speech subscription key to the source code. If you want to try intent recognition, also add your [Language Understanding service](https://azure.microsoft.com/services/cognitive-services/language-understanding-intelligent-service/) subscription key and application ID.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/speech-devices-sdk-linux-quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/speech-devices-sdk-linux-quickstart.md
@@ -95,7 +95,7 @@ If you plan to use the intents you'll need a [Language Understanding Service (LU
<dependency> <groupId>com.microsoft.cognitiveservices.speech</groupId> <artifactId>client-sdk</artifactId>
- <version>1.14.0</version>
+ <version>1.15.0</version>
</dependency> </dependencies> ```
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/speech-devices-sdk-windows-quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/speech-devices-sdk-windows-quickstart.md
@@ -69,7 +69,7 @@ If you plan to use the intents you'll need a [Language Understanding Service (LU
<dependency> <groupId>com.microsoft.cognitiveservices.speech</groupId> <artifactId>client-sdk</artifactId>
- <version>1.14.0</version>
+ <version>1.15.0</version>
</dependency> </dependencies> ```
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
@@ -8,13 +8,64 @@
Previously updated : 08/17/2020 Last updated : 01/27/2021 # Speech Service release notes
+## Speech SDK 1.15.0: 2021-January release
+
+**Note**: The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
+
+**Improvements**
+- We have started a multi-release effort to reduce the Speech SDK's memory usage and disk footprint. As a first step, we made significant file size reductions in shared libraries on most platforms. Compared to the 1.14 release:
+  - 64-bit UWP-compatible Windows libraries are about 30% smaller.
+  - 32-bit Windows libraries are not yet seeing size improvements.
+  - Linux libraries are 20-25% smaller.
+  - Android libraries are 3-5% smaller.
+
+**New features**
+- **All**: Added 48kHz format for custom TTS voices, improving the audio quality of custom voices whose native output sample rates are higher than 24kHz.
+- **All**: Added support for setting custom voice via `EndpointId` ([C++](https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig#setendpointid), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.endpointid?view=azure-dotnet#Microsoft_CognitiveServices_Speech_SpeechConfig_EndpointId), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setendpointid?view=azure-java-stable#com_microsoft_cognitiveservices_speech_SpeechConfig_setEndpointId_String_), [JavaScript](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig?view=azure-node-latest#endpointId), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechconfiguration#endpointid), [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig?view=azure-python#endpoint-id)). Before this change, custom voice users needed to set the endpoint URL via the `FromEndpoint` method. Now customers can use the `FromSubscription` method just like public voices, and then provide the deployment id by setting `EndpointId`. This simplifies setting up custom voices.
+- **C++/C#/Java/Objective-C/Python**: `IntentRecognizer` now supports configuring the JSON result containing all intents and not only the top scoring intent via `LanguageUnderstandingModel FromEndpoint` method by using `verbose=true` uri parameter. This addresses [GitHub issue #880](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/880). See updated documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/quickstarts/intent-recognition/#add-a-languageunderstandingmodel-and-intents).
+- **C++/C#/Java**: `DialogServiceConnector` ([C++](https://docs.microsoft.com/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector?view=azure-java-stable)) now has a `StopListeningAsync()` method to accompany `ListenOnceAsync()`. This will immediately stop audio capture and gracefully wait for a result, making it perfect for use with "stop now" button-press scenarios.
+- **C++/C#/Java/JavaScript**: `DialogServiceConnector` ([C++](https://docs.microsoft.com/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector?view=azure-java-stable), [JavaScript](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/dialogserviceconnector?view=azure-node-latest)) now has a new `TurnStatusReceived` event handler. These optional events correspond to every [`ITurnContext`](https://docs.microsoft.com/dotnet/api/microsoft.bot.builder.iturncontext?view=botbuilder-dotnet-stable) resolution on the Bot and will report turn execution failures when they happen, e.g. as a result of an unhandled exception, timeout, or network drop between Direct Line Speech and the bot. `TurnStatusReceived` makes it easier to respond to failure conditions. For example, if a bot takes too long on a backend database query (e.g. looking up a product), `TurnStatusReceived` allows the client to know to reprompt with "sorry, I didn't quite get that, could you please try again" or something similar.
+- **C++/C#**: The [Speech SDK nuget package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) now supports Windows ARM/ARM64 desktop native binaries (UWP was already supported) to make the Speech SDK more useful on more machine types.
+- **Java**: [`DialogServiceConnector`](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector?view=azure-java-stable) now has a `setSpeechActivityTemplate()` method that was unintentionally excluded from the language previously. This is equivalent to setting the `Conversation_Speech_Activity_Template` property and will request that all future Bot Framework activities originated by the Direct Line Speech service merge the provided content into their JSON payloads.
+- **Java**: The [`Connection`](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.connection?view=azure-java-stable) class now has a `MessageReceived` event, similar to other programming languages (C++, C#). This event provides low-level access to incoming data from the service and can be useful for diagnostics and debugging.
+- **JavaScript**: [`BotFrameworkConfig`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/botframeworkconfig) now has `fromHost()` and `fromEndpoint()` factory methods that simplify the use of custom service locations versus manually setting properties. We also standardized optional specification of `botId` to use a non-default bot across the configuration factories.
+- **JavaScript**: Added string control property for websocket compression. For performance reasons we disabled websocket compression by default. This can be reenabled for low bandwidth scenarios. More details [here](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/propertyid). This addresses [GitHub issue #242](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/242).
+- **JavaScript**: Added support for pronunciation assessment to enable evaluation of speech pronunciation. See the quickstart [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-pronunciation-assessment?pivots=programming-language-javascript).
+
+**Bug fixes**
+- **All** (except JavaScript): Fixed a regression in version 1.14, in which too much memory was allocated by the recognizer.
+- **C++**: Fixed a garbage collection issue with `DialogServiceConnector`, addressing [GitHub issue #794](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/794).
+- **C#**: Fixed an issue with thread shutdown that caused objects to block for about a second when disposed.
+- **C++/C#/Java**: Fixed an exception preventing an application from setting speech authorization token or activity template more than once on a `DialogServiceConnector`.
+- **C++/C#/Java**: Fixed a recognizer crash due to a race condition in teardown.
+- **JavaScript**: [`DialogServiceConnector`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/dialogserviceconnector) did not previously honor the optional `botId` parameter specified in `BotFrameworkConfig`'s factories. This made it necessary to set the `botId` query string parameter manually to use a non-default bot. The bug has been corrected and `botId` values provided to `BotFrameworkConfig`'s factories will be honored and used, including the new `fromHost()` and `fromEndpoint()` additions. This also applies to the `applicationId` parameter for `CustomCommandsConfig`.
+- **JavaScript**: Fixed [GitHub issue #881](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/881), allowing recognizer object re-usage.
+- **JavaScript**: Fixed an issue where the SDK was sending `speech.config` multiple times in one TTS session, wasting bandwidth.
+- **JavaScript**: Simplified error handling on microphone authorization, allowing a more descriptive message to bubble up when the user has not allowed microphone input in their browser.
+- **JavaScript**: Fixed [GitHub issue #249](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/249) where type errors in `ConversationTranslator` and `ConversationTranscriber` caused a compilation error for TypeScript users.
+- **Objective-C**: Fixed an issue where GStreamer build failed for iOS on Xcode 11.4, addressing [GitHub issue #911](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/911).
+- **Python**: Fixed [GitHub issue #870](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/870), removing "DeprecationWarning: the imp module is deprecated in favour of importlib".
+
+**Samples**
+- [From-file sample for JavaScript browser](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/browser/from-file/https://docsupdatetracker.net/index.html) now uses files for speech recognition. This addresses [GitHub issue #884](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/884).
+
+## Speech CLI (also known as SPX): 2021-January release
+
+**New features**
+- Speech CLI is now available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.CLI/) and can be installed via the .NET CLI as a .NET global tool you can call from the shell/command line.
+- The [Custom Speech DevOps Template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template) has been updated to use Speech CLI for its Custom Speech workflows.
+
+**COVID-19 abridged testing**:
+As the ongoing pandemic continues to require our engineers to work from home, pre-pandemic manual verification scripts have been significantly reduced. We test on fewer devices with fewer configurations, and the likelihood of environment-specific bugs slipping through may be increased. We still rigorously validate with a large set of automation. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
+Stay healthy!
+ ## Text-to-speech 2020-December release **New neural voices in GA and preview**
@@ -70,7 +121,7 @@ Visit the [Audio Content Creation tool](https://speech.microsoft.com/audioconten
- Improved word-level pronunciation accuracy in `pl-PL` (error rate reduction: 51%) and `fi-FI` (error rate reduction: 58%) - Improved `ja-JP` single word reading for the dictionary scenario. Reduced pronunciation error by 80%. - `zh-CN-XiaoxiaoNeural`: Improved sentiment/CustomerService/Newscast/Cheerful/Angry style voice quality.-- `zh-CN`: Improved Erhua pronunciation and light tone and refined space prosody, which greatly improves the intelligibility.
+- `zh-CN`: Improved Erhua pronunciation and light tone and refined space prosody, which greatly improves intelligibility.
## Speech SDK 1.14.0: 2020-October release
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-and-machine-learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-and-machine-learning.md
@@ -6,7 +6,7 @@
Previously updated : 08/22/2019 Last updated : 08/22/2019 # Cognitive Services and machine learning
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/personalizer/concept-apprentice-mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concept-apprentice-mode.md
@@ -1,6 +1,6 @@
Title: Apprentice mode - Personalizer
-description:
+description: Learn how to use apprentice mode to gain confidence in a model without changing any code.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/quickstarts/client-libraries-rest-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/quickstarts/client-libraries-rest-api.md
@@ -12,7 +12,7 @@ Last updated 12/02/2020
keywords: text mining, sentiment analysis, text analytics
-zone_pivot_groups: programming-languages-text-analytics
+zone_pivot_groups: programming-languages-text-analytics
# Quickstart: Use the Text Analytics client library and REST API
connectors https://docs.microsoft.com/en-us/azure/connectors/connectors-create-api-office365-outlook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-office365-outlook.md
@@ -89,17 +89,19 @@ An [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an opera
## Connect using other accounts
-If you try connecting to Outlook by using a different account than the one currently signed in to Azure, you might get [single sign-on (SSO)](../active-directory/manage-apps/what-is-single-sign-on.md) errors. This problem happens when you sign in to the Azure portal with one account, but use a different account to create the connection. The Logic App Designer expects to use the account that's signed in to Azure. To resolve this problem, you have these options:
+If you try connecting to Outlook by using a different account than the one currently signed in to Azure, you might get [single sign-on (SSO)](../active-directory/manage-apps/what-is-single-sign-on.md) errors. This problem happens when you sign in to the Azure portal with one account, but use a different account to create the connection. The designer expects that you use the account that's signed in to the Azure portal. To resolve this problem, you have these options:
-* Set up the other account as a **Contributor** to your logic app's resource group.
+* Set up the other account with the **Contributor** role in your logic app's resource group.
- 1. On your logic app's resource group menu, select **Access control (IAM)**. Set up the other account with the **Contributor** role. For more information, see [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+ 1. On your logic app's resource group menu, select **Access control (IAM)**. Set up the other account with the **Contributor** role.
+
+ For more information, see [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md).
- 1. If you're signed in to the Azure portal with your work or school account, sign out and sign back in with your other account. You can now create a connection to Outlook by using the other account.
+ 1. After you set up this role, sign in to the Azure portal with the account that now has Contributor permissions. You can now use this account to create the connection to Outlook.
* Set up the other account so that your work or school account has "send as" permissions.
- If you have admin permissions, on the service account's mailbox, set up your your work or school account with either **Send as** or **Send on behalf of** permissions. For more information, see [Give mailbox permissions to another user - Admin Help](/microsoft-365/admin/add-users/give-mailbox-permissions-to-another-user). You can then create the connection by using your work or school account. Now, in triggers or actions where you can specify the sender, you can use the service account's email address.
+ If you have admin permissions, on the service account's mailbox, set up your work or school account with either **Send as** or **Send on behalf of** permissions. For more information, see [Give mailbox permissions to another user - Admin Help](/microsoft-365/admin/add-users/give-mailbox-permissions-to-another-user). You can then create the connection by using your work or school account. Now, in triggers or actions where you can specify the sender, you can use the service account's email address.
For example, the **Send an email** action has an optional parameter, **From (Send as)**, which you can add to the action and use your service account's email address as the sender. To add this parameter, follow these steps:
@@ -113,4 +115,4 @@ For technical details about this connector, such as triggers, actions, and limit
## Next steps
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
\ No newline at end of file
+* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-auto-purge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-auto-purge.md
@@ -9,7 +9,7 @@ Last updated 01/27/2021
When you use an Azure container registry as part of a development workflow, the registry can quickly fill up with images or other artifacts that aren't needed after a short period. You might want to delete all tags that are older than a certain duration or match a specified name filter. To delete multiple artifacts quickly, this article introduces the `acr purge` command you can run as an on-demand or [scheduled](container-registry-tasks-scheduled.md) ACR Task.
-The `acr purge` command is currently distributed in a public container image (`mcr.microsoft.com/acr/acr-cli:0.4`), built from source code in the [acr-cli](https://github.com/Azure/acr-cli) repo in GitHub.
+The `acr purge` command is currently distributed in a public container image (`mcr.microsoft.com/acr/acr-cli:0.3`), built from source code in the [acr-cli](https://github.com/Azure/acr-cli) repo in GitHub.
You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the ACR task examples in this article. If you'd like to use it locally, version 2.0.76 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
@@ -40,7 +40,6 @@ At a minimum, specify the following when you run `acr purge`:
* `--untagged` - Specifies that manifests that don't have associated tags (*untagged manifests*) are deleted. * `--dry-run` - Specifies that no data is deleted, but the output is the same as if the command is run without this flag. This parameter is useful for testing a purge command to make sure it does not inadvertently delete data you intend to preserve. * `--keep` - Specifies that the latest x number of to-be-deleted tags are retained.
-* `--concurrency` - Specifies that x purge tasks are processed concurrently. A default value will be used if this parameter is not provided.
For additional parameters, run `acr purge --help`.
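The on-demand form described above can be sketched as a quick ACR task. This is a hedged example: `myregistry` and `myrepo` are placeholder names, and the filter/age values are illustrative. Note the `--dry-run` flag, which reports what would be deleted without deleting anything.

```shell
# Build the purge command: delete tags in "myrepo" older than 7 days,
# plus untagged manifests; --dry-run makes this a no-op preview.
PURGE_CMD="acr purge --filter 'myrepo:.*' --ago 7d --untagged --dry-run"

# Run it as an on-demand quick task; /dev/null is the documented
# "no source context" argument for az acr run.
az acr run \
  --registry myregistry \
  --cmd "$PURGE_CMD" /dev/null
```

After verifying the dry-run output, remove `--dry-run` to perform the actual purge.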
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-troubleshoot-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-access.md
@@ -36,6 +36,8 @@ Run the [az acr check-health](/cli/azure/acr#az-acr-check-health) command to get
See [Check the health of an Azure container registry](container-registry-check-health.md) for command examples. If errors are reported, review the [error reference](container-registry-health-error-reference.md) and the following sections for recommended solutions.
+If you're experiencing problems using the registry with Azure Kubernetes Service, run the [az aks check-acr](/cli/azure/aks#az_aks_check_acr) command to validate that the registry is accessible from the AKS cluster.
+ > [!NOTE] > Some network connectivity symptoms can also occur when there are issues with registry authentication or authorization. See [Troubleshoot registry login](container-registry-troubleshoot-login.md).
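The AKS validation mentioned above can be sketched as follows; the cluster, resource group, and registry names are placeholders.

```shell
# Validate that the AKS cluster can reach and pull from the registry.
# Replace the three names with your own resources.
az aks check-acr \
  --name myAKSCluster \
  --resource-group myResourceGroup \
  --acr myregistry.azurecr.io
```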
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-troubleshoot-login https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-login.md
@@ -35,6 +35,8 @@ Run the [az acr check-health](/cli/azure/acr#az-acr-check-health) command to get
See [Check the health of an Azure container registry](container-registry-check-health.md) for command examples. If errors are reported, review the [error reference](container-registry-health-error-reference.md) and the following sections for recommended solutions.
+If you're experiencing problems using the registry with Azure Kubernetes Service, run the [az aks check-acr](/cli/azure/aks#az_aks_check_acr) command to validate that the registry is accessible from the AKS cluster.
+ > [!NOTE] > Some authentication or authorization errors can also occur if there are firewall or network configurations that prevent registry access. See [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md).
@@ -74,7 +76,7 @@ Check the validity of the credentials you use for your scenario, or were provide
* If using an Active Directory service principal, ensure you use the correct credentials in the Active Directory tenant: * User name - service principal application ID (also called *client ID*) * Password - service principal password (also called *client secret*)
-* If using an Azure service such as Azure Kubernetes Service or Azure DevOps to access the registry, confirm the registry configuration for your service.
+* If using an Azure service such as Azure Kubernetes Service or Azure DevOps to access the registry, confirm the registry configuration for your service.
* If you ran `az acr login` with the `--expose-token` option, which enables registry login without using the Docker daemon, ensure that you authenticate with the username `00000000-0000-0000-0000-000000000000`. * If your registry is configured for [anonymous pull access](container-registry-faq.md#how-do-i-enable-anonymous-pull-access), existing Docker credentials stored from a previous Docker login can prevent anonymous access. Run `docker logout` before attempting an anonymous pull operation on the registry.
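The `--expose-token` flow in the last bullets can be sketched like this; `myregistry` is a placeholder, and the all-zeros GUID is the documented username for token-based login.

```shell
# Get a registry access token without the Docker daemon.
TOKEN=$(az acr login --name myregistry --expose-token \
  --output tsv --query accessToken)

# Authenticate with the null-GUID username and the returned token.
echo "$TOKEN" | docker login myregistry.azurecr.io \
  --username 00000000-0000-0000-0000-000000000000 \
  --password-stdin
```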
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/cost-management-billing-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cost-management-billing-overview.md
@@ -4,7 +4,7 @@ description: You use Azure Cost Management + Billing features to conduct billing
keywords: Previously updated : 10/26/2020 Last updated : 01/28/2021
@@ -38,11 +38,11 @@ A billing account is created when you sign up to use Azure. You use your billing
The Azure portal currently supports the following types of billing accounts: -- **Microsoft Online Services Program**: An individual billing account for a Microsoft Online Services Program is created when you sign up for Azure through the Azure website. For example, when you sign up for an Azure Free Account, account with pay-as-you-go rates or as a Visual studio subscriber.
+- **Microsoft Online Services Program**: An individual billing account for a Microsoft Online Services Program is created when you sign up for Azure through the Azure website. For example, when you sign up for an [Azure Free Account](./manage/create-free-services.md), an account with pay-as-you-go rates, or as a Visual Studio subscriber.
- **Enterprise Agreement**: A billing account for an Enterprise Agreement is created when your organization signs an Enterprise Agreement (EA) to use Azure. -- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an account with pay-as-you-go rates or upgrade their Azure Free Account may have a billing account for a Microsoft Customer Agreement as well.
+- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an account with pay-as-you-go rates or upgrade their [Azure Free Account](./manage/create-free-services.md), may have a billing account for a Microsoft Customer Agreement as well.
### Scopes for billing accounts A scope is a node in a billing account that you use to view and manage billing. It's where you manage billing data, payments, invoices, and conduct general account management.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/understand-work-scopes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/understand-work-scopes.md
@@ -110,7 +110,7 @@ Represents a single account owner for one or more Azure subscriptions. It doesn'
Resource type: Not applicable
-Individual Azure subscription account admins can view and manage billing data, such as invoices and payments, from the [Azure Account Center](https://account.azure.com/subscriptions). However, they can't view cost data or manage resources in the Azure portal. To grant access to the account admin, use the Cost Management roles mentioned previously.
+Individual Azure subscription account admins can view and manage billing data, such as invoices and payments, from the [Azure portal](https://portal.azure.com) > **Subscriptions** > select a subscription.
Unlike EA, individual Azure subscription account admins can see their invoices in the Azure portal. Keep in mind that Cost Management Reader and Cost Management Contributor roles don't provide access to invoices. For more information, see [How to grant access to invoices](../manage/manage-billing-access.md#give-read-only-access-to-billing).
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/cost-management-automation-scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/cost-management-automation-scenarios.md
@@ -44,8 +44,8 @@ You can use the billing and cost management APIs in several scenarios to answer
| Usage Details | X | X | X | X | X | X | | Billing Periods | X | X | X | X | | | | Invoices | X | X | X | X | | |
-| RateCard | X | | X | X | X | |
-| Unrated Usage | X | | X | | X | |
+| Azure Retail Prices | X | | X | X | | |
+ > [!NOTE] > The scenario-to-API mapping doesn't include the Enterprise Consumption APIs. Where possible, use the general Consumption APIs for new development scenarios.
@@ -69,9 +69,7 @@ Web Direct and Enterprise customers can use all the following APIs, except where
- [Usage Details API](/rest/api/consumption/usagedetails): Get charge and usage information on all Azure resources from Microsoft. Information is in the form of usage detail records, which are currently emitted once per meter per day. You can use the information to add up the costs across all resources or investigate costs/usage on specific resources. -- [RateCard API](/previous-versions/azure/reference/mt219005(v=azure.100)): Get meter rates if you're a Web Direct customer. You can then use the returned information with your resource usage information to manually calculate the expected bill.--- [Unrated Usage API](/previous-versions/azure/reference/mt219003(v=azure.100)): Get raw usage information before Azure does any metering/charging.
+- [Azure Retail Prices](/rest/api/cost-management/retail-prices/azure-retail-prices): Get meter rates with pay-as-you-go pricing. You can then use the returned information with your resource usage information to manually calculate the expected bill.
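The Azure Retail Prices endpoint described above is unauthenticated, so it can be queried directly. A minimal sketch, assuming the example `$filter` values (service and region names are illustrative):

```shell
# Query pay-as-you-go meter rates; --data-urlencode handles the spaces
# in the OData filter expression. Results are paged via NextPageLink.
curl -sG "https://prices.azure.com/api/retail/prices" \
  --data-urlencode "\$filter=serviceName eq 'Virtual Machines' and armRegionName eq 'westus'"
```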
### Billing - [Billing Periods API](/rest/api/billing/enterprise/billing-enterprise-api-billing-periods): Determine a billing period to analyze, along with the invoice IDs for that period. You can use invoice IDs with the Invoices API.
@@ -102,16 +100,6 @@ These APIs have a similar set of functionality and can answer the same broad set
- Consumption APIs are available to all customers, with a few exceptions. For more information, see [Azure consumption API overview](consumption-api-overview.md) and the [Azure Consumption API reference](/rest/api/consumption/). We recommend the provided APIs as the solution for the latest development scenarios.
-### What's the difference between the Usage Details API and the Usage API?
-These APIs provide fundamentally different data:
--- The [Usage Details API](/rest/api/consumption/usagedetails) provides Azure usage and cost information per meter instance. The provided data has already passed through the cost metering system in Azure and had cost applied to it, along with other possible changes:-
- - Changes to account for the use of prepaid Azure Prepayment
- - Changes to account for usage discrepancies discovered by Azure
--- The [Usage API](/previous-versions/azure/reference/mt219003(v=azure.100)) provides raw Azure usage information before it passes through the cost metering system in Azure. This data might not have any correlation with the usage or charge amount that's seen after the Azure charge metering system.- ### What's the difference between the Invoice API and the Usage Details API? These APIs provide a different view of the same data:
@@ -124,7 +112,7 @@ These APIs provide similar sets of data but have different audiences:
- The [Price Sheet API](/rest/api/consumption/pricesheet) provides the custom pricing that was negotiated for an Enterprise customer. -- The [RateCard API](/previous-versions/azure/reference/mt219005(v=azure.100)) provides the public-facing pricing that applies to Web Direct customers.
+- The [Azure Retail Prices API](/rest/api/cost-management/retail-prices/azure-retail-prices) provides public-facing pay-as-you-go pricing that applies to Web Direct customers.
## Next steps
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/ea-portal-administration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-portal-administration.md
@@ -342,7 +342,7 @@ To add a subscription:
New subscriptions can take up to 24 hours to appear in the subscriptions list. After you've created a subscription, you can: -- [Edit subscription details](https://account.azure.com/Subscriptions)
+- [Edit subscription details](https://portal.azure.com)
- [Manage subscription services](https://portal.azure.com/#home) ## Delete subscription
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/pay-by-invoice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/pay-by-invoice.md
@@ -103,4 +103,4 @@ Occasionally Microsoft needs legal documentation if the information you provided
## Next steps
-* If needed, update your billing contact information at the [Azure Account Center](https://account.azure.com/Profile).
\ No newline at end of file
+* If needed, update your billing contact information at the [Azure portal](https://portal.azure.com).
\ No newline at end of file
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/troubleshoot-azure-sign-up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/troubleshoot-azure-sign-up.md
@@ -8,7 +8,7 @@ tags: billing
Previously updated : 08/20/2020 Last updated : 01/28/2021
@@ -134,12 +134,12 @@ Complete the Agreement.
## Other issues
-### Can't activate Azure benefit plan like MSDN, BizSpark, BizSparkPlus, or MPN
+### Can't activate Azure benefit plan like Visual Studio, BizSpark, BizSparkPlus, or MPN
Check that you're using the correct sign-in credentials. Then, check the benefit program and verify that you're eligible.-- MSDN
- - Verify your eligibility status on your [MSDN account page](https://msdn.microsoft.com/subscriptions/manage/default.aspx).
- - If you can't verify your status, contact the [MSDN Subscriptions Customer Service Centers](/previous-versions/mappoint/aa493452(v=msdn.10)).
+- Visual Studio
+ - Verify your eligibility status on your [Visual Studio account page](https://my.visualstudio.com/Benefits).
+ - If you can't verify your status, contact [Visual Studio Subscription Support](https://visualstudio.microsoft.com/subscriptions/support/).
- Microsoft for Startups - Sign in to the [Microsoft for Startups portal](https://startups.microsoft.com/#start-two) to verify your eligibility status for Microsoft for Startups. - If you can't verify your status, you can get help on the [Microsoft for Startups forums](https://www.microsoftpartnercommunity.com/t5/Microsoft-for-Startups/ct-p/Microsoft_Startups).
@@ -147,7 +147,6 @@ Check that you're using the correct sign-in credentials. Then, check the benefit
- Sign in to the [MPN portal](https://mspartner.microsoft.com/Pages/Locale.aspx) to verify your eligibility status. If you have the appropriate [Cloud Platform Competencies](https://mspartner.microsoft.com/pages/membership/cloud-platform-competency.aspx), you may be eligible for additional benefits. - If you can't verify your status, contact [MPN Support](https://mspartner.microsoft.com/Pages/Support/Premium/contact-support.aspx). - ### Can't activate new Azure In Open subscription To create an Azure In Open subscription, you must have a valid Online Service Activation (OSA) key that has at least one Azure In Open token associated with it. If you don't have an OSA key, contact one of the Microsoft Partners that are listed in [Microsoft Pinpoint](https://pinpoint.microsoft.com/).
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/troubleshoot-sign-in-issue https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/troubleshoot-sign-in-issue.md
@@ -27,7 +27,7 @@ If your internet browser page hangs, try each of the following steps until you c
- Use a different internet browser. - Use the private browsing mode for your browser:
- - **Edge:** Open **Settings** (the three dots by your profile picture), select **New InPrivate window**, and then browse and sign in to the [Azure portal](https://portal.azure.com/) or [Azure account center](https://account.azure.com/Subscriptions).
+ - **Edge:** Open **Settings** (the three dots by your profile picture), select **New InPrivate window**, and then browse and sign in to the [Azure portal](https://portal.azure.com/).
- **Chrome:** Choose **Incognito** mode. - **Safari:** Choose **File**, then **New Private Window**.
@@ -48,7 +48,7 @@ To resolve the issue, try one of the following methods:
- **Chrome:** Choose **Settings** and select **Clear browsing data** under **Privacy and Security**. - Reset your browser settings to defaults. - Use the private browsing mode for your browser.
- - **Edge:** Open **Settings** (the three dots by your profile picture), select **New InPrivate window**, and then browse and sign in to the [Azure portal](https://portal.azure.com/) or [Azure account center](https://account.azure.com/Subscriptions).
+ - **Edge:** Open **Settings** (the three dots by your profile picture), select **New InPrivate window**, and then browse and sign in to the [Azure portal](https://portal.azure.com/).
- **Chrome:** Choose **Incognito** mode. - **Safari:** Choose **File**, then **New Private Window**.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/reserved-instance-windows-software-costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/reserved-instance-windows-software-costs.md
@@ -7,7 +7,7 @@ tags: billing
Previously updated : 02/13/2020 Last updated : 01/28/2021
@@ -56,7 +56,7 @@ Virtual machine reserved instance and SQL reserved capacity discounts apply only
## Get rates for Azure meters
-You can get the cost of each of these meters through Azure RateCard API. For information on how to get the rates for an azure meter, see [Get price and metadata information for resources used in an Azure subscription](/previous-versions/azure/reference/mt219004(v=azure.100)).
+You can get the cost of each of the meters with the Azure Retail Prices API. For information on how to get the rates for an Azure meter, see [Azure Retail Prices overview](/rest/api/cost-management/retail-prices/azure-retail-prices).
## Next steps To learn more about reservations for Azure, see the following articles:
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-reserved-instance-usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/understand-reserved-instance-usage.md
@@ -13,7 +13,7 @@
# Understand Azure reservation usage for your individual subscription with pay-as-you-go rates subscription
-Use the ReservationId from [Reservation page](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Reservations&Microsoft_Azure_Reservations=true#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade) and the usage file from the [Azure Accounts portal](https://account.azure.com) to evaluate your reservation usage.
+Use the ReservationId from [Reservation page](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Reservations&Microsoft_Azure_Reservations=true#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade) and the usage file from the [Azure portal](https://portal.azure.com) to evaluate your reservation usage.
If you are a customer with an Enterprise Agreement, see [Understand reservation usage for your Enterprise enrollment.](understand-reserved-instance-usage-ea.md).
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-roles-permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-roles-permissions.md
@@ -52,6 +52,12 @@ Permissions on Azure Repos and GitHub are independent of Data Factory permission
> [!IMPORTANT] > Resource Manager template deployment with the **Data Factory Contributor** role does not elevate your permissions. For example, if you deploy a template that creates an Azure virtual machine, and you don't have permission to create virtual machines, the deployment fails with an authorization error.
+> [!IMPORTANT]
+> The permission **Microsoft.DataFactory/factories/write** is required in both modes within the publish context.
+
+- This permission is required in Live mode only when the customer modifies global parameters.
+- This permission is always required in Git mode because, each time the customer publishes, the factory object is updated with the last commit ID.
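One way to grant just the publish permission described above is a custom role scoped to a subscription. This is a hedged sketch: the role name and subscription ID are placeholders.

```shell
# Create a custom role containing only the factory write action
# needed at publish time. The GUID is a placeholder subscription ID.
az role definition create --role-definition '{
  "Name": "Data Factory Publisher (example)",
  "Description": "Can update factory objects, for example during publish.",
  "Actions": [ "Microsoft.DataFactory/factories/write" ],
  "AssignableScopes": [ "/subscriptions/00000000-0000-0000-0000-000000000000" ]
}'
```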
+ ### Custom scenarios and custom roles Sometimes you may need to grant different access levels for different data factory users. For example:
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
@@ -1,5 +1,5 @@
Title: Troubleshoot Azure Data Factory Connectors
+ Title: Troubleshoot Azure Data Factory connectors
description: Learn how to troubleshoot connector issues in Azure Data Factory.
@@ -11,85 +11,81 @@
-# Troubleshoot Azure Data Factory Connectors
+# Troubleshoot Azure Data Factory connectors
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article explores common troubleshooting methods for connectors in Azure Data Factory.
+This article explores common ways to troubleshoot problems with Azure Data Factory connectors.
## Azure Blob Storage ### Error code: AzureBlobOperationFailed -- **Message**: `Blob operation Failed. ContainerName: %containerName;, path: %path;.`
+- **Message**: "Blob operation Failed. ContainerName: %containerName;, path: %path;."
-- **Cause**: Blob storage operation hit problem.
+- **Cause**: A problem with the Blob Storage operation.
-- **Recommendation**: Check the error in details. Refer to blob help document: https://docs.microsoft.com/rest/api/storageservices/blob-service-error-codes. Contact storage team if need help.
+- **Recommendation**: To check the error details, see [Blob Storage error codes](https://docs.microsoft.com/rest/api/storageservices/blob-service-error-codes). For further help, contact the Blob Storage team.
### Invalid property during copy activity -- **Message**: `Copy activity <Activity Name> has an invalid "source" property. The source type is not compatible with the dataset <Dataset Name> and its linked service <Linked Service Name>. Please verify your input against.`
+- **Message**: `Copy activity \<Activity Name> has an invalid "source" property. The source type is not compatible with the dataset \<Dataset Name> and its linked service \<Linked Service Name>. Please verify your input against.`
-- **Cause**: The type defined in dataset is inconsistent with the source/sink type defined in copy activity.
+- **Cause**: The type defined in the dataset is inconsistent with the source or sink type that's defined in the copy activity.
-- **Resolution**: Edit the dataset or pipeline JSON definition to make the types consistent and rerun the deployment.
+- **Resolution**: Edit the dataset or pipeline JSON definition to make the types consistent, and then rerun the deployment.
## Azure Cosmos DB ### Error message: Request size is too large -- **Symptoms**: You copy data into Azure Cosmos DB with default write batch size, and hit error *"**Request size is too large**"*.
+- **Symptoms**: When you copy data into Azure Cosmos DB with a default write batch size, you receive the following error: `Request size is too large.`
-- **Cause**: Cosmos DB limits one single request's size to 2 MB. The formula is, Request Size = Single Document Size * Write Batch Size. If your document size is large, the default behavior will result in too large request size. You can tune the write batch size.
+- **Cause**: Azure Cosmos DB limits the size of a single request to 2 MB. The formula is *request size = single document size \* write batch size*. If your document size is large, the default behavior will result in a request size that's too large. You can tune the write batch size.
-- **Resolution**: In copy activity sink, reduce the 'Write batch size' value (default value is 10000).
+- **Resolution**: In the copy activity sink, reduce the *write batch size* value (the default value is 10000).
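The formula above (request size = single document size × write batch size) gives a rough upper bound for the batch size. A sketch, assuming roughly 4-KB documents:

```shell
# Estimate the largest write batch size that stays under the
# 2-MB Cosmos DB request limit for ~4-KB documents.
DOC_BYTES=$((4 * 1024))           # assumed average document size
LIMIT_BYTES=$((2 * 1024 * 1024))  # 2-MB request limit
MAX_BATCH=$((LIMIT_BYTES / DOC_BYTES))
echo "$MAX_BATCH"                 # 512
```

With larger documents, the bound shrinks proportionally, which is why reducing the default batch size of 10000 resolves the error.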
### Error message: Unique index constraint violation -- **Symptoms**: When copying data into Cosmos DB, you hit the following error:
+- **Symptoms**: When you copy data into Azure Cosmos DB, you receive the following error:
- ```
- Message=Partition range id 0 | Failed to import mini-batch.
- Exception was Message: {"Errors":["Encountered exception while executing function. Exception = Error: {\"Errors\":[\"Unique index constraint violation.\"]}...
- ```
+ `Message=Partition range id 0 | Failed to import mini-batch.
+ Exception was Message: {"Errors":["Encountered exception while executing function. Exception = Error: {\"Errors\":[\"Unique index constraint violation.\"]}...`
- **Cause**: There are two possible causes:
- - If you use **Insert** as write behavior, this error means you source data have rows/objects with same ID.
- - If you use **Upsert** as write behavior and you set another unique key to the container, this error means you source data have rows/objects with different IDs but same value for the defined unique key.
+ - Cause 1: If you use **Insert** as the write behavior, this error means that your source data has rows or objects with the same ID.
+ - Cause 2: If you use **Upsert** as the write behavior and you set another unique key to the container, this error means that your source data has rows or objects with different IDs but the same value for the defined unique key.
- **Resolution**:
- - For cause1, set **Upsert** as write behavior.
- - For cause 2, make sure each document has different value for defined unique key.
+ - For cause 1, set **Upsert** as the write behavior.
+ - For cause 2, make sure that each document has a different value for the defined unique key.
### Error message: Request rate is large

-- **Symptoms**: When copying data into Cosmos DB, you hit the following error:
+- **Symptoms**: When you copy data into Azure Cosmos DB, you receive the following error:
- ```
- Type=Microsoft.Azure.Documents.DocumentClientException,
- Message=Message: {"Errors":["Request rate is large"]}
- ```
+ `Type=Microsoft.Azure.Documents.DocumentClientException,
+ Message=Message: {"Errors":["Request rate is large"]}`
-- **Cause**: The request units used is bigger than the available RU configured in Cosmos DB. Learn how
-Cosmos DB calculates RU from [here](../cosmos-db/request-units.md#request-unit-considerations).
+- **Cause**: The number of used request units (RUs) is greater than the available RUs configured in Azure Cosmos DB. To learn how
+Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos-db/request-units.md#request-unit-considerations).
-- **Resolution**: Here are two solutions:
+- **Resolution**: Try either of the following two solutions:
- - **Increase the container RU** to bigger value in Cosmos DB, which will improve the copy activity performance, though incur more cost in Cosmos DB.
- - Decrease **writeBatchSize** to smaller value (such as 1000) and set **parallelCopies** to smaller value such as 1, which will make copy run performance worse than current but will not incur more cost in Cosmos DB.
+ - Increase the *container RUs* number to a greater value in Azure Cosmos DB. This solution will improve the copy activity performance, but it will incur more cost in Azure Cosmos DB.
+ - Decrease *writeBatchSize* to a lesser value, such as 1000, and decrease *parallelCopies* to a lesser value, such as 1. This solution will reduce copy run performance, but it won't incur more cost in Azure Cosmos DB.
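For the second solution, a minimal sketch of the copy activity's type properties with both values lowered (property names as used in copy activity JSON; the sink type assumes the SQL API connector):

```json
"typeProperties": {
    "parallelCopies": 1,
    "sink": {
        "type": "CosmosDbSqlApiSink",
        "writeBatchSize": 1000
    }
}
```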
-### Column missing in column mapping
+### Columns missing in column mapping
-- **Symptoms**: When you import schema for Cosmos DB for column mapping, some columns are missing.
+- **Symptoms**: When you import a schema for Azure Cosmos DB for column mapping, some columns are missing.
-- **Cause**: ADF infers the schema from the first 10 Cosmos DB documents. If some columns/properties don't have value in those documents, they won't be detected by ADF thus won't show up.
+- **Cause**: Data Factory infers the schema from the first 10 Azure Cosmos DB documents. If some document columns or properties don't contain values, the schema isn't detected by Data Factory and consequently isn't displayed.
-- **Resolution**: You can tune the query as below to enforce column to show up in result set with empty value: (assume: "impossible" column is missing in first 10 documents). Alternatively, you can manually add the column for mapping.
+- **Resolution**: You can tune the query as shown in the following code to force the column to be displayed in the result set with an empty value. (Assume that the *impossible* column is missing in the first 10 documents.) Alternatively, you can manually add the column for mapping.
```sql
select c.company, c.category, c.comments, (c.impossible??'') as impossible from c
```
@@ -97,17 +93,15 @@ Cosmos DB calculates RU from [here](../cosmos-db/request-units.md#request-unit-c
### Error message: The GuidRepresentation for the reader is CSharpLegacy

-- **Symptoms**: When copying data from Cosmos DB MongoAPI/MongoDB with UUID field, you hit the following error:
+- **Symptoms**: When you copy data from Azure Cosmos DB MongoAPI or MongoDB with the universally unique identifier (UUID) field, you receive the following error:
- ```
- Failed to read data via MongoDB client.,
- Source=Microsoft.DataTransfer.Runtime.MongoDbV2Connector,Type=System.FormatException,
- Message=The GuidRepresentation for the reader is CSharpLegacy which requires the binary sub type to be UuidLegacy not UuidStandard.,Source=MongoDB.Bson,’“,
- ```
+ `Failed to read data via MongoDB client.,
+ Source=Microsoft.DataTransfer.Runtime.MongoDbV2Connector,Type=System.FormatException,
+ Message=The GuidRepresentation for the reader is CSharpLegacy which requires the binary sub type to be UuidLegacy not UuidStandard.,Source=MongoDB.Bson,’“,`
-- **Cause**: There are two ways to represent UUID in BSON - UuidStardard and UuidLegacy. By default, UuidLegacy is used to read data. You will hit error if your UUID data in MongoDB is UuidStandard.
+- **Cause**: There are two ways to represent the UUID in Binary JSON (BSON): UuidStandard and UuidLegacy. By default, UuidLegacy is used to read data. You will receive an error if your UUID data in MongoDB is UuidStandard.
-- **Resolution**: In MongoDB connection string, add option "**uuidRepresentation=standard**". For more information, see [MongoDB connection string](connector-mongodb.md#linked-service-properties).
+- **Resolution**: In the MongoDB connection string, add the *uuidRepresentation=standard* option. For more information, see [MongoDB connection string](connector-mongodb.md#linked-service-properties).
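For example, the option can be appended to the connection string in the MongoDB linked service definition, sketched here with placeholder server, database, and credentials:

```json
"typeProperties": {
    "connectionString": "mongodb://<username>:<password>@<server>:27017/?authSource=admin&uuidRepresentation=standard",
    "database": "<database>"
}
```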
## Azure Cosmos DB (SQL API)
@@ -115,10 +109,9 @@ Cosmos DB calculates RU from [here](../cosmos-db/request-units.md#request-unit-c
- **Message**: `CosmosDbSqlApi operation Failed. ErrorMessage: %msg;.`

-- **Cause**: CosmosDbSqlApi operation hit problem.

-- **Recommendation**: Check the error in details. Refer to [CosmosDb help document](../cosmos-db/troubleshoot-dot-net-sdk.md). Contact CosmosDb team if need help.
+- **Cause**: A problem with the CosmosDbSqlApi operation.
+- **Recommendation**: To check the error details, see [Azure Cosmos DB help document](../cosmos-db/troubleshoot-dot-net-sdk.md). For further help, contact the Azure Cosmos DB team.
## Azure Data Lake Storage Gen1
@@ -126,41 +119,34 @@ Cosmos DB calculates RU from [here](../cosmos-db/request-units.md#request-unit-c
- **Symptoms**: Copy activity fails with the following error:
- ```
- Message: ErrorCode = `UserErrorFailedFileOperation`, Error Message = `The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel`.
- ```
+ `Message: ErrorCode = UserErrorFailedFileOperation, Error Message = The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.`
-- **Cause**: The certificate validation failed during TLS handshake.
+- **Cause**: The certificate validation failed during the TLS handshake.
-- **Resolution**: Workaround: Use staged copy to skip the TLS validation for ADLS Gen1. You need to reproduce this issue and gather netmon trace, and then engage your network team to check the local network configuration.
+- **Resolution**: As a workaround, use the staged copy to skip the Transport Layer Security (TLS) validation for Azure Data Lake Storage Gen1. You need to reproduce this issue and gather the network monitor (netmon) trace, and then engage your network team to check the local network configuration.
- ![Troubleshoot ADLS Gen1](./media/connector-troubleshoot-guide/adls-troubleshoot.png)
+ ![Diagram of Azure Data Lake Storage Gen1 connections for troubleshooting issues.](./media/connector-troubleshoot-guide/adls-troubleshoot.png)
### Error message: The remote server returned an error: (403) Forbidden

- **Symptoms**: Copy activity fails with the following error:
- ```
- Message: The remote server returned an error: (403) Forbidden..
- Response details: {"RemoteException":{"exception":"AccessControlException""message":"CREATE failed with error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource does not exist or the user is not authorized to perform the requested operation.)....
- ```
+ `Message: The remote server returned an error: (403) Forbidden.
+ Response details: {"RemoteException":{"exception":"AccessControlException""message":"CREATE failed with error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource does not exist or the user is not authorized to perform the requested operation.)....`
-- **Cause**: One possible cause is that the service principal or managed identity you use doesn't have permission to access the certain folder/file.
+- **Cause**: One possible cause is that the service principal or managed identity you use doesn't have permission to access certain folders or files.
-- **Resolution**: Grant corresponding permissions on all the folders and subfolders you need to copy. Refer to [this doc](connector-azure-data-lake-store.md#linked-service-properties).
+- **Resolution**: Grant appropriate permissions to all the folders and subfolders you need to copy. For more information, see [Copy data to or from Azure Data Lake Storage Gen1 using Azure Data Factory](connector-azure-data-lake-store.md#linked-service-properties).
### Error message: Failed to get access token by using service principal. ADAL Error: service_unavailable

-- **Symptoms**: Copy activity fail with the following error:
+- **Symptoms**: Copy activity fails with the following error:
- ```
- Failed to get access token by using service principal.
- ADAL Error: service_unavailable, The remote server returned an error: (503) Server Unavailable.
- ```
+ `Failed to get access token by using service principal.
+ ADAL Error: service_unavailable, The remote server returned an error: (503) Server Unavailable.`
-- **Cause**: When the Service Token Server (STS) owned by Azure Active Directory is not unavailable, i.e., too
-busy to handle requests, it returns an HTTP error 503.
+- **Cause**: When the Service Token Server (STS) that's owned by Azure Active Directory is unavailable (that is, too busy to handle requests), it returns HTTP error 503.
- **Resolution**: Rerun the copy activity after several minutes.
@@ -171,33 +157,36 @@ busy to handle requests, it returns an HTTP error 503.
- **Message**: `ADLS Gen2 operation failed for: %adlsGen2Message;.%exceptionData;.`

-- **Cause**: ADLS Gen2 throws the error indicating operation failed.
+- **Cause**: If Azure Data Lake Storage Gen2 throws this error, the operation has failed.
-- **Recommendation**: Check the detailed error message thrown by ADLS Gen2. If it's caused by transient failure, please retry. If you need further help, please contact Azure Storage support and provide the request ID in error message.
+- **Recommendation**: Check the detailed error message thrown by Azure Data Lake Storage Gen2. If the error is a transient failure, retry the operation. For further help, contact Azure Storage support, and provide the request ID in the error message.
-- **Cause**: When the error message contains 'Forbidden', the service principal or managed identity you use may not have enough permission to access the ADLS Gen2.
+- **Cause**: If the error message contains the string "Forbidden," the service principal or managed identity you use might not have sufficient permission to access Azure Data Lake Storage Gen2.
-- **Recommendation**: Refer to the help document: https://docs.microsoft.com/azure/data-factory/connector-azure-data-lake-storage#service-principal-authentication.
+- **Recommendation**: To troubleshoot this error, see [Copy and transform data in Azure Data Lake Storage Gen2 by using Azure Data Factory](https://docs.microsoft.com/azure/data-factory/connector-azure-data-lake-storage#service-principal-authentication).
-- **Cause**: When the error message contains 'InternalServerError', the error is returned by ADLS Gen2.
+- **Cause**: If the error message contains the string "InternalServerError," the error is returned by Azure Data Lake Storage Gen2.
-- **Recommendation**: It may be caused by transient failure, please retry. If the issue persists, please contact Azure Storage support and provide the request ID in error message.
+- **Recommendation**: The error might be caused by a transient failure. If so, retry the operation. If the issue persists, contact Azure Storage support and provide the request ID from the error message.
-### Request to ADLS Gen2 account met timeout error
+### Request to Azure Data Lake Storage Gen2 account caused a timeout error
-- **Message**: Error Code = `UserErrorFailedBlobFSOperation`, Error Message = `BlobFS operation failed for: A task was canceled`.
+- **Message**:
+ * Error Code = `UserErrorFailedBlobFSOperation`
+ * Error Message = `BlobFS operation failed for: A task was canceled.`
-- **Cause**: The issue is caused by the ADLS Gen2 sink timeout error, which mostly happens on the self-hosted IR machine.
+- **Cause**: The issue is caused by the Azure Data Lake Storage Gen2 sink timeout error, which usually occurs on the Self-hosted Integration Runtime (IR) machine.
- **Recommendation**:
- - Place your self-hosted IR machine and target ADLS Gen2 account in the same region if possible. This can avoid random timeout error and have better performance.
+ - Place your Self-hosted IR machine and target Azure Data Lake Storage Gen2 account in the same region, if possible. This can help avoid a random timeout error and produce better performance.
- - Check whether there is any special network setting like ExpressRoute and ensure the network has enough bandwidth. It is suggested to lower the self-hosted IR concurrent jobs setting when the overall bandwidth is low, through which can avoid network resource competition across multiple concurrent jobs.
+ - Check whether there's a special network setting, such as ExpressRoute, and ensure that the network has enough bandwidth. We suggest that you lower the Self-hosted IR concurrent jobs setting when the overall bandwidth is low. Doing so can help avoid network resource competition across multiple concurrent jobs.
- - Use smaller block size for non-binary copy to mitigate such timeout error if the file size is moderate or small. Refer to [Blob Storage Put Block](/rest/api/storageservices/put-block).
+ - If the file size is moderate or small, use a smaller block size for nonbinary copy to mitigate such a timeout error. For more information, see [Blob Storage Put Block](/rest/api/storageservices/put-block).
- To specify the custom block size, you can edit the property in .json editor:
+ To specify the custom block size, edit the property in your JSON file editor as shown here:
+
```
"sink": {
    "type": "DelimitedTextSink",
@@ -209,299 +198,293 @@ busy to handle requests, it returns an HTTP error 503.
```
-## Azure File Storage
+## Azure Files storage
### Error code: AzureFileOperationFailed

- **Message**: `Azure File operation Failed. Path: %path;. ErrorMessage: %msg;.`

-- **Cause**: Azure File storage operation hit problem.
+- **Cause**: A problem with the Azure Files storage operation.
-- **Recommendation**: Check the error in details. Refer to Azure File help document: https://docs.microsoft.com/rest/api/storageservices/file-service-error-codes. Contact the storage team if you need help.
+- **Recommendation**: To check the error details, see [Azure Files help](https://docs.microsoft.com/rest/api/storageservices/file-service-error-codes). For further help, contact the Azure Files team.
-## Azure Synapse Analytics/Azure SQL Database/SQL Server
+## Azure Synapse Analytics, Azure SQL Database, and SQL Server
### Error code: SqlFailedToConnect

- **Message**: `Cannot connect to SQL Database: '%server;', Database: '%database;', User: '%user;'. Check the linked service configuration is correct, and make sure the SQL Database firewall allows the integration runtime to access.`

-- **Cause**: Azure SQL: If the error message contains "SqlErrorNumber=47073", it means public network access is denied in connectivity setting.
+- **Cause**: For Azure SQL, if the error message contains the string "SqlErrorNumber=47073," it means that public network access is denied in the connectivity setting.
-- **Recommendation**: On Azure SQL firewall, set "Deny public network access" option to "No". Learn more from https://docs.microsoft.com/azure/azure-sql/database/connectivity-settings#deny-public-network-access.
+- **Recommendation**: On the Azure SQL firewall, set the **Deny public network access** option to *No*. For more information, see [Azure SQL connectivity settings](https://docs.microsoft.com/azure/azure-sql/database/connectivity-settings#deny-public-network-access).
-- **Cause**: Azure SQL: If the error message contains SQL error code, like "SqlErrorNumber=[errorcode]", please refer to Azure SQL troubleshooting guide.
+- **Cause**: For Azure SQL, if the error message contains an SQL error code such as "SqlErrorNumber=[errorcode]", see the Azure SQL troubleshooting guide.
-- **Recommendation**: Learn more from https://docs.microsoft.com/azure/azure-sql/database/troubleshoot-common-errors-issues.
+- **Recommendation**: For a recommendation, see [Troubleshoot connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance](https://docs.microsoft.com/azure/azure-sql/database/troubleshoot-common-errors-issues).
-- **Cause**: Check if port 1433 is in the firewall allow list.
+- **Cause**: Check to see whether port 1433 is in the firewall allow list.
-- **Recommendation**: Follow with this reference doc: https://docs.microsoft.com/sql/sql-server/install/configure-the-windows-firewall-to-allow-sql-server-access#ports-used-by-.
+- **Recommendation**: For more information, see [Ports used by SQL Server](https://docs.microsoft.com/sql/sql-server/install/configure-the-windows-firewall-to-allow-sql-server-access#ports-used-by-).
-- **Cause**: If the error message contains "SqlException", SQL Database throws the error indicating some specific operation failed.
+- **Cause**: If the error message contains the string "SqlException," SQL Database throws an error indicating that some specific operation failed.
-- **Recommendation**: Search by SQL error code in this reference doc for more details: https://docs.microsoft.com/sql/relational-databases/errors-events/database-engine-events-and-errors. If you need further help, contact Azure SQL support.
+- **Recommendation**: For more information, search by SQL error code in [Database engine errors](https://docs.microsoft.com/sql/relational-databases/errors-events/database-engine-events-and-errors). For further help, contact Azure SQL support.
-- **Cause**: If this is a transient issue (e.g., instable network connection), please add retry in activity policy to mitigate.
+- **Cause**: If this is a transient issue (for example, an instable network connection), add retry in the activity policy to mitigate.
-- **Recommendation**: Follow this reference doc: https://docs.microsoft.com/azure/data-factory/concepts-pipelines-activities#activity-policy.
+- **Recommendation**: For more information, see [Pipelines and activities in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/concepts-pipelines-activities#activity-policy).
-- **Cause**: If the error message contains "Client with IP address '...' is not allowed to access the server", and you are trying to connect to Azure SQL Database, usually it is caused by Azure SQL Database firewall issue.
+- **Cause**: If the error message contains the string "Client with IP address '...' is not allowed to access the server," and you're trying to connect to Azure SQL Database, the error is usually caused by an Azure SQL Database firewall issue.
-- **Recommendation**: In Azure SQL Server firewall configuration, enable "Allow Azure services and resources to access this server" option. Reference doc: https://docs.microsoft.com/azure/sql-database/sql-database-firewall-configure.
+- **Recommendation**: In the Azure SQL Server firewall configuration, enable the **Allow Azure services and resources to access this server** option. For more information, see [Azure SQL Database and Azure Synapse IP firewall rules](https://docs.microsoft.com/azure/sql-database/sql-database-firewall-configure).
### Error code: SqlOperationFailed

- **Message**: `A database operation failed. Please search error to get more details.`

-- **Cause**: If the error message contains "SqlException", SQL Database throws the error indicating some specific operation failed.
+- **Cause**: If the error message contains the string "SqlException," SQL Database throws an error indicating some specific operation failed.
-- **Recommendation**: If SQL error is not clear, please try to alter the database to latest compatibility level '150'. It can throw latest version SQL errors. Refer the [detail doc](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level#backwardCompat).
+- **Recommendation**: If the SQL error is not clear, try altering the database to the latest compatibility level, 150, so that it can surface the latest-version SQL errors. For more information, see the [documentation](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level#backwardCompat).
- For troubleshooting SQL issues, please search by SQL error code in this reference doc for more details: https://docs.microsoft.com/sql/relational-databases/errors-events/database-engine-events-and-errors. If you need further help, contact Azure SQL support.
+ For more information about troubleshooting SQL issues, search by SQL error code in [Database engine errors](https://docs.microsoft.com/sql/relational-databases/errors-events/database-engine-events-and-errors). For further help, contact Azure SQL support.
-- **Cause**: If the error message contains "PdwManagedToNativeInteropException", usually it's caused by mismatch between source and sink column sizes.
+- **Cause**: If the error message contains the string "PdwManagedToNativeInteropException," it's usually caused by a mismatch between the source and sink column sizes.
-- **Recommendation**: Check the size of both source and sink columns. If you need further help, contact Azure SQL support.
+- **Recommendation**: Check the size of both the source and sink columns. For further help, contact Azure SQL support.
-- **Cause**: If the error message contains "InvalidOperationException", usually it's caused by invalid input data.
+- **Cause**: If the error message contains the string "InvalidOperationException", it's usually caused by invalid input data.
-- **Recommendation**: To identify which row encounters the problem, please enable fault tolerance feature on copy activity, which can redirect problematic row(s) to the storage for further investigation. Reference doc: https://docs.microsoft.com/azure/data-factory/copy-activity-fault-tolerance.
+- **Recommendation**: To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity, which can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/copy-activity-fault-tolerance).
### Error code: SqlUnauthorizedAccess

- **Message**: `Cannot connect to '%connectorName;'. Detail Message: '%message;'`

-- **Cause**: Credential is incorrect or the login account cannot access SQL Database.
+- **Cause**: The credentials are incorrect or the login account can't access the SQL database.
-- **Recommendation**: Check the login account has enough permission to access the SQL Database.
+- **Recommendation**: Check to ensure that the login account has sufficient permissions to access the SQL database.
### Error code: SqlOpenConnectionTimeout

- **Message**: `Open connection to database timeout after '%timeoutValue;' seconds.`

-- **Cause**: Could be SQL Database transient failure.
+- **Cause**: The problem could be a SQL database transient failure.
-- **Recommendation**: Retry to update linked service connection string with larger connection timeout value.
+- **Recommendation**: Retry the operation to update the linked service connection string with a larger connection timeout value.
### Error code: SqlAutoCreateTableTypeMapFailed

- **Message**: `Type '%dataType;' in source side cannot be mapped to a type that supported by sink side(column name:'%columnName;') in autocreate table.`

-- **Cause**: Auto creation table cannot meet source requirement.
+- **Cause**: The autocreation table can't meet the source requirement.
-- **Recommendation**: Update the column type in 'mappings', or manually create the sink table in target server.
+- **Recommendation**: Update the column type in *mappings*, or manually create the sink table in the target server.
### Error code: SqlDataTypeNotSupported

- **Message**: `A database operation failed. Check the SQL errors.`

-- **Cause**: If the issue happens on SQL source and the error is related to SqlDateTime overflow, the data value is over the logic type range (1/1/1753 12:00:00 AM - 12/31/9999 11:59:59 PM).
+- **Cause**: If the issue occurs in the SQL source and the error is related to SqlDateTime overflow, the data value exceeds the logic type range (1/1/1753 12:00:00 AM - 12/31/9999 11:59:59 PM).
-- **Recommendation**: Cast the type to string in source SQL query, or in copy activity column mapping change the column type to 'String'.
+- **Recommendation**: Cast the type to the string in the source SQL query or, in the copy activity column mapping, change the column type to *String*.
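In copy activity JSON, changing the mapped column type to *String* corresponds to a translator entry such as the following sketch (the TabularTranslator shape is assumed; the column name is hypothetical):

```json
"translator": {
    "type": "TabularTranslator",
    "mappings": [
        {
            "source": { "name": "LegacyDate", "type": "String" },
            "sink": { "name": "LegacyDate" }
        }
    ]
}
```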
-- **Cause**: If the issue happens on SQL sink and the error is related to SqlDateTime overflow, the data value is over the allowed range in sink table.
+- **Cause**: If the issue occurs on the SQL sink and the error is related to SqlDateTime overflow, the data value exceeds the allowed range in the sink table.
-- **Recommendation**: Update the corresponding column type to 'datetime2' type in sink table.
+- **Recommendation**: Update the corresponding column type to the *datetime2* type in the sink table.
### Error code: SqlInvalidDbStoredProcedure

- **Message**: `The specified Stored Procedure is not valid. It could be caused by that the stored procedure doesn't return any data. Invalid Stored Procedure script: '%scriptName;'.`

-- **Cause**: The specified Stored Procedure is not valid. It could be caused by that the stored procedure doesn't return any data.
+- **Cause**: The specified stored procedure is invalid. The cause might be that the stored procedure doesn't return any data.
-- **Recommendation**: Validate the stored procedure by SQL Tools. Make sure the stored procedure can return data.
+- **Recommendation**: Validate the stored procedure by using SQL Tools. Make sure that the stored procedure can return data.
### Error code: SqlInvalidDbQueryString

- **Message**: `The specified SQL Query is not valid. It could be caused by that the query doesn't return any data. Invalid query: '%query;'`

-- **Cause**: The specified SQL Query is not valid. It could be caused by that the query doesn't return any data
+- **Cause**: The specified SQL query is invalid. The cause might be that the query doesn't return any data.
-- **Recommendation**: Validate the SQL Query by SQL Tools. Make sure the query can return data.
+- **Recommendation**: Validate the SQL query by using SQL Tools. Make sure that the query can return data.
### Error code: SqlInvalidColumnName

- **Message**: `Column '%column;' does not exist in the table '%tableName;', ServerName: '%serverName;', DatabaseName: '%dbName;'.`

-- **Cause**: Cannot find column. Possible configuration wrong.
+- **Cause**: The column can't be found because the configuration might be incorrect.
-- **Recommendation**: Verify the column in the query, 'structure' in dataset, and 'mappings' in activity.
+- **Recommendation**: Verify the column in the query, *structure* in the dataset, and *mappings* in the activity.
### Error code: SqlBatchWriteTimeout

- **Message**: `Timeouts in SQL write operation.`

-- **Cause**: Could be SQL Database transient failure.
+- **Cause**: The problem could be caused by a SQL database transient failure.
-- **Recommendation**: Retry. If problem repro, contact Azure SQL support.
+- **Recommendation**: Retry the operation. If the problem persists, contact Azure SQL support.
### Error code: SqlBatchWriteTransactionFailed

-- **Message**: `SQL transaction commits failed`
+- **Message**: `SQL transaction commits failed.`
-- **Cause**: If exception details constantly tell transaction timeout, the network latency between integration runtime and database is higher than default threshold as 30 seconds.
+- **Cause**: If exception details constantly indicate a transaction timeout, the network latency between the integration runtime and the database is greater than the default threshold of 30 seconds.
-- **Recommendation**: Update Sql linked service connection string with 'connection timeout' value equals to 120 or higher and rerun the activity.
+- **Recommendation**: Update the SQL-linked service connection string with a *connection timeout* value that's equal to or greater than 120 and rerun the activity.
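For example, the linked service connection string might be updated as follows (server and database names are placeholders; the keyword spelling follows common SqlClient connection strings):

```json
"typeProperties": {
    "connectionString": "Server=tcp:<server>.database.windows.net,1433;Database=<database>;Connection Timeout=120;"
}
```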
-- **Cause**: If exception details intermittently tell sqlconnection broken, it could just be transient network failure or SQL Database side issue
+- **Cause**: If the exception details intermittently indicate that the SQL connection is broken, it might be a transient network failure or a SQL database side issue.
-- **Recommendation**: Retry the activity and review SQL Database side metrics.
+- **Recommendation**: Retry the activity and review the SQL database side metrics.
### Error code: SqlBulkCopyInvalidColumnLength

- **Message**: `SQL Bulk Copy failed due to receive an invalid column length from the bcp client.`

-- **Cause**: SQL Bulk Copy failed due to receive an invalid column length from the bcp client.
+- **Cause**: SQL Bulk Copy failed because it received an invalid column length from the bulk copy program utility (bcp) client.
-- **Recommendation**: To identify which row encounters the problem, please enable fault tolerance feature on copy activity, which can redirect problematic row(s) to the storage for further investigation. Reference doc: https://docs.microsoft.com/azure/data-factory/copy-activity-fault-tolerance.
+- **Recommendation**: To identify which row has encountered the problem, enable the fault tolerance feature on the copy activity. This can redirect problematic rows to the storage for further investigation. For more information, see [Fault tolerance of copy activity in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/copy-activity-fault-tolerance).
### Error code: SqlConnectionIsClosed

- **Message**: `The connection is closed by SQL Database.`

-- **Cause**: SQL connection is closed by SQL Database when high concurrent run and server terminate connection.
+- **Cause**: The SQL connection is closed by the SQL database when there's a high concurrent run and the server terminates the connection.
-- **Recommendation**: Remote server closed the SQL connection. Retry. If problem repro, contact Azure SQL support.
+- **Recommendation**: Retry the connection. If the problem persists, contact Azure SQL support.
### Error message: Conversion failed when converting from a character string to uniqueidentifier

-- **Symptoms**: When you copy data from tabular data source (such as SQL Server) into Azure Synapse Analytics using staged copy and PolyBase, you hit the following error:
+- **Symptoms**: When you copy data from a tabular data source (such as SQL Server) into Azure Synapse Analytics using staged copy and PolyBase, you receive the following error:
- ```
- ErrorCode=FailedDbOperation,Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
- Message=Error happened when loading data into Azure Synapse Analytics.,
- Source=Microsoft.DataTransfer.ClientLibrary,Type=System.Data.SqlClient.SqlException,
- Message=Conversion failed when converting from a character string to uniqueidentifier...
- ```
+ `ErrorCode=FailedDbOperation,Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
+ Message=Error happened when loading data into Azure Synapse Analytics.,
+ Source=Microsoft.DataTransfer.ClientLibrary,Type=System.Data.SqlClient.SqlException,
+ Message=Conversion failed when converting from a character string to uniqueidentifier...`
-- **Cause**: Azure Synapse Analytics PolyBase cannot convert empty string to GUID.
+- **Cause**: Azure Synapse Analytics PolyBase can't convert an empty string to a GUID.
-- **Resolution**: In Copy activity sink, under Polybase settings, set "**use type default**" option to false.
+- **Resolution**: In the copy activity sink, under PolyBase settings, set the **use type default** option to *false*.
### Error message: Expected data type: DECIMAL(x,x), Offending value

-- **Symptoms**: When you copy data from tabular data source (such as SQL Server) into Azure Synapse Analytics using staged copy and PolyBase, you hit the following error:
+- **Symptoms**: When you copy data from a tabular data source (such as SQL Server) into Azure Synapse Analytics by using staged copy and PolyBase, you receive the following error:
- ```
- ErrorCode=FailedDbOperation,Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
- Message=Error happened when loading data into Azure Synapse Analytics.,
- Source=Microsoft.DataTransfer.ClientLibrary,Type=System.Data.SqlClient.SqlException,
- Message=Query aborted-- the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 415 rows processed. (/file_name.txt)
- Column ordinal: 18, Expected data type: DECIMAL(x,x), Offending value:..
- ```
+ `ErrorCode=FailedDbOperation,Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
+ Message=Error happened when loading data into Azure Synapse Analytics.,
+ Source=Microsoft.DataTransfer.ClientLibrary,Type=System.Data.SqlClient.SqlException,
+ Message=Query aborted-- the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 415 rows processed. (/file_name.txt)
+ Column ordinal: 18, Expected data type: DECIMAL(x,x), Offending value:..`
-- **Cause**: Azure Synapse Analytics Polybase cannot insert empty string (null value) into decimal column.
+- **Cause**: Azure Synapse Analytics PolyBase can't insert an empty string (null value) into a decimal column.
-- **Resolution**: In Copy activity sink, under Polybase settings, set "**use type default**" option to false.
+- **Resolution**: In the copy activity sink, under PolyBase settings, set the **use type default** option to false.
### Error message: Java exception message: HdfsBridge::CreateRecordReader

-- **Symptoms**: You copy data into Azure Synapse Analytics using PolyBase, and hit the following error:
+- **Symptoms**: You copy data into Azure Synapse Analytics by using PolyBase and receive the following error:
- ```
- Message=110802;An internal DMS error occurred that caused this operation to fail.
+ `Message=110802;An internal DMS error occurred that caused this operation to fail.
Details: Exception: Microsoft.SqlServer.DataWarehouse.DataMovement.Common.ExternalAccess.HdfsAccessException,
- Message: Java exception raised on call to HdfsBridge_CreateRecordReader.
- Java exception message:HdfsBridge::CreateRecordReader - Unexpected error encountered creating the record reader.: Error [HdfsBridge::CreateRecordReader - Unexpected error encountered creating the record reader.] occurred while accessing external file.....
- ```
--- **Cause**: The possible cause is that the schema (total column width) being too large (larger than 1 MB). Check the schema of the target Azure Synapse Analytics table by adding the size of all columns:
- - Int -> 4 bytes
- - Bigint -> 8 bytes
- - Varchar(n),char(n),binary(n), varbinary(n) -> n bytes
- - Nvarchar(n), nchar(n) -> n*2 bytes
- - Date -> 6 bytes
- - Datetime/(2), smalldatetime -> 16 bytes
- - Datetimeoffset -> 20 bytes
- - Decimal -> 19 bytes
- - Float -> 8 bytes
- - Money -> 8 bytes
- - Smallmoney -> 4 bytes
- - Real -> 4 bytes
- - Smallint -> 2 bytes
- - Time -> 12 bytes
- - Tinyint -> 1 byte
+ Message: Java exception raised on call to HdfsBridge_CreateRecordReader.
+ Java exception message:HdfsBridge::CreateRecordReader - Unexpected error encountered creating the record reader.: Error [HdfsBridge::CreateRecordReader - Unexpected error encountered creating the record reader.] occurred while accessing external file.....`
+
+- **Cause**: The cause might be that the schema (total column width) is too large (larger than 1 MB). Check the schema of the target Azure Synapse Analytics table by adding the size of all columns:
+
+ - Int = 4 bytes
+ - Bigint = 8 bytes
+ - Varchar(n), char(n), binary(n), varbinary(n) = n bytes
+ - Nvarchar(n), nchar(n) = n*2 bytes
+ - Date = 6 bytes
+ - Datetime/(2), smalldatetime = 16 bytes
+ - Datetimeoffset = 20 bytes
+ - Decimal = 19 bytes
+ - Float = 8 bytes
+ - Money = 8 bytes
+ - Smallmoney = 4 bytes
+ - Real = 4 bytes
+ - Smallint = 2 bytes
+ - Time = 12 bytes
+ - Tinyint = 1 byte
- **Resolution**:
- - Reduce column width to be less than 1 MB.
- - Or use bulk insert approach by disabling Polybase.
+ - Reduce column width to less than 1 MB.
+ - Or use a bulk insert approach by disabling PolyBase.
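As a rough aid, the per-type sizes above can be added up programmatically. The following sketch (hypothetical helper code, not part of Data Factory, with made-up column definitions) estimates whether a target schema exceeds the 1-MB limit:

```python
# Hypothetical width estimator based on the byte sizes listed above.
FIXED_SIZES = {
    "int": 4, "bigint": 8, "date": 6, "datetime": 16, "datetime2": 16,
    "smalldatetime": 16, "datetimeoffset": 20, "decimal": 19, "float": 8,
    "money": 8, "smallmoney": 4, "real": 4, "smallint": 2, "time": 12,
    "tinyint": 1,
}

def column_width(type_name: str, length: int = 0) -> int:
    t = type_name.lower()
    if t in ("varchar", "char", "binary", "varbinary"):
        return length          # n bytes
    if t in ("nvarchar", "nchar"):
        return length * 2      # n*2 bytes
    return FIXED_SIZES[t]

def schema_width(columns) -> int:
    """columns: iterable of (type_name, length) tuples."""
    return sum(column_width(t, n) for t, n in columns)

# Example schema (made up): int + nvarchar(4000) + datetime
cols = [("int", 0), ("nvarchar", 4000), ("datetime", 0)]
total = schema_width(cols)
print(total, "bytes; exceeds 1 MB:", total > 1_048_576)
```

If the total comes out above 1,048,576 bytes, reduce the widest columns first or fall back to bulk insert as described above.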
### Error message: The condition specified using HTTP conditional header(s) is not met

-- **Symptoms**: You use SQL query to pull data from Azure Synapse Analytics and hit the following error:
+- **Symptoms**: You use a SQL query to pull data from Azure Synapse Analytics and receive the following error:
- ```
- ...StorageException: The condition specified using HTTP conditional header(s) is not met...
- ```
+ `...StorageException: The condition specified using HTTP conditional header(s) is not met...`
-- **Cause**: Azure Synapse Analytics hit issue querying the external table in Azure Storage.
+- **Cause**: Azure Synapse Analytics encountered an issue while querying the external table in Azure Storage.
-- **Resolution**: Run the same query in SSMS and check if you see the same result. If yes, open a support ticket to Azure Synapse Analytics and provide your Azure Synapse Analytics server and database name to further troubleshoot.
+- **Resolution**: Run the same query in SQL Server Management Studio (SSMS) and check to see whether you get the same result. If you do, open a support ticket to Azure Synapse Analytics and provide your Azure Synapse Analytics server and database name.
### Performance tier is low and leads to copy failure

-- **Symptoms**: Below error message occurred when copying data into Azure SQL Database: `Database operation failed. Error message from database execution : ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.`
+- **Symptoms**: You copy data into Azure SQL Database and receive the following error: `Database operation failed. Error message from database execution : ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.`
-- **Cause**: Azure SQL Database s1 is being used, which hit IO limits in such case.
+- **Cause**: The Azure SQL Database S1 pricing tier is being used, and it has hit its input/output (I/O) limits.
- **Resolution**: Upgrade the Azure SQL Database performance tier to fix the issue.
-### SQL Table cannot be found
+### SQL table can't be found
-- **Symptoms**: Error occurred when copying data from Hybrid into On-prem SQL Server table:`Cannot find the object "dbo.Contoso" because it does not exist or you do not have permissions.`
+- **Symptoms**: You copy data in a hybrid scenario into an on-premises SQL Server table and receive the following error: `Cannot find the object "dbo.Contoso" because it does not exist or you do not have permissions.`
-- **Cause**: The current SQL account does not have enough permission to execute requests issued by .NET SqlBulkCopy.WriteToServer.
+- **Cause**: The current SQL account doesn't have sufficient permissions to execute requests issued by .NET SqlBulkCopy.WriteToServer.
- **Resolution**: Switch to a more privileged SQL account.
-### Error message: String or binary data would be truncated
+### Error message: String or binary data would be truncated
-- **Symptoms**: Error occurred when copying data into On-prem/Azure SQL Server table:
+- **Symptoms**: An error occurs when you copy data into an on-premises or Azure SQL Server table.
-- **Cause**: Cx Sql table schema definition has one or more columns with less length than expectation.
+- **Cause**: The customer's SQL table schema definition has one or more columns whose length is smaller than expected.
-- **Resolution**: Try following steps to fix the issue:
+- **Resolution**: To resolve the issue, try the following:
- 1. Apply SQL sink [fault tolerance](./copy-activity-fault-tolerance.md), especially "redirectIncompatibleRowSettings" to troubleshoot which rows have the issue.
+ 1. To troubleshoot which rows have the issue, apply SQL sink [fault tolerance](./copy-activity-fault-tolerance.md), especially "redirectIncompatibleRowSettings."
> [!NOTE]
- > Please be noticed that fault tolerance might introduce additional execution time, which could lead to higher cost.
+ > Fault tolerance might require additional execution time, which could lead to higher costs.
- 2. Double check the redirected data with SQL table schema column length to see which column(s) need to be updated.
+ 2. Double-check the redirected data against the SQL table schema column length to see which columns need to be updated.
- 3. Update table schema accordingly.
+ 3. Update the table schema accordingly.
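Step 2 can be sketched as a small comparison script (hypothetical code; the column names, lengths, and redirected rows below are made-up examples) that reports which sink columns are too short for the redirected data:

```python
# Hypothetical check: compare redirected rows against the sink table's
# declared column lengths and report the columns that need widening.
def find_truncating_columns(rows, max_len):
    offenders = set()
    for row in rows:
        for col, value in row.items():
            if col in max_len and len(str(value)) > max_len[col]:
                offenders.add(col)
    return sorted(offenders)

# Made-up sink schema: name VARCHAR(10), city VARCHAR(5)
column_max_len = {"name": 10, "city": 5}
redirected = [
    {"name": "Contoso", "city": "San Francisco"},          # city too long
    {"name": "A very long company name", "city": "Oslo"},  # name too long
]
print(find_truncating_columns(redirected, column_max_len))  # ['city', 'name']
```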
## Azure Table Storage

### Error code: AzureTableDuplicateColumnsFromSource

-- **Message**: `Duplicate columns with same name '%name;' are detected from source. This is NOT supported by Azure Table Storage sink`
+- **Message**: `Duplicate columns with same name '%name;' are detected from source. This is NOT supported by Azure Table Storage sink.`
-- **Cause**: It could be common for sql query with join, or unstructured csv files
+- **Cause**: Duplicated source columns might occur for one of the following reasons:
+ * You're using the database as a source and applied table joins.
+ * You have unstructured CSV files with duplicated column names in the header row.
-- **Recommendation**: double check the source columns and fix accordingly.
+- **Recommendation**: Double-check and fix the source columns, as necessary.
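For the CSV case, a quick pre-check along these lines (a hypothetical sketch, not part of Data Factory) can reveal duplicated header names before the copy runs:

```python
# Hypothetical check: report column names that appear more than once
# in a CSV header row.
from collections import Counter

def duplicate_columns(header_line: str, delimiter: str = ",") -> list:
    names = [c.strip() for c in header_line.split(delimiter)]
    return [name for name, count in Counter(names).items() if count > 1]

print(duplicate_columns("id,name,amount,name"))  # ['name']
```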
## DB2
@@ -510,71 +493,73 @@ busy to handle requests, it returns an HTTP error 503.
- **Message**: `Error thrown from driver. Sql code: '%code;'` -- **Cause**: If the error message contains "SQLSTATE=51002 SQLCODE=-805", please refer to the Tip in this document: https://docs.microsoft.com/azure/data-factory/connector-db2#linked-service-properties
+- **Cause**: If the error message contains the string "SQLSTATE=51002 SQLCODE=-805," follow the "Tip" in [Copy data from DB2 by using Azure Data Factory](https://docs.microsoft.com/azure/data-factory/connector-db2#linked-service-properties).
-- **Recommendation**: Try to set "NULLID" in property "packageCollection"
+- **Recommendation**: Try to set "NULLID" in the `packageCollection` property.
-## Delimited Text Format
+## Delimited text format
### Error code: DelimitedTextColumnNameNotAllowNull

- **Message**: `The name of column index %index; is empty. Make sure column name is properly specified in the header row.`

-- **Cause**: When set 'firstRowAsHeader' in activity, the first row will be used as column name. This error means the first row contains empty value. For example: 'ColumnA, ColumnB'.
+- **Cause**: When 'firstRowAsHeader' is set in the activity, the first row is used as the column name. This error means that the first row contains an empty value (for example, 'ColumnA, ColumnB').
-- **Recommendation**: Check the first row, and fix the value if there is empty value.
+- **Recommendation**: Check the first row, and fix the value if it is empty.
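A minimal sketch of such a check (hypothetical helper code, not part of Data Factory) that reports which header positions are empty:

```python
# Hypothetical check: find header positions with an empty column name,
# which would trigger DelimitedTextColumnNameNotAllowNull when
# 'firstRowAsHeader' is enabled.
def empty_header_indexes(header_line: str, delimiter: str = ",") -> list:
    return [i for i, name in enumerate(header_line.split(delimiter))
            if not name.strip()]

print(empty_header_indexes("ColumnA,,ColumnC"))  # [1]
```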
### Error code: DelimitedTextMoreColumnsThanDefined

- **Message**: `Error found when processing '%function;' source '%name;' with row number %rowCount;: found more columns than expected column count: %expectedColumnCount;.`

-- **Cause**: The problematic row's column count is larger than the first row's column count. It may be caused by data issue or incorrect column delimiter/quote char settings.
+- **Cause**: The problematic row's column count is larger than the first row's column count. It might be caused by a data issue or incorrect column delimiter or quote char settings.
-- **Recommendation**: Get the row count in error message, check the row's column and fix the data.
+- **Recommendation**: Get the row count from the error message, check the row's column, and fix the data.
-- **Cause**: If the expected column count is "1" in error message, maybe you have specified wrong compression or format settings. Thus ADF parsed your file(s) incorrectly.
+- **Cause**: If the expected column count is "1" in an error message, you might have specified wrong compression or format settings, which caused Data Factory to parse your files incorrectly.
-- **Recommendation**: Check the format settings to make sure it matches to your source file(s).
+- **Recommendation**: Check the format settings to make sure they match your source files.
-- **Cause**: If your source is a folder, it's possible that the files under the specified folder have different schema.
+- **Cause**: If your source is a folder, the files under the specified folder might have a different schema.
-- **Recommendation**: Make sure the files under the given folder have identical schema.
+- **Recommendation**: Make sure that the files in the specified folder have an identical schema.
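The row-level check can be sketched as follows (hypothetical code; the sample data is made up). It reports rows whose column count differs from the header's, which is what this error code complains about:

```python
# Hypothetical check: scan delimited text and list rows whose column
# count differs from the header row's. Quote handling is delegated to
# the csv module.
import csv
import io

def mismatched_rows(text: str, delimiter: str = ",") -> list:
    reader = csv.reader(io.StringIO(text), delimiter=delimiter)
    header = next(reader)
    expected = len(header)
    return [(reader.line_num, len(row)) for row in reader
            if len(row) != expected]

sample = "a,b,c\n1,2,3\n4,5,6,7\n"
print(mismatched_rows(sample))  # [(3, 4)] -> line 3 has 4 columns
```

Running the same scan over every file in the source folder also surfaces files with a different schema.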
-## Dynamics 365/Common Data Service/Dynamics CRM
+## Dynamics 365, Common Data Service, and Dynamics CRM
### Error code: DynamicsCreateServiceClientError

-- **Message**: `This is a transient issue on dynamics server side. Try to rerun the pipeline.`
+- **Message**: `This is a transient issue on Dynamics server side. Try to rerun the pipeline.`
-- **Cause**: This is a transient issue on dynamics server side.
+- **Cause**: The problem is a transient issue on the Dynamics server side.
-- **Recommendation**: Rerun the pipeline. If keep failing, try to reduce the parallelism. If still fail, please contact dynamics support.
+- **Recommendation**: Rerun the pipeline. If it fails again, try to reduce the parallelism. If the problem persists, contact Dynamics support.
-### Columns are missing when previewing/importing schema
+### Missing columns when you import a schema or preview data
-- **Symptoms**: Some of the columns turn out to be missing when importing schema or previewing data. Error message: `The valid structure information (column name and type) are required for Dynamics source.`
+- **Symptoms**: Some columns are missing when you import a schema or preview data. Error message: `The valid structure information (column name and type) are required for Dynamics source.`
-- **Cause**: This issue is basically by-design, as ADF is not able to show columns that have no value in the first 10 records. Make sure the columns you added is with correct format.
+- **Cause**: This issue is by design, because Data Factory is unable to show columns that contain no values in the first 10 records. Make sure that the columns you've added are in the correct format.
-- **Recommendation**: Manually add the columns in mapping tab.
+- **Recommendation**: Manually add the columns in the mapping tab.
### Error code: DynamicsMissingTargetForMultiTargetLookupField

- **Message**: `Cannot find the target column for multi-target lookup field: '%fieldName;'.`

-- **Cause**: The target column does not exist in source or in column mapping.
+- **Cause**: The target column doesn't exist in the source or in the column mapping.
-- **Recommendation**: 1. Make sure the source contains the target column. 2. Add the target column in the column mapping. Ensure the sink column is in pattern of "{fieldName}@EntityReference".
+- **Recommendation**:
+ 1. Make sure that the source contains the target column.
+ 2. Add the target column in the column mapping. Ensure that the sink column is in the format *{fieldName}@EntityReference*.
### Error code: DynamicsInvalidTargetForMultiTargetLookupField

-- **Message**: `The provided target: '%targetName;' is not a valid target of field: '%fieldName;'. Valid targets are: '%validTargetNames;"`
+- **Message**: `The provided target: '%targetName;' is not a valid target of field: '%fieldName;'. Valid targets are: '%validTargetNames;'`
- **Cause**: A wrong entity name is provided as target entity of a multi-target lookup field.
@@ -585,40 +570,40 @@ busy to handle requests, it returns an HTTP error 503.
- **Message**: `The provided target type is not a valid string. Field: '%fieldName;'.`

-- **Cause**: The value in target column is not a string
+- **Cause**: The value in the target column is not a string.
- **Recommendation**: Provide a valid string in the multi-target lookup target column.

### Error code: DynamicsFailedToRequetServer

-- **Message**: `The dynamics server or the network is experiencing issues. Check network connectivity or check dynamics server log for more details.`
+- **Message**: `The Dynamics server or the network is experiencing issues. Check network connectivity or check Dynamics server log for more details.`
-- **Cause**: The dynamics server is instable or inaccessible or the network is experiencing issues.
+- **Cause**: The Dynamics server is unstable or inaccessible, or the network is experiencing issues.
-- **Recommendation**: Check network connectivity or check dynamics server log for more details. Contact dynamics support for further help.
+- **Recommendation**: For more details, check network connectivity or check the Dynamics server log. For further help, contact Dynamics support.
## FTP

### Error code: FtpFailedToConnectToFtpServer

-- **Message**: `Failed to connect to FTP server. Please make sure the provided server informantion is correct, and try again.`
+- **Message**: `Failed to connect to FTP server. Please make sure the provided server information is correct, and try again.`
-- **Cause**: Incorrect linked service type might be used for FTP server, like using SFTP Linked Service to connect to an FTP server.
+- **Cause**: An incorrect linked service type might be used for the FTP server, such as using the Secure FTP (SFTP) linked service to connect to an FTP server.
-- **Recommendation**: Check the port of the target server. By default FTP uses port 21.
+- **Recommendation**: Check the port of the target server. By default, FTP uses port 21.
-## Http
+## HTTP
### Error code: HttpFileFailedToRead

- **Message**: `Failed to read data from http server. Check the error from http server:%message;`

-- **Cause**: This error happens when Azure Data Factory talk to http server, but http request operation fail.
+- **Cause**: This error occurs when Azure Data Factory talks to the HTTP server, but the HTTP request operation fails.
-- **Recommendation**: Check the http status code \ message in error message and fix the remote server issue.
+- **Recommendation**: Check the HTTP status code in the error message, and fix the remote server issue.
## Oracle
@@ -627,76 +612,76 @@ busy to handle requests, it returns an HTTP error 503.
- **Message**: `Hour, Minute, and Second parameters describe an un-representable DateTime.`

-- **Cause**: In ADF, DateTime values are supported in the range from 0001-01-01 00:00:00 to 9999-12-31 23:59:59. However, Oracle supports wider range of DateTime value (like BC century or min/sec>59), which leads to failure in ADF.
+- **Cause**: In Data Factory, DateTime values are supported in the range from 0001-01-01 00:00:00 to 9999-12-31 23:59:59. However, Oracle supports a wider range of DateTime values, such as the BC century or min/sec>59, which leads to failure in Data Factory.
- **Recommendation**:
- Run `select dump(<column name>)` to check if the value in Oracle is in ADF's range.
+ To see whether the value in Oracle is in the range of Data Factory, run `select dump(<column name>)`.
- If you wish to know the byte sequence in the result, please check https://stackoverflow.com/questions/13568193/how-are-dates-stored-in-oracle.
+ To learn the byte sequence in the result, see [How are dates stored in Oracle?](https://stackoverflow.com/questions/13568193/how-are-dates-stored-in-oracle).
-## Orc Format
+## ORC format
### Error code: OrcJavaInvocationException

-- **Message**: `An error occurred when invoking java, message: %javaException;.`
+- **Message**: `An error occurred when invoking Java, message: %javaException;.`
-- **Cause**: When the error message contains 'java.lang.OutOfMemory', 'Java heap space' and 'doubleCapacity', usually it's a memory management issue in old version of integration runtime.
+- **Cause**: When the error message contains the strings "java.lang.OutOfMemory," "Java heap space," and "doubleCapacity," it's usually a memory management issue in an old version of integration runtime.
-- **Recommendation**: If you are using Self-hosted Integration Runtime, suggest upgrading to the latest version.
+- **Recommendation**: If you're using Self-hosted Integration Runtime, we recommend that you upgrade to the latest version.
-- **Cause**: When the error message contains 'java.lang.OutOfMemory', the integration runtime doesn't have enough resource to process the file(s).
+- **Cause**: When the error message contains the string "java.lang.OutOfMemory," the integration runtime doesn't have enough resources to process the files.
-- **Recommendation**: Limit the concurrent runs on the integration runtime. For Self-hosted Integration Runtime, scale up to a powerful machine with memory equal to or larger than 8 GB.
+- **Recommendation**: Limit the concurrent runs on the integration runtime. For Self-hosted IR, scale up to a powerful machine with memory equal to or larger than 8 GB.
-- **Cause**: When error message contains 'NullPointerReference', it possible is a transient error.
+- **Cause**: When the error message contains the string "NullPointerReference," the cause might be a transient error.
-- **Recommendation**: Retry. If the problem persists, please contact support.
+- **Recommendation**: Retry the operation. If the problem persists, contact support.
-- **Cause**: When error message contains 'BufferOverflowException', it possible is a transient error.
+- **Cause**: When the error message contains the string "BufferOverflowException," the cause might be a transient error.
-- **Recommendation**: Retry. If the problem persists, please contact support.
+- **Recommendation**: Retry the operation. If the problem persists, contact support.
-- **Cause**: When error message contains "java.lang.ClassCastException:org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast to org.apache.hadoop.io.Text", this is the type conversion issue inside Java Runtime. Usually, it caused by source data cannot be well handled in Java runtime.
+- **Cause**: When the error message contains the string "java.lang.ClassCastException:org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast to org.apache.hadoop.io.Text," the cause might be a type conversion issue inside Java Runtime. Usually, it means that the source data can't be handled well in Java Runtime.
-- **Recommendation**: This is data issue. Try to use string instead of char/varchar in orc format data.
+- **Recommendation**: This is a data issue. Try to use a string instead of char or varchar in ORC format data.
### Error code: OrcDateTimeExceedLimit

- **Message**: `The Ticks value '%ticks;' for the datetime column must be between valid datetime ticks range -621355968000000000 and 2534022144000000000.`

-- **Cause**: If the datetime value is '0001-01-01 00:00:00', it could be caused by the difference between Julian Calendar and Gregorian Calendar. https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar#Difference_between_Julian_and_proleptic_Gregorian_calendar_dates.
+- **Cause**: If the datetime value is '0001-01-01 00:00:00', it could be caused by the differences between the [Julian calendar and the Gregorian calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar#Difference_between_Julian_and_proleptic_Gregorian_calendar_dates).
- **Recommendation**: Check the ticks value and avoid using the datetime value '0001-01-01 00:00:00'.
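The bounds in the message appear to be 100-nanosecond ticks measured from the Unix epoch: converted that way, 0001-01-01 00:00:00 and 9999-12-31 00:00:00 land exactly on the two limits. A sketch of the check (an interpretation for illustration, not Data Factory's actual code):

```python
# Hypothetical reproduction of the ticks check: 100-ns intervals
# relative to 1970-01-01 (an assumption inferred from the bounds).
from datetime import datetime

EPOCH = datetime(1970, 1, 1)
MIN_TICKS, MAX_TICKS = -621355968000000000, 2534022144000000000

def to_ticks(dt: datetime) -> int:
    delta = dt - EPOCH
    return (delta.days * 86400 + delta.seconds) * 10**7 + delta.microseconds * 10

def in_valid_range(dt: datetime) -> bool:
    return MIN_TICKS <= to_ticks(dt) <= MAX_TICKS

print(to_ticks(datetime(1, 1, 1)))       # -621355968000000000, the lower bound
print(in_valid_range(datetime(2021, 1, 29)))
```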
-## Parquet Format
+## Parquet format
### Error code: ParquetJavaInvocationException

- **Message**: `An error occurred when invoking java, message: %javaException;.`

-- **Cause**: When the error message contains 'java.lang.OutOfMemory', 'Java heap space' and 'doubleCapacity', usually it's a memory management issue in old version of integration runtime.
+- **Cause**: When the error message contains the strings "java.lang.OutOfMemory," "Java heap space," and "doubleCapacity," it's usually a memory management issue in an old version of Integration Runtime.
-- **Recommendation**: If you are using Self-hosted Integration Runtime and the version is earlier than 3.20.7159.1, it is suggested to upgrade to the latest version.
+- **Recommendation**: If you are using Self-hosted IR and the version is earlier than 3.20.7159.1, we recommend that you upgrade to the latest version.
-- **Cause**: When the error message contains 'java.lang.OutOfMemory', the integration runtime doesn't have enough resource to process the file(s).
+- **Cause**: When the error message contains the string "java.lang.OutOfMemory," the integration runtime doesn't have enough resources to process the files.
-- **Recommendation**: Limit the concurrent runs on the integration runtime. For Self-hosted Integration Runtime, scale up to a powerful machine with memory equal to or larger than 8 GB.
+- **Recommendation**: Limit the concurrent runs on the integration runtime. For Self-hosted IR, scale up to a powerful machine with memory that's equal to or greater than 8 GB.
-- **Cause**: When error message contains 'NullPointerReference', it possible is a transient error.
+- **Cause**: When the error message contains the string "NullPointerReference," it might be a transient error.
-- **Recommendation**: Retry. If the problem persists, please contact support.
+- **Recommendation**: Retry the operation. If the problem persists, contact support.
### Error code: ParquetInvalidFile

- **Message**: `File is not a valid Parquet file.`

-- **Cause**: Parquet file issue.
+- **Cause**: This is a Parquet file issue.
-- **Recommendation**: Check the input is a valid Parquet file.
+- **Recommendation**: Check to see whether the input is a valid Parquet file.
### Error code: ParquetNotSupportedType
@@ -705,16 +690,16 @@ busy to handle requests, it returns an HTTP error 503.
- **Cause**: The Parquet format is not supported in Azure Data Factory.

-- **Recommendation**: Double check the source data. Refer to the doc: https://docs.microsoft.com/azure/data-factory/supported-file-formats-and-compression-codecs.
+- **Recommendation**: Double-check the source data by going to [Supported file formats and compression codecs by copy activity in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/supported-file-formats-and-compression-codecs).
### Error code: ParquetMissedDecimalPrecisionScale

- **Message**: `Decimal Precision or Scale information is not found in schema for column: %column;.`

-- **Cause**: Try to parse the number precision and scale, but no such information is provided.
+- **Cause**: Data Factory tried to parse the number's precision and scale, but the schema didn't provide that information.
-- **Recommendation**: 'Source' does not return correct Precision and scale. Check the issue column precision and scale.
+- **Recommendation**: The source doesn't return the correct precision and scale information. Check the precision and scale of the problem column.
### Error code: ParquetInvalidDecimalPrecisionScale
@@ -723,41 +708,41 @@ busy to handle requests, it returns an HTTP error 503.
- **Cause**: The schema is invalid.

-- **Recommendation**: Check the issue column precision and scale.
+- **Recommendation**: Check the issue column for precision and scale.
### Error code: ParquetColumnNotFound

- **Message**: `Column %column; does not exist in Parquet file.`

-- **Cause**: Source schema is mismatch with sink schema.
+- **Cause**: The source schema doesn't match the sink schema.
-- **Recommendation**: Check the'mappings' in 'activity'. Make sure the source column can be mapped to the right sink column.
+- **Recommendation**: Check the mappings in the activity. Make sure that the source column can be mapped to the correct sink column.
### Error code: ParquetInvalidDataFormat

- **Message**: `Incorrect format of %srcValue; for converting to %dstType;.`

-- **Cause**: The data cannot be converted into type specified in mappings.source
+- **Cause**: The data can't be converted into the type that's specified in mappings.source.
-- **Recommendation**: Double check the source data or specify the correct data type for this column in copy activity column mapping. Refer to the doc: https://docs.microsoft.com/azure/data-factory/supported-file-formats-and-compression-codecs.
+- **Recommendation**: Double-check the source data or specify the correct data type for this column in the copy activity column mapping. For more information, see [Supported file formats and compression codecs by copy activity in Azure Data Factory](https://docs.microsoft.com/azure/data-factory/supported-file-formats-and-compression-codecs).
### Error code: ParquetDataCountNotMatchColumnCount

- **Message**: `The data count in a row '%sourceColumnCount;' does not match the column count '%sinkColumnCount;' in given schema.`

-- **Cause**: Source column count and sink column count mismatch
+- **Cause**: There's a mismatch between the source column count and the sink column count.
-- **Recommendation**: Double check source column count is same as sink column count in 'mapping'.
+- **Recommendation**: Double-check to ensure that the source column count is the same as the sink column count in 'mapping'.
### Error code: ParquetDataTypeNotMatchColumnType

-- **Message**: The data type %srcType; is not match given column type %dstType; at column '%columnIndex;'.
+- **Message**: `The data type %srcType; is not match given column type %dstType; at column '%columnIndex;'.`
-- **Cause**: Data from source cannot be converted to typed defined in sink
+- **Cause**: The data from the source can't be converted to the type that's defined in the sink.
- **Recommendation**: Specify a correct type in mapping.sink.
@@ -766,16 +751,16 @@ busy to handle requests, it returns an HTTP error 503.
- **Message**: `%message;`

-- **Cause**: Data value over limitation
+- **Cause**: The data value has exceeded the limit.
-- **Recommendation**: Retry. If issue persists, please contact us.
+- **Recommendation**: Retry the operation. If the issue persists, contact us.
### Error code: ParquetUnsupportedInterpretation

- **Message**: `The given interpretation '%interpretation;' of Parquet format is not supported.`

-- **Cause**: Not supported scenario
+- **Cause**: This scenario isn't supported.
- **Recommendation**: 'ParquetInterpretFor' should not be 'sparkSql'.
@@ -784,48 +769,47 @@ busy to handle requests, it returns an HTTP error 503.
- **Message**: `File level compression is not supported for Parquet.`

-- **Cause**: Not supported scenario
+- **Cause**: This scenario isn't supported.
-- **Recommendation**: Remove 'CompressionType' in payload.
+- **Recommendation**: Remove 'CompressionType' in the payload.
### Error code: UserErrorJniException

- **Message**: `Cannot create JVM: JNI return code [-6][JNI call failed: Invalid arguments.]`

-- **Cause**: JVM can't be created because some illegal (global) arguments are set.
+- **Cause**: A Java Virtual Machine (JVM) can't be created because some illegal (global) arguments are set.
-- **Recommendation**: Log in to the machine that host **each node** of your self-hosted IR. Check if the system variable is correctly set like this: `_JAVA_OPTIONS "-Xms256m -Xmx16g" with memory bigger than 8 G`. Restart all the IR nodes and then rerun the pipeline.
+- **Recommendation**: Log in to the machine that hosts *each node* of your self-hosted IR. Check to ensure that the system variable is set correctly, as follows: `_JAVA_OPTIONS "-Xms256m -Xmx16g" with memory bigger than 8 G`. Restart all the IR nodes, and then rerun the pipeline.
### Arithmetic overflow

-- **Symptoms**: Error message occurred when copying Parquet files: `Message = Arithmetic Overflow., Source = Microsoft.DataTransfer.Common`
+- **Symptoms**: Error message occurred when you copy Parquet files: `Message = Arithmetic Overflow., Source = Microsoft.DataTransfer.Common`
-- **Cause**: Currently only decimal of precision <= 38 and length of integer part <= 20 is supported when copying files from Oracle to Parquet.
+- **Cause**: Currently, only decimal data with precision <= 38 and an integer-part length <= 20 is supported when you copy files from Oracle to Parquet.
-- **Resolution**: You may convert columns with such problem into VARCHAR2 as a workaround.
+- **Resolution**: As a workaround, you can convert any columns with this problem into VARCHAR2.
### No enum constant

-- **Symptoms**: Error message occurred when copying data to Parquet format: `java.lang.IllegalArgumentException:field ended by &apos;;&apos;`, or: `java.lang.IllegalArgumentException:No enum constant org.apache.parquet.schema.OriginalType.test`.
+- **Symptoms**: Error message occurred when you copy data to Parquet format: `java.lang.IllegalArgumentException:field ended by &apos;;&apos;`, or: `java.lang.IllegalArgumentException:No enum constant org.apache.parquet.schema.OriginalType.test`.
- **Cause**:
- The issue could be caused by white spaces or unsupported characters (such as,;{}()\n\t=) in column name, as Parquet doesn't support such format.
+ The issue could be caused by white spaces or unsupported special characters (such as `,;{}()\n\t=`) in the column name, because Parquet doesn't support such a format.
- For example, column name like *contoso(test)* will parse the type in brackets from [code](https://github.com/apache/parquet-mr/blob/master/parquet-column/src/main/java/org/apache/parquet/schema/MessageTypeParser.java) `Tokenizer st = new Tokenizer(schemaString, " ;{}()\n\t");`. The error will be raised as there is no such "test" type.
+ For example, a column name such as *contoso(test)* will parse the type in brackets from [code](https://github.com/apache/parquet-mr/blob/master/parquet-column/src/main/java/org/apache/parquet/schema/MessageTypeParser.java) `Tokenizer st = new Tokenizer(schemaString, " ;{}()\n\t");`. The error is thrown because there is no such "test" type.
- To check supported types, you can check them [here](https://github.com/apache/parquet-mr/blob/master/parquet-column/src/main/java/org/apache/parquet/schema/OriginalType.java).
+ To check supported types, go to the GitHub [apache/parquet-mr site](https://github.com/apache/parquet-mr/blob/master/parquet-column/src/main/java/org/apache/parquet/schema/OriginalType.java).
- **Resolution**:
- - Double check if there are white spaces in sink column name.
-
- - Double check if the first row with white spaces is used as column name.
-
- - Double check the type OriginalType is supported or not. Try to avoid these special symbols `,;{}()\n\t=`.
+ Double-check to see whether:
+ - There are white spaces in the sink column name.
+ - The first row with white spaces is used as the column name.
+ - The type OriginalType is supported. Try to avoid using these special characters: `,;{}()\n\t=`.
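The sanitization step above can be sketched in Python. This helper is a hypothetical illustration (`sanitize_column_name` is not part of Data Factory); it replaces the delimiter characters that Parquet's schema parser reserves:

```python
import re

# Characters that parquet-mr's MessageTypeParser treats as token delimiters.
_PARQUET_DELIMITERS = r"[ ,;{}()\n\t=]"

def sanitize_column_name(name):
    # Replace each reserved character with an underscore so the schema parses.
    return re.sub(_PARQUET_DELIMITERS, "_", name)
```

For example, a column named *contoso(test)* becomes *contoso_test_*, which avoids the parsing error.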
## REST
@@ -834,30 +818,30 @@ busy to handle requests, it returns an HTTP error 503.
- **Message**: `Rest Endpoint responded with Failure from server. Check the error from server:%message;`

-- **Cause**: This error happens when Azure Data Factory talk to Rest Endpoint over http protocol, and request operation fail.
+- **Cause**: This error occurs when Azure Data Factory talks to the REST endpoint over HTTP protocol, and the request operation fails.
-- **Recommendation**: Check the http status code \ message in error message and fix the remote server issue.
+- **Recommendation**: Check the HTTP status code or the message in the error message and fix the remote server issue.
-### Unexpected network response from REST connector
+### Unexpected network response from the REST connector
-- **Symptoms**: Endpoint sometimes receives unexpected response (400 / 401 / 403 / 500) from REST connector.
+- **Symptoms**: The endpoint sometimes receives an unexpected response (400, 401, 403, 500) from the REST connector.
-- **Cause**: The REST source connector uses URL and HTTP method/header/body from linked service/dataset/copy source as parameters when constructing an HTTP request. The issue is most likely caused by some mistakes in one or more specified parameters.
+- **Cause**: The REST source connector uses the URL and HTTP method/header/body from the linked service/dataset/copy source as parameters when it constructs an HTTP request. The issue is most likely caused by some mistakes in one or more specified parameters.
- **Resolution**:
- - Use 'curl' in cmd window to check if parameter is the cause or not (**Accept** and **User-Agent** headers should always be included):
- ```
- curl -i -X <HTTP method> -H <HTTP header1> -H <HTTP header2> -H "Accept: application/json" -H "User-Agent: azure-data-factory/2.0" -d '<HTTP body>' <URL>
- ```
- If the command returns the same unexpected response, please fix above parameters with 'curl' until it returns the expected response.
+ - Use 'curl' in a Command Prompt window to see whether the parameter is the cause (**Accept** and **User-Agent** headers should always be included):
+
+ `curl -i -X <HTTP method> -H <HTTP header1> -H <HTTP header2> -H "Accept: application/json" -H "User-Agent: azure-data-factory/2.0" -d '<HTTP body>' <URL>`
+
+ If the command returns the same unexpected response, fix the preceding parameters with 'curl' until it returns the expected response.
- Also you can use 'curl--help' for more advanced usage of the command.
+ You can also use 'curl --help' for more advanced usage of the command.
- - If only ADF REST connector returns unexpected response, please contact Microsoft support for further troubleshooting.
+ - If only the Data Factory REST connector returns an unexpected response, contact Microsoft support for further troubleshooting.
- - Please note that 'curl' may not be suitable to reproduce SSL certificate validation issue. In some scenarios, 'curl' command was executed successfully without hitting any SSL cert validation issue. But when the same URL is executed in browser, no SSL cert is actually returned in the first place for client to establish trust with server.
+ - Note that 'curl' might not be suitable to reproduce an SSL certificate validation issue. In some scenarios, the 'curl' command was executed successfully without encountering any SSL certificate validation issues. But when the same URL is executed in a browser, no SSL certificate is actually returned for the client to establish trust with the server.
- Tools like **Postman** and **Fiddler** are recommended for the above case.
+ Tools like **Postman** and **Fiddler** are recommended for the preceding case.
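If curl isn't available, the same probe can be built with Python's standard library. This is a sketch under stated assumptions (the `build_probe_request` helper and its defaults are illustrative, not part of Data Factory); it constructs a request carrying the **Accept** and **User-Agent** headers that should always be included:

```python
import urllib.request

def build_probe_request(url, method="GET", body=None, extra_headers=None):
    # Always include the Accept and User-Agent headers, mirroring the connector.
    headers = {
        "Accept": "application/json",
        "User-Agent": "azure-data-factory/2.0",
    }
    headers.update(extra_headers or {})
    data = body.encode("utf-8") if isinstance(body, str) else body
    return urllib.request.Request(url, data=data, headers=headers, method=method)

# To send it: urllib.request.urlopen(build_probe_request("https://<endpoint>"))
```

As with curl, compare the response you get this way against the unexpected response the REST connector receives.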
## SFTP
@@ -866,62 +850,62 @@ busy to handle requests, it returns an HTTP error 503.
- **Message**: `Failed to '%operation;'. Check detailed error from SFTP.`

-- **Cause**: Sftp operation hit problem.
+- **Cause**: A problem with the SFTP operation.
-- **Recommendation**: Check detailed error from SFTP.
+- **Recommendation**: Check the error details from SFTP.
### Error code: SftpRenameOperationFail

-- **Message**: `Failed to rename the temp file. Your SFTP server doesn't support renaming temp file, please set "useTempFileRename" as false in copy sink to disable uploading to temp file.`
+- **Message**: `Failed to rename the temp file. Your SFTP server doesn't support renaming temp file, set "useTempFileRename" as false in copy sink to disable uploading to temp file.`
-- **Cause**: Your SFTP server doesn't support renaming temp file.
+- **Cause**: Your SFTP server doesn't support renaming the temp file.
-- **Recommendation**: Set "useTempFileRename" as false in copy sink to disable uploading to temp file.
+- **Recommendation**: Set "useTempFileRename" as false in the copy sink to disable uploading to the temp file.
### Error code: SftpInvalidSftpCredential

-- **Message**: `Invalid Sftp credential provided for '%type;' authentication type.`
+- **Message**: `Invalid SFTP credential provided for '%type;' authentication type.`
-- **Cause**: Private key content is fetched from AKV/SDK but it is not encoded correctly.
+- **Cause**: Private key content is fetched from the Azure key vault or SDK, but it's not encoded correctly.
- **Recommendation**:
- If private key content is from AKV and original key file can work if customer upload it directly to SFTP linked service
+ If the private key content is from your key vault, the original key file can work if you upload it directly to the SFTP linked service.
- Refer to https://docs.microsoft.com/azure/data-factory/connector-sftp#using-ssh-public-key-authentication, the privateKey content is a Base64 encoded SSH private key content.
+ For more information, see [Copy data from and to the SFTP server by using Azure Data Factory](https://docs.microsoft.com/azure/data-factory/connector-sftp#using-ssh-public-key-authentication). The private key content is base64 encoded SSH private key content.
- Please encode **the whole content of original private key file** with base64 encoding and store the encoded string to AKV. Original private key file is the one that can work on SFTP linked service if you click on Upload from file.
+ Encode the *entire* original private key file with base64 encoding, and store the encoded string in your key vault. The original private key file is the one that works on the SFTP linked service when you select **Upload from file**.
- Here's some samples used for generating the string:
+ Here are some samples you can use to generate the string:
- Use C# code:
- ```
- byte[] keyContentBytes = File.ReadAllBytes(Private Key Path);
- string keyContent = Convert.ToBase64String(keyContentBytes, Base64FormattingOptions.None);
- ```
+
+ ```
+ byte[] keyContentBytes = File.ReadAllBytes(Private Key Path);
+ string keyContent = Convert.ToBase64String(keyContentBytes, Base64FormattingOptions.None);
+ ```
- Use Python code:
- ```
- import base64
- rfd = open(r'{Private Key Path}', 'rb')
- keyContent = rfd.read()
- rfd.close()
- print base64.b64encode(Key Content)
- ```
- - Use third-party base64 convert tool
+ ```
+ import base64
+ # Read the private key file and print its base64-encoded content.
+ with open(r'{Private Key Path}', 'rb') as rfd:
+     keyContent = rfd.read()
+ print(base64.b64encode(keyContent).decode('ascii'))
+ ```
- Tools like https://www.base64encode.org/ are recommended.
+ - Use a third-party base64 conversion tool. We recommend the [Encode to Base64 format](https://www.base64encode.org/) tool.
-- **Cause**: Wrong key content format is chosen
+- **Cause**: The wrong key content format was chosen.
- **Recommendation**:
- PKCS#8 format SSH private key (start with "--BEGIN ENCRYPTED PRIVATE KEY--") is currently not supported to access SFTP server in ADF.
+ A PKCS#8 format SSH private key (which starts with "--BEGIN ENCRYPTED PRIVATE KEY--") is currently not supported for accessing the SFTP server in Data Factory.
- Run below commands to convert the key to traditional SSH key format (start with "--BEGIN RSA PRIVATE KEY--"):
+ To convert the key to traditional SSH key format, starting with "--BEGIN RSA PRIVATE KEY--", run the following commands:
```
openssl pkcs8 -in pkcs8_format_key_file -out traditional_format_key_file
ssh-keygen -f traditional_format_key_file -p
```

@@ -929,127 +913,126 @@ busy to handle requests, it returns an HTTP error 503.

-- **Cause**: Invalid credential or private key content
+- **Cause**: Invalid credentials or private key content.
-- **Recommendation**: Double check with tools like WinSCP to see if your key file or password is correct.
+- **Recommendation**: To see whether your key file or password is correct, double-check with tools such as WinSCP.
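A quick local check of which key format you have can be done by inspecting the PEM header line, as in this Python sketch (`ssh_key_format` is a hypothetical helper, not a Data Factory API):

```python
def ssh_key_format(pem_text):
    # Classify a private key by its first PEM header line.
    first = pem_text.lstrip().splitlines()[0]
    if "BEGIN RSA PRIVATE KEY" in first:
        return "traditional"      # supported by the SFTP connector
    if "ENCRYPTED PRIVATE KEY" in first:
        return "pkcs8-encrypted"  # not supported; convert with openssl/ssh-keygen
    if "BEGIN PRIVATE KEY" in first:
        return "pkcs8"
    return "unknown"
```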
-### SFTP Copy Activity failed
+### SFTP copy activity failed
-- **Symptoms**: Error code: UserErrorInvalidColumnMappingColumnNotFound. Error message: `Column &apos;AccMngr&apos; specified in column mapping cannot be found in source data.`
+- **Symptoms**:
+ * Error code: UserErrorInvalidColumnMappingColumnNotFound
+ * Error message: `Column 'AccMngr' specified in column mapping cannot be found in source data.`
-- **Cause**: The source doesn't include a column named "AccMngr".
+- **Cause**: The source doesn't include a column named "AccMngr."
-- **Resolution**: Double check how your dataset configured by mapping the destination dataset column to confirm if there's such "AccMngr" column.
+- **Resolution**: To determine whether the "AccMngr" column exists, double-check your dataset configuration by mapping the destination dataset column.
### Error code: SftpFailedToConnectToSftpServer

-- **Message**: `Failed to connect to Sftp server '%server;'.`
+- **Message**: `Failed to connect to SFTP server '%server;'.`
-- **Cause**: If error message contains 'Socket read operation has timed out after 30000 milliseconds', one possible cause is that incorrect linked service type is used for SFTP server, like using FTP Linked Service to connect to an SFTP server.
+- **Cause**: If the error message contains the string "Socket read operation has timed out after 30000 milliseconds," one possible cause is that an incorrect linked service type is used for the SFTP server. For example, you might be using the FTP linked service to connect to the SFTP server.
-- **Recommendation**: Check the port of the target server. By default SFTP uses port 22.
+- **Recommendation**: Check the port of the target server. By default, SFTP uses port 22.
-- **Cause**: If error message contains 'Server response does not contain SSH protocol identification', one possible cause is that SFTP server throttled the connection. ADF will create multiple connections to download from SFTP server in parallel, and sometimes it will hit SFTP server throttling. Practically, different server will return different error when hit throttling.
+- **Cause**: If the error message contains the string "Server response does not contain SSH protocol identification," one possible cause is that the SFTP server throttled the connection. Data Factory will create multiple connections to download from the SFTP server in parallel, and sometimes it will encounter SFTP server throttling. Ordinarily, different servers return different errors when they encounter throttling.
- **Recommendation**:
- Specify the max concurrent connection of SFTP dataset to 1 and rerun the copy. If it succeeds to pass, we can be sure that throttling is the cause.
-
- If you want to promote the low throughput, please contact SFTP administrator to increase the concurrent connection count limit, or add below IP to allow list:
-
- - If you're using Managed IR, please add [Azure Datacenter IP Ranges](https://www.microsoft.com/download/details.aspx?id=41653).
- Or you can install Self-hosted IR if you do not want to add large list of IP ranges into SFTP server allow list.
+ Specify the maximum number of concurrent connections of the SFTP dataset as 1 and rerun the copy activity. If the activity succeeds, you can be sure that throttling is the cause.
- - If you're using Self-hosted IR, please add the machine IP that installed SHIR to allow list.
+ If you want to improve the low throughput, contact your SFTP administrator to increase the concurrent connection count limit, or you can do one of the following:
+ * If you're using Self-hosted IR, add the Self-hosted IR machine's IP to the allow list.
+ * If you're using Azure IR, add [Azure Integration Runtime IP addresses](https://docs.microsoft.com/azure/data-factory/azure-integration-runtime-ip-addresses). If you don't want to add a range of IPs to the SFTP server allow list, use Self-hosted IR instead.
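As a sketch of the first step (specifying the maximum number of concurrent connections as 1), the copy source's store settings might look like the following. The surrounding property names here are illustrative; match them to your own pipeline definition:

```
"source": {
    "type": "BinarySource",
    "storeSettings": {
        "type": "SftpReadSettings",
        "recursive": true,
        "maxConcurrentConnections": 1
    }
}
```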
-## SharePoint Online List
+## SharePoint Online list
### Error code: SharePointOnlineAuthFailed

- **Message**: `The access token generated failed, status code: %code;, error message: %message;.`

-- **Cause**: The service principal ID and Key may not correctly set.
+- **Cause**: The service principal ID and key might not be set correctly.
-- **Recommendation**: Check your registered application (service principal ID) and key whether it's correctly set.
+- **Recommendation**: Check your registered application (service principal ID) and key to see whether they're set correctly.
-## Xml Format
+## XML format
### Error code: XmlSinkNotSupported

-- **Message**: `Write data in xml format is not supported yet, please choose a different format!`
+- **Message**: `Write data in XML format is not supported yet, choose a different format!`
-- **Cause**: Used an Xml dataset as sink dataset in your copy activity
+- **Cause**: An XML dataset was used as a sink dataset in your copy activity.
-- **Recommendation**: Use a dataset in different format as copy sink.
+- **Recommendation**: Use a dataset in a different format as the copy sink.
### Error code: XmlAttributeColumnNameConflict

- **Message**: `Column names %attrNames;' for attributes of element '%element;' conflict with that for corresponding child elements, and the attribute prefix used is '%prefix;'.`

-- **Cause**: Used an attribute prefix, which caused the conflict.
+- **Cause**: An attribute prefix was used, which caused the conflict.
-- **Recommendation**: Set a different value of the "attributePrefix" property.
+- **Recommendation**: Set a different value for the "attributePrefix" property.
### Error code: XmlValueColumnNameConflict

- **Message**: `Column name for the value of element '%element;' is '%columnName;' and it conflicts with the child element having the same name.`

-- **Cause**: Used one of the child element names as the column name for the element value.
+- **Cause**: One of the child element names was used as the column name for the element value.
-- **Recommendation**: Set a different value of the "valueColumn" property.
+- **Recommendation**: Set a different value for the "valueColumn" property.
### Error code: XmlInvalid

- **Message**: `Input XML file '%file;' is invalid with parsing error '%error;'.`

-- **Cause**: The input xml file is not well formed.
+- **Cause**: The input XML file is not well formed.
-- **Recommendation**: Correct the xml file to make it well formed.
+- **Recommendation**: Correct the XML file to make it well formed.
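A well-formedness check can be run locally before the copy activity. This Python sketch (`is_well_formed` is a hypothetical helper) surfaces the same kind of parsing error the connector reports:

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    # Returns (True, None) when the XML parses, else (False, parser message).
    try:
        ET.fromstring(xml_text)
        return True, None
    except ET.ParseError as err:
        return False, str(err)
```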
-## General Copy Activity Error
+## General copy activity error
### Error code: JreNotFound

- **Message**: `Java Runtime Environment cannot be found on the Self-hosted Integration Runtime machine. It is required for parsing or writing to Parquet/ORC files. Make sure Java Runtime Environment has been installed on the Self-hosted Integration Runtime machine.`

-- **Cause**: The self-hosted integration runtime cannot find Java Runtime. The Java Runtime is required for reading particular source.
+- **Cause**: The self-hosted IR can't find Java Runtime. Java Runtime is required for reading particular sources.
-- **Recommendation**: Check your integration runtime environment, the reference doc: https://docs.microsoft.com/azure/data-factory/format-parquet#using-self-hosted-integration-runtime
+- **Recommendation**: Check your integration runtime environment. For more information, see [Use Self-hosted Integration Runtime](https://docs.microsoft.com/azure/data-factory/format-parquet#using-self-hosted-integration-runtime).
### Error code: WildcardPathSinkNotSupported

- **Message**: `Wildcard in path is not supported in sink dataset. Fix the path: '%setting;'.`

-- **Cause**: Sink dataset doesn't support wildcard.
+- **Cause**: The sink dataset doesn't support wildcard values.
-- **Recommendation**: Check the sink dataset and fix the path without wildcard value.
+- **Recommendation**: Check the sink dataset, and rewrite the path without using a wildcard value.
### FIPS issue

-- **Symptoms**: Copy activity fails on FIPS-enabled Self-hosted Integration Runtime machine with error message `This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.`. This may happen when copying data with connectors like Azure Blob, SFTP, etc.
+- **Symptoms**: Copy activity fails on a FIPS-enabled self-hosted IR machine with the following error message: `This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.`
-- **Cause**: FIPS (Federal Information Processing Standards) defines a certain set of cryptographic algorithms that are allowed to be used. When FIPS mode is enabled on the machine, some cryptographic classes that copy activity depends on are blocked in some scenarios.
+- **Cause**: This error might occur when you copy data with connectors such as Azure Blob and SFTP. Federal Information Processing Standards (FIPS) defines a certain set of cryptographic algorithms that are allowed to be used. When FIPS mode is enabled on the machine, some cryptographic classes that copy activity depends on are blocked in some scenarios.
-- **Resolution**: You can learn about the current situation of FIPS mode in Windows from [this article](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/why-we-8217-re-not-recommending-8220-fips-mode-8221-anymore/ba-p/701037), and evaluate if you can disable FIPS on the Self-hosted Integration Runtime machine.
+- **Resolution**: Learn [why we're not recommending "FIPS Mode" anymore](https://techcommunity.microsoft.com/t5/microsoft-security-baselines/why-we-8217-re-not-recommending-8220-fips-mode-8221-anymore/ba-p/701037), and evaluate whether you can disable FIPS on your self-hosted IR machine.
- On the other hand, if you just want to let Azure Data Factory bypass FIPS and make the activity runs succeeded, you can follow these steps:
+ Alternatively, if you only want to let Azure Data Factory bypass FIPS and allow the activity runs to succeed, do the following:
- 1. Open the folder where Self-hosted Integration Runtime is installed, usually under `C:\Program Files\Microsoft Integration Runtime\<IR version>\Shared`.
+ 1. Open the folder where Self-hosted IR is installed. The path is usually *C:\Program Files\Microsoft Integration Runtime\<IR version>\Shared*.
- 2. Open "diawp.exe.config", add `<enforceFIPSPolicy enabled="false"/>` into the `<runtime>` section like the following.
+ 2. Open the *diawp.exe.config* file and then, at the end of the `<runtime>` section, add `<enforceFIPSPolicy enabled="false"/>`, as shown here:
- ![Disable FIPS](./media/connector-troubleshoot-guide/disable-fips-policy.png)
+ ![Screenshot of a section of the diawp.exe.config file showing FIPS disabled.](./media/connector-troubleshoot-guide/disable-fips-policy.png)
- 3. Restart the Self-hosted Integration Runtime machine.
+ 3. Save the file, and then restart the Self-hosted IR machine.
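For reference, after step 2 the relevant part of *diawp.exe.config* would look something like the following sketch. Only the `<enforceFIPSPolicy>` element is added; keep the existing children of `<runtime>` as they are:

```
<configuration>
  <runtime>
    <!-- existing runtime settings stay unchanged -->
    <enforceFIPSPolicy enabled="false"/>
  </runtime>
</configuration>
```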
## Next steps
@@ -1059,6 +1042,6 @@ For more troubleshooting help, try these resources:
* [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory)
* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory)
-* [Microsoft Q&A question page](/answers/topics/azure-data-factory.html)
+* [Microsoft Q&A page](/answers/topics/azure-data-factory.html)
* [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
-* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
\ No newline at end of file
+* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-2101-release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-2101-release-notes.md
@@ -7,7 +7,7 @@
Previously updated : 01/19/2021
Last updated : 01/27/2021
@@ -73,8 +73,7 @@ The following table provides a summary of known issues carried over from the pre
|**16.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected.<ul><li>**Status** column in **Certificates** page.</li><li>**Security** tile in **Get started** page.</li><li>**Configuration** tile in **Overview** page.</li></ul> |
|**17.**|IoT Edge |Modules deployed through IoT Edge can't use host network. | |
|**18.**|Compute + Kubernetes |Compute/Kubernetes does not support NTLM web proxy. ||
-|**19.**|Compute + web proxy + update |If you have compute configured with web proxy, then compute update may fail. |We recommend that you disable compute before the update. |
-|**20.**|Kubernetes + update |Earlier software versions such as 2008 releases have a race condition update issue that causes the update to fail with ClusterConnectionException. |Using the newer builds should help avoid this issue. If you still see this issue, the workaround is to retry the upgrade, and it should work.|
+|**19.**|Kubernetes + update |Earlier software versions such as 2008 releases have a race condition update issue that causes the update to fail with ClusterConnectionException. |Using the newer builds should help avoid this issue. If you still see this issue, the workaround is to retry the upgrade, and it should work.|
<!--|**18.**|Azure Private Edge Zone (Preview) |There is a known issue with Virtual Network Function VM if the VM was created on Azure Stack Edge device running earlier preview builds such as 2006/2007b and then the device was updated to 2009 GA release. The issue is that the VNF information can't be retrieved or any new VNFs can't be created unless the VNF VMs are deleted before the device is updated. |Before you update Azure Stack Edge device to 2009 release, use the PowerShell command `get-mecvnf` followed by `remove-mecvnf <VNF guid>` to remove all Virtual Network Function VMs one at a time. After the upgrade, you will need to redeploy the same VNFs.|-->
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-create-kubernetes-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-kubernetes-cluster.md
@@ -7,7 +7,7 @@
Previously updated : 08/28/2020
Last updated : 01/27/2021

# Connect to and manage a Kubernetes cluster via kubectl on your Azure Stack Edge Pro GPU device
@@ -160,9 +160,9 @@ You can now deploy your applications in the namespace, then view those applicati
## Remove Kubernetes cluster
-To remove the Kubernetes cluster, you will need to remove the compute configuration.
+To remove the Kubernetes cluster, you will need to remove the IoT Edge configuration.
-For detailed instructions, go to [Remove compute configuration](azure-stack-edge-j-series-manage-compute.md#remove-compute-configuration).
+For detailed instructions, go to [Remove IoT Edge configuration](azure-stack-edge-j-series-manage-compute.md#remove-iot-edge-service).
## Next steps
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-checklist.md
@@ -36,7 +36,7 @@ Use the following checklist to ensure you have this information after you have p
| Activation | Require activation key from the Azure Stack Edge Pro/ Data Box Gateway resource. | Once generated, the key expires in 3 days. |

<!--
-| (Optional) MAC Address | If MAC address needs to be whitelisted, get the address of the connected port from local UI of the device. | |
+| (Optional) MAC Address | If MAC address needs to be on the allowed list, get the address of the connected port from local UI of the device. | |
| (Optional) Network switch port | Device hosts Hyper-V VMs for compute. Some network switch port configurations don't accommodate these setups by default. | |-->
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
@@ -7,7 +7,7 @@
Previously updated : 12/07/2020
Last updated : 01/27/2021

Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
@@ -127,7 +127,8 @@ This is an optional configuration.
> [!IMPORTANT]
> * If you enable compute and use IoT Edge module on your Azure Stack Edge Pro device, we recommend you set web proxy authentication as **None**. NTLM is not supported.
->* Proxy-auto config (PAC) files are not supported. A PAC file defines how web browsers and other user agents can automatically choose the appropriate proxy server (access method) for fetching a given URL. Proxies that try to intercept and read all the traffic (then re-sign everything with their own certification) aren't compatible since the proxy's certificate is not trusted. Typically transparent proxies work well with Azure Stack Edge Pro. Non-transparent web proxies are not supported.
+> * Proxy-auto config (PAC) files are not supported. A PAC file defines how web browsers and other user agents can automatically choose the appropriate proxy server (access method) for fetching a given URL.
+> * Transparent proxies work well with Azure Stack Edge Pro. For non-transparent proxies that intercept and read all the traffic (via their own certificates installed on the proxy server), upload the public key of the proxy's certificate as the signing chain on your Azure Stack Edge Pro device. You can then configure the proxy server settings on your Azure Stack Edge device. For more information, see [Bring your own certificates and upload through the local UI](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates).
<!--1. Go to the **Get started** page in the local web UI of your device. 2. On the **Network** tile, configure your web proxy server settings. Although web proxy configuration is optional, if you use a web proxy, you can configure it on this page only.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-quickstart.md
@@ -23,7 +23,7 @@ The total procedure should approximately take 1.5 hours to complete. For detaile
Before you deploy, make sure that following prerequisites are in place:

1. The Azure Stack Edge Pro GPU device is delivered to your site, [unpacked](azure-stack-edge-gpu-deploy-install.md#unpack-the-device) and [rack mounted](azure-stack-edge-gpu-deploy-install.md#rack-the-device).
-1. Configure your network such that your device can reach the [listed URLs patterns and ports](azure-stack-edge-gpu-system-requirements.md#networking-port-requirements).
+1. Configure your network such that your device can reach the [listed URL patterns and ports](azure-stack-edge-gpu-system-requirements.md#networking-port-requirements).
1. You have owner or contributor access to the Azure subscription.
1. In the Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register the resource provider. Repeat to register `Microsoft.Devices` if you'll create an IoT Hub resource to deploy compute workloads.
1. Make sure you have a minimum of 2 free, static, contiguous IPs for Kubernetes nodes and at least 1 static IP for the IoT Edge service. For each module or external service you deploy, you will need 1 additional IP.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-j-series-manage-compute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-manage-compute.md
@@ -7,20 +7,20 @@
Previously updated : 08/28/2020 Last updated : 01/27/2021 # Manage compute on your Azure Stack Edge Pro GPU <!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
-This article describes how to manage compute on your Azure Stack Edge Pro. You can manage the compute via the Azure portal or via the local web UI. Use the Azure portal to manage modules, triggers, and compute configuration, and the local web UI to manage compute settings.
+This article describes how to manage compute via the IoT Edge service on your Azure Stack Edge Pro GPU device. You can manage the compute via the Azure portal or via the local web UI. Use the Azure portal to manage modules, triggers, and IoT Edge configuration, and the local web UI to manage compute network settings.
In this article, you learn how to: > [!div class="checklist"] > * Manage triggers
-> * Manage compute configuration
+> * Manage IoT Edge configuration
## Manage triggers
@@ -35,7 +35,7 @@ Events are things that happen within your cloud environment or on your device th
Take the following steps in the Azure portal to create a trigger.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Edge compute > Trigger**. Select **+ Add trigger** on the command bar.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **IoT Edge**. Go to **Triggers** and select **+ Add trigger** on the command bar.
![Select add trigger](media/azure-stack-edge-j-series-manage-compute/add-trigger-1m.png)
@@ -77,32 +77,32 @@ Take the following steps in the Azure portal to delete a trigger.
The list of triggers updates to reflect the deletion.
-## Manage compute configuration
+## Manage IoT Edge configuration
Use the Azure portal to view the compute configuration, remove an existing compute configuration, or to refresh the compute configuration to sync up access keys for the IoT device and IoT Edge device for your Azure Stack Edge Pro.
-### View compute configuration
+### View IoT Edge configuration
-Take the following steps in the Azure portal to view the compute configuration for your device.
+Take the following steps in the Azure portal to view the IoT Edge configuration for your device.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Edge compute > Modules**. Select **View compute** on the command bar.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **IoT Edge**. After the IoT Edge service is enabled on your device, the **Overview** page indicates that the IoT Edge service is running.
![Select View compute](media/azure-stack-edge-j-series-manage-compute/view-compute-1.png)
-2. Make a note of the compute configuration on your device. When you configured compute, you created an IoT Hub resource. Under that IoT Hub resource, an IoT device and an IoT Edge device are configured. Only the Linux modules are supported to run on the IoT Edge device.
+2. Go to **Properties** to view the IoT Edge configuration on your device. When you configured compute, you created an IoT Hub resource. Under that IoT Hub resource, an IoT device and an IoT Edge device are configured. Only Linux modules are supported on the IoT Edge device.
![View configuration](media/azure-stack-edge-j-series-manage-compute/view-compute-2.png)
-### Remove compute configuration
+### Remove IoT Edge service
-Take the following steps in the Azure portal to remove the existing Edge compute configuration for your device.
+Take the following steps in the Azure portal to remove the existing IoT Edge configuration for your device.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Edge compute > Get started**. Select **Remove compute** on the command bar.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **IoT Edge**. Go to **Overview** and select **Remove** on the command bar.
![Select Remove compute](media/azure-stack-edge-j-series-manage-compute/remove-compute-1.png)
-2. If you remove the compute configuration, you will need to reconfigure your device in case you need to use compute again. When prompted for confirmation, select **Yes**.
+2. Removing the IoT Edge service is irreversible. The modules and triggers that you created are also deleted, and you will need to reconfigure your device if you want to use IoT Edge again. When prompted for confirmation, select **OK**.
![Select Remove compute 2](media/azure-stack-edge-j-series-manage-compute/remove-compute-2.png)
@@ -116,7 +116,7 @@ If your IoT device and IoT Edge device keys have been rotated, then you need to
Take the following steps in the Azure portal to sync the access keys for your device.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Edge compute > Get started**. Select **Refresh configuration** on the command bar.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **IoT Edge**. Go to **Overview** and select **Refresh configuration** on the command bar.
![Select Refresh configuration](media/azure-stack-edge-j-series-manage-compute/refresh-configuration-1.png)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-j-series-set-azure-resource-manager-password https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-set-azure-resource-manager-password.md
@@ -7,7 +7,7 @@
Previously updated : 08/28/2020 Last updated : 01/27/2021 #Customer intent: As an IT admin, I need to understand how to connect to Azure Resource Manager on my Azure Stack Edge Pro device so that I can manage resources.
@@ -18,24 +18,26 @@
This article describes how to set your Azure Resource Manager password. You need to set this password when you are connecting to the device local APIs via the Azure Resource Manager.
-The procedure to set the password can be different depending upon whether you use the Azure portal or the PowerShell cmdlets. Each of these procedures is described in the following sections.
+<!--The procedure to set the password can be different depending upon whether you use the Azure portal or the PowerShell cmdlets. Each of these procedures is described in the following sections.-->
## Reset password via the Azure portal
-1. In the Azure portal, go to the Azure Stack Edge resource you created to manage your device. Go to **Edge compute > Get started**.
+1. In the Azure portal, go to the Azure Stack Edge resource you created to manage your device. Go to **Edge services > Cloud storage gateway**.
+
+ ![Reset EdgeARM user password 1](media/azure-stack-edge-j-series-set-azure-resource-manager-password/set-edgearm-password-1.png)
2. In the right pane, from the command bar, select **Reset Edge ARM password**.
- ![Reset EdgeARM user password 1](media/azure-stack-edge-j-series-set-azure-resource-manager-password/set-edgearm-password-1.png)
+ ![Reset EdgeARM user password 2](media/azure-stack-edge-j-series-set-azure-resource-manager-password/set-edgearm-password-2.png)
3. In the **Reset EdgeArm user password** blade, provide a password to connect to your device local APIs via the Azure Resource Manager. Confirm the password and select **Reset**.
- ![Reset EdgeARM user password 2](media/azure-stack-edge-j-series-set-azure-resource-manager-password/set-edgearm-password-2.png)
+ ![Reset EdgeARM user password 3](media/azure-stack-edge-j-series-set-azure-resource-manager-password/set-edgearm-password-3.png)
-## Reset password via PowerShell
+<!--## Reset password via PowerShell
1. In the Azure Portal, go to the Azure Stack Edge resource you created to manage your device. Make a note of the following parameters in the **Overview** page.
@@ -141,7 +143,7 @@ The procedure to set the password can be different depending upon whether you us
PS /home/aseuser/clouddrive> ```
-Use the new password to connect to Azure Resource Manager.
+Use the new password to connect to Azure Resource Manager.-->
## Next steps
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/references-defender-for-iot-glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-defender-for-iot-glossary.md
@@ -17,27 +17,26 @@ This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more | |--|--|--|
-| **access group** | Support user access requirements for large organizations by creating access group rules.<br /><br />Rules let you control view and configuration access to the Defender for IoT on-premises management console for specific user roles at relevant business units, regions, sites, and zones.<br /><br />For example, allow security analysts from an Active Directory group to access West European automotive data but prevent access to data in Africa. | **on-premises management console<br /><br />business unit** |
-| **access tokens** | Generate access tokens to access the Defender for IoT REST API. | **API** |
-| **Acknowledge Alert Event** | Instruct Defender for IoT to hide the alert once for the detected event. The alert will be triggered again if the event is detected again. | **alert<br /><br />Learn Alert Event<br /><br />Mute Alert Event** |
-| **alert** | A message that a Defender for IoT engine triggers regarding deviations from authorized network behavior, network anomalies, or suspicious network activity and traffic. | **forwarding rule<br /><br />exclusion rule<br /><br />system notifications** |
-| **alert comment** | Comments that security analysts and administrators make in alert messages. For example, an alert comment might give instructions about mitigation actions to take, or names of individuals to contact regarding the event.<br /><br />Users who are reviewing alerts can choose the comment or comments that best reflect the event status, or steps taken to investigate the alert. | **alert** |
-| **anomaly engine** | A Defender for IoT engine that detects unusual machine-to-machine (M2M) communication and behavior. For example, the engine might detect excessive SMB sign-in attempts. Anomaly alerts are triggered when these events are detected. | **Defender for IoT engines** |
-| **API** | Allows external systems to access data discovered by Defender for IoT and perform actions by using the external REST API over SSL connections. | **access tokens** |
-| **attack vector report** | A real-time graphical representation of vulnerability chains of exploitable endpoints.<br /><br />Reports let you evaluate the effect of mitigation activities in the attack sequence to determine. For example, you can evaluate whether a system upgrade disrupts the attacker's path by breaking the attack chain, or whether an alternate attack path remains. This prioritizes remediation and mitigation activities. | **risk assessment report** |
+| **Access group** | Support user access requirements for large organizations by creating access group rules.<br /><br />Rules let you control view and configuration access to the Defender for IoT on-premises management console for specific user roles at relevant business units, regions, sites, and zones.<br /><br />For example, allow security analysts from an Active Directory group to access West European automotive data but prevent access to data in Africa. | **[On-premises management console](#o)** <br /><br />**[Business unit](#b)** |
+| **Access tokens** | Generate access tokens to access the Defender for IoT REST API. | **[API](#a)** |
+| **Acknowledge alert event** | Instruct Defender for IoT to hide the alert once for the detected event. The alert will be triggered again if the event is detected again. | **[Alert](#a)<br /><br />[Learn alert event](#l)<br /><br />[Mute alert event](#m)** |
+| **Alert** | A message that a Defender for IoT engine triggers regarding deviations from authorized network behavior, network anomalies, or suspicious network activity and traffic. | **[Forwarding rule](#f)<br /><br />[Exclusion rule](#e)<br /><br />[System notifications](#s)** |
+| **Alert comment** | Comments that security analysts and administrators make in alert messages. For example, an alert comment might give instructions about mitigation actions to take, or names of individuals to contact regarding the event.<br /><br />Users who are reviewing alerts can choose the comment or comments that best reflect the event status, or steps taken to investigate the alert. | **[Alert](#a)** |
+| **Anomaly engine** | A Defender for IoT engine that detects unusual machine-to-machine (M2M) communication and behavior. For example, the engine might detect excessive SMB sign-in attempts. Anomaly alerts are triggered when these events are detected. | **[Defender for IoT engines](#d)** |
+| **API** | Allows external systems to access data discovered by Defender for IoT and perform actions by using the external REST API over SSL connections. | **[Access tokens](#a)** |
+| **Attack vector report** | A real-time graphical representation of vulnerability chains of exploitable endpoints.<br /><br />Reports let you evaluate the effect of mitigation activities in the attack sequence. For example, you can evaluate whether a system upgrade disrupts the attacker's path by breaking the attack chain, or whether an alternate attack path remains. This prioritizes remediation and mitigation activities. | **[Risk assessment report](#r)** |
## B | Term | Description | Learn more | |--|--|--|
-| **business unit** | A logical organization of your business according to specific industries.<br /><br />For example, a global company that contains glass factories and plastic factories can be managed as two different business units. You can control access of Defender for IoT users to specific business units. | **on-premises management console<br /><br />access group<br /><br />site<br /><br />zone** |
-| **baseline** | Approved network traffic, protocols, commands, and devices. Defender for IoT identifies deviations from the network baseline. View approved baseline traffic by generating data-mining reports. | **data mining<br /><br />learning mode** |
+| **Business unit** | A logical organization of your business according to specific industries.<br /><br />For example, a global company that contains glass factories and plastic factories can be managed as two different business units. You can control access of Defender for IoT users to specific business units. | **[On-premises management console](#o)<br /><br />[Access group](#a)<br /><br />[Site](#s)<br /><br />[Zone](#z)** |
+| **Baseline** | Approved network traffic, protocols, commands, and devices. Defender for IoT identifies deviations from the network baseline. View approved baseline traffic by generating data-mining reports. | **[Data mining](#d)<br /><br />[Learning mode](#l)** |
## C | Term | Description | Learn more | |--|--|--|
-| **on-premises management console** | The on-premises management console provides a centralized view and management of devices and threats that Defender for IoT sensor deployments detect in your organization. | **Defender for IoT platform<br /><br />sensor** |
| **CLI commands** | Command-line interface (CLI) options for Defender for IoT administrator users. CLI commands are available for features that can't be accessed from the Defender for IoT consoles. | - |
@@ -45,104 +44,105 @@ This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more | |--|--|--|
-| **data mining** | Generate comprehensive and granular reports about your network devices:<br /><br />- **SOC incident response**: Reports in real time to help deal with immediate incident response. For example, a report can list devices that might need patching.<br /><br />- **forensics**: Reports based on historical data for investigative reports.<br /><br />- **IT network integrity**: Reports that help improve overall network security. For example, a report can list devices with weak authentication credentials.<br /><br />- **visibility**: Reports that cover all query items to view all baseline parameters of your network.<br /><br />Save data-mining reports for read-only users to view. | **baseline<br /><br />reports** |
-| **Defender for IoT engines** | The self-learning analytics engines in Defender for IoT eliminate the need for updating signatures or defining rules. The engines use ICS-specific behavioral analytics and data science to continuously analyze OT network traffic for anomalies, malware, operational problems, protocol violations, and deviations from baseline network activity.<br /><br />When an engine detects a deviation, an alert is triggered. Alerts can be viewed and managed from the **Alerts** screen or from a SIEM. | **alert** |
-| **Defender for IoT platform** | The Defender for IoT solution installed on Defender for IoT sensors and the on-premises management console. | **sensor<br /><br />on-premises management console** |
-| **device map** | A graphical representation of network devices that Defender for IoT detects. It shows the connections between devices and information about each device. Use the map to:<br /><br />- Retrieve and control critical device information.<br /><br />- Analyze network slices.<br /><br />- Export device details and summaries. | **Purdue layer group** |
-| **device inventory - sensor** | The device inventory displays an extensive range of device attributes detected by Defender for IoT. Options are available to:<br /><br />- Filter displayed information.<br /><br />- Export this information to a CSV file.<br /><br />- Import Windows registry details. | **group device inventory - on-premises management console** |
-| **device inventory - on-premises management console** | Device information from connected sensors can be viewed from the on-premises management console in the device inventory. This gives users of the on-premises management console a comprehensive view of all network information. | **device inventory - sensor<br /><br />device inventory - data integrator** |
-| **device inventory - data integrator** | The data integration capabilities of the on-premises management console let you enhance the data in the device inventory with information from other enterprise resources. Example resources are CMDBs, DNS, firewalls, and Web APIs. | **device inventory - on-premises management console** |
+| **Data mining** | Generate comprehensive and granular reports about your network devices:<br /><br />- **SOC incident response**: Reports in real time to help deal with immediate incident response. For example, a report can list devices that might need patching.<br /><br />- **Forensics**: Reports based on historical data for investigative reports.<br /><br />- **IT network integrity**: Reports that help improve overall network security. For example, a report can list devices with weak authentication credentials.<br /><br />- **Visibility**: Reports that cover all query items to view all baseline parameters of your network.<br /><br />Save data-mining reports for read-only users to view. | **[Baseline](#b)<br /><br />[Reports](#r)** |
+| **Defender for IoT engines** | The self-learning analytics engines in Defender for IoT eliminate the need for updating signatures or defining rules. The engines use ICS-specific behavioral analytics and data science to continuously analyze OT network traffic for anomalies, malware, operational problems, protocol violations, and deviations from baseline network activity.<br /><br />When an engine detects a deviation, an alert is triggered. Alerts can be viewed and managed from the **Alerts** screen or from a SIEM. | **[Alert](#a)** |
+| **Defender for IoT platform** | The Defender for IoT solution installed on Defender for IoT sensors and the on-premises management console. | **[Sensor](#s)<br /><br />[On-premises management console](#o)** |
+| **Device map** | A graphical representation of network devices that Defender for IoT detects. It shows the connections between devices and information about each device. Use the map to:<br /><br />- Retrieve and control critical device information.<br /><br />- Analyze network slices.<br /><br />- Export device details and summaries. | **[Purdue layer group](#p)** |
+| **Device inventory - sensor** | The device inventory displays an extensive range of device attributes detected by Defender for IoT. Options are available to:<br /><br />- Filter displayed information.<br /><br />- Export this information to a CSV file.<br /><br />- Import Windows registry details. | **[Group](#g)** <br /><br />**[Device inventory - on-premises management console](#d)** |
+| **Device inventory - on-premises management console** | Device information from connected sensors can be viewed from the on-premises management console in the device inventory. This gives users of the on-premises management console a comprehensive view of all network information. | **[Device inventory - sensor](#d)<br /><br />[Device inventory - data integrator](#d)** |
+| **Device inventory - data integrator** | The data integration capabilities of the on-premises management console let you enhance the data in the device inventory with information from other enterprise resources. Example resources are CMDBs, DNS, firewalls, and Web APIs. | **[Device inventory - on-premises management console](#d)** |
## E | Term | Description | Learn more | |--|--|--|
-| **enterprise view** | A global map that presents business units, sites, and zones where Defender for IoT sensors are installed. View geographical locations of malicious alerts, operational alerts, and more. | **business unit<br /><br />site<br /><br />zone** |
-| **event timeline** | A timeline of activity detected on your network, including:<br /><br />- Alerts triggered.<br /><br />- Network events (informational).<br /><br />- User operations such as sign-in, user deletion, and user creation, and alert management operations such as mute, learn, and acknowledge. Available in the sensor consoles. | - |
-| **exclusion rule** | Instruct Defender for IoT to ignore alert triggers based on time period, device address, and alert name, or by a specific sensor.<br /><br />For example, if you know that all the OT devices monitored by a specific sensor will go through a maintenance procedure between 6:30 and 10:15 in the morning, you can set an exclusion rule that states that this sensor should send no alerts in the predefined period. | **alert<br /><br />Mute Alert Event** |
+| **Enterprise view** | A global map that presents business units, sites, and zones where Defender for IoT sensors are installed. View geographical locations of malicious alerts, operational alerts, and more. | **[Business unit](#b)<br /><br />[Site](#s)<br /><br />[Zone](#z)** |
+| **Event timeline** | A timeline of activity detected on your network, including:<br /><br />- Alerts triggered.<br /><br />- Network events (informational).<br /><br />- User operations such as sign-in, user deletion, and user creation, and alert management operations such as mute, learn, and acknowledge. Available in the sensor consoles. | - |
+| **Exclusion rule** | Instruct Defender for IoT to ignore alert triggers based on time period, device address, and alert name, or by a specific sensor.<br /><br />For example, if you know that all the OT devices monitored by a specific sensor will go through a maintenance procedure between 6:30 and 10:15 in the morning, you can set an exclusion rule that states that this sensor should send no alerts in the predefined period. | **[Alert](#a)<br /><br />[Mute alert event](#m)** |
## F | Term | Description | Learn more | |--|--|--|
-| **forwarding rule** | Forwarding rules instruct Defender for IoT to send alert information to partner vendors or systems.<br /><br />For example, send alert information to a Splunk server or a syslog server. | **alert** |
+| **Forwarding rule** | Forwarding rules instruct Defender for IoT to send alert information to partner vendors or systems.<br /><br />For example, send alert information to a Splunk server or a syslog server. | **[Alert](#a)** |
## G | Term | Description | Learn more | |--|--|--|
-| **group** | Predefined or custom groups of devices that contain specific attributes, such as devices that carried out programming activity or devices that are located on a specific subnet. Use groups to help you view devices and analyze devices in Defender for IoT.<br /><br />Groups can be viewed in and created from the device map and device inventory. | **device map<br /><br />device inventory** |
+| **Group** | Predefined or custom groups of devices that contain specific attributes, such as devices that carried out programming activity or devices that are located on a specific subnet. Use groups to help you view devices and analyze devices in Defender for IoT.<br /><br />Groups can be viewed in and created from the device map and device inventory. | **[Device map](#d)<br /><br />[Device inventory](#d)** |
## H | Term | Description | Learn more | |--|--|--|
-| **Horizon Open Development Environment** | Secure IoT and ICS devices running proprietary and custom protocols or protocols that deviate from any standard. Use the Horizon Open Development Environment (ODE) SDK to develop dissector plug-ins that decode network traffic based on defined protocols. Defender for IoT services analyze traffic to provide complete monitoring, alerting, and reporting.<br /><br />Use Horizon to:<br /><br />- **Expand** visibility and control without the need to upgrade Defender for IoT platform versions.<br /><br />- **Secure** proprietary information by developing on-site as an external plug-in.<br /><br />- **Localize** text for alerts, events, and protocol parameters.<br /><br />Contact your customer success representative for details. | **protocol support<br /><br />localization** |
-| **Horizon custom alert** | Enhance alert management in your enterprise by triggering custom alerts for any protocol (based on Horizon Framework traffic dissectors).<br /><br />These alerts can be used to communicate information:<br /><br />- About traffic detections based on protocols and underlying protocols in a proprietary Horizon plug-in.<br /><br />- About a combination of protocol fields from all protocol layers. | **protocol support** |
+| **Horizon open development environment** | Secure IoT and ICS devices running proprietary and custom protocols or protocols that deviate from any standard. Use the Horizon Open Development Environment (ODE) SDK to develop dissector plug-ins that decode network traffic based on defined protocols. Defender for IoT services analyze traffic to provide complete monitoring, alerting, and reporting.<br /><br />Use Horizon to:<br /><br />- **Expand** visibility and control without the need to upgrade Defender for IoT platform versions.<br /><br />- **Secure** proprietary information by developing on-site as an external plug-in.<br /><br />- **Localize** text for alerts, events, and protocol parameters.<br /><br />Contact your customer success representative for details. | **[Protocol support](#p)<br /><br />[Localization](#l)** |
+| **Horizon custom alert** | Enhance alert management in your enterprise by triggering custom alerts for any protocol (based on Horizon Framework traffic dissectors).<br /><br />These alerts can be used to communicate information:<br /><br />- About traffic detections based on protocols and underlying protocols in a proprietary Horizon plug-in.<br /><br />- About a combination of protocol fields from all protocol layers. | **[Protocol support](#p)** |
## I | Term | Description | Learn more | |--|--|--|
-| **integrations** | Expand Defender for IoT capabilities by sharing device information with partner systems. Organizations can bridge previously siloed security, NAC, incident management, and device management solutions to accelerate system-wide responses and more rapidly mitigate risks. | **forwarding rule** |
-| **internal subnet** | Subnet configurations defined by Defender for IoT. In some cases, such as environments that use public ranges as internal ranges, you can instruct Defender for IoT to resolve all subnets as internal subnets. Subnets are displayed in the map and in various Defender for IoT reports. | **subnets** |
+| **Integrations** | Expand Defender for IoT capabilities by sharing device information with partner systems. Organizations can bridge previously siloed security, NAC, incident management, and device management solutions to accelerate system-wide responses and more rapidly mitigate risks. | **[Forwarding rule](#f)** |
+| **Internal subnet** | Subnet configurations defined by Defender for IoT. In some cases, such as environments that use public ranges as internal ranges, you can instruct Defender for IoT to resolve all subnets as internal subnets. Subnets are displayed in the map and in various Defender for IoT reports. | **[Subnets](#s)** |
## L | Term | Description | Learn more | |--|--|--|
-| **Learn Alert Event** | Instruct Defender for IoT to authorize the traffic detected in an alert event. | **alert<br /><br />Acknowledge Alert Event<br /><br />Mute Alert Event** |
-| **learning mode** | The mode used when Defender for IoT learns your network activity. This activity becomes your network baseline. Defender for IoT remains in the mode for a predefined period after installation. Activity that deviates from learned activity after this period will trigger Defender for IoT alerts. | **Smart IT Learning<br /><br />baseline** |
-| **localization** | Localize text for alerts, events, and protocol parameters for dissector plug-ins developed by Horizon. | **Horizon Open Development Environment** |
+| **Learn alert event** | Instruct Defender for IoT to authorize the traffic detected in an alert event. | **[Alert](#a)<br /><br />[Acknowledge alert event](#a)<br /><br />[Mute alert event](#m)** |
+| **Learning mode** | The mode used when Defender for IoT learns your network activity. This activity becomes your network baseline. Defender for IoT remains in the mode for a predefined period after installation. Activity that deviates from learned activity after this period will trigger Defender for IoT alerts. | **[Smart IT learning](#s)<br /><br />[Baseline](#b)** |
+| **Localization** | Localize text for alerts, events, and protocol parameters for dissector plug-ins developed by Horizon. | **[Horizon open development environment](#h)** |
## M | Term | Description | Learn more | |--|--|--|
-| **Mute Alert Event** | Instruct Defender for IoT to continuously ignore activity with identical devices and comparable traffic. | **alert<br /><br />exclusion rule<br /><br />Acknowledge Alert Event<br /><br />Learn Alert Event** |
+| **Mute alert event** | Instruct Defender for IoT to continuously ignore activity with identical devices and comparable traffic. | **[Alert](#a)<br /><br />[Exclusion rule](#e)<br /><br />[Acknowledge alert event](#a)<br /><br />[Learn alert event](#l)** |
## N | Term | Description | Learn more | |--|--|--|
-| **notifications** | Information about network changes or unresolved device properties. Options are available to update device and network information with new data detected. Responding to notifications enriches the device inventory, map, and various reports. Available on sensor consoles. | **alert<br /><br />system notifications** |
+| **Notifications** | Information about network changes or unresolved device properties. Options are available to update device and network information with new data detected. Responding to notifications enriches the device inventory, map, and various reports. Available on sensor consoles. | **[Alert](#a)<br /><br />[System notifications](#s)** |
## O

| Term | Description | Learn more |
|--|--|--|
-| **operational alert** | Alerts that deal with operational network issues, such as a device that's suspected to be disconnected from the network. | **alert<br /><br />security alert** |
+| **On-premises management console** | The on-premises management console provides a centralized view and management of devices and threats that Defender for IoT sensor deployments detect in your organization. | **[Defender for IoT platform](#d)<br /><br />[Sensor](#s)** |
+| **Operational alert** | Alerts that deal with operational network issues, such as a device that's suspected to be disconnected from the network. | **[Alert](#a)<br /><br />[Security alert](#s)** |
## P

| Term | Description | Learn more |
|--|--|--|
| **Purdue layer** | Shows the interconnections and interdependencies of main components of a typical ICS on the map. | |
-| **protocol support** | In addition to embedded protocol support, you can secure IoT and ICS devices running proprietary and custom protocols, or protocols that deviate from any standard, by using the Horizon Open Development Environment SDK. | **Horizon Open Development Environment** |
+| **Protocol support** | In addition to embedded protocol support, you can secure IoT and ICS devices running proprietary and custom protocols, or protocols that deviate from any standard, by using the Horizon Open Development Environment SDK. | **[Horizon open development environment](#h)** |
## R

| Term | Description | Learn more |
|--|--|--|
-| **region** | A logical division of a global organization into geographical regions. Examples are North America, Western Europe, and Eastern Europe.<br /><br />North America might have factories from various business units. | **access group<br /><br />business unit<br /><br />on-premises management console<br /><br />site<br /><br />zone** |
-| **reports** | Reports reflect information generated by data-mining query results. This includes default data-mining results, which are available in the **Reports** view. Admins and security analysts can also generate custom data-mining queries and save them as reports. These reports will also be available for read-only users. | **data mining** |
-| **risk assessment report** | Risk assessment reporting lets you generate a security score for each network device, along with an overall network security score. The overall score represents the percentage of 100 percent security. The report provides mitigation recommendations that will help you improve your current security score. | - |
+| **Region** | A logical division of a global organization into geographical regions. Examples are North America, Western Europe, and Eastern Europe.<br /><br />North America might have factories from various business units. | **[Access group](#a)<br /><br />[Business unit](#b)<br /><br />[On-premises management console](#o)<br /><br />[Site](#s)<br /><br />[Zone](#z)** |
+| **Reports** | Reports reflect information generated by data-mining query results. This includes default data-mining results, which are available in the **Reports** view. Admins and security analysts can also generate custom data-mining queries and save them as reports. These reports will also be available for read-only users. | **[Data mining](#d)** |
+| **Risk assessment report** | Risk assessment reporting lets you generate a security score for each network device, along with an overall network security score. The overall score is expressed as a percentage, where 100 percent represents a fully secure network. The report provides mitigation recommendations that will help you improve your current security score. | - |
## S

| Term | Description | Learn more |
|--|--|--|
-| **security alert** | Alerts that deal with security issues, such as excessive SMB sign-in attempts or malware detections. | **alert<br /><br />operational alert** |
-| **selective probing** | Defender for IoT passively inspects IT and OT traffic and detects relevant information on devices, their attributes, their behavior, and more. In certain cases, some information might not be visible in passive network analyses.<br /><br />When this happens, you can use the safe, granular probing tools in Defender for IoT to discover important information on previously unreachable devices. | - |
-| **sensor** | The physical or virtual machine on which the Defender for IoT platform is installed. | **on-premises management console** |
-| **site** | A location that a factory or other entity. The site should contain a zone or several zones in which a sensor is installed. | **zone** |
+| **Security alert** | Alerts that deal with security issues, such as excessive SMB sign-in attempts or malware detections. | **[Alert](#a)<br /><br />[Operational alert](#o)** |
+| **Selective probing** | Defender for IoT passively inspects IT and OT traffic and detects relevant information on devices, their attributes, their behavior, and more. In certain cases, some information might not be visible in passive network analyses.<br /><br />When this happens, you can use the safe, granular probing tools in Defender for IoT to discover important information on previously unreachable devices. | - |
+| **Sensor** | The physical or virtual machine on which the Defender for IoT platform is installed. | **[On-premises management console](#o)** |
+| **Site** | A location that contains a factory or other entity. The site should contain a zone or several zones in which a sensor is installed. | **[Zone](#z)** |
| **Site Management** | The on-premises management console option that lets you manage enterprise sensors. | - |
-| **Smart IT Learning** | After the learning period is complete and the learning mode is disabled, Defender for IoT might detect an unusually high level of baseline changes that are the result of normal IT activity, such as DNS and HTTP requests. This traffic might trigger unnecessary policy violation alerts and system notifications. To reduce these alerts and notifications, you can enable Smart IT Learning. | **learning mode<br /><br />baseline** |
-| **subnets** | To enable focus on the OT devices, IT devices are automatically aggregated by subnet in the device map. Each subnet is presented as a single entity on the map, including an interactive collapsing or expanding capability to focus in to an IT subnet and back. | **device map** |
-| **system notifications** | Notifications from the on-premises management console regrading:<br /><br />- Sensor connection status.<br /><br />- Remote backup failures. | **notifications<br /><br />alert** |
+| **Smart IT learning** | After the learning period is complete and the learning mode is disabled, Defender for IoT might detect an unusually high level of baseline changes that are the result of normal IT activity, such as DNS and HTTP requests. This traffic might trigger unnecessary policy violation alerts and system notifications. To reduce these alerts and notifications, you can enable Smart IT Learning. | **[Learning mode](#l)<br /><br />[Baseline](#b)** |
+| **Subnets** | To enable focus on the OT devices, IT devices are automatically aggregated by subnet in the device map. Each subnet is presented as a single entity on the map, including an interactive collapsing or expanding capability to focus in to an IT subnet and back. | **[Device map](#d)** |
+| **System notifications** | Notifications from the on-premises management console regarding:<br /><br />- Sensor connection status.<br /><br />- Remote backup failures. | **[Notifications](#n)<br /><br />[Alert](#a)** |
## Z

| Term | Description | Learn more |
|--|--|--|
-| **zone** | An area within a site in which a sensor or sensors are installed. | **site<br /><br />business unit<br /><br />region** |
+| **Zone** | An area within a site in which a sensor or sensors are installed. | **[Site](#s)<br /><br />[Business unit](#b)<br /><br />[Region](#r)** |
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/concepts-query-units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-query-units.md
@@ -46,4 +46,4 @@ To learn more about querying Azure Digital Twins, visit:
* [*How-to: Query the twin graph*](how-to-query-graph.md)
* [Query API reference documentation](/rest/api/digital-twins/dataplane/query/querytwins)
-You can find Azure Digital Twins query-related limits in [*Reference: Service limits*](reference-service-limits.md).
\ No newline at end of file
+You can find Azure Digital Twins query-related limits in [*Azure Digital Twins service limits*](reference-service-limits.md).
\ No newline at end of file
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/concepts-route-events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-route-events.md
@@ -56,7 +56,7 @@ To define an event route, developers first must define endpoints. An **endpoint*
* Event Hub
* Service Bus
-To create an endpoint, you can use the Azure Digital Twins [**control plane APIs**](how-to-manage-routes-apis-cli.md#create-an-endpoint-for-azure-digital-twins), [**CLI commands**](how-to-manage-routes-apis-cli.md#manage-endpoints-and-routes-with-cli), or the [**Azure portal**](how-to-manage-routes-portal.md#create-an-endpoint-for-azure-digital-twins).
+To create an endpoint, you can use the Azure Digital Twins [REST APIs, CLI commands](how-to-manage-routes-apis-cli.md#create-an-endpoint-for-azure-digital-twins), or the [Azure portal](how-to-manage-routes-portal.md#create-an-endpoint-for-azure-digital-twins).
When defining an endpoint, you'll need to provide:

* The endpoint's name
@@ -72,7 +72,7 @@ The endpoint APIs that are available in control plane are:
## Create an event route
-To create an event route, you can use the Azure Digital Twins [**data plane APIs**](how-to-manage-routes-apis-cli.md#create-an-event-route), [**CLI commands**](how-to-manage-routes-apis-cli.md#manage-endpoints-and-routes-with-cli), or the [**Azure portal**](how-to-manage-routes-portal.md#create-an-event-route).
+To create an event route, you can use the Azure Digital Twins [REST APIs, CLI commands](how-to-manage-routes-apis-cli.md#create-an-event-route), or the [Azure portal](how-to-manage-routes-portal.md#create-an-event-route).
Here is an example of creating an event route within a client application, using the `CreateOrReplaceEventRouteAsync` [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true) call:
@@ -87,8 +87,6 @@ Here is an example of creating an event route within a client application, using
> [!TIP]
> All SDK functions come in synchronous and asynchronous versions.
-Routes can be also created using the [Azure Digital Twins CLI](how-to-use-cli.md).
-
## Dead-letter events

When an endpoint can't deliver an event within a certain time period or after trying to deliver the event a certain number of times, it can send the undelivered event to a storage account. This process is known as **dead-lettering**. Azure Digital Twins will dead-letter an event when **one of the following** conditions is met.
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-manage-routes-apis-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-apis-cli.md
@@ -21,7 +21,7 @@
In Azure Digital Twins, you can route [event notifications](how-to-interpret-event-data.md) to downstream services or connected compute resources. This is done by first setting up **endpoints** that can receive the events. You can then create [**event routes**](concepts-route-events.md) that specify which events generated by Azure Digital Twins are delivered to which endpoints.
-This article walks you through the process of creating endpoints and routes with the [Event Routes APIs](/rest/api/digital-twins/dataplane/eventroutes), the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), and the [Azure Digital Twins CLI](how-to-use-cli.md).
+This article walks you through the process of creating endpoints and routes with the [REST APIs](/rest/api/azure-digitaltwins/), the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), and the [Azure Digital Twins CLI](how-to-use-cli.md).
Alternatively, you can also manage endpoints and routes with the [Azure portal](https://portal.azure.com). For a version of this article that uses the portal instead, see [*How-to: Manage endpoints and routes (portal)*](how-to-manage-routes-portal.md).
@@ -43,7 +43,7 @@ These are the supported types of endpoints that you can create for your instance
For more information on the different endpoint types, see [*Choose between Azure messaging services*](../event-grid/compare-messaging-services.md).
-This section explains how to create one of these endpoints using the Azure CLI.
+This section explains how to create these endpoints using the Azure CLI. You can also manage endpoints with the [DigitalTwinsEndpoint control plane APIs](/rest/api/digital-twins/controlplane/endpoints).
[!INCLUDE [digital-twins-endpoint-resources.md](../../includes/digital-twins-endpoint-resources.md)]
@@ -150,10 +150,6 @@ Here is an example of a dead-letter message for a [twin create notification](how
## Create an event route
-To actually send data from Azure Digital Twins to an endpoint, you'll need to define an **event route**. Azure Digital Twins **EventRoutes APIs** let developers wire up event flow, throughout the system and to downstream services. Read more about event routes in [*Concepts: Routing Azure Digital Twins events*](concepts-route-events.md).
-
-The samples in this section use the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
-
**Prerequisite**: You need to create endpoints as described earlier in this article before you can move on to creating a route. You can proceed to creating an event route once your endpoints are finished setting up.

> [!NOTE]
@@ -161,9 +157,7 @@ The samples in this section use the [.NET (C#) SDK](/dotnet/api/overview/azure/d
> > If you are scripting this flow, you may want to account for this by building in 2-3 minutes of wait time for the endpoint service to finish deploying before moving on to route setup.
-### Creation code with APIs and the C# SDK
-
-Event routes are defined using [data plane APIs](how-to-use-apis-sdks.md#overview-data-plane-apis).
+To actually send data from Azure Digital Twins to an endpoint, you'll need to define an **event route**. Event routes wire up event flow throughout the system and to downstream services.
A route definition can contain these elements:

* The route name you want to use
@@ -176,6 +170,12 @@ If there is a route name and a different filter is added, messages will be filte
One route should allow multiple notifications and event types to be selected.
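To make the route-definition elements above concrete, here is a minimal Python sketch (not part of the article) that assembles the JSON body a client would send to the EventRoutes data plane API. The endpoint name and filter expression are hypothetical, and the exact filter syntax is documented in the route-filter section referenced later:

```python
import json

def build_event_route_body(endpoint_name: str, filter_expr: str = "true") -> str:
    """Build the JSON body for creating an event route.

    A filter of "true" routes every event. The endpoint name must match an
    endpoint that has already been created on the Azure Digital Twins instance.
    """
    return json.dumps({"endpointName": endpoint_name, "filter": filter_expr})

# Illustrative only: route telemetry events to a hypothetical endpoint.
body = build_event_route_body("my-endpoint", "type = 'microsoft.iot.telemetry'")
```

The route name itself is not part of the body; it appears in the request URL when the route is created or replaced.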
+Event routes can be created with the Azure Digital Twins [**EventRoutes** data plane APIs](/rest/api/digital-twins/dataplane/eventroutes) or [**az dt route** CLI commands](/cli/azure/ext/azure-iot/dt/route?view=azure-cli-latest&preserve-view=true). The rest of this section walks through the creation process.
+
+### Create routes with the APIs and C# SDK
+
+One way to define event routes is with the [data plane APIs](how-to-use-apis-sdks.md#overview-data-plane-apis). The samples in this section use the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
+
`CreateOrReplaceEventRouteAsync` is the SDK call that is used to add an event route. Here is an example of its usage:

:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/eventRoute_operations.cs" id="CreateEventRoute":::
@@ -183,12 +183,18 @@ One route should allow multiple notifications and event types to be selected.
> [!TIP]
> All SDK functions come in synchronous and asynchronous versions.
-### Event route sample code
+#### Event route sample SDK code
-The following sample method shows how to create, list, and delete an event route:
+The following sample method shows how to create, list, and delete an event route with the C# SDK:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/eventRoute_operations.cs" id="FullEventRouteSample":::
+### Create routes with the CLI
+
+Routes can also be managed using the [az dt route](/cli/azure/ext/azure-iot/dt/route?view=azure-cli-latest&preserve-view=true) commands for the Azure Digital Twins CLI.
+
+For more information about using the CLI and what commands are available, see [*How-to: Use the Azure Digital Twins CLI*](how-to-use-cli.md).
+
## Filter events

Without filtering, endpoints receive a variety of events from Azure Digital Twins:
@@ -206,10 +212,6 @@ Here are the supported route filters. Use the detail in the *Filter text schema*
[!INCLUDE [digital-twins-route-filters](../../includes/digital-twins-route-filters.md)]
-## Manage endpoints and routes with CLI
-
-Endpoints and routes can also be managed using the Azure Digital Twins CLI. For more information about using the CLI and what commands are available, see [*How-to: Use the Azure Digital Twins CLI*](how-to-use-cli.md).
[!INCLUDE [digital-twins-route-metrics](../../includes/digital-twins-route-metrics.md)]

## Next steps
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/how-to-set-up-instance-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-portal.md
@@ -58,8 +58,8 @@ If you do want to configure more details for your instance, the next section des
Here are the additional options you can configure during setup, using the other tabs in the **Create Resource** process.
-* **Networking**: In this tab, you can enable private endpoints with [Azure Private Link](../private-link/private-link-overview.md) to eliminate public network exposure to your instance. For instructions, see [*How-to: Enable private access with Private Link*](how-to-enable-private-link.md#add-a-private-endpoint-during-instance-creation).
-* **Advanced**: In this tab, you can enable a [system-managed identity](../active-directory/managed-identities-azure-resources/overview.md) for your instance that can be used when forwarding events to [endpoints](concepts-route-events.md). For instructions, see [*How-to: Enable managed identities for routing events*](how-to-enable-managed-identities.md).
+* **Networking**: In this tab, you can enable private endpoints with [Azure Private Link](../private-link/private-link-overview.md) to eliminate public network exposure to your instance. For instructions, see [*How-to: Enable private access with Private Link (preview)*](how-to-enable-private-link.md#add-a-private-endpoint-during-instance-creation).
+* **Advanced**: In this tab, you can enable a [system-managed identity](../active-directory/managed-identities-azure-resources/overview.md) for your instance that can be used when forwarding events to [endpoints](concepts-route-events.md). For instructions, see [*How-to: Enable managed identities for routing events (preview)*](how-to-enable-managed-identities.md#add-a-system-managed-identity-during-instance-creation).
* **Tags**: In this tab, you can add tags to your instance to help you organize it among your Azure resources. For more about Azure resource tags, see [*Tag resources, resource groups, and subscriptions for logical organization*](../azure-resource-manager/management/tag-resources.md).

### Verify success and collect important values
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/reference-service-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/reference-service-limits.md
@@ -15,7 +15,7 @@
#
-# Service limits
+# Azure Digital Twins service limits
These are the service limits of Azure Digital Twins.
event-grid https://docs.microsoft.com/en-us/azure/event-grid/event-schema-azure-cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-azure-cache.md new file mode 100644 /dev/null
@@ -0,0 +1,140 @@
+
+ Title: Azure Cache for Redis as Event Grid source
+description: Describes the properties that are provided for Azure Cache for Redis events with Azure Event Grid
+ Last updated : 12/21/2020++++
+# Azure Cache for Redis as an Event Grid source
+
+This article provides the properties and schema for Azure Cache for Redis events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md).
+
+## Event Grid event schema
+
+### List of events for Azure Cache for Redis REST APIs
+
+These events are triggered when a client exports, imports, or scales by calling Azure Cache for Redis REST APIs. The patching event is triggered by a Redis update.
+
+ |Event name |Description|
+ |-|--|
+ |**Microsoft.Cache.ExportRDBCompleted** |Triggered when cache data is exported. |
+ |**Microsoft.Cache.ImportRDBCompleted** |Triggered when cache data is imported. |
+ |**Microsoft.Cache.PatchingCompleted** |Triggered when patching is completed. |
+ |**Microsoft.Cache.ScalingCompleted** |Triggered when scaling is completed. |
+
+<a name="example-event"></a>
+### The contents of an event response
+
+When an event is triggered, the Event Grid service sends data about that event to the subscribing endpoint.
+
+This section contains an example of what that data would look like for each Azure Cache for Redis event.
+
+### Microsoft.Cache.PatchingCompleted event
+
+```json
+[{
+"id":"9b87886d-21a5-4af5-8e3e-10c4b8dac73b",
+"eventType":"Microsoft.Cache.PatchingCompleted",
+"topic":"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Cache/Redis/{cache_name}",
+"data":{
+ "name":"PatchingCompleted",
+ "timestamp":"2020-12-09T21:50:19.9995668+00:00",
+ "status":"Succeeded"},
+"subject":"PatchingCompleted",
+"dataVersion":"1.0",
+"metadataVersion":"1",
+"eventTime":"2020-12-09T21:50:19.9995668+00:00"}]
+```
+
+### Microsoft.Cache.ImportRDBCompleted event
+
+```json
+[{
+"id":"9b87886d-21a5-4af5-8e3e-10c4b8dac73b",
+"eventType":"Microsoft.Cache.ImportRDBCompleted",
+"topic":"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Cache/Redis/{cache_name}",
+"data":{
+ "name":"ImportRDBCompleted",
+ "timestamp":"2020-12-09T21:50:19.9995668+00:00",
+ "status":"Succeeded"},
+"subject":"ImportRDBCompleted",
+"dataVersion":"1.0",
+"metadataVersion":"1",
+"eventTime":"2020-12-09T21:50:19.9995668+00:00"}]
+```
+
+### Microsoft.Cache.ExportRDBCompleted event
+
+```json
+[{
+"id":"9b87886d-21a5-4af5-8e3e-10c4b8dac73b",
+"eventType":"Microsoft.Cache.ExportRDBCompleted",
+"topic":"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Cache/Redis/{cache_name}",
+"data":{
+ "name":"ExportRDBCompleted",
+ "timestamp":"2020-12-09T21:50:19.9995668+00:00",
+ "status":"Succeeded"},
+"subject":"ExportRDBCompleted",
+"dataVersion":"1.0",
+"metadataVersion":"1",
+"eventTime":"2020-12-09T21:50:19.9995668+00:00"}]
+```
+
+### Microsoft.Cache.ScalingCompleted event
+
+```json
+[{
+"id":"9b87886d-21a5-4af5-8e3e-10c4b8dac73b",
+"eventType":"Microsoft.Cache.ScalingCompleted",
+"topic":"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Cache/Redis/{cache_name}",
+"data":{
+ "name":"ScalingCompleted",
+ "timestamp":"2020-12-09T21:50:19.9995668+00:00",
+ "status":"Succeeded"},
+"subject":"ScalingCompleted",
+"dataVersion":"1.0",
+"metadataVersion":"1",
+"eventTime":"2020-12-09T21:50:19.9995668+00:00"}]
+```
+
+### Event properties
+
+An event has the following top-level data:
+
+| Property | Type | Description |
+| -- | - | -- |
+| topic | string | Full resource path to the event source. This field is not writeable. Event Grid provides this value. |
+| subject | string | Publisher-defined path to the event subject. |
+| eventType | string | One of the registered event types for this event source. |
+| eventTime | string | The time the event is generated based on the provider's UTC time. |
+| id | string | Unique identifier for the event. |
+| data | object | Azure Cache for Redis event data. |
+| dataVersion | string | The schema version of the data object. The publisher defines the schema version. |
+| metadataVersion | string | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. |
+
+The data object has the following properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| timestamp | string | The time at which the event occurred. |
+| name | string | The name of the event. |
+| status | string | The status of the event. Failed or succeeded. |
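As a rough illustration of the schema above, the following Python sketch (an assumption of how a subscriber might handle delivery, not part of the article) walks an Event Grid batch and summarizes each Azure Cache for Redis event using the top-level and `data` properties:

```python
import json

# A trimmed copy of the PatchingCompleted sample payload shown earlier.
sample = '''[{
  "id": "9b87886d-21a5-4af5-8e3e-10c4b8dac73b",
  "eventType": "Microsoft.Cache.PatchingCompleted",
  "subject": "PatchingCompleted",
  "data": {"name": "PatchingCompleted",
           "timestamp": "2020-12-09T21:50:19.9995668+00:00",
           "status": "Succeeded"}
}]'''

def summarize(payload: str) -> list:
    """Return one human-readable line per event in an Event Grid batch."""
    lines = []
    # Event Grid delivers events to a subscriber as a JSON array.
    for event in json.loads(payload):
        data = event.get("data", {})
        lines.append(
            f"{event['eventType']}: {data.get('status')} at {data.get('timestamp')}"
        )
    return lines

print(summarize(sample)[0])
# Microsoft.Cache.PatchingCompleted: Succeeded at 2020-12-09T21:50:19.9995668+00:00
```

The same loop handles the export, import, and scaling events, since all four share the `name`/`timestamp`/`status` shape of the `data` object.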
++
+## Quickstarts
+
+If you want to try Azure Cache for Redis events, see any of these quickstart articles:
+
+|If you want to use this tool: |See this article: |
+|--|-|
+|Azure portal |[Quickstart: Route Azure Cache for Redis events to web endpoint with the Azure portal](../azure-cache-for-redis/cache-event-grid-quickstart-portal.md)|
+|PowerShell |[Quickstart: Route Azure Cache for Redis events to web endpoint with PowerShell](../azure-cache-for-redis/cache-event-grid-quickstart-powershell.md)|
+|Azure CLI |[Quickstart: Route Azure Cache for Redis events to web endpoint with Azure CLI](../azure-cache-for-redis/cache-event-grid-quickstart-cli.md)|
+
+## Next steps
+
+* For an introduction to Azure Event Grid, see [What is Event Grid?](overview.md)
+* For more information about creating an Azure Event Grid subscription, see [Event Grid subscription schema](subscription-creation-schema.md).
+
event-grid https://docs.microsoft.com/en-us/azure/event-grid/managed-service-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/managed-service-identity.md
@@ -2,57 +2,57 @@
Title: Event delivery, managed service identity, and private link description: This article describes how to enable managed service identity for an Azure event grid topic. Use it to forward events to supported destinations. Previously updated : 10/22/2020 Last updated : 01/28/2021 # Event delivery with a managed identity
-This article describes how to enable a [managed service identity](../active-directory/managed-identities-azure-resources/overview.md) for Azure event grid topics or domains. Use it to forward events to supported destinations such as Service Bus queues and topics, event hubs, and storage accounts.
+This article describes how to enable a [managed service identity](../active-directory/managed-identities-azure-resources/overview.md) for Azure event grid custom topics or domains. Use it to forward events to supported destinations such as Service Bus queues and topics, event hubs, and storage accounts.
Here are the steps that are covered in detail in this article:
-1. Create a topic or domain with a system-assigned identity, or update an existing topic or domain to enable identity.
+1. Create a custom topic or domain with a system-assigned identity, or update an existing custom topic or domain to enable identity.
1. Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue).
1. When you create event subscriptions, enable the usage of the identity to deliver events to the destination.

> [!NOTE]
> Currently, it's not possible to deliver events using [private endpoints](../private-link/private-endpoint-overview.md). For more information, see the [Private endpoints](#private-endpoints) section at the end of this article.
-## Create a topic or domain with an identity
+## Create a custom topic or domain with an identity
First, let's look at how to create a topic or a domain with a system-managed identity.

### Use the Azure portal
-You can enable system-assigned identity for a topic or domain while you create it in the Azure portal. The following image shows how to enable a system-managed identity for a topic. Basically, you select the option **Enable system assigned identity** on the **Advanced** page of the topic creation wizard. You'll see this option on the **Advanced** page of the domain creation wizard too.
+You can enable system-assigned identity for a custom topic or domain while you create it in the Azure portal. The following image shows how to enable a system-managed identity for a custom topic. Basically, you select the option **Enable system assigned identity** on the **Advanced** page of the topic creation wizard. You'll see this option on the **Advanced** page of the domain creation wizard too.
-![Enable identity while creating a topic](./media/managed-service-identity/create-topic-identity.png)
+![Enable identity while creating a custom topic](./media/managed-service-identity/create-topic-identity.png)
### Use the Azure CLI
-You can also use the Azure CLI to create a topic or domain with a system-assigned identity. Use the `az eventgrid topic create` command with the `--identity` parameter set to `systemassigned`. If you don't specify a value for this parameter, the default value `noidentity` is used.
+You can also use the Azure CLI to create a custom topic or domain with a system-assigned identity. Use the `az eventgrid topic create` command with the `--identity` parameter set to `systemassigned`. If you don't specify a value for this parameter, the default value `noidentity` is used.
```azurecli-interactive
-# create a topic with a system-assigned identity
+# create a custom topic with a system-assigned identity
az eventgrid topic create -g <RESOURCE GROUP NAME> --name <TOPIC NAME> -l <LOCATION> --identity systemassigned
```

Similarly, you can use the `az eventgrid domain create` command to create a domain with a system-managed identity.
-## Enable an identity for an existing topic or domain
-In the previous section, you learned how to enable a system-managed identity while you created a topic or a domain. In this section, you learn how to enable a system-managed identity for an existing topic or domain.
+## Enable an identity for an existing custom topic or domain
+In the previous section, you learned how to enable a system-managed identity while you created a custom topic or a domain. In this section, you learn how to enable a system-managed identity for an existing custom topic or domain.
### Use the Azure portal
-The following procedure shows you how to enable system-managed identity for a topic. The steps for enabling an identity for a domain are similar.
+The following procedure shows you how to enable system-managed identity for a custom topic. The steps for enabling an identity for a domain are similar.
1. Go to the [Azure portal](https://portal.azure.com).
2. Search for **event grid topics** in the search bar at the top.
-3. Select the **topic** for which you want to enable the managed identity.
+3. Select the **custom topic** for which you want to enable the managed identity.
4. Switch to the **Identity** tab.
5. Turn **on** the switch to enable the identity.
1. Select **Save** on the toolbar to save the setting.
- :::image type="content" source="./media/managed-service-identity/identity-existing-topic.png" alt-text="Identity page for a topic":::
+ :::image type="content" source="./media/managed-service-identity/identity-existing-topic.png" alt-text="Identity page for a custom topic":::
You can use similar steps to enable an identity for an event grid domain.

### Use the Azure CLI
-Use the `az eventgrid topic update` command with `--identity` set to `systemassigned` to enable system-assigned identity for an existing topic. If you want to disable the identity, specify `noidentity` as the value.
+Use the `az eventgrid topic update` command with `--identity` set to `systemassigned` to enable system-assigned identity for an existing custom topic. If you want to disable the identity, specify `noidentity` as the value.
```azurecli-interactive
# Update the topic to assign a system-assigned identity.
@@ -62,9 +62,9 @@ az eventgrid topic update -g $rg --name $topicname --identity systemassigned --s
The command for updating an existing domain is similar (`az eventgrid domain update`). ## Supported destinations and Azure roles
-After you enable identity for your event grid topic or domain, Azure automatically creates an identity in Azure Active Directory. Add this identity to appropriate Azure roles so that the topic or domain can forward events to supported destinations. For example, add the identity to the **Azure Event Hubs Data Sender** role for an Azure Event Hubs namespace so that the event grid topic can forward events to event hubs in that namespace.
+After you enable identity for your event grid custom topic or domain, Azure automatically creates an identity in Azure Active Directory. Add this identity to appropriate Azure roles so that the custom topic or domain can forward events to supported destinations. For example, add the identity to the **Azure Event Hubs Data Sender** role for an Azure Event Hubs namespace so that the event grid custom topic can forward events to event hubs in that namespace.
-Currently, Azure event grid supports topics or domains configured with a system-assigned managed identity to forward events to the following destinations. This table also gives you the roles that the identity should be in so that the topic can forward the events.
+Currently, Azure Event Grid supports custom topics or domains configured with a system-assigned managed identity to forward events to the following destinations. This table also lists the roles that the identity should be assigned so that the custom topic can forward the events.
| Destination | Azure role |
| -- | -- |
@@ -74,35 +74,35 @@ Currently, Azure event grid supports topics or domains configured with a system-
| Azure Queue storage |[Storage Queue Data Message Sender](../storage/common/storage-auth-aad-rbac-portal.md#azure-roles-for-blobs-and-queues) |

## Add an identity to Azure roles on destinations
-This section describes how to add the identity for your topic or domain to an Azure role.
+This section describes how to add the identity for your custom topic or domain to an Azure role.
### Use the Azure portal
-You can use the Azure portal to assign the topic or domain identity to an appropriate role so that the topic or domain can forward events to the destination.
+You can use the Azure portal to assign the custom topic or domain identity to an appropriate role so that the custom topic or domain can forward events to the destination.
-The following example adds a managed identity for an event grid topic named **msitesttopic** to the **Azure Service Bus Data Sender** role for a Service Bus namespace that contains a queue or topic resource. When you add to the role at the namespace level, the topic can forward events to all entities within the namespace.
+The following example adds a managed identity for an event grid custom topic named **msitesttopic** to the **Azure Service Bus Data Sender** role for a Service Bus namespace that contains a queue or topic resource. When you add to the role at the namespace level, the event grid custom topic can forward events to all entities within the namespace.
1. Go to your **Service Bus namespace** in the [Azure portal](https://portal.azure.com).
1. Select **Access Control** in the left pane.
1. Select **Add** in the **Add a role assignment** section.
1. On the **Add a role assignment** page, do the following steps:
   1. Select the role. In this case, it's **Azure Service Bus Data Sender**.
- 1. Select the **identity** for your topic or domain.
+ 1. Select the **identity** for your event grid custom topic or domain.
1. Select **Save** to save the configuration.

The steps are similar for adding an identity to other roles mentioned in the table.

### Use the Azure CLI
-The example in this section shows you how to use the Azure CLI to add an identity to an Azure role. The sample commands are for event grid topics. The commands for event grid domains are similar.
+The example in this section shows you how to use the Azure CLI to add an identity to an Azure role. The sample commands are for event grid custom topics. The commands for event grid domains are similar.
-#### Get the principal ID for the topic's system identity
-First, get the principal ID of the topic's system-managed identity and assign the identity to appropriate roles.
+#### Get the principal ID for the custom topic's system identity
+First, get the principal ID of the custom topic's system-managed identity and assign the identity to appropriate roles.
```azurecli-interactive
topic_pid=$(az ad sp list --display-name "$<TOPIC NAME>" --query [].objectId -o tsv)
```

#### Create a role assignment for event hubs at various scopes
-The following CLI example shows how to add a topic's identity to the **Azure Event Hubs Data Sender** role at the namespace level or at the event hub level. If you create the role assignment at the namespace level, the topic can forward events to all event hubs in that namespace. If you create a role assignment at the event hub level, the topic can forward events only to that specific event hub.
+The following CLI example shows how to add a custom topic's identity to the **Azure Event Hubs Data Sender** role at the namespace level or at the event hub level. If you create the role assignment at the namespace level, the custom topic can forward events to all event hubs in that namespace. If you create a role assignment at the event hub level, the custom topic can forward events only to that specific event hub.
```azurecli-interactive
@@ -118,7 +118,7 @@ az role assignment create --role "$role" --assignee "$topic_pid" --scope "$event
```

#### Create a role assignment for a Service Bus topic at various scopes
-The following CLI example shows how to add a topic's identity to the **Azure Service Bus Data Sender** role at the namespace level or at the Service Bus topic level. If you create the role assignment at the namespace level, the event grid topic can forward events to all entities (Service Bus queues or topics) within that namespace. If you create a role assignment at the Service Bus queue or topic level, the event grid topic can forward events only to that specific Service Bus queue or topic.
+The following CLI example shows how to add an event grid custom topic's identity to the **Azure Service Bus Data Sender** role at the namespace level or at the Service Bus topic level. If you create the role assignment at the namespace level, the event grid topic can forward events to all entities (Service Bus queues or topics) within that namespace. If you create a role assignment at the Service Bus queue or topic level, the event grid custom topic can forward events only to that specific Service Bus queue or topic.
```azurecli-interactive
role="Azure Service Bus Data Sender"
@@ -133,7 +133,7 @@ az role assignment create --role "$role" --assignee "$topic_pid" --scope "$sbust
```

## Create event subscriptions that use an identity
-After you have a topic or a domain with a system-managed identity and have added the identity to the appropriate role on the destination, you're ready to create subscriptions that use the identity.
+After you have an event grid custom topic or a domain with a system-managed identity and have added the identity to the appropriate role on the destination, you're ready to create subscriptions that use the identity.
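Taken together, the steps in this article form a pipeline: enable the identity, grant it a role on the destination, then create a subscription that delivers with that identity. The dry-run sketch below only prints each command instead of executing it; every resource name and ID is a placeholder, and the `--delivery-identity*` parameter names are an assumption to verify against `az eventgrid event-subscription create --help`.

```shell
# Dry-run sketch of the full flow. run() prints each command rather than
# executing it, so no Azure subscription is needed to trace the steps.
run() { echo "+ $*"; }

# Placeholder names and IDs -- replace with your own resources.
rg="my-resource-group"
topicname="my-topic"
topic_pid="00000000-0000-0000-0000-000000000000"   # placeholder principal ID
sbnamespaceid="/subscriptions/<SUB ID>/resourceGroups/$rg/providers/Microsoft.ServiceBus/namespaces/my-sb-namespace"
queueid="$sbnamespaceid/queues/my-queue"
topicid="/subscriptions/<SUB ID>/resourceGroups/$rg/providers/Microsoft.EventGrid/topics/$topicname"

# 1. Enable the system-assigned identity on the custom topic.
run az eventgrid topic update -g "$rg" --name "$topicname" --identity systemassigned
# 2. Grant the identity the sender role on the destination namespace.
run az role assignment create --role "Azure Service Bus Data Sender" --assignee "$topic_pid" --scope "$sbnamespaceid"
# 3. Create a subscription that delivers using the identity.
run az eventgrid event-subscription create --name my-sub --source-resource-id "$topicid" --delivery-identity systemassigned --delivery-identity-endpoint-type servicebusqueue --delivery-identity-endpoint "$queueid"
```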
### Use the Azure portal

When you create an event subscription, you see an option to enable the use of a system-assigned identity for an endpoint in the **ENDPOINT DETAILS** section.
@@ -152,7 +152,7 @@ First, specify values for the following variables to be used in the CLI command.
```azurecli-interactive
subid="<AZURE SUBSCRIPTION ID>"
-rg = "<RESOURCE GROUP of EVENT GRID TOPIC>"
+rg="<RESOURCE GROUP of EVENT GRID CUSTOM TOPIC>"
topicname="<EVENT GRID TOPIC NAME>"

# get the service bus queue resource id
@@ -161,7 +161,7 @@ sb_esname = "<Specify a name for the event subscription>"
```

#### Create an event subscription by using a managed identity for delivery
-This sample command creates an event subscription for an event grid topic with an endpoint type set to **Service Bus queue**.
+This sample command creates an event subscription for an event grid custom topic with an endpoint type set to **Service Bus queue**.
```azurecli-interactive
az eventgrid event-subscription create
@@ -173,7 +173,7 @@ az eventgrid event-subscription create
```

#### Create an event subscription by using a managed identity for delivery and dead-lettering
-This sample command creates an event subscription for an event grid topic with an endpoint type set to **Service Bus queue**. It also specifies that the system-managed identity is to be used for dead-lettering.
+This sample command creates an event subscription for an event grid custom topic with an endpoint type set to **Service Bus queue**. It also specifies that the system-managed identity is to be used for dead-lettering.
```azurecli-interactive
storageid=$(az storage account show --name demoStorage --resource-group gridResourceGroup --query id --output tsv)
@@ -195,15 +195,15 @@ In this section, you learn how to use the Azure CLI to enable the use of a syste
#### Define variables

```azurecli-interactive
subid="<AZURE SUBSCRIPTION ID>"
-rg = "<RESOURCE GROUP of EVENT GRID TOPIC>"
-topicname = "<EVENT GRID TOPIC NAME>"
+rg="<RESOURCE GROUP of EVENT GRID CUSTOM TOPIC>"
+topicname="<EVENT GRID CUSTOM TOPIC NAME>"
hubid=$(az eventhubs eventhub show --name <EVENT HUB NAME> --namespace-name <NAMESPACE NAME> --resource-group <RESOURCE GROUP NAME> --query id --output tsv)
eh_esname="<SPECIFY EVENT SUBSCRIPTION NAME>"
```

#### Create an event subscription by using a managed identity for delivery
-This sample command creates an event subscription for an event grid topic with an endpoint type set to **Event Hubs**.
+This sample command creates an event subscription for an event grid custom topic with an endpoint type set to **Event Hubs**.
```azurecli-interactive
az eventgrid event-subscription create
@@ -215,7 +215,7 @@ az eventgrid event-subscription create
```

#### Create an event subscription by using a managed identity for delivery + deadletter
-This sample command creates an event subscription for an event grid topic with an endpoint type set to **Event Hubs**. It also specifies that the system-managed identity is to be used for dead-lettering.
+This sample command creates an event subscription for an event grid custom topic with an endpoint type set to **Event Hubs**. It also specifies that the system-managed identity is to be used for dead-lettering.
```azurecli-interactive
storageid=$(az storage account show --name demoStorage --resource-group gridResourceGroup --query id --output tsv)
@@ -238,8 +238,8 @@ In this section, you learn how to use the Azure CLI to enable the use of a syste
```azurecli-interactive
subid="<AZURE SUBSCRIPTION ID>"
-rg = "<RESOURCE GROUP of EVENT GRID TOPIC>"
-topicname = "<EVENT GRID TOPIC NAME>"
+rg="<RESOURCE GROUP of EVENT GRID CUSTOM TOPIC>"
+topicname="<EVENT GRID CUSTOM TOPIC NAME>"
# get the storage account resource id
storageid=$(az storage account show --name <STORAGE ACCOUNT NAME> --resource-group <RESOURCE GROUP NAME> --query id --output tsv)
@@ -280,9 +280,9 @@ az eventgrid event-subscription create
## Private endpoints

Currently, it's not possible to deliver events using [private endpoints](../private-link/private-endpoint-overview.md). That is, there's no support if you have strict network isolation requirements where the traffic for delivered events must not leave the private IP space.
-However, if your requirements call for a secure way to send events using an encrypted channel and a known identity of the sender (in this case, Event Grid) using public IP space, you could deliver events to Event Hubs, Service Bus, or Azure Storage service using an Azure event grid topic or a domain with system-managed identity configured as shown in this article. Then, you can use a private link configured in Azure Functions or your webhook deployed on your virtual network to pull events. See the sample: [Connect to private endpoints with Azure Functions](/samples/azure-samples/azure-functions-private-endpoints/connect-to-private-endpoints-with-azure-functions/).
+However, if your requirements call for a secure way to send events using an encrypted channel and a known identity of the sender (in this case, Event Grid) using public IP space, you could deliver events to Event Hubs, Service Bus, or Azure Storage service using an Azure event grid custom topic or a domain with system-managed identity configured as shown in this article. Then, you can use a private link configured in Azure Functions or your webhook deployed on your virtual network to pull events. See the sample: [Connect to private endpoints with Azure Functions](/samples/azure-samples/azure-functions-private-endpoints/connect-to-private-endpoints-with-azure-functions/).
-Note that under this configuration, the traffic goes over the public IP/internet from Event Grid to Event Hubs, Service Bus, or Azure Storage, but the channel can be encrypted and a managed identity of Event Grid is used. If you configure your Azure Functions or webhook deployed to your virtual network to use an Event Hubs, Service Bus, or Azure Storage via private link, that section of the traffic will evidently stay within Azure.
+Under this configuration, the traffic goes over the public IP/internet from Event Grid to Event Hubs, Service Bus, or Azure Storage, but the channel can be encrypted and a managed identity of Event Grid is used. If you configure your Azure Functions app or webhook deployed to your virtual network to use Event Hubs, Service Bus, or Azure Storage via private link, that section of the traffic stays within Azure.
## Next steps
event-grid https://docs.microsoft.com/en-us/azure/event-grid/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/overview.md
@@ -2,7 +2,7 @@
Title: What is Azure Event Grid? description: Send event data from a source to handlers with Azure Event Grid. Build event-based applications, and integrate with Azure services. Previously updated : 09/24/2020 Last updated : 01/28/2021 # What is Azure Event Grid?
@@ -15,7 +15,7 @@ Azure Event Grid is deployed to maximize availability by natively spreading acro
This article provides an overview of Azure Event Grid. If you want to get started with Event Grid, see [Create and route custom events with Azure Event Grid](custom-event-quickstart.md).
-:::image type="content" source="./media/overview/functional-model.png" alt-text="Event Grid model of sources and handlers" lightbox="./media/overview/functional-model.png":::
+:::image type="content" source="./media/overview/functional-model.png" alt-text="Event Grid model of sources and handlers" lightbox="./media/overview/functional-model-big.png":::
This image shows how Event Grid connects sources and handlers, and isn't a comprehensive list of supported integrations.
@@ -37,6 +37,7 @@ Currently, the following Azure services support sending events to Event Grid. Fo
- [Azure Service Bus](event-schema-service-bus.md)
- [Azure SignalR](event-schema-azure-signalr.md)
- [Azure subscriptions](event-schema-subscriptions.md)
+- [Azure Cache for Redis](event-schema-azure-cache.md)
## Event handlers
event-grid https://docs.microsoft.com/en-us/azure/event-grid/system-topics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/system-topics.md
@@ -29,6 +29,7 @@ Here is the current list of Azure services that support creation of system topic
- [Azure Service Bus](event-schema-service-bus.md)
- [Azure SignalR](event-schema-azure-signalr.md)
- [Azure subscriptions](event-schema-subscriptions.md)
+- [Azure Cache for Redis](event-schema-azure-cache.md)
## System topics as Azure resources

In the past, a system topic was implicit and wasn't exposed for simplicity. System topics are now visible as Azure resources and provide the following capabilities:
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/authenticate-shared-access-signature https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/authenticate-shared-access-signature.md
@@ -197,7 +197,7 @@ For example, to define authorization rules scoped down to only sending/publishin
> [!NOTE]
-> Although it's not recommended, it is possible to equip devices with tokens that grant access to an event hub or a namespace. Any device that holds this token can send messages directly to that event hub. Furthermore, the device cannot be blacklisted from sending to that event hub.
+> Although it's not recommended, it is possible to equip devices with tokens that grant access to an event hub or a namespace. Any device that holds this token can send messages directly to that event hub. Furthermore, the device cannot be blocklisted from sending to that event hub.
>
> It's always recommended to give specific and granular scopes.
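To make the note concrete, here's a sketch of how such a device token is minted. The namespace, event hub, policy name, and key below are placeholders; the token shape (`SharedAccessSignature sr=...&sig=...&se=...&skn=...`, with the signature an HMAC-SHA256 over the URL-encoded resource URI and the expiry timestamp) follows the documented SAS format.

```shell
# Sketch: mint an Event Hubs SAS token scoped to a single event hub.
# All names and the signing key are placeholders, not real credentials.
urlencode() {
  local s=$1 out='' c h i
  for ((i = 0; i < ${#s}; i++)); do
    c=${s:i:1}
    case $c in
      [a-zA-Z0-9.~_-]) out+=$c ;;                 # unreserved characters pass through
      *) printf -v h '%%%02X' "'$c"; out+=$h ;;   # percent-encode everything else
    esac
  done
  printf '%s' "$out"
}

uri="https://contoso.servicebus.windows.net/myeventhub"
policy="send-only-policy"
key="placeholder-signing-key"
expiry=$(( $(date +%s) + 3600 ))   # token valid for one hour

encoded_uri=$(urlencode "$uri")
# String to sign is the URL-encoded resource URI, a newline, then the expiry.
signature=$(printf '%s\n%s' "$encoded_uri" "$expiry" \
    | openssl dgst -sha256 -hmac "$key" -binary | base64)
printf 'SharedAccessSignature sr=%s&sig=%s&se=%s&skn=%s\n' \
    "$encoded_uri" "$(urlencode "$signature")" "$expiry" "$policy"
```

A device holding only this token can send to `myeventhub`, which is exactly why the note recommends narrow scopes and warns against distributing broadly scoped tokens.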
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations-providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
@@ -109,7 +109,7 @@ The following table shows connectivity locations and the service providers for e
| **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | 10G, 100G | AARNet, Devoli, Equinix, Megaport, NEXTDC, Optus, Telstra Corporation, TPG Telecom |
| **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | 10G, 100G | Claro, C3ntro, Equinix, Megaport, Neutrona Networks |
| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | n/a | 10G | Colt, Equinix, Fastweb, IRIDEOS, Retelit |
-| **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) | 1 | n/a | 10G, 100G | Cologix |
+| **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) | 1 | n/a | 10G, 100G | Cologix, Megaport |
| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | 10G, 100G | Bell Canada, Cologix, Fibrenoire, Megaport, Telus, Zayo |
| **Mumbai** | Tata Communications | 2 | West India | 10G | DE-CIX, Global CloudXchange (GCX), Reliance Jio, Sify, Tata Communications, Verizon |
| **Mumbai2** | Airtel | 2 | West India | 10G | Airtel, Sify, Vodafone Idea |
@@ -140,7 +140,7 @@ The following table shows connectivity locations and the service providers for e
| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | 10G, 100G | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Verizon |
| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | 10G, 100G | AT TOKYO, Megaport, Tokai Communications |
| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | 10G, 100G | AT&T NetBond, Bell Canada, CenturyLink Cloud Connect, Cologix, Equinix, IX Reach, Megaport, Telus, Verizon, Zayo |
-| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | 10G | Cologix, Telus |
+| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | 10G | Cologix, Megaport, Telus |
| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/) | 1 | East US, East US 2 | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, Internet2, InterCloud, IX Reach, Level 3 Communications, Megaport, Neutrona Networks, NTT Communications, Orange, PacketFabric, SES, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo |
| **Washington DC2** | [Coresite Reston](https://www.coresite.com/data-center-locations/northern-virginia-washington-dc) | 1 | East US, East US 2 | 10G, 100G | CenturyLink Cloud Connect, Coresite, Intelsat, Megaport, Viasat, Zayo |
| **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | 10G, 100G | Equinix, Intercloud, Interxion, Megaport, Swisscom |
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
@@ -128,7 +128,7 @@ The following table shows locations by service provider. If you want to view ava
| **[Level 3 Communications](http://your.level3.com/LP=882?WT.tsrc=02192014LP882AzureVanityAzureText)** |Supported |Supported |Amsterdam, Chicago, Dallas, London, Newport (Wales), Sao Paulo, Seattle, Silicon Valley, Singapore, Washington DC |
| **LG CNS** |Supported |Supported |Busan, Seoul |
| **[Liquid Telecom](https://www.liquidtelecom.com/products-and-services/cloud.html)** |Supported |Supported |Cape Town, Johannesburg |
-| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported |Amsterdam, Atlanta, Auckland, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Melbourne, Miami, Montreal, New York, Osaka, Oslo, Perth, Quebec City, San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2 Toronto, Washington DC, Washington DC2, Zurich |
+| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported |Amsterdam, Atlanta, Auckland, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Melbourne, Miami, Minneapolis, Montreal, New York, Osaka, Oslo, Perth, Quebec City, San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2, Toronto, Vancouver, Washington DC, Washington DC2, Zurich |
| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** |Supported |Supported |London |
| **[Neutrona Networks](https://www.neutrona.com/index.php/azure-expressroute/)** |Supported |Supported |Dallas, Los Angeles, Miami, Sao Paulo, Washington DC |
| **[Next Generation Data](https://vantage-dc-cardiff.co.uk/)** |Supported |Supported |Newport (Wales) |
expressroute https://docs.microsoft.com/en-us/azure/expressroute/how-to-npm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/how-to-npm.md
@@ -166,7 +166,7 @@ For more information about NSG, see [Network Security Groups](../virtual-network
## <a name="setupmonitor"></a>Step 4: Discover peering connections
-1. Navigate to the Network Performance Monitor overview tile by going to the **All Resources** page, then click on the whitelisted NPM Workspace.
+1. Navigate to the Network Performance Monitor overview tile by going to the **All Resources** page, then click on the allowlisted NPM Workspace.
![npm workspace](./media/how-to-npm/npm.png)

2. Click the **Network Performance Monitor** overview tile to bring up the dashboard. The dashboard contains an ExpressRoute page, which shows that ExpressRoute is in an 'unconfigured state'. Click **Feature Setup** to open the Network Performance Monitor configuration page.
hpc-cache https://docs.microsoft.com/en-us/azure/hpc-cache/hpc-cache-add-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-add-storage.md
@@ -4,7 +4,7 @@ description: How to define storage targets so that your Azure HPC Cache can use
Previously updated : 09/30/2020 Last updated : 01/28/2021
@@ -161,19 +161,21 @@ An NFS storage target has different settings from a Blob storage target. The usa
When you create a storage target that points to an NFS storage system, you need to choose the usage model for that target. This model determines how your data is cached.
+The built-in usage models let you choose how to balance fast response with the risk of getting stale data. If you want to optimize file read speed, you might not care whether the files in the cache are checked against the back-end files. On the other hand, if you want to make sure your files are always up to date with the remote storage, choose a model that checks frequently.
+
There are three options:

* **Read heavy, infrequent writes** - Use this option if you want to speed up read access to files that are static or rarely changed.
- This option caches files that clients read, but passes writes through to the back-end storage immediately. Files stored in the cache are never compared to the files on the NFS storage volume.
+ This option caches files that clients read, but passes writes through to the back-end storage immediately. Files stored in the cache are not automatically compared to the files on the NFS storage volume. (Read the note below about back-end verification to learn more.)
- Do not use this option if there is a risk that a file might be modified directly on the storage system without first writing it to the cache. If that happens, the cached version of the file will never be updated with changes from the back end, and the data set can become inconsistent.
+ Do not use this option if there is a risk that a file might be modified directly on the storage system without first writing it to the cache. If that happens, the cached version of the file will be out of sync with the back-end file.
* **Greater than 15% writes** - This option speeds up both read and write performance. When using this option, all clients must access files through the Azure HPC Cache instead of mounting the back-end storage directly. The cached files will have recent changes that are not stored on the back end.
- In this usage model, files in the cache are not checked against the files on back-end storage. The cached version of the file is assumed to be more current. A modified file in the cache is written to the back-end storage system after it has been in the cache for an hour with no additional changes.
+ In this usage model, files in the cache are only checked against the files on back-end storage every eight hours. The cached version of the file is assumed to be more current. A modified file in the cache is written to the back-end storage system after it has been in the cache for an hour with no additional changes.
-* **Clients write to the NFS target, bypassing the cache** - Choose this option if any clients in your workflow write data directly to the storage system without first writing to the cache. Files that clients request are cached, but any changes to those files from the client are passed back to the back-end storage system immediately.
+* **Clients write to the NFS target, bypassing the cache** - Choose this option if any clients in your workflow write data directly to the storage system without first writing to the cache, or if you want to optimize data consistency. Files that clients request are cached, but any changes to those files from the client are passed back to the back-end storage system immediately.
With this usage model, the files in the cache are frequently checked against the back-end versions for updates. This verification allows files to be changed outside of the cache while maintaining data consistency.
@@ -182,9 +184,12 @@ This table summarizes the usage model differences:
| Usage model | Caching mode | Back-end verification | Maximum write-back delay | |-|--|--|--| | Read heavy, infrequent writes | Read | Never | None |
-| Greater than 15% writes | Read/write | Never | 1 hour |
+| Greater than 15% writes | Read/write | 8 hours | 1 hour |
| Clients bypass the cache | Read | 30 seconds | None |
+> [!NOTE]
+> The **Back-end verification** value shows when the cache automatically compares its files with source files in remote storage. However, you can force Azure HPC Cache to compare files by performing a directory operation that includes a readdirplus request. Readdirplus is a standard NFS API (also called extended read) that returns directory metadata, which causes the cache to compare and update files.
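As a concrete way to trigger that comparison, a long-format directory listing generally causes the NFS client to issue readdirplus for the listed entries. A minimal sketch, assuming a hypothetical mount point (`/mnt/hpccache` style paths are placeholders; the default below just keeps the sketch runnable):

```shell
# Hypothetical mount point; replace with your own HPC Cache NFS mount,
# for example MOUNT=/mnt/hpccache/project-data.
MOUNT="${MOUNT:-/tmp}"

# A long-format listing fetches per-entry metadata, which NFS clients commonly
# send as a readdirplus request, prompting the cache to re-verify its copies
# against the back-end storage target.
ls -l "$MOUNT" > /dev/null && echo "listing complete"
```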
+
### Create an NFS storage target

### [Portal](#tab/azure-portal)
iot-dps https://docs.microsoft.com/en-us/azure/iot-dps/how-to-legacy-device-symm-key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/how-to-legacy-device-symm-key.md
@@ -3,11 +3,11 @@ Title: Provision devices using symmetric keys - Azure IoT Hub Device Provisionin
description: How to use symmetric keys to provision devices with your Device Provisioning Service (DPS) instance Previously updated : 07/13/2020 Last updated : 01/28/2021 -+ # How to provision devices using symmetric key enrollment groups
@@ -16,9 +16,7 @@ This article demonstrates how to securely provision multiple symmetric key devic
Some devices may not have a certificate, TPM, or any other security feature that can be used to securely identify the device. The Device Provisioning Service includes [symmetric key attestation](concepts-symmetric-key-attestation.md). Symmetric key attestation can be used to identify a device based on unique information such as the MAC address or a serial number.
-If you can easily install a [hardware security module (HSM)](concepts-service.md#hardware-security-module) and a certificate, then that may be a better approach for identifying and provisioning your devices. Since that approach may allow you to bypass updating the code deployed to all your devices, and you would not have a secret key embedded in your device image.
-
-This article assumes that neither an HSM or a certificate is a viable option. However, it is assumed that you do have some method of updating device code to use the Device Provisioning Service to provision these devices.
+If you can easily install a [hardware security module (HSM)](concepts-service.md#hardware-security-module) and a certificate, then that may be a better approach for identifying and provisioning your devices. Using an HSM allows you to bypass updating the code deployed to all your devices, and you would not have a secret key embedded in your device images. This article assumes that neither an HSM nor a certificate is a viable option. However, it is assumed that you do have some method of updating device code to use the Device Provisioning Service to provision these devices.
This article also assumes that the device update takes place in a secure environment to prevent unauthorized access to the master group key or the derived device key.
@@ -137,65 +135,65 @@ In this example, we use a combination of a MAC address and serial number forming
sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
```
-Create a unique registration ID for your device. Valid characters are lowercase alphanumeric and dash ('-').
+Create unique registration IDs for each device. Valid characters are lowercase alphanumeric and dash ('-').
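The combined serial-plus-MAC format shown above can be generated and validated with a small script. A sketch using the sample serial number and MAC address from this article:

```shell
# Build a registration ID in the serial-plus-MAC format used in this article:
# lowercase the MAC address and replace its colons with dashes.
serial="sn-007-888-abc"
mac="A1:B2:C3:D4:E5:F6"
reg_id="${serial}-mac-$(echo "$mac" | tr 'A-Z' 'a-z' | tr ':' '-')"
echo "$reg_id"   # -> sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6

# Check that only valid characters (lowercase alphanumerics and dashes) remain.
case "$reg_id" in
  *[!a-z0-9-]*) echo "invalid" ;;
  *) echo "valid" ;;
esac
```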
## Derive a device key
-To generate the device key, use the group master key to compute an [HMAC-SHA256](https://wikipedia.org/wiki/HMAC) of the unique registration ID for the device and convert the result into Base64 format.
+To generate device keys, use the enrollment group master key to compute an [HMAC-SHA256](https://wikipedia.org/wiki/HMAC) of each device's registration ID, and then convert the result into Base64 format.
> [!WARNING]
-> Your device code should only include the derived device key for the individual device. Do not include your group master key in your device code.
+> Your device code for each device should only include the corresponding derived device key for that device. Do not include your group master key in your device code.
> A compromised master key has the potential to compromise the security of all devices being authenticated with it.
-#### Linux workstations
+# [Windows](#tab/windows)
-If you are using a Linux workstation, you can use openssl to generate your
-derived device key as shown in the following example.
+If you are using a Windows-based workstation, you can use PowerShell to generate your derived device key as shown in the following example.
Replace the value of **KEY** with the **Primary Key** you noted earlier. Replace the value of **REG_ID** with your registration ID.
-```bash
-KEY=8isrFI1sGsIlvvFSSFRiMfCNzv21fjbE/+ah/lSh3lF8e2YG1Te7w1KpZhJFFXJrqYKi9yegxkqIChbqOS9Egw==
-REG_ID=sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
+```powershell
+$KEY='8isrFI1sGsIlvvFSSFRiMfCNzv21fjbE/+ah/lSh3lF8e2YG1Te7w1KpZhJFFXJrqYKi9yegxkqIChbqOS9Egw=='
+$REG_ID='sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6'
-keybytes=$(echo $KEY | base64 --decode | xxd -p -u -c 1000)
-echo -n $REG_ID | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64
+$hmacsha256 = New-Object System.Security.Cryptography.HMACSHA256
+$hmacsha256.key = [Convert]::FromBase64String($KEY)
+$sig = $hmacsha256.ComputeHash([Text.Encoding]::ASCII.GetBytes($REG_ID))
+$derivedkey = [Convert]::ToBase64String($sig)
+echo "`n$derivedkey`n"
```
-```bash
+```powershell
Jsm0lyGpjaVYVP2g3FnmnmG9dI/9qU24wNoykUmermc=
```
+# [Linux](#tab/linux)
-#### Windows-based workstations
-
-If you are using a Windows-based workstation, you can use PowerShell to generate your derived device key as shown in the following example.
+If you are using a Linux workstation, you can use openssl to generate your
+derived device key as shown in the following example.
Replace the value of **KEY** with the **Primary Key** you noted earlier. Replace the value of **REG_ID** with your registration ID.
-```powershell
-$KEY='8isrFI1sGsIlvvFSSFRiMfCNzv21fjbE/+ah/lSh3lF8e2YG1Te7w1KpZhJFFXJrqYKi9yegxkqIChbqOS9Egw=='
-$REG_ID='sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6'
+```bash
+KEY=8isrFI1sGsIlvvFSSFRiMfCNzv21fjbE/+ah/lSh3lF8e2YG1Te7w1KpZhJFFXJrqYKi9yegxkqIChbqOS9Egw==
+REG_ID=sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
-$hmacsha256 = New-Object System.Security.Cryptography.HMACSHA256
-$hmacsha256.key = [Convert]::FromBase64String($KEY)
-$sig = $hmacsha256.ComputeHash([Text.Encoding]::ASCII.GetBytes($REG_ID))
-$derivedkey = [Convert]::ToBase64String($sig)
-echo "`n$derivedkey`n"
+keybytes=$(echo $KEY | base64 --decode | xxd -p -u -c 1000)
+echo -n $REG_ID | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64
```
-```powershell
+```bash
Jsm0lyGpjaVYVP2g3FnmnmG9dI/9qU24wNoykUmermc=
```
+
-Your device will use the derived device key with your unique registration ID to perform symmetric key attestation with the enrollment group during provisioning.
+Each device uses its derived device key and unique registration ID to perform symmetric key attestation with the enrollment group during provisioning.
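For a fleet, the same derivation can be scripted over a list of registration IDs. This sketch reuses the example group key from this article; the second registration ID is a hypothetical additional device:

```shell
KEY='8isrFI1sGsIlvvFSSFRiMfCNzv21fjbE/+ah/lSh3lF8e2YG1Te7w1KpZhJFFXJrqYKi9yegxkqIChbqOS9Egw=='
keybytes=$(echo "$KEY" | base64 --decode | xxd -p -u -c 1000)

derive_key() {
    # Same per-device operation as above: HMAC-SHA256 of the registration ID,
    # keyed with the decoded group master key, then Base64-encoded.
    echo -n "$1" | openssl sha256 -mac HMAC -macopt "hexkey:$keybytes" -binary | base64
}

# The second registration ID is a hypothetical extra device.
for reg_id in sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6 \
              sn-007-999-xyz-mac-01-02-03-04-05-06; do
    # For the first ID, this prints the same derived key shown above.
    echo "$reg_id: $(derive_key "$reg_id")"
done
```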
@@ -203,7 +201,7 @@ Your device will use the derived device key with your unique registration ID to
In this section, you will update a provisioning sample named **prov\_dev\_client\_sample** located in the Azure IoT C SDK you set up earlier.
-This sample code simulates a device boot sequence that sends the provisioning request to your Device Provisioning Service instance. The boot sequence will cause the device to be recognized and assigned to the IoT hub you configured on the enrollment group.
+This sample code simulates a device boot sequence that sends the provisioning request to your Device Provisioning Service instance. The boot sequence causes the device to be recognized and assigned to the IoT hub you configured on the enrollment group. Repeat this process for each device that you provision through the enrollment group.
1. In the Azure portal, select the **Overview** tab for your Device Provisioning service and note down the **_ID Scope_** value.
@@ -277,14 +275,11 @@ This sample code simulates a device boot sequence that sends the provisioning re
## Security concerns
-Be aware that this leaves the derived device key included as part of the image, which is not a recommended security best practice. This is one reason why security and ease-of-use are tradeoffs.
---
+Be aware that this leaves the derived device key included as part of the image for each device, which is not a recommended security best practice. This is one reason why security and ease-of-use are often tradeoffs. You must fully review the security of your devices based on your own requirements.
## Next steps

* To learn more about reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
* [Quickstart: Provision a simulated device with symmetric keys](quick-create-simulated-device-symm-key.md)
-* To learn more Deprovisioning, see [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md)
\ No newline at end of file
+* To learn more about deprovisioning, see [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md)
iot-dps https://docs.microsoft.com/en-us/azure/iot-dps/tutorial-custom-hsm-enrollment-group-x509 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
@@ -3,7 +3,7 @@ Title: Tutorial - Provision X.509 devices to Azure IoT Hub using a custom Hardwa
description: This tutorial uses enrollment groups. In this tutorial, you learn how to provision X.509 devices using a custom Hardware Security Module (HSM) and the C device SDK for Azure IoT Hub Device Provisioning Service (DPS). Previously updated : 11/18/2020 Last updated : 01/28/2021
@@ -13,9 +13,11 @@
# Tutorial: Provision multiple X.509 devices using enrollment groups
-In this tutorial, you will learn how to provision groups of IoT devices that use X.509 certificates for authentication. Sample code from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) will be used to provision your development machine as an IoT device.
+In this tutorial, you will learn how to provision groups of IoT devices that use X.509 certificates for authentication. Sample device code from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) will be executed on your development machine to simulate provisioning of X.509 devices. On real devices, device code would be deployed and run from the IoT device.
-The Azure IoT Device Provisioning Service supports two types of enrollments:
+Make sure you've at least completed the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](quick-setup-auto-provision.md) before continuing with this tutorial. Also, if you're unfamiliar with the process of autoprovisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview.
+
+The Azure IoT Device Provisioning Service supports two types of enrollments for provisioning devices:
* [Enrollment groups](concepts-service.md#enrollment-group): Used to enroll multiple related devices.
* [Individual Enrollments](concepts-service.md#individual-enrollment): Used to enroll a single device.
@@ -24,8 +26,6 @@ This tutorial is similar to the previous tutorials demonstrating how to use enro
This tutorial will demonstrate the [custom HSM sample](https://github.com/Azure/azure-iot-sdk-c/tree/master/provisioning_client/samples/custom_hsm_example) that provides a stub implementation for interfacing with hardware-based secure storage. A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets is not required, but strongly recommended to help protect sensitive information like your device certificate's private key.
-If you're unfamiliar with the process of autoprovisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview. Also, make sure you've completed the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](quick-setup-auto-provision.md) before continuing with this tutorial.
-
In this tutorial you will complete the following objectives:
@@ -40,9 +40,11 @@ In this tutorial you will complete the following objectives:
## Prerequisites
-The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
+The following prerequisites are for a Windows development environment used to simulate the devices. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
-* [Visual Studio](https://visualstudio.microsoft.com/vs/) 2019 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015 and Visual Studio 2017 are also supported.
+* [Visual Studio](https://visualstudio.microsoft.com/vs/) 2019 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015 and Visual Studio 2017 are also supported.
+
+ Visual Studio is used in this article to build the device sample code that would be deployed to IoT devices. This does not imply that Visual Studio is required on the device itself.
* Latest version of [Git](https://git-scm.com/download/) installed.
@@ -102,7 +104,7 @@ In this section, you will prepare a development environment used to build the [A
## Create an X.509 certificate chain
-In this section you, will generate an X.509 certificate chain of three certificates for testing with this tutorial. The certificates will have the following hierarchy.
+In this section, you will generate an X.509 certificate chain of three certificates for testing each device with this tutorial. The certificates will have the following hierarchy.
![Tutorial device certificate chain](./media/tutorial-custom-hsm-enrollment-group-x509/example-device-cert-chain.png#lightbox)
@@ -110,15 +112,17 @@ In this section you, will generate an X.509 certificate chain of three certifica
[Intermediate Certificate](concepts-x509-attestation.md#intermediate-certificate): It's common for intermediate certificates to be used to group devices logically by product lines, company divisions, or other criteria. This tutorial will use a certificate chain composed of one intermediate certificate. The intermediate certificate will be signed by the root certificate. This certificate will also be used on the enrollment group created in DPS to logically group a set of devices. This configuration allows managing a whole group of devices that have device certificates signed by the same intermediate certificate. You can create enrollment groups for enabling or disabling a group of devices. For more information on disabling a group of devices, see [Disallow an X.509 intermediate or root CA certificate by using an enrollment group](how-to-revoke-device-access-portal.md#disallow-an-x509-intermediate-or-root-ca-certificate-by-using-an-enrollment-group)
-[Device certificate](concepts-x509-attestation.md#end-entity-leaf-certificate): The device (leaf) certificate will be signed by the intermediate certificate and stored on the device along with its private key. The device will present this certificate and private key, along with the certificate chain when attempting provisioning.
+[Device certificates](concepts-x509-attestation.md#end-entity-leaf-certificate): The device (leaf) certificates will be signed by the intermediate certificate and stored on the device along with its private key. Ideally these sensitive items would be stored securely with an HSM. Each device will present its certificate and private key, along with the certificate chain when attempting provisioning.
-To create the certificate chain:
+#### Create root and intermediate certificates
-1. Open a Git Bash command prompt. Complete steps 1 and 2 using the Bash shell instructions that are located in [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md#managing-test-ca-certificates-for-samples-and-tutorials).
+To create the root and intermediate portions of the certificate chain:
- This step creates a working directory for the certificate scripts, and generates the example root and intermediate certificate for the certificate chain using openssl.
+1. Open a Git Bash command prompt. Complete steps 1 and 2 using the Bash shell instructions that are located in [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md#managing-test-ca-certificates-for-samples-and-tutorials).
- Notice in the output showing the location of the self-signed root certificate. This certificate will go through [proof of possession](how-to-verify-certificates.md) to verify ownership later.
+ This creates a working directory for the certificate scripts, and generates the example root and intermediate certificate for the certificate chain using openssl.
+
+2. Notice in the output showing the location of the self-signed root certificate. This certificate will go through [proof of possession](how-to-verify-certificates.md) to verify ownership later.
```output
Creating the Root CA Certificate
@@ -138,8 +142,8 @@ To create the certificate chain:
Not After : Nov 22 21:30:30 2020 GMT
Subject: CN=Azure IoT Hub CA Cert Test Only
```
-
- Notice in the output showing the location of the intermediate certificate that is signed/issued by the root certificate. This certificate will be used with the enrollment group you will create later.
+
+3. Notice in the output showing the location of the intermediate certificate that is signed/issued by the root certificate. This certificate will be used with the enrollment group you will create later.
```output
Intermediate CA Certificate Generated At:
@@ -157,8 +161,12 @@ To create the certificate chain:
Not After : Nov 22 21:30:33 2020 GMT
Subject: CN=Azure IoT Hub Intermediate Cert Test Only
```
+
+#### Create device certificates
-2. Next, run the following command to create a new device/leaf certificate with a subject name you give as a parameter. Use the example subject name given for this tutorial, `custom-hsm-device-01`. This subject name will be the device ID for your IoT device.
+To create the device certificates signed by the intermediate certificate in the chain:
+
+1. Run the following command to create a new device/leaf certificate with a subject name you give as a parameter. Use the example subject name given for this tutorial, `custom-hsm-device-01`. This subject name will be the device ID for your IoT device.
> [!WARNING]
> Don't use a subject name with spaces in it. This subject name is the device ID for the IoT device being provisioned.
@@ -189,13 +197,13 @@ To create the certificate chain:
Subject: CN=custom-hsm-device-01
```
-3. Run the following command to create a full certificate chain .pem file that includes the new device certificate.
+2. Run the following command to create a full certificate chain .pem file that includes the new device certificate for `custom-hsm-device-01`.
```Bash
- cd ./certs && cat new-device.cert.pem azure-iot-test-only.intermediate.cert.pem azure-iot-test-only.root.ca.cert.pem > new-device-full-chain.cert.pem && cd ..
+ cd ./certs && cat new-device.cert.pem azure-iot-test-only.intermediate.cert.pem azure-iot-test-only.root.ca.cert.pem > new-device-01-full-chain.cert.pem && cd ..
```
- Use a text editor and open the certificate chain file, *./certs/new-device-full-chain.cert.pem*. The certificate chain text contains the full chain of all three certificates. You will use this text as the certificate chain with the custom HSM code later in this tutorial.
+    Use a text editor and open the certificate chain file, *./certs/new-device-01-full-chain.cert.pem*. The certificate chain text contains the full chain of all three certificates. You will use this text as the certificate chain in the custom HSM device code later in this tutorial for `custom-hsm-device-01`.
The full chain text has the following format:
@@ -211,115 +219,25 @@ To create the certificate chain:
--END CERTIFICATE--
```
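If you want to sanity-check a chain in this format, `openssl verify` can confirm that the leaf chains up through the intermediate to the root. The following self-contained sketch builds a throwaway three-certificate chain (the file and subject names are illustrative, not the tutorial's script output) and verifies it:

```shell
# Build a throwaway three-certificate chain, then verify it.
# All file and subject names here are illustrative.
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.pem \
    -subj "/CN=Throwaway Root" -days 1
openssl req -newkey rsa:2048 -nodes -keyout inter.key -out inter.csr \
    -subj "/CN=Throwaway Intermediate"
openssl x509 -req -in inter.csr -CA root.pem -CAkey root.key -CAcreateserial \
    -out inter.pem -days 1 -extfile <(echo "basicConstraints=CA:TRUE")
openssl req -newkey rsa:2048 -nodes -keyout device.key -out device.csr \
    -subj "/CN=custom-hsm-device-01"
openssl x509 -req -in device.csr -CA inter.pem -CAkey inter.key -CAcreateserial \
    -out device.pem -days 1
# Concatenate leaf, intermediate, and root -- the same order as the chain above.
cat device.pem inter.pem root.pem > device-full-chain.pem
# Prints "device.pem: OK" when the chain is valid.
openssl verify -CAfile root.pem -untrusted inter.pem device.pem
```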
-5. Notice the private key for the new device certificate is written to *./private/new-device.key.pem*. The text for this key will be needed by the device during provisioning. The text will be added to the custom HSM example later.
+3. Notice the private key for the new device certificate is written to *./private/new-device.key.pem*. Rename this key file to *./private/new-device-01.key.pem* for the `custom-hsm-device-01` device. The text for this key will be needed by the device during provisioning. The text will be added to the custom HSM example later.
+
+ ```bash
+ $ mv private/new-device.key.pem private/new-device-01.key.pem
+ ```
> [!WARNING]
> The text for the certificates only contains public key information.
>
> However, the device must also have access to the private key for the device certificate. This is necessary because the device must perform verification using that key at runtime when attempting provisioning. The sensitivity of this key is one of the main reasons it is recommended to use hardware-based storage in a real HSM to help secure private keys.
+4. Repeat steps 1-3 for a second device with device ID `custom-hsm-device-02`. Use the following values for that device:
-
-## Configure the custom HSM stub code
-
-The specifics of interacting with actual secure hardware-based storage vary depending on the hardware. As a result, the certificate chain used by the device in this tutorial will be hardcoded in the custom HSM stub code. In a real-world scenario, the certificate chain would be stored in the actual HSM hardware to provide better security for sensitive information. Methods similar to the stub methods shown in this sample would then be implemented to read the secrets from that hardware-based storage.
-
-While HSM hardware is not required, it is not recommended to have sensitive information, like the certificate's private key, checked into source code. This exposes the key to anyone that can view the code. This is only done in this article to assist with learning.
-
-To update the custom HSM stub code for this tutorial:
-
-1. Launch Visual Studio and open the new solution file that was created in the `cmake` directory you created in the root of the azure-iot-sdk-c git repository. The solution file is named `azure_iot_sdks.sln`.
-
-2. In Solution Explorer for Visual Studio, navigate to **Provisioning_Samples > custom_hsm_example > Source Files** and open *custom_hsm_example.c*.
-
-3. Update the string value of the `COMMON_NAME` string constant using the common name you used when generating the device certificate.
-
- ```c
- static const char* const COMMON_NAME = "custom-hsm-device-01";
- ```
-
-4. In the same file, you need to update the string value of the `CERTIFICATE` constant string using your certificate chain text you saved in *./certs/new-device-full-chain.cert.pem* after generating your certificates.
-
- The syntax of certificate text must follow the pattern below with no extra spaces or parsing done by Visual Studio.
-
- ```c
- // <Device/leaf cert>
- // <intermediates>
- // <root>
- static const char* const CERTIFICATE = "--BEGIN CERTIFICATE--\n"
- "MIIFOjCCAyKgAwIBAgIJAPzMa6s7mj7+MA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV\n"
- ...
- "MDMwWhcNMjAxMTIyMjEzMDMwWjAqMSgwJgYDVQQDDB9BenVyZSBJb1QgSHViIENB\n"
- "--END CERTIFICATE--\n"
- "--BEGIN CERTIFICATE--\n"
- "MIIFPDCCAySgAwIBAgIBATANBgkqhkiG9w0BAQsFADAqMSgwJgYDVQQDDB9BenVy\n"
- ...
- "MTEyMjIxMzAzM1owNDEyMDAGA1UEAwwpQXp1cmUgSW9UIEh1YiBJbnRlcm1lZGlh\n"
- "--END CERTIFICATE--\n"
- "--BEGIN CERTIFICATE--\n"
- "MIIFOjCCAyKgAwIBAgIJAPzMa6s7mj7+MA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV\n"
- ...
- "MDMwWhcNMjAxMTIyMjEzMDMwWjAqMSgwJgYDVQQDDB9BenVyZSBJb1QgSHViIENB\n"
- "--END CERTIFICATE--";
- ```
-
- Updating this string value correctly in this step can be very tedious and subject to error. To generate the proper syntax in your Git Bash prompt, copy and paste the following bash shell commands into your Git Bash command prompt, and press **ENTER**. These commands will generate the syntax for the `CERTIFICATE` string constant value.
-
- ```Bash
- input="./certs/new-device-full-chain.cert.pem"
- bContinue=true
- prev=
- while $bContinue; do
- if read -r next; then
- if [ -n "$prev" ]; then
- echo "\"$prev\\n\""
- fi
- prev=$next
- else
- echo "\"$prev\";"
- bContinue=false
- fi
- done < "$input"
- ```
-
- Copy and paste the output certificate text for the new constant value.
--
-5. In the same file, the string value of the `PRIVATE_KEY` constant must also be updated with the private key for your device certificate.
-
- The syntax of the private key text must follow the pattern below with no extra spaces or parsing done by Visual Studio.
-
- ```c
- static const char* const PRIVATE_KEY = "--BEGIN RSA PRIVATE KEY--\n"
- "MIIJJwIBAAKCAgEAtjvKQjIhp0EE1PoADL1rfF/W6v4vlAzOSifKSQsaPeebqg8U\n"
- ...
- "X7fi9OZ26QpnkS5QjjPTYI/wwn0J9YAwNfKSlNeXTJDfJ+KpjXBcvaLxeBQbQhij\n"
- "--END RSA PRIVATE KEY--";
- ```
-
- Updating this string value correctly in this step can also be very tedious and subject to error. To generate the proper syntax in your Git Bash prompt, copy and paste the following bash shell commands, and press **ENTER**. These commands will generate the syntax for the `PRIVATE_KEY` string constant value.
-
- ```Bash
- input="./private/new-device.key.pem"
- bContinue=true
- prev=
- while $bContinue; do
- if read -r next; then
- if [ -n "$prev" ]; then
- echo "\"$prev\\n\""
- fi
- prev=$next
- else
- echo "\"$prev\";"
- bContinue=false
- fi
- done < "$input"
- ```
-
- Copy and paste the output private key text for the new constant value.
-
-6. Save *custom_hsm_example.c*.
-
+ | Description | Value |
+ | :- | : |
+ | Subject Name | `custom-hsm-device-02` |
+ | Full certificate chain file | *./certs/new-device-02-full-chain.cert.pem* |
+ | Private key filename | *private/new-device-02.key.pem* |
+
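The per-device steps can also be scripted. The following self-contained sketch loops over both device IDs used in this tutorial, issuing one leaf certificate per device and writing the per-device chain and key files with the naming shown above. For brevity it signs the leaves directly with a throwaway root instead of the tutorial's root-plus-intermediate scripts, so all file names here are illustrative:

```shell
# Loop over both device IDs, issuing one leaf certificate per device.
# For brevity this signs leaves with a single throwaway root rather than
# the tutorial's root-plus-intermediate scripts; names are illustrative.
mkdir -p certs private
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out certs/root.pem \
    -subj "/CN=Throwaway Root" -days 1
for n in 01 02; do
    device_id="custom-hsm-device-$n"
    # The subject CN becomes the device ID during provisioning.
    openssl req -newkey rsa:2048 -nodes \
        -keyout "private/new-device-$n.key.pem" -out "device-$n.csr" \
        -subj "/CN=$device_id"
    openssl x509 -req -in "device-$n.csr" -CA certs/root.pem -CAkey ca.key \
        -CAcreateserial -out "certs/new-device-$n.cert.pem" -days 1
    # Per-device full-chain file: leaf first, then the signing chain.
    cat "certs/new-device-$n.cert.pem" certs/root.pem \
        > "certs/new-device-$n-full-chain.cert.pem"
done
```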
## Verify ownership of the root certificate
@@ -350,6 +268,9 @@ On non-Windows devices, you can pass the certificate chain from the code as the
On Windows-based devices, you must add the signing certificates (root and intermediate) to a Windows [certificate store](/windows/win32/secauthn/certificate-stores). Otherwise, the signing certificates won't be transported to DPS by a secure channel with Transport Layer Security (TLS).
+> [!TIP]