Updates from: 02/02/2021 04:09:17
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/billing.md
@@ -8,7 +8,7 @@
Previously updated : 09/01/2020 Last updated : 02/01/2021
@@ -19,7 +19,7 @@
Azure Active Directory B2C (Azure AD B2C) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This billing model applies to both Azure AD B2C tenants and [Azure AD guest user collaboration (B2B)](../active-directory/external-identities/external-identities-pricing.md). MAU billing helps you reduce costs by offering a free tier and flexible, predictable pricing. In this article, learn about MAU billing, linking your Azure AD B2C tenants to a subscription, and changing your pricing tier. > [!IMPORTANT]
-> This article does not contain pricing details. For the latest information about usage billing and pricing, see [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/).
+> This article does not contain pricing details. For the latest information about usage billing and pricing, see [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/). See also [Azure AD B2C region availability and data residency](data-residency.md) for details about where the Azure AD B2C service is available and where user data is stored.
## What do I need to do?
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/multi-factor-authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/multi-factor-authentication.md
@@ -36,7 +36,9 @@ This feature helps applications handle scenarios such as:
1. Select **User flows**. 1. Select the user flow for which you want to enable MFA. For example, *B2C_1_signinsignup*. 1. Select **Properties**.
-1. In the **Multifactor authentication** section, select the desired **MFA method**, and then under **MFA enforcement** select **Always on**, or **[Conditional](conditional-access-user-flow.md) (Recommended)**. For Conditional, create a [Conditional Access policy](conditional-access-identity-protection-setup.md) policy, and specify the apps you want the policy to apply to.
+1. In the **Multifactor authentication** section, select the desired **MFA method**, and then under **MFA enforcement** select **Always on**, or **Conditional (Recommended)**.
+ > [!NOTE]
+ > If you select **Conditional (Recommended)**, you'll also need to [add a Conditional Access policy](conditional-access-identity-protection-setup.md#add-a-conditional-access-policy) and specify the apps you want the policy to apply to.
1. Select Save. MFA is now enabled for this user flow. You can use **Run user flow** to verify the experience. Confirm the following scenario:
@@ -49,4 +51,4 @@ A customer account is created in your tenant before the multi-factor authenticat
To enable Multi-Factor Authentication, get the custom policy starter packs from GitHub, then update the XML files in the **SocialAndLocalAccountsWithMFA** starter pack with your Azure AD B2C tenant name (see the sketch below). The **SocialAndLocalAccountsWithMFA** starter pack enables social, local, and multi-factor authentication options. For more information, see [Get started with custom policies in Active Directory B2C](custom-policy-get-started.md).
-::: zone-end
\ No newline at end of file
+::: zone-end
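Relating to the starter-pack step above, here's a minimal sketch of automating the tenant-name update. The repository URL, the **SocialAndLocalAccountsWithMFA** folder path, and the `yourtenant.onmicrosoft.com` placeholder are assumptions about the public starter pack rather than details stated in this article, so verify them against the files you download.

```powershell
# Sketch only: clone the starter pack and swap the placeholder tenant name
# in the SocialAndLocalAccountsWithMFA policy files.
git clone https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack.git
$tenantName = "contosob2c.onmicrosoft.com"   # replace with your Azure AD B2C tenant name
Get-ChildItem ".\active-directory-b2c-custom-policy-starterpack\SocialAndLocalAccountsWithMFA\*.xml" |
    ForEach-Object {
        (Get-Content $_.FullName) -replace "yourtenant\.onmicrosoft\.com", $tenantName |
            Set-Content $_.FullName
    }
```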
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/whats-new-docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/whats-new-docs.md
@@ -1,7 +1,7 @@
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 12/15/2020 Last updated : 02/01/2021
@@ -15,6 +15,43 @@
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
+## January 2021
+
+### New articles
+
+- [Customize the user interface in Azure Active Directory B2C](customize-ui.md)
+- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
+- [Set up sign-up and sign-in with an Azure AD B2C account from another Azure AD B2C tenant](identity-provider-azure-ad-b2c.md)
+- [Set up the local account identity provider](identity-provider-local.md)
+- [Set up a sign-in flow in Azure Active Directory B2C](add-sign-in-policy.md)
+
+### Updated articles
+
+- [Track user behavior in Azure Active Directory B2C using Application Insights](analytics-with-application-insights.md)
+- [TechnicalProfiles](technicalprofiles.md)
+- [Customize the user interface with HTML templates in Azure Active Directory B2C](customize-ui-with-html.md)
+- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md)
+- [Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs.md)
+- [Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C](identity-provider-salesforce-saml.md)
+- [Tutorial: Register a web application in Azure Active Directory B2C](tutorial-register-applications.md)
+- [Set up sign-up and sign-in with an Amazon account using Azure Active Directory B2C](identity-provider-amazon.md)
+- [Set up sign-up and sign-in with an Azure AD B2C account from another Azure AD B2C tenant](identity-provider-azure-ad-b2c.md)
+- [Set up sign-in for multi-tenant Azure Active Directory using custom policies in Azure Active Directory B2C](identity-provider-azure-ad-multi-tenant.md)
+- [Set up sign-in for a specific Azure Active Directory organization in Azure Active Directory B2C](identity-provider-azure-ad-single-tenant.md)
+- [Set up sign-up and sign-in with a Facebook account using Azure Active Directory B2C](identity-provider-facebook.md)
+- [Set up sign-up and sign-in with a GitHub account using Azure Active Directory B2C](identity-provider-github.md)
+- [Set up sign-up and sign-in with a Google account using Azure Active Directory B2C](identity-provider-google.md)
+- [Set up sign-up and sign-in with an ID.me account using Azure Active Directory B2C](identity-provider-id-me.md)
+- [Set up sign-up and sign-in with a LinkedIn account using Azure Active Directory B2C](identity-provider-linkedin.md)
+- [Set up sign-up and sign-in with a Microsoft account using Azure Active Directory B2C](identity-provider-microsoft-account.md)
+- [Set up sign-up and sign-in with a QQ account using Azure Active Directory B2C](identity-provider-qq.md)
+- [Set up sign-up and sign-in with a Salesforce account using Azure Active Directory B2C](identity-provider-salesforce.md)
+- [Set up sign-up and sign-in with a Twitter account using Azure Active Directory B2C](identity-provider-twitter.md)
+- [Set up sign-up and sign-in with a WeChat account using Azure Active Directory B2C](identity-provider-wechat.md)
+- [Set up sign-up and sign-in with a Weibo account using Azure Active Directory B2C](identity-provider-weibo.md)
+- [Azure AD B2C custom policy overview](custom-policy-trust-frameworks.md)
++ ## December 2020 ### New articles
active-directory-domain-services https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-configure-ldaps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/tutorial-configure-ldaps.md
@@ -211,6 +211,12 @@ It takes a few minutes to enable secure LDAP for your managed domain. If the sec
Some common reasons for failure are if the domain name is incorrect, the encryption algorithm for the certificate isn't *TripleDES-SHA1*, or the certificate expires soon or has already expired. You can re-create the certificate with valid parameters, then enable secure LDAP using this updated certificate.
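For reference, a minimal sketch of re-creating a self-signed certificate that satisfies those checks, assuming a managed domain named *aaddscontoso.com* (substitute your own domain name). You still need to export the certificate with a password before applying it to the managed domain.

```powershell
# Sketch: create a replacement self-signed certificate for secure LDAP.
# The domain name aaddscontoso.com is an assumption - use your managed domain.
$lifetime = Get-Date
New-SelfSignedCertificate -Subject "*.aaddscontoso.com" `
    -NotAfter $lifetime.AddDays(365) -KeyUsage DigitalSignature, KeyEncipherment `
    -Type SSLServerAuthentication -DnsName "*.aaddscontoso.com", "aaddscontoso.com"
```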
+## Change an expiring certificate
+
+1. Create a replacement secure LDAP certificate by following the steps to [create a certificate for secure LDAP](#create-a-certificate-for-secure-ldap).
+1. To apply the replacement certificate to Azure AD DS, in the left menu for Azure AD DS in the Azure portal, select **Secure LDAP**, and then select **Change Certificate**.
+1. Distribute the certificate to any clients that connect by using secure LDAP.
+ ## Lock down secure LDAP access over the internet When you enable secure LDAP access over the internet to your managed domain, it creates a security threat. The managed domain is reachable from the internet on TCP port 636. It's recommended to restrict access to the managed domain to specific known IP addresses for your environment. An Azure network security group rule can be used to limit access to secure LDAP.
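As a rough illustration of that lock-down guidance, here's a sketch of such a network security group rule using Azure PowerShell. The resource group name, NSG name, source address range, and rule priority are placeholders, not values from the article.

```powershell
# Sketch: allow inbound secure LDAP (TCP 636) only from a known address range.
# Resource group, NSG name, source range, and priority are assumptions.
$nsg = Get-AzNetworkSecurityGroup -Name "aadds-nsg" -ResourceGroupName "myResourceGroup"
$nsg | Add-AzNetworkSecurityRuleConfig -Name "AllowLDAPS" -Access Allow -Direction Inbound `
    -Priority 401 -Protocol Tcp -SourceAddressPrefix "203.0.113.0/24" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "636" |
    Set-AzNetworkSecurityGroup
```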
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/application-provisioning-config-problem-no-users-provisioned https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-config-problem-no-users-provisioned.md
@@ -3,7 +3,7 @@ Title: Users are not being provisioned in my application
description: How to troubleshoot common issues faced when you don't see users appearing in an Azure AD Gallery Application you have configured for user provisioning with Azure AD -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
@@ -3,7 +3,7 @@ Title: Known issues with System for Cross-Domain Identity Management (SCIM) 2.0
description: How to solve common protocol compatibility issues faced when adding a non-gallery application that supports SCIM 2.0 to Azure AD -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/application-provisioning-config-problem https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-config-problem.md
@@ -3,7 +3,7 @@ Title: Problem configuring user provisioning to an Azure AD Gallery app
description: How to troubleshoot common issues faced when configuring user provisioning to an application already listed in the Azure AD Application Gallery -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/application-provisioning-log-analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-log-analytics.md
@@ -3,7 +3,7 @@ Title: Understand how Provisioning integrates with Azure Monitor logs in Azure A
description: Understand how Provisioning integrates with Azure Monitor logs in Azure Active Directory. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/application-provisioning-quarantine-status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-quarantine-status.md
@@ -3,7 +3,7 @@ Title: Application Provisioning status of Quarantine | Microsoft Docs
description: When you've configured an application for automatic user provisioning, learn what a provisioning status of Quarantine means and how to clear it. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md
@@ -3,7 +3,7 @@ Title: Find out when a specific user will be able to access an app
description: How to find out when a critically important user will be able to access an application you have configured for user provisioning with Azure AD -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/check-status-user-account-provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/check-status-user-account-provisioning.md
@@ -3,7 +3,7 @@ Title: Report automatic user account provisioning to SaaS applications
description: 'Learn how to check the status of automatic user account provisioning jobs, and how to troubleshoot the provisioning of individual users.' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/configure-automatic-user-provisioning-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/configure-automatic-user-provisioning-portal.md
@@ -4,7 +4,7 @@ description: Learn how to manage user account provisioning for enterprise apps u
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/customize-application-attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/customize-application-attributes.md
@@ -3,7 +3,7 @@ Title: Tutorial - Customize Azure Active Directory attribute mappings
description: Learn what attribute mappings for SaaS apps in Azure Active Directory are and how you can modify them to address your business needs. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md
@@ -3,7 +3,7 @@ Title: Provision apps with scoping filters | Microsoft Docs
description: Learn how to use scoping filters to prevent objects in apps that support automated user provisioning from being provisioned if an object doesn't satisfy your business requirements. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/export-import-provisioning-configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/export-import-provisioning-configuration.md
@@ -3,7 +3,7 @@ Title: Export provisioning configuration and roll back to a known good state for
description: Learn how to export your provisioning configuration and roll back to a known good state for disaster recovery. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/functions-for-customizing-application-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
@@ -3,7 +3,7 @@ Title: Reference for writing expressions for attribute mappings in Azure Active
description: Learn how to use expression mappings to transform attribute values into an acceptable format during automated provisioning of SaaS app objects in Azure Active Directory. Includes a reference list of functions. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/how-provisioning-works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/how-provisioning-works.md
@@ -3,7 +3,7 @@ Title: Understand how Azure AD provisioning works | Microsoft Docs
description: Understand how Azure AD provisioning works -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/isv-automatic-provisioning-multi-tenant-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/isv-automatic-provisioning-multi-tenant-apps.md
@@ -3,7 +3,7 @@ Title: Enable automatic user provisioning for multi-tenant applications - Azure
description: A guide for independent software vendors for enabling automated provisioning -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/known-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/known-issues.md
@@ -3,7 +3,7 @@ Title: Known issues for application provisioning in Azure AD
description: Learn about known issues when working with automated application provisioning in Azure AD. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/plan-auto-user-provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
@@ -3,7 +3,7 @@ Title: Plan an automatic user provisioning deployment for Azure Active Directory
description: Guidance for planning and executing automatic user provisioning -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/plan-cloud-hr-provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
@@ -3,7 +3,7 @@ Title: Plan cloud HR application to Azure Active Directory user provisioning
description: This article describes the deployment process of integrating cloud HR systems, such as Workday and SuccessFactors, with Azure Active Directory. Integrating Azure AD with your cloud HR system results in a complete identity lifecycle management system. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/provision-on-demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/provision-on-demand.md
@@ -3,7 +3,7 @@ Title: Provision a user on demand by using Azure Active Directory
description: Force sync -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/provisioning-agent-release-version-history https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/provisioning-agent-release-version-history.md
@@ -3,7 +3,7 @@ Title: 'Azure AD Connect Provisioning Agent: Version release history | Microsoft
description: This article lists all releases of Azure AD Connect Provisioning Agent and describes new features and fixed issues -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/sap-successfactors-attribute-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/sap-successfactors-attribute-reference.md
@@ -3,7 +3,7 @@ Title: SAP SuccessFactors attribute reference
description: Learn which attributes from SuccessFactors are supported by SuccessFactors-HR driven provisioning -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/sap-successfactors-integration-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
@@ -3,7 +3,7 @@ Title: Azure Active Directory and SAP SuccessFactors integration reference
description: Technical deep dive into SAP SuccessFactors-HR driven provisioning -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/scim-graph-scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/scim-graph-scenarios.md
@@ -3,7 +3,7 @@ Title: Use SCIM, Microsoft Graph, and Azure AD to provision users and enrich app
description: Using SCIM and the Microsoft Graph together to provision users and enrich your application with the data it needs. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/skip-out-of-scope-deletions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md
@@ -3,7 +3,7 @@ Title: Skip deletion of out of scope users
description: Learn how to override the default behavior of de-provisioning out of scope users. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
@@ -3,7 +3,7 @@ Title: Build a SCIM endpoint for user provisioning to apps from Azure Active Dir
description: System for Cross-domain Identity Management (SCIM) standardizes automatic user provisioning. Learn to develop a SCIM endpoint, integrate your SCIM API with Azure Active Directory, and start automating provisioning users and groups into your cloud applications with Azure Active Directory. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
@@ -3,7 +3,7 @@ Title: Tutorial - Develop a SCIM endpoint for user provisioning to apps from Azu
description: System for Cross-domain Identity Management (SCIM) standardizes automatic user provisioning. In this tutorial, you learn to develop a SCIM endpoint, integrate your SCIM API with Azure Active Directory, and start automating provisioning users and groups into your cloud applications. -+
@@ -63,7 +63,7 @@ The schema defined above would be represented using the JSON payload below. Note
"schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User", "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User"],
- "userName":"bjensen",
+ "userName":"bjensen@testuser.com",
"externalId":"bjensen", "name":{ "familyName":"Jensen",
@@ -957,7 +957,7 @@ If the response to a query to the web service for a user with an `externalId` at
"urn:ietf:params:scim:schemas:core:2.0:User", "urn:ietf:params:scim:schemas:extension:enterprise:2.0User"], "externalId":"jyoung",
- "userName":"jyoung",
+ "userName":"jyoung@testuser.com",
"active":true, "addresses":null, "displayName":"Joy Young",
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
@@ -3,7 +3,7 @@ Title: Synchronize attributes to Azure AD for mapping
description: Learn how to synchronize attributes from your on-premises Active Directory to Azure AD. When configuring user provisioning to SaaS apps, use the directory extension feature to add source attributes that aren't synchronized by default. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/user-provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning.md
@@ -3,7 +3,7 @@ Title: What is automated SaaS app user provisioning in Azure AD
description: An introduction to how you can use Azure AD to automatically provision, de-provision, and continuously update user accounts across multiple third-party SaaS applications. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/whats-new-docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/whats-new-docs.md
@@ -1,20 +1,32 @@
Title: "What's new in Azure Active Directory application provisioning" description: "New and updated documentation for the Azure Active Directory application provisioning." Previously updated : 12/15/2020 Last updated : 02/01/2021 -+ # Azure Active Directory application provisioning: What's new Welcome to what's new in Azure Active Directory application provisioning documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the provisioning service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## January 2021
+
+### New articles
+- [How Azure Active Directory provisioning integrates with Workday](workday-integration-reference.md)
+
+### Updated articles
+- [Tutorial: Develop a sample SCIM endpoint](use-scim-to-build-users-and-groups-endpoints.md)
+- [Tutorial - Customize user provisioning attribute-mappings for SaaS applications in Azure Active Directory](customize-application-attributes.md)
+- [How Azure Active Directory provisioning integrates with SAP SuccessFactors](sap-successfactors-integration-reference.md)
+- [Application provisioning in quarantine status](application-provisioning-quarantine-status.md)
++ ## December 2020 ### Updated articles
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/workday-attribute-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/workday-attribute-reference.md
@@ -3,7 +3,7 @@ Title: Workday attribute reference
description: Learn which attributes you can fetch from Workday using XPATH queries. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/workday-integration-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/workday-integration-reference.md
@@ -3,7 +3,7 @@ Title: Azure Active Directory and Workday integration reference
description: Technical deep dive into Workday-HR driven provisioning -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-use-email-signin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-use-email-signin.md
@@ -110,7 +110,7 @@ During preview, you can currently only enable the sign-in with email as an alter
1. Check if the *HomeRealmDiscoveryPolicy* policy already exists in your tenant using the [Get-AzureADPolicy][Get-AzureADPolicy] cmdlet as follows: ```powershell
- Get-AzureADPolicy | where-object {$_.Type -eq "HomeRealmDiscoveryPolicy"} | fl *
+ Get-AzureADPolicy | Where-Object Type -eq "HomeRealmDiscoveryPolicy" | Format-List *
``` 1. If there's no policy currently configured, the command returns nothing. If a policy is returned, skip this step and move on to the next step to update an existing policy.
@@ -118,10 +118,22 @@ During preview, you can currently only enable the sign-in with email as an alter
To add the *HomeRealmDiscoveryPolicy* policy to the tenant, use the [New-AzureADPolicy][New-AzureADPolicy] cmdlet and set the *AlternateIdLogin* attribute to *"Enabled": true* as shown in the following example: ```powershell
- New-AzureADPolicy -Definition @('{"HomeRealmDiscoveryPolicy" :{"AlternateIdLogin":{"Enabled": true}}}') `
- -DisplayName "BasicAutoAccelerationPolicy" `
- -IsOrganizationDefault $true `
- -Type "HomeRealmDiscoveryPolicy"
+ $AzureADPolicyDefinition = @(
+ @{
+ "HomeRealmDiscoveryPolicy" = @{
+ "AlternateIdLogin" = @{
+ "Enabled" = $true
+ }
+ }
+ } | ConvertTo-JSON -Compress
+ )
+ $AzureADPolicyParameters = @{
+ Definition = $AzureADPolicyDefinition
+ DisplayName = "BasicAutoAccelerationPolicy"
+ IsOrganizationDefault = $true
+ Type = "HomeRealmDiscoveryPolicy"
+ }
+ New-AzureADPolicy @AzureADPolicyParameters
``` When the policy has been successfully created, the command returns the policy ID, as shown in the following example output:
@@ -153,17 +165,31 @@ During preview, you can currently only enable the sign-in with email as an alter
The following example adds the *AlternateIdLogin* attribute and preserves the *AllowCloudPasswordValidation* attribute that may have already been set: ```powershell
- Set-AzureADPolicy -id b581c39c-8fe3-4bb5-b53d-ea3de05abb4b `
- -Definition @('{"HomeRealmDiscoveryPolicy" :{"AllowCloudPasswordValidation":true,"AlternateIdLogin":{"Enabled": true}}}') `
- -DisplayName "BasicAutoAccelerationPolicy" `
- -IsOrganizationDefault $true `
- -Type "HomeRealmDiscoveryPolicy"
+ $AzureADPolicyDefinition = @(
+ @{
+ "HomeRealmDiscoveryPolicy" = @{
+ "AllowCloudPasswordValidation" = $true
+ "AlternateIdLogin" = @{
+ "Enabled" = $true
+ }
+ }
+ } | ConvertTo-JSON -Compress
+ )
+ $AzureADPolicyParameters = @{
+ ID = "b581c39c-8fe3-4bb5-b53d-ea3de05abb4b"
+ Definition = $AzureADPolicyDefinition
+ DisplayName = "BasicAutoAccelerationPolicy"
+ IsOrganizationDefault = $true
+ Type = "HomeRealmDiscoveryPolicy"
+ }
+
+ Set-AzureADPolicy @AzureADPolicyParameters
``` Confirm that the updated policy shows your changes and that the *AlternateIdLogin* attribute is now enabled: ```powershell
- Get-AzureADPolicy | where-object {$_.Type -eq "HomeRealmDiscoveryPolicy"} | fl *
+ Get-AzureADPolicy | Where-Object Type -eq "HomeRealmDiscoveryPolicy" | Format-List *
``` With the policy applied, it can take up to an hour to propagate and for users to be able to sign in using their alternate login ID.
@@ -204,7 +230,12 @@ You need *tenant administrator* permissions to complete the following steps:
4. If there are no existing staged rollout policies for this feature, create a new staged rollout policy and take note of the policy ID: ```powershell
- New-AzureADMSFeatureRolloutPolicy -Feature EmailAsAlternateId -DisplayName "EmailAsAlternateId Rollout Policy" -IsEnabled $true
+ $AzureADMSFeatureRolloutPolicy = @{
+ Feature = "EmailAsAlternateId"
+ DisplayName = "EmailAsAlternateId Rollout Policy"
+ IsEnabled = $true
+ }
+ New-AzureADMSFeatureRolloutPolicy @AzureADMSFeatureRolloutPolicy
``` 5. Find the directoryObject ID for the group to be added to the staged rollout policy. Note the value returned for the *Id* parameter, because it will be used in the next step.
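One way to perform that lookup, sketched with the AzureAD PowerShell module; the group display name is a placeholder, not one taken from the article.

```powershell
# Sketch: find the directoryObject ID of the group to add to the staged rollout policy.
# "Email alternate ID rollout" is a placeholder display name.
Get-AzureADMSGroup -SearchString "Email alternate ID rollout" | Select-Object Id, DisplayName
```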
@@ -247,7 +278,7 @@ If users have trouble with sign-in events using their email address, review the
1. Confirm that the Azure AD *HomeRealmDiscoveryPolicy* policy has the *AlternateIdLogin* attribute set to *"Enabled": true*: ```powershell
- Get-AzureADPolicy | where-object {$_.Type -eq "HomeRealmDiscoveryPolicy"} | fl *
+ Get-AzureADPolicy | Where-Object Type -eq "HomeRealmDiscoveryPolicy" | Format-List *
``` ## Next steps
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-password-smart-lockout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-smart-lockout.md
@@ -87,6 +87,8 @@ When the smart lockout threshold is triggered, you will get the following messag
*Your account is temporarily locked to prevent unauthorized use. Try again later, and if you still have trouble, contact your admin.*
+When you test smart lockout, your sign-in requests might be handled by different datacenters because the Azure AD authentication service is geo-distributed and load-balanced. Each Azure AD datacenter tracks lockout independently, so it might take more attempts than your defined lockout threshold to trigger a lockout. In the worst case, a user can make up to *threshold_limit* × *datacenter_count* bad attempts before being locked out, if the failed attempts are spread across every datacenter. For example, with a lockout threshold of 10 and traffic distributed across 3 datacenters, up to 30 failed attempts might occur before the account locks.
+ ## Next steps To customize the experience further, you can [configure custom banned passwords for Azure AD password protection](tutorial-configure-custom-password-protection.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/multi-factor-authentication-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/multi-factor-authentication-faq.md
@@ -121,7 +121,7 @@ Learn more about MFA providers in [Getting started with an Azure Multi-Factor Au
In some instances, yes.
-If your directory has a *per-user* Azure Multi-Factor Authentication provider, you can add MFA licenses. Users with licenses aren't be counted in the per-user consumption-based billing. Users without licenses can still be enabled for MFA through the MFA provider. If you purchase and assign licenses for all your users configured to use Multi-Factor Authentication, you can delete the Azure Multi-Factor Authentication provider. You can always create another per-user MFA provider if you have more users than licenses in the future.
+If your directory has a *per-user* Azure Multi-Factor Authentication provider, you can add MFA licenses. Users with licenses aren't counted in the per-user consumption-based billing. Users without licenses can still be enabled for MFA through the MFA provider. If you purchase and assign licenses for all your users configured to use Multi-Factor Authentication, you can delete the Azure Multi-Factor Authentication provider. You can always create another per-user MFA provider if you have more users than licenses in the future.
If your directory has a *per-authentication* Azure Multi-Factor Authentication provider, you're always billed for each authentication, as long as the MFA provider is linked to your subscription. You can assign MFA licenses to users, but you'll still be billed for every two-step verification request, whether it comes from someone with an MFA license assigned or not.
@@ -258,4 +258,4 @@ If your question isn't answered here, the following support options are availabl
* Search the [Microsoft Support Knowledge Base](https://support.microsoft.com) for solutions to common technical issues. * Search for and browse technical questions and answers from the community, or ask your own question in the [Azure Active Directory Q&A](/answers/topics/azure-active-directory.html). * Contact a Microsoft professional through [Azure Multi-Factor Authentication Server support](https://support.microsoft.com/oas/default.aspx?prid=14947). When contacting us, it's helpful if you can include as much information about your issue as possible. Information you can supply includes the page where you saw the error, the specific error code, the specific session ID, and the ID of the user who saw the error.
-* If you're a legacy PhoneFactor customer and you have questions or need help with resetting a password, use the [phonefactorsupport@microsoft.com](mailto:phonefactorsupport@microsoft.com) e-mail address to open a support case.
\ No newline at end of file
+* If you're a legacy PhoneFactor customer and you have questions or need help with resetting a password, use the [phonefactorsupport@microsoft.com](mailto:phonefactorsupport@microsoft.com) e-mail address to open a support case.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-national-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-national-cloud.md
@@ -81,7 +81,7 @@ To enable your MSAL.js application for sovereign clouds:
1. On the **Overview** page, note down the **Application (client) ID** value for later use. This tutorial requires you to enable the [implicit grant flow](v2-oauth2-implicit-grant-flow.md). 1. Under **Manage**, select **Authentication**.
-1. Under **Implicit grant**, select **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app needs to sign in users and call an API.
+1. Under **Implicit grant and hybrid flows**, select **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app needs to sign in users and call an API.
1. Select **Save**. ### Step 2: Set up your web server or project
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-angular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-angular.md
@@ -47,11 +47,11 @@ In this quickstart, you download and run a code sample that demonstrates how an
> 1. If your account has access to more than one tenant, select your account at the upper right, and set your portal session to the Azure AD tenant that you want to use. > 1. Follow the instructions to [register a single-page application](./scenario-spa-app-registration.md) in the Azure portal. > 1. Add a new platform on the **Authentication** pane of your app registration and register the redirect URI: `http://localhost:4200/`.
-> 1. This quickstart uses the [implicit grant flow](v2-oauth2-implicit-grant-flow.md). Select the **Implicit grant** settings for **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app signs in users and calls an API.
+> 1. This quickstart uses the [implicit grant flow](v2-oauth2-implicit-grant-flow.md). In the **Implicit grant and hybrid flows** section, select **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app signs in users and calls an API.
> [!div class="sxs-lookup" renderon="portal"] > #### Step 1: Configure the application in the Azure portal
-> For the code sample for this quickstart to work, you need to add a redirect URI as **http://localhost:4200/** and enable **Implicit grant**.
+> For the code sample for this quickstart to work, you need to add a redirect URI as **http://localhost:4200/** and enable the **Implicit grant and hybrid flows** settings.
> > [!div renderon="portal" id="makechanges" class="nextstepaction"] > > [Make these changes for me]() >
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-aspnet-core-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
@@ -54,7 +54,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> 1. Under **Manage**, select **Authentication**. > 1. Under **Redirect URIs**, select **Add URI**, and then enter `https://localhost:44321/signin-oidc`. > 1. Enter a **Front-channel logout URL** of `https://localhost:44321/signout-oidc`.
-> 1. Under **Implicit grant**, select **ID tokens**.
+> 1. Under **Implicit grant and hybrid flows**, select **ID tokens**.
> 1. Select **Save**. > [!div class="sxs-lookup" renderon="portal"]
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-aspnet-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
@@ -51,7 +51,7 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> 1. Enter a **Name** for your application, for example `ASPNET-Quickstart`. Users of your app might see this name, and you can change it later. > 1. Add `https://localhost:44368/` in **Redirect URI**, and select **Register**. > 1. Under **Manage**, select **Authentication**.
-> 1. Under the **Implicit Grant** sub-section, select **ID tokens**.
+> 1. In the **Implicit grant and hybrid flows** section, select **ID tokens**.
> 1. Select **Save**. > [!div class="sxs-lookup" renderon="portal"]
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript.md
@@ -53,8 +53,9 @@ See [How the sample works](#how-the-sample-works) for an illustration.
> 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. > 1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use. > 1. This quickstart requires the [Implicit grant flow](v2-oauth2-implicit-grant-flow.md) to be enabled. Under **Manage**, select **Authentication**.
-> 1. Under **Platform Configurations**, select **Add a platform**. A panel opens on the left. There, select the **Web Applications** region.
-> 1. Still on the left, set the **Redirect URI** value to `http://localhost:3000/`. Then, select **Access Token** and **ID Token**.
+> 1. Under **Platform Configurations**, select **Add a platform**, and then select **Web**.
+> 1. Set the **Redirect URI** value to `http://localhost:3000/`.
+> 1. Under **Implicit grant and hybrid flows**, select **Access Tokens** and **ID Tokens**.
> 1. Select **Configure**. > [!div class="sxs-lookup" renderon="portal"]
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-nodejs-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp.md
@@ -42,7 +42,7 @@ In this quickstart, you download and run a code sample that demonstrates how to
1. Select **Add a platform** > **Web**. 1. In the **Redirect URIs** section, enter `http://localhost:3000/auth/openid/return`. 1. Enter a **Front-channel logout URL** of `https://localhost:3000`.
-1. In the Implicit grant section, check **ID tokens** as this sample requires the [Implicit grant flow](./v2-oauth2-implicit-grant-flow.md) to be enabled to sign-in the user.
+1. In the **Implicit grant and hybrid flows** section, select **ID tokens**, as this sample requires the [Implicit grant flow](./v2-oauth2-implicit-grant-flow.md) to be enabled to sign in the user.
1. Select **Configure**. 1. Under **Manage**, select **Certificates & secrets** > **New client secret**. 1. Enter a key description (for instance app secret).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-desktop-production https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-production.md
@@ -36,7 +36,7 @@ For instance, you might have two resources that have two scopes each:
- `https://mytenant.onmicrosoft.com/customerapi` with the scopes `customer.read` and `customer.write` - `https://mytenant.onmicrosoft.com/vendorapi` with the scopes `vendor.read` and `vendor.write`
-In this example, use the `.WithAdditionalPromptToConsent` modifier that has the `extraScopesToConsent` parameter.
+In this example, use the `.WithExtraScopesToConsent` modifier that has the `extraScopesToConsent` parameter.
For instance:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-spa-app-registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-spa-app-registration.md
@@ -43,7 +43,7 @@ Follow these steps to add a redirect URI for an app that uses MSAL.js 2.0 or lat
1. In the Azure portal, select the app registration you created earlier in [Create the app registration](#create-the-app-registration). 1. Under **Manage**, select **Authentication** > **Add a platform**. 1. Under **Web applications**, select the **Single-page application** tile.
-1. Under **Redirect URIs**, enter a [redirect URI](reply-url.md). Do **NOT** select either checkbox under **Implicit grant**.
+1. Under **Redirect URIs**, enter a [redirect URI](reply-url.md). Do **NOT** select either checkbox under **Implicit grant and hybrid flows**.
1. Select **Configure** to finish adding the redirect URI. You've now completed the registration of your single-page application (SPA) and configured a redirect URI to which the client will be redirected and any security tokens will be sent. By configuring your redirect URI using the **Single-page application** tile in the **Add a platform** pane, your application registration is configured to support the authorization code flow with PKCE and CORS.
@@ -58,7 +58,7 @@ Follow these steps to add a redirect URI for a single-page app that uses MSAL.js
1. Under **Manage**, select **Authentication** > **Add a platform**. 1. Under **Web applications**, select **Single-page application** tile. 1. Under **Redirect URIs**, enter a [redirect URI](reply-url.md).
-1. Enable the **Implicit flow**:
+1. Enable the **Implicit grant and hybrid flows**:
- If your application signs in users, select **ID tokens**. - If your application also needs to call a protected web API, select **Access tokens**. For more information about these token types, see [ID tokens](id-tokens.md) and [Access tokens](access-tokens.md). 1. Select **Configure** to finish adding the redirect URI.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-blazor-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-blazor-server.md
@@ -37,7 +37,7 @@ Every app that uses Azure Active Directory (Azure AD) for authentication must be
- For **Supported account types**, select **Accounts in this organizational directory only**. - Leave the **Redirect URI** drop down set to **Web** and enter `https://localhost:5001/signin-oidc`. The default port for an app running on Kestrel is 5001. If the app is available on a different port, specify that port number instead of `5001`.
-In **Authentication** > **Implicit grant**, select the check boxes for **Access tokens** and **ID tokens**, and then select the **Save** button.
+Under **Manage**, select **Authentication** > **Implicit grant and hybrid flows**. Select **Access tokens** and **ID tokens**, and then select **Save**.
Finally, because the app calls a protected API (in this case Microsoft Graph), it needs a client secret in order to verify its identity when it requests an access token to call that API.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-blazor-webassembly https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-blazor-webassembly.md
@@ -39,7 +39,7 @@ Every app that uses Azure Active Directory (Azure AD) for authentication must be
- For **Supported account types**, select **Accounts in this organizational directory only**. - Leave the **Redirect URI** drop down set to **Web** and enter `https://localhost:5001/authentication/login-callback`. The default port for an app running on Kestrel is 5001. If the app is available on a different port, specify that port number instead of `5001`.
-Once registered, in **Authentication** > **Implicit grant**, select the check boxes for **Access tokens** and **ID tokens**, and then select the **Save** button.
+Once registered, under **Manage**, select **Authentication** > **Implicit grant and hybrid flows**. Select **Access tokens** and **ID tokens**, and then select **Save**.
## Create the app using the .NET Core CLI
@@ -77,7 +77,7 @@ The components of this template that enable logins with Azure AD using the Micro
[Microsoft Graph](/graph/overview) contains APIs that provide access to Microsoft 365 data for your users, and it supports the tokens issued by the Microsoft identity platform, which makes it a good protected API to use as an example. In this section, you add code to call Microsoft Graph and display the user's emails on the application's "Fetch data" page.
-This section is written using a common approach to calling a protected API using a named client. The same method can be used for other protected APIs you want to call. However, if you do plan to call Microsoft Graph from your application you can use the Graph SDK to reduce boilerplate. The .NET docs contain instructions on [how to use the Graph SDK](/aspnet/core/blazor/security/webassembly/graph-api?view=aspnetcore-5.0).
+This section is written using a common approach to calling a protected API using a named client. The same method can be used for other protected APIs you want to call. However, if you do plan to call Microsoft Graph from your application you can use the Graph SDK to reduce boilerplate. The .NET docs contain instructions on [how to use the Graph SDK](/aspnet/core/blazor/security/webassembly/graph-api?view=aspnetcore-5.0&preserve-view=true).
Before you start, log out of your app since you'll be making changes to the required permissions, and your current token won't work. If you haven't already, run your app again and select **Log out** before updating the code below.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-asp-webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-asp-webapp.md
@@ -379,7 +379,7 @@ To register your application and add the app's registration information to your
1. Add the SSL URL you copied from Visual Studio in step 1 (for example, `https://localhost:44368/`) in **Redirect URI**. 1. Select **Register**. 1. Under **Manage**, select **Authentication**.
-1. In the **Implicit Grant** section, select **ID tokens**, and then select **Save**.
+1. In the **Implicit grant and hybrid flows** section, select **ID tokens**, and then select **Save**.
1. Add the following in the web.config file, located in the root folder in the `configuration\appSettings` section: ```xml
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-aspnet-daemon-web-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-aspnet-daemon-web-app.md
@@ -107,7 +107,7 @@ If you don't want to use the automation, use the steps in the following sections
1. On the app's **Overview** page, find the **Application (client) ID** value and record it for later use. You'll need it to configure the Visual Studio configuration file for this project. 1. Under **Manage**, select **Authentication**. 1. Set **Front-channel logout URL** to `https://localhost:44316/Account/EndSession`.
-1. In the **Implicit grant** section, select **Access tokens** and **ID tokens**. This sample requires the [implicit grant flow](v2-oauth2-implicit-grant-flow.md) to be enabled to sign in the user and call an API.
+1. In the **Implicit grant and hybrid flows** section, select **Access tokens** and **ID tokens**. This sample requires the [implicit grant flow](v2-oauth2-implicit-grant-flow.md) to be enabled to sign in the user and call an API.
1. Select **Save**. 1. Under **Manage**, select **Certificates & secrets**. 1. In the **Client secrets** section, select **New client secret**.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-javascript-spa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-javascript-spa.md
@@ -271,7 +271,7 @@ Before proceeding further with authentication, register your application on **Az
1. Select **Register**. 1. On the app **Overview** page, note the **Application (client) ID** value for later use. 1. Under **Manage**, select **Authentication**.
-1. In the **Implicit grant** section, select **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app must sign in users and call an API.
+1. In the **Implicit grant and hybrid flows** section, select **ID tokens** and **Access tokens**. ID tokens and access tokens are required because this app must sign in users and call an API.
1. Select **Save**. > ### Set a redirect URL for Node.js
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-implicit-grant-flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-implicit-grant-flow.md
@@ -40,7 +40,7 @@ The following diagram shows what the entire implicit sign-in flow looks like and
To initially sign the user into your app, you can send an [OpenID Connect](v2-protocols-oidc.md) authentication request and get an `id_token` from the Microsoft identity platform. > [!IMPORTANT]
-> To successfully request an ID token and/or an access token, the app registration in the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page must have the corresponding implicit grant flow enabled, by selecting **ID tokens** and.or **access tokens** under the **Implicit grant** section. If it's not enabled, an `unsupported_response` error will be returned: **The provided value for the input parameter 'response_type' is not allowed for this client. Expected value is 'code'**
+> To successfully request an ID token and/or an access token, the app registration in the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page must have the corresponding implicit grant flow enabled, by selecting **ID tokens** and **access tokens** in the **Implicit grant and hybrid flows** section. If it's not enabled, an `unsupported_response` error will be returned: `The provided value for the input parameter 'response_type' is not allowed for this client. Expected value is 'code'`
``` // Line breaks for legibility only
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/whats-new-docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/whats-new-docs.md
@@ -20,13 +20,25 @@ Welcome to what's new in the Microsoft identity platform documentation. This art
## January 2021
+### New articles
+
+- [Logging in MSAL for Android](msal-logging-android.md)
+- [Logging in MSAL.NET](msal-logging-dotnet.md)
+- [Logging in MSAL for iOS/macOS](msal-logging-ios.md)
+- [Logging in MSAL for Java](msal-logging-java.md)
+- [Logging in MSAL.js](msal-logging-js.md)
+- [Logging in MSAL for Python](msal-logging-python.md)
+ ### Updated articles
+- [Troubleshoot publisher verification](troubleshoot-publisher-verification.md)
+- [Application model](application-model.md)
- [Authentication vs. authorization](authentication-vs-authorization.md) - [How to: Restrict your Azure AD app to a set of users in an Azure AD tenant](howto-restrict-your-app-to-a-set-of-users.md) - [Permissions and consent in the Microsoft identity platform endpoint](v2-permissions-and-consent.md) - [Configurable token lifetimes in Microsoft identity platform (preview)](active-directory-configurable-token-lifetimes.md) - [Configure token lifetime policies (preview)](configure-token-lifetimes.md)
+- [Microsoft identity platform authentication libraries](reference-v2-libraries.md)
- [Microsoft identity platform and OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md) ## December 2020
active-directory https://docs.microsoft.com/en-us/azure/active-directory/external-identities/direct-federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/direct-federation.md
@@ -75,7 +75,8 @@ Yes. If the domain hasn't been verified and the tenant hasn't undergone an [admi
When direct federation is established with a partner organization, it takes precedence over email one-time passcode authentication for new guest users from that organization. If a guest user redeemed an invitation using one-time passcode authentication before you set up direct federation, they'll continue to use one-time passcode authentication. ### Does direct federation address sign-in issues due to a partially synced tenancy? No, the [email one-time passcode](one-time-passcode.md) feature should be used in this scenario. A "partially synced tenancy" refers to a partner Azure AD tenant where on-premises user identities aren't fully synced to the cloud. A guest whose identity doesn't yet exist in the cloud but who tries to redeem your B2B invitation won't be able to sign in. The one-time passcode feature would allow this guest to sign in. The direct federation feature addresses scenarios where the guest has their own IdP-managed organizational account, but the organization has no Azure AD presence at all.
+### Once Direct Federation is configured with an organization, does each guest need to be sent an individual invitation and redeem it?
+Setting up direct federation doesn't change the authentication method for guest users who have already redeemed an invitation from you. You can update a guest user's authentication method by deleting the guest user account from your directory and reinviting them.
## Step 1: Configure the partner organization's identity provider First, your partner organization needs to configure their identity provider with the required claims and relying party trusts.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/external-identities/whats-new-docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/whats-new-docs.md
@@ -1,7 +1,7 @@
Title: "What's new in Azure Active Directory external identities" description: "New and updated documentation for the Azure Active Directory external identities." Previously updated : 12/15/2020 Last updated : 02/01/2021
@@ -15,10 +15,16 @@
Welcome to what's new in Azure Active Directory external identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the external identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
-## December 2020
+## January 2021
### Updated articles
+- [Allow or block invitations to B2B users from specific organizations](allow-deny-list.md)
+- [How users in your organization can invite guest users to an app](add-users-information-worker.md)
+
+## December 2020
+
+### Updated articles
- [Azure Active Directory B2B collaboration FAQs](faq.md) - [Add Google as an identity provider for B2B guest users](google-federation.md) - [Identity Providers for External Identities](identity-providers.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-troubleshooting-support-howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
@@ -36,8 +36,8 @@ If you are unable to find answers by using self-help resources, you can open an
### How to open a support ticket for Azure AD in the Azure portal > [!NOTE]
-> For billing or subscription issues, you must use the [Microsoft 365 admin center](https://admin.microsoft.com).
->
+> * For billing or subscription issues, you must use the [Microsoft 365 admin center](https://admin.microsoft.com).
+> * If you're using Azure AD B2C, open a support ticket by first switching to an Azure AD tenant that has an Azure subscription associated with it. Typically, this is your employee tenant or the default tenant created for you when you signed up for an Azure subscription. To learn more, see [how an Azure subscription is related to Azure AD](active-directory-how-subscriptions-associated-directory.md).
1. Sign in to [the Azure portal](https://portal.azure.com) and open **Azure Active Directory**.
@@ -69,7 +69,7 @@ If you are unable to find answers by using self-help resources, you can open an
### How to open a support ticket for Azure AD in the Microsoft 365 admin center > [!NOTE]
-> Support for Azure AD in the [Microsoft 365 admin center](https://admin.microsoft.com) is offered for administrators only.
+> Support for Azure AD in the [Microsoft 365 admin center](https://admin.microsoft.com) is offered for administrators only.
1. Sign in to the [Microsoft 365 admin center](https://admin.microsoft.com) with an account that has an Enterprise Mobility + Security (EMS) license.
@@ -95,4 +95,4 @@ See the [Contact Microsoft for support](https://portal.office.com/Support/Contac
* [Microsoft Tech Community](https://techcommunity.microsoft.com/)
-* [Technical documentation at docs.microsoft.com](../index.yml)
\ No newline at end of file
+* [Technical documentation at docs.microsoft.com](../index.yml)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/protect-m365-from-on-premises-attacks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/protect-m365-from-on-premises-attacks.md
@@ -1,6 +1,6 @@
Title: Protecting Microsoft 365 from on-premises attacks
-description: Guidance on how to ensure an on-premises attack does not impact Microsoft 365
+description: Guidance about how to ensure an on-premises attack doesn't affect Microsoft 365.
@@ -18,206 +18,212 @@
# Protecting Microsoft 365 from on-premises attacks Many customers connect their private corporate networks to Microsoft 365
-to benefit their users, devices, and applications. However, there are
-many well-documented ways these private networks can be compromised. Because Microsoft 365 acts as the "nervous system" for many organizations, it is critical to protect it from compromised on-premises infrastructure.
+to benefit their users, devices, and applications. However, these private networks can be compromised in
+many well-documented ways. Because Microsoft 365 acts as a sort of nervous system for many organizations, it's critical to protect it from compromised on-premises infrastructure.
This article shows you how to configure your systems to protect your Microsoft 365 cloud environment from on-premises compromise. We
-primarily focus on Azure AD tenant configuration settings, the ways
-Azure AD tenants can be safely connected to on-premises systems, and the
-tradeoffs required to operate your systems in ways that protect your
+focus primarily on:
+
+- Azure Active Directory (Azure AD) tenant configuration settings.
+- How Azure AD tenants can be safely connected to on-premises systems.
+- The tradeoffs required to operate your systems in ways that protect your
cloud systems from on-premises compromise. We strongly recommend you implement this guidance to secure your Microsoft 365 cloud environment. > [!NOTE]
-> This article was initially published as a blog post. It has been moved here for longevity and maintenance. <br>
-To create an offline version of this article, use your browser's print to PDF functionality. Check back here frequently for updates.
+> This article was initially published as a blog post. It has been moved to its current location for longevity and maintenance.
+>
+> To create an offline version of this article, use your browser's print-to-PDF functionality. Check back here frequently for updates.
## Primary threat vectors from compromised on-premises environments Your Microsoft 365 cloud environment benefits from an extensive monitoring and security infrastructure. Using machine learning and human
-intelligence that looks across worldwide traffic can rapidly detect
-attacks and allow you to reconfigure in near-real-time. In hybrid
+intelligence, Microsoft 365 looks across worldwide traffic. It can rapidly detect
+attacks and allow you to reconfigure nearly in real time.
+
+In hybrid
deployments that connect on-premises infrastructure to Microsoft 365, many organizations delegate trust to on-premises components for critical authentication and directory object state management decisions. Unfortunately, if the on-premises environment is compromised, these
-trust relationships result in attackers' opportunities to compromise
+trust relationships become an attacker's opportunities to compromise
your Microsoft 365 environment.
-The two primary threat vectors are **federation trust relationships**
-and **account synchronization.** Both vectors can grant an attacker
+The two primary threat vectors are *federation trust relationships*
+and *account synchronization.* Both vectors can grant an attacker
administrative access to your cloud. * **Federated trust relationships**, such as SAML authentication, are
- used to authenticate to Microsoft 365 via your on-premises Identity
- Infrastructure. If a SAML token signing certificate is compromised,
- federation would allow anyone with that certificate to impersonate
- any user in your cloud. **We recommend you disable federation trust
- relationships for authentication to Microsoft 365 when possible.**
+ used to authenticate to Microsoft 365 through your on-premises identity
+ infrastructure. If a SAML token-signing certificate is compromised,
+ federation allows anyone who has that certificate to impersonate
+ any user in your cloud. *We recommend you disable federation trust
+ relationships for authentication to Microsoft 365 when possible.*
* **Account synchronization** can be used to modify privileged users
- (including their credentials) or groups granted administrative
- privileges in Microsoft 365. **We recommend you ensure that
+ (including their credentials) or groups that have administrative
+ privileges in Microsoft 365. *We recommend you ensure that
synchronized objects hold no privileges beyond a user in
- Microsoft 365,** either directly or via inclusion in trusted roles
+ Microsoft 365,* either directly or through inclusion in trusted roles
or groups. Ensure these objects have no direct or nested assignment in trusted cloud roles or groups. ## Protecting Microsoft 365 from on-premises compromise
-To address the threat vectors outlined above, we recommend you adhere to
-the principles illustrated below:
+To address the threat vectors outlined earlier, we recommend you adhere to
+the principles illustrated in the following diagram:
-![Reference architecture for protecting Microsoft 365 ](media/protect-m365/protect-m365-principles.png)
+![Reference architecture for protecting Microsoft 365.](media/protect-m365/protect-m365-principles.png)
-* **Fully Isolate your Microsoft 365 administrator accounts.** They
- should be
+1. **Fully isolate your Microsoft 365 administrator accounts.** They
+ should be:
* Mastered in Azure AD.
- * Authenticated with Multi-factor authentication (MFA).
+ * Authenticated by using multifactor authentication.
- * Secured by Azure AD conditional access.
+ * Secured by Azure AD Conditional Access.
- * Accessed only by using Azure Managed Workstations.
+ * Accessed only by using Azure-managed workstations.
-These are restricted use accounts. **There should be no on-premises accounts with administrative privileges in Microsoft 365.** For more information, see this [overview of Microsoft 365 administrator roles](/microsoft-365/admin/add-users/about-admin-roles?view=o365-worldwide).
-Also see [Roles for Microsoft 365 in Azure Active Directory](../roles/m365-workload-docs.md).
+ These administrator accounts are restricted-use accounts. *No on-premises accounts should have administrative privileges in Microsoft 365.*
-* **Manage devices from Microsoft 365.** Use Azure AD Join and
+ For more information, see the [overview of Microsoft 365 administrator roles](/microsoft-365/admin/add-users/about-admin-roles?view=o365-worldwide). Also see [Roles for Microsoft 365 in Azure AD](../roles/m365-workload-docs.md).
+
+1. **Manage devices from Microsoft 365.** Use Azure AD join and
cloud-based mobile device management (MDM) to eliminate dependencies
- on your on-premises device management infrastructure, which can
+ on your on-premises device management infrastructure. These dependencies can
compromise device and security controls.
-* **No on-premises account has elevated privileges to Microsoft 365.**
- Accounts accessing on-premises applications that require NTLM, LDAP,
- or Kerberos authentication need an account in the organization's
+1. **Ensure no on-premises account has elevated privileges to Microsoft 365.**
+ Some accounts access on-premises applications that require NTLM, LDAP,
+ or Kerberos authentication. These accounts must be in the organization's
on-premises identity infrastructure. Ensure that these accounts,
- including service accounts, are not included in privileged cloud
- roles or groups and that changes to these accounts cannot impact the
+ including service accounts, aren't included in privileged cloud
+ roles or groups. Also ensure that changes to these accounts can't affect the
integrity of your cloud environment. Privileged on-premises software
- must not be capable of impacting Microsoft 365 privileged accounts
+ must not be capable of affecting Microsoft 365 privileged accounts
or roles.
-* **Use Azure AD cloud authentication** to eliminate dependencies on
+1. **Use Azure AD cloud authentication** to eliminate dependencies on
your on-premises credentials. Always use strong authentication,
- such as Windows Hello, FIDO, the Microsoft Authenticator, or Azure
- AD MFA.
+ such as Windows Hello, FIDO, Microsoft Authenticator, or Azure
+ AD multifactor authentication.
-## Specific Recommendations
+## Specific security recommendations
-The following sections provide specific guidance on how to implement the
-principles described above.
+The following sections provide specific guidance about how to implement the
+principles described earlier.
### Isolate privileged identities
-In Azure AD, users with privileged roles such as administrators are the root of trust to build and manage the rest of the environment. Implement the following practices to minimize the impact of a compromise.
+In Azure AD, users who have privileged roles, such as administrators, are the root of trust to build and manage the rest of the environment. Implement the following practices to minimize the effects of a compromise.
* Use cloud-only accounts for Azure AD and Microsoft 365 privileged
- roles.d
+ roles.
* Deploy [privileged access devices](/security/compass/privileged-access-devices#device-roles-and-profiles) for privileged access to manage Microsoft 365 and Azure AD.
-* Deploy [Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md) (PIM) for just in time (JIT) access to all human accounts that have privileged roles, and require strong authentication to activate roles.
+* Deploy [Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md) (PIM) for just-in-time (JIT) access to all human accounts that have privileged roles. Require strong authentication to activate roles.
-* Provide administrative roles the [least privilege possible to perform their tasks](../roles/delegate-by-task.md).
+* Provide administrative roles that allow the [least privilege necessary to do required tasks](../roles/delegate-by-task.md).
-* To enable a richer role assignment experience that includes delegation and multiple roles at the same time, consider using Azure AD security groups or Microsoft 365 Groups (collectively "cloud groups") and [enable role-based access control](../roles/groups-assign-role.md). You can also use [Administrative Units](../roles/administrative-units.md) to restrict the scope of roles to a portion of the organization.
+* To enable a rich role assignment experience that includes delegation and multiple roles at the same time, consider using Azure AD security groups or Microsoft 365 Groups. These groups are collectively called *cloud groups*. Also [enable role-based access control](../roles/groups-assign-role.md). You can use [administrative units](../roles/administrative-units.md) to restrict the scope of roles to a portion of the organization.
-* Deploy [Emergency Access Accounts](../roles/security-emergency-access.md) and do NOT use on-premises password vaults to store credentials.
+* Deploy [emergency access accounts](../roles/security-emergency-access.md). Do *not* use on-premises password vaults to store credentials.
-For more information, see [Securing privileged access](/security/compass/overview), which has detailed guidance on this topic. Also, see [Secure access practices for administrators in Azure AD](../roles/security-planning.md).
+For more information, see [Securing privileged access](/security/compass/overview). Also see [Secure access practices for administrators in Azure AD](../roles/security-planning.md).
### Use cloud authentication Credentials are a primary attack vector. Implement the following
-practices to make credentials more secure.
+practices to make credentials more secure:
-* [Deploy passwordless authentication](../authentication/howto-authentication-passwordless-deployment.md): Reduce the use of passwords as much as possible by deploying passwordless credentials. These credentials are managed and
- validated natively in the cloud. Choose from:
+* [Deploy passwordless authentication](../authentication/howto-authentication-passwordless-deployment.md). Reduce the use of passwords as much as possible by deploying passwordless credentials. These credentials are managed and
+ validated natively in the cloud. Choose from these authentication methods:
* [Windows Hello for business](/windows/security/identity-protection/hello-for-business/passwordless-strategy)
- * [Authenticator App](../authentication/howto-authentication-passwordless-phone.md)
+ * [The Microsoft Authenticator app](../authentication/howto-authentication-passwordless-phone.md)
* [FIDO2 security keys](../authentication/howto-authentication-passwordless-security-key-windows.md)
-* [Deploy Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md): Provision
- [multiple strong credentials using Azure AD MFA](../fundamentals/resilience-in-credentials.md). That way, access to cloud resources will require a credential that is managed in Azure AD in addition to an on-premises password that can be manipulated.
+* [Deploy multifactor authentication](../authentication/howto-mfa-getstarted.md). Provision
+ [multiple strong credentials by using Azure AD multifactor authentication](../fundamentals/resilience-in-credentials.md). That way, access to cloud resources will require a credential that's managed in Azure AD in addition to an on-premises password that can be manipulated. For more information, see [Create a resilient access control management strategy by using Azure AD](./resilience-overview.md).
- * For more information, see [Create a resilient access control management strategy with Azure active Directory](./resilience-overview.md).
+### Limitations and tradeoffs
-**Limitations and tradeoffs**
+* Hybrid account password management requires hybrid components such as password protection agents and password writeback agents. If your on-premises infrastructure is compromised, attackers can control the machines on which these agents reside. This vulnerability won't
+ compromise your cloud infrastructure. But your cloud accounts won't protect these components from on-premises compromise.
-* Hybrid account password management requires hybrid components such as password protection agents and password writeback agents. If your on-premises infrastructure is compromised, attackers can control the machines on which these agents reside. While this will not
- compromise your cloud infrastructure, your cloud accounts will not protect these components from on-premises compromise.
+* On-premises accounts synced from Active Directory are marked to never expire in Azure AD. This setting is usually mitigated by on-premises Active Directory password settings. However, if your on-premises instance of Active Directory is compromised and synchronization is disabled, you must set the [EnforceCloudPasswordPolicyForPasswordSyncedUsers](../hybrid/how-to-connect-password-hash-synchronization.md) option to force password changes.
-* On-premises accounts synced from Active Directory are marked to never expire in Azure AD, based on the assumption that on-premises AD password policies will mitigate this. If your on-premises AD is compromised and synchronization from AD connect needs to be disabled, you must set the option [EnforceCloudPasswordPolicyForPasswordSyncedUsers](../hybrid/how-to-connect-password-hash-synchronization.md).
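A minimal sketch of turning that option on with the MSOnline PowerShell module; confirm the exact procedure in the linked password hash synchronization article before relying on it:

```powershell
# Sketch: apply Azure AD cloud password policies to password-hash-synced users.
# Assumes the MSOnline module and a connected session (Connect-MsolService).
Set-MsolDirSyncFeature -Feature EnforceCloudPasswordPolicyForPasswordSyncedUsers -Enable $true

# Confirm the feature is now enabled.
Get-MsolDirSyncFeatures -Feature EnforceCloudPasswordPolicyForPasswordSyncedUsers
```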
+## Provision user access from the cloud
-## Provision User Access from the Cloud
+*Provisioning* refers to the creation of user accounts and groups in applications or identity providers.
-Provisioning refers to the creation of user accounts and groups in applications or identity providers.
+![Diagram of provisioning architecture.](media/protect-m365/protect-m365-provision.png)
-![Diagram of provisioning architecture](media/protect-m365/protect-m365-provision.png)
+We recommend the following provisioning methods:
-* **Provision from cloud HR apps to Azure AD:** This enables an on-premises compromise to be isolated without disrupting your Joiner-Mover-Leaver cycle from your cloud HR apps to Azure AD.
+* **Provision from cloud HR apps to Azure AD**: This provisioning enables an on-premises compromise to be isolated, without disrupting your joiner-mover-leaver cycle from your cloud HR apps to Azure AD.
-* **Cloud Applications:** Where possible, deploy [Azure AD App
- Provisioning](../app-provisioning/user-provisioning.md) as
- opposed to on-premises provisioning solutions. This will protect
- some of your SaaS apps from being poisoned with malicious user
- profiles due to on-premises breaches.
+* **Cloud applications**: Where possible, deploy [Azure AD app
+ provisioning](../app-provisioning/user-provisioning.md) as
+ opposed to on-premises provisioning solutions. This method protects
+ some of your software-as-a-service (SaaS) apps from being affected by malicious hacker
+ profiles in on-premises breaches.
-* **External Identities:** Use [Azure AD B2B
+* **External identities**: Use [Azure AD B2B
collaboration](../external-identities/what-is-b2b.md).
- This will reduce the dependency on on-premises accounts for external
+ This method reduces the dependency on on-premises accounts for external
collaboration with partners, customers, and suppliers. Carefully evaluate any direct federation with other identity providers. We
- recommend limiting B2B guest accounts in the following ways.
+ recommend limiting B2B guest accounts in the following ways:
* Limit guest access to browsing groups and other properties in
- the directory. Use the external collaboration settings to restrict guest
- ability to read groups they are not members of.
+ the directory. Use the external collaboration settings to restrict guests'
+ ability to read groups they're not members of.
* Block access to the Azure portal. You can make rare necessary exceptions. Create a Conditional Access policy that includes all guests
- and external users and then [implement a policy to block
+ and external users. Then [implement a policy to block
access](../../role-based-access-control/conditional-access-azure-management.md).
-* **Disconnected Forests:** Use [Azure AD Cloud
- Provisioning](../cloud-provisioning/what-is-cloud-provisioning.md). This enables you to connect to disconnected forests, eliminating the need to establish cross-forest connectivity or trusts, which can
- broaden the impact of an on-premises breach. *
+* **Disconnected forests**: Use [Azure AD cloud
+ provisioning](../cloud-provisioning/what-is-cloud-provisioning.md). This method enables you to connect to disconnected forests, eliminating the need to establish cross-forest connectivity or trusts, which can
+ broaden the effect of an on-premises breach.
-**Limitations and Tradeoffs:**
+### Limitations and tradeoffs
-* When used to provision hybrid accounts, the Azure AD from cloud HR systems relies on on-premises synchronization to complete the data flow from AD to Azure AD. If synchronization is interrupted, new employee records will not be available in Azure AD.
+When you provision hybrid accounts from cloud HR apps, the flow to Azure AD relies on on-premises synchronization to complete the data flow from Active Directory to Azure AD. If synchronization is interrupted, new employee records won't be available in Azure AD.
## Use cloud groups for collaboration and access Cloud groups allow you to decouple your collaboration and access from your on-premises infrastructure.
-* **Collaboration:** Use Microsoft 365 Groups and Microsoft Teams for
+* **Collaboration**: Use Microsoft 365 Groups and Microsoft Teams for
modern collaboration. Decommission on-premises distribution lists,
- and [Upgrade distribution lists to Microsoft 365 Groups in
+ and [upgrade distribution lists to Microsoft 365 Groups in
Outlook](/office365/admin/manage/upgrade-distribution-lists?view=o365-worldwide).
-* **Access:** Use Azure AD security groups or Microsoft 365 Groups to
+* **Access**: Use Azure AD security groups or Microsoft 365 Groups to
authorize access to applications in Azure AD.
-* **Office 365 licensing:** Use group-based licensing to provision to
- Office 365 using cloud-only groups. This decouples control of group
+* **Office 365 licensing**: Use group-based licensing to provision to
+ Office 365 by using cloud-only groups. This method decouples control of group
membership from on-premises infrastructure.
-Owners of groups used for access should be considered privileged
-identities to avoid membership takeover from on-premises compromise.
-Take over includes direct manipulation of group membership on-premises
+Owners of groups that are used for access should be considered privileged
+identities to avoid membership takeover in an on-premises compromise.
+A takeover would include direct manipulation of group membership on-premises
or manipulation of on-premises attributes that can affect dynamic group membership in Microsoft 365.
@@ -226,84 +232,93 @@ membership in Microsoft 365.
Use Azure AD capabilities to securely manage devices. -- **Use Windows 10 Workstations:** [Deploy Azure AD
- Joined](../devices/azureadjoin-plan.md)
+- **Use Windows 10 workstations**: [Deploy Azure AD
+ joined](../devices/azureadjoin-plan.md)
devices with MDM policies. Enable [Windows Autopilot](/mem/autopilot/windows-autopilot) for a fully automated provisioning experience.
- - Deprecate Windows 8.1 and earlier machines.
+ - Deprecate machines that run Windows 8.1 and earlier.
+
+ - Don't deploy server OS machines as workstations.
- - Do not deploy Server OS machines as workstations.
+ - Use [Microsoft Intune](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/microsoft-intune)
+ as the source of authority for all device management workloads.
- - Use [Microsoft Intune](https://www.microsoft.com/en/microsoft-365/enterprise-mobility-security/microsoft-intune)
- as the source of authority of all device management workloads.
+- [**Deploy privileged access devices**](/security/compass/privileged-access-devices#device-roles-and-profiles):
+ Use privileged access to manage Microsoft 365 and Azure AD.
-- [**Deploy privileged access devices**](/security/compass/privileged-access-devices#device-roles-and-profiles)
- for privileged access to manage Microsoft 365 and Azure AD.
+## Workloads, applications, and resources
- ## Workloads, applications, and resources
+- **On-premises single-sign-on (SSO) systems**
-- **On-premises SSO systems:** Deprecate any on-premises federation
- and Web Access Management infrastructure and configure applications
+ Deprecate any on-premises federation
+ and web access management infrastructure. Configure applications
to use Azure AD. -- **SaaS and LOB applications that support modern authentication
- protocols:** [Use Azure AD for single
- sign-on](../manage-apps/what-is-single-sign-on.md). The
+- **SaaS and line-of-business (LOB) applications that support modern authentication
+ protocols**
+
+ [Use Azure AD for SSO](../manage-apps/what-is-single-sign-on.md). The
more apps you configure to use Azure AD for authentication, the less
- risk in the case of an on-premises compromise.
+ risk in an on-premises compromise.
-* **Legacy Applications**
+* **Legacy applications**
- * Authentication, authorization, and remote access to legacy applications that do not support modern authentication can be enabled via [Azure AD Application Proxy](../manage-apps/application-proxy.md).They can also be enabled through a network or application delivery controller solution using [secure hybrid access partner integrations](../manage-apps/secure-hybrid-access.md).
+ * You can enable authentication, authorization, and remote access to legacy applications that don't support modern authentication. Use [Azure AD Application Proxy](../manage-apps/application-proxy.md). You can also enable them through a network or application delivery controller solution by using [secure hybrid access partner integrations](../manage-apps/secure-hybrid-access.md).
- * Choose a VPN vendor that supports modern authentication and integrate its authentication with Azure AD. In the case of anon-premises compromise, you can use Azure AD to disable or block access by disabling the VPN.
+ * Choose a VPN vendor that supports modern authentication. Integrate its authentication with Azure AD. In an on-premises compromise, you can use Azure AD to disable or block access by disabling the VPN.
* **Application and workload servers**
- * Applications or resources that required servers can be migrated to Azure IaaS and use [Azure AD Domain Services](../../active-directory-domain-services/overview.md) (Azure AD DS) to decouple trust and dependency on AD on-premises. To achieve this decoupling, virtual networks used for Azure AD DS should not have connection to corporate networks.
+ * Applications or resources that required servers can be migrated to Azure infrastructure as a service (IaaS). Use [Azure AD Domain Services](../../active-directory-domain-services/overview.md) (Azure AD DS) to decouple trust and dependency on on-premises instances of Active Directory. To achieve this decoupling, make sure virtual networks used for Azure AD DS don't have a connection to corporate networks.
- * Follow the guidance of the [credential tiering](/security/compass/privileged-access-access-model#ADATM_BM). Application Servers are typically considered Tier 1 assets.
+ * Follow the guidance for [credential tiering](/security/compass/privileged-access-access-model#ADATM_BM). Application servers are typically considered tier-1 assets.
- ## Conditional Access Policies
+## Conditional Access policies
-Use Azure AD Conditional Access to interpret signals and make
-authentication decisions based on them. For more information, see the
-[Conditional Access deployment plan.](../conditional-access/plan-conditional-access.md)
+Use Azure AD Conditional Access to interpret signals and use them to make
+authentication decisions. For more information, see the
+[Conditional Access deployment plan](../conditional-access/plan-conditional-access.md).
-* [Legacy Authentication Protocols](../fundamentals/auth-sync-overview.md): Use Conditional Access to [block legacy authentication](../conditional-access/howto-conditional-access-policy-block-legacy.md) protocols whenever possible. Additionally, disable legacy authentication protocols at the application level using application-specific configuration.
+* Use Conditional Access to [block legacy authentication protocols](../conditional-access/howto-conditional-access-policy-block-legacy.md) whenever possible. Additionally, disable legacy authentication protocols at the application level by using an application-specific configuration.
- * See specific details for [Exchange Online](/exchange/clients-and-mobile-in-exchange-online/disable-basic-authentication-in-exchange-online#how-basic-authentication-works-in-exchange-online) and [SharePoint Online](/powershell/module/sharepoint-online/set-spotenant?view=sharepoint-ps).
+ For more information, see [Legacy authentication protocols](../fundamentals/auth-sync-overview.md). Or see specific details for [Exchange Online](/exchange/clients-and-mobile-in-exchange-online/disable-basic-authentication-in-exchange-online#how-basic-authentication-works-in-exchange-online) and [SharePoint Online](/powershell/module/sharepoint-online/set-spotenant?view=sharepoint-ps).
-* Implement the recommended [Identity and device access configurations.](/microsoft-365/security/office-365-security/identity-access-policies?view=o365-worldwide)
+* Implement the recommended [identity and device access configurations](/microsoft-365/security/office-365-security/identity-access-policies?view=o365-worldwide).
-* If you are using a version of Azure AD that does not include Conditional Access, ensure that you are using the [Azure AD security defaults](../fundamentals/concept-fundamentals-security-defaults.md).
+* If you're using a version of Azure AD that doesn't include Conditional Access, ensure that you're using the [Azure AD security defaults](../fundamentals/concept-fundamentals-security-defaults.md).
- * For more information on Azure AD feature licensing, see the [Azure AD pricing guide](https://azure.microsoft.com/pricing/details/active-directory/).
+ For more information about Azure AD feature licensing, see the [Azure AD pricing guide](https://azure.microsoft.com/pricing/details/active-directory/).
-## Monitoring
+## Monitor
-Once you have configured your environment to protect your Microsoft 365
+After you configure your environment to protect your Microsoft 365
from an on-premises compromise, [proactively monitor](../reports-monitoring/overview-monitoring.md) the environment.
-### Scenarios to Monitor
+### Scenarios to monitor
Monitor the following key scenarios, in addition to any scenarios specific to your organization. For example, you should proactively monitor access to your business-critical applications and resources.
-* **Suspicious activity**: All [Azure AD risk events](../identity-protection/overview-identity-protection.md#risk-detection-and-remediation) should be monitored for suspicious activity. [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) is natively integrated with Azure Security Center.
+* **Suspicious activity**
+
+ Monitor all [Azure AD risk events](../identity-protection/overview-identity-protection.md#risk-detection-and-remediation) for suspicious activity. [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) is natively integrated with Azure Security Center.
- * Define the network [named locations](../reports-monitoring/quickstart-configure-named-locations.md) to avoid noisy detections on location-based signals.
-* **User Entity Behavioral Analytics (UEBA) alerts** Use UEBA
+ Define the network [named locations](../reports-monitoring/quickstart-configure-named-locations.md) to avoid noisy detections on location-based signals.
+* **User and Entity Behavioral Analytics (UEBA) alerts**
+
+ Use UEBA
to get insights on anomaly detection.
- * Microsoft Cloud App Discovery (MCAS) provides [UEBA in the cloud](/cloud-app-security/tutorial-ueba).
+ * Microsoft Cloud App Security (MCAS) provides [UEBA in the cloud](/cloud-app-security/tutorial-ueba).
+
+ * You can [integrate on-premises UEBA from Azure Advanced Threat Protection (ATP)](/defender-for-identity/install-step2). MCAS reads signals from Azure AD Identity Protection.
- * You can [integrate on-premises UEBA from Azure ATP](/defender-for-identity/install-step2). MCAS reads signals from Azure AD Identity Protection.
+* **Emergency access accounts activity**
-* **Emergency access accounts activity**: Any access using [emergency access accounts](../roles/security-emergency-access.md) should be monitored and alerts created for investigations. This monitoring must include:
+ Monitor any access that uses [emergency access accounts](../roles/security-emergency-access.md). Create alerts for investigations. This monitoring must include:
* Sign-ins.
@@ -311,57 +326,68 @@ monitor access to your business-critical applications and resources.
* Any updates on group memberships.
- * Application Assignments.
-* **Privileged role activity**: Configure and review
- security [alerts generated by Azure AD PIM](../privileged-identity-management/pim-how-to-configure-security-alerts.md?tabs=new#security-alerts).
+ * Application assignments.
+* **Privileged role activity**
+
+ Configure and review
+ security [alerts generated by Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-how-to-configure-security-alerts.md?tabs=new#security-alerts).
Monitor direct assignment of privileged roles outside PIM by generating alerts whenever a user is assigned directly.
-* **Azure AD tenant-wide configurations**: Any change to tenant-wide configurations should generate alerts in the system. These include but are not limited to
- * Updating custom domains
- * Azure AD B2B allow/block list changes.
- * Azure AD B2B allowed identity providers (SAML IDPs through direct federation or social logins).
- * Conditional Access or Risk policy changes
+* **Azure AD tenant-wide configurations**
+
+ Any change to tenant-wide configurations should generate alerts in the system. These changes include but aren't limited to:
+
+ * Updated custom domains.
+
+ * Azure AD B2B changes to allowlists and blocklists.
-* **Application and service principal objects**:
+ * Azure AD B2B changes to allowed identity providers (SAML identity providers through direct federation or social sign-ins).
+
+ * Conditional Access or Risk policy changes.
+
+* **Application and service principal objects**
+
* New applications or service principals that might require Conditional Access policies.
- * Additional credentials added to service principals.
+ * Credentials added to service principals.
* Application consent activity.
-* **Custom roles**:
- * Updates of the custom role definitions.
+* **Custom roles**
+ * Updates to the custom role definitions.
+
+ * Newly created custom roles.
- * New custom roles created.
+### Log management
-### Log Management
+Define a log storage and retention strategy, design, and implementation to facilitate a consistent tool set. For example, you could consider security information and event management (SIEM) systems like Azure Sentinel, common queries, and investigation and forensics playbooks.
-Define a log storage and retention strategy, design, and implementation to facilitate a consistent toolset such as SIEM systems like Azure Sentinel, common queries, and investigation and forensics playbooks.
+* **Azure AD logs**: Ingest generated logs and signals by consistently following best practices for settings such as diagnostics, log retention, and SIEM ingestion.
-* **Azure AD Logs** Ingest logs and signal produced following consistent best practices, including diagnostics settings, log retention, and SIEM ingestion. The log strategy must include the following Azure AD logs:
+ The log strategy must include the following Azure AD logs:
* Sign-in activity * Audit logs * Risk events
-Azure AD provides [Azure Monitor integration](../reports-monitoring/concept-activity-logs-azure-monitor.md) for the sign-in activity log and audit logs. Risk events can be ingested through [Microsoft Graph API](/graph/api/resources/identityriskevent). You can [stream Azure AD logs to Azure monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+ Azure AD provides [Azure Monitor integration](../reports-monitoring/concept-activity-logs-azure-monitor.md) for the sign-in activity log and audit logs. Risk events can be ingested through the [Microsoft Graph API](/graph/api/resources/identityriskevent). You can [stream Azure AD logs to Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
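As a rough sketch of that ingestion path, risk detections can be pulled with a single Graph call; the v1.0 riskDetections endpoint and property names shown here are one option, so verify the resource names against the linked Graph reference for your scenario:

```powershell
# Sketch: read recent risk detections from Microsoft Graph for downstream SIEM ingestion.
# Assumes $accessToken holds a Graph token with the IdentityRiskEvent.Read.All permission.
$headers = @{ Authorization = "Bearer $accessToken" }
$uri = "https://graph.microsoft.com/v1.0/identityProtection/riskDetections?`$top=50"

$response = Invoke-RestMethod -Method Get -Uri $uri -Headers $headers
$response.value |
    Select-Object detectedDateTime, riskEventType, riskLevel, userPrincipalName
```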
-* **Hybrid Infrastructure OS Security Logs.** All hybrid identity infrastructure OS logs should be archived and carefully monitored as a <br>Tier 0 system, given the surface area implications. This includes:
+* **Hybrid infrastructure OS security logs**: All hybrid identity infrastructure OS logs should be archived and carefully monitored as a tier-0 system, because of the surface-area implications. Include the following elements:
* Azure AD Connect. [Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md) must be deployed to monitor identity synchronization.
- * Application Proxy Agents
+ * Application Proxy agents
- * Password write-back agents
+ * Password writeback agents
* Password Protection Gateway machines
- * NPS that have the Azure MFA RADIUS extension
+ * Network policy servers (NPSs) that have the Azure AD multifactor authentication RADIUS extension
-## Next Steps
-* [Build resilience into identity and access management with Azure AD](resilience-overview.md)
+## Next steps
+* [Build resilience into identity and access management by using Azure AD](resilience-overview.md)
* [Secure external access to resources](secure-external-access-resources.md) * [Integrate all your apps with Azure AD](five-steps-to-full-application-integration-with-azure-ad.md)\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/resilience-client-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/resilience-client-app.md
@@ -92,7 +92,7 @@ In general, an application that uses modern authentication will call an endpoint
### Cache tokens
-Apps should properly cache tokens received from Microsoft Identity. When your app receives tokens, the HTTP response that contains the tokens also contains an "expires_in" property that tells the application how long to cache, and reuse, the token. In it important that applications use the "expires_in" property to determine the lifespan of the token. Application must never attempt to decode an API access token.
+Apps should properly cache tokens received from Microsoft Identity. When your app receives tokens, the HTTP response that contains the tokens also contains an "expires_in" property that tells the application how long to cache and reuse the token. It is important that applications use the "expires_in" property to determine the lifespan of the token. Applications must never attempt to decode an API access token.
![An application making a call to Microsoft identity, but the call goes through a token cache on the device running the application](media/resilience-client-app/token-cache.png)
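As a rough illustration of that caching pattern (the MSAL libraries do this for you; the client credentials call below is only a stand-in, and the variable names are placeholders):

```powershell
# Sketch: cache a token and honor expires_in instead of decoding the access token.
$body = @{
    client_id     = $clientId          # placeholder app registration values
    client_secret = $clientSecret
    scope         = "https://graph.microsoft.com/.default"
    grant_type    = "client_credentials"
}
$tokenResponse = Invoke-RestMethod -Method Post -Body $body `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"

# Store the token together with a computed expiry; refresh a few minutes early to be safe.
$tokenCache = [pscustomobject]@{
    AccessToken = $tokenResponse.access_token
    ExpiresOn   = (Get-Date).AddSeconds($tokenResponse.expires_in - 300)
}

# Reuse $tokenCache.AccessToken until (Get-Date) -ge $tokenCache.ExpiresOn, then request a new token.
```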
@@ -178,4 +178,4 @@ If you develop resource APIs, we encourage you to participate in the [Shared Sig
- [How to use Continuous Access Evaluation enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md) - [Build resilience into daemon applications](resilience-daemon-app.md) - [Build resilience in your identity and access management infrastructure](resilience-in-infrastructure.md)-- [Build resilience in your CIAM systems](resilience-b2c.md)\ No newline at end of file
+- [Build resilience in your CIAM systems](resilience-b2c.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/governance/entitlement-management-access-package-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-resources.md
@@ -12,7 +12,7 @@ na
ms.devlang: na Previously updated : 06/18/2020 Last updated : 12/14/2020
@@ -142,7 +142,13 @@ Azure AD can automatically assign users access to a SharePoint Online site or Sh
1. On the **Add resource roles to access package** page, click **SharePoint sites** to open the Select SharePoint Online sites pane.
-1. Select the SharePoint Online sites you want to include in the access package.
+ :::image type="content" source="media/entitlement-management-access-package-resources/sharepoint-multigeo-portal.png" alt-text="Access package - Add resource roles - Select SharePoint sites - Portal view":::
+
+1. If you have [Multi-Geo](https://docs.microsoft.com/microsoft-365/enterprise/multi-geo-capabilities-in-onedrive-and-sharepoint-online-in-microsoft-365?view=o365-worldwide) enabled for SharePoint, select the environment from which you want to choose sites.
+
+ :::image type="content" source="media/entitlement-management-access-package-resources/sharepoint-multigeo-select.png" alt-text="Access package - Add resource roles - Select SharePoint Multi-geo sites":::
+
+1. If multi-geo is not enabled, you do not need to select an environment. Select the SharePoint Online sites you want to include in the access package.
![Access package - Add resource roles - Select SharePoint Online sites](./media/entitlement-management-access-package-resources/sharepoint-site-select.png)
@@ -182,4 +188,4 @@ When you remove a member of a team, they are removed from the Microsoft 365 Grou
- [Create a basic group and add members using Azure Active Directory](../fundamentals/active-directory-groups-create-azure-portal.md) - [How to: Configure the role claim issued in the SAML token for enterprise applications](../develop/active-directory-enterprise-app-role-management.md)-- [Introduction to SharePoint Online](/sharepoint/introduction)\ No newline at end of file
+- [Introduction to SharePoint Online](/sharepoint/introduction)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/access-panel-collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/access-panel-collections.md
@@ -4,7 +4,7 @@ description: Use My Apps collections to Customize My Apps pages for a simpler My
documentationcenter: '' -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/access-panel-manage-self-service-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/access-panel-manage-self-service-access.md
@@ -3,7 +3,7 @@ Title: How to use self-service application access in Azure AD
description: Enable self-service so users can find apps in Azure AD -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/active-directory-app-proxy-protect-ndes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/active-directory-app-proxy-protect-ndes.md
@@ -4,7 +4,7 @@
description: Guidance on deploying an Azure Active Directory Application Proxy to protect your NDES server. -+ ms.assetid: na
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal-assign-users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-assign-users.md
@@ -3,7 +3,7 @@ Title: 'Quickstart: Assign users to an app that uses Azure Active Directory as a
description: This quickstart walks through the process of allowing users to use an app that you have setup to use Azure AD as an identity provider. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-configure.md
@@ -3,7 +3,7 @@ Title: 'Quickstart: Configure properties for an application in your Azure Active
description: This quickstart uses the Azure portal to configure an application that has been registered with your Azure Active Directory (Azure AD) tenant. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal-setup-oidc-sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
@@ -3,7 +3,7 @@ Title: 'Quickstart: Set up OIDC-based single sign-on (SSO) for an application in
description: This quickstart walks through the process of setting up OIDC-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal-setup-sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
@@ -3,7 +3,7 @@ Title: 'Quickstart: Set up SAML-based single sign-on (SSO) for an application in
description: This quickstart walks through the process of setting up SAML-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal.md
@@ -3,7 +3,7 @@ Title: 'Quickstart: Add an application to your Azure Active Directory (Azure AD)
description: This quickstart uses the Azure portal to add a gallery application to your Azure Active Directory (Azure AD) tenant. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-management-fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-management-fundamentals.md
@@ -5,7 +5,7 @@ description: Learn best practices and recommendations for managing applications
documentationcenter: '' -+ editor: '' ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-add-on-premises-application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-add-on-premises-application.md
@@ -3,7 +3,7 @@ Title: Tutorial - Add an on-premises app - Application Proxy in Azure AD
description: Azure Active Directory (Azure AD) has an Application Proxy service that enables users to access on-premises applications by signing in with their Azure AD account. This tutorial shows you how to prepare your environment for use with Application Proxy. Then, it uses the Azure portal to add an on-premises application to your Azure AD tenant. -+
@@ -51,7 +51,8 @@ For high availability in your production environment, we recommend having more t
> ``` > Windows Registry Editor Version 5.00 >
-> HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp\EnableDefaultHttp2 Value: 0
+> [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp]
+> "EnableDefaultHTTP2"=dword:00000000
> ``` > > The key can be set via PowerShell with the following command.
@@ -265,4 +266,4 @@ You did these things:
You're ready to configure the application for single sign-on. Use the following link to choose a single sign-on method and to find single sign-on tutorials. > [!div class="nextstepaction"]
-> [Configure single sign-on](sso-options.md#choosing-a-single-sign-on-method)
\ No newline at end of file
+> [Configure single sign-on](sso-options.md#choosing-a-single-sign-on-method)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-back-end-kerberos-constrained-delegation-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-back-end-kerberos-constrained-delegation-how-to.md
@@ -3,7 +3,7 @@ Title: Troubleshoot Kerberos constrained delegation - App Proxy
description: Troubleshoot Kerberos Constrained Delegation configurations for Application Proxy -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-config-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-config-how-to.md
@@ -4,7 +4,7 @@ description: Learn how to create and configure an Application Proxy application
documentationcenter: '' -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-config-problem https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-config-problem.md
@@ -4,7 +4,7 @@ description: How to troubleshoot issues creating Application Proxy applications
documentationcenter: '' -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-config-sso-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-config-sso-how-to.md
@@ -3,7 +3,7 @@ Title: Understand single sign-on with an on-premises app using Application Proxy
description: Understand single sign-on with an on-premises app using Application Proxy. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-configure-connectors-with-proxy-servers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-connectors-with-proxy-servers.md
@@ -3,7 +3,7 @@ Title: Work with existing on-premises proxy servers and Azure Active Directory
description: Covers how to work with existing on-premises proxy servers with Azure Active Directory. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-configure-cookie-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-cookie-settings.md
@@ -3,7 +3,7 @@ Title: Application Proxy cookie settings - Azure Active Directory | Microsoft D
description: Azure Active Directory (Azure AD) has access and session cookies for accessing on-premises applications through Application Proxy. In this article, you'll find out how to use and configure the cookie settings. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-configure-custom-domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-custom-domain.md
@@ -3,7 +3,7 @@ Title: Custom domains in Azure AD Application Proxy
description: Configure and manage custom domains in Azure AD Application Proxy. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-configure-custom-home-page https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-custom-home-page.md
@@ -4,7 +4,7 @@ description: Covers the basics about Azure AD Application Proxy connectors
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-configure-for-claims-aware-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-for-claims-aware-applications.md
@@ -4,7 +4,7 @@ description: How to publish on-premises ASP.NET applications that accept ADFS cl
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-configure-hard-coded-link-translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-hard-coded-link-translation.md
@@ -4,7 +4,7 @@ description: Learn how to redirect hard-coded links for apps published with Azur
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-configure-native-client-application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-native-client-application.md
@@ -4,7 +4,7 @@ description: Covers how to enable native client apps to communicate with Azure A
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-configure-single-sign-on-on-premises-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-single-sign-on-on-premises-apps.md
@@ -4,7 +4,7 @@ description: Learn how to provide single sign-on for on-premises applications th
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-configure-single-sign-on-password-vaulting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-single-sign-on-password-vaulting.md
@@ -4,7 +4,7 @@ description: Turn on single sign-on for your published on-premises applications
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-configure-single-sign-on-with-headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-single-sign-on-with-headers.md
@@ -3,7 +3,7 @@ Title: Header-based single sign-on for on-premises apps with Azure AD App Proxy
description: Learn how to provide single sign-on for on-premises applications that are secured with header-based authentication. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-configure-single-sign-on-with-kcd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-single-sign-on-with-kcd.md
@@ -3,7 +3,7 @@ Title: Kerberos-based single sign-on (SSO) in Azure Active Directory with Applic
description: Covers how to provide single sign-on using Azure Active Directory Application Proxy. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-connectivity-no-working-connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-connectivity-no-working-connector.md
@@ -4,7 +4,7 @@ description: Address problems you might encounter when there is no working Conne
documentationcenter: '' -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-connector-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-connector-groups.md
@@ -3,7 +3,7 @@ Title: Publish apps on separate networks via connector groups - Azure AD
description: Covers how to create and manage groups of connectors in Azure AD Application Proxy. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-connector-installation-problem https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-connector-installation-problem.md
@@ -3,7 +3,7 @@ Title: Problem installing the Application Proxy Agent Connector
description: How to troubleshoot issues you might face when installing the Application Proxy Agent Connector for Azure Active Directory. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-connectors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-connectors.md
@@ -3,7 +3,7 @@ Title: Understand Azure AD Application Proxy connectors | Microsoft Docs
description: Learn about the Azure AD Application Proxy connectors. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-debug-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-debug-apps.md
@@ -3,7 +3,7 @@ Title: Debug Application Proxy applications - Azure Active Directory | Microsoft
description: Debug issues with Azure Active Directory (Azure AD) Application Proxy applications. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-debug-connectors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-debug-connectors.md
@@ -3,7 +3,7 @@ Title: Debug Application Proxy connectors - Azure Active Directory | Microsoft D
description: Debug issues with Azure Active Directory (Azure AD) Application Proxy connectors. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-deployment-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-deployment-plan.md
@@ -3,7 +3,7 @@ Title: Plan an Azure Active Directory Application Proxy Deployment
description: An end-to-end guide for planning the deployment of Application proxy within your organization -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-faq.md
@@ -3,7 +3,7 @@ Title: Azure Active Directory Application Proxy frequently asked questions
description: Learn answers to frequently asked questions (FAQ) about using Azure AD Application Proxy to publish internal, on-premises applications to remote users. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-high-availability-load-balancing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-high-availability-load-balancing.md
@@ -4,7 +4,7 @@ description: How traffic distribution works with your Application Proxy deployme
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-integrate-with-microsoft-cloud-application-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-integrate-with-microsoft-cloud-application-security.md
@@ -2,7 +2,7 @@
Title: Integrate on-premises apps with Cloud App Security - Azure AD description: Configure an on-premises application in Azure Active Directory to work with Microsoft Cloud App Security (MCAS). Use the MCAS Conditional Access App Control to monitor and control sessions in real-time based on Conditional Access policies. You can apply these policies to on-premises applications that use Application Proxy in Azure Active Directory (Azure AD). -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-integrate-with-power-bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-integrate-with-power-bi.md
@@ -4,7 +4,7 @@ description: Covers the basics about how to integrate an on-premises Power BI wi
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-integrate-with-remote-desktop-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-integrate-with-remote-desktop-services.md
@@ -3,7 +3,7 @@ Title: Publish Remote Desktop with Azure Active Directory Application Proxy
description: Covers how to configure App Proxy with Remote Desktop Services (RDS) -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-integrate-with-sharepoint-server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-integrate-with-sharepoint-server.md
@@ -4,7 +4,7 @@ description: Covers the basics about how to integrate an on-premises SharePoint
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-integrate-with-tableau https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-integrate-with-tableau.md
@@ -3,7 +3,7 @@ Title: Azure Active Directory Application Proxy and Tableau | Microsoft Docs
description: Learn how to use Azure Active Directory (Azure AD) Application Proxy to provide remote access for your Tableau deployment. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-integrate-with-teams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-integrate-with-teams.md
@@ -4,7 +4,7 @@ description: Use Azure AD Application Proxy to access your on-premises applicati
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-migration.md
@@ -4,7 +4,7 @@ description: Choose which proxy solution is best if you're upgrading from Micros
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-network-topology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-network-topology.md
@@ -4,7 +4,7 @@ description: Covers network topology considerations when using Azure AD Applicat
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-page-appearance-broken-problem https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-page-appearance-broken-problem.md
@@ -4,7 +4,7 @@ description: Guidance when the page isn't displaying correctly in an Applicati
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-page-links-broken-problem https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-page-links-broken-problem.md
@@ -3,7 +3,7 @@ Title: Links on the page don't work for an Application Proxy application
description: How to troubleshoot issues with broken links on Application Proxy applications you have integrated with Azure AD -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-page-load-speed-problem https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-page-load-speed-problem.md
@@ -4,7 +4,7 @@ description: Troubleshoot page load performance issues with the Azure AD Applica
documentationcenter: '' -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-ping-access-publishing-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-ping-access-publishing-guide.md
@@ -3,7 +3,7 @@ Title: Header-based authentication with PingAccess for Azure AD Application Prox
description: Publish applications with PingAccess and App Proxy to support header-based authentication. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-powershell-samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-powershell-samples.md
@@ -3,7 +3,7 @@ Title: PowerShell samples for Azure AD Application Proxy
description: Use these PowerShell samples for Azure AD Application Proxy to get information about Application Proxy apps and connectors in your directory, assign users and groups to apps, and get certificate information. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-qlik https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-qlik.md
@@ -4,7 +4,7 @@ description: Turn on Application Proxy in the Azure portal, and install the Co
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-register-connector-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-register-connector-powershell.md
@@ -4,7 +4,7 @@ description: Covers how to perform an unattended installation of Azure AD Applic
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-release-version-history https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-release-version-history.md
@@ -3,7 +3,7 @@ Title: 'Azure AD Application Proxy: Version release history'
description: This article lists all releases of Azure AD Application Proxy and describes new features and fixed issues -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-remove-personal-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-remove-personal-data.md
@@ -3,7 +3,7 @@ Title: Remove personal data - Azure Active Directory Application Proxy
description: Remove personal data from connectors installed on devices for Azure Active Directory Application Proxy. documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-secure-api-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-secure-api-access.md
@@ -3,7 +3,7 @@ Title: Access on-premises APIs with Azure AD Application Proxy
description: Azure Active Directory's Application Proxy lets native apps securely access APIs and business logic you host on-premises or on cloud VMs. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-security.md
@@ -4,7 +4,7 @@ description: Covers security considerations for using Azure AD Application Proxy
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-sign-in-bad-gateway-timeout-error https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-sign-in-bad-gateway-timeout-error.md
@@ -4,7 +4,7 @@ description: How to resolve common access issues with Azure AD Application Proxy
documentationcenter: '' -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-troubleshoot.md
@@ -3,7 +3,7 @@ Title: Troubleshoot Application Proxy | Microsoft Docs
description: Covers how to troubleshoot errors in Azure AD Application Proxy. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-understand-cors-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-understand-cors-issues.md
@@ -3,7 +3,7 @@ Title: Understand and solve Azure AD Application Proxy CORS issues
description: Provides an understanding of CORS in Azure AD Application Proxy, and how to identify and solve CORS issues. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-wildcard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-wildcard.md
@@ -4,7 +4,7 @@ description: Learn how to use Wildcard applications in the Azure Active Director
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy.md
@@ -3,7 +3,7 @@ Title: Remote access to on-premises apps - Azure AD Application Proxy
description: Azure Active Directory's Application Proxy provides secure remote access to on-premises web applications. After a single sign-on to Azure AD, users can access both cloud and on-premises applications through an external URL or an internal application portal. For example, Application Proxy can provide remote access and single sign-on to Remote Desktop, SharePoint, Teams, Tableau, Qlik, and line of business (LOB) applications. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-sign-in-other-problem-access-panel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-other-problem-access-panel.md
@@ -3,7 +3,7 @@ Title: Troubleshoot problems signing in to an application from Azure AD My Apps
description: Troubleshoot problems signing in to an application from Azure AD My Apps -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-sign-in-problem-application-error https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
@@ -4,7 +4,7 @@ description: How to resolve issues with Azure AD sign in when the app returns an
documentationcenter: '' -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-sign-in-problem-federated-sso-gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-problem-federated-sso-gallery.md
@@ -3,7 +3,7 @@ Title: Problems signing in to SAML-based single sign-on configured apps
description: Guidance for the specific errors when signing into an application you have configured for SAML-based federated single sign-on with Azure Active Directory -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-sign-in-problem-first-party-microsoft https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-problem-first-party-microsoft.md
@@ -4,7 +4,7 @@ description: Troubleshoot common problems faced when signing in to first-party M
documentationcenter: '' -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-sign-in-problem-on-premises-application-proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-problem-on-premises-application-proxy.md
@@ -4,7 +4,7 @@ description: Troubleshooting common issues faced when you are unable to sign in
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-sign-in-unexpected-user-consent-error https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-error.md
@@ -4,7 +4,7 @@ description: Discusses errors that can occur during the process of consenting to
documentationcenter: '' -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
@@ -4,7 +4,7 @@ description: How to troubleshoot when a user sees a consent prompt for an applic
documentationcenter: '' -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-types.md
@@ -3,7 +3,7 @@ Title: Viewing apps using your Azure Active Directory tenant for identity manage
description: Understand how to view all applications using your Azure Active Directory tenant for identity management. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/assign-user-or-group-access-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
@@ -3,7 +3,7 @@ Title: Manage user assignment for an app in Azure Active Directory
description: Learn how to assign and unassign users, and groups, for an app using Azure Active Directory for identity management. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/certificate-signing-options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/certificate-signing-options.md
@@ -4,7 +4,7 @@ description: Learn how to use advanced certificate signing options in the SAML t
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/cloud-app-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/cloud-app-security.md
@@ -3,7 +3,7 @@ Title: App visibility and control with Microsoft Cloud App Security
description: Learn ways to identify app risk levels, stop breaches and leaks in real time, and use app connectors to take advantage of provider APIs for visibility and governance. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/common-scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/common-scenarios.md
@@ -3,7 +3,7 @@ Title: Common application management scenarios for Azure Active Directory | Micr
description: Centralize application management with Azure AD documentationcenter: '' -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-admin-consent-workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
@@ -3,7 +3,7 @@ Title: Configure the admin consent workflow - Azure Active Directory | Microsoft
description: Learn how to configure a way for end users to request access to applications that require admin consent. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-authentication-for-federated-users-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
@@ -4,7 +4,7 @@ description: Learn how to configure Home Realm Discovery policy for Azure Active
documentationcenter: -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-linked-sign-on https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-linked-sign-on.md
@@ -3,7 +3,7 @@ Title: Understand linked sign-on in Azure Active Directory
description: Understand linked sign-on in Azure Active Directory. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-oidc-single-sign-on https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-oidc-single-sign-on.md
@@ -3,7 +3,7 @@ Title: Understand OIDC-based single sign-on (SSO) for apps in Azure Active Direc
description: Understand OIDC-based single sign-on (SSO) for apps in Azure Active Directory. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
@@ -3,7 +3,7 @@ Title: Understand password-based single sign-on (SSO) for apps in Azure Active D
description: Understand password-based single sign-on (SSO) for apps in Azure Active Directory -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-permission-classifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-permission-classifications.md
@@ -3,7 +3,7 @@ Title: Configure permission classifications with Azure AD
description: Learn how to manage delegated permission classifications. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-saml-single-sign-on https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-saml-single-sign-on.md
@@ -3,7 +3,7 @@ Title: Understand SAML-based single sign-on (SSO) for apps in Azure Active Direc
description: Understand SAML-based single sign-on (SSO) for apps in Azure Active Directory -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-user-consent-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-user-consent-groups.md
@@ -3,7 +3,7 @@ Title: Configure group owner consent to apps accessing group data using Azure AD
description: Learn how to manage whether group and team owners can consent to applications that will have access to the group or team's data. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-user-consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-user-consent.md
@@ -3,7 +3,7 @@ Title: Configure how end-users consent to applications using Azure AD
description: Learn how to manage how and when users can consent to applications that will have access to your organization's data. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/debug-saml-sso-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/debug-saml-sso-issues.md
@@ -4,7 +4,7 @@ description: Debug SAML-based single sign-on to applications in Azure Active Dir
-+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/delete-application-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/delete-application-portal.md
@@ -3,7 +3,7 @@ Title: 'Quickstart: Delete an application from your Azure Active Directory (Azur
description: This quickstart uses the Azure portal to delete an application from your Azure Active Directory (Azure AD) tenant. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/disable-user-sign-in-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
@@ -4,7 +4,7 @@ description: How to disable an enterprise application so that no users may sign
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/end-user-experiences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/end-user-experiences.md
@@ -3,7 +3,7 @@ Title: End-user experiences for applications - Azure Active Directory
description: Azure Active Directory (Azure AD) provides several customizable ways to deploy applications to end users in your organization. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/get-it-now-azure-marketplace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/get-it-now-azure-marketplace.md
@@ -3,7 +3,7 @@ Title: 'Add an app from the Azure Marketplace'
description: This article acts as a landing page from the Get It Now button on the Azure Marketplace. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/grant-admin-consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/grant-admin-consent.md
@@ -3,7 +3,7 @@ Title: Grant tenant-wide admin consent to an application - Azure AD
description: Learn how to grant tenant-wide consent to an application so that end-users are not prompted for consent when signing in to an application. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/hide-application-from-user-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/hide-application-from-user-portal.md
@@ -3,7 +3,7 @@ Title: Hide an Enterprise application from user's experience in Azure AD
description: How to hide an Enterprise application from user's experience in Azure Active Directory access panels or Microsoft 365 launchers. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/howto-saml-token-encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/howto-saml-token-encryption.md
@@ -4,7 +4,7 @@ description: Learn how to configure Azure Active Directory SAML token encryption
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/manage-app-consent-policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-app-consent-policies.md
@@ -3,7 +3,7 @@ Title: Manage app consent policies in Azure AD
description: Learn how to manage built-in and custom app consent policies to control when consent can be granted. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/manage-application-permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-application-permissions.md
@@ -3,7 +3,7 @@ Title: Manage user and admin permissions - Azure Active Directory | Microsoft Do
description: Learn how to review and manage permissions for the application on Azure AD. For example, revoke all permissions granted to an application. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/manage-certificates-for-federated-single-sign-on https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-certificates-for-federated-single-sign-on.md
@@ -4,7 +4,7 @@ description: Learn how to customize the expiration date for your federation cert
documentationcenter: '' -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/manage-consent-requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-consent-requests.md
@@ -3,7 +3,7 @@ Title: Managing consent to applications and evaluating consent requests in Azure
description: Learn how to manage consent requests when user consent is disabled or restricted, and how to evaluate a request for tenant-wide admin consent to an application in Azure Active Directory. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/manage-self-service-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-self-service-access.md
@@ -4,7 +4,7 @@ description: Enable self-service application access to allow users to find their
documentationcenter: '' -+ ms.assetid:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/methods-for-removing-user-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/methods-for-removing-user-access.md
@@ -3,7 +3,7 @@ Title: How to remove a user's access to an application in Azure Active Directory
description: Understand how to remove a user's access to an application in Azure Active Directory -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/migrate-adfs-application-activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-adfs-application-activity.md
@@ -3,7 +3,7 @@ Title: Use the activity report to move AD FS apps to Azure Active Directory | Mi
description: The Active Directory Federation Services (AD FS) application activity report lets you quickly migrate applications from AD FS to Azure Active Directory (Azure AD). This migration tool for AD FS identifies compatibility with Azure AD and gives migration guidance. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/migrate-adfs-apps-to-azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md
@@ -3,7 +3,7 @@ Title: 'Moving application authentication from AD FS to Azure Active Directory'
description: This article is intended to help organizations understand how to move applications to Azure AD, with a focus on federated SaaS applications. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/migration-resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migration-resources.md
@@ -3,7 +3,7 @@ Title: Resources for migrating apps to Azure Active Directory | Microsoft Docs
description: Resources to help you migrate application access and authentication to Azure Active Directory (Azure AD). -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/my-apps-deployment-plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/my-apps-deployment-plan.md
@@ -3,7 +3,7 @@ Title: Plan Azure Active Directory My Apps configuration
description: Planning guide to effectively use My Apps in your organization. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/non-gallery-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/non-gallery-apps.md
@@ -3,7 +3,7 @@ Title: Using Azure AD for applications not listed in the app gallery
description: Understand how to integrate apps not listed in the Azure AD gallery. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/one-click-sso-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/one-click-sso-tutorial.md
@@ -4,7 +4,7 @@ description: Steps for one-click configuration of SSO for your application from
documentationCenter: na -+ ms.assetid: e0416991-4b5d-4b18-89bb-91b6070ed3ba
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/plan-an-application-integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/plan-an-application-integration.md
@@ -4,7 +4,7 @@ Title: Get started integrating Azure AD with apps
description: This article is a getting started guide for integrating Azure Active Directory (AD) with on-premises applications, and cloud applications. -+ na
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/plan-sso-deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/plan-sso-deployment.md
@@ -3,7 +3,7 @@ Title: Plan an Azure Active Directory single sign-on deployment
description: Guide to help you plan, deploy, and manage SSO in your organization. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-assign-group-to-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-assign-group-to-app.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - Assign group to an Application Proxy app
description: PowerShell example that assigns a group to an Azure Active Directory (Azure AD) Application Proxy application. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-assign-user-to-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-assign-user-to-app.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - Assign user to an Application Proxy app
description: PowerShell example that assigns a user to an Azure Active Directory (Azure AD) Application Proxy application. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-display-users-group-of-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-display-users-group-of-app.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - List users & groups for Application Proxy app
description: PowerShell example that lists all the users and groups assigned to a specific Azure Active Directory (Azure AD) Application Proxy application. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-basic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-basic.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - List basic info for Application Proxy apps
description: PowerShell example that lists Azure Active Directory (Azure AD) Application Proxy applications along with the application ID (AppId), name (DisplayName), and object ID (ObjId). -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-by-connector-group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-by-connector-group.md
@@ -3,7 +3,7 @@ Title: List Azure AD Application Proxy connector groups for apps
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy Connector groups with the assigned applications. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-extended https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-extended.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - List extended info for Application Proxy apps
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications along with the application ID (AppId), name (DisplayName), external URL (ExternalUrl), internal URL (InternalUrl), and authentication type (ExternalAuthenticationType). -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-with-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-with-policy.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - List all Application Proxy apps with a policy
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications in your directory that have a lifetime token policy. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-get-all-connectors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-connectors.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - List all Application Proxy connector groups
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy connector groups and connectors in your directory. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-get-all-custom-domain-no-cert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-custom-domain-no-cert.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - Application Proxy apps with no certificate
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using custom domains but do not have a valid TLS/SSL certificate uploaded. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-get-all-custom-domains-and-certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-custom-domains-and-certs.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - Application Proxy apps using custom domains
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using custom domains and certificate information. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-get-all-default-domain-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-default-domain-apps.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - Application Proxy apps using default domain
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using default domains (.msappproxy.net). -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-get-all-wildcard-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-wildcard-apps.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - List Application Proxy apps using wildcards
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using wildcards. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-get-custom-domain-identical-cert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-custom-domain-identical-cert.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - Application Proxy apps with identical certs
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are published with the identical certificate. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-get-custom-domain-replace-cert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-custom-domain-replace-cert.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - Replace certificate in Application Proxy apps
description: PowerShell example that bulk replaces a certificate across Azure Active Directory (Azure AD) Application Proxy applications. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/scripts/powershell-move-all-apps-to-connector-group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-move-all-apps-to-connector-group.md
@@ -3,7 +3,7 @@ Title: PowerShell sample - Move Application Proxy apps to another group
description: Azure Active Directory (Azure AD) Application Proxy PowerShell example used to move all applications currently assigned to a connector group to a different connector group. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/sso-options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/sso-options.md
@@ -3,7 +3,7 @@ Title: Single sign-on options in Azure AD
description: Learn about the options available for single sign-on (SSO) in Azure Active Directory. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/tenant-restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/tenant-restrictions.md
@@ -3,7 +3,7 @@ Title: Use tenant restrictions to manage access to SaaS apps - Azure AD
description: How to use tenant restrictions to manage which users can access apps based on their Azure AD tenant. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/troubleshoot-adding-apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-adding-apps.md
@@ -3,7 +3,7 @@ Title: Troubleshoot common problem adding or removing an application to Azure Ac
description: Troubleshoot the common problems people face when adding or removing an app to Azure Active Directory. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/troubleshoot-password-based-sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
@@ -2,7 +2,7 @@
Title: Troubleshoot password-based single sign-on in Azure Active Directory description: Troubleshoot issues with an Azure AD app that's configured for password-based single sign-on. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/troubleshoot-saml-based-sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-saml-based-sso.md
@@ -3,7 +3,7 @@ Title: Troubleshoot SAML-based single sign-on in Azure Active Directory
description: Troubleshoot issues with an Azure AD app that's configured for SAML-based single sign-on. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/view-applications-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/view-applications-portal.md
@@ -3,7 +3,7 @@ Title: 'Quickstart: View the list of applications that are using your Azure Acti
description: In this Quickstart, use the Azure portal to view the list of applications that are registered to use your Azure Active Directory (Azure AD) tenant for identity management. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/ways-users-get-assigned-to-applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md
@@ -3,7 +3,7 @@ Title: Understand how users are assigned to apps in Azure Active Directory
description: Understand how users get assigned to an app that is using Azure Active Directory for identity management. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/what-is-access-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/what-is-access-management.md
@@ -3,7 +3,7 @@ Title: Managing access to apps using Azure AD
description: Describes how Azure Active Directory enables organizations to specify the apps to which each user has access. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/what-is-application-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/what-is-application-management.md
@@ -3,7 +3,7 @@ Title: What is application management in Azure Active Directory
description: An overview of using Azure Active Directory (AD) as an Identity and Access Management (IAM) system for your cloud and on-premises applications. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/what-is-application-proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/what-is-application-proxy.md
@@ -3,7 +3,7 @@ Title: Publish on-premises apps with Azure AD Application Proxy
description: Understand why to use Application Proxy to publish on-premises web applications externally to remote users. Learn about Application Proxy architecture, connectors, authentication methods, and security benefits. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/what-is-single-sign-on https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/what-is-single-sign-on.md
@@ -3,7 +3,7 @@ Title: What is Azure single sign-on (SSO)?
description: Learn how single sign-on (SSO) works with Azure Active Directory. Use SSO so users don't need to remember passwords for every application. Also use SSO to simplify the administration of account management. -+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/whats-new-docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
@@ -1,20 +1,47 @@
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 12/15/2020 Last updated : 02/01/2021 -+ # Azure Active Directory application management: What's new Welcome to what's new in Azure Active Directory application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## January 2021
+
+### New articles
+- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)
+
+### Updated articles
+- [Problem installing the Application Proxy Agent Connector](application-proxy-connector-installation-problem.md)
+- [Troubleshoot password-based single sign-on in Azure AD](troubleshoot-password-based-sso.md)
+- [Application management best practices](application-management-fundamentals.md)
+- [Integrating Azure Active Directory with applications getting started guide](plan-an-application-integration.md)
+- [What is application management?](what-is-application-management.md)
+- [Active Directory (Azure AD) Application Proxy frequently asked questions](application-proxy-faq.md)
+- [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](application-proxy-add-on-premises-application.md)
+- [Work with existing on-premises proxy servers](application-proxy-configure-connectors-with-proxy-servers.md)
+- [Develop line-of-business apps for Azure Active Directory](developer-guidance-for-integrating-applications.md)
+- [Understand Azure AD Application Proxy connectors](application-proxy-connectors.md)
+- [Understand linked sign-on](configure-linked-sign-on.md)
+- [Understand password-based single sign-on](configure-password-single-sign-on-non-gallery-applications.md)
+- [Understand SAML-based single sign-on](configure-saml-single-sign-on.md)
+- [Troubleshoot common problem adding or removing an application to Azure Active Directory](troubleshoot-adding-apps.md)
+- [Viewing apps using your Azure AD tenant for identity management](application-types.md)
+- [Understand how users are assigned to apps in Azure Active Directory](ways-users-get-assigned-to-applications.md)
+- [Quickstart: Delete an application from your Azure Active Directory (Azure AD) tenant](delete-application-portal.md)
+- [Publish Remote Desktop with Azure AD Application Proxy](application-proxy-integrate-with-remote-desktop-services.md)
+- [Take action on overprivileged or suspicious applications in Azure Active Directory](manage-application-permissions.md)
++ ## December 2020 ### Updated articles
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/adaptivesuite-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/adaptivesuite-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 07/19/2019 Last updated : 01/19/2021
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Adaptive Insights with Azure Act
* Enable your users to be automatically signed-in to Adaptive Insights with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -36,38 +34,37 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Adaptive Insights supports **IDP** initiated SSO
-## Adding Adaptive Insights from the gallery
+## Add Adaptive Insights from the gallery
To configure the integration of Adaptive Insights into Azure AD, you need to add Adaptive Insights from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add a new application, select **New application**. 1. In the **Add from the gallery** section, type **Adaptive Insights** in the search box. 1. Select **Adaptive Insights** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
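If you'd rather confirm from a shell that the gallery app landed in your tenant, here's a minimal sketch using the AzureAD PowerShell module. It isn't part of the tutorial above; it only lists the service principal that should exist after the portal steps.

```powershell
# Sign in to the tenant (interactive prompt).
Connect-AzureAD

# After the gallery app is added, a matching service principal should
# exist in the tenant. List it to confirm the app was added.
Get-AzureADServicePrincipal -SearchString "Adaptive Insights" |
    Select-Object DisplayName, ObjectId, AppId
```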
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for Adaptive Insights
Configure and test Azure AD SSO with Adaptive Insights using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Adaptive Insights.
-To configure and test Azure AD SSO with Adaptive Insights, complete the following building blocks:
+To configure and test Azure AD SSO with Adaptive Insights, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
-2. **[Configure Adaptive Insights SSO](#configure-adaptive-insights-sso)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-5. **[Create Adaptive Insights test user](#create-adaptive-insights-test-user)** - to have a counterpart of B.Simon in Adaptive Insights that is linked to the Azure AD representation of user.
-6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Adaptive Insights SSO](#configure-adaptive-insights-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Adaptive Insights test user](#create-adaptive-insights-test-user)** - to have a counterpart of B.Simon in Adaptive Insights that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
### Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Adaptive Insights** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Adaptive Insights** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -90,21 +87,45 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
![Copy configuration URLs](common/copy-configuration-urls.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the user name in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
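If you prefer to script this step, here's a minimal sketch with the AzureAD PowerShell module, assuming the connected session from the earlier sketch. The password value and UPN domain are placeholders, not values from this tutorial.

```powershell
# Build the password profile for the test account.
$passwordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$passwordProfile.Password = "<choose-a-strong-password>"   # placeholder

# Create the B.Simon test user (replace the UPN domain with your own).
New-AzureADUser -DisplayName "B.Simon" `
    -UserPrincipalName "B.Simon@contoso.com" `
    -MailNickName "B.Simon" `
    -AccountEnabled $true `
    -PasswordProfile $passwordProfile
```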
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Adaptive Insights.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Adaptive Insights**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
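The same assignment can be scripted. A minimal sketch, again assuming the AzureAD PowerShell session and the placeholder UPN used in the earlier sketches:

```powershell
# Look up the test user and the Adaptive Insights service principal.
$user = Get-AzureADUser -ObjectId "B.Simon@contoso.com"
$sp   = Get-AzureADServicePrincipal -SearchString "Adaptive Insights"

# Assign the user to the app. An empty role GUID maps to the
# "Default Access" role when the app doesn't define app roles.
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId `
    -PrincipalId $user.ObjectId `
    -ResourceId $sp.ObjectId `
    -Id ([Guid]::Empty)
```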
+ ### Configure Adaptive Insights SSO 1. In a different web browser window, sign in to your Adaptive Insights company site as an administrator. 2. Go to **Administration**.
- ![Screenshot that highlights Administration in the navigation panel.](./media/adaptivesuite-tutorial/ic805644.png "Admin")
+ ![Screenshot that highlights Administration in the navigation panel.](./media/adaptivesuite-tutorial/administration.png "Admin")
3. In the **Users and Roles** section, click **SAML SSO Settings**.
- ![Manage SAML SSO Settings](./media/adaptivesuite-tutorial/ic805645.png "Manage SAML SSO Settings")
+ ![Manage SAML SSO Settings](./media/adaptivesuite-tutorial/settings.png "Manage SAML SSO Settings")
4. On the **SAML SSO Settings** page, perform the following steps:
- ![SAML SSO Settings](./media/adaptivesuite-tutorial/ic805646.png "SAML SSO Settings")
+ ![SAML SSO Settings](./media/adaptivesuite-tutorial/saml.png "SAML SSO Settings")
a. In the **Identity provider name** textbox, type a name for your configuration.
@@ -130,36 +151,6 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
h. Click **Save**.
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Adaptive Insights.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Adaptive Insights**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
### Create Adaptive Insights test user

To enable Azure AD users to sign in to Adaptive Insights, they must be provisioned into Adaptive Insights. In the case of Adaptive Insights, provisioning is a manual task.
@@ -170,15 +161,15 @@ To enable Azure AD users to sign in to Adaptive Insights, they must be provision
2. Go to **Administration**.
- ![Admin](./media/adaptivesuite-tutorial/IC805644.png "Admin")
+ ![Admin](./media/adaptivesuite-tutorial/administration.png "Admin")
3. In the **Users and Roles** section, click **Users**.
- ![Add User](./media/adaptivesuite-tutorial/IC805648.png "Add User")
+ ![Add User](./media/adaptivesuite-tutorial/users.png "Add User")
4. In the **New User** section, perform the following steps:
- ![Submit](./media/adaptivesuite-tutorial/IC805649.png "Submit")
+ ![Submit](./media/adaptivesuite-tutorial/new.png "Submit")
a. Type the **Name**, **Username**, **Email**, **Password** of a valid Azure Active Directory user you want to provision into the related textboxes.
@@ -191,14 +182,12 @@ To enable Azure AD users to sign in to Adaptive Insights, they must be provision
### Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Adaptive Insights tile in the Access Panel, you should be automatically signed in to the Adaptive Insights for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Adaptive Insights instance for which you set up SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Adaptive Insights tile in My Apps, you should be automatically signed in to the Adaptive Insights instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)\ No newline at end of file
+Once you configure Adaptive Insights, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/adobe-echosign-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/adobe-echosign-tutorial.md
@@ -9,27 +9,23 @@
Previously updated : 12/19/2018 Last updated : 01/19/2021

# Tutorial: Azure Active Directory integration with Adobe Sign
-In this tutorial, you learn how to integrate Adobe Sign with Azure Active Directory (Azure AD).
-Integrating Adobe Sign with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Adobe Sign with Azure Active Directory (Azure AD). When you integrate Adobe Sign with Azure AD, you can:
-* You can control in Azure AD who has access to Adobe Sign.
-* You can enable your users to be automatically signed-in to Adobe Sign (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Adobe Sign.
+* Enable your users to be automatically signed-in to Adobe Sign with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Adobe Sign, you need the following items:
-
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Adobe Sign single sign-on enabled subscription
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Adobe Sign single sign-on (SSO)-enabled subscription.
## Scenario description
@@ -37,64 +33,47 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* Adobe Sign supports **SP** initiated SSO
-## Adding Adobe Sign from the gallery
+## Add Adobe Sign from the gallery
To configure the integration of Adobe Sign into Azure AD, you need to add Adobe Sign from the gallery to your list of managed SaaS apps.
-**To add Adobe Sign from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Adobe Sign** in the search box.
+1. Select **Adobe Sign** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Adobe Sign**, select **Adobe Sign** from result panel then click **Add** button to add the application.
-
- ![Adobe Sign in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for Adobe Sign
In this section, you configure and test Azure AD single sign-on with Adobe Sign based on a test user called **Britta Simon**. For single sign-on to work, a link relationship between an Azure AD user and the related user in Adobe Sign needs to be established.
-To configure and test Azure AD single sign-on with Adobe Sign, you need to complete the following building blocks:
+To configure and test Azure AD single sign-on with Adobe Sign, you need to perform the following steps:
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Adobe Sign Single Sign-On](#configure-adobe-sign-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Adobe Sign test user](#create-adobe-sign-test-user)** - to have a counterpart of Britta Simon in Adobe Sign that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
+1. **[Configure Adobe Sign SSO](#configure-adobe-sign-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create Adobe Sign test user](#create-adobe-sign-test-user)** - to have a counterpart of Britta Simon in Adobe Sign that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD single sign-on
+### Configure Azure AD SSO
In this section, you enable Azure AD single sign-on in the Azure portal. To configure Azure AD single sign-on with Adobe Sign, perform the following steps:
-1. In the [Azure portal](https://portal.azure.com/), on the **Adobe Sign** application integration page, select **Single sign-on**.
-
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **Adobe Sign** application integration page, select **Single sign-on**.
- ![Single sign-on select mode](common/select-saml-option.png)
+1. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon to open the **Basic SAML Configuration** dialog.
![Edit Basic SAML Configuration](common/edit-urls.png)

4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Adobe Sign Domain and URLs single sign-on information](common/sp-identifier.png)
- a. In the **Sign on URL** text box, type a URL using the following pattern: `https://<companyname>.echosign.com/`
@@ -104,7 +83,7 @@ To configure Azure AD single sign-on with Adobe Sign, perform the following step
> [!NOTE]
> These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Adobe Sign Client support team](https://helpx.adobe.com/in/contact/support.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
+5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
![The Certificate download link](common/certificatebase64.png)
@@ -112,13 +91,31 @@ To configure Azure AD single sign-on with Adobe Sign, perform the following step
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
- b. Azure Ad Identifier
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Adobe Sign.
- c. Logout URL
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Adobe Sign**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure Adobe Sign Single Sign-On
+### Configure Adobe Sign SSO
1. Before configuration, contact the [Adobe Sign Client support team](https://helpx.adobe.com/in/contact/support.html) to add your domain in the Adobe Sign allow list. Here's how to add the domain:
@@ -143,13 +140,13 @@ To configure Azure AD single sign-on with Adobe Sign, perform the following step
1. In the SAML menu, select **Account Settings** > **SAML Settings**.
- ![Screenshot of Adobe Sign SAML Settings page](./media/adobe-echosign-tutorial/ic789520.png "Account")
+ ![Screenshot of Adobe Sign SAML Settings page](./media/adobe-echosign-tutorial/settings.png "Account")
1. In the **SAML Settings** section, perform the following steps:
- ![Screenshot that highlights the SAML settings, including SAML Mandatory.](./media/adobe-echosign-tutorial/ic789521.png "SAML Settings")
+ ![Screenshot that highlights the SAML settings, including SAML Mandatory.](./media/adobe-echosign-tutorial/saml1.png "SAML Settings")
- ![Screenshot of SAML Settings](./media/adobe-echosign-tutorial/ic789522.png "SAML Settings")
+ ![Screenshot of SAML Settings](./media/adobe-echosign-tutorial/saml.png "SAML Settings")
a. Under **SAML Mode**, select **SAML Mandatory**.
@@ -167,57 +164,6 @@ To configure Azure AD single sign-on with Adobe Sign, perform the following step
h. Select **Save Changes**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Adobe Sign.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Adobe Sign**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, type and select **Adobe Sign**.
-
- ![The Adobe Sign link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
-
### Create Adobe Sign test user

To enable Azure AD users to sign in to Adobe Sign, they must be provisioned into Adobe Sign. This is a manual task.
@@ -229,11 +175,11 @@ To enable Azure AD users to sign in to Adobe Sign, they must be provisioned into
2. In the menu on the top, select **Account**. Then, in the left pane, select **Users & Groups** > **Create a new user**.
- ![Screenshot of Adobe Sign company site, with Account, Users &Groups, and Create a new user highlighted](./media/adobe-echosign-tutorial/ic789524.png "Account")
+ ![Screenshot of Adobe Sign company site, with Account, Users &Groups, and Create a new user highlighted](./media/adobe-echosign-tutorial/account.png "Account")
3. In the **Create New User** section, perform the following steps:
- ![Screenshot of Create New User section](./media/adobe-echosign-tutorial/ic789525.png "Create User")
+ ![Screenshot of Create New User section](./media/adobe-echosign-tutorial/user.png "Create User")
a. Type the **Email Address**, **First Name**, and **Last Name** of a valid Azure AD account you want to provision into the related text boxes.
@@ -242,16 +188,16 @@ To enable Azure AD users to sign in to Adobe Sign, they must be provisioned into
>[!NOTE]
>The Azure Active Directory account holder receives an email that includes a link to confirm the account, before it becomes active.
-### Test single sign-on
+### Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Adobe Sign tile in the Access Panel, you should be automatically signed in to the Adobe Sign for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in Azure portal. This will redirect to Adobe Sign Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to Adobe Sign Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Adobe Sign tile in the My Apps, you should be automatically signed in to the Adobe Sign for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)\ No newline at end of file
+Once you configure Adobe Sign, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/aha-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/aha-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 08/09/2019 Last updated : 01/20/2021
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Aha! with Azure Active Directory
* Enable your users to be automatically signed-in to Aha! with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -40,22 +38,22 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Aha! supports **SP** initiated SSO
* Aha! supports **Just In Time** user provisioning
-## Adding Aha! from the gallery
+## Add Aha! from the gallery
To configure the integration of Aha! into Azure AD, you need to add Aha! from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **Aha!** in the search box.
1. Select **Aha!** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Aha!
+## Configure and test Azure AD SSO for Aha!
Configure and test Azure AD SSO with Aha! using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Aha!.
-To configure and test Azure AD SSO with Aha!, complete the following building blocks:
+To configure and test Azure AD SSO with Aha!, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -68,9 +66,9 @@ To configure and test Azure AD SSO with Aha!, complete the following building bl
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Aha!** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Aha!** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -112,15 +110,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Aha!**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Aha! SSO
@@ -137,23 +129,23 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
4. In the menu on the top, click **Settings**.
- ![Settings](./media/aha-tutorial/IC798950.png "Settings")
+ ![Settings](./media/aha-tutorial/setting.png "Settings")
5. Click **Account**.
- ![Profile](./media/aha-tutorial/IC798951.png "Profile")
+ ![Profile](./media/aha-tutorial/account.png "Profile")
6. Click **Security and single sign-on**.
- ![Screenshot that highlights the Security and single sign-on menu option.](./media/aha-tutorial/IC798952.png "Security and single sign-on")
+ ![Screenshot that highlights the Security and single sign-on menu option.](./media/aha-tutorial/security.png "Security and single sign-on")
7. In **Single Sign-On** section, as **Identity Provider**, select **SAML2.0**.
- ![Security and single sign-on](./media/aha-tutorial/IC798953.png "Security and single sign-on")
+ ![Security and single sign-on](./media/aha-tutorial/saml.png "Security and single sign-on")
8. On the **Single Sign-On** configuration page, perform the following steps:
- ![Single Sign-On](./media/aha-tutorial/IC798954.png "Single Sign-On")
+ ![Single Sign-On](./media/aha-tutorial/sso.png "Single Sign-On")
a. In the **Name** textbox, type a name for your configuration.
@@ -169,14 +161,14 @@ In this section, a user called B.Simon is created in Aha!. Aha! supports just-in
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Aha! tile in the Access Panel, you should be automatically signed in to the Aha! for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in Azure portal. This will redirect to Aha! Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to Aha! Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Aha! tile in the My Apps, this will redirect to Aha! Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)\ No newline at end of file
+Once you configure Aha!, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/airwatch-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/airwatch-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 07/11/2019 Last updated : 01/20/2021
@@ -21,50 +21,50 @@ In this tutorial, you'll learn how to integrate AirWatch with Azure Active Direc
* Enable your users to be automatically signed-in to AirWatch with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get one-month free trial [here](https://azure.microsoft.com/pricing/free-trial/).
-* AirWatch single sign-on (SSO) enabled subscription.
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* AirWatch single sign-on (SSO)-enabled subscription.
## Scenario description
-In this tutorial, you configure and test Azure AD SSO in a test environment. AirWatch supports **SP** initiated SSO.
+In this tutorial, you configure and test Azure AD SSO in a test environment.
-## Adding AirWatch from the gallery
+* AirWatch supports **SP** initiated SSO.
+
+## Add AirWatch from the gallery
To configure the integration of AirWatch into Azure AD, you need to add AirWatch from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **AirWatch** in the search box.
1. Select **AirWatch** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for AirWatch
Configure and test Azure AD SSO with AirWatch using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AirWatch.
-To configure and test Azure AD SSO with AirWatch, complete the following building blocks:
+To configure and test Azure AD SSO with AirWatch, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
-2. **[Configure AirWatch SSO](#configure-airwatch-sso)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Create AirWatch test user](#create-airwatch-test-user)** - to have a counterpart of Britta Simon in AirWatch that is linked to the Azure AD representation of user.
-5. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure AirWatch SSO](#configure-airwatch-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create AirWatch test user](#create-airwatch-test-user)** - to have a counterpart of B.Simon in AirWatch that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
### Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **AirWatch** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **AirWatch** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -112,25 +112,49 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
![Copy configuration URLs](common/copy-configuration-urls.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AirWatch.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **AirWatch**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
### Configure AirWatch SSO

1. In a different web browser window, sign in to your AirWatch company site as an administrator.

1. On the settings page, select **Settings > Enterprise Integration > Directory Services**.
- ![Settings](./media/airwatch-tutorial/ic791921.png "Settings")
+ ![Settings](./media/airwatch-tutorial/services.png "Settings")
1. Click the **User** tab, in the **Base DN** textbox, type your domain name, and then click **Save**.
- ![Screenshot that highlights the Base DN text box.](./media/airwatch-tutorial/ic791922.png "User")
+ ![Screenshot that highlights the Base DN text box.](./media/airwatch-tutorial/user.png "User")
1. Click the **Server** tab.
- ![Server](./media/airwatch-tutorial/ic791923.png "Server")
+ ![Server](./media/airwatch-tutorial/directory.png "Server")
1. Perform the following steps on the **LDAP** section:
- ![Screenshot that shows the changes you need to make to the LDAP section.](./media/airwatch-tutorial/ic791924.png "LDAP")
+ ![Screenshot that shows the changes you need to make to the LDAP section.](./media/airwatch-tutorial/ldap.png "LDAP")
a. As **Directory Type**, select **None**.
@@ -138,11 +162,11 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **SAML 2.0** section, to upload the downloaded certificate, click **Upload**.
- ![Upload](./media/airwatch-tutorial/ic791932.png "Upload")
+ ![Upload](./media/airwatch-tutorial/uploads.png "Upload")
1. In the **Request** section, perform the following steps:
- ![Request](./media/airwatch-tutorial/ic791925.png "Request")
+ ![Request section](./media/airwatch-tutorial/request.png "Request")
a. As **Request Binding Type**, select **POST**.
@@ -156,11 +180,11 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click the **User** tab again.
- ![User](./media/airwatch-tutorial/ic791926.png "User")
+ ![User](./media/airwatch-tutorial/users.png "User")
1. In the **Attribute** section, perform the following steps:
- ![Attribute](./media/airwatch-tutorial/ic791927.png "Attribute")
+ ![Attribute](./media/airwatch-tutorial/attributes.png "Attribute")
a. In the **Object Identifier** textbox, type `http://schemas.microsoft.com/identity/claims/objectidentifier`.
@@ -176,36 +200,6 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
g. Click **Save**.
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AirWatch.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **AirWatch**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
### Create AirWatch test user

To enable Azure AD users to sign in to AirWatch, they must be provisioned into AirWatch. In the case of AirWatch, provisioning is a manual task.
@@ -216,15 +210,15 @@ To enable Azure AD users to sign in to AirWatch, they must be provisioned in to
2. In the navigation pane on the left side, click **Accounts**, and then click **Users**.
- ![Users](./media/airwatch-tutorial/ic791929.png "Users")
+ ![Users](./media/airwatch-tutorial/accounts.png "Users")
3. In the **Users** menu, click **List View**, and then click **Add > Add User**.
- ![Screenshot that highlights the Add and Add User buttons.](./media/airwatch-tutorial/ic791930.png "Add User")
+ ![Screenshot that highlights the Add and Add User buttons.](./media/airwatch-tutorial/add.png "Add User")
4. On the **Add / Edit User** dialog, perform the following steps:
- ![Add User](./media/airwatch-tutorial/ic791931.png "Add User")
+ ![Add User](./media/airwatch-tutorial/save.png "Add User")
a. Type the **Username**, **Password**, **Confirm Password**, **First Name**, **Last Name**, **Email Address** of a valid Azure Active Directory account you want to provision into the related textboxes.
@@ -235,12 +229,14 @@ To enable Azure AD users to sign in to AirWatch, they must be provisioned in to
### Test SSO
-When you select the AirWatch tile in the Access Panel, you should be automatically signed in to the AirWatch for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to AirWatch Sign-on URL where you can initiate the login flow.
-## Additional resources
+* Go to AirWatch Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the AirWatch tile in the My Apps, this will redirect to AirWatch Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)\ No newline at end of file
+Once you configure AirWatch, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/maptician-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/maptician-tutorial.md
@@ -0,0 +1,153 @@
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Maptician | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Maptician.
+ Last updated : 01/28/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Maptician
+
+In this tutorial, you'll learn how to integrate Maptician with Azure Active Directory (Azure AD). When you integrate Maptician with Azure AD, you can:
+
+* Control in Azure AD who has access to Maptician.
+* Enable your users to be automatically signed-in to Maptician with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Maptician single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Maptician supports **SP and IDP** initiated SSO
+
+## Adding Maptician from the gallery
+
+To configure the integration of Maptician into Azure AD, you need to add Maptician from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Maptician** in the search box.
+1. Select **Maptician** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Maptician
+
+Configure and test Azure AD SSO with Maptician using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Maptician.
+
+To configure and test Azure AD SSO with Maptician, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Maptician SSO](#configure-maptician-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Maptician test user](#create-maptician-test-user)** - to have a counterpart of B.Simon in Maptician that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Maptician** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.maptician.com/saml/acs_msft`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.maptician.com/saml/acs_msft`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.maptician.com/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Maptician Client support team](mailto:support@maptician.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Maptician application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Maptician application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | -- | |
+ | EmployeeID | user.employeeid |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Maptician.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Maptician**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Maptician SSO
+
+To configure single sign-on on the **Maptician** side, you need to send the **App Federation Metadata Url** to the [Maptician support team](mailto:support@maptician.com). They use it to configure the SAML SSO connection properly on both sides.
+
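Before sending the URL, it can help to confirm that it returns the SAML federation metadata document. The check below is only an illustration; `<TENANT_ID>` and `<APP_ID>` are placeholders for the values embedded in the URL you copied from the **SAML Signing Certificate** section.

```azurecli-interactive
# Sanity-check that the App Federation Metadata Url resolves to the federation metadata XML.
curl -s "https://login.microsoftonline.com/<TENANT_ID>/federationmetadata/2007-06/federationmetadata.xml?appid=<APP_ID>" | head -c 200
```
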
+### Create Maptician test user
+
+In this section, you create a user called Britta Simon in Maptician. Work with [Maptician support team](mailto:support@maptician.com) to add the users in the Maptician platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Maptician Sign on URL where you can initiate the login flow.
+
+* Go to Maptician Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Maptician instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Maptician tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Maptician for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure Maptician, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/tutorial-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tutorial-list.md
@@ -116,6 +116,7 @@ To find more tutorials, use the table of contents on the left.
| ![logo-Teamphoria](./medi)|
| ![logo-Terraform Cloud](./medi)|
| ![logo-TextMagic](./medi)|
+| ![logo-Timeclock 365 SAML](./medi)|
| ![logo-Upshotly](./medi)|
| ![logo-Velpic SAML](./medi)|
| ![logo-Wandera](./medi)|
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/user-help-auth-app-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/user-help-auth-app-faq.md
@@ -234,7 +234,7 @@ The Microsoft Authenticator app replaced the Azure Authenticator app, and it's t
- On iOS, under **Settings**, select **How to turn on Autofill** in the Autofill settings section to learn how to set Authenticator as the default autofill provider.
- On Android, under **Settings**, select **Set as Autofill provider** in the Autofill settings section.
-**Q**: What if **Autofill** switch is not available for me in Settings?
+**Q**: What if **Autofill** is not available for me in Settings?
**A**: If Autofill is not available for you in Authenticator, it might be because autofill has not yet been allowed for your organization or account type. You can use this feature on a device where your work or school account isn't added. To learn more about how to allow Autofill for your organization, see [Autofill for IT admins](#autofill-for-it-admins).
aks https://docs.microsoft.com/en-us/azure/aks/howto-deploy-java-liberty-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/howto-deploy-java-liberty-app.md
@@ -0,0 +1,234 @@
+
+ Title: Deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
+description: Deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster.
+ Last updated : 02/01/2021
+keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes
++
+# Deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
+
+This guide demonstrates how to run your Java, Java EE, [Jakarta EE](https://jakarta.ee/), or [MicroProfile](https://microprofile.io/) application on the Open Liberty or WebSphere Liberty runtime and then deploy the containerized application to an AKS cluster using the Open Liberty Operator. The Open Liberty Operator simplifies the deployment and management of applications running on Open Liberty Kubernetes clusters. You can also perform more advanced operations such as gathering traces and dumps using the operator. This article will walk you through preparing a Liberty application, building the application Docker image and running the containerized application on an AKS cluster. For more details on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more details on IBM WebSphere Liberty, see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
+
+[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+
+[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
+
+* This article requires the latest version of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+* If running the commands in this guide locally (instead of Azure Cloud Shell):
+ * Prepare a local machine with a Unix-like operating system installed (for example, Ubuntu, macOS).
+ * Install a Java SE implementation (for example, [AdoptOpenJDK OpenJDK 8 LTS/OpenJ9](https://adoptopenjdk.net/?variant=openjdk8&jvmVariant=openj9)).
+ * Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
+ * Install [Docker](https://docs.docker.com/get-docker/) for your OS.
+
+## Create a resource group
+
+An Azure resource group is a logical group in which Azure resources are deployed and managed. Create a resource group named *java-liberty-project* using the [az group create](/cli/azure/group?view=azure-cli-latest&preserve-view=true#az_group_create) command in the *eastus* location. It will be used for creating the Azure Container Registry (ACR) instance and the AKS cluster later.
+
+```azurecli-interactive
+az group create --name java-liberty-project --location eastus
+```
+
+## Create an ACR instance
+
+Use the [az acr create](/cli/azure/acr?view=azure-cli-latest&preserve-view=true#az_acr_create) command to create the ACR instance. The following example creates an ACR instance named *youruniqueacrname*. Make sure *youruniqueacrname* is unique within Azure.
+
+```azurecli-interactive
+az acr create --resource-group java-liberty-project --name youruniqueacrname --sku Basic --admin-enabled
+```
+
+After a short time, you should see a JSON output that contains:
+
+```output
+ "provisioningState": "Succeeded",
+ "publicNetworkAccess": "Enabled",
+ "resourceGroup": "java-liberty-project",
+```
+
+### Connect to the ACR instance
+
+To push an image to the ACR instance, you need to log into it first. Run the following commands to verify the connection:
+
+```azurecli-interactive
+REGISTRY_NAME=youruniqueacrname
+LOGIN_SERVER=$(az acr show -n $REGISTRY_NAME --query 'loginServer' -o tsv)
+USER_NAME=$(az acr credential show -n $REGISTRY_NAME --query 'username' -o tsv)
+PASSWORD=$(az acr credential show -n $REGISTRY_NAME --query 'passwords[0].value' -o tsv)
+
+docker login $LOGIN_SERVER -u $USER_NAME -p $PASSWORD
+```
+
+You should see `Login Succeeded` at the end of command output if you have logged into the ACR instance successfully.
+
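If you are running these commands locally with a Docker daemon available, an alternative (assuming you are already signed in with `az login`) is to let the Azure CLI obtain the registry token for you instead of passing the admin credentials to `docker login`:

```azurecli-interactive
# Alternative sign-in: the CLI exchanges your Azure credentials for an ACR access token.
az acr login --name $REGISTRY_NAME
```
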
+## Create an AKS cluster
+
+Use the [az aks create](/cli/azure/aks?view=azure-cli-latest&preserve-view=true#az_aks_create) command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. This will take several minutes to complete.
+
+```azurecli-interactive
+az aks create --resource-group java-liberty-project --name myAKSCluster --node-count 1 --generate-ssh-keys --enable-managed-identity
+```
+
+After a few minutes, the command completes and returns JSON-formatted information about the cluster, including the following:
+
+```output
+ "nodeResourceGroup": "MC_java-liberty-project_myAKSCluster_eastus",
+ "privateFqdn": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "java-liberty-project",
+```
+
+### Connect to the AKS cluster
+
+To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli](/cli/azure/aks?view=azure-cli-latest&preserve-view=true#az_aks_install_cli) command:
+
+```azurecli-interactive
+az aks install-cli
+```
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials](/cli/azure/aks?view=azure-cli-latest&preserve-view=true#az_aks_get_credentials) command. This command downloads credentials and configures the Kubernetes CLI to use them.
+
+```azurecli-interactive
+az aks get-credentials --resource-group java-liberty-project --name myAKSCluster --overwrite-existing
+```
+
+> [!NOTE]
+> The above command uses the default location for the [Kubernetes configuration file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), which is `~/.kube/config`. You can specify a different location for your Kubernetes configuration file using *--file*.
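+
+If you do keep this cluster's credentials in a separate file, point `kubectl` at that file before running the remaining commands in this article. A minimal sketch (the file name here is just an example):
+
+```azurecli-interactive
+# Write the credentials to a custom kubeconfig file and tell kubectl to use it
+az aks get-credentials --resource-group java-liberty-project --name myAKSCluster --file ./liberty-kubeconfig
+export KUBECONFIG=./liberty-kubeconfig
+```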
+
+To verify the connection to your cluster, use the [kubectl get](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command to return a list of the cluster nodes.
+
+```azurecli-interactive
+kubectl get nodes
+```
+
+The following example output shows the single node created in the previous steps. Make sure that the status of the node is *Ready*:
+
+```output
+NAME STATUS ROLES AGE VERSION
+aks-nodepool1-xxxxxxxx-yyyyyyyyyy Ready agent 76s v1.18.10
+```
+
+## Install Open Liberty Operator
+
+After creating and connecting to the cluster, install the [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator/tree/master/deploy/releases/0.7.0) by running the following commands.
+
+```azurecli-interactive
+OPERATOR_NAMESPACE=default
+WATCH_NAMESPACE='""'
+
+# Install Custom Resource Definitions (CRDs) for OpenLibertyApplication
+kubectl apply -f https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.0/openliberty-app-crd.yaml
+
+# Install cluster-level role-based access
+curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.0/openliberty-app-cluster-rbac.yaml \
+ | sed -e "s/OPEN_LIBERTY_OPERATOR_NAMESPACE/${OPERATOR_NAMESPACE}/" \
+ | kubectl apply -f -
+
+# Install the operator
+curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.0/openliberty-app-operator.yaml \
+ | sed -e "s/OPEN_LIBERTY_WATCH_NAMESPACE/${WATCH_NAMESPACE}/" \
+ | kubectl apply -n ${OPERATOR_NAMESPACE} -f -
+```
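+
+Before moving on, you can optionally confirm that the operator came up in the target namespace. This is only a quick sanity check; the exact resource names can vary between operator releases:
+
+```azurecli-interactive
+# The operator runs as a regular deployment in ${OPERATOR_NAMESPACE}
+kubectl get deployment,pods -n ${OPERATOR_NAMESPACE}
+```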
+
+## Build application image
+
+To deploy and run your Liberty application on the AKS cluster, containerize your application as a Docker image using [Open Liberty container images](https://github.com/OpenLiberty/ci.docker) or [WebSphere Liberty container images](https://github.com/WASdev/ci.docker).
+
+1. Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aks).
+1. Change directory to `javaee-app-simple-cluster` in your local clone.
+1. Run `mvn clean package` to package the application.
+1. Run one of the following commands to build the application image and push it to the ACR instance.
+ * Build with the Open Liberty base image if you prefer to use Open Liberty as a lightweight open source Java™ runtime:
+
+ ```azurecli-interactive
+ # Build and tag application image. This will cause the ACR instance to pull the necessary Open Liberty base images.
+ az acr build -t javaee-cafe-simple:1.0.0 -r $REGISTRY_NAME .
+ ```
+
+ * Build with the WebSphere Liberty base image if you prefer to use a commercial version of Open Liberty:
+
+ ```azurecli-interactive
+ # Build and tag application image. This will cause the ACR instance to pull the necessary WebSphere Liberty base images.
+ az acr build -t javaee-cafe-simple:1.0.0 -r $REGISTRY_NAME --file=Dockerfile-wlp .
+ ```
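+
+After either build completes, you can optionally confirm that the image and tag landed in the registry before deploying. This uses the same `$REGISTRY_NAME` shell variable set earlier:
+
+```azurecli-interactive
+# List repositories and tags in the ACR instance
+az acr repository list -n $REGISTRY_NAME -o table
+az acr repository show-tags -n $REGISTRY_NAME --repository javaee-cafe-simple -o table
+```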
+
+## Deploy application on the AKS cluster
+
+Follow the steps below to deploy the Liberty application on the AKS cluster.
+
+1. Create a pull secret so that the AKS cluster is authenticated to pull images from the ACR instance.
+
+ ```azurecli-interactive
+ kubectl create secret docker-registry acr-secret \
+ --docker-server=${LOGIN_SERVER} \
+ --docker-username=${USER_NAME} \
+ --docker-password=${PASSWORD}
+ ```
+
+1. Verify that the current working directory is `javaee-app-simple-cluster` in your local clone.
+1. Run the following commands to deploy your Liberty application with 3 replicas to the AKS cluster. Command output is also shown inline.
+
+ ```azurecli-interactive
+ # Create OpenLibertyApplication "javaee-app-simple-cluster"
+ cat openlibertyapplication.yaml | sed -e "s/\${Container_Registry_URL}/${LOGIN_SERVER}/g" | sed -e "s/\${REPLICAS}/3/g" | kubectl apply -f -
+
+ openlibertyapplication.openliberty.io/javaee-app-simple-cluster created
+
+ # Check if OpenLibertyApplication instance is created
+ kubectl get openlibertyapplication javaee-app-simple-cluster
+
+ NAME IMAGE EXPOSED RECONCILED AGE
+ javaee-app-simple-cluster youruniqueacrname.azurecr.io/javaee-cafe-simple:1.0.0 True 59s
+
+ # Check if deployment created by Operator is ready
+ kubectl get deployment javaee-app-simple-cluster --watch
+
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ javaee-app-simple-cluster 0/3 3 0 20s
+ ```
+
+1. Wait until you see `3/3` under the `READY` column and `3` under the `AVAILABLE` column, then use `CTRL-C` to stop the `kubectl` watch process.
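+
+If the deployment doesn't reach `3/3`, you can inspect the individual pods to see what's wrong (the pod name below is a placeholder; copy one from the `kubectl get pods` output):
+
+```azurecli-interactive
+# List the application pods, then check events and logs for a specific pod
+kubectl get pods
+kubectl describe pod <pod-name>
+kubectl logs <pod-name>
+```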
+
+### Test the application
+
+When the application runs, a Kubernetes load balancer service exposes the application front end to the internet. This process can take a while to complete.
+
+To monitor progress, use the [kubectl get service](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command with the `--watch` argument.
+
+```azurecli-interactive
+kubectl get service javaee-app-simple-cluster --watch
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+javaee-app-simple-cluster LoadBalancer 10.0.251.169 52.152.189.57 9080:31732/TCP 68s
+```
+
+Wait until the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, then use `CTRL-C` to stop the `kubectl` watch process.
+
+Open a web browser to the external IP address and port of your service (`52.152.189.57:9080` for the above example) to see the application home page. You should see the pod name of your application replicas displayed at the top-left of the page. Wait a few minutes and refresh the page; you'll probably see a different pod name displayed because of the load balancing provided by the AKS cluster.
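+
+If you prefer to check from the command line first, a plain HTTP request against the external IP address and port should also return the application page. For example, using the sample address above (substitute your own values):
+
+```azurecli-interactive
+curl http://52.152.189.57:9080/
+```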
+
+:::image type="content" source="./media/howto-deploy-java-liberty-app/java-liberty-app-aks-deployed-success.png" alt-text="Java liberty application successfully deployed on AKS":::
+
+>[!NOTE]
+> - Currently the application doesn't use HTTPS. We recommend that you [enable TLS with your own certificates](ingress-own-tls.md).
+
+## Clean up the resources
+
+To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group?view=azure-cli-latest&preserve-view=true#az_group_delete) command to remove the resource group, container service, container registry, and all related resources.
+
+```azurecli-interactive
+az group delete --name java-liberty-project --yes --no-wait
+```
+
+## Next steps
+
+You can learn more from references used in this guide:
+
+* [Azure Kubernetes Service](https://azure.microsoft.com/free/services/kubernetes-service/)
+* [Open Liberty](https://openliberty.io/)
+* [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator)
+* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/)
+* [Liberty Maven Plugin](https://github.com/OpenLiberty/ci.maven#liberty-maven-plugin)
+* [Open Liberty Container Images](https://github.com/OpenLiberty/ci.docker)
+* [WebSphere Liberty Container Images](https://github.com/WASdev/ci.docker)
aks https://docs.microsoft.com/en-us/azure/aks/private-clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
@@ -126,7 +126,6 @@ As mentioned, virtual network peering is one way to access your private cluster.
* For customers that need to enable Azure Container Registry to work with private AKS, the Container Registry virtual network must be peered with the agent cluster virtual network. * No support for converting existing AKS clusters into private clusters * Deleting or modifying the private endpoint in the customer subnet will cause the cluster to stop functioning.
-* Azure Monitor for containers Live Data isn't currently supported.
* After customers have updated the A record on their own DNS servers, those Pods would still resolve apiserver FQDN to the older IP after migration until they're restarted. Customers need to restart hostNetwork Pods and default-DNSPolicy Pods after control plane migration. * In the case of maintenance on the control plane, your [AKS IP](./limit-egress-traffic.md) might change. In this case you must update the A record pointing to the API server private IP on your custom DNS server and restart any custom pods or deployments using hostNetwork.
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-using-with-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-vnet.md
@@ -114,7 +114,7 @@ When an API Management service instance is hosted in a VNET, the ports in the fo
| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / Storage | **Dependency on Azure Storage** | External & Internal | | * / 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) and Azure KeyVault dependency | External & Internal | | * / 1433 | Outbound | TCP | VIRTUAL_NETWORK / SQL | **Access to Azure SQL endpoints** | External & Internal |
-| * / 433 | Outbound | TCP | VIRTUAL_NETWORK / AzureKeyVault | **Access to Azure KeyVault** | External & Internal |
+| * / 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureKeyVault | **Access to Azure KeyVault** | External & Internal |
| * / 5671, 5672, 443 | Outbound | TCP | VIRTUAL_NETWORK / EventHub | Dependency for [Log to Event Hub policy](api-management-howto-log-event-hubs.md) and monitoring agent | External & Internal | | * / 445 | Outbound | TCP | VIRTUAL_NETWORK / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) | External & Internal | | * / 443, 12000 | Outbound | TCP | VIRTUAL_NETWORK / AzureCloud | Health and Monitoring Extension | External & Internal |
api-management https://docs.microsoft.com/en-us/azure/api-management/security-baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-baseline.md
@@ -86,16 +86,12 @@ Combining API Management provisioned in an internal Vnet with the Application Ga
Note: This feature is available in the Premium and Developer tiers of API Management.
-Enable Azure DDoS Protection Standard on the Vnet associated with your API Management deployment to protect from distributed denial of service (DDoS) attacks.
- Use Azure Security Center Integrated Threat Intelligence to deny communications with known malicious or unused Internet IP addresses. * [How to integrate API Management in an internal VNET with Application Gateway](./api-management-howto-integrate-internal-vnet-appgateway.md) * [Understand Azure Application Gateway](../application-gateway/index.yml)
-* [How to configure Azure DDoS Protection Standard](../ddos-protection/manage-ddos-protection.md)
- * [Understand Azure Security Center Integrated Threat Intelligence](../security-center/azure-defender.md) **Azure Security Center monitoring**: Yes
@@ -180,8 +176,7 @@ Caution: When configuring an NSG on the API Management subnet, there are a set o
### 1.9: Maintain standard security configurations for network devices
-**Guidance**: Define and implement standard security configurations for network settings related to your Azure API Management deployments. Use Azure Policy aliases in the "Microsoft.ApiManagement" and "Microsoft.Network" namespaces to create custom policies to audit or enforce network configuration of your Azure API Management deployments and related resources. You may also make use of built-in policy definitions for Azure Virtual Networks, such as:
-- DDoS Protection Standard should be enabled
+**Guidance**: Define and implement standard security configurations for network settings related to your Azure API Management deployments. Use Azure Policy aliases in the "Microsoft.ApiManagement" and "Microsoft.Network" namespaces to create custom policies to audit or enforce network configuration of your Azure API Management deployments and related resources.
You may also use Azure Blueprints to simplify large-scale Azure deployments by packaging key environment artifacts, such as Azure Resource Manager templates, Azure role-based access control (Azure RBAC), and policies in a single blueprint definition. You can easily apply the blueprint to new subscriptions, environments, and fine-tune control and management through versioning.
@@ -1203,4 +1198,4 @@ Additionally, clearly mark subscriptions (for ex. production, non-prod) using ta
## Next steps - See the [Azure security benchmark](../security/benchmarks/overview.md)-- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)\ No newline at end of file
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
app-service https://docs.microsoft.com/en-us/azure/app-service/configure-domain-traffic-manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-domain-traffic-manager.md
@@ -71,9 +71,9 @@ Once you have finished adding or modifying DNS records at your domain provider,
### What about root domains?
-Since Traffic Manager only supports custom domain mapping with CNAME records, and because DNS standards don't support CNAME records for mapping root domains (for example, **contoso.com**), Traffic Manager doesn't support mapping to root domains. To work around this issue, use a URL redirect from at the app level. In ASP.NET Core, for example, you can use [URL Rewriting](/aspnet/core/fundamentals/url-rewriting). Then, use Traffic Manager to load balance the subdomain (**www.contoso.com**).
+Since Traffic Manager only supports custom domain mapping with CNAME records, and because DNS standards don't support CNAME records for mapping root domains (for example, **contoso.com**), Traffic Manager doesn't support mapping to root domains. To work around this issue, use a URL redirect at the app level. In ASP.NET Core, for example, you can use [URL Rewriting](/aspnet/core/fundamentals/url-rewriting). Then, use Traffic Manager to load balance the subdomain (**www.contoso.com**). Another approach is to [create an alias record for your domain name apex to reference an Azure Traffic Manager profile](https://docs.microsoft.com/azure/dns/tutorial-alias-tm). For example, for contoso.com, instead of using a redirecting service, you can configure Azure DNS to reference a Traffic Manager profile directly from your zone.
-For high availability scenarios, you can implement a fault-tolerant DNS setup without Traffic Manager by creating multiple *A records* that point from the root domain to each app copy's IP address. Then, [map the same root domain to all the app copies](app-service-web-tutorial-custom-domain.md#map-an-a-record). Since the same domain name cannot be mapped to two different apps in the same region, this setup only works when your app copies are in different regions.
+For high availability scenarios, you can implement a load-balancing DNS setup without Traffic Manager by creating multiple *A records* that point from the root domain to each app copy's IP address. Then, [map the same root domain to all the app copies](app-service-web-tutorial-custom-domain.md#map-an-a-record). Since the same domain name cannot be mapped to two different apps in the same region, this setup only works when your app copies are in different regions.
## Enable custom domain After the records for your domain name have propagated, use the browser to verify that your custom domain name resolves to your App Service app.
@@ -95,4 +95,4 @@ After the records for your domain name have propagated, use the browser to verif
## Next steps > [!div class="nextstepaction"]
-> [Secure a custom DNS name with an SSL binding in Azure App Service](configure-ssl-bindings.md)
\ No newline at end of file
+> [Secure a custom DNS name with an SSL binding in Azure App Service](configure-ssl-bindings.md)
app-service https://docs.microsoft.com/en-us/azure/app-service/environment/app-service-app-service-environment-custom-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/app-service-app-service-environment-custom-settings.md
@@ -1,12 +1,12 @@
Title: Configure custom settings description: Configure settings that apply to the entire Azure App Service environment. Learn how to do it with Azure Resource Manager templates.-+ ms.assetid: 1d1d85f3-6cc6-4d57-ae1a-5b37c642d812 Previously updated : 10/03/2020- Last updated : 01/29/2021+
@@ -57,7 +57,7 @@ For example, if an App Service Environment has four front ends, it will take rou
## Enable Internal Encryption
-The App Service Environment operates as a black box system where you cannot see the internal components or the communication within the system. To enable higher throughput, encryption is not enabled by default between internal components. The system is secure as the traffic is completely inaccessible to being monitored or accessed. If you have a compliance requirement though that requires complete encryption of the data path from end to end, there is a way to enable this with a clusterSetting.
+The App Service Environment operates as a black box system where you cannot see the internal components or the communication within the system. To enable higher throughput, encryption is not enabled by default between internal components. The system is secure because the traffic cannot be monitored or accessed. If you have a compliance requirement that requires complete encryption of the data path from end to end, however, you can enable encryption of the complete data path with a clusterSetting.
```json "clusterSettings": [
@@ -67,7 +67,7 @@ The App Service Environment operates as a black box system where you cannot see
} ], ```
-This will encrypt internal network traffic in your ASE between the front ends and workers, encrypt the pagefile and also encrypt the worker disks. After the InternalEncryption clusterSetting is enabled, there can be an impact to your system performance. When you make the change to enable InternalEncryption, your ASE will be in an unstable state until the change is fully propagated. Complete propagation of the change can take a few hours to complete, depending on how many instances you have in your ASE. We highly recommend that you do not enable this on an ASE while it is in use. If you need to enable this on an actively used ASE, we highly recommend that you divert traffic to a backup environment until the operation completes.
+Setting InternalEncryption to true encrypts internal network traffic in your ASE between the front ends and workers, encrypts the pagefile and also encrypts the worker disks. After the InternalEncryption clusterSetting is enabled, there can be an impact to your system performance. When you make the change to enable InternalEncryption, your ASE will be in an unstable state until the change is fully propagated. Complete propagation of the change can take a few hours to complete, depending on how many instances you have in your ASE. We highly recommend that you do not enable InternalEncryption on an ASE while it is in use. If you need to enable InternalEncryption on an actively used ASE, we highly recommend that you divert traffic to a backup environment until the operation completes.
## Disable TLS 1.0 and TLS 1.1
@@ -88,13 +88,13 @@ If you want to disable all inbound TLS 1.0 and TLS 1.1 traffic for all of the ap
The name of the setting says 1.0 but when configured, it disables both TLS 1.0 and TLS 1.1. ## Change TLS cipher suite order
-Another question from customers is if they can modify the list of ciphers negotiated by their server and this can be achieved by modifying the **clusterSettings** as shown below. The list of cipher suites available can be retrieved from [this MSDN article](https://msdn.microsoft.com/library/windows/desktop/aa374757\(v=vs.85\).aspx).
+The ASE supports changing the cipher suite from the default. The default set of ciphers is the same set that is used in the multi-tenant service. Changing the cipher suites affects an entire App Service deployment, making this possible only in the single-tenant ASE. Two cipher suites are required for an ASE: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 and TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256. If you wish to operate your ASE with the strongest and most minimal set of cipher suites, use just the two required ciphers. To configure your ASE to use just the ciphers that it requires, modify the **clusterSettings** as shown below.
```json "clusterSettings": [ { "name": "FrontEndSSLCipherSuiteOrder",
- "value": "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256"
+ "value": "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
} ], ```
automation https://docs.microsoft.com/en-us/azure/automation/automation-security-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-security-overview.md
@@ -4,7 +4,7 @@ description: This article provides an overview of Azure Automation account authe
keywords: automation security, secure automation; automation authentication Previously updated : 01/21/2021 Last updated : 02/01/2021
@@ -30,7 +30,7 @@ All tasks that you create against resources using Azure Resource Manager and the
Run As accounts in Azure Automation provide authentication for managing Azure Resource Manager resources or resources deployed on the classic deployment model. There are two types of Run As accounts in Azure Automation:
-* Azure Run As account: Allows you to manages Azure resources based on the Azure Resource Manager deployment and management service for Azure.
+* Azure Run As account: Allows you to manage Azure resources based on the Azure Resource Manager deployment and management service for Azure.
* Azure Classic Run As account: Allows you to manage Azure classic resources based on the Classic deployment model. To learn more about the Azure Resource Manager and Classic deployment models, see [Resource Manager and classic deployment](../azure-resource-manager/management/deployment-models.md).
availability-zones https://docs.microsoft.com/en-us/azure/availability-zones/az-region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-region.md
@@ -42,16 +42,16 @@ To achieve comprehensive business continuity on Azure, build your application ar
## Azure regions with Availability Zones
-| Americas | Europe | Germany | Africa | Asia Pacific |
-|--|-|-||-|
-| | | | | |
-| Canada Central | France Central | Germany West Central | South Africa North* | Japan East |
-| Central US | North Europe | | | Southeast Asia |
-| East US | UK South | | | Australia East |
-| East US 2 | West Europe | | | |
-| South Central US | | | | |
-| US Gov Virginia* | | | | |
-| West US 2 | | | | |
+| Americas | Europe | Africa | Asia Pacific |
+|--|-||-|
+| | | | |
+| Canada Central | France Central | South Africa North* | Japan East |
+| Central US | Germany West Central | | Southeast Asia |
+| East US | North Europe | | Australia East |
+| East US 2 | UK South | | |
+| South Central US | West Europe | | |
+| US Gov Virginia* | | | |
+| West US 2 | | | |
\* To learn more about Availability Zones and available services support in these regions, contact your Microsoft sales or customer representative. For the upcoming regions that will support Availability Zones, see [Azure geographies](https://azure.microsoft.com/en-us/global-infrastructure/geographies/).
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/faq.md
@@ -100,7 +100,7 @@ You can't downgrade a store from the Standard tier to the Free tier. You can cre
## Are there any limits on the number of requests made to App Configuration?
-Configuration stores in the Free tier are limited to 1,000 requests per day. Configuration stores in the Standard tier may experience temporary throttling when the request rate exceeds 20,000 requests per hour.
+In App Configuration, when reading key-values, data will be paginated and each request can read up to 100 key-values. When writing key-values, each request can create or update one key-value. This is supported through the REST API, App Configuration SDKs, and configuration providers. Configuration stores in the Free tier are limited to 1,000 requests per day. Configuration stores in the Standard tier may experience temporary throttling when the request rate exceeds 20,000 requests per hour.
When a store reaches its limit, it will return HTTP status code 429 for all requests made until the time period expires. The `retry-after-ms` header in the response gives a suggested wait time (in milliseconds) before retrying the request.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/add-bindings-existing-function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/add-bindings-existing-function.md
@@ -1,12 +1,12 @@
Title: Add bindings to an existing function in Azure Functions
-description: Learn how to add bindings to an existing function in your Azure Functions project.
+ Title: Connect functions to other Azure services
+description: Learn how to add bindings that connect to other Azure services to an existing function in your Azure Functions project.
Last updated 04/29/2020 #Customer intent: As a developer, I need to know how to add a binding to an existing function so that I can integrate external services to my function.
-# Add bindings to an existing function in Azure Functions
+# Connect functions to Azure services using bindings
When you create a function, language-specific trigger code is added in your project from a set of trigger templates. If you want to connect your function to other services by using input or output bindings, you have to add specific binding definitions in your function. To learn more about bindings, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-sources-event-tracing-windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/data-sources-event-tracing-windows.md
@@ -0,0 +1,74 @@
+
+ Title: Collecting Event Tracing for Windows (ETW) Events for analysis in Azure Monitor Logs
+description: Learn how to collect Event Tracing for Windows (ETW) for analysis in Azure Monitor Logs.
+++++ Last updated : 01/29/2021+
+# Collecting Event Tracing for Windows (ETW) Events for analysis in Azure Monitor Logs
+
+*Event Tracing for Windows (ETW)* provides a mechanism for instrumentation of user-mode applications and kernel-mode drivers. The Log Analytics agent is used to [collect Windows events](https://docs.microsoft.com/azure/azure-monitor/platform/data-sources-windows-events) written to the Administrative and Operational [ETW channels](https://docs.microsoft.com/windows/win32/wes/eventmanifestschema-channeltype-complextype). However, it is occasionally necessary to capture and analyze other events, such as those written to the Analytic channel.
+
+## Event flow
+
+To successfully collect [manifest-based ETW events](https://docs.microsoft.com/windows/win32/etw/about-event-tracing#types-of-providers) for analysis in Azure Monitor Logs, you must use the [Azure diagnostics extension](https://docs.microsoft.com/azure/azure-monitor/platform/diagnostics-extension-overview) for Windows (WAD). In this scenario, the diagnostics extension acts as the ETW consumer, writing events to Azure Storage (tables) as an intermediate store. Here it will be stored in a table named **WADETWEventTable**. Log Analytics then collects the table data from Azure storage, presenting it as a table named **ETWEvent**.
+
+![Event flow](./media/data-sources-event-tracing-windows/event-flow.png)
+
+## Configuring ETW Log collection
+
+### Step 1: Locate the correct ETW provider
+
+Use either of the following commands to enumerate the ETW providers on a source Windows System.
+
+Command line:
+
+```
+logman query providers
+```
+
+PowerShell:
+```
+Get-NetEventProvider -ShowInstalled | Select-Object Name, Guid
+```
+Optionally, you can pipe this PowerShell output to Out-GridView to aid navigation.
+
+Record the ETW provider name and GUID that aligns to the Analytic or Debug log that is presented in the Event Viewer, or to the module you intend to collect event data for.
+
+### Step 2: Diagnostics extension
+
+Ensure the *Windows diagnostics extension* is [installed](https://docs.microsoft.com/azure/azure-monitor/platform/diagnostics-extension-windows-install#install-with-azure-portal) on all source systems.
+
+### Step 3: Configure ETW log collection
+
+1. Navigate to the **Diagnostic Settings** blade of the virtual machine
+
+2. Select the **Logs** tab
+
+3. Scroll down and enable the **Event tracing for Windows (ETW) events** option
+![Screenshot of diagnostics settings](./media/data-sources-event-tracing-windows/enable-event-tracing-windows-collection.png)
+
+4. Set the provider GUID or provider class based on the provider you are configuring collection for
+
+5. Set the [**Log Level**](https://docs.microsoft.com/windows/win32/etw/configuring-and-starting-an-event-tracing-session) as appropriate
+
+6. Click the ellipsis adjacent to the supplied provider, and click **Configure**
+
+7. Ensure the **Default destination table** is set to **etweventtable**
+
+8. Set a [**Keyword filter**](https://docs.microsoft.com/windows/win32/wes/defining-keywords-used-to-classify-types-of-events) if required
+
+9. Save the provider and log settings
+
+Once matching events are generated, you should start to see the ETW events appearing in the **WADETWEventTable** table in Azure Storage. You can use Azure Storage Explorer to confirm this.
+
+### Step 4: Configure Log Analytics storage account collection
+
+Follow [these instructions](https://docs.microsoft.com/azure/azure-monitor/platform/diagnostics-extension-logs#collect-logs-from-azure-storage) to collect the logs from Azure Storage. Once configured, the ETW event data should appear in Log Analytics under the **ETWEvent** table.
+
+## Next steps
+- Use [custom fields](https://docs.microsoft.com/azure/azure-monitor/platform/custom-fields) to create structure in your ETW events
+- Learn about [log queries](https://docs.microsoft.com/azure/azure-monitor/log-query/log-query-overview) to analyze the data collected from data sources and solutions.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-connections-servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-connections-servicenow.md
@@ -25,10 +25,11 @@ For information about installing ITSMC, see [Add the IT Service Management Conne
### OAuth setup
-ServiceNow supported versions include Orlando, New York, Madrid, London, Kingston, Jakarta, Istanbul, Helsinki, and Geneva.
+ServiceNow supported versions include Paris, Orlando, New York, Madrid, London, Kingston, Jakarta, Istanbul, Helsinki, and Geneva.
ServiceNow admins must generate a client ID and client secret for their ServiceNow instance. See the following information as required:
+- [Set up OAuth for Paris](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
- [Set up OAuth for Orlando](https://docs.servicenow.com/bundle/orlando-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for New York](https://docs.servicenow.com/bundle/newyork-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for Madrid](https://docs.servicenow.com/bundle/madrid-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
@@ -142,4 +143,4 @@ When you're successfully connected and synced:
* [ITSM Connector overview](itsmc-overview.md) * [Create ITSM work items from Azure alerts](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts)
-* [Troubleshooting problems in the ITSM Connector](./itsmc-resync-servicenow.md)
\ No newline at end of file
+* [Troubleshooting problems in the ITSM Connector](./itsmc-resync-servicenow.md)
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-best-practices.md
@@ -271,6 +271,8 @@ The following information can be helpful when you work with [resources](template
> [!NOTE] > To ensure that secrets are encrypted when they are passed as parameters to VMs and extensions, use the `protectedSettings` property of the relevant extensions.
+* Specify explicit values for properties that have default values that could change over time. For example, if you are deploying an AKS cluster, you can either specify or omit the `kubernetesVersion` property. If you don't specify it, [the cluster defaults to the N-1 minor version and latest patch](../../aks/supported-kubernetes-versions.md#azure-portal-and-cli-versions). When you deploy the cluster using an ARM template, this default behavior might not be what you expect, and redeploying your template may result in the cluster being upgraded to a new Kubernetes version unexpectedly. Instead, consider specifying an explicit version number and then manually changing it when you are ready to upgrade your cluster.
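+
+  For example, you could look up the versions currently available in your region and pass an explicit value to the deployment. This is only a sketch: it assumes your template exposes a `kubernetesVersion` parameter and uses placeholder resource and file names.
+
+  ```azurecli-interactive
+  # See which Kubernetes versions are currently available in the target region
+  az aks get-versions --location eastus --output table
+
+  # Deploy with an explicit version instead of relying on the default
+  az deployment group create \
+    --resource-group myResourceGroup \
+    --template-file azuredeploy.json \
+    --parameters kubernetesVersion=1.19.7
+  ```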
+ ## Use test toolkit The ARM template test toolkit is a script that checks whether your template uses recommended practices. When your template isn't compliant with recommended practices, it returns a list of warnings with suggested changes. The test toolkit can help you learn how to implement best practices in your template.
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/transparent-data-encryption-byok-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-overview.md
@@ -11,7 +11,7 @@
Previously updated : 03/18/2020 Last updated : 02/01/2021 # Azure SQL Transparent Data Encryption with customer-managed key [!INCLUDE[appliesto-sqldb-sqlmi-asa](../includes/appliesto-sqldb-sqlmi-asa.md)]
@@ -182,11 +182,9 @@ Additional consideration for log files: Backed up log files remain encrypted wit
## High availability with customer-managed TDE
-Even in cases when there is no configured geo-redundancy for server, it is highly recommended to configure the server to use two different key vaults in two different regions with the same key material. It can be accomplished by creating a TDE protector using the primary key vault co-located in the same region as the server and cloning the key into a key vault in a different Azure region, so that the server has access to a second key vault should the primary key vault experience an outage while the database is up and running.
+Even in cases when there is no geo-redundancy configured for the server, it is highly recommended to configure the server to use two different key vaults in two different regions with the same key material. The key in the secondary key vault in the other region should not be marked as TDE protector, and it's not even allowed. If there is an outage affecting the primary key vault, and only then, the system will automatically switch to the other linked key with the same thumbprint in the secondary key vault, if it exists. Note though that the switch will not happen if the TDE protector is inaccessible because of revoked access rights, or because the key or key vault is deleted, as it may indicate that the customer intentionally wanted to restrict the server from accessing the key. Providing the same key material to two key vaults in different regions can be done by creating the key outside of the key vault and importing it into both key vaults.
-Use the Backup-AzKeyVaultKey cmdlet to retrieve the key in encrypted format from the primary key vault and then use the Restore-AzKeyVaultKey cmdlet and specify a key vault in the second region to clone the key. Alternatively, use the Azure portal to back up and restore the key. The key in the secondary key vault in the other region should not be marked as TDE protector, and it's not even allowed.
-
-If there is an outage affecting the primary key vault, and only then, the system will automatically switch to the other linked key with the same thumbprint in the secondary key vault, if it exists. Note though that switch will not happen if TDE protector is inaccessible because of revoked access rights, or because key or key vault is deleted, as it may indicate that customer intentionally wanted to restrict server from accessing the key.
+Alternatively, it can be accomplished by generating the key in the primary key vault co-located in the same region as the server and cloning the key into a key vault in a different Azure region. Use the [Backup-AzKeyVaultKey](https://docs.microsoft.com/powershell/module/az.keyvault/Backup-AzKeyVaultKey) cmdlet to retrieve the key in encrypted format from the primary key vault, and then use the [Restore-AzKeyVaultKey](https://docs.microsoft.com/powershell/module/az.keyvault/restore-azkeyvaultkey) cmdlet and specify a key vault in the second region to clone the key. Alternatively, use the Azure portal to back up and restore the key. The key backup/restore operation is only allowed between key vaults within the same Azure subscription and [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/).
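+
+As a sketch of the same clone using the Azure CLI instead of the PowerShell cmdlets mentioned above (the vault, key, and file names below are placeholders):
+
+```azurecli-interactive
+# Back up the key from the primary vault and restore it into the vault in the second region
+az keyvault key backup --vault-name PrimaryRegionVault --name ContosoTdeKey --file ContosoTdeKey.backup
+az keyvault key restore --vault-name SecondaryRegionVault --file ContosoTdeKey.backup
+```
+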
![Single-Server HA](./media/transparent-data-encryption-byok-overview/customer-managed-tde-with-ha.png)
@@ -214,4 +212,4 @@ You may also want to check the following PowerShell sample scripts for the commo
- [Remove a Transparent Data Encryption (TDE) protector for SQL Database using PowerShell](transparent-data-encryption-byok-remove-tde-protector.md) -- [Manage Transparent Data Encryption in SQL Managed Instance with your own key using PowerShell](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)\ No newline at end of file
+- [Manage Transparent Data Encryption in SQL Managed Instance with your own key using PowerShell](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/frequently-asked-questions-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/frequently-asked-questions-faq.md
@@ -357,7 +357,7 @@ Yes. See [How to configure a Custom DNS for Azure SQL Managed Instance](./custom
**Can I do DNS refresh?**
-Currently, we don't provide a feature to refresh DNS server configuration for SQL Managed Instance.
+Yes. See [Synchronize virtual network DNS servers setting on SQL Managed Instance virtual cluster](./synchronize-vnet-dns-servers-setting-on-virtual-cluster.md).
DNS configuration is eventually refreshed:
@@ -524,4 +524,4 @@ You can vote for a new Managed Instance feature or put a new improvement idea on
**How can I create Azure support request?**
-To learn how to create Azure support request, see [How to create Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
\ No newline at end of file
+To learn how to create Azure support request, see [How to create Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/replication-transactional-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/replication-transactional-overview.md
@@ -103,7 +103,7 @@ Transactional replication is useful in the following scenarios:
| Category | Data Sync | Transactional Replication | |||| | Advantages | - Active-active support<br/>- Bi-directional between on-premises and Azure SQL Database | - Lower latency<br/>- Transactional consistency<br/>- Reuse existing topology after migration |
-| Disadvantages | - 5 min or more latency<br/>- No transactional consistency<br/>- Higher performance impact | - CanΓÇÖt publish from Azure SQL Database <br/>- High maintenance cost |
+| Disadvantages | - No transactional consistency<br/>- Higher performance impact | - Can't publish from Azure SQL Database <br/>- High maintenance cost |
## Common configurations
@@ -202,4 +202,4 @@ For more information about configuring transactional replication, see the follow
- [Create a Push Subscription](/sql/relational-databases/replication/create-a-push-subscription/) - [Types of Replication](/sql/relational-databases/replication/types-of-replication) - [Monitoring (Replication)](/sql/relational-databases/replication/monitor/monitoring-replication)-- [Initialize a Subscription](/sql/relational-databases/replication/initialize-a-subscription)\ No newline at end of file
+- [Initialize a Subscription](/sql/relational-databases/replication/initialize-a-subscription)
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-guide.md
@@ -144,6 +144,18 @@ After you verify that data is same on both the source and the target, you can cu
> [!IMPORTANT] > For details on the specific steps associated with performing a cutover as part of migrations using DMS, see [Performing migration cutover](../../../dms/tutorial-sql-server-azure-sql-online.md#perform-migration-cutover).
+## Migration recommendations
+
+To speed up migration to Azure SQL Database, you should consider the following recommendations:
+
+| | Resource contention | Recommendation |
+|--|--|--|
+| **Source (typically on premises)** |The primary bottleneck during migration at the source is DATA I/O and latency on the DATA file, which needs to be monitored carefully. |Based on DATA I/O and DATA file latency, and depending on whether it's a virtual machine or a physical server, you will have to engage your storage admin and explore options to mitigate the bottleneck. |
+|**Target (Azure SQL Database)**|The biggest limiting factor is the log generation rate and latency on the log file. With Azure SQL Database, you can get a maximum log generation rate of 96 MB/s. | To speed up migration, scale up the target SQL database to Business Critical Gen5 8 vCore to get the maximum log generation rate of 96 MB/s and also achieve low latency for the log file. The [Hyperscale](https://docs.microsoft.com/azure/azure-sql/database/service-tier-hyperscale) service tier provides a 100 MB/s log rate regardless of the chosen service level. |
+|**Network** |The network bandwidth needed is equal to the maximum log ingestion rate of 96 MB/s (768 Mb/s). |Depending on network connectivity from your on-premises data center to Azure, check your network bandwidth (typically [Azure ExpressRoute](https://docs.microsoft.com/azure/expressroute/expressroute-introduction#bandwidth-options)) to accommodate the maximum log ingestion rate. |
+|**Virtual machine used for Data Migration Assistant (DMA)** |CPU is the primary bottleneck for the virtual machine running DMA. |To speed up data migration: </br>- Use Azure compute-intensive VMs. </br>- Use at least an F8s_v2 (8 vCore) VM for running DMA. </br>- Ensure the VM is running in the same Azure region as the target. |
+|**Azure Database Migration Service (DMS)** |Compute resource contention and database object considerations for DMS |Use Premium 4 vCore. DMS automatically takes care of database objects like foreign keys, triggers, constraints, and non-clustered indexes, and doesn't need any manual intervention. |
+ ## Post-migration
@@ -191,4 +203,4 @@ To learn more, see [managing Azure SQL Database after migration](../../database/
- [Best practices for costing and sizing workloads migrate to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs) - To assess the Application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit)-- For details on how to perform Data Access Layer A/B testing see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).\ No newline at end of file
+- For details on how to perform Data Access Layer A/B testing see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-encryption.md
@@ -38,11 +38,11 @@ Azure Backup can back up and restore Azure VMs using ADE with and without the Az
### Limitations -- You can back up and restore encrypted VMs within the same subscription and region.
+- You can back up and restore ADE encrypted VMs within the same subscription and region.
- Azure Backup supports VMs encrypted using standalone keys. Any key that's a part of a certificate used to encrypt a VM isn't currently supported.-- You can back up and restore encrypted VMs within the same subscription and region as the Recovery Services Backup vault.-- Encrypted VMs canΓÇÖt be recovered at the file/folder level. You need to recover the entire VM to restore files and folders.-- When restoring a VM, you can't use the [replace existing VM](backup-azure-arm-restore-vms.md#restore-options) option for encrypted VMs. This option is only supported for unencrypted managed disks.
+- You can back up and restore ADE encrypted VMs within the same subscription and region as the Recovery Services Backup vault.
+- ADE encrypted VMs can't be recovered at the file/folder level. You need to recover the entire VM to restore files and folders.
+- When restoring a VM, you can't use the [replace existing VM](backup-azure-arm-restore-vms.md#restore-options) option for ADE encrypted VMs. This option is only supported for unencrypted managed disks.
## Before you start
@@ -119,6 +119,17 @@ To set permissions:
1. In the Azure portal, select **All services**, and search for **Key vaults**. 1. Select the key vault associated with the encrypted VM you're backing up.+
+ >[!TIP]
+ >To identify a VM's associated key vault, use the following PowerShell command. Substitute your resource group name and VM name:
+ >
+ >`Get-AzVm -ResourceGroupName "MyResourceGroup001" -VMName "VM001" -Status`
+ >
+ > Look for the key vault name in this line:
+ >
+ >`SecretUrl : https://<keyVaultName>.vault.azure.net`
+ >
+ 1. Select **Access policies** > **Add Access Policy**. ![Add access policy](./media/backup-azure-vms-encryption/add-access-policy.png)
@@ -142,7 +153,7 @@ Encrypted VMs can only be restored by restoring the VM disk as explained below.
Restore encrypted VMs as follows: 1. [Restore the VM disk](backup-azure-arm-restore-vms.md#restore-disks).
-2. Recreate the virtual machine instance by doing one of the following:
+2. Recreate the virtual machine instance by doing one of the following actions:
1. Use the template that's generated during the restore operation to customize VM settings, and trigger VM deployment. [Learn more](backup-azure-arm-restore-vms.md#use-templates-to-customize-a-restored-vm). 2. Create a new VM from the restored disks using PowerShell. [Learn more](backup-azure-vms-automation.md#create-a-vm-from-restored-disks). 3. For Linux VMs, reinstall the ADE extension so the data disks are open and mounted.
backup https://docs.microsoft.com/en-us/azure/backup/disk-backup-support-matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-support-matrix.md
@@ -17,7 +17,7 @@ You can use [Azure Backup](./backup-overview.md) to protect Azure Disks. This ar
## Supported regions
-Azure Disk Backup is available in preview in the following regions: West US, West Central US, East US2, Korea Central, Korea South, Japan West, East Asia, UAE North.
+Azure Disk Backup is available in preview in the following regions: West US, West Central US, East US2, Korea Central, Korea South, Japan West, East Asia, UAE North, Brazil South, Central India.
More regions will be announced when they become available.
backup https://docs.microsoft.com/en-us/azure/backup/guidance-best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/guidance-best-practices.md
@@ -68,7 +68,7 @@ You can use a single vault or multiple vaults to organize and manage your backup
* If your workloads are spread across subscriptions, then you can create multiple vaults, one or more per subscription. * Backup Center allows you to have a single pane of glass to manage all tasks related to Backup. [Learn more here](). * You can customize your views with workbook templates. Backup Explorer is one such template for Azure VMs. [Learn more here](monitor-azure-backup-with-backup-explorer.md).
- * If you needed consistent policy across vaults, then you can use Azure policy to propagate backup policy across multiple vaults. You can write a custom [Azure Policy definition](../governance/policy/concepts/definition-structure.md) that uses the [ΓÇÿdeployifnotexistsΓÇÖ](../governance/policy/concepts/effects.md#deployifnotexists) effect to propagate a backup policy across multiple vaults. You assign can [assign](../governance/policy/assign-policy-portal.md) this Azure Policy definition to a particular scope (subscription or RG), so that it deploys a 'backup policy' resource to all Recovery Services vaults in the scope of the Azure Policy assignment. The settings of the backup policy (such as backup frequency, retention, and so on) should be specified by the user as parameters in the Azure Policy assignment.
+ * If you need a consistent policy across vaults, you can use Azure Policy to propagate a backup policy across multiple vaults. You can write a custom [Azure Policy definition](../governance/policy/concepts/definition-structure.md) that uses the ['deployifnotexists'](../governance/policy/concepts/effects.md#deployifnotexists) effect to propagate a backup policy across multiple vaults. You can also [assign](../governance/policy/assign-policy-portal.md) this Azure Policy definition to a particular scope (subscription or RG), so that it deploys a 'backup policy' resource to all Recovery Services vaults in the scope of the Azure Policy assignment. The settings of the backup policy (such as backup frequency, retention, and so on) should be specified by the user as parameters in the Azure Policy assignment.
* As your organizational footprint grows, you might want to move workloads across subscriptions for the following reasons: align by backup policy, consolidate vaults, trade-off on lower redundancy to save on cost (move from GRS to LRS). Azure Backup supports moving a Recovery Services vault across Azure subscriptions, or to another resource group within the same subscription. [Learn more here](backup-azure-move-recovery-services-vault.md).
batch https://docs.microsoft.com/en-us/azure/batch/batch-certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-certificates.md
@@ -1,41 +0,0 @@
- Title: Using certificates - Azure Batch
-description: Use certificates to enable authentication of applications
------- Previously updated : 02/17/2020----
-# Using certificates with Batch
-
-Currently the main reason to use certificates with Batch is if you have applications running in Pools that need to authenticate with an endpoint.
-
-If you don't already have a certificate, you can create a self-signed certificate using the
-`makecert` command-line tool.
-
-## Upload certificates manually through the Azure portal
-
-1. From the Batch account you want to upload a certificate to, select **Certificates** and then select **Add**.
-
-2. Upload the certificate with a .pfx or .cer extension.
-
-Once uploaded, the certificate is added to a list of certificates, and you can verify the thumbprint.
-
-Now when you create a Batch pool, you can navigate to Certificates within the pool and assign the certificate you uploaded to that pool.
-
-## Next steps
-
-Batch has a certificate API, [AZ batch certificate create](/cli/azure/batch/certificate)
-
-For information on using Key Vault, see [Securely access Key Vault with Batch](credential-access-key-vault.md).
batch https://docs.microsoft.com/en-us/azure/batch/credential-access-key-vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/credential-access-key-vault.md
@@ -1,12 +1,12 @@
Title: Securely access Key Vault with Batch
+ Title: Use certificates and securely access Azure Key Vault with Batch
description: Learn how to programmatically access your credentials from Key Vault using Azure Batch. Last updated 10/28/2020
-# Securely access Key Vault with Batch
+# Use certificates and securely access Azure Key Vault with Batch
In this article, you'll learn how to set up Batch nodes to securely access credentials stored in [Azure Key Vault](../key-vault/general/overview.md). There's no point in putting your admin credentials in Key Vault, then hard-coding credentials to access Key Vault from a script. The solution is to use a certificate that grants your Batch nodes access to Key Vault.
cloud-services-extended-support https://docs.microsoft.com/en-us/azure/cloud-services-extended-support/deploy-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-powershell.md
@@ -84,7 +84,7 @@ Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
$networkProfile = @{loadBalancerConfiguration = $loadBalancerConfig} ```
-9. Create a Key Vault. This Key Vault will be used to store certificates that are associated with the Cloud Service (extended support) roles. The Key Vault must be located in the same region and subscription as cloud service and have a unique name. For more information see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
+9. Create a Key Vault. This Key Vault will be used to store certificates that are associated with the Cloud Service (extended support) roles. Ensure that you have enabled 'Access policies' (in the portal) for access to 'Azure Virtual Machines for deployment' and 'Azure Resource Manager for template deployment'. The Key Vault must be located in the same region and subscription as the cloud service and have a unique name. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
```powershell New-AzKeyVault -Name "ContosKeyVault" -ResourceGroupName "ContosoOrg" -Location "East US"
@@ -134,6 +134,8 @@ Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
$expiration = (Get-Date).AddYears(1) $extension = New-AzCloudServiceRemoteDesktopExtensionObject -Name 'RDPExtension' -Credential $credential -Expiration $expiration -TypeHandlerVersion '1.2.1'
+ $storageAccountKey = Get-AzStorageAccountKey -ResourceGroupName "ContosOrg" -Name "contosostorageaccount"
+ $configFile = "<WAD configuration file path>"
$wadExtension = New-AzCloudServiceDiagnosticsExtension -Name "WADExtension" -ResourceGroupName "ContosOrg" -CloudServiceName "ContosCS" -StorageAccountName "ContosSA" -StorageAccountKey $storageAccountKey[0].Value -DiagnosticsConfigurationPath $configFile -TypeHandlerVersion "1.5" -AutoUpgradeMinorVersion $true $extensionProfile = @{extension = @($rdpExtension, $wadExtension)} ```
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/spatial-analysis-operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-operations.md
@@ -125,7 +125,7 @@ This is an example of the DETECTOR_NODE_CONFIG parameters for all spatial analys
| `threshold` | float| Events are egressed when the confidence of the AI models is greater than or equal to this value. | | `type` | string| For **cognitiveservices.vision.spatialanalysis-personcount** this should be `count`.| | `trigger` | string| The type of trigger for sending an event. Supported values are `event` for sending events when the count changes or `interval` for sending events periodically, irrespective of whether the count has changed or not.
-| `interval` | string| A time in seconds that the person count will be aggregated before an event is fired. The operation will continue to analyze the scene at a constant rate and returns the most common count over that interval. The aggregation interval is applicable to both `event` and `interval`.|
+| `output_frequency` | int | The rate at which events are egressed. When `output_frequency` = X, every Xth event is egressed; for example, `output_frequency` = 2 means every other event is output. The `output_frequency` is applicable to both `event` and `interval`. |
| `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box).| ### Line configuration for cognitiveservices.vision.spatialanalysis-personcrossingline
@@ -250,8 +250,7 @@ This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that
| `threshold` | float| Events are egressed when the confidence of the AI models is greater than or equal to this value. | | `type` | string| For **cognitiveservices.vision.spatialanalysis-persondistance** this should be `people_distance`.| | `trigger` | string| The type of trigger for sending an event. Supported values are `event` for sending events when the count changes or `interval` for sending events periodically, irrespective of whether the count has changed or not.
-| `interval` | string | A time in seconds that the violations will be aggregated before an event is fired. The aggregation interval is applicable to both `event` and `interval`.|
-| `output_frequency` | int | The rate at which events are egressed. When `output_frequency` = X, every X event is egressed, ex. `output_frequency` = 2 means every other event is output. The output_frequency is applicable to both `event` and `interval`.|
+| `output_frequency` | int | The rate at which events are egressed. When `output_frequency` = X, every Xth event is egressed; for example, `output_frequency` = 2 means every other event is output. The `output_frequency` is applicable to both `event` and `interval`.|
| `minimum_distance_threshold` | float| A distance in feet that will trigger a "TooClose" event when people are less than that distance apart.| | `maximum_distance_threshold` | float| A distance in feet that will trigger a "TooFar" event when people are greater than that distance apart.| | `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box).|
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Face/ReleaseNotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/ReleaseNotes.md
@@ -17,8 +17,20 @@
The Azure Face service is updated on an ongoing basis. Use this article to stay up to date with feature enhancements, fixes, and documentation updates.
+## January 2021
+* Mitigate latency when using the Face API: The Face team published a new article detailing potential causes of latency when using the service and possible mitigation strategies. See [Mitigate latency when using the Face service](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/how-to-mitigate-latency).
+
+## December 2020
+* Customer configuration for Face ID storage: While the Face Service does not store customer images, the extracted face feature(s) will be stored on the server. The Face ID is an identifier of the face feature and will be used in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), and [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237). The stored face features will expire and be deleted 24 hours after the original detection call. Customers can now determine the length of time these Face IDs are cached. The maximum value is still up to 24 hours, but a minimum value of 60 seconds can now be set. The supported cache duration for Face IDs is now any value between 60 seconds and 24 hours. More details can be found in the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API reference (the *faceIdTimeToLive* parameter).
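Illustrative only (not part of the source article): a minimal sketch of a detection call that shortens the Face ID cache, assuming *faceIdTimeToLive* is accepted as a query parameter on Face - Detect as described in the linked reference; the endpoint, key, and image URL below are placeholders.

```javascript
// Hypothetical endpoint and key - replace with your own Face resource values.
const endpoint = "https://<your-face-resource>.cognitiveservices.azure.com";
const key = "<your-face-key>";

async function detectWithShortLivedFaceId(imageUrl) {
  // Ask the service to keep the returned face ID for 10 minutes (600 s)
  // instead of the default 24 hours (assumed query parameter placement).
  const url = `${endpoint}/face/v1.0/detect?returnFaceId=true&faceIdTimeToLive=600`;
  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Ocp-Apim-Subscription-Key": key,
    },
    body: JSON.stringify({ url: imageUrl }),
  });
  return response.json(); // Array of detected faces, each carrying a faceId.
}
```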
+ ## November 2020
-* Published a sample face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high quality enrollments. The open source sample can be found in the [Build an enrollment app](build-enrollment-app.md) guide and on [GitHub](https://github.com/Azure-Samples/cognitive-services-FaceAPIEnrollmentSample), ready for developers to deploy or customize.
+* Published a sample face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high-quality enrollments. The open-source sample can be found in the [Build an enrollment app](build-enrollment-app.md) guide and on [GitHub](https://github.com/Azure-Samples/cognitive-services-FaceAPIEnrollmentSample), ready for developers to deploy or customize.
+
+## August 2020
+* Customer-managed encryption of data at rest: The Face service automatically encrypts your data when persisting it to the cloud. The Face service encryption protects your data to help you meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. There is also a new option to manage your subscription with your own keys called customer-managed keys (CMK). More details can be found at [Customer-managed keys](https://docs.microsoft.com/azure/cognitive-services/face/face-encryption-of-data-at-rest).
+
+## April 2020
+* New Face API Recognition Model: The new recognition 03 model is the most accurate model currently available. If you're a new customer, we recommend using this model. Recognition 03 will provide improved accuracy for both similarity comparisons and person-matching comparisons. More details can be found at [Specify a face recognition model](https://docs.microsoft.com/azure/cognitive-services/face/face-api-how-to-topics/specify-recognition-model).
## June 2019
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/Concepts/plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/plan.md
@@ -89,13 +89,13 @@ You can now have knowledge bases in different languages within the same QnA Make
### Ingest data sources
-You can use one of the following ingested [data sources](../index.yml) to create a knowledge base:
+You can use one of the following ingested [data sources](../Concepts/data-sources-and-content.md) to create a knowledge base:
* Public URL * Private SharePoint URL * File
-The ingestion process converts [supported content types](../index.yml) to markdown. All further editing of the *answer* is done with markdown. After you create a knowledge base, you can edit [QnA pairs](question-answer-set.md) in the QnA Maker portal with [rich text authoring](../how-to/edit-knowledge-base.md#rich-text-editing-for-answer).
+The ingestion process converts [supported content types](../reference-document-format-guidelines.md) to markdown. All further editing of the *answer* is done with markdown. After you create a knowledge base, you can edit [QnA pairs](question-answer-set.md) in the QnA Maker portal with [rich text authoring](../how-to/edit-knowledge-base.md#rich-text-editing-for-answer).
### Data format considerations
@@ -119,7 +119,7 @@ You should design your conversational flow with a loop in mind so that a user kn
Collaborators may be other developers who share the full development stack of the knowledge base application or may be limited to just authoring the knowledge base.
-Knowledge base authoring supports several [role-based access permissions](../index.yml) you apply in the Azure portal to limit the scope of a collaborator's abilities.
+Knowledge base authoring supports several [role-based access permissions](../reference-role-based-access-control.md) you apply in the Azure portal to limit the scope of a collaborator's abilities.
## Integration with client applications
@@ -221,4 +221,4 @@ To have the _same score_ on the `test` and `production` knowledge bases, isolate
## Next steps * [Azure resources](../how-to/set-up-qnamaker-service-azure.md)
-* [Question and answer pairs](question-answer-set.md)
\ No newline at end of file
+* [Question and answer pairs](question-answer-set.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/faq-stt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-stt.md
@@ -8,7 +8,7 @@
Previously updated : 08/20/2020 Last updated : 02/01/2021
@@ -78,7 +78,7 @@ Both base models and custom models will be retired after some time (see [Model l
**Q: Are my requests logged?**
-**A**: By default requests are not logged (neither audio, nor transcription). If necessary, you may select *Log content from this endpoint* option when you [create a custom endpoint](./how-to-custom-speech-train-model.md). You can also enable audio logging in the [Speech SDK](speech-sdk.md) on a per-request basis without creating a custom endpoint. In both cases, audio and recognition results of requests will be stored in secure storage. For subscriptions that use Microsoft-owned storage, they will be available for 30 days.
+**A**: By default, requests are not logged (neither audio nor transcription). If necessary, you may select the *Log content from this endpoint* option when you [create a custom endpoint](how-to-custom-speech-train-model.md#deploy-a-custom-model). You can also enable audio logging in the [Speech SDK](how-to-use-logging.md) on a per-request basis without creating a custom endpoint. In both cases, audio and recognition results of requests will be stored in secure storage. For subscriptions that use Microsoft-owned storage, they will be available for 30 days.
You can export the logged files on the deployment page in Speech Studio if you use a custom endpoint with *Log content from this endpoint* enabled. If audio logging is enabled via the SDK, call the [API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLogs) to access the files.
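A hedged sketch of enabling logging from client code with the JavaScript Speech SDK, assuming your SDK version exposes `enableAudioLogging()` on `SpeechConfig` (the subscription values below are placeholders):

```javascript
const sdk = require("microsoft-cognitiveservices-speech-sdk");

// Placeholder subscription values.
const speechConfig = sdk.SpeechConfig.fromSubscription("<subscription-key>", "<region>");

// Opt this recognizer's requests into audio and transcription logging,
// stored in secure storage as described above (assumed method name).
speechConfig.enableAudioLogging();

const audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();
const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);
recognizer.recognizeOnceAsync(result => console.log(result.text));
```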
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/how-to/speech-translation-basics/speech-translation-basics-cpp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/how-to/speech-translation-basics/speech-translation-basics-cpp.md
@@ -22,7 +22,7 @@ This article assumes that you have an Azure account and Speech service subscript
## Install the Speech SDK
-Before you can do anything, you'll need to install the Speech SDK. Depending on your platform, follow the instructions under the <a href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-sdk#get-the-speech-sdk" target="_blank">Get the Speech SDK <span class="docon docon-navigate-external x-hidden-focus"></span></a> section of the _About the Speech SDK_ article.
+Before you can do anything, you'll need to install the Speech SDK. Depending on your platform, follow the instructions under the <a href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-sdk#get-the-speech-sdk" target="_blank">Get the Speech SDK <span class="docon docon-navigate-external x-hidden-focus"></span></a> section of the _About the Speech SDK_ article.
## Import dependencies
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/how-to/speech-translation-basics/speech-translation-basics-csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/how-to/speech-translation-basics/speech-translation-basics-csharp.md
@@ -23,7 +23,7 @@ This article assumes that you have an Azure account and Speech service subscript
## Install the Speech SDK
-Before you can do anything, you'll need to install the Speech SDK. Depending on your platform, follow the instructions under the <a href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-sdk#get-the-speech-sdk" target="_blank">Get the Speech SDK <span class="docon docon-navigate-external x-hidden-focus"></span></a> section of the _About the Speech SDK_ article.
+Before you can do anything, you'll need to install the Speech SDK. Depending on your platform, follow the instructions under the <a href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-sdk#get-the-speech-sdk" target="_blank">Get the Speech SDK <span class="docon docon-navigate-external x-hidden-focus"></span></a> section of the _About the Speech SDK_ article.
## Import dependencies
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/how-to/speech-translation-basics/speech-translation-basics-java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/how-to/speech-translation-basics/speech-translation-basics-java.md
@@ -23,7 +23,7 @@ This article assumes that you have an Azure account and Speech service subscript
## Install the Speech SDK
-Before you can do anything, you'll need to install the Speech SDK. Depending on your platform, follow the instructions under the <a href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-sdk#get-the-speech-sdk" target="_blank">Get the Speech SDK <span class="docon docon-navigate-external x-hidden-focus"></span></a> section of the _About the Speech SDK_ article.
+Before you can do anything, you'll need to install the Speech SDK. Depending on your platform, follow the instructions under the <a href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-sdk#get-the-speech-sdk" target="_blank">Get the Speech SDK <span class="docon docon-navigate-external x-hidden-focus"></span></a> section of the _About the Speech SDK_ article.
## Import dependencies
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/how-to/speech-translation-basics/speech-translation-basics-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/how-to/speech-translation-basics/speech-translation-basics-python.md
@@ -22,7 +22,7 @@ This article assumes that you have an Azure account and Speech service subscript
## Install the Speech SDK
-Before you can do anything, you'll need to install the Speech SDK. Depending on your platform, follow the instructions under the <a href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-sdk#get-the-speech-sdk" target="_blank">Get the Speech SDK <span class="docon docon-navigate-external x-hidden-focus"></span></a> section of the _About the Speech SDK_ article.
+Before you can do anything, you'll need to install the Speech SDK. Depending on your platform, follow the instructions under the <a href="https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-sdk#get-the-speech-sdk" target="_blank">Get the Speech SDK <span class="docon docon-navigate-external x-hidden-focus"></span></a> section of the _About the Speech SDK_ article.
## Import dependencies
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/includes/spx-setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/includes/spx-setup.md
@@ -25,14 +25,6 @@ Type `spx` to see help for the Speech CLI.
> As an alternative to NuGet, you can download and extract the Speech CLI [zip archive](https://aka.ms/speech/spx-zips.zip), > find and extract your platform from the `spx-zips` directory, and add the `spx` path to your system **PATH** variable.
-### Run the Speech CLI
-
-1. Open the command prompt or PowerShell, then navigate to the directory where you extracted the Speech CLI.
-2. Type `spx` to see help commands for the Speech CLI.
-
-> [!NOTE]
-> Powershell does not check the local directory when looking for a command. In Powershell, change directory to the location of `spx` and call the tool by entering `.\spx`.
-> If you add this directory to your path, Powershell and the Windows command prompt will find `spx` from any directory without including the `.\` prefix.
### Font limitations
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
@@ -21,7 +21,7 @@
**Highlights summary** - Smaller memory and disk footprint making the SDK more efficient.-- Improved Custom Voice quality and ease of use.
+- Higher fidelity output formats available for custom neural voice private preview.
- Intent Recognizer can now return more than the top intent, giving you the ability to make a separate assessment about your customer's intent. - Your voice assistant or bot is now easier to set up, and you can make it stop listening immediately and exercise greater control over how it responds to errors. - Improved on-device performance by making compression optional.
@@ -38,7 +38,7 @@
- Android libraries are 3-5% smaller. **New features**-- **All**: Custom voice quality keeps getting better. Added 48kHz format for custom TTS voices, improving the audio quality of custom voices whose native output sample rates are higher than 24kHz.
+- **All**: New 48KHz output formats available for the private preview of custom neural voice through the TTS speech synthesis API: Audio48Khz192KBitRateMonoMp3, audio-48khz-192kbitrate-mono-mp3, Audio48Khz96KBitRateMonoMp3, audio-48khz-96kbitrate-mono-mp3, Raw48Khz16BitMonoPcm, raw-48khz-16bit-mono-pcm, Riff48Khz16BitMonoPcm, riff-48khz-16bit-mono-pcm.
- **All**: Custom voice is also easier to use. Added support for setting custom voice via `EndpointId` ([C++](https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig#setendpointid), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.endpointid?view=azure-dotnet#Microsoft_CognitiveServices_Speech_SpeechConfig_EndpointId), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setendpointid?view=azure-java-stable#com_microsoft_cognitiveservices_speech_SpeechConfig_setEndpointId_String_), [JavaScript](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig?view=azure-node-latest#endpointId), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechconfiguration#endpointid), [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig?view=azure-python#endpoint-id)). Before this change, custom voice users needed to set the endpoint URL via the `FromEndpoint` method. Now customers can use the `FromSubscription` method just like public voices, and then provide the deployment id by setting `EndpointId`. This simplifies setting up custom voices. - **C++/C#/Java/Objective-C/Python**: Get more than the top intent from `IntentRecognizer`. It now supports configuring the JSON result containing all intents and not only the top scoring intent via the `LanguageUnderstandingModel FromEndpoint` method by using the `verbose=true` URI parameter. This addresses [GitHub issue #880](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/880). See updated documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/quickstarts/intent-recognition/#add-a-languageunderstandingmodel-and-intents). - **C++/C#/Java**: Make your voice assistant or bot stop listening immediately. `DialogServiceConnector` ([C++](https://docs.microsoft.com/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector?view=azure-java-stable)) now has a `StopListeningAsync()` method to accompany `ListenOnceAsync()`. This will immediately stop audio capture and gracefully wait for a result, making it perfect for use with "stop now" button-press scenarios.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/speech-container-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-faq.md
@@ -576,10 +576,10 @@ In C# to enable dictation, invoke the `SpeechConfig.EnableDictation()` function.
### `FromEndpoint` APIs | Language | API details | |-|:|
-| C++ | <a href="https://docs.microsoft.com/en-us/cpp/cognitive-services/speech/speechconfig#fromendpoint" target="_blank">`SpeechConfig::FromEndpoint` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| C++ | <a href="https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig#fromendpoint" target="_blank">`SpeechConfig::FromEndpoint` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
| C# | <a href="https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.fromendpoint?view=azure-dotnet" target="_blank">`SpeechConfig.FromEndpoint` <span class="docon docon-navigate-external x-hidden-focus"></span></a> | | Java | <a href="https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechconfig.fromendpoint" target="_blank">`SpeechConfig.fromendpoint` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| Objective-C | <a href="https://docs.microsoft.com/en-us/objectivec/cognitive-services/speech/spxspeechconfiguration#initwithendpoint" target="_blank">`SPXSpeechConfiguration:initWithEndpoint;` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| Objective-C | <a href="https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechconfiguration#initwithendpoint" target="_blank">`SPXSpeechConfiguration:initWithEndpoint;` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
| Python | <a href="https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig?view=azure-python" target="_blank">`SpeechConfig;` <span class="docon docon-navigate-external x-hidden-focus"></span></a> | | JavaScript | Not currently supported, nor is it planned. |
@@ -598,9 +598,9 @@ In C# to enable dictation, invoke the `SpeechConfig.EnableDictation()` function.
| Language | API details | |--|:-| | C# | <a href="https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.fromhost?view=azure-dotnet" target="_blank">`SpeechConfig.FromHost` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| C++ | <a href="https://docs.microsoft.com/en-us/cpp/cognitive-services/speech/speechconfig#fromhost" target="_blank">`SpeechConfig::FromHost` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| C++ | <a href="https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig#fromhost" target="_blank">`SpeechConfig::FromHost` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
| Java | <a href="https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechconfig.fromhost" target="_blank">`SpeechConfig.fromHost` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| Objective-C | <a href="https://docs.microsoft.com/en-us/objectivec/cognitive-services/speech/spxspeechconfiguration#initwithhost" target="_blank">`SPXSpeechConfiguration:initWithHost;` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| Objective-C | <a href="https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechconfiguration#initwithhost" target="_blank">`SPXSpeechConfiguration:initWithHost;` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
| Python | <a href="https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig?view=azure-python" target="_blank">`SpeechConfig;` <span class="docon docon-navigate-external x-hidden-focus"></span></a> | | JavaScript | Not currently supported |
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/concepts/model-versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/concepts/model-versioning.md
@@ -24,9 +24,9 @@ Use the table below to find which model versions are supported by each hosted en
| Endpoint | Supported Versions | latest version | ||--|-| | `/sentiment` | `2019-10-01`, `2020-04-01` | `2020-04-01` |
-| `/languages` | `2019-10-01`, `2020-07-01`, `2020-09-01`, `2021-01-15` | `2021-01-05` |
+| `/languages` | `2019-10-01`, `2020-07-01`, `2020-09-01`, `2021-01-05` | `2021-01-05` |
| `/entities/linking` | `2019-10-01`, `2020-02-01` | `2020-02-01` |
-| `/entities/recognition/general` | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2021-01-05` | `2021-01-15` |
+| `/entities/recognition/general` | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2021-01-15` | `2021-01-15` |
| `/entities/recognition/pii` | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2020-07-01` | `2020-07-01` | | `/entities/health` | `2020-09-03` | `2020-09-03` | | `/keyphrases` | `2019-10-01`, `2020-07-01` | `2020-07-01` |
communication-services https://docs.microsoft.com/en-us/azure/communication-services/tutorials/building-app-start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/building-app-start.md
@@ -0,0 +1,464 @@
+
+ Title: Tutorial - Prepare a web app for Azure Communication Services (Node.js)
+
+description: Learn how to create a baseline web application that supports Azure Communication Services
+++ Last updated : 01/03/2012++++
+# Tutorial: Prepare a web app for Azure Communication Services (Node.js)
+
+[!INCLUDE [Public Preview Notice](../includes/public-preview-include.md)]
+
+Azure Communication Services allows you to add real-time communications to your applications. In this tutorial, you'll learn how to set up a web application that supports Azure Communication Services. This is an introductory tutorial intended for new developers who want to get started with real-time communications.
+
+By the end of this tutorial, you'll have a baseline web application configured with Azure Communication Services client libraries that you can use to begin building your real-time communications solution.
+
+Feel free to visit the [Azure Communication Services GitHub](https://github.com/Azure/communication) page to provide feedback.
+
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+> * Configure your development environment
+> * Set up a local webserver
+> * Add the Azure Communication Services packages to your website
+> * Publish your website to Azure Static Websites
+
+## Prerequisites
+
+- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Note that the free account gives you $200 in Azure credits to try out any combination of services.
+- [Visual Studio Code](https://code.visualstudio.com/): We'll use this to edit code in your local development environment.
+- [webpack](https://webpack.js.org/): This will be used to bundle and locally host your code.
+- [Node.js](https://nodejs.org/en/): This will be used to install and manage dependencies like Azure Communication Services client libraries and webpack.
+- [nvm and npm](https://docs.microsoft.com/windows/nodejs/setup-on-windows) to manage Node.js versions and packages.
+- The [Azure Storage extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestorage) for Visual Studio Code. This extension is needed to publish your application in Azure Storage. [Read more about hosting static web sites in Azure Storage](https://docs.microsoft.com/azure/storage/blobs/storage-blob-static-website)
+- The [Azure App Service extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice). The extension allows deploying websites (similar to the previous extension) but with the option to configure fully managed continuous integration and continuous delivery (CI/CD).
+- The [Azure Function extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) to build your own serverless applications. For example, you can host your authentication application in Azure Functions.
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../quickstarts/create-communication-resource.md).
+- A user access token. See the [access tokens quickstart](https://docs.microsoft.com/azure/communication-services/quickstarts/access-tokens?pivots=programming-language-javascript) or the [trusted service tutorial](https://docs.microsoft.com/azure/communication-services/tutorials/trusted-service-tutorial) for instructions.
++
+## Configure your development environment
+
+Your local development environment will be configured like this:
+
+:::image type="content" source="./media/step-one-pic-one.png" alt-text="Developer environment architecture":::
++
+### Install Node.js, nvm and npm
+
+We'll use Node.js to download and install various dependencies we need for our client-side application. We'll use it to generate static files that we'll then host in Azure, so you don't need to worry about configuring it on your server.
+
+Windows developers can follow [this NodeJS tutorial](https://docs.microsoft.com/windows/nodejs/setup-on-windows) to configure Node, nvm, and npm.
+
+We tested this tutorial using the LTS 12.20.0 version. After you install nvm, use the following PowerShell command to deploy the version that you want to use:
+
+```PowerShell
+nvm list available
+nvm install 12.20.0
+nvm use 12.20.0
+```
+
+:::image type="content" source="./media/step-one-pic-two.png" alt-text="Working with nvm to deploy Node.js":::
+
+### Configure Visual Studio Code
+
+You can download [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
+
+### Create a workspace for your Azure Communication Services projects
+
+Create a new folder to store your project files, like this: `C:\Users\Documents\ACS\CallingApp`. In Visual Studio Code, click "File", "Add Folder to Workspace" and add the folder to your workspace.
+
+:::image type="content" source="./media/step-one-pic-three.png" alt-text="Creating new workplace":::
+
+Go to "Explorer" in Visual Studio Code on the left pane, and you'll see your "CallingApp" folder in the "Untitled" workspace.
+
+:::image type="content" source="./media/step-one-pic-four.png" alt-text="Explorer":::
+
+Feel free to update the name of your workspace. You can validate your Node.js version by right-clicking on your "CallingApp" folder and selecting "Open in Integrated Terminal".
+
+:::image type="content" source="./media/step-one-pic-five.png" alt-text="Opening a terminal":::
+
+In the terminal, type the following command to validate the Node.js version installed in the previous step:
+
+```Console
+node --version
+```
+
+:::image type="content" source="./media/step-one-pic-six.png" alt-text="Validating Node.js version":::
+
+### Install Azure Extensions for Visual Studio Code
+
+Install the [Azure Storage extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestorage) either through the Visual Studio marketplace or with Visual Studio Code (View > Extensions > Azure Storage).
+
+:::image type="content" source="./media/step-one-pic-seven.png" alt-text="Installing Azure Storage Extension 1":::
+
+Follow the same steps for the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) and [Azure App Service](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice) extensions.
++
+## Set up a local webserver
+
+### Install webpack
+
+[webpack](https://webpack.js.org/) lets you bundle code into static files that you can deploy to Azure. It also has a development server, which we'll configure to use with the calling sample.
+
+Type the following in your open terminal to install webpack:
+
+``` Console
+npm install webpack@4.42.0 webpack-cli@3.3.11 webpack-dev-server@3.10.3 --save-dev
+```
+
+This tutorial was tested using the versions specified above. Specifying `--save-dev` tells the package manager that these dependencies are for development purposes and shouldn't be included in the code that we deploy to Azure.
+
+You'll see three new packages added to your `package.json` file as "devDependencies". The packages will be installed into the `./CallingApp/node_modules/` directory.
+
+:::image type="content" source="./media/step-one-pic-ten.png" alt-text="webpack configuration":::
+
+### Configure the development server
+
+Running a static page (like your `index.html` file) directly from your browser uses the `file://` protocol. For your npm modules to work properly, we'll serve the app over HTTP by using webpack as a local development server.
+
+We'll create two configurations: one for development and the other for production. Files prepared for production will be minified, meaning that we'll remove unused whitespace and characters. This is appropriate for production scenarios where latency should be minimized or where code should be obfuscated.
+
+We'll use the `webpack-merge` tool to work with [different configuration files for webpack](https://webpack.js.org/guides/production/)
+
+Let's start with the development environment. First, we need to install `webpack-merge`. In your terminal, run the following:
+
+```Console
+npm install --save-dev webpack-merge
+```
+
+In your `package.json` file, you can see one more dependency added to the "devDependencies."
+
+In the next step, we need to create a new file `webpack.common.js` and add the following code:
+
+```JavaScript
+const path = require('path');
+module.exports ={
+ entry: './app.js',
+ output: {
+ filename:'app.js',
+ path: path.resolve(__dirname, 'dist'),
+ }
+}
+```
+
+We'll then add two more files, one for each configuration:
+
+* webpack.dev.js
+* webpack.prod.js
+
+In the next step, we need to modify the `webpack.dev.js` file. Add the following code to that file:
+
+```JavaScript
+const { merge } = require('webpack-merge');
+const common = require('./webpack.common.js');
+
+module.exports = merge(common, {
+ mode: 'development',
+ devtool: 'inline-source-map',
+});
+```
+In this configuration, we import common parameters from `webpack.common.js`, merge the two files, set the mode to "development", and configure the source map as "inline-source-map".
+
+Development mode tells webpack not to minify the files and not produce optimized production files. Detailed documentation on webpack modes can be found [here](https://webpack.js.org/configuration/mode/).
+
+Source map options are listed [here](https://webpack.js.org/configuration/devtool/#root). Setting the source map makes it easier for you to debug through your browser.
+
+:::image type="content" source="./media/step-one-pic-11.png" alt-text="Configuring webpack":::
+
+To run the development server, go to `package.json` and add the following code under `scripts`:
+
+```JavaScript
+ "build:dev": "webpack-dev-server --config webpack.dev.js"
+```
+
+Your file now should look like this:
+
+```JavaScript
+{
+ "name": "CallingApp",
+ "version": "1.0.0",
+ "description": "",
+ "main": "index.js",
+ "scripts": {
+ "test": "echo \"Error: no test specified\" && exit 1",
+ "build:dev": "webpack-dev-server --config webpack.dev.js"
+ },
+ "keywords": [],
+ "author": "",
+ "license": "ISC",
+ "devDependencies": {
+ "webpack": "^4.42.0",
+ "webpack-cli": "^3.3.11",
+ "webpack-dev-server": "^3.10.3"
+ }
+}
+```
+
+You've now added a command that you can run with npm.
+
+:::image type="content" source="./media/step-one-pic-12.png" alt-text="Modifying package-json.js":::
+
+### Testing the development server
+
+ In Visual Studio Code, create three files under your project:
+
+* `index.html`
+* `app.js`
+* `app.css` (optional, this lets you style your app)
+
+Paste this into `index.html`:
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+<head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>My first ACS application</title>
+ <link rel="stylesheet" href="./app.css"/>
+ <script src="./app.js" defer></script>
+</head>
+<body>
+ <h1>Hello from ACS!</h1>
+</body>
+</html>
+```
+:::image type="content" source="./media/step-one-pic-13.png" alt-text="HTML file":::
+
+Add the following code to `app.js`:
+
+```JavaScript
+alert('Hello world alert!');
+console.log('Hello world console!');
+```
+Add the following code to `app.css`:
+
+```CSS
+html {
+ font-family: sans-serif;
+ }
+```
+
+ :::image type="content" source="./media/step-one-pic-14.png" alt-text="App.js file with JS code":::
+
+When you open this page, you should see your message displayed with an alert and within your browser's console.
+
+:::image type="content" source="./media/step-one-pic-15.png" alt-text="App.css file":::
+
+Use the following terminal command to test your development configuration:
+
+```Console
+npm run build:dev
+```
+
+The console will show you where the server is running. By default, it's `http://localhost:8080`. The `build:dev` command is the one we added to our `package.json` earlier.
+
+ :::image type="content" source="./media/step-one-pic-16.png" alt-text="Starting a development server":::
+
+ Navigate to the address in your browser and you should see the page and the alert configured in the previous steps.
+
+ :::image type="content" source="./media/step-one-pic-17.png" alt-text="Html page":::
+
+
+While the server is running, you can change the code, and the server and the HTML page will automatically reload.
+
+Next, go to the `app.js` file in Visual Studio Code and delete `alert('Hello world alert!');`. Save your file and verify that the alert disappears from your browser.
+
+To stop your server, you can run `Ctrl+C` in your terminal. To start your server, type `npm run build:dev` at any time.
+
+## Add the Azure Communication Services packages
+
+Use the `npm install` command to install the Azure Communication Services Calling client library for JavaScript.
+
+```Console
+npm install @azure/communication-common --save
+npm install @azure/communication-calling --save
+```
+
+This action will add the Azure Communication Services common and calling packages as dependencies of your package. You'll see two new packages added to the `package.json` file. More information about the `npm install` command can be found [here](https://docs.npmjs.com/cli/v6/commands/npm-install).
+
+:::image type="content" source="./media/step-one-pic-nine.png" alt-text="Installing Azure Communication Services packages":::
+
+These packages are provided by the Azure Communication Services team and include the authentication and calling libraries. The `--save` flag signals that our application depends on these packages for production use; they'll be listed in the `dependencies` section of our `package.json` file. When we build the application for production, the packages will be included in our production code.
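For example, once the packages are installed, `app.js` could wire them up roughly like this (a sketch only; `<USER_ACCESS_TOKEN>` is the user access token from the prerequisites, and the calling quickstart linked in the next steps covers the full flow):

```javascript
const { CallClient } = require('@azure/communication-calling');
const { AzureCommunicationTokenCredential } = require('@azure/communication-common');

async function init() {
  // Authenticate with the user access token created in the prerequisites.
  const tokenCredential = new AzureCommunicationTokenCredential('<USER_ACCESS_TOKEN>');

  // The call client is the entry point into the Calling client library.
  const callClient = new CallClient();
  const callAgent = await callClient.createCallAgent(tokenCredential);
  console.log('Call agent created', callAgent);
}

init().catch(console.error);
```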
++
+## Publish your website to Azure Static Websites
+
+### Create a new npm package
+
+In your terminal, from the path of your workspace folder, type:
+
+``` console
+npm init -y
+```
+
+This command initializes a new npm package and adds `package.json` into the root folder of your project.
+
+:::image type="content" source="./media/step-one-pic-eight.png" alt-text="Package JSON":::
+
+Additional documentation on the npm init command can be found [here](https://docs.npmjs.com/cli/v6/commands/npm-init)
+
+
+### Create a configuration for production deployment
+
+Add the following code to the `webpack.prod.js`:
+
+```JavaScript
+const { merge } = require('webpack-merge');
+ const common = require('./webpack.common.js');
+
+ module.exports = merge(common, {
+ mode: 'production',
+ });
+ ```
+
+Note that this configuration will be merged with `webpack.common.js` (where we specified the input file and where to store the results) and will set the mode to "production".
+
+In `package.json`, add the following code:
+
+```JavaScript
+"build:prod": "webpack --config webpack.prod.js"
+```
+
+Your file should look like this:
+
+```JavaScript
+{
+ "name": "CallingApp",
+ "version": "1.0.0",
+ "description": "",
+ "main": "index.js",
+ "scripts": {
+ "test": "echo \"Error: no test specified\" && exit 1",
+ "build:dev": "webpack-dev-server --config webpack.dev.js",
+ "build:prod": "webpack --config webpack.prod.js"
+ },
+ "keywords": [],
+ "author": "",
+ "license": "ISC",
+ "dependencies": {
+ "@azure/communication-calling": "^1.0.0-beta.3",
+ "@azure/communication-common": "^1.0.0-beta.3"
+ },
+ "devDependencies": {
+ "webpack": "^4.42.0",
+ "webpack-cli": "^3.3.11",
+ "webpack-dev-server": "^3.10.3",
+ "webpack-merge": "^5.7.3"
+ }
+}
+```
+
+ :::image type="content" source="./media/step-one-pic-20.png" alt-text="Configured files":::
++
+In the terminal run:
+
+```Console
+npm run build:prod
+```
+
+The command will create a `dist` folder and production-ready `app.js` static file in it.
+
+ :::image type="content" source="./media/step-one-pic-21.png" alt-text="Production build":::
+
+
+### Deploy your app to Azure Storage
+
+Copy `index.html` and `app.css` to the `dist` folder.
+
+In the `dist` folder, create a new file and name it `404.html`. Copy the following markup into that file:
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+<head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <link rel="stylesheet" href="./app.css"/>
+ <title>Document</title>
+</head>
+<body>
+ <h1>The page does not exist.</h1>
+</body>
+</html>
+```
+
+Save the file (Ctrl + S).
+
+Right-click and select "Deploy to Static Website via Azure Storage".
+
+:::image type="content" source="./media/step-one-pic-22.png" alt-text="Start deploying to Azure":::
+
+In the `Select subscription` field, select "Sign in to Azure" (or "Create a Free Azure Account" if you haven't created a subscription before).
+
+:::image type="content" source="./media/step-one-pic-23.png" alt-text="Sign in to Azure":::
+
+Select `Create new Storage Account` > `Advanced`:
+
+ :::image type="content" source="./media/step-one-pic-24.png" alt-text="Creating the Storage Account Group":::
+
+ Provide the name of the storage account:
+
+ :::image type="content" source="./media/step-one-pic-25.png" alt-text="Adding a name for the account":::
+
+Create a new resource group if needed:
+
+ :::image type="content" source="./media/step-one-pic-26.png" alt-text="Creating new group":::
+
+ Answer "Yes" to Would you like to enable static website hosting?"
+
+ :::image type="content" source="./media/step-one-pic-27.png" alt-text="Selecting option to enable static website hosting":::
+
+Accept the default file name in "Enter the index document name", as we created the file `index.html`.
+
+Type `404.html` for "Enter the 404 error document path".
+
+Select the location of the application. The location you select will define which media processor will be used in your future calling application in group calls.
+
+Azure Communication Services selects the Media Processor based on the application location.
+
+:::image type="content" source="./media/step-one-pic-28.png" alt-text="Select location":::
+
+Wait until the resource and your website are created.
+
+Click "Browse to website":
+
+:::image type="content" source="./media/step-one-pic-29.png" alt-text="Deployment completed":::
+
+From your browser's development tools, you can inspect the source and see our file, prepared for production.
+
+:::image type="content" source="./media/step-one-pic-30.png" alt-text="Website":::
+
+Go to the [Azure portal](https://portal.azure.com/#home), select your resource group, select the application you created, and navigate to `Settings` > `Static website`. You can see that static websites are enabled and note the primary endpoint, Index document, and Error path document files.
+
+:::image type="content" source="./media/step-one-pic-31.png" alt-text="Static website selection":::
+
+Under "Blob service" select the "Containers" and you'll see two containers created, one for logs ($logs) and content of your website ($web)
+
+:::image type="content" source="./media/step-one-pic-32.png" alt-text="Container configuration":::
+
+If you go to `$web`, you'll see the files you created in Visual Studio Code and deployed to Azure.
+
+:::image type="content" source="./media/step-one-pic-33.png" alt-text="Deployment":::
+
+You can redeploy the application from Visual Studio Code at any time.
+
+You're now ready to build your first Azure Communication Services web application.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Add voice calling to your app](../quickstarts/voice-video-calling/getting-started-with-calling.md)
+
+You may also want to:
+
+- [Add chat to your app](../quickstarts/chat/get-started.md)
+- [Creating user access tokens](../quickstarts/access-tokens.md)
+- [Learn about client and server architecture](../concepts/client-and-server-architecture.md)
+- [Learn about authentication](../concepts/authentication.md)
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-troubleshoot.md
@@ -26,7 +26,7 @@ The following article describes common errors and solutions for deployments usin
| 13 | Unauthorized | The request lacks the permissions to complete. | Ensure that you set proper permissions for your database and collection. | | 16 | InvalidLength | The request specified has an invalid length. | If you are using the explain() function, ensure that you supply only one operation. | | 26 | NamespaceNotFound | The database or collection being referenced in the query cannot be found. | Ensure your database/collection name precisely matches the name in your query.|
-| 50 | ExceededTimeLimit | The request has exceeded the timeout of 60 seconds of execution. | There can be many causes for this error. One of the causes is when the currently allocated request units capacity is not sufficient to complete the request. This can be solved by increasing the request units of that collection or database. In other cases, this error can be worked-around by splitting a large request into smaller ones. Retrying a write operation that has received this error may result in a duplicate write.|
+| 50 | ExceededTimeLimit | The request has exceeded the timeout of 60 seconds of execution. | There can be many causes for this error. One of the causes is when the currently allocated request units capacity is not sufficient to complete the request. This can be solved by increasing the request units of that collection or database. In other cases, this error can be worked around by splitting a large request into smaller ones. Retrying a write operation that has received this error may result in a duplicate write. <br><br>If you are trying to delete large amounts of data without impacting RUs: <br>- Consider using TTL (based on timestamp): [Expire data with Azure Cosmos DB's API for MongoDB](https://docs.microsoft.com/azure/cosmos-db/mongodb-time-to-live) <br>- Use cursor/batch size to perform the delete. You can fetch a single document at a time and delete it through a loop (see the Node.js sketch below). This will help you slowly delete data without impacting your production application.|
| 61 | ShardKeyNotFound | The document in your request did not contain the collection's shard key (Azure Cosmos DB partition key). | Ensure the collection's shard key is being used in the request.| | 66 | ImmutableField | The request is attempting to change an immutable field | "id" fields are immutable. Ensure that your request does not attempt to update that field. | | 67 | CannotCreateIndex | The request to create an index cannot be completed. | Up to 500 single field indexes can be created in a container. Up to eight fields can be included in a compound index (compound indexes are supported in version 3.6+). |
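Illustrative only: a minimal Node.js (MongoDB driver) sketch of the cursor/batch delete approach mentioned for error 50 above; the database, collection, and filter names are placeholders.

```javascript
const { MongoClient } = require('mongodb');

async function deleteInSmallBatches(connectionString, filter) {
  const client = await MongoClient.connect(connectionString);
  try {
    const collection = client.db('mydb').collection('mycollection');

    // Fetch one document at a time and delete it, so each request stays small
    // and RU consumption is spread out instead of arriving in one large burst.
    const cursor = collection.find(filter).batchSize(1);
    while (await cursor.hasNext()) {
      const doc = await cursor.next();
      await collection.deleteOne({ _id: doc._id });
    }
  } finally {
    await client.close();
  }
}
```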
@@ -36,7 +36,7 @@ The following article describes common errors and solutions for deployments usin
| 16501 | ExceededMemoryLimit | As a multi-tenant service, the operation has gone over the client's memory allotment. | Reduce the scope of the operation through more restrictive query criteria or contact support from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). Example: `db.getCollection('users').aggregate([{$match: {name: "Andy"}}, {$sort: {age: -1}}]))` | | 40324 | Unrecognized pipeline stage name. | The stage name in your aggregation pipeline request was not recognized. | Ensure that all aggregation pipeline names are valid in your request. | | - | MongoDB wire version issues | The older versions of MongoDB drivers are unable to detect the Azure Cosmos account's name in the connection strings. | Append *appName=@**accountName**@* at the end of your Cosmos DB's API for MongoDB connection string, where ***accountName*** is your Cosmos DB account name. |
-| - | MongoDB client networking issues (such as socket or endOfStream exceptions)| The network request has failed. This is often caused by an inactive TCP connection that the MongoDB client is attempting to use. MongoDB drivers often utilize connection pooling, which results in a random connection chosen from the pool being used for a request. Inactive connections typically timeout on the Azure Cosmos DB end after four minutes. | You can either retry these failed requests in your application code, change your MongoDB client (driver) settings to teardown inactive TCP connections before the four-minute timeout window, or configure your OS keepalive settings to maintain the TCP connections in an active state. |
+| - | MongoDB client networking issues (such as socket or endOfStream exceptions)| The network request has failed. This is often caused by an inactive TCP connection that the MongoDB client is attempting to use. MongoDB drivers often utilize connection pooling, which results in a random connection chosen from the pool being used for a request. Inactive connections typically time out on the Azure Cosmos DB end after four minutes. | You can either retry these failed requests in your application code, change your MongoDB client (driver) settings to tear down inactive TCP connections before the four-minute timeout window, or configure your OS keepalive settings to maintain the TCP connections in an active state.<br><br>To avoid connectivity messages, you may want to change the connection string to set maxConnectionIdleTime to 1-2 minutes.<br>- Mongo driver: configure *maxIdleTimeMS=120000* <br>- Node.js: configure *socketTimeoutMS=120000*, *autoReconnect* = true, *keepAlive* = true, *keepAliveInitialDelay* = 3 minutes
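For instance, with the MongoDB Node.js driver the settings called out above might be applied like this (a sketch; exact option names and defaults vary by driver version):

```javascript
const { MongoClient } = require('mongodb');

// Close pooled connections after ~2 minutes of inactivity so the client never
// reuses a connection that the service side has already dropped.
const client = new MongoClient('<your-cosmos-db-connection-string>', {
  maxIdleTimeMS: 120000,
  socketTimeoutMS: 120000,
  keepAlive: true,            // availability depends on driver version
});

// client.connect() can then be called as usual before issuing requests.
```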
## Next steps
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/prevent-rate-limiting-errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/prevent-rate-limiting-errors.md
@@ -16,7 +16,6 @@ Azure Cosmos DB API for MongoDB operations may fail with rate-limiting (16500/42
You can enable the Server Side Retry (SSR) feature and let the server retry these operations automatically. The requests are retried after a short delay for all collections in your account. This feature is a convenient alternative to handling rate-limiting errors in the client application. - ## Use the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com/).
@@ -31,6 +30,31 @@ You can enable the Server Side Retry (SSR) feature and let the server retry thes
:::image type="content" source="./media/prevent-rate-limiting-errors/portal-features-server-side-retry.png" alt-text="Screenshot of the server side retry feature for Azure Cosmos DB API for MongoDB":::
+## Use the Azure CLI
+
+1. Check if SSR is already enabled for your account:
+```bash
+az cosmosdb show --name accountname --resource-group resourcegroupname
+```
+2. **Enable** SSR for all collections in your database account. It may take up to 15 minutes for this change to take effect.
+```bash
+az cosmosdb update --name accountname --resource-group resourcegroupname --capabilities EnableMongo DisableRateLimitingResponses
+```
+The following command will **disable** SSR for all collections in your database account by removing the `DisableRateLimitingResponses` capability. It may take up to 15 minutes for this change to take effect.
+```bash
+az cosmosdb update --name accountname --resource-group resourcegroupname --capabilities EnableMongo
+```
+
+## Frequently Asked Questions
+* How are requests retried?
+ * Requests are retried continuously (over and over again) until a 60-second timeout is reached. If the timeout is reached, the client will receive an [ExceededTimeLimit exception (50)](mongodb-troubleshoot.md).
+* How can I monitor the effects of SSR?
+ * You can view the rate limiting errors (429s) that are retried server-side in the Cosmos DB Metrics pane. Keep in mind that these errors don't go to the client when SSR is enabled, since they are handled and retried server-side.
+ * You can search for log entries containing "estimatedDelayFromRateLimitingInMilliseconds" in your [Cosmos DB resource logs](cosmosdb-monitor-resource-logs.md).
+* Will SSR affect my consistency level?
+ * SSR does not affect a request's consistency. Requests are retried server-side if they are rate limited (with a 429 error).
+* Does SSR affect any type of error that my client might receive?
+ * No, SSR only affects rate limiting errors (429s) by retrying them server-side. This feature prevents you from having to handle rate-limiting errors in the client application. All [other errors](mongodb-troubleshoot.md) will go to the client.
## Next steps
data-factory https://docs.microsoft.com/en-us/azure/data-factory/concepts-data-flow-performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-performance.md
@@ -6,7 +6,7 @@
Previously updated : 12/18/2020 Last updated : 01/29/2021 # Mapping data flows performance and tuning guide
@@ -240,11 +240,11 @@ If a column corresponds to how you wish to output the data, you can select **As
When writing to CosmosDB, altering throughput and batch size during data flow execution can improve performance. These changes only take effect during the data flow activity run and will return to the original collection settings after conclusion.
-**Batch size:** Calculate the rough row size of your data, and make sure that row size * batch size is less than two million. If it is, increase the batch size to get better throughput
+**Batch size:** Usually, starting with the default batch size is sufficient. To further tune this value, calculate the rough object size of your data, and make sure that object size * batch size is less than 2MB. If it is, you can increase the batch size to get better throughput.
**Throughput:** Set a higher throughput setting here to allow documents to write faster to CosmosDB. Keep in mind the higher RU costs based upon a high throughput setting.
-**Write Throughput Budget:** Use a value which is smaller than total RUs per minute. If you have a data flow with a high number of Spark partitions, setting a budget throughput will allow more balance across those partitions.
+**Write throughput budget:** Use a value which is smaller than total RUs per minute. If you have a data flow with a high number of Spark partitions, setting a budget throughput will allow more balance across those partitions.
## Optimizing transformations
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db.md
@@ -10,7 +10,7 @@
Previously updated : 12/11/2019 Last updated : 01/29/2021 # Copy and transform data in Azure Cosmos DB (SQL API) by using Azure Data Factory
@@ -155,6 +155,7 @@ The following properties are supported in the Copy Activity **source** section:
| query |Specify the Azure Cosmos DB query to read data.<br/><br/>Example:<br /> `SELECT c.BusinessEntityID, c.Name.First AS FirstName, c.Name.Middle AS MiddleName, c.Name.Last AS LastName, c.Suffix, c.EmailPromotion FROM c WHERE c.ModifiedDate > \"2009-01-01T00:00:00\"` |No <br/><br/>If not specified, this SQL statement is executed: `select <columns defined in structure> from mycollection` | | preferredRegions | The preferred list of regions to connect to when retrieving data from Cosmos DB. | No | | pageSize | The number of documents per page of the query result. Default is "-1", which means the service-side dynamic page size is used, up to 1000. | No |
+| detectDatetime | Whether to detect datetime from the string values in the documents. Allowed values are: **true** (default), **false**. | No |
If you use "DocumentDbCollectionSource" type source, it is still supported as-is for backward compatibility. You are suggested to use the new model going forward which provide richer capabilities to copy data from Cosmos DB.
@@ -290,13 +291,16 @@ Settings specific to Azure Cosmos DB are available in the **Settings** tab of th
* None: No action will be done to the collection. * Recreate: The collection will get dropped and recreated
-**Batch size**: Controls how many rows are being written in each bucket. Larger batch sizes improve compression and memory optimization, but risk out of memory exceptions when caching data.
+**Batch size**: An integer that represents how many objects are written to the Cosmos DB collection in each batch. Usually, starting with the default batch size is sufficient. To further tune this value, note:
-**Partition Key:** Enter a string that represents the partition key for your collection. Example: ```/movies/title```
+- Cosmos DB limits the size of a single request to 2 MB. The formula is "Request Size = Single Document Size * Batch Size". If you hit an error saying "Request size is too large", reduce the batch size value.
+- The larger the batch size, the better the throughput ADF can achieve; just make sure you allocate enough RUs to support your workload.
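As a quick, hypothetical illustration of this formula: if an average document is roughly 4 KB, a batch size of 500 puts each request at about 2 MB, right at the limit; dropping the batch size to 400 (about 1.6 MB per request) leaves headroom, while 1 KB documents could safely use a batch size of up to roughly 2,000.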
+
+**Partition key:** Enter a string that represents the partition key for your collection. Example: ```/movies/title```
**Throughput:** Set an optional value for the number of RUs you'd like to apply to your CosmosDB collection for each execution of this data flow. Minimum is 400.
-**Write throughput budget:** An integer that represents the number of RUs you want to allocate to the bulk ingestion Spark job. This number is out of the total throughput allocated to the collection.
+**Write throughput budget:** An integer that represents the RUs you want to allocate for this Data Flow write operation, out of the total throughput allocated to the collection.
## Lookup activity properties
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-database-for-postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-postgresql.md
@@ -10,7 +10,7 @@
Previously updated : 12/08/2020 Last updated : 02/01/2021 # Copy and transform data in Azure Database for PostgreSQL by using Azure Data Factory
@@ -170,8 +170,9 @@ To copy data to Azure Database for PostgreSQL, the following properties are supp
|: |: |: | | type | The type property of the copy activity sink must be set to **AzurePostgreSQLSink**. | Yes | | preCopyScript | Specify a SQL query for the copy activity to execute before you write data into Azure Database for PostgreSQL in each run. You can use this property to clean up the preloaded data. | No |
-| writeBatchSize | Inserts data into the Azure Database for PostgreSQL table when the buffer size reaches writeBatchSize.<br>Allowed value is an integer that represents the number of rows. | No (default is 10,000) |
-| writeBatchTimeout | Wait time for the batch insert operation to complete before it times out.<br>Allowed values are Timespan strings. An example is 00:30:00 (30 minutes). | No (default is 00:00:30) |
+| writeMethod | The method used to write data into Azure Database for PostgreSQL.<br>Allowed values are: **CopyCommand** (preview, which is more performant), **BulkInsert** (default). | No |
+| writeBatchSize | The number of rows loaded into Azure Database for PostgreSQL per batch.<br>Allowed value is an integer that represents the number of rows. | No (default is 1,000,000) |
+| writeBatchTimeout | Wait time for the batch insert operation to complete before it times out.<br>Allowed values are Timespan strings. An example is 00:30:00 (30 minutes). | No (default is 00:30:00) |
**Example**:
@@ -199,7 +200,8 @@ To copy data to Azure Database for PostgreSQL, the following properties are supp
"sink": { "type": "AzurePostgreSQLSink", "preCopyScript": "<custom SQL script>",
- "writeBatchSize": 100000
+ "writeMethod": "CopyCommand",
+ "writeBatchSize": 1000000
} } }
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-data-warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-data-warehouse.md
@@ -10,7 +10,7 @@
Previously updated : 01/22/2021 Last updated : 01/29/2021 # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory
@@ -371,10 +371,10 @@ Azure Data Factory supports three ways to load data into Azure Synapse Analytics
![Azure Synapse Analytics sink copy options](./media/connector-azure-sql-data-warehouse/sql-dw-sink-copy-options.png) - [Use PolyBase](#use-polybase-to-load-data-into-azure-synapse-analytics)-- [Use COPY statement (preview)](#use-copy-statement)
+- [Use COPY statement](#use-copy-statement)
- Use bulk insert
-The fastest and most scalable way to load data is through [PolyBase](/sql/relational-databases/polybase/polybase-guide) or the [COPY statement](/sql/t-sql/statements/copy-into-transact-sql) (preview).
+The fastest and most scalable way to load data is through [PolyBase](/sql/relational-databases/polybase/polybase-guide) or the [COPY statement](/sql/t-sql/statements/copy-into-transact-sql).
To copy data to Azure Synapse Analytics, set the sink type in Copy Activity to **SqlDWSink**. The following properties are supported in the Copy Activity **sink** section:
@@ -383,7 +383,7 @@ To copy data to Azure Synapse Analytics, set the sink type in Copy Activity to *
| type | The **type** property of the Copy Activity sink must be set to **SqlDWSink**. | Yes | | allowPolyBase | Indicates whether to use PolyBase to load data into Azure Synapse Analytics. `allowCopyCommand` and `allowPolyBase` cannot be both true. <br/><br/>See [Use PolyBase to load data into Azure Synapse Analytics](#use-polybase-to-load-data-into-azure-synapse-analytics) section for constraints and details.<br/><br/>Allowed values are **True** and **False** (default). | No.<br/>Apply when using PolyBase. | | polyBaseSettings | A group of properties that can be specified when the `allowPolybase` property is set to **true**. | No.<br/>Apply when using PolyBase. |
-| allowCopyCommand | Indicates whether to use [COPY statement](/sql/t-sql/statements/copy-into-transact-sql) (preview) to load data into Azure Synapse Analytics. `allowCopyCommand` and `allowPolyBase` cannot be both true. <br/><br/>See [Use COPY statement to load data into Azure Synapse Analytics](#use-copy-statement) section for constraints and details.<br/><br/>Allowed values are **True** and **False** (default). | No.<br>Apply when using COPY. |
+| allowCopyCommand | Indicates whether to use [COPY statement](/sql/t-sql/statements/copy-into-transact-sql) to load data into Azure Synapse Analytics. `allowCopyCommand` and `allowPolyBase` cannot be both true. <br/><br/>See [Use COPY statement to load data into Azure Synapse Analytics](#use-copy-statement) section for constraints and details.<br/><br/>Allowed values are **True** and **False** (default). | No.<br>Apply when using COPY. |
| copyCommandSettings | A group of properties that can be specified when the `allowCopyCommand` property is set to TRUE. | No.<br/>Apply when using COPY. | | writeBatchSize | Number of rows to insert into the SQL table **per batch**.<br/><br/>The allowed value is **integer** (number of rows). By default, Data Factory dynamically determines the appropriate batch size based on the row size. | No.<br/>Apply when using bulk insert. | | writeBatchTimeout | Wait time for the batch insert operation to finish before it times out.<br/><br/>The allowed value is **timespan**. Example: "00:30:00" (30 minutes). | No.<br/>Apply when using bulk insert. |
@@ -669,9 +669,9 @@ All columns of the table must be specified in the INSERT BULK statement.
The NULL value is a special form of the default value. If the column is nullable, the input data in the blob for that column might be empty. But it can't be missing from the input dataset. PolyBase inserts NULL for missing values in Azure Synapse Analytics.
-## <a name="use-copy-statement"></a> Use COPY statement to load data into Azure Synapse Analytics (preview)
+## <a name="use-copy-statement"></a> Use COPY statement to load data into Azure Synapse Analytics
-Azure Synapse Analytics [COPY statement](/sql/t-sql/statements/copy-into-transact-sql) (preview) directly supports loading data from **Azure Blob and Azure Data Lake Storage Gen2**. If your source data meets the criteria described in this section, you can choose to use COPY statement in ADF to load data into Azure Synapse Analytics. Azure Data Factory checks the settings and fails the copy activity run if the criteria is not met.
+Azure Synapse Analytics [COPY statement](/sql/t-sql/statements/copy-into-transact-sql) directly supports loading data from **Azure Blob and Azure Data Lake Storage Gen2**. If your source data meets the criteria described in this section, you can choose to use the COPY statement in ADF to load data into Azure Synapse Analytics. Azure Data Factory checks the settings and fails the copy activity run if the criteria aren't met.
>[!NOTE] >Currently Data Factory only supports copying from the COPY statement compatible sources mentioned below.
@@ -792,9 +792,9 @@ SQL Example: ```Select * from MyTable where customerId > 1000 and customerId < 2
- Read Uncommitted - Repeatable Read - Serializable
-*- None (ignore isolation level)
+- None (ignore isolation level)
-![Isolation Level](media/data-flow/isolationlevel.png "Isolation Level")
+![Isolation Level](media/data-flow/isolationlevel.png)
### Sink transformation
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-dynamics-crm-office-365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
@@ -11,7 +11,7 @@
Previously updated : 09/23/2020 Last updated : 02/01/2021 # Copy data from and to Dynamics 365 (Common Data Service) or Dynamics CRM by using Azure Data Factory
@@ -51,11 +51,11 @@ For Dynamics 365 specifically, the following application types are supported:
This connector doesn't support other application types like Finance, Operations, and Talent.
-This Dynamics connector is built on top of [Dynamics XRM tooling](/dynamics365/customer-engagement/developer/build-windows-client-applications-xrm-tools).
- >[!TIP] >To copy data from Dynamics 365 Finance and Operations, you can use the [Dynamics AX connector](connector-dynamics-ax.md).
+This Dynamics connector is built on top of [Dynamics XRM tooling](/dynamics365/customer-engagement/developer/build-windows-client-applications-xrm-tools).
+ ## Prerequisites To use this connector with Azure AD service-principal authentication, you must set up server-to-server (S2S) authentication in Common Data Service or Dynamics. Refer to [this article](/powerapps/developer/common-data-service/build-web-applications-server-server-s2s-authentication) for detailed steps.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/connector-rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-rest.md
@@ -20,7 +20,7 @@ This article outlines how to use Copy Activity in Azure Data Factory to copy dat
The difference among this REST connector, [HTTP connector](connector-http.md), and the [Web table connector](connector-web-table.md) are: - **REST connector** specifically supports copying data from RESTful APIs; -- **HTTP connector** is generic to retrieve data from any HTTP endpoint, for example, to download file. Before this REST connector becomes available, you may happen to use HTTP connector to copy data from RESTful API, which is supported but less functional comparing to REST connector.
+- **HTTP connector** is generic and retrieves data from any HTTP endpoint, for example, to download a file. Before this REST connector became available, you may have used the HTTP connector to copy data from RESTful APIs, which is supported but less functional compared with the REST connector.
- **Web table connector** extracts table content from an HTML webpage. ## Supported capabilities
data-factory https://docs.microsoft.com/en-us/azure/data-factory/load-azure-sql-data-warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-sql-data-warehouse.md
@@ -10,7 +10,7 @@
Previously updated : 12/09/2020 Last updated : 01/29/2021 # Load data into Azure Synapse Analytics by using Azure Data Factory
@@ -121,7 +121,7 @@ This article shows you how to use the Data Factory Copy Data tool to _load data
b. In the **New Linked Service** page, select your storage account, and select **Create** to deploy the linked service.
- c. In the **Advanced settings** section, deselect the **Use type default** option, then select **Next**.
+ c. Deselect the **Use type default** option, and then select **Next**.
![Configure PolyBase](./media/load-azure-sql-data-warehouse/configure-polybase.png)
data-factory https://docs.microsoft.com/en-us/azure/data-factory/tutorial-bulk-copy-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-bulk-copy-portal.md
@@ -10,7 +10,7 @@
Previously updated : 01/12/2021 Last updated : 01/29/2021 # Copy multiple tables in bulk by using Azure Data Factory in the Azure portal
@@ -257,7 +257,6 @@ The **IterateAndCopySQLTables** pipeline takes a list of tables as a parameter.
![Copy sink settings](./media/tutorial-bulk-copy-portal/copy-sink-settings.png) - 1. Switch to the **Settings** tab, and do the following steps: 1. Select the checkbox for **Enable Staging**.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-2101-release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-2101-release-notes.md
@@ -6,13 +6,13 @@
Previously updated : 01/29/2021 Last updated : 02/01/2021 # Azure Stack Edge Pro with FPGA 2101 release notes
-The following release notes identify the critical open issues and the resolved issues for the 2101 release of Azure Stack Edge Pro with with a built-in Field Programmable Gate Array (FPGA).
+The following release notes identify the critical open issues and the resolved issues for the 2101 release of Azure Stack Edge Pro with a built-in Field Programmable Gate Array (FPGA).
The release notes are continuously updated. As critical issues that require a workaround are discovered, they are added. Before you deploy your Azure Stack Edge device, carefully review the information in the release notes.
@@ -33,7 +33,6 @@ This release also contains the following updates:
- All cumulative Windows updates and .NET framework updates released through October 2020. - The baseboard management controller (BMC) firmware version is upgraded from 3.32.32.32 to 3.36.36.36 during factory install to address incompatibility with newer Dell power supply units.-- The static IP address for Azure Data Box Gateway is retained across software updates. - This release supports IoT Edge 1.0.9.3 on Azure Stack Edge devices. ## Known issues in this release
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-deploy-prep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-deploy-prep.md
@@ -48,7 +48,7 @@ Following are the configuration prerequisites for your Azure Stack Edge resource
Before you begin, make sure that:
-* Your Microsoft Azure subscription is enabled for an Azure Stack Edge resource. Make sure that you used a supported subscription such as [Microsoft Enterprise Agreement (EA)](https://azure.microsoft.com/overview/sales-number/), [Cloud Solution Provider (CSP)](/partner-center/azure-plan-lp), or [Microsoft Azure Sponsorship](https://azure.microsoft.com/offers/ms-azr-0036p/). Pay-as-you-go subscriptions are not supported.
+* Your Microsoft Azure subscription is enabled for an Azure Stack Edge resource. Make sure that you used a supported subscription such as [Microsoft Enterprise Agreement (EA)](https://azure.microsoft.com/overview/sales-number/), [Cloud Solution Provider (CSP)](/partner-center/azure-plan-lp), or [Microsoft Azure Sponsorship](https://azure.microsoft.com/offers/ms-azr-0036p/). Pay-as-you-go subscriptions aren't supported.
* You have owner or contributor access at resource group level for the Azure Stack Edge / Data Box Gateway, IoT Hub, and Azure Storage resources.
@@ -86,6 +86,8 @@ Before you begin, make sure that:
If you have an existing Azure Stack Edge resource to manage your physical device, skip this step and go to [Get the activation key](#get-the-activation-key).
+### [Portal](#tab/azure-portal)
+ To create an Azure Stack Edge resource, take the following steps in the Azure portal. 1. Use your Microsoft Azure credentials to sign in to
@@ -106,7 +108,7 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
|Setting |Value | |||
- |Subscription |This is automatically populated based on the earlier selection. Subscription is linked to your billing account. |
+ |Subscription |This value is automatically populated based on the earlier selection. Subscription is linked to your billing account. |
|Resource group |Select an existing group or create a new group.<br>Learn more about [Azure Resource Groups](../azure-resource-manager/management/overview.md). | 4. Enter or select the following **Instance details**.
@@ -137,14 +139,58 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
![Go to the Azure Stack Edge resource](media/azure-stack-edge-deploy-prep/data-box-edge-resource3.png)
-After the order is placed, Microsoft reviews the order and reaches out to you (via email) with shipping details.
+After the order is placed, Microsoft reviews the order and contacts you (via email) with shipping details.
![Notification for review of the Azure Stack Edge Pro order](media/azure-stack-edge-deploy-prep/data-box-edge-resource4.png) - > [!NOTE] > If you want to create multiple orders at one time or clone an existing order, you can use the [scripts in Azure Samples](https://github.com/Azure-Samples/azure-stack-edge-order). For more information, see the README file.
+### [Azure CLI](#tab/azure-cli)
+
+If necessary, prepare your environment for Azure CLI.
+
+[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
+
+To create an Azure Stack Edge resource, run the following commands in Azure CLI.
+
+1. Create a resource group by using the [az group create](/cli/azure/group#az_group_create) command, or use an existing resource group:
+
+ ```azurecli
+ az group create --name myasepgpu1 --location eastus
+ ```
+
+1. To create a device, use the [az databoxedge device create](/cli/azure/databoxedge/device#az_databoxedge_device_create) command:
+
+ ```azurecli
+ az databoxedge device create --resource-group myasepgpu1 \
+ --device-name myasegpu1 --location eastus --sku Edge
+ ```
+
+ Choose a location closest to the geographical region where you want to deploy your device. The region stores only the metadata for device management. The actual data can be stored in any storage account.
+
+ For a list of all the regions where the Azure Stack Edge resource is available, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=databox&regions=all). If you're using Azure Government, all the government regions are available, as shown on the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/) page.
+
+1. To create an order, run the [az databoxedge order create](/cli/azure/databoxedge/order#az_databoxedge_order_create) command:
+
+ ```azurecli
+ az databoxedge order create --resource-group myasepgpu1 \
+ --device-name myasegpu1 --company-name "Contoso" \
+ --address-line1 "1020 Enterprise Way" --city "Sunnyvale" \
+ --state "California" --country "United States" --postal-code 94089 \
+ --contact-person "Gus Poland" --email-list gus@contoso.com --phone 4085555555
+ ```
+
+The resource creation takes a few minutes. Run the [az databoxedge order show](/cli/azure/databoxedge/order#az_databoxedge_order_show) command to see the order:
+
+```azurecli
+az databoxedge order show --resource-group myasepgpu1 --device-name myasegpu1
+```
+
+After you place an order, Microsoft reviews the order and contacts you by email with shipping details.
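If you also want to check the device resource itself from the CLI, a minimal sketch (reusing the placeholder resource group and device names from the commands above) is:

```azurecli
az databoxedge device show --resource-group myasepgpu1 --device-name myasegpu1
```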
+++ ## Get the activation key After the Azure Stack Edge resource is up and running, you'll need to get the activation key. This key is used to activate and connect your Azure Stack Edge Pro device with the resource. You can get this key now while you are in the Azure portal.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-prep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-prep.md
@@ -61,7 +61,7 @@ Following are the configuration prerequisites for your Azure Stack Edge resource
Before you begin, make sure that: -- Your Microsoft Azure subscription is enabled for an Azure Stack Edge resource. Make sure that you used a supported subscription such as [Microsoft Enterprise Agreement (EA)](https://azure.microsoft.com/overview/sales-number/), [Cloud Solution Provider (CSP)](/partner-center/azure-plan-lp), or [Microsoft Azure Sponsorship](https://azure.microsoft.com/offers/ms-azr-0036p/). Pay-as-you-go subscriptions are not supported. To identify the type of Azure subscription you have, see [What is an Azure offer?](../cost-management-billing/manage/switch-azure-offer.md#what-is-an-azure-offer).
+- Your Microsoft Azure subscription is enabled for an Azure Stack Edge resource. Make sure that you used a supported subscription such as [Microsoft Enterprise Agreement (EA)](https://azure.microsoft.com/overview/sales-number/), [Cloud Solution Provider (CSP)](/partner-center/azure-plan-lp), or [Microsoft Azure Sponsorship](https://azure.microsoft.com/offers/ms-azr-0036p/). Pay-as-you-go subscriptions aren't supported. To identify the type of Azure subscription you have, see [What is an Azure offer?](../cost-management-billing/manage/switch-azure-offer.md#what-is-an-azure-offer).
- You have owner or contributor access at resource group level for the Azure Stack Edge Pro/Data Box Gateway, IoT Hub, and Azure Storage resources. - To create any Azure Stack Edge / Data Box Gateway resource, you should have permissions as a contributor (or higher) scoped at resource group level.
@@ -98,6 +98,8 @@ Before you begin, make sure that:
If you have an existing Azure Stack Edge resource to manage your physical device, skip this step and go to [Get the activation key](#get-the-activation-key).
+### [Portal](#tab/azure-portal)
+ To create an Azure Stack Edge resource, take the following steps in the Azure portal. 1. Use your Microsoft Azure credentials to sign in to the Azure portal at this URL: [https://portal.azure.com](https://portal.azure.com).
@@ -138,7 +140,7 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
![Create a resource 6](media/azure-stack-edge-gpu-deploy-prep/create-resource-6.png)
- - If this is the new device that you are ordering, enter the contact name, company, address to ship the device, and contact information.
+ - If this is the new device that you're ordering, enter the contact name, company, address to ship the device, and contact information.
![Create a resource 7](media/azure-stack-edge-gpu-deploy-prep/create-resource-7.png)
@@ -158,7 +160,7 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
![Go to the Azure Stack Edge Pro resource](media/azure-stack-edge-gpu-deploy-prep/azure-stack-edge-resource-1.png)
-After the order is placed, Microsoft reviews the order and reaches out to you (via email) with shipping details.
+After the order is placed, Microsoft reviews the order and contacts you (via email) with shipping details.
<!--![Notification for review of the Azure Stack Edge Pro order](media/azure-stack-edge-gpu-deploy-prep/azure-stack-edge-resource-2.png)-->
@@ -167,6 +169,51 @@ After the order is placed, Microsoft reviews the order and reaches out to you (v
If you run into any issues during the order process, see [Troubleshoot order issues](azure-stack-edge-troubleshoot-ordering.md).
+### [Azure CLI](#tab/azure-cli)
+
+If necessary, prepare your environment for Azure CLI.
+
+[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
+
+To create an Azure Stack Edge resource, run the following commands in Azure CLI.
+
+1. Create a resource group by using the [az group create](/cli/azure/group#az_group_create) command, or use an existing resource group:
+
+ ```azurecli
+ az group create --name myasepgpu1 --location eastus
+ ```
+
+1. To create a device, use the [az databoxedge device create](/cli/azure/databoxedge/device#az_databoxedge_device_create) command:
+
+ ```azurecli
+ az databoxedge device create --resource-group myasepgpu1 \
+ --device-name myasegpu1 --location eastus --sku EdgeP_Base
+ ```
+
+ Choose a location closest to the geographical region where you want to deploy your device. The region stores only the metadata for device management. The actual data can be stored in any storage account.
+
+ For a list of all the regions where the Azure Stack Edge resource is available, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=databox&regions=all). If you're using Azure Government, all the government regions are available, as shown on the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/) page.
+
+1. To create an order, run the [az databoxedge order create](/cli/azure/databoxedge/order#az_databoxedge_order_create) command:
+
+ ```azurecli
+ az databoxedge order create --resource-group myasepgpu1 \
+ --device-name myasegpu1 --company-name "Contoso" \
+ --address-line1 "1020 Enterprise Way" --city "Sunnyvale" \
+ --state "California" --country "United States" --postal-code 94089 \
+ --contact-person "Gus Poland" --email-list gus@contoso.com --phone 4085555555
+ ```
+
+The resource creation takes a few minutes. Run the [az databoxedge order show](/cli/azure/databoxedge/order#az_databoxedge_order_show) command to see the order:
+
+```azurecli
+az databoxedge order show --resource-group myasepgpu1 --device-name myasegpu1
+```
+
+After you place an order, Microsoft reviews the order and contacts you by email with shipping details.
+++ ## Get the activation key After the Azure Stack Edge resource is up and running, you'll need to get the activation key. This key is used to activate and connect your Azure Stack Edge Pro device with the resource. You can get this key now while you are in the Azure portal.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-mini-r-deploy-prep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-deploy-prep.md
@@ -80,6 +80,8 @@ Before you begin, make sure that:
If you have an existing Azure Stack Edge resource to manage your physical device, skip this step and go to [Get the activation key](#get-the-activation-key).
+### [Portal](#tab/azure-portal)
+ To create an Azure Stack Edge resource, take the following steps in the Azure portal. 1. Use your Microsoft Azure credentials to sign in to the Azure portal at this URL: [https://portal.azure.com](https://portal.azure.com).
@@ -148,6 +150,51 @@ After the order is placed, Microsoft reviews the order and reaches out to you (v
If you run into any issues during the order process, see [Troubleshoot order issues](azure-stack-edge-troubleshoot-ordering.md).
+### [Azure CLI](#tab/azure-cli)
+
+If necessary, prepare your environment for Azure CLI.
+
+[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
+
+To create an Azure Stack Edge resource, run the following commands in Azure CLI.
+
+1. Create a resource group by using the [az group create](/cli/azure/group#az_group_create) command, or use an existing resource group:
+
+ ```azurecli
+ az group create --name myasepgpu1 --location eastus
+ ```
+
+1. To create a device, use the [az databoxedge device create](/cli/azure/databoxedge/device#az_databoxedge_device_create) command:
+
+ ```azurecli
+ az databoxedge device create --resource-group myasepgpu1 \
+ --device-name myasegpu1 --location eastus --sku EdgeMR_Mini
+ ```
+
+ Choose a location closest to the geographical region where you want to deploy your device. The region stores only the metadata for device management. The actual data can be stored in any storage account.
+
+ For a list of all the regions where the Azure Stack Edge resource is available, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=databox&regions=all). If you're using Azure Government, all the government regions are available, as shown on the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/) page.
+
+1. To create an order, run the [az databoxedge order create](/cli/azure/databoxedge/order#az_databoxedge_order_create) command:
+
+ ```azurecli
+ az databoxedge order create --resource-group myasepgpu1 \
+ --device-name myasegpu1 --company-name "Contoso" \
+ --address-line1 "1020 Enterprise Way" --city "Sunnyvale" \
+ --state "California" --country "United States" --postal-code 94089 \
+ --contact-person "Gus Poland" --email-list gus@contoso.com --phone 4085555555
+ ```
+
+The resource creation takes a few minutes. Run the [az databoxedge order show](/cli/azure/databoxedge/order#az_databoxedge_order_show) command to see the order:
+
+```azurecli
+az databoxedge order show --resource-group myasepgpu1 --device-name myasegpu1
+```
+
+After you place an order, Microsoft reviews the order and contacts you by email with shipping details.
+++ ## Get the activation key After the Azure Stack Edge resource is up and running, you'll need to get the activation key. This key is used to activate and connect your Azure Stack Edge Mini R device with the resource. You can get this key now while you are in the Azure portal.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-pro-r-deploy-prep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-deploy-prep.md
@@ -82,6 +82,8 @@ Before you begin, make sure that:
If you have an existing Azure Stack Edge resource to manage your physical device, skip this step and go to [Get the activation key](#get-the-activation-key).
+### [Portal](#tab/azure-portal)
+ To create an Azure Stack Edge resource, take the following steps in the Azure portal. 1. Use your Microsoft Azure credentials to sign in to the Azure portal at this URL: [https://portal.azure.com](https://portal.azure.com).
@@ -123,7 +125,7 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
![Create a resource 5](media/azure-stack-edge-pro-r-deploy-prep/create-resource-5.png)
- - If this is the new device that you are ordering, enter the contact name, company, address to ship the device, and contact information.
+ - If this device is the new device that you're ordering, enter the contact name, company, address to ship the device, and contact information.
![Create a resource 6](media/azure-stack-edge-pro-r-deploy-prep/create-resource-6.png)
@@ -143,7 +145,7 @@ To create an Azure Stack Edge resource, take the following steps in the Azure po
![Go to the Azure Stack Edge Pro resource](media/azure-stack-edge-pro-r-deploy-prep/azure-stack-edge-resource-1.png)
-After the order is placed, Microsoft reviews the order and reaches out to you (via email) with shipping details.
+After the order is placed, Microsoft reviews the order and contacts you (via email) with shipping details.
<!--![Notification for review of the Azure Stack Edge Pro order](media/azure-stack-edge-gpu-deploy-prep/azure-stack-edge-resource-2.png) - If this is restored, it must go above "After the resource is successfully created." The azure-stack-edge-resource-1.png would seem superfluous in that case.-->
@@ -152,6 +154,51 @@ After the order is placed, Microsoft reviews the order and reaches out to you (v
If you run into any issues during the order process, see [Troubleshoot order issues](azure-stack-edge-troubleshoot-ordering.md).
+### [Azure CLI](#tab/azure-cli)
+
+If necessary, prepare your environment for Azure CLI.
+
+[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
+
+To create an Azure Stack Edge resource, run the following commands in Azure CLI.
+
+1. Create a resource group by using the [az group create](/cli/azure/group#az_group_create) command, or use an existing resource group:
+
+ ```azurecli
+ az group create --name myasepgpu1 --location eastus
+ ```
+
+1. To create a device, use the [az databoxedge device create](/cli/azure/databoxedge/device#az_databoxedge_device_create) command:
+
+ ```azurecli
+ az databoxedge device create --resource-group myasepgpu1 \
+ --device-name myasegpu1 --location eastus --sku EdgePR_Base
+ ```
+
+ Choose a location closest to the geographical region where you want to deploy your device. The region stores only the metadata for device management. The actual data can be stored in any storage account.
+
+ For a list of all the regions where the Azure Stack Edge resource is available, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=databox&regions=all). If you're using Azure Government, all the government regions are available, as shown on the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/) page.
+
+1. To create an order, run the [az databoxedge order create](/cli/azure/databoxedge/order#az_databoxedge_order_create) command:
+
+ ```azurecli
+ az databoxedge order create --resource-group myasepgpu1 \
+ --device-name myasegpu1 --company-name "Contoso" \
+ --address-line1 "1020 Enterprise Way" --city "Sunnyvale" \
+ --state "California" --country "United States" --postal-code 94089 \
+ --contact-person "Gus Poland" --email-list gus@contoso.com --phone 4085555555
+ ```
+
+The resource creation takes a few minutes. Run the [az databoxedge order show](/cli/azure/databoxedge/order#az_databoxedge_order_show) command to see the order:
+
+```azurecli
+az databoxedge order show --resource-group myasepgpu1 --device-name myasegpu1
+```
+
+After you place an order, Microsoft reviews the order and contacts you by email with shipping details.
+++ ## Get the activation key After the Azure Stack Edge resource is up and running, you'll need to get the activation key. This key is used to activate and connect your Azure Stack Edge Pro device with the resource. You can get this key now while you are in the Azure portal.
defender-for-iot https://docs.microsoft.com/en-us/azure/defender-for-iot/references-defender-for-iot-glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-defender-for-iot-glossary.md
@@ -13,24 +13,26 @@
This glossary provides a brief description of important terms and concepts for the Azure Defender for IoT platform. Select the **Learn more** links to go to related terms in the glossary. This will help you more quickly learn and use product tools.
+<a name="glossary-a"></a>
+ ## A | Term | Description | Learn more | |--|--|--| | **Access group** | Support user access requirements for large organizations by creating access group rules.<br /><br />Rules let you control view and configuration access to the Defender for IoT on-premises management console for specific user roles at relevant business units, regions, sites, and zones.<br /><br />For example, allow security analysts from an Active Directory group to access West European automotive data but prevent access to data in Africa. | **[On-premises management console](#o)** <br /><br />**[Business unit](#b)** |
-| **Access tokens** | Generate access tokens to access the Defender for IoT REST API. | **[API](#a)** |
-| **Acknowledge alert event** | Instruct Defender for IoT to hide the alert once for the detected event. The alert will be triggered again if the event is detected again. | **[Alert](#a)<br /><br />[Learn alert event](#l)<br /><br />[Mute alert event](#m)** |
+| **Access tokens** | Generate access tokens to access the Defender for IoT REST API. | **[API](#glossary-a)** |
+| **Acknowledge alert event** | Instruct Defender for IoT to hide the alert once for the detected event. The alert will be triggered again if the event is detected again. | **[Alert](#glossary-a)<br /><br />[Learn alert event](#l)<br /><br />[Mute alert event](#m)** |
| **Alert** | A message that a Defender for IoT engine triggers regarding deviations from authorized network behavior, network anomalies, or suspicious network activity and traffic. | **[Forwarding rule](#f)<br /><br />[Exclusion rule](#e)<br /><br />[System notifications](#s)** |
-| **Alert comment** | Comments that security analysts and administrators make in alert messages. For example, an alert comment might give instructions about mitigation actions to take, or names of individuals to contact regarding the event.<br /><br />Users who are reviewing alerts can choose the comment or comments that best reflect the event status, or steps taken to investigate the alert. | **[Alert](#a)** |
+| **Alert comment** | Comments that security analysts and administrators make in alert messages. For example, an alert comment might give instructions about mitigation actions to take, or names of individuals to contact regarding the event.<br /><br />Users who are reviewing alerts can choose the comment or comments that best reflect the event status, or steps taken to investigate the alert. | **[Alert](#glossary-a)** |
| **Anomaly engine** | A Defender for IoT engine that detects unusual machine-to-machine (M2M) communication and behavior. For example, the engine might detect excessive SMB sign in attempts. Anomaly alerts are triggered when these events are detected. | **[Defender for IoT engines](#d)** |
-| **API** | Allows external systems to access data discovered by Defender for IoT and perform actions by using the external REST API over SSL connections. | **[Access tokens](#a)** |
+| **API** | Allows external systems to access data discovered by Defender for IoT and perform actions by using the external REST API over SSL connections. | **[Access tokens](#glossary-a)** |
| **Attack vector report** | A real-time graphical representation of vulnerability chains of exploitable endpoints.<br /><br />Reports let you evaluate the effect of mitigation activities in the attack sequence to determine. For example, you can evaluate whether a system upgrade disrupts the attacker's path by breaking the attack chain, or whether an alternate attack path remains. This prioritizes remediation and mitigation activities. | **[Risk assessment report](#r)** | ## B | Term | Description | Learn more | |--|--|--|
-| **Business unit** | A logical organization of your business according to specific industries.<br /><br />For example, a global company that contains glass factories and plastic factories can be managed as two different business units. You can control access of Defender for IoT users to specific business units. | **[On-premises management console](#o)<br /><br />[Access group](#o)<br /><br />[Site](#s)<br /><br />[Zone](#z)** |
+| **Business unit** | A logical organization of your business according to specific industries.<br /><br />For example, a global company that contains glass factories and plastic factories can be managed as two different business units. You can control access of Defender for IoT users to specific business units. | **[On-premises management console](#o)<br /><br />[Access group](#glossary-a)<br /><br />[Site](#s)<br /><br />[Zone](#z)** |
| **Baseline** | Approved network traffic, protocols, commands, and devices. Defender for IoT identifies deviations from the network baseline. View approved baseline traffic by generating data-mining reports. | **[Data mining](#d)<br /><br />[Learning mode](#l)** | ## C
@@ -45,7 +47,7 @@ This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more | |--|--|--| | **Data mining** | Generate comprehensive and granular reports about your network devices:<br /><br />- **SOC incident response**: Reports in real time to help deal with immediate incident response. For example, a report can list devices that might need patching.<br /><br />- **Forensics**: Reports based on historical data for investigative reports.<br /><br />- **IT network integrity**: Reports that help improve overall network security. For example, a report can list devices with weak authentication credentials.<br /><br />- **visibility**: Reports that cover all query items to view all baseline parameters of your network.<br /><br />Save data-mining reports for read-only users to view. | **[Baseline](#b)<br /><br />[Reports](#r)** |
-| **Defender for IoT engines** | The self-learning analytics engines in Defender for IoT eliminate the need for updating signatures or defining rules. The engines use ICS-specific behavioral analytics and data science to continuously analyze OT network traffic for anomalies, malware, operational problems, protocol violations, and deviations from baseline network activity.<br /><br />When an engine detects a deviation, an alert is triggered. Alerts can be viewed and managed from the **Alerts** screen or from a SIEM. | **[Alert](#a)** |
+| **Defender for IoT engines** | The self-learning analytics engines in Defender for IoT eliminate the need for updating signatures or defining rules. The engines use ICS-specific behavioral analytics and data science to continuously analyze OT network traffic for anomalies, malware, operational problems, protocol violations, and deviations from baseline network activity.<br /><br />When an engine detects a deviation, an alert is triggered. Alerts can be viewed and managed from the **Alerts** screen or from a SIEM. | **[Alert](#glossary-a)** |
| **Defender for IoT platform** | The Defender for IoT solution installed on Defender for IoT sensors and the on-premises management console. | **[Sensor](#s)<br /><br />[On-premises management console](#o)** | | **Device map** | A graphical representation of network devices that Defender for IoT detects. It shows the connections between devices and information about each device. Use the map to:<br /><br />- Retrieve and control critical device information.<br /><br />- Analyze network slices.<br /><br />- Export device details and summaries. | **[Purdue layer group](#p)** | | **Device inventory - sensor** | The device inventory displays an extensive range of device attributes detected by Defender for IoT. Options are available to:<br /><br />- Filter displayed information.<br /><br />- Export this information to a CSV file.<br /><br />- Import Windows registry details. | **[Group](#g)** <br /><br />**[Device inventory- on-premises management console](#d)** |
@@ -58,13 +60,13 @@ This glossary provides a brief description of important terms and concepts for t
|--|--|--| | **Enterprise view** | A global map that presents business units, sites, and zones where Defender for IoT sensors are installed. View geographical locations of malicious alerts, operational alerts, and more. | **[Business unit](#b)<br /><br />[Site](#s)<br /><br />[Zone](#z)** | | **Event timeline** | A timeline of activity detected on your network, including:<br /><br />- Alerts triggered.<br /><br />- Network events (informational).<br /><br />- User operations such as sign in, user deletion, and user creation, and alert management operations such as mute, learn, and acknowledge. Available in the sensor consoles. | - |
-| **Exclusion rule** | Instruct Defender for IoT to ignore alert triggers based on time period, device address, and alert name, or by a specific sensor.<br /><br />For example, if you know that all the OT devices monitored by a specific sensor will go through a maintenance procedure between 6:30 and 10:15 in the morning, you can set an exclusion rule that states that this sensor should send no alerts in the predefined period. | **[Alert](#a)<br /><br />[Mute alert event](#m)** |
+| **Exclusion rule** | Instruct Defender for IoT to ignore alert triggers based on time period, device address, and alert name, or by a specific sensor.<br /><br />For example, if you know that all the OT devices monitored by a specific sensor will go through a maintenance procedure between 6:30 and 10:15 in the morning, you can set an exclusion rule that states that this sensor should send no alerts in the predefined period. | **[Alert](#glossary-a)<br /><br />[Mute alert event](#m)** |
## F | Term | Description | Learn more | |--|--|--|
-| **Forwarding rule** | Forwarding rules instruct Defender for IoT to send alert information to partner vendors or systems.<br /><br />For example, send alert information to a Splunk server or a syslog server. | **[Alert](#a)** |
+| **Forwarding rule** | Forwarding rules instruct Defender for IoT to send alert information to partner vendors or systems.<br /><br />For example, send alert information to a Splunk server or a syslog server. | **[Alert](#glossary-a)** |
## G
@@ -90,7 +92,7 @@ This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more | |--|--|--|
-| **Learn alert event** | Instruct Defender for IoT to authorize the traffic detected in an alert event. | **[Alert](#a)<br /><br />[Acknowledge alert event](#a)<br /><br />[Mute alert event](#m)** |
+| **Learn alert event** | Instruct Defender for IoT to authorize the traffic detected in an alert event. | **[Alert](#glossary-a)<br /><br />[Acknowledge alert event](#glossary-a)<br /><br />[Mute alert event](#m)** |
| **Learning mode** | The mode used when Defender for IoT learns your network activity. This activity becomes your network baseline. Defender for IoT remains in the mode for a predefined period after installation. Activity that deviates from learned activity after this period will trigger Defender for IoT alerts. | **[Smart IT learning](#s)<br /><br />[Baseline](#b)** | | **Localization** | Localize text for alerts, events, and protocol parameters for dissector plug-ins developed by Horizon. | **[Horizon open development environment](#h)** |
@@ -98,20 +100,20 @@ This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more | |--|--|--|
-| **Mute Alert Event** | Instruct Defender for IoT to continuously ignore activity with identical devices and comparable traffic. | **[Alert](#a)<br /><br />[Exclusion rule](#e)<br /><br />[Acknowledge alert event](#a)<br /><br />[Learn alert event](#l)** |
+| **Mute Alert Event** | Instruct Defender for IoT to continuously ignore activity with identical devices and comparable traffic. | **[Alert](#glossary-a)<br /><br />[Exclusion rule](#e)<br /><br />[Acknowledge alert event](#glossary-a)<br /><br />[Learn alert event](#l)** |
## N | Term | Description | Learn more | |--|--|--|
-| **Notifications** | Information about network changes or unresolved device properties. Options are available to update device and network information with new data detected. Responding to notifications enriches the device inventory, map, and various reports. Available on sensor consoles. | **[Alert](#a)<br /><br />[System notifications](#s)** |
+| **Notifications** | Information about network changes or unresolved device properties. Options are available to update device and network information with new data detected. Responding to notifications enriches the device inventory, map, and various reports. Available on sensor consoles. | **[Alert](#glossary-a)<br /><br />[System notifications](#s)** |
## O | Term | Description | Learn more | |--|--|--| | **On-premises management console** | The on-premises management console provides a centralized view and management of devices and threats that Defenders for IoT sensor deployments detect in your organization. | **[Defender for IoT platform](#d)<br /><br />[Sensor](#s)** |
-| **Operational alert** | Alerts that deal with operational network issues, such as a device that's suspected to be disconnected from the network. | **[Alert](#a)<br /><br />[Security alert](#s)** |
+| **Operational alert** | Alerts that deal with operational network issues, such as a device that's suspected to be disconnected from the network. | **[Alert](#glossary-a)<br /><br />[Security alert](#s)** |
## P
@@ -124,7 +126,7 @@ This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more | |--|--|--|
-| **Region** | A logical division of a global organization into geographical regions. Examples are North America, Western Europe, and Eastern Europe.<br /><br />North America might have factories from various business units. | **[Access group](#a)<br /><br />[Business unit](#b)<br /><br />[On-premises management console](#o)<br /><br />[Site](#s)<br /><br />[Zone](#z)** |
+| **Region** | A logical division of a global organization into geographical regions. Examples are North America, Western Europe, and Eastern Europe.<br /><br />North America might have factories from various business units. | **[Access group](#glossary-a)<br /><br />[Business unit](#b)<br /><br />[On-premises management console](#o)<br /><br />[Site](#s)<br /><br />[Zone](#z)** |
| **Reports** | Reports reflect information generated by data-mining query results. This includes default data-mining results, which are available in the **Reports** view. Admins and security analysts can also generate custom data-mining queries and save them as reports. These reports will also be available for read-only users. | **[Data mining](#d)** | | **Risk assessment report** | Risk assessment reporting lets you generate a security score for each network device, along with an overall network security score. The overall score represents the percentage of 100 percent security. The report provides mitigation recommendations that will help you improve your current security score. | - |
@@ -132,14 +134,14 @@ This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more | |--|--|--|
-| **Security alert** | Alerts that deal with security issues, such as excessive SMB sign in attempts or malware detections. | **[Alert](#a)<br /><br />[Operational alert](#o)** |
+| **Security alert** | Alerts that deal with security issues, such as excessive SMB sign in attempts or malware detections. | **[Alert](#glossary-a)<br /><br />[Operational alert](#o)** |
| **Selective probing** | Defender for IoT passively inspects IT and OT traffic and detects relevant information on devices, their attributes, their behavior, and more. In certain cases, some information might not be visible in passive network analyses.<br /><br />When this happens, you can use the safe, granular probing tools in Defender for IoT to discover important information on previously unreachable devices. | - | | **Sensor** | The physical or virtual machine on which the Defender for IoT platform is installed. | **[On-premises management console](#o)** | | **Site** | A location that houses a factory or other entity. The site should contain a zone or several zones in which a sensor is installed. | **[Zone](#z)** | | **Site Management** | The on-premises management console option that lets you manage enterprise sensors. | - | | **Smart IT learning** | After the learning period is complete and the learning mode is disabled, Defender for IoT might detect an unusually high level of baseline changes that are the result of normal IT activity, such as DNS and HTTP requests. This traffic might trigger unnecessary policy violation alerts and system notifications. To reduce these alerts and notifications, you can enable Smart IT Learning. | **[Learning mode](#l)<br /><br />[Baseline](#b)** | | **Subnets** | To enable focus on the OT devices, IT devices are automatically aggregated by subnet in the device map. Each subnet is presented as a single entity on the map, including an interactive collapsing or expanding capability to focus in to an IT subnet and back. | **[Device map](#d)** |
-| **System notifications** | Notifications from the on-premises management console regrading:<br /><br />- Sensor connection status.<br /><br />- Remote backup failures. | **[Notifications](#n)<br /><br />[Alert](#a)** |
+| **System notifications** | Notifications from the on-premises management console regarding:<br /><br />- Sensor connection status.<br /><br />- Remote backup failures. | **[Notifications](#n)<br /><br />[Alert](#glossary-a)** |
## Z
dms https://docs.microsoft.com/en-us/azure/dms/known-issues-azure-postgresql-online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/known-issues-azure-postgresql-online.md
@@ -115,4 +115,5 @@ When you try to perform an online migration from AWS RDS PostgreSQL to Azure Dat
- Updating a Primary Key segment is ignored. In such cases, applying such an update will be identified by the target as an update that didn't update any rows and will result in a record written to the exceptions table. - Migration of multiple tables with the same name but a different case (e.g. table1, TABLE1, and Table1) may cause unpredictable behavior and is therefore not supported. - Change processing of [CREATE | ALTER | DROP | TRUNCATE] table DDLs isn't supported.-- In Azure Database Migration Service, a single migration activity can only accommodate up to four databases.\ No newline at end of file
+- In Azure Database Migration Service, a single migration activity can only accommodate up to four databases.
+- Migration of the pg_largeobject table is not supported.
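If you want to check up front whether your source database actually uses large objects, a quick, hedged sketch (the connection details are placeholders) is to count the rows in PostgreSQL's large-object catalog:

```bash
# A non-zero count means the database stores large objects, which this migration path doesn't support.
psql "host=<source-host> dbname=<database> user=<user>" -c "SELECT count(*) FROM pg_largeobject_metadata;"
```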
dms https://docs.microsoft.com/en-us/azure/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md
@@ -49,7 +49,7 @@ To complete this tutorial, you need to:
* [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/quickstart-create-hyperscale-portal.md) as the target database server to migrate data into. * Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model. For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-* Ensure that the Network Security Group (NSG) rules for your virtual network don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Ensure that the Network Security Group (NSG) rules for your virtual network don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
* Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) for the Azure Database for PostgreSQL source to allow Azure Database Migration Service to access the source databases. Provide the subnet range of the virtual network used for Azure Database Migration Service. * Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) for the Azure Database for PostgreSQL target to allow Azure Database Migration Service to access the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service. * [Enable logical replication](../postgresql/concepts-logical.md) in the Azure DB for PostgreSQL source.
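To illustrate the NSG prerequisite above, here is a minimal Azure CLI sketch that opens the required outbound ports on an existing network security group; the resource group, NSG name, rule name, and priority are placeholders rather than values from the tutorial.

```bash
# Allow the outbound ports required by Azure Database Migration Service (all names are placeholders).
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyDmsNsg \
  --name AllowDmsOutbound \
  --direction Outbound \
  --access Allow \
  --protocol '*' \
  --priority 200 \
  --destination-port-ranges 443 53 9354 445 12000
```

Adjust the priority so it doesn't collide with existing rules, and tighten the source and destination prefixes to match your network design.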
dms https://docs.microsoft.com/en-us/azure/dms/tutorial-mysql-azure-mysql-online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mysql-azure-mysql-online.md
@@ -44,7 +44,7 @@ In this tutorial, you learn how to:
To complete this tutorial, you need to:
-* Download and install [MySQL community edition](https://dev.mysql.com/downloads/mysql/) 5.6 or 5.7. The on-premises MySQL version must match with Azure Database for MySQL version. For example, MySQL 5.6 can only migrate to Azure Database for MySQL 5.6 and not upgraded to 5.7. Migrations to or from MySQL 8.0 are not supported. Migrations to or from MySQL 8.0 are not supported.
+* Download and install [MySQL community edition](https://dev.mysql.com/downloads/mysql/) 5.6 or 5.7. The on-premises MySQL version must match the Azure Database for MySQL version. For example, MySQL 5.6 can only migrate to Azure Database for MySQL 5.6 and can't be upgraded to 5.7. Migrations to or from MySQL 8.0 are not supported.
* [Create an instance in Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md). Refer to the article [Use MySQL Workbench to connect and query data](../mysql/connect-workbench.md) for details about how to connect and create a database using the Azure portal. * Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
@@ -57,8 +57,8 @@ To complete this tutorial, you need to:
> > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
-* Ensure that your virtual network Network Security Group rules don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
+* Ensure that your virtual network Network Security Group rules don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Configure your [Windows Firewall for database engine access](https://docs.microsoft.com/azure/mysql/concepts-firewall-rules).
* Open your Windows firewall to allow Azure Database Migration Service to access the source MySQL Server, which by default is TCP port 3306. * When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration. * Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) for Azure Database for MySQL to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
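As a hedged sketch of the last prerequisite, the server-level firewall rule for the Azure Database for MySQL target can also be created with the Azure CLI; the server name and the DMS subnet range shown below are placeholders.

```bash
# Allow the DMS subnet range to reach the Azure Database for MySQL target (values are placeholders).
az mysql server firewall-rule create \
  --resource-group MyResourceGroup \
  --server-name mytarget-mysql \
  --name AllowDmsSubnet \
  --start-ip-address 10.0.1.0 \
  --end-ip-address 10.0.1.255
```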
@@ -275,4 +275,4 @@ After the initial Full load is completed, the databases are marked **Ready to cu
* For information about known issues and limitations when performing online migrations to Azure Database for MySQL, see the article [Known issues and workarounds with Azure Database for MySQL online migrations](known-issues-azure-mysql-online.md). * For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md).
-* For information about Azure Database for MySQL, see the article [What is Azure Database for MySQL?](../mysql/overview.md).
\ No newline at end of file
+* For information about Azure Database for MySQL, see the article [What is Azure Database for MySQL?](../mysql/overview.md).
dms https://docs.microsoft.com/en-us/azure/dms/tutorial-postgresql-azure-postgresql-online-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
@@ -54,11 +54,11 @@ To complete this tutorial, you need to:
> > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
-* Ensure that the Network Security Group (NSG) rules for your virtual network don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
+* Ensure that the Network Security Group (NSG) rules for your virtual network don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Configure your [Windows Firewall for database engine access](https://docs.microsoft.com/azure/postgresql/concepts-firewall-rules).
* Open your Windows firewall to allow Azure Database Migration Service to access the source PostgreSQL Server, which by default is TCP port 5432. * When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow the Azure Database Migration Service to access the source database(s) for migration.
-* Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) for Azure Database for PostgreSQL to allow Azure Database Migration Service to access the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
+* Create a server-level [firewall rule](https://docs.microsoft.com/azure/postgresql/concepts-firewall-rules) for Azure Database for PostgreSQL to allow Azure Database Migration Service to access the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
* Enable logical replication in the postgresql.conf file, and set the following parameters: * wal_level = **logical**
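A minimal sketch of that change on the source server follows, assuming a Debian/Ubuntu file layout; the `max_replication_slots` and `max_wal_senders` values are typical assumptions rather than values taken from this excerpt.

```bash
# Append logical replication settings to postgresql.conf and restart (path and numeric values are assumptions).
cat <<'EOF' | sudo tee -a /etc/postgresql/11/main/postgresql.conf
wal_level = logical
max_replication_slots = 5     # assumed: at least one slot per migration task
max_wal_senders = 10          # assumed: concurrent WAL sender processes
EOF
sudo systemctl restart postgresql
```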
@@ -282,4 +282,4 @@ After the initial Full load is completed, the databases are marked **Ready to cu
* For information about known issues and limitations when performing online migrations to Azure Database for PostgreSQL, see the article [Known issues and workarounds with Azure Database for PostgreSQL online migrations](known-issues-azure-postgresql-online.md). * For information about the Azure Database Migration Service, see the article [What is the Azure Database Migration Service?](./dms-overview.md).
-* For information about Azure Database for PostgreSQL, see the article [What is Azure Database for PostgreSQL?](../postgresql/overview.md).
\ No newline at end of file
+* For information about Azure Database for PostgreSQL, see the article [What is Azure Database for PostgreSQL?](../postgresql/overview.md).
dms https://docs.microsoft.com/en-us/azure/dms/tutorial-postgresql-azure-postgresql-online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online.md
@@ -53,11 +53,11 @@ To complete this tutorial, you need to:
> > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
-* Ensure that your virtual network Network Security Group (NSG) rules don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
+* Ensure that your virtual network Network Security Group (NSG) rules don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Configure your [Windows Firewall for database engine access](https://docs.microsoft.com/azure/postgresql/concepts-firewall-rules).
* Open your Windows firewall to allow Azure Database Migration Service to access the source PostgreSQL Server, which by default is TCP port 5432. * When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow the Azure Database Migration Service to access the source database(s) for migration.
-* Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) for Azure Database for PostgreSQL to allow Azure Database Migration Service to access the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
+* Create a server-level [firewall rule](https://docs.microsoft.com/azure/postgresql/concepts-firewall-rules) for Azure Database for PostgreSQL to allow Azure Database Migration Service to access the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
* There are two methods for invoking the CLI: * In the upper-right corner of the Azure portal, select the Cloud Shell button:
@@ -522,4 +522,4 @@ If you need to cancel or delete any DMS task, project, or service, perform the c
* For information about known issues and limitations when performing online migrations to Azure Database for PostgreSQL, see the article [Known issues and workarounds with Azure Database for PostgreSQL online migrations](known-issues-azure-postgresql-online.md). * For information about the Azure Database Migration Service, see the article [What is the Azure Database Migration Service?](./dms-overview.md).
-* For information about Azure Database for PostgreSQL, see the article [What is Azure Database for PostgreSQL?](../postgresql/overview.md).
\ No newline at end of file
+* For information about Azure Database for PostgreSQL, see the article [What is Azure Database for PostgreSQL?](../postgresql/overview.md).
dms https://docs.microsoft.com/en-us/azure/dms/tutorial-rds-mysql-server-azure-db-for-mysql-online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-rds-mysql-server-azure-db-for-mysql-online.md
@@ -52,8 +52,8 @@ To complete this tutorial, you need to:
* Download and install the [MySQL **Employees** sample database](https://dev.mysql.com/doc/employee/en/employees-installation.html). * Create an instance of [Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md). * Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-* Ensure that your virtual network Network Security Group rules don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, and 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access) (or your Linux firewall) to allow for database engine access. For MySQL server, allow port 3306 for connectivity.
+* Ensure that your virtual network Network Security Group rules don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, and 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Configure your [Windows Firewall](https://docs.microsoft.com/azure/mysql/concepts-firewall-rules) (or your Linux firewall) to allow for database engine access. For MySQL server, allow port 3306 for connectivity.
> [!NOTE] > Azure Database for MySQL only supports InnoDB tables. To convert MyISAM tables to InnoDB, please see the article [Converting Tables from MyISAM to InnoDB](https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html) .
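For the Linux-firewall case mentioned in the firewall prerequisite above, a minimal sketch might look like the following; which command applies depends on your distribution, and the port assumes MySQL's default of 3306.

```bash
# Allow MySQL's default port through a Linux host firewall (pick the tool your distribution uses).
sudo ufw allow 3306/tcp                                                           # Ubuntu/Debian with ufw
sudo firewall-cmd --permanent --add-port=3306/tcp && sudo firewall-cmd --reload   # RHEL/CentOS with firewalld
```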
@@ -268,4 +268,4 @@ Your online migration of an on-premises instance of MySQL to Azure Database for
* For information about the Azure Database Migration Service, see the article [What is the Azure Database Migration Service?](./dms-overview.md). * For information about Azure Database for MySQL, see the article [What is Azure Database for MySQL?](../mysql/overview.md).
-* For other questions, email the [Ask Azure Database Migrations](mailto:AskAzureDatabaseMigrations@service.microsoft.com) alias.
\ No newline at end of file
+* For other questions, email the [Ask Azure Database Migrations](mailto:AskAzureDatabaseMigrations@service.microsoft.com) alias.
dms https://docs.microsoft.com/en-us/azure/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online.md
@@ -48,11 +48,11 @@ To complete this tutorial, you need to:
* Create an instance of [Azure Database for PostgreSQL](../postgresql/quickstart-create-server-database-portal.md) or [Azure Database for PostgreSQL - Hyperscale (Citus)](../postgresql/quickstart-create-hyperscale-portal.md). Refer to this [section](../postgresql/quickstart-create-server-database-portal.md#connect-to-the-server-with-psql) of the document for detail on how to connect to the PostgreSQL Server using pgAdmin. * Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-* Ensure that your virtual network Network Security Group rules don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, and 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
+* Ensure that your virtual network Network Security Group rules don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, and 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Configure your [Windows Firewall for database engine access](https://docs.microsoft.com/azure/postgresql/concepts-firewall-rules).
* Open your Windows firewall to allow Azure Database Migration Service to access the source PostgreSQL server, which by default is TCP port 5432. * When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow the Azure Database Migration Service to access the source database(s) for migration.
-* Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) for the Azure Database for PostgreSQL server to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
+* Create a server-level [firewall rule](https://docs.microsoft.com/azure/postgresql/concepts-firewall-rules) for the Azure Database for PostgreSQL server to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
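A hedged Azure CLI sketch of that server-level firewall rule follows; the resource group, server name, and DMS subnet range are placeholders.

```bash
# Allow the DMS subnet range to reach the Azure Database for PostgreSQL target (values are placeholders).
az postgres server firewall-rule create \
  --resource-group MyResourceGroup \
  --server-name mytarget-pgserver \
  --name AllowDmsSubnet \
  --start-ip-address 10.0.1.0 \
  --end-ip-address 10.0.1.255
```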
### Set up AWS RDS PostgreSQL for replication
@@ -266,4 +266,4 @@ Your online migration of an on-premises instance of RDS PostgreSQL to Azure Data
* For information about the Azure Database Migration Service, see the article [What is the Azure Database Migration Service?](./dms-overview.md). * For information about Azure Database for PostgreSQL, see the article [What is Azure Database for PostgreSQL?](../postgresql/overview.md).
-* For other questions, email the [Ask Azure Database Migrations](mailto:AskAzureDatabaseMigrations@service.microsoft.com) alias.
\ No newline at end of file
+* For other questions, email the [Ask Azure Database Migrations](mailto:AskAzureDatabaseMigrations@service.microsoft.com) alias.
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-geo-dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-geo-dr.md
@@ -9,7 +9,7 @@ Last updated 06/23/2020
Resilience against disastrous outages of data processing resources is a requirement for many enterprises and in some cases even required by industry regulations.
-Azure Event Hubs already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter and it implements transparent failure detection and failover mechanisms such that the service will continue to operate within the assured service-levels and typically without noticeable interruptions in the event of such failures. If an Event Hubs namespace has been created with the enabled option for [availability zones](../availability-zones/az-overview.md), the risk is outage risk is further spread across three physically separated facilities, and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of the entire facility.
+Azure Event Hubs already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter and it implements transparent failure detection and failover mechanisms such that the service will continue to operate within the assured service-levels and typically without noticeable interruptions in the event of such failures. If an Event Hubs namespace has been created with the enabled option for [availability zones](../availability-zones/az-overview.md), the outage risk is further spread across three physically separated facilities, and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of the entire facility.
The all-active Azure Event Hubs cluster model with availability zone support provides resiliency against grave hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even those measures cannot sufficiently defend against.
@@ -18,7 +18,7 @@ The Event Hubs Geo-disaster recovery feature is designed to make it easier to re
The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (Event Hubs, Consumer Groups and settings) is continuously replicated from a primary namespace to a secondary namespace when paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time. The failover move will re-point the chosen alias name for the namespace to the secondary namespace and then break the pairing. The failover is nearly instantaneous once initiated. > [!IMPORTANT]
-> The feature enables instantaneous continuity of operations with the same configuration, but **does not replicate the event data**. Unless the disaster caused the loss of all zones, the event data is preserved in the primary Event Hub after failover will be recoverable and the historic events can be obtained from there once access is restored. For replicating event data and operating corresponding namespaces in active/active configurations to cope with outages and disasters, don't lean on this Geo-disaster recovery feature set, but follow the [replication guidance](event-hubs-federation-overview.md).
+> The feature enables instantaneous continuity of operations with the same configuration, but **does not replicate the event data**. Unless the disaster caused the loss of all zones, the event data that is preserved in the primary Event Hub after failover will be recoverable and the historic events can be obtained from there once access is restored. For replicating event data and operating corresponding namespaces in active/active configurations to cope with outages and disasters, don't lean on this Geo-disaster recovery feature set, but follow the [replication guidance](event-hubs-federation-overview.md).
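For orientation only, pairing a secondary namespace under an alias and later triggering the once-only failover can be scripted. The sketch below assumes the `az eventhubs georecovery-alias` command group and uses placeholder names and IDs, so verify the subcommand and parameter names against the current CLI reference before use.

```bash
# Pair the primary namespace with a secondary under an alias (names, IDs, and subcommands are assumptions).
az eventhubs georecovery-alias set \
  --resource-group MyResourceGroup \
  --namespace-name primary-ehns \
  --alias my-ehns-alias \
  --partner-namespace "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.EventHub/namespaces/secondary-ehns"

# Initiate the once-only failover from the secondary side when needed.
az eventhubs georecovery-alias fail-over \
  --resource-group MyResourceGroup \
  --namespace-name secondary-ehns \
  --alias my-ehns-alias
```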
## Outages and disasters
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-howto-set-global-reach-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-set-global-reach-cli.md
@@ -54,7 +54,7 @@ When running the command to enable connectivity, note the following requirements
* *peer-circuit* should be the full resource ID. For example:
- > /subscriptions/{your_subscription_id}/resourceGroups/{your_resource_group}/providers/Microsoft.Network/expressRouteCircuits/{your_circuit_name}
+ > /subscriptions/{your_subscription_id}/resourceGroups/{your_resource_group}/providers/Microsoft.Network/expressRouteCircuits/{your_circuit_name}/peerings/AzurePrivatePeering
* *address-prefix* must be a "/29" IPv4 subnet (for example, "10.0.0.0/29"). We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. You can't use addresses in this subnet in your Azure virtual networks or in your on-premises networks.
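Putting those requirements together, an enable-connectivity command might look like the following sketch; all names and IDs are placeholders.

```bash
# Connect circuit 1's private peering to circuit 2; the peer circuit ID must end with /peerings/AzurePrivatePeering.
az network express-route peering connection create \
  --resource-group MyResourceGroup \
  --circuit-name MyCircuit1 \
  --peering-name AzurePrivatePeering \
  --name MyGlobalReachConnection \
  --peer-circuit "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/expressRouteCircuits/<circuit2-name>/peerings/AzurePrivatePeering" \
  --address-prefix "10.0.0.0/29"
```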
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
@@ -263,7 +263,7 @@ If you are remote and don't have fiber connectivity or you want to explore other
| **[CloudXpress](https://www2.telenet.be/fr/business/produits-services/internet/cloudxpress/)** | Equinix | Amsterdam | | **[CMC Telecom](https://cmctelecom.vn/san-pham/value-added-service-and-it/cmc-telecom-cloud-express-en/)** | Equinix | Singapore | | **[Aptum Technologies](https://aptum.com/services/cloud/managed-azure/)**| Equinix | Montreal, Toronto |
-| **[CoreAzure](http://www.coreazure.com/)**| Equinix | London |
+| **[CoreAzure](https://www.coreazure.com/)**| Equinix | London |
| **[Cox Business](https://www.cox.com/business/networking/cloud-connectivity.html)**| Equinix | Dallas, Silicon Valley, Washington DC | | **[Crown Castle](https://fiber.crowncastle.com/solutions/added/cloud-connect)**| Equinix | Atlanta, Chicago, Dallas, Los Angeles, New York, Washington DC | | **[Data Foundry](https://www.datafoundry.com/services/cloud-connect)** | Megaport | Dallas |
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/index.md
@@ -1,7 +1,7 @@
Title: Index of blueprint samples description: Index of compliance and standard samples for deploying environments, policies, and Cloud Adoptions Framework foundations with Azure Blueprints. Previously updated : 01/27/2021 Last updated : 02/01/2021 # Azure Blueprints samples
@@ -23,7 +23,7 @@ quality and ready to deploy today to assist you in meeting your various complian
| [FedRAMP High](./fedramp-h/index.md) | Provides a set of policies to help comply with FedRAMP High. | | [HIPAA HITRUST 9.2](./hipaa-hitrust-9-2.md) | Provides a set of policies to help comply with HIPAA HITRUST. | | [IRS 1075](./irs-1075/index.md) | Provides guardrails for compliance with IRS 1075.|
-| [ISO 27001](./iso27001/index.md) | Provides guardrails for compliance with ISO 27001. |
+| [ISO 27001](./iso-27001-2013.md) | Provides guardrails for compliance with ISO 27001. |
| [ISO 27001 Shared Services](./iso27001-shared/index.md) | Provides a set of compliant infrastructure patterns and policy guard-rails that help towards ISO 27001 attestation. | | [ISO 27001 App Service Environment/SQL Database workload](./iso27001-ase-sql-workload/index.md) | Provides more infrastructure to the [ISO 27001 Shared Services](./iso27001-shared/index.md) blueprint sample. | | [Media](./medi) | Provides a set of policies to help comply with Media MPAA. |
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/iso-27001-2013 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso-27001-2013.md
@@ -0,0 +1,164 @@
+
+ Title: ISO 27001 blueprint sample overview
+description: Overview of the ISO 27001 blueprint sample. This blueprint sample helps customers assess specific ISO 27001 controls.
Last updated : 02/01/2021++
+# ISO 27001 blueprint sample
+
+The ISO 27001 blueprint sample provides governance guard-rails using
+[Azure Policy](../../policy/overview.md) that help you assess specific ISO 27001 controls. This
+blueprint helps customers deploy a core set of policies for any Azure-deployed architecture that
+must implement ISO 27001 controls.
+
+## Control mapping
+
+The [Azure Policy control mapping](../../policy/samples/iso-27001.md) provides details on policy
+definitions included within this blueprint and how these policy definitions map to the **compliance
+domains** and **controls** in ISO 27001. When assigned to an architecture, resources are evaluated
+by Azure Policy for non-compliance with assigned policy definitions. For more information, see
+[Azure Policy](../../policy/overview.md).
+
+## Deploy
+
+To deploy the Azure Blueprints ISO 27001 blueprint sample, the following steps must be taken:
+
+> [!div class="checklist"]
+> - Create a new blueprint from the sample
+> - Mark your copy of the sample as **Published**
+> - Assign your copy of the blueprint to an existing subscription
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
+before you begin.
+
+### Create blueprint from sample
+
+First, implement the blueprint sample by creating a new blueprint in your environment using the
+sample as a starter.
+
+1. Select **All services** in the left pane. Search for and select **Blueprints**.
+
+1. From the **Getting started** page on the left, select the **Create** button under _Create a
+ blueprint_.
+
+1. Find the **ISO 27001** blueprint sample under _Other Samples_ and select **Use this sample**.
+
+1. Enter the _Basics_ of the blueprint sample:
+
+ - **Blueprint name**: Provide a name for your copy of the ISO 27001 blueprint sample.
+ - **Definition location**: Use the ellipsis and select the management group to save your copy of
+ the sample to.
+
+1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
+ page.
+
+1. Review the list of artifacts that make up the blueprint sample. Many of the artifacts have
+ parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
+ blueprint sample.
+
+### Publish the sample copy
+
+Your copy of the blueprint sample has now been created in your environment. It's created in
+**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
+blueprint sample can be customized to your environment and needs, but that modification may move it
+away from alignment with ISO 27001 controls.
+
+1. Select **All services** in the left pane. Search for and select **Blueprints**.
+
+1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
+ blueprint sample and then select it.
+
+1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
+ **Version** for your copy of the blueprint sample. This property is useful if you make a
+ modification later. Provide **Change notes** such as "First version published from the ISO 27001
+ blueprint sample." Then select **Publish** at the bottom of the page.
+
+### Assign the sample copy
+
+Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
+subscription within the management group it was saved to. This step is where parameters are provided
+to make each deployment of the copy of the blueprint sample unique.
+
+1. Select **All services** in the left pane. Search for and select **Blueprints**.
+
+1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
+ blueprint sample and then select it.
+
+1. Select **Assign blueprint** at the top of the blueprint definition page.
+
+1. Provide the parameter values for the blueprint assignment:
+
+ - Basics
+
+ - **Subscriptions**: Select one or more of the subscriptions that are in the management group
+ you saved your copy of the blueprint sample to. If you select more than one subscription, an
+ assignment will be created for each using the parameters entered.
+ - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
+ Change as needed or leave as is.
+ - **Location**: Select a region for the managed identity to be created in. Azure Blueprint uses
+ this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
+ [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
+ - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
+ sample.
+
+ - Lock Assignment
+
+ Select the blueprint lock setting for your environment. For more information, see
+ [blueprints resource locking](../concepts/resource-locking.md).
+
+ - Managed Identity
+
+ Leave the default _system assigned_ managed identity option.
+
+ - Blueprint parameters
+
+ The parameters defined in this section are used by many of the artifacts in the blueprint
+ definition to provide consistency.
+
+ - **Allowed location for resources and resource groups**: Value that indicates the allowed
+ locations for resource groups and resources.
+
+ - Artifact parameters
+
+ The parameters defined in this section apply to the artifact under which it's defined. These
+ parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since they're
+ defined during the assignment of the blueprint. For a full list of artifact parameters and
+ their descriptions, see [Artifact parameters table](#artifact-parameters-table).
+
+1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
+ assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
+ on the status of deployment, open the blueprint assignment.
+
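If you'd rather script the assignment than use the portal, the `az blueprint` CLI extension offers an equivalent path. The sketch below is assumption-heavy: the extension's parameter names, the published-version resource ID format, and the parameter file are illustrative guesses rather than values from this article, so check the extension's reference before relying on it.

```bash
# Assign a published copy of the blueprint to a subscription (all names, IDs, and the parameter file are assumptions).
az extension add --name blueprint

az blueprint assignment create \
  --subscription <subscription-id> \
  --name assign-iso27001-sample \
  --location eastus \
  --identity-type SystemAssigned \
  --blueprint-version "/subscriptions/<subscription-id>/providers/Microsoft.Blueprint/blueprints/<your-copy-name>/versions/<published-version>" \
  --parameters @blueprint-parameters.json
```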
+> [!WARNING]
+> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
+> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
+> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
+> running resources deployed by this blueprint sample.
+
+### Artifact parameters table
+
+The following table provides a list of the blueprint artifact parameters:
+
+|Artifact name|Artifact type|Parameter name|Description|
+|-|-|-|-|
+|\[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Linux VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
+|\[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
+|\[Preview\]: Deploy Log Analytics Agent for Linux VMs|Policy assignment|Log Analytics workspace for Linux VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
+|\[Preview\]: Deploy Log Analytics Agent for Linux VMs|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
+|\[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Windows VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
+|\[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
+|\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
+|\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
+|Allowed storage account SKUs|Policy assignment|List of allowed storage SKUs|The list of SKUs that can be specified for storage accounts.|
+|Allowed virtual machine SKUs|Policy assignment|List of allowed virtual machine SKUs|The list of SKUs that can be specified for virtual machines.|
+|Blueprint initiative for ISO 27001|Policy assignment|List of resource types that should have diagnostic logs enabled|List of resource types to audit if diagnostic log setting is not enabled. Acceptable values can be found at [Azure Monitor diagnostic logs schemas](../../../azure-monitor/platform/resource-logs-schema.md#service-specific-schemas).|
+
+## Next steps
+
+Additional articles about blueprints and how to use them:
+
+- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
+- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
+- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
+- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
+- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/iso27001/control-mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001/control-mapping.md
@@ -1,296 +0,0 @@
- Title: ISO 27001 blueprint sample controls
-description: Control mapping of the ISO 27001 blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 11/05/2020--
-# Control mapping of the ISO 27001 blueprint sample
-
-The following article details how the Azure Blueprints ISO 27001 blueprint sample maps to the ISO
-27001 controls. For more information about the controls, see
-[ISO 27001](https://www.iso.org/isoiec-27001-information-security.html).
-
-The following mappings are to the **ISO 27001:2013** controls. Use the navigation on the right to
-jump directly to a specific control mapping. Many of the mapped controls are implemented with an
-[Azure Policy](../../../policy/overview.md) initiative. To review the complete initiative, open
-**Policy** in the Azure portal and select the **Definitions** page. Then, find and select the
-**\[Preview\] Audit ISO 27001:2013 controls and deploy specific VM Extensions to support audit
-requirements** built-in policy initiative.
-
-> [!IMPORTANT]
-> Each control below is associated with one or more [Azure Policy](../../../policy/overview.md)
-> definitions. These policies may help you
-> [assess compliance](../../../policy/how-to/get-compliance-data.md) with the control; however,
-> there often is not a one-to-one or complete match between a control and one or more policies. As
-> such, **Compliant** in Azure Policy refers only to the policies themselves; this doesn't ensure
-> you're fully compliant with all requirements of a control. In addition, the compliance standard
-> includes controls that aren't addressed by any Azure Policy definitions at this time. Therefore,
-> compliance in Azure Policy is only a partial view of your overall compliance status. The
-> associations between controls and Azure Policy definitions for this compliance blueprint sample
-> may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001/control-mapping.md).
-
-## A.6.1.2 Segregation of duties
-
-Having only one Azure subscription owner doesn't allow for administrative redundancy. Conversely,
-having too many Azure subscription owners can increase the potential for a breach via a compromised
-owner account. This blueprint helps you maintain an appropriate number of Azure subscription owners
-by assigning two [Azure Policy](../../../policy/overview.md) definitions that audit the number of
-owners for Azure subscriptions. Managing subscription owner permissions can help you implement
-appropriate separation of duties.
--- A maximum of 3 owners should be designated for your subscription-- There should be more than one owner assigned to your subscription-
-## A.8.2.1 Classification of information
-
-Azure's
-[SQL Vulnerability Assessment service](../../../../azure-sql/database/sql-vulnerability-assessment.md)
-can help you discover sensitive data stored in your databases and includes recommendations to
-classify that data. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition
-to audit that vulnerabilities identified during SQL Vulnerability Assessment scan are remediated.
--- Vulnerabilities on your SQL databases should be remediated-
-## A.9.1.2 Access to networks and network services
-
-Azure implements
-[Azure role-based access control (Azure RBAC)](../../../../role-based-access-control/overview.md) to
-manage who has access to Azure resources. This blueprint helps you control access to Azure resources
-by assigning seven [Azure Policy](../../../policy/overview.md) definitions. These policies audit use
-of resource types and configurations that may allow more permissive access to resources.
-Understanding resources that are in violation of these policies can help you take corrective actions
-to ensure access Azure resources is restricted to authorized users.
--- Deploy prerequisites to audit Linux VMs that have accounts without passwords-- Deploy prerequisites to audit Linux VMs that allow remote connections from accounts without
- passwords
-- Show audit results from Linux VMs that have accounts without passwords-- Show audit results from Linux VMs that allow remote connections from accounts without passwords-- Storage accounts should be migrated to new Azure Resource Manager resources-- Virtual machines should be migrated to new Azure Resource Manager resources-- Audit VMs that do not use managed disks-
-## A.9.2.3 Management of privileged access rights
-
-This blueprint helps you restrict and control privileged access rights by assigning four [Azure
-Policy](../../../policy/overview.md) definitions to audit external accounts with owner and/or write
-permissions and accounts with owner and/or write permissions that don't have multi-factor
-authentication enabled. Azure role-based access control (Azure RBAC) helps to manage who has access
-to Azure resources. This blueprint also assigns three Azure Policy definitions to audit use of Azure
-Active Directory authentication for SQL Servers and Service Fabric. Using Azure Active Directory
-authentication enables simplified permission management and centralized identity management of
-database users and other Microsoft services. This blueprint also assigns an Azure Policy definition
-to audit the use of custom Azure RBAC rules. Understanding where custom Azure RBAC rules are
-implement can help you verify need and proper implementation, as custom Azure RBAC rules are error
-prone.
--- MFA should be enabled on accounts with owner permissions on your subscription-- MFA should be enabled accounts with write permissions on your subscription-- External accounts with owner permissions should be removed from your subscription-- External accounts with write permissions should be removed from your subscription-- An Azure Active Directory administrator should be provisioned for SQL servers-- Service Fabric clusters should only use Azure Active Directory for client authentication-- Audit usage of custom RBAC rules-
-## A.9.2.4 Management of secret authentication information of users
-
-This blueprint assigns three [Azure Policy](../../../policy/overview.md) definitions to audit
-accounts that don't have multi-factor authentication enabled. Multi-factor authentication helps keep
-accounts secure even if one piece of authentication information is compromised. By monitoring
-accounts without multi-factor authentication enabled, you can identify accounts that may be more
-likely to be compromised. This blueprint also assigns two Azure Policy definitions that audit Linux
-VM password file permissions to alert if they're set incorrectly. This setup enables you to take
-corrective action to ensure authenticators aren't compromised.
--- MFA should be enabled on accounts with owner permissions on your subscription-- MFA should be enabled on accounts with read permissions on your subscription-- MFA should be enabled accounts with write permissions on your subscription-- Show audit results from Linux VMs that do not have the passwd file permissions set to 0644-
-## A.9.2.5 Review of user access rights
-
-[Azure role-based access control (Azure RBAC)](../../../../role-based-access-control/overview.md)
-helps you manage who has access to resources in Azure. Using the Azure portal, you can review who
-has access to Azure resources and their permissions. This blueprint assigns four [Azure
-Policy](../../../policy/overview.md) definitions to audit accounts that should be prioritized for
-review, including depreciated accounts and external accounts with elevated permissions.
--- Deprecated accounts should be removed from your subscription-- Deprecated accounts with owner permissions should be removed from your subscription-- External accounts with owner permissions should be removed from your subscription-- External accounts with write permissions should be removed from your subscription-
-## A.9.2.6 Removal or adjustment of access rights
-
-[Azure role-based access control (Azure RBAC)](../../../../role-based-access-control/overview.md)
-helps you manage who has access to resources in Azure. Using [Azure Active
-Directory](../../../../active-directory/fundamentals/active-directory-whatis.md) and Azure RBAC, you
-can update user roles to reflect organizational changes. When needed, accounts can be blocked from
-signing in (or removed), which immediately removes access rights to Azure resources. This blueprint
-assigns two [Azure Policy](../../../policy/overview.md) definitions to audit depreciated account
-that should be considered for removal.
--- Deprecated accounts should be removed from your subscription-- Deprecated accounts with owner permissions should be removed from your subscription-
-## A.9.4.2 Secure log-on procedures
-
-This blueprint assigns three Azure Policy definitions to audit accounts that don't have multi-factor
-authentication enabled. Azure AD Multi-Factor Authentication provides additional security by requiring
-a second form of authentication and delivers strong authentication. By monitoring accounts without
-multi-factor authentication enabled, you can identify accounts that may be more likely to be
-compromised.
--- MFA should be enabled on accounts with owner permissions on your subscription-- MFA should be enabled on accounts with read permissions on your subscription-- MFA should be enabled accounts with write permissions on your subscription-
-## A.9.4.3 Password management system
-
-This blueprint helps you enforce strong passwords by assigning 10 [Azure
-Policy](../../../policy/overview.md) definitions that audit Windows VMs that don't enforce minimum
-strength and other password requirements. Awareness of VMs in violation of the password strength
-policy helps you take corrective actions to ensure passwords for all VM user accounts are compliant
-with policy.
--- Show audit results from Windows VMs that do not have the password complexity setting enabled-- Show audit results from Windows VMs that do not have a maximum password age of 70 days-- Show audit results from Windows VMs that do not have a minimum password age of 1 day-- Show audit results from Windows VMs that do not restrict the minimum password length to 14
- characters
-- Show audit results from Windows VMs that allow re-use of the previous 24 passwords-
-## A.10.1.1 Policy on the use of cryptographic controls
-
-This blueprint helps you enforce your policy on the use of cryptograph controls by assigning 13
-[Azure Policy](../../../policy/overview.md) definitions that enforce specific cryptograph controls
-and audit use of weak cryptographic settings. Understanding where your Azure resources may have
-non-optimal cryptographic configurations can help you take corrective actions to ensure resources
-are configured in accordance with your information security policy. Specifically, the policies
-assigned by this blueprint require encryption for blob storage accounts and data lake storage
-accounts; require transparent data encryption on SQL databases; audit missing encryption on storage
-accounts, SQL databases, virtual machine disks, and automation account variables; audit insecure
-connections to storage accounts, Function Apps, Web App, API Apps, and Redis Cache; audit weak
-virtual machine password encryption; and audit unencrypted Service Fabric communication.
--- Function App should only be accessible over HTTPS-- Web Application should only be accessible over HTTPS-- API App should only be accessible over HTTPS-- Show audit results from Windows VMs that do not store passwords using reversible encryption-- Disk encryption should be applied on virtual machines-- Automation account variables should be encrypted-- Only secure connections to your Azure Cache for Redis should be enabled-- Secure transfer to storage accounts should be enabled-- Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign-- Transparent Data Encryption on SQL databases should be enabled-
-## A.12.4.1 Event logging
-
-This blueprint helps you ensure system events are logged by assigning seven [Azure
-Policy](../../../policy/overview.md) definitions that audit log settings on Azure resources.
-Diagnostic logs provide insight into operations that were performed within Azure resources.
--- Audit Dependency agent deployment - VM Image (OS) unlisted-- Audit Dependency agent deployment in virtual machine scale sets - VM Image (OS) unlisted-- [Preview]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted-- Audit diagnostic setting-- Auditing on SQL server should be enabled-
-## A.12.4.3 Administrator and operator logs
-
-This blueprint helps you ensure system events are logged by assigning seven Azure Policy definitions
-that audit log settings on Azure resources. Diagnostic logs provide insight into operations that
-were performed within Azure resources.
--- Audit Dependency agent deployment - VM Image (OS) unlisted-- Audit Dependency agent deployment in virtual machine scale sets - VM Image (OS) unlisted-- [Preview]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted-- Audit diagnostic setting-- Auditing on SQL server should be enabled-
-## A.12.4.4 Clock synchronization
-
-This blueprint helps you ensure system events are logged by assigning seven Azure Policy definitions
-that audit log settings on Azure resources. Azure logs rely on synchronized internal clocks to
-create a time-correlated record of events across resources.
--- Audit Dependency agent deployment - VM Image (OS) unlisted-- Audit Dependency agent deployment in virtual machine scale sets - VM Image (OS) unlisted-- [Preview]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted-- Audit diagnostic setting-- Auditing on SQL server should be enabled-
-## A.12.5.1 Installation of software on operational systems
-
-Adaptive application control is solution from Azure Security Center that helps you control which
-applications can run on your VMs located in Azure. This blueprint assigns an Azure Policy definition
-that monitors changes to the set of allowed applications. This capability helps you control
-installation of software and applications on Azure VMs.
--- Adaptive application controls for defining safe applications should be enabled on your machines-
-## A.12.6.1 Management of technical vulnerabilities
-
-This blueprint helps you manage information system vulnerabilities by assigning five [Azure
-Policy](../../../policy/overview.md) definitions that monitor missing system updates, operating
-system vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure Security
-Center. Azure Security Center provides reporting capabilities that enable you to have real-time
-insight into the security state of deployed Azure resources.
--- Monitor missing Endpoint Protection in Azure Security Center-- System updates should be installed on your machines-- Vulnerabilities in security configuration on your machines should be remediated-- Vulnerabilities on your SQL databases should be remediated-- Vulnerabilities should be remediated by a Vulnerability Assessment solution-
-## A.12.6.2 Restrictions on software installation
-
-Adaptive application control is solution from Azure Security Center that helps you control which
-applications can run on your VMs located in Azure. This blueprint assigns an Azure Policy definition
-that monitors changes to the set of allowed applications. Restrictions on software installation can
-help you reduce the likelihood of introduction of software vulnerabilities.
--- Adaptive application controls for defining safe applications should be enabled on your machines-
-## A.13.1.1 Network controls
-
-This blueprint helps you manage and control networks by assigning an [Azure
-Policy](../../../policy/overview.md) definition that monitors network security groups with
-permissive rules. Rules that are too permissive may allow unintended network access and should be
-reviewed. This blueprint also assigns three Azure Policy definitions that monitor unprotected
-endpoints, applications, and storage accounts. Endpoints and applications that aren't protected by a
-firewall, and storage accounts with unrestricted access can allow unintended access to information
-contained within the information system.
--- Access through Internet facing endpoint should be restricted-- Storage accounts should restrict network access-
-## A.13.2.1 Information transfer policies and procedures
-
-The blueprint helps you ensure information transfer with Azure services is secure by assigning two
-[Azure Policy](../../../policy/overview.md) definitions to audit insecure connections to storage
-accounts and Redis Cache.
--- Only secure connections to your Azure Cache for Redis should be enabled-- Secure transfer to storage accounts should be enabled-
-## Next steps
-
-Now that you've reviewed the control mapping of the ISO 27001 blueprint, visit the
-following articles to learn about the architecture and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [ISO 27001 blueprint - Overview](./index.md)
-> [ISO 27001 blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
--- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance https://docs.microsoft.com/en-us/azure/governance/blueprints/samples/iso27001/index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001/index.md
@@ -1,38 +0,0 @@
- Title: ISO 27001 blueprint sample overview
-description: Overview of the ISO 27001 blueprint sample. This blueprint sample helps customers assess specific ISO 27001 controls.
Previously updated : 11/02/2020--
-# Overview of the ISO 27001 blueprint sample
-
-The ISO 27001 blueprint sample provides governance guard-rails using [Azure Policy](../../../policy/overview.md)
-that help you assess specific ISO 27001 controls. This blueprint helps customers deploy a core set
-of policies for any Azure-deployed architecture that must implement ISO 27001 controls. Two
-additional ISO 27001 blueprint samples are available that can help you deploy a [foundational architecture](../iso27001-shared/index.md)
-and an [ASE/SQL workload](../iso27001-ase-sql-workload/index.md).
-
-## Control mapping
-
-The control mapping section provides details on policies included within this blueprint and how
-these policies address various controls in ISO 27001. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policies. For more
-information, see [Azure Policy](../../../policy/overview.md).
-
-## Next steps
-
-You've reviewed the overview and architecture of the ISO 27001 blueprint sample.
-Next, visit the following articles to learn about the control mapping and how to deploy this
-sample:
-
-> [!div class="nextstepaction"]
-> [ISO 27001 blueprint - Control mapping](./control-mapping.md)
-> [ISO 27001 blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance https://docs.microsoft.com/en-us/azure/governance/policy/how-to/guest-configuration-create-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create-linux.md
@@ -345,6 +345,28 @@ describe file(attr_path) do
end ```
+Add the property **AttributesYmlContent** to your configuration with any string as the value.
+The Guest Configuration agent automatically creates the YAML file
+used by InSpec to store attributes. See the example below.
+
+```powershell
+Configuration AuditFilePathExists
+{
+ Import-DscResource -ModuleName 'GuestConfiguration'
+
+ Node AuditFilePathExists
+ {
+ ChefInSpecResource 'Audit Linux path exists'
+ {
+ Name = 'linux-path'
+ AttributesYmlContent = "fromParameter"
+ }
+ }
+}
+```
+
+Recompile the MOF file using the examples given in this document.
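For example, a minimal recompile sketch, assuming the configuration above is saved in a script file named `AuditFilePathExists.ps1` (the file and output path names are assumptions):

```powershell
# Sketch: recompile the configuration to regenerate the MOF file.
# Assumes the configuration shown above is saved as ./AuditFilePathExists.ps1.
. ./AuditFilePathExists.ps1                # dot-source the configuration definition
AuditFilePathExists -OutputPath ./Config   # compile; writes AuditFilePathExists.mof to ./Config
```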
+ The cmdlets `New-GuestConfigurationPolicy` and `Test-GuestConfigurationPolicyPackage` include a parameter named **Parameter**. This parameter takes a hashtable including all details about each parameter and automatically creates all the required sections of the files used to create each Azure
@@ -358,10 +380,10 @@ $PolicyParameterInfo = @(
@{ Name = 'FilePath' # Policy parameter name (mandatory) DisplayName = 'File path.' # Policy parameter display name (mandatory)
- Description = "File path to be audited." # Policy parameter description (optional)
- ResourceType = "ChefInSpecResource" # Configuration resource type (mandatory)
+ Description = 'File path to be audited.' # Policy parameter description (optional)
+ ResourceType = 'ChefInSpecResource' # Configuration resource type (mandatory)
ResourceId = 'Audit Linux path exists' # Configuration resource property name (mandatory)
- ResourcePropertyName = "AttributesYmlContent" # Configuration resource property name (mandatory)
+ ResourcePropertyName = 'AttributesYmlContent' # Configuration resource property name (mandatory)
DefaultValue = '/tmp' # Policy parameter default value (optional) } )
@@ -374,28 +396,10 @@ New-GuestConfigurationPolicy
-Description 'Audit that a file path exists on a Linux machine.' ` -Path './policies' ` -Parameter $PolicyParameterInfo `
+ -Platform 'Linux' `
-Version 1.0.0 ```
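For reference, a fuller sketch of the same call with the `-Platform` parameter in place. The `-ContentUri` and `-DisplayName` values here are illustrative assumptions, not values taken from this change:

```powershell
# Sketch: create the policy definition files for a Linux audit policy.
# The ContentUri and DisplayName values are placeholders.
New-GuestConfigurationPolicy `
    -ContentUri 'https://<storage-account>.blob.core.windows.net/packages/AuditFilePathExists.zip' `
    -DisplayName 'Audit Linux file path.' `
    -Description 'Audit that a file path exists on a Linux machine.' `
    -Path './policies' `
    -Parameter $PolicyParameterInfo `
    -Platform 'Linux' `
    -Version 1.0.0
```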
-For Linux policies, include the property **AttributesYmlContent** in your configuration and
-overwrite the values as needed. The Guest Configuration agent automatically creates the YAML file
-used by InSpec to store attributes. See the example below.
-
-```powershell
-Configuration AuditFilePathExists
-{
- Import-DscResource -ModuleName 'GuestConfiguration'
-
- Node AuditFilePathExists
- {
- ChefInSpecResource 'Audit Linux path exists'
- {
- Name = 'linux-path'
- AttributesYmlContent = "path: /tmp"
- }
- }
-}
-```
## Policy lifecycle
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/iso-27001 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/iso-27001.md
@@ -22,7 +22,7 @@ Then, find and select the **ISO 27001:2013** Regulatory Compliance built-in
initiative definition. This built-in initiative is deployed as part of the
-[ISO 27001:2013 blueprint sample](../../blueprints/samples/iso27001/index.md).
+[ISO 27001:2013 blueprint sample](../../blueprints/samples/iso-27001-2013.md).
> [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/access-fhir-postman-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/access-fhir-postman-tutorial.md
@@ -8,7 +8,7 @@
Previously updated : 02/07/2019 Last updated : 02/01/2021 # Access Azure API for FHIR with Postman
@@ -18,7 +18,8 @@ A client application would access an FHIR API through a [REST API](https://www.h
## Prerequisites

- A FHIR endpoint in Azure. You can set that up using the managed Azure API for FHIR or the Open Source FHIR server for Azure. Set up the managed Azure API for FHIR using [Azure portal](fhir-paas-portal-quickstart.md), [PowerShell](fhir-paas-powershell-quickstart.md), or [Azure CLI](fhir-paas-cli-quickstart.md).
-- A [client application](register-confidential-azure-ad-client-app.md) you will be using to access the FHIR service
+- A [client application](register-confidential-azure-ad-client-app.md) that you will use to access the FHIR service.
+- Permissions granted to the client application, for example, "FHIR Data Contributor", so that it can access the FHIR service. For more information, see [Configure Azure RBAC for FHIR](https://docs.microsoft.com/azure/healthcare-apis/configure-azure-rbac).
- Postman installed. You can get it from [https://www.getpostman.com](https://www.getpostman.com)

## FHIR server and authentication details
healthcare-apis https://docs.microsoft.com/en-us/azure/healthcare-apis/fhir-features-supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir-features-supported.md
@@ -6,7 +6,7 @@
Previously updated : 1/21/2021 Last updated : 1/30/2021
@@ -36,8 +36,8 @@ Previous versions also currently supported include: `3.0.2`
| create | Yes | Yes | Yes | Support both POST/PUT |
| create (conditional) | Yes | Yes | Yes | Issue [#1382](https://github.com/microsoft/fhir-server/issues/1382) |
| search | Partial | Partial | Partial | See below |
-| chained search | No | Yes | No | |
-| reverse chained search | No | No | No | |
+| chained search | No | Yes | No | |
+| reverse chained search | No | Yes | No | |
| capabilities | Yes | Yes | Yes | |
| batch | Yes | Yes | Yes | |
| transaction | No | Yes | No | |
@@ -67,39 +67,39 @@ All search parameter types are supported.
|`:exact` | Yes | Yes | Yes | |
|`:contains` | Yes | Yes | Yes | |
|`:text` | Yes | Yes | Yes | |
+|`:[type]` (reference) | Yes | Yes | Yes | |
+|`:not` | Yes | Yes | Yes | |
+|`:below` (uri) | Yes | Yes | Yes | |
+|`:above` (uri) | No | No | No | Issue [#158](https://github.com/Microsoft/fhir-server/issues/158) |
|`:in` (token) | No | No | No | |
|`:below` (token) | No | No | No | |
|`:above` (token) | No | No | No | |
|`:not-in` (token) | No | No | No | |
-|`:[type]` (reference) | No | No | No | |
-|`:below` (uri) | Yes | Yes | Yes | |
-|`:not` | No | No | No | |
-|`:above` (uri) | No | No | No | Issue [#158](https://github.com/Microsoft/fhir-server/issues/158) |

| Common search parameter | Supported - PaaS | Supported - OSS (SQL) | Supported - OSS (Cosmos DB) | Comment |
|-|-|-|-|-|
| `_id` | Yes | Yes | Yes | |
| `_lastUpdated` | Yes | Yes | Yes | |
| `_tag` | Yes | Yes | Yes | |
-| `_profile` | Partial | Partial | Partial | Only supported in STU3, no support in R4 |
+| `_list` | Yes | Yes | Yes | |
+| `_type` | Yes | Yes | Yes | Issue [#1562](https://github.com/microsoft/fhir-server/issues/1562) |
| `_security` | Yes | Yes | Yes | |
+| `_profile` | Partial | Partial | Partial | Only supported in STU3, no support in R4 |
| `_text` | No | No | No | |
| `_content` | No | No | No | |
-| `_list` | Yes | Yes | Yes | |
| `_has` | No | No | No | |
-| `_type` | Yes | Yes | Yes | |
| `_query` | No | No | No | |
| `_filter` | No | No | No | |

| Search result parameters | Supported - PaaS | Supported - OSS (SQL) | Supported - OSS (Cosmos DB) | Comment |
|-|-|-|-|-|
-| `_sort` | Partial | Partial | Partial | `_sort=_lastUpdated` is supported |
+| `_elements` | Yes | Yes | Yes | Issue [#1256](https://github.com/microsoft/fhir-server/issues/1256) |
| `_count` | Yes | Yes | Yes | `_count` is limited to 100 characters. If set to higher than 100, only 100 will be returned and a warning will be returned in the bundle. |
| `_include` | Yes | Yes | Yes | Included items are limited to 100. Include on PaaS and OSS on Cosmos DB does not include :iterate support. |
-| `_revinclude` | Yes | Yes | Yes | Included items are limited to 100. Include on PaaS and OSS on Cosmos DB does not include :iterate support.|
+| `_revinclude` | Yes | Yes | Yes | Included items are limited to 100. Include on PaaS and OSS on Cosmos DB does [not include :iterate support](https://github.com/microsoft/fhir-server/issues/1313). Issue [#1319](https://github.com/microsoft/fhir-server/issues/1319)|
| `_summary` | Partial | Partial | Partial | `_summary=count` is supported |
-| `_total` | Partial | Partial | Partial | _total=non and _total=accurate |
-| `_elements` | Yes | Yes | Yes | |
+| `_total` | Partial | Partial | Partial | `_total=none` and `_total=accurate` |
+| `_sort` | Partial | Partial | Partial | `_sort=_lastUpdated` is supported |
| `_contained` | No | No | No | |
| `containedType` | No | No | No | |
| `_score` | No | No | No | |
iot-central https://docs.microsoft.com/en-us/azure/iot-central/core/howto-export-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data.md
@@ -161,7 +161,7 @@ Now that you have a destination to export your data to, set up data export in yo
1. When you've finished setting up your export, select **Save**. After a few minutes, your data appears in your destinations.
-## Export contents and format
+## Destinations
### Azure Blob Storage destination
@@ -182,7 +182,7 @@ The annotations or system properties bag of the message contains the `iotcentral
For webhooks destinations, data is also exported in near real time. The data in the message body is in the same format as for Event Hubs and Service Bus.
-### Telemetry format
+## Telemetry format
Each exported message contains a normalized form of the full message the device sent in the message body. The message is in JSON format and encoded as UTF-8. Information in each message includes:
@@ -229,6 +229,102 @@ The following example shows an exported telemetry message:
} ```
+### Message properties
+
+Telemetry messages have properties for metadata in addition to the telemetry payload. The previous snippet shows examples of system properties such as `deviceId` and `enqueuedTime`. To learn more about the system message properties, see [System Properties of D2C IoT Hub messages](../../iot-hub/iot-hub-devguide-messages-construct.md#system-properties-of-d2c-iot-hub-messages).
+
+You can add properties to telemetry messages if you need to include custom metadata. For example, you might need to add a timestamp when the device creates the message.
+
+The following code snippet shows how to add the `iothub-creation-time-utc` property to the message when you create it on the device:
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+async function sendTelemetry(deviceClient, index) {
+ console.log('Sending telemetry message %d...', index);
+ const msg = new Message(
+ JSON.stringify(
+ deviceTemperatureSensor.updateSensor().getCurrentTemperatureObject()
+ )
+ );
+ msg.properties.add("iothub-creation-time-utc", new Date().toISOString());
+ msg.contentType = 'application/json';
+ msg.contentEncoding = 'utf-8';
+ await deviceClient.sendEvent(msg);
+}
+```
+
+# [Java](#tab/java)
+
+```java
+private static void sendTemperatureTelemetry() {
+ String telemetryName = "temperature";
+ String telemetryPayload = String.format("{\"%s\": %f}", telemetryName, temperature);
+
+ Message message = new Message(telemetryPayload);
+ message.setContentEncoding(StandardCharsets.UTF_8.name());
+ message.setContentTypeFinal("application/json");
+ message.setProperty("iothub-creation-time-utc", Instant.now().toString());
+
+ deviceClient.sendEventAsync(message, new MessageIotHubEventCallback(), message);
+ log.debug("My Telemetry: Sent - {\"{}\": {}°C} with message Id {}.", telemetryName, temperature, message.getMessageId());
+ temperatureReadings.put(new Date(), temperature);
+}
+```
+
+# [C#](#tab/csharp)
+
+```csharp
+private async Task SendTemperatureTelemetryAsync()
+{
+ const string telemetryName = "temperature";
+
+ string telemetryPayload = $"{{ \"{telemetryName}\": {_temperature} }}";
+ using var message = new Message(Encoding.UTF8.GetBytes(telemetryPayload))
+ {
+ ContentEncoding = "utf-8",
+ ContentType = "application/json",
+ };
+ message.Properties.Add("iothub-creation-time-utc", DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ssZ"));
+ await _deviceClient.SendEventAsync(message);
+ _logger.LogDebug($"Telemetry: Sent - {{ \"{telemetryName}\": {_temperature}°C }}.");
+}
+```
+
+# [Python](#tab/python)
+
+```python
+async def send_telemetry_from_thermostat(device_client, telemetry_msg):
+ msg = Message(json.dumps(telemetry_msg))
+ msg.custom_properties["iothub-creation-time-utc"] = datetime.now(timezone.utc).isoformat()
+ msg.content_encoding = "utf-8"
+ msg.content_type = "application/json"
+ print("Sent message")
+ await device_client.send_message(msg)
+```
+++
+The following snippet shows this property in the message exported to Blob storage:
+
+```json
+{
+ "applicationId":"5782ed70-b703-4f13-bda3-1f5f0f5c678e",
+ "messageSource":"telemetry",
+ "deviceId":"sample-device-01",
+ "schema":"default@v1",
+ "templateId":"urn:modelDefinition:mkuyqxzgea:e14m1ukpn",
+ "enqueuedTime":"2021-01-29T16:45:39.143Z",
+ "telemetry":{
+ "temperature":8.341033560421833
+ },
+ "messageProperties":{
+ "iothub-creation-time-utc":"2021-01-29T16:45:39.021Z"
+ },
+ "enrichments":{}
+}
+```
+ ## Property changes format Each message or record represents one change to a device or cloud property. For device properties, only changes in the reported value are exported as a separate message. Information in the exported message includes:
iot-dps https://docs.microsoft.com/en-us/azure/iot-dps/how-to-send-additional-data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/how-to-send-additional-data.md
@@ -9,7 +9,7 @@
-# How to transfer a payload between device and DPS
+# How to transfer payloads between devices and DPS
Sometimes DPS needs more data from devices to properly provision them to the right IoT Hub, and that data needs to be provided by the device. Conversely, DPS can return data to the device to facilitate client-side logic. ## When to use it
iot-dps https://docs.microsoft.com/en-us/azure/iot-dps/tutorial-custom-allocation-policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/tutorial-custom-allocation-policies.md
@@ -343,7 +343,7 @@ This sample code simulates a device boot sequence that sends the provisioning re
hsm_type = SECURE_DEVICE_TYPE_SYMMETRIC_KEY; ```
-6. In the `main()` function, find the call to `Prov_Device_Register_Device()`. Just before that call, add the following lines of code that use [`Prov_Device_Set_Provisioning_Payload()`](/azure/iot-hub/iot-c-sdk-ref/prov-device-client-h/prov-device-set-provisioning-payload) to pass a custom JSON payload during provisioning. This can be used to provide more information to your custom allocation functions. This could also be used to pass the device type instead of examining the registration ID.
+6. In the `main()` function, find the call to `Prov_Device_Register_Device()`. Just before that call, add the following lines of code that use [`Prov_Device_Set_Provisioning_Payload()`](/azure/iot-hub/iot-c-sdk-ref/prov-device-client-h/prov-device-set-provisioning-payload) to pass a custom JSON payload during provisioning. This can be used to provide more information to your custom allocation functions. This could also be used to pass the device type instead of examining the registration ID. For more information on sending and receiving custom data payloads with DPS, see [How to transfer payloads between devices and DPS](how-to-send-additional-data.md).
```c // An example custom payload
key-vault https://docs.microsoft.com/en-us/azure/key-vault/certificates/import-cert-faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/import-cert-faqs.md
@@ -59,6 +59,10 @@ This error could be caused by either of two reasons:
* The certificate subject name is limited to 200 characters.
* The certificate password is limited to 200 characters.
+
+### Error "The specified PEM X.509 certificate content is in an unexpected format. Please check if certificate is in valid PEM format."
+Verify that the content in the PEM file uses UNIX-style line separators (`\n`).
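For example, a minimal PowerShell sketch that normalizes the line endings before import (the file name `mycert.pem` is an assumption):

```powershell
# Sketch: convert CRLF (Windows) line endings to LF (UNIX) in a PEM file before importing it.
$pem = Get-Content -Raw -Path .\mycert.pem
$pem -replace "`r`n", "`n" | Set-Content -Path .\mycert.pem -NoNewline
```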
+ ### Can I import an expired certificate to Azure Key Vault? No, expired PFX certificates can't be imported to Key Vault.
@@ -79,4 +83,4 @@ If you've imported the certificate successfully, you should be able to confirm i
## Next steps -- [Azure Key Vault certificates](./about-certificates.md)\ No newline at end of file
+- [Azure Key Vault certificates](./about-certificates.md)
key-vault https://docs.microsoft.com/en-us/azure/key-vault/general/soft-delete-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/soft-delete-overview.md
@@ -94,5 +94,6 @@ In general, when an object (a key vault or a key or a secret) is in deleted stat
The following two guides offer the primary usage scenarios for using soft-delete.
+- [How to use Key Vault soft-delete with Portal](https://docs.microsoft.com/azure/key-vault/general/key-vault-recovery?tabs=azure-portal)
- [How to use Key Vault soft-delete with PowerShell](./key-vault-recovery.md) - [How to use Key Vault soft-delete with CLI](./key-vault-recovery.md)
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/quickstart-load-balancer-standard-internal-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
@@ -1,7 +1,7 @@
Title: "Quickstart: Create an internal load balancer - Azure CLI"
+ Title: 'Quickstart: Create an internal load balancer - Azure CLI'
-description: This quickstart shows how to create an internal load balancer using the Azure CLI
+description: This quickstart shows how to create an internal load balancer by using the Azure CLI.
documentationcenter: na
@@ -16,24 +16,24 @@ Last updated 12/19/2020
-# Quickstart: Create an internal load balancer to load balance VMs using Azure CLI
+# Quickstart: Create an internal load balancer by using the Azure CLI
-Get started with Azure Load Balancer by using Azure CLI to create an internal load balancer and three virtual machines.
+Get started with Azure Load Balancer by using the Azure CLI to create an internal load balancer and three virtual machines.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] -- This quickstart requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+This quickstart requires version 2.0.28 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
-## Create a resource group
+>[!NOTE]
+>Azure Load Balancer Standard is the recommended choice for production workloads. This article contains information about Azure Load Balancer Standard, as well as Azure Load Balancer Basic. For more information about SKUs, see [Azure Load Balancer SKUs](skus.md).
-An Azure resource group is a logical container into which Azure resources are deployed and managed.
+## Create a resource group
-Create a resource group with [az group create](/cli/azure/group#az_group_create):
+An Azure resource group is a logical container into which you deploy and manage your Azure resources.
-* Named **CreateIntLBQS-rg**.
-* In the **eastus** location.
+Create a resource group with [az group create](/cli/azure/group#az_group_create). Name the resource group **CreateIntLBQS-rg**, and specify the location as **eastus**.
```azurecli-interactive az group create \
@@ -41,35 +41,27 @@ Create a resource group with [az group create](/cli/azure/group#az_group_create)
--location eastus ```--
-# [**Standard SKU**](#tab/option-1-create-load-balancer-standard)
->[!NOTE]
->Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](skus.md)**.
+## Azure Load Balancer Standard
-In this section, you create a load balancer that load balances virtual machines.
-
-When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
-
-The following diagram shows the resources created in this quickstart:
+In this section, you create a load balancer that load balances virtual machines. When you create an internal load balancer, a virtual network is configured as the network for the load balancer. The following diagram shows the resources created in this quickstart:
:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal.png" alt-text="Standard load balancer resources created for quickstart." border="false":::
-## Configure virtual network - Standard
+### Configure the virtual network
Before you deploy VMs and deploy your load balancer, create the supporting virtual network resources.
-### Create a virtual network
+#### Create a virtual network
-Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create):
+Create a virtual network by using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). Specify the following:
-* Named **myVNet**.
-* Address prefix of **10.1.0.0/16**.
-* Subnet named **myBackendSubnet**.
-* Subnet prefix of **10.1.0.0/24**.
-* In the **CreateIntLBQS-rg** resource group.
-* Location of **eastus**.
+* Named **myVNet**
+* Address prefix of **10.1.0.0/16**
+* Subnet named **myBackendSubnet**
+* Subnet prefix of **10.1.0.0/24**
+* In the **CreateIntLBQS-rg** resource group
+* Location of **eastus**
```azurecli-interactive az network vnet create \
@@ -81,12 +73,12 @@ Create a virtual network using [az network vnet create](/cli/azure/network/vnet#
--subnet-prefixes 10.1.0.0/24 ```
-### Create a public IP address
+#### Create a public IP address
-Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public ip address for the bastion host:
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address for the Azure Bastion host. Specify the following:
-* Create a standard zone redundant public IP address named **myBastionIP**.
-* In **CreateIntLBQS-rg**.
+* Create a standard zone-redundant public IP address named **myBastionIP**
+* In **CreateIntLBQS-rg**
```azurecli-interactive az network public-ip create \
@@ -94,14 +86,14 @@ az network public-ip create \
--name myBastionIP \ --sku Standard ```
-### Create a bastion subnet
+#### Create an Azure Bastion subnet
-Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create a bastion subnet:
+Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create a subnet. Specify the following:
-* Named **AzureBastionSubnet**.
-* Address prefix of **10.1.1.0/24**.
-* In virtual network **myVNet**.
-* In resource group **CreateIntLBQS-rg**.
+* Named **AzureBastionSubnet**
+* Address prefix of **10.1.1.0/24**
+* In virtual network **myVNet**
+* In resource group **CreateIntLBQS-rg**
```azurecli-interactive az network vnet subnet create \
@@ -111,15 +103,15 @@ az network vnet subnet create \
--address-prefixes 10.1.1.0/24 ```
-### Create bastion host
+#### Create an Azure Bastion host
-Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create a bastion host:
+Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create a host. Specify the following:
-* Named **myBastionHost**.
-* In **CreateIntLBQS-rg**.
-* Associated with public IP **myBastionIP**.
-* Associated with virtual network **myVNet**.
-* In **eastus** location.
+* Named **myBastionHost**
+* In **CreateIntLBQS-rg**
+* Associated with public IP **myBastionIP**
+* Associated with virtual network **myVNet**
+* In **eastus** location
```azurecli-interactive az network bastion create \
@@ -132,15 +124,12 @@ az network bastion create \
It can take a few minutes for the Azure Bastion host to deploy.
+#### Create a network security group
-### Create a network security group
-
-For a standard load balancer, the VMs in the backend address for are required to have network interfaces that belong to a network security group.
+For a standard load balancer, ensure that your VMs have network interfaces that belong to a network security group. Create a network security group by using [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create). Specify the following:
-Create a network security group using [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create):
-
-* Named **myNSG**.
-* In resource group **CreateIntLBQS-rg**.
+* Named **myNSG**
+* In resource group **CreateIntLBQS-rg**
```azurecli-interactive az network nsg create \
@@ -148,20 +137,20 @@ Create a network security group using [az network nsg create](/cli/azure/network
--name myNSG ```
-### Create a network security group rule
+#### Create a network security group rule
-Create a network security group rule using [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create):
+Create a network security group rule by using [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create). Specify the following:
-* Named **myNSGRuleHTTP**.
-* In the network security group you created in the previous step, **myNSG**.
-* In resource group **CreateIntLBQS-rg**.
-* Protocol **(*)**.
-* Direction **Inbound**.
-* Source **(*)**.
-* Destination **(*)**.
-* Destination port **Port 80**.
-* Access **Allow**.
-* Priority **200**.
+* Named **myNSGRuleHTTP**
+* In the network security group you created in the previous step, **myNSG**
+* In resource group **CreateIntLBQS-rg**
+* Protocol **(*)**
+* Direction **Inbound**
+* Source **(*)**
+* Destination **(*)**
+* Destination port **Port 80**
+* Access **Allow**
+* Priority **200**
```azurecli-interactive az network nsg rule create \
@@ -178,22 +167,22 @@ Create a network security group rule using [az network nsg rule create](/cli/azu
--priority 200 ```
-## Create backend servers - Standard
+### Create back-end servers
In this section, you create: * Three network interfaces for the virtual machines.
-* Three virtual machines to be used as backend servers for the load balancer.
+* Three virtual machines to be used as servers for the load balancer.
-### Create network interfaces for the virtual machines
+#### Create network interfaces for the virtual machines
-Create three network interfaces with [az network nic create](/cli/azure/network/nic#az-network-nic-create):
+Create three network interfaces with [az network nic create](/cli/azure/network/nic#az-network-nic-create). Specify the following:
-* Named **myNicVM1**, **myNicVM2**, and **myNicVM3**.
-* In resource group **CreateIntLBQS-rg**.
-* In virtual network **myVNet**.
-* In subnet **myBackendSubnet**.
-* In network security group **myNSG**.
+* Named **myNicVM1**, **myNicVM2**, and **myNicVM3**
+* In resource group **CreateIntLBQS-rg**
+* In virtual network **myVNet**
+* In subnet **myBackendSubnet**
+* In network security group **myNSG**
```azurecli-interactive array=(myNicVM1 myNicVM2 myNicVM3)
@@ -208,15 +197,15 @@ Create three network interfaces with [az network nic create](/cli/azure/network/
done ```
-### Create virtual machines
+#### Create the virtual machines
-Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create):
+Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create). Specify the following:
-* Named **myVM1**, **myVM2**, and **myVM3**.
-* In resource group **CreateIntLBQS-rg**.
-* Attached to network interface **myNicVM1**, **myNicVM2**, and **myNicVM3**.
-* Virtual machine image **win2019datacenter**.
-* In **Zone 1**, **Zone 2**, and **Zone 3**.
+* Named **myVM1**, **myVM2**, and **myVM3**
+* In resource group **CreateIntLBQS-rg**
+* Attached to network interface **myNicVM1**, **myNicVM2**, and **myNicVM3**
+* Virtual machine image **win2019datacenter**
+* In **Zone 1**, **Zone 2**, and **Zone 3**
```azurecli-interactive array=(1 2 3)
@@ -233,26 +222,26 @@ Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create):
done ```
-It may take a few minutes for the VMs to deploy.
+It can take a few minutes for the VMs to deploy.
-## Create standard load balancer
+### Create the load balancer
This section details how you can create and configure the following components of the load balancer:
- * A frontend IP pool that receives the incoming network traffic on the load balancer.
- * A backend IP pool where the frontend pool sends the load balanced network traffic.
- * A health probe that determines health of the backend VM instances.
- * A load balancer rule that defines how traffic is distributed to the VMs.
+* An IP pool that receives the incoming network traffic on the load balancer.
+* A second IP pool, where the first pool sends the load-balanced network traffic.
+* A health probe that determines health of the VM instances.
+* A load balancer rule that defines how traffic is distributed to the VMs.
-### Create the load balancer resource
+#### Create the load balancer resource
-Create a public load balancer with [az network lb create](/cli/azure/network/lb#az-network-lb-create):
+Create a public load balancer with [az network lb create](/cli/azure/network/lb#az-network-lb-create). Specify the following:
-* Named **myLoadBalancer**.
-* A frontend pool named **myFrontEnd**.
-* A backend pool named **myBackEndPool**.
-* Associated with the virtual network **myVNet**.
-* Associated with the backend subnet **myBackendSubnet**.
+* Named **myLoadBalancer**
+* A pool named **myFrontEnd**
+* A pool named **myBackEndPool**
+* Associated with the virtual network **myVNet**
+* Associated with the subnet **myBackendSubnet**
```azurecli-interactive az network lb create \
@@ -265,18 +254,16 @@ Create a public load balancer with [az network lb create](/cli/azure/network/lb#
--backend-pool-name myBackEndPool ```
-### Create the health probe
-
-A health probe checks all virtual machine instances to ensure they can send network traffic.
+#### Create the health probe
-A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added back into the load balancer when the failure is resolved.
+A health probe checks all virtual machine instances to ensure they can send network traffic. A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added back into the load balancer when the failure is resolved.
-Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az-network-lb-probe-create):
+Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az-network-lb-probe-create). Specify the following:
-* Monitors the health of the virtual machines.
-* Named **myHealthProbe**.
-* Protocol **TCP**.
-* Monitoring **Port 80**.
+* Monitors the health of the virtual machines
+* Named **myHealthProbe**
+* Protocol **TCP**
+* Monitoring **Port 80**
```azurecli-interactive az network lb probe create \
@@ -287,23 +274,23 @@ Create a health probe with [az network lb probe create](/cli/azure/network/lb/pr
--port 80 ```
-### Create the load balancer rule
+#### Create a load balancer rule
A load balancer rule defines:
-* Frontend IP configuration for the incoming traffic.
-* The backend IP pool to receive the traffic.
+* The IP configuration for the incoming traffic.
+* The IP pool to receive the traffic.
* The required source and destination port.
-Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az-network-lb-rule-create):
+Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az-network-lb-rule-create). Specify the following:
* Named **myHTTPRule**
-* Listening on **Port 80** in the frontend pool **myFrontEnd**.
-* Sending load-balanced network traffic to the backend address pool **myBackEndPool** using **Port 80**.
-* Using health probe **myHealthProbe**.
-* Protocol **TCP**.
-* Idle timeout of **15 minutes**.
-* Enable TCP reset.
+* Listening on **Port 80** in the pool **myFrontEnd**
+* Sending load-balanced network traffic to the address pool **myBackEndPool** by using **Port 80**
+* Using health probe **myHealthProbe**
+* Protocol **TCP**
+* Idle timeout of **15 minutes**
+* Enable TCP reset
```azurecli-interactive az network lb rule create \
@@ -320,14 +307,14 @@ Create a load balancer rule with [az network lb rule create](/cli/azure/network/
--enable-tcp-reset true ```
-### Add virtual machines to load balancer backend pool
+#### Add VMs to the load balancer pool
-Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az-network-nic-ip-config-address-pool-add):
+Add the virtual machines to the back-end pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az-network-nic-ip-config-address-pool-add). Specify the following:
-* In backend address pool **myBackEndPool**.
-* In resource group **CreateIntLBQS-rg**.
-* Associated with network interface **myNicVM1**, **myNicVM2**, and **myNicVM3**.
-* Associated with load balancer **myLoadBalancer**.
+* In address pool **myBackEndPool**
+* In resource group **CreateIntLBQS-rg**
+* Associated with network interface **myNicVM1**, **myNicVM2**, and **myNicVM3**
+* Associated with load balancer **myLoadBalancer**
```azurecli-interactive array=(VM1 VM2 VM3)
@@ -343,33 +330,26 @@ Add the virtual machines to the backend pool with [az network nic ip-config addr
```
-# [**Basic SKU**](#tab/option-1-create-load-balancer-basic)
+## Azure Load Balancer Basic
->[!NOTE]
->Standard SKU load balancer is recommended for production workloads. For more information about SKUS, see **[Azure Load Balancer SKUs](skus.md)**.
-
-In this section, you create a load balancer that load balances virtual machines.
-
-When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
+In this section, you create a load balancer that load balances virtual machines. When you create an internal load balancer, a virtual network is configured as the network for the load balancer. The following diagram shows the resources created in this quickstart:
-The following diagram shows the resources created in this quickstart:
+:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal-basic.png" alt-text="Basic load balancer resources created for quickstart." border="false":::
-:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal-basic.png" alt-text="Basic load balancer resources created in quickstart." border="false":::
-
-## Configure virtual network - Basic
+### Configure the virtual network
Before you deploy VMs and deploy your load balancer, create the supporting virtual network resources.
-### Create a virtual network
+#### Create a virtual network
-Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-createt):
+Create a virtual network by using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). Specify the following:
-* Named **myVNet**.
-* Address prefix of **10.1.0.0/16**.
-* Subnet named **myBackendSubnet**.
-* Subnet prefix of **10.1.0.0/24**.
-* In the **CreateIntLBQS-rg** resource group.
-* Location of **eastus**.
+* Named **myVNet**
+* Address prefix of **10.1.0.0/16**
+* Subnet named **myBackendSubnet**
+* Subnet prefix of **10.1.0.0/24**
+* In the **CreateIntLBQS-rg** resource group
+* Location of **eastus**
```azurecli-interactive az network vnet create \
@@ -381,12 +361,12 @@ Create a virtual network using [az network vnet create](/cli/azure/network/vnet#
--subnet-prefixes 10.1.0.0/24 ```
-### Create a public IP address
+#### Create a public IP address
-Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public ip address for the bastion host:
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address for the Azure Bastion host. Specify the following:
-* Create a standard zone redundant public IP address named **myBastionIP**.
-* In **CreateIntLBQS-rg**.
+* Create a standard zone-redundant public IP address named **myBastionIP**
+* In **CreateIntLBQS-rg**
```azurecli-interactive az network public-ip create \
@@ -394,14 +374,14 @@ az network public-ip create \
--name myBastionIP \ --sku Standard ```
-### Create a bastion subnet
+#### Create an Azure Bastion subnet
-Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create a bastion subnet:
+Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create a subnet. Specify the following:
-* Named **AzureBastionSubnet**.
-* Address prefix of **10.1.1.0/24**.
-* In virtual network **myVNet**.
-* In resource group **CreateIntLBQS-rg**.
+* Named **AzureBastionSubnet**
+* Address prefix of **10.1.1.0/24**
+* In virtual network **myVNet**
+* In resource group **CreateIntLBQS-rg**
```azurecli-interactive az network vnet subnet create \
@@ -411,15 +391,15 @@ az network vnet subnet create \
--address-prefixes 10.1.1.0/24 ```
-### Create bastion host
+#### Create an Azure Bastion host
-Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create a bastion host:
+Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create a host. Specify the following:
-* Named **myBastionHost**.
-* In **CreateIntLBQS-rg**.
-* Associated with public IP **myBastionIP**.
-* Associated with virtual network **myVNet**.
-* In **eastus** location.
+* Named **myBastionHost**
+* In **CreateIntLBQS-rg**
+* Associated with public IP **myBastionIP**
+* Associated with virtual network **myVNet**
+* In **eastus** location
```azurecli-interactive az network bastion create \
@@ -432,14 +412,12 @@ az network bastion create \
It can take a few minutes for the Azure Bastion host to deploy.
-### Create a network security group
-
-For a standard load balancer, the VMs in the backend address for are required to have network interfaces that belong to a network security group.
+#### Create a network security group
-Create a network security group using [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create):
+For a standard load balancer, ensure that your VMs have network interfaces that belong to a network security group. Create a network security group by using [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create). Specify the following:
-* Named **myNSG**.
-* In resource group **CreateIntLBQS-rg**.
+* Named **myNSG**
+* In resource group **CreateIntLBQS-rg**
```azurecli-interactive az network nsg create \
@@ -447,20 +425,20 @@ Create a network security group using [az network nsg create](/cli/azure/network
--name myNSG ```
-### Create a network security group rule
+#### Create a network security group rule
-Create a network security group rule using [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create):
+Create a network security group rule by using [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create). Specify the following:
-* Named **myNSGRuleHTTP**.
-* In the network security group you created in the previous step, **myNSG**.
-* In resource group **CreateIntLBQS-rg**.
-* Protocol **(*)**.
-* Direction **Inbound**.
-* Source **(*)**.
-* Destination **(*)**.
-* Destination port **Port 80**.
-* Access **Allow**.
-* Priority **200**.
+* Named **myNSGRuleHTTP**
+* In the network security group you created in the previous step, **myNSG**
+* In resource group **CreateIntLBQS-rg**
+* Protocol **(*)**
+* Direction **Inbound**
+* Source **(*)**
+* Destination **(*)**
+* Destination port **Port 80**
+* Access **Allow**
+* Priority **200**
```azurecli-interactive az network nsg rule create \
@@ -477,23 +455,23 @@ Create a network security group rule using [az network nsg rule create](/cli/azu
--priority 200 ```
-## Create backend servers - Basic
+### Create back-end servers
In this section, you create: * Three network interfaces for the virtual machines.
-* Availability set for the virtual machines
-* Three virtual machines to be used as backend servers for the load balancer.
+* The availability set for the virtual machines.
+* Three virtual machines to be used as servers for the load balancer.
-### Create network interfaces for the virtual machines
+#### Create network interfaces for the virtual machines
-Create three network interfaces with [az network nic create](/cli/azure/network/nic#az-network-nic-create):
+Create three network interfaces with [az network nic create](/cli/azure/network/nic#az-network-nic-create). Specify the following:
-* Named **myNicVM1**, **myNicVM2**, and **myNicVM3**.
-* In resource group **CreateIntLBQS-rg**.
-* In virtual network **myVNet**.
-* In subnet **myBackendSubnet**.
-* In network security group **myNSG**.
+* Named **myNicVM1**, **myNicVM2**, and **myNicVM3**
+* In resource group **CreateIntLBQS-rg**
+* In virtual network **myVNet**
+* In subnet **myBackendSubnet**
+* In network security group **myNSG**
```azurecli-interactive array=(myNicVM1 myNicVM2 myNicVM3)
@@ -508,13 +486,13 @@ Create three network interfaces with [az network nic create](/cli/azure/network/
done ```
-### Create availability set for virtual machines
+#### Create the availability set for the virtual machines
-Create the availability set with [az vm availability-set create](/cli/azure/vm/availability-set#az-vm-availability-set-create):
+Create the availability set with [az vm availability-set create](/cli/azure/vm/availability-set#az-vm-availability-set-create). Specify the following:
-* Named **myAvailabilitySet**.
-* In resource group **CreateIntLBQS-rg**.
-* Location **eastus**.
+* Named **myAvailabilitySet**
+* In resource group **CreateIntLBQS-rg**
+* Location **eastus**
```azurecli-interactive az vm availability-set create \
@@ -524,15 +502,15 @@ Create the availability set with [az vm availability-set create](/cli/azure/vm/a
```
-### Create virtual machines
+#### Create the virtual machines
-Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create):
+Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create). Specify the following:
-* Named **myVM1**, **myVM2**, and **myVM3**.
-* In resource group **CreateIntLBQS-rg**.
-* Attached to network interface **myNicVM1**, **myNicVM2**, and **myNicVM3**.
-* Virtual machine image **win2019datacenter**.
-* In **myAvailabilitySet**.
+* Named **myVM1**, **myVM2**, and **myVM3**
+* In resource group **CreateIntLBQS-rg**
+* Attached to network interface **myNicVM1**, **myNicVM2**, and **myNicVM3**
+* Virtual machine image **win2019datacenter**
+* In **myAvailabilitySet**
```azurecli-interactive
@@ -549,26 +527,26 @@ Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create):
--no-wait done ```
-It may take a few minutes for the VMs to deploy.
+It can take a few minutes for the VMs to deploy.
-## Create basic load balancer
+### Create the load balancer
This section details how you can create and configure the following components of the load balancer:
- * A frontend IP pool that receives the incoming network traffic on the load balancer.
- * A backend IP pool where the frontend pool sends the load balanced network traffic.
- * A health probe that determines health of the backend VM instances.
- * A load balancer rule that defines how traffic is distributed to the VMs.
+* An IP pool that receives the incoming network traffic on the load balancer.
+* A second IP pool, where the first pool sends the load-balanced network traffic.
+* A health probe that determines health of the VM instances.
+* A load balancer rule that defines how traffic is distributed to the VMs.
-### Create the load balancer resource
+#### Create the load balancer resource
-Create a public load balancer with [az network lb create](/cli/azure/network/lb#az-network-lb-create):
+Create a public load balancer with [az network lb create](/cli/azure/network/lb#az-network-lb-create). Specify the following:
-* Named **myLoadBalancer**.
-* A frontend pool named **myFrontEnd**.
-* A backend pool named **myBackEndPool**.
-* Associated with the virtual network **myVNet**.
-* Associated with the backend subnet **myBackendSubnet**.
+* Named **myLoadBalancer**
+* A pool named **myFrontEnd**
+* A pool named **myBackEndPool**
+* Associated with the virtual network **myVNet**
+* Associated with the subnet **myBackendSubnet**
```azurecli-interactive az network lb create \
@@ -581,18 +559,16 @@ Create a public load balancer with [az network lb create](/cli/azure/network/lb#
--backend-pool-name myBackEndPool ```
-### Create the health probe
+#### Create the health probe
-A health probe checks all virtual machine instances to ensure they can send network traffic.
+A health probe checks all virtual machine instances to ensure they can send network traffic. A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added back into the load balancer when the failure is resolved.
-A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added back into the load balancer when the failure is resolved.
+Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az-network-lb-probe-create). Specify the following:
-Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az-network-lb-probe-create):
-
-* Monitors the health of the virtual machines.
-* Named **myHealthProbe**.
-* Protocol **TCP**.
-* Monitoring **Port 80**.
+* Monitors the health of the virtual machines
+* Named **myHealthProbe**
+* Protocol **TCP**
+* Monitoring **Port 80**
```azurecli-interactive az network lb probe create \
@@ -603,22 +579,22 @@ Create a health probe with [az network lb probe create](/cli/azure/network/lb/pr
--port 80 ```
-### Create the load balancer rule
+#### Create a load balancer rule
A load balancer rule defines:
-* Frontend IP configuration for the incoming traffic.
-* The backend IP pool to receive the traffic.
+* The IP configuration for the incoming traffic.
+* The IP pool to receive the traffic.
* The required source and destination port.
-Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az-network-lb-rule-create):
+Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az-network-lb-rule-create). Specify the following:
* Named **myHTTPRule**
-* Listening on **Port 80** in the frontend pool **myFrontEnd**.
-* Sending load-balanced network traffic to the backend address pool **myBackEndPool** using **Port 80**.
-* Using health probe **myHealthProbe**.
-* Protocol **TCP**.
-* Idle timeout of **15 minutes**.
+* Listening on **Port 80** in the pool **myFrontEnd**
+* Sending load-balanced network traffic to the address pool **myBackEndPool** by using **Port 80**
+* Using health probe **myHealthProbe**
+* Protocol **TCP**
+* Idle timeout of **15 minutes**
```azurecli-interactive az network lb rule create \
@@ -633,14 +609,14 @@ Create a load balancer rule with [az network lb rule create](/cli/azure/network/
--probe-name myHealthProbe \ --idle-timeout 15 ```
-### Add virtual machines to load balancer backend pool
+#### Add VMs to the load balancer pool
-Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az-network-nic-ip-config-address-pool-add):
+Add the virtual machines to the back-end pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az-network-nic-ip-config-address-pool-add). Specify the following:
-* In backend address pool **myBackEndPool**.
-* In resource group **CreateIntLBQS-rg**.
-* Associated with network interface **myNicVM1**, **myNicVM2**, and **myNicVM3**.
-* Associated with load balancer **myLoadBalancer**.
+* In address pool **myBackEndPool**
+* In resource group **CreateIntLBQS-rg**
+* Associated with network interface **myNicVM1**, **myNicVM2**, and **myNicVM3**
+* Associated with load balancer **myLoadBalancer**
```azurecli-interactive array=(VM1 VM2 VM3)
@@ -655,19 +631,16 @@ Add the virtual machines to the backend pool with [az network nic ip-config addr
done ```- ## Test the load balancer
-### Create test virtual machine
-
-Create the network interface with [az network nic create](/cli/azure/network/nic#az-network-nic-create):
+Create the network interface with [az network nic create](/cli/azure/network/nic#az-network-nic-create). Specify the following:
-* Named **myNicTestVM**.
-* In resource group **CreateIntLBQS-rg**.
-* In virtual network **myVNet**.
-* In subnet **myBackendSubnet**.
-* In network security group **myNSG**.
+* Named **myNicTestVM**
+* In resource group **CreateIntLBQS-rg**
+* In virtual network **myVNet**
+* In subnet **myBackendSubnet**
+* In network security group **myNSG**
```azurecli-interactive az network nic create \
@@ -677,12 +650,12 @@ Create the network interface with [az network nic create](/cli/azure/network/nic
--subnet myBackEndSubnet \ --network-security-group myNSG ```
-Create the virtual machine with [az vm create](/cli/azure/vm#az-vm-create):
+Create the virtual machine with [az vm create](/cli/azure/vm#az-vm-create). Specify the following:
-* Named **myTestVM**.
-* In resource group **CreateIntLBQS-rg**.
-* Attached to network interface **myNicTestVM**.
-* Virtual machine image **Win2019Datacenter**.
+* Named **myTestVM**
+* In resource group **CreateIntLBQS-rg**
+* Attached to network interface **myNicTestVM**
+* Virtual machine image **Win2019Datacenter**
```azurecli-interactive az vm create \
@@ -693,7 +666,7 @@ Create the virtual machine with [az vm create](/cli/azure/vm#az-vm-create):
--admin-username azureuser \ --no-wait ```
-Can take a few minutes for the virtual machine to deploy.
+You might need to wait a few minutes for the virtual machine to deploy.
## Install IIS
@@ -718,27 +691,27 @@ Use [az vm extension set](/cli/azure/vm/extension#az_vm_extension_set) to instal
1. [Sign in](https://portal.azure.com) to the Azure portal.
-2. Find the private IP address for the load balancer on the **Overview** screen. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer**.
+2. On the **Overview** page, find the private IP address for the load balancer. In the menu on the left, select **All services** > **All resources** > **myLoadBalancer**.
-3. Make note or copy the address next to **Private IP Address** in the **Overview** of **myLoadBalancer**.
+3. In the overview of **myLoadBalancer**, copy the address next to **Private IP Address**.
-4. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myTestVM** that is located in the **CreateIntLBQS-rg** resource group.
+4. In the menu on the left, select **All services** > **All resources**. From the resources list, in the **CreateIntLBQS-rg** resource group, select **myTestVM**.
-5. On the **Overview** page, select **Connect**, then **Bastion**.
+5. On the **Overview** page, select **Connect** > **Bastion**.
-6. Enter the username and password entered during VM creation.
+6. Enter the username and password that you entered when you created the VM.
-7. Open **Internet Explorer** on **myTestVM**.
+7. On **myTestVM**, open **Internet Explorer**.
-8. Enter the IP address from the previous step into the address bar of the browser. The default page of IIS Web server is displayed on the browser.
+8. Enter the IP address from the previous step into the address bar of the browser. The default page of the IIS web server is shown on the browser.
- :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/load-balancer-test.png" alt-text="Create a standard internal load balancer" border="true":::
+ :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/load-balancer-test.png" alt-text="Screenshot of the IP address in the address bar of the browser." border="true":::
-To see the load balancer distribute traffic across all three VMs, you can customize the default page of each VM's IIS Web server and then force-refresh your web browser from the client machine.
+To see the load balancer distribute traffic across all three VMs, you can customize the default page of each VM's IIS web server. Then, manually refresh your web browser from the client machine.
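One way to do that, as a sketch to run on each back-end VM (the default IIS page path is an assumption about your IIS installation):

```powershell
# Sketch: replace the default IIS page with the VM's computer name so each
# response shows which back-end VM served it. Run on each VM, for example through Bastion.
Set-Content -Path 'C:\inetpub\wwwroot\iisstart.htm' -Value ("Served from " + $env:COMPUTERNAME)
```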
## Clean up resources
-When no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, load balancer, and all related resources.
+When your resources are no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, load balancer, and all related resources.
```azurecli-interactive az group delete \
@@ -747,13 +720,6 @@ When no longer needed, use the [az group delete](/cli/azure/group#az-group-delet
## Next steps
-In this quickstart:
-
-* You created a standard or public load balancer
-* Attached virtual machines.
-* Configured the load balancer traffic rule and health probe.
-* Tested the load balancer.
-
-To learn more about Azure Load Balancer, continue to:
+Get an overview of Azure Load Balancer.
> [!div class="nextstepaction"]
-> [What is Azure Load Balancer?](load-balancer-overview.md)
\ No newline at end of file
+> [What is Azure Load Balancer?](load-balancer-overview.md)
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-using-sap-connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-using-sap-connector.md
@@ -7,7 +7,7 @@
Previously updated : 01/25/2021 Last updated : 02/01/2021 tags: connectors
@@ -533,6 +533,18 @@ For full error messages, check your SAP adapter's extended logs. You can also [e
For on-premises data gateway releases from June 2020 and later, you can [enable gateway logs in the app settings](/data-integration/gateway/service-gateway-tshoot#collect-logs-from-the-on-premises-data-gateway-app).
+* The default logging level is **Warning**.
+
+* If you enable **Additional logging** in the **Diagnostics** settings of the on-premises data gateway app, the logging level is increased to **Informational**.
+
+* To increase the logging level to **Verbose**, update the following setting in your configuration file. Typically, the configuration file is located at `C:\Program Files\On-premises data gateway\Microsoft.PowerBI.DataMovement.Pipeline.GatewayCore.dll.config`.
+
+```xml
+<setting name="SapTraceLevel" serializeAs="String">
+ <value>Verbose</value>
+</setting>
+```
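A change to this configuration file typically doesn't take effect until the gateway service restarts. A hedged sketch follows; `PBIEgwService` is the usual service name for the on-premises data gateway, but confirm it on your gateway machine:

```powershell
# Sketch: restart the on-premises data gateway service so the new trace level takes effect.
# Confirm the service name first, for example with: Get-Service *gateway*
Restart-Service -Name 'PBIEgwService'
```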
+ For on-premises data gateway releases from April 2020 and earlier, logs are disabled by default. ### Extended SAP logging in on-premises data gateway
media-services https://docs.microsoft.com/en-us/azure/media-services/latest/concept-managed-identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/concept-managed-identities.md
@@ -1,40 +1,35 @@
Title: Managed identities and trusted storage
-description: Media Services can be used with managed identities to enable trusted storage.
+ Title: Managed identities
+description: Media Services can be used with Azure Managed Identities.
+keywords:
Previously updated : 11/04/2020 Last updated : 1/29/2021
-# Managed identities and trusted storage with media services
+# Managed identities
-Media Services can be used with [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to enable trusted storage. When you create a Media Services account, you must associate it with a storage account. Media Services can access that storage account using system authentication. Media Services validates that the Media Services account and the storage account are in the same subscription and it validates that the user adding the association has access the storage account with Azure Resource Manager RBAC.
+A common challenge for developers is the management of secrets and credentials to secure communication between different services. On Azure, managed identities eliminate the need for developers to manage credentials by providing the Azure resource with an identity in Azure Active Directory (Azure AD), which it can use to obtain Azure AD tokens.
-## Trusted storage
-
-However, if you want to use a firewall to secure your storage account, you must use managed identity authentication. It allows Media Services to access the storage account that has been configured with a firewall or a VNet restriction through trusted storage access. For more information about Trusted Microsoft Services, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md#trusted-microsoft-services).
-
-## Media services managed identity scenarios
-
-There are currently two scenarios where managed identity can be used with Media
+There are currently two scenarios where Managed Identities can be used with Media Services:
- Use the managed identity of the Media Services account to access storage accounts.
- Use the managed identity of the Media Services account to access Key Vault to access customer keys.
-The next two sections describe the differences in the two scenarios.
+The next two sections describe the steps for each of the two scenarios.
-### Use the managed identity of the Media Services account to access storage accounts
+## Use the managed identity of the Media Services account to access storage accounts
1. Create a Media Services account with a managed identity.
1. Grant the managed identity principal access to a storage account you own.
-1. Media Services can then access Storage account on your behalf using the managed identity.
+1. Media Services can then access the storage account on your behalf using the managed identity. (A role-assignment sketch follows this list.)
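As a rough sketch of step 2, assuming you use the Azure CLI and already have the principal ID of the account's system-assigned identity (the role and scope shown are illustrative placeholders, not a prescribed configuration):

```azurecli
# Grant the Media Services managed identity data access to the storage account.
# All angle-bracket values are placeholders.
az role assignment create \
  --assignee "<managed-identity-principal-id>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```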
-### Use the managed identity of the Media Services account to access Key Vault to access customer keys
+## Use the managed identity of the Media Services account to access Key Vault to access customer keys
1. Create a Media Services account with a managed identity.
1. Grant the managed identity principal access to a Key Vault that you own.
@@ -52,4 +47,4 @@ These tutorials include both of the scenarios mentioned above.
## Next steps
-To learn more about what managed identities can do for you and your Azure applications, see [Azure AD Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
\ No newline at end of file
+To learn more about what managed identities can do for you and your Azure applications, see [Azure AD Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
media-services https://docs.microsoft.com/en-us/azure/media-services/latest/concept-trusted-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/concept-trusted-storage.md
@@ -0,0 +1,35 @@
+
+ Title: Trusted storage for Media Services
+description: Managed Identities authentication allows Media Services to access the storage account that has been configured with a firewall or a VNet restriction through trusted storage access.
+keywords: trusted storage, managed identities
+++++ Last updated : 1/29/2021+++
+# Trusted storage for Media Services
+
+When you create a Media Services account, you must associate it with a storage account. Media Services can access that storage account using system authentication. Media Services validates that the Media Services account and the storage account are in the same subscription, and that the user adding the association has access to the storage account with Azure Resource Manager RBAC.
+
+However, if you want to use a firewall to secure your storage account and enable trusted storage, you must use [Managed Identities](concept-managed-identities.md) authentication. It allows Media Services to access the storage account that has been configured with a firewall or a VNet restriction through trusted storage access.
+
+To understand the methods of creating trusted storage with Managed Identities, read [Managed Identities and Media Services](concept-managed-identities.md).
+
+For more information about customer-managed keys and Key Vault, see [Bring your own key (customer-managed keys) with Media Services](concept-use-customer-managed-keys-byok.md).
+
+For more information about Trusted Microsoft Services, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md#trusted-microsoft-services).
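For example, a storage account firewall that still admits trusted Microsoft services could be configured with the Azure CLI roughly as follows (a sketch; the account and resource group names are placeholders):

```azurecli
# Deny public network traffic by default, but allow trusted Azure services
# (such as Media Services using a managed identity) through the firewall.
az storage account update \
  --name <storage-account> \
  --resource-group <resource-group> \
  --default-action Deny \
  --bypass AzureServices
```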
+
+## Tutorials
+
+These tutorials include both of the scenarios mentioned above.
+
+- [Use the Azure portal to use customer-managed keys or BYOK with Media Services](tutorial-byok-portal.md)
+- [Use customer-managed keys or BYOK with Media Services REST API](tutorial-byok-postman.md).
+
+## Next steps
+
+To learn more about what managed identities can do for you and your Azure applications, see [Azure AD Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
\ No newline at end of file
media-services https://docs.microsoft.com/en-us/azure/media-services/latest/concept-use-customer-managed-keys-byok https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/concept-use-customer-managed-keys-byok.md
@@ -5,7 +5,7 @@
Previously updated : 10/14/2020 Last updated : 1/28/2021 # Bring your own key (customer-managed keys) with Media Services
@@ -32,6 +32,12 @@ You can specify a key name and key version, or just a key name. When you use onl
> [!WARNING] > Media Services monitors access to the customer key. If the customer key becomes inaccessible (for example, the key has been deleted or the Key Vault has been deleted or the access grant has been removed), Media Services will transition the account to the Customer Key Inaccessible State (effectively disabling the account). However, the account can be deleted in this state. The only supported operations are account GET, LIST and DELETE; all other requests (encoding, streaming, and so on) will fail until access to the account key is restored.
+## Double encryption
+
+Media Services supports double encryption. To learn more about double encryption, see [Azure double encryption](../../security/fundamentals/double-encryption.md).
+
+Double encryption is enabled automatically on the Media Services account. However, you need to configure the customer-managed key and double encryption on your storage account separately.
+ ## Tutorials - [Use the Azure portal to use customer-managed keys or BYOK with Media Services](tutorial-byok-portal.md)
media-services https://docs.microsoft.com/en-us/azure/media-services/latest/storage-account-concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/storage-account-concept.md
@@ -11,7 +11,7 @@ editor: ''
Previously updated : 01/05/2021 Last updated : 01/29/2021
@@ -30,7 +30,7 @@ We recommend that you use GPv2, so you can take advantage of the latest features
> [!NOTE] > Only the hot access tier is supported for use with Azure Media Services, although the other access tiers can be used to reduce storage costs on content that isn't being actively used.
-There are different SKUs you can choose for your storage account. For more information, see [storage accounts](/cli/azure/storage/account?view=azure-cli-latest). If you want to experiment with storage accounts, use `--sku Standard_LRS`. However, when picking a SKU for production, you should consider `--sku Standard_RAGRS`, which provides geographic replication for business continuity.
+There are different SKUs you can choose for your storage account. If you want to experiment with storage accounts, use `--sku Standard_LRS`. However, when picking a SKU for production, you should consider `--sku Standard_RAGRS`, which provides geographic replication for business continuity.
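As a sketch, creating a production-oriented account with the Azure CLI might look like this (names and region are placeholders):

```azurecli
# Create a GPv2 storage account with RA-GRS replication for business continuity.
az storage account create \
  --name <storage-account> \
  --resource-group <resource-group> \
  --location <region> \
  --kind StorageV2 \
  --sku Standard_RAGRS
```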
## Assets in a storage account
@@ -45,14 +45,15 @@ To protect your assets at rest, the assets should be encrypted by the storage si
|Encryption option|Description|Media Services v3| ||||
-|Media Services storage encryption| AES-256 encryption, key managed by Media Services. |Not supported.<sup>(1)</sup>|
+|Media Services storage encryption| AES-256 encryption, key managed by Media Services. |Not supported.<sup>1</sup>|
|[Storage service encryption for data at rest](../../storage/common/storage-service-encryption.md)|Server-side encryption offered by Azure Storage, key managed by Azure or by customer.|Supported.| |[Storage client-side encryption](../../storage/common/storage-client-side-encryption.md)|Client-side encryption offered by Azure storage, key managed by customer in Key Vault.|Not supported.| <sup>1</sup> In Media Services v3, storage encryption (AES-256 encryption) is only supported for backwards compatibility when your assets were created with Media Services v2, which means v3 works with existing storage encrypted assets but won't allow creation of new ones.
-## Double encryption
-Media Services supports double encryption. To learn more about double encryption, see [Azure double encryption](../../security/fundamentals/double-encryption.md).
+## Storage account double encryption
+
+Storage accounts support double encryption, but the second layer must be explicitly enabled. See [Azure Storage encryption for data at rest](https://docs.microsoft.com/azure/storage/common/storage-service-encryption#doubly-encrypt-data-with-infrastructure-encryption).
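Infrastructure (double) encryption is chosen when the storage account is created. A hedged Azure CLI sketch, assuming the `--require-infrastructure-encryption` parameter available in recent CLI versions (names are placeholders):

```azurecli
# Create a storage account with a second, infrastructure-level encryption layer.
az storage account create \
  --name <storage-account> \
  --resource-group <resource-group> \
  --location <region> \
  --kind StorageV2 \
  --sku Standard_RAGRS \
  --require-infrastructure-encryption true
```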
## Storage account errors
@@ -65,10 +66,6 @@ The following are the primary scenarios that would result in a Media Services ac
|The Media Services account or attached storage account(s) were migrated to separate subscriptions. |Migrate the storage account(s) or Media Services account so that they're all in the same subscription. | |The Media Services account is using an attached storage account in a different subscription as it was an early Media Services account where this was supported. All early Media Services accounts were converted to modern Azure Resources Manager based accounts and will have a Disconnected state. |Migrate the storage account or Media Services account so that they're all in the same subscription.|
-## Azure Storage firewall
-
-Azure Media Services doesn't support storage accounts with the Azure Storage firewall or [Private Endpoints](../../storage/common/storage-network-security.md) enabled.
- ## Next steps
-To learn how to attach a storage account to your Media Services account, see [Create an account](./create-account-howto.md).
\ No newline at end of file
+To learn how to attach a storage account to your Media Services account, see [Create an account](./create-account-howto.md).
mysql https://docs.microsoft.com/en-us/azure/mysql/flexible-server/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/overview.md
@@ -126,20 +126,20 @@ The service runs the community version of MySQL. This allows full application co
One of the advantages of running your workload in Azure is its global reach. The flexible server for Azure Database for MySQL is available today in the following Azure regions:
-| Region | High Availability |
-| | |
-| West Europe | :heavy_check_mark: |
-| North Europe | :heavy_check_mark: |
-| UK South | :x: |
-| East US 2 | :heavy_check_mark: |
-| West US 2 | :heavy_check_mark: |
-| Central US | :x: |
-| East US | :heavy_check_mark: |
-| Canada Central | :x: |
-| Southeast Asia | :heavy_check_mark: |
-| Korea Central | :x: |
-| Japan East | :x: |
-| Australia East | :heavy_check_mark: |
+| Region | Availability | Zone redundant HA |
+| | | |
+| West Europe | :heavy_check_mark: | :heavy_check_mark: |
+| North Europe | :heavy_check_mark: | :heavy_check_mark: |
+| UK South | :heavy_check_mark: | :x: |
+| East US 2 | :heavy_check_mark: | :heavy_check_mark: |
+| West US 2 | :heavy_check_mark: | :heavy_check_mark: |
+| Central US | :heavy_check_mark: | :x: |
+| East US | :heavy_check_mark: | :heavy_check_mark: |
+| Canada Central | :heavy_check_mark: | :x: |
+| Southeast Asia | :heavy_check_mark: | :heavy_check_mark: |
+| Korea Central | :heavy_check_mark: | :x: |
+| Japan East | :heavy_check_mark: | :x: |
+| Australia East | :heavy_check_mark: | :heavy_check_mark: |
We are working on adding new regions soon.
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-packet-capture-manage-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-packet-capture-manage-powershell.md
@@ -123,7 +123,7 @@ Once the preceding steps are complete, the packet capture agent is installed on
The next step is to retrieve the Network Watcher instance. This variable is passed to the `New-AzNetworkWatcherPacketCapture` cmdlet in step 4. ```powershell
-$networkWatcher = Get-AzResource -ResourceType "Microsoft.Network/networkWatchers" | Where {$_.Location -eq "WestCentralUS" }
+$networkWatcher = Get-AzNetworkWatcher | Where {$_.Location -eq "westcentralus" }
``` ### Step 2
@@ -275,4 +275,4 @@ Learn how to automate packet captures with Virtual machine alerts by viewing [Cr
Find if certain traffic is allowed in or out of your VM by visiting [Check IP flow verify](diagnose-vm-network-traffic-filtering-problem.md)
-<!-- Image references -->
\ No newline at end of file
+<!-- Image references -->
postgresql https://docs.microsoft.com/en-us/azure/postgresql/concepts-hyperscale-audit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-audit.md
@@ -0,0 +1,69 @@
+
+ Title: Audit logging - Azure Database for PostgreSQL - Hyperscale (Citus)
+description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL - Hyperscale (Citus).
++++ Last updated : 01/29/2021++
+# Audit logging in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+Audit logging of database activities in Azure Database for PostgreSQL - Hyperscale (Citus) is available through the PostgreSQL Audit extension: [pgAudit](https://www.pgaudit.org/). pgAudit provides detailed session and/or object audit logging.
+
+> [!IMPORTANT]
+> pgAudit is in preview on Azure Database for PostgreSQL - Hyperscale (Citus).
+
+If you want Azure resource-level logs for operations like compute and storage scaling, see the [Azure Activity Log](../azure-monitor/platform/platform-logs-overview.md).
+
+## Usage considerations
+By default, pgAudit log statements are emitted along with your regular log statements by using Postgres's standard logging facility. In Azure Database for PostgreSQL - Hyperscale (Citus), you can configure all logs to be sent to Azure Monitor Log store for later analytics in Log Analytics. If you enable Azure Monitor resource logging, your logs will be automatically sent (in JSON format) to Azure Storage, Event Hubs, and/or Azure Monitor logs, depending on your choice.
+
+## Enabling pgAudit
+
+The pgAudit extension is pre-installed and enabled on all Hyperscale (Citus)
+server group nodes. No action is required to enable it.
+
+## pgAudit settings
+
+pgAudit allows you to configure session or object audit logging. [Session audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#session-audit-logging) emits detailed logs of executed statements. [Object audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#object-audit-logging) is audit scoped to specific relations. You can choose to set up one or both types of logging.
+
+> [!NOTE]
+> pgAudit settings are specified globally and cannot be specified at a database or role level.
+>
+> Also, pgAudit settings are specified per-node in a server group. To make a change on all nodes, you must apply it to each node individually.
+
+You must configure pgAudit parameters to start logging. The [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#settings) provides the definition of each parameter. Test the parameters first and confirm that you are getting the expected behavior.
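For example, to check which pgAudit parameters are currently in effect on the node you're connected to, a read-only query such as the following can help (run it from psql or any Postgres client):

```sql
-- List the pgAudit-related settings and their current values on this node.
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'pgaudit%'
ORDER BY name;
```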
+
+> [!NOTE]
+> Setting `pgaudit.log_client` to ON will redirect logs to a client process (like psql) instead of being written to a file. This setting should generally be left disabled. <br> <br>
+> `pgaudit.log_level` is only enabled when `pgaudit.log_client` is on.
+
+> [!NOTE]
+> In Azure Database for PostgreSQL - Hyperscale (Citus), `pgaudit.log` cannot be set using a `-` (minus) sign shortcut as described in the pgAudit documentation. All required statement classes (READ, WRITE, etc.) should be individually specified.
+
+## Audit log format
+Each audit entry is indicated by `AUDIT:` near the beginning of the log line. The format of the rest of the entry is detailed in the [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#format).
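As a rough illustration only (the statement is hypothetical; the exact columns are defined by pgAudit), a session audit entry for an INSERT could look similar to this:

```text
AUDIT: SESSION,1,1,WRITE,INSERT,,,INSERT INTO account (id) VALUES (1),<not logged>
```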
+
+## Getting started
+To quickly get started, set `pgaudit.log` to `WRITE`, and open your server logs to review the output.
+
+## Viewing audit logs
+The way you access the logs depends on which endpoint you choose. For Azure Storage, see the [logs storage account](../azure-monitor/platform/resource-logs.md#send-to-azure-storage) article. For Event Hubs, see the [stream Azure logs](../azure-monitor/platform/resource-logs.md#send-to-azure-event-hubs) article.
+
+For Azure Monitor Logs, logs are sent to the workspace you selected. The Postgres logs use the **AzureDiagnostics** collection mode, so they can be queried from the AzureDiagnostics table. The fields in the table are described below. Learn more about querying and alerting in the [Azure Monitor Logs query](../azure-monitor/log-query/log-query-overview.md) overview.
+
+You can use this query to get started. You can configure alerts based on queries.
+
+Search for all pgAudit entries in Postgres logs for a particular server in the last day:
+```kusto
+AzureDiagnostics
+| where LogicalServerName_s == "myservername"
+| where TimeGenerated > ago(1d)
+| where Message contains "AUDIT:"
+```
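As a follow-on sketch that reuses the same AzureDiagnostics fields, a query like the following could back an alert on the volume of audit activity per hour:

```kusto
AzureDiagnostics
| where LogicalServerName_s == "myservername"
| where Message contains "AUDIT:"
| summarize AuditEntries = count() by bin(TimeGenerated, 1h)
```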
+
+## Next steps
+
+- [Learn how to set up logging in Azure Database for PostgreSQL - Hyperscale (Citus) and how to access logs](howto-hyperscale-logging.md)
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/common-questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/common-questions.md
@@ -5,7 +5,7 @@
Previously updated : 09/14/2020 Last updated : 02/01/2021
@@ -58,7 +58,7 @@ Yes, both in transit and at rest.
### How is managed identity used in Resource Mover?
-[Managed identity](../active-directory/managed-identities-azure-resources/overview.md) (formerly known as Managed Service Identity (MIS)) provides Azure services with an automatically managed identity in Azure AD.
+[Managed identity](../active-directory/managed-identities-azure-resources/overview.md) (formerly known as Managed Service Identity (MSI)) provides Azure services with an automatically managed identity in Azure AD.
- Resource Mover uses managed identity so that it can access Azure subscriptions to move resources across regions. - A move collection needs a system-assigned identity, with access to the subscription that contains resources you're moving.
resource-mover https://docs.microsoft.com/en-us/azure/resource-mover/support-matrix-move-region-azure-vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/resource-mover/support-matrix-move-region-azure-vm.md
@@ -150,7 +150,7 @@ Premium P20 or P30 or P40 or P50 disk | 16 KB or greater |20 MB/s | 1684 GB per
| | NIC | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process. Internal load balancer | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.
-Public load balancer | Not currently supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.
+Public load balancer | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.
Public IP address | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.<br/><br/> The public IP address is region-specific, and won't be retained in the target region after the move. Keep this in mind when you modify networking settings (including load balancing rules) in the target location. Network security group | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process. Reserved (static) IP address | Supported | You can't currently configure this. The value defaults to the source value. <br/><br/> If the NIC on the source VM has a static IP address, and the target subnet has the same IP address available, it's assigned to the target VM.<br/><br/> If the target subnet doesn't have the same IP address available, the initiate move for the VM will fail.
@@ -186,4 +186,4 @@ If you're using a network security group (NSG) rules to control outbound connect
## Next steps
-Try [moving an Azure VM](tutorial-move-region-virtual-machines.md) to another region with Resource Mover.
\ No newline at end of file
+Try [moving an Azure VM](tutorial-move-region-virtual-machines.md) to another region with Resource Mover.
search https://docs.microsoft.com/en-us/azure/search/search-howto-index-json-blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-json-blobs.md
@@ -6,383 +6,141 @@ description: Crawl Azure JSON blobs for text content using the Azure Cognitive S
+ Previously updated : 09/25/2020 Last updated : 02/01/2021 - # How to index JSON blobs using a Blob indexer in Azure Cognitive Search
-This article shows you how to configure an Azure Cognitive Search blob [indexer](search-indexer-overview.md) to extract structured content from JSON documents in Azure Blob storage and make it searchable in Azure Cognitive Search. This workflow creates an Azure Cognitive Search index and loads it with existing text extracted from JSON blobs.
-
-You can use the [portal](#json-indexer-portal), [REST APIs](#json-indexer-rest), or [.NET SDK](#json-indexer-dotnet) to index JSON content. Common to all approaches is that JSON documents are located in a blob container in an Azure Storage account. For guidance on pushing JSON documents from other non-Azure platforms, see [Data import in Azure Cognitive Search](search-what-is-data-import.md).
-
-JSON blobs in Azure Blob storage are typically either a single JSON document (parsing mode is `json`) or a collection of JSON entities. For collections, the blob could have an **array** of well-formed JSON elements (parsing mode is `jsonArray`). Blobs could also be composed of multiple individual JSON entities separated by a newline (parsing mode is `jsonLines`). The **parsingMode** parameter on the request determines the output structures.
-
-> [!NOTE]
-> For more information about indexing multiple search documents from a single blob, see [One-to-many indexing](search-howto-index-one-to-many-blobs.md).
-
-<a name="json-indexer-portal"></a>
-
-## Use the portal
-
-The easiest method for indexing JSON documents is to use a wizard in the [Azure portal](https://portal.azure.com/). By parsing metadata in the Azure blob container, the [**Import data**](search-import-data-portal.md) wizard can create a default index, map source fields to target index fields, and load the index in a single operation. Depending on the size and complexity of source data, you could have an operational full text search index in minutes.
-
-We recommend using the same region or location for both Azure Cognitive Search and Azure Storage for lower latency and to avoid bandwidth charges.
-
-### 1 - Prepare source data
-
-[Sign in to the Azure portal](https://portal.azure.com/) and [create a Blob container](../storage/blobs/storage-quickstart-blobs-portal.md) to contain your data. The Public Access Level can be set to any of its valid values.
-
-You will need the storage account name, container name, and an access key to retrieve your data in the **Import data** wizard.
-
-### 2 - Start Import data wizard
-
-In the Overview page of your search service, you can [start the wizard](search-import-data-portal.md) from the command bar.
-
- :::image type="content" source="medi2.png" alt-text="Import data command in portal" border="false":::
-
-### 3 - Set the data source
-
-In the **data source** page, the source must be **Azure Blob Storage**, with the following specifications:
-
-+ **Data to extract** should be *Content and metadata*. Choosing this option allows the wizard to infer an index schema and map the fields for import.
-
-+ **Parsing mode** should be set to *JSON*, *JSON array* or *JSON lines*.
-
- *JSON* articulates each blob as a single search document, showing up as an independent item in search results.
-
- *JSON array* is for blobs that contain well-formed JSON data - the well-formed JSON corresponds to an array of objects, or has a property which is an array of objects and you want each element to be articulated as a standalone, independent search document. If blobs are complex, and you don't choose *JSON array* the entire blob is ingested as a single document.
-
- *JSON lines* is for blobs composed of multiple JSON entities separated by a new-line, where you want each entity to be articulated as a standalone independent search document. If blobs are complex, and you don't choose *JSON lines* parsing mode, then the entire blob is ingested as a single document.
-
-+ **Storage container** must specify your storage account and container, or a connection string that resolves to the container. You can get connection strings on the Blob service portal page.
-
- :::image type="content" source="media/search-howto-index-json/import-wizard-json-data-source.png" alt-text="Blob data source definition" border="false":::
-
-### 4 - Skip the "Enrich content" page in the wizard
-
-Adding cognitive skills (or enrichment) is not an import requirement. Unless you have a specific need to [add AI enrichment](cognitive-search-concept-intro.md) to your indexing pipeline, you should skip this step.
-
-To skip the step, click the blue buttons at the bottom of the page for "Next" and "Skip".
-
-### 5 - Set index attributes
-
-In the **Index** page, you should see a list of fields with a data type and a series of checkboxes for setting index attributes. The wizard can generate a fields list based on metadata and by sampling the source data.
-
-You can bulk-select attributes by clicking the checkbox at the top of an attribute column. Choose **Retrievable** and **Searchable** for every field that should be returned to a client app and subject to full text search processing. You'll notice that integers are not full text or fuzzy searchable (numbers are evaluated verbatim and are often useful in filters).
-
-Review the description of [index attributes](/rest/api/searchservice/create-index#bkmk_indexAttrib) and [language analyzers](/rest/api/searchservice/language-support) for more information.
-
-Take a moment to review your selections. Once you run the wizard, physical data structures are created and you won't be able to edit these fields without dropping and recreating all objects.
-
- :::image type="content" source="media/search-howto-index-json/import-wizard-json-index.png" alt-text="Blob index definition" border="false":::
-
-### 6 - Create indexer
-
-Fully specified, the wizard creates three distinct objects in your search service. A data source object and index object are saved as named resources in your Azure Cognitive Search service. The last step creates an indexer object. Naming the indexer allows it to exist as a standalone resource, which you can schedule and manage independently of the index and data source object, created in the same wizard sequence.
-
-If you are not familiar with indexers, an *indexer* is a resource in Azure Cognitive Search that crawls an external data source for searchable content. The output of the **Import data** wizard is an indexer that crawls your JSON data source, extracts searchable content, and imports it into an index on Azure Cognitive Search.
-
- :::image type="content" source="media/search-howto-index-json/import-wizard-json-indexer.png" alt-text="Blob indexer definition" border="false":::
-
-Click **OK** to run the wizard and create all objects. Indexing commences immediately.
-
-You can monitor data import in the portal pages. Progress notifications indicate indexing status and how many documents are uploaded.
-
-When indexing is complete, you can use [Search explorer](search-explorer.md) to query your index.
-
-> [!NOTE]
-> If you don't see the data you expect, you might need to set more attributes on more fields. Delete the index and indexer you just created, and step through the wizard again, modifying your selections for index attributes in step 5.
-
-<a name="json-indexer-rest"></a>
-
-## Use REST APIs
-
-You can use the REST API to index JSON blobs, following a three-part workflow common to all indexers in Azure Cognitive Search: create a data source, create an index, create an indexer. Data extraction from blob storage occurs when you submit the Create Indexer request. After this request is finished, you will have a queryable index.
-
-You can review [REST example code](#rest-example) at the end of this section that shows how to create all three objects. This section also contains details about [JSON parsing modes](#parsing-modes), [single blobs](#parsing-single-blobs), [JSON arrays](#parsing-arrays), and [nested arrays](#nested-json-arrays).
-
-For code-based JSON indexing, use [Postman](search-get-started-rest.md) or [Visual Studio Code](search-get-started-vs-code.md) and the REST API to create these objects:
-
-+ [index](/rest/api/searchservice/create-index)
-+ [data source](/rest/api/searchservice/create-data-source)
-+ [indexer](/rest/api/searchservice/create-indexer)
-
-Order of operations requires that you create and call objects in this order. In contrast with the portal workflow, a code approach requires an available index to accept the JSON documents sent through the **Create Indexer** request.
-
-JSON blobs in Azure Blob storage are typically either a single JSON document or a JSON "array". The blob indexer in Azure Cognitive Search can parse either construction, depending on how you set the **parsingMode** parameter on the request.
-
-| JSON document | parsingMode | Description | Availability |
-|--|-|--|--|
-| One per blob | `json` | Parses JSON blobs as a single chunk of text. Each JSON blob becomes a single Azure Cognitive Search document. | Generally available in both [REST](/rest/api/searchservice/indexer-operations) API and [.NET](/dotnet/api/azure.search.documents.indexes.models.searchindexer) SDK. |
-| Multiple per blob | `jsonArray` | Parses a JSON array in the blob, where each element of the array becomes a separate Azure Cognitive Search document. | Generally available in both [REST](/rest/api/searchservice/indexer-operations) API and [.NET](/dotnet/api/azure.search.documents.indexes.models.searchindexer) SDK. |
-| Multiple per blob | `jsonLines` | Parses a blob which contains multiple JSON entities (an "array") separated by a newline, where each entity becomes a separate Azure Cognitive Search document. | Generally available in both [REST](/rest/api/searchservice/indexer-operations) API and [.NET](/dotnet/api/azure.search.documents.indexes.models.searchindexer) SDK. |
-
-### 1 - Assemble inputs for the request
-
-For each request, you must provide the service name and admin key for Azure Cognitive Search (in the POST header), and the storage account name and key for blob storage. You can use a [Web API test tool](search-get-started-rest.md) to send HTTP requests to Azure Cognitive Search.
-
-Copy the following four values into Notepad so that you can paste them into a request:
-
-+ Azure Cognitive Search service name
-+ Azure Cognitive Search admin key
-+ Azure storage account name
-+ Azure storage account key
-
-You can find these values in the portal:
-
-1. In the portal pages for Azure Cognitive Search, copy the search service URL from the Overview page.
-
-2. In the left navigation pane, click **Keys** and then copy either the primary or secondary key (they are equivalent).
-
-3. Switch to the portal pages for your storage account. In the left navigation pane, under **Settings**, click **Access Keys**. This page provides both the account name and key. Copy the storage account name and one of the keys to Notepad.
-
-### 2 - Create a data source
+This article shows you how to [configure a blob indexer](search-howto-indexing-azure-blob-storage.md) for blobs that consist of JSON documents. JSON blobs in Azure Blob storage commonly assume any of these forms:
-This step provides data source connection information used by the indexer. The data source is a named object in Azure Cognitive Search that persists the connection information. The data source type, `azureblob`, determines which data extraction behaviors are invoked by the indexer.
++ A single JSON document
++ A JSON document containing an array of well-formed JSON elements
++ A JSON document containing multiple entities, separated by a newline
-Substitute valid values for service name, admin key, storage account, and account key placeholders.
+The blob indexer provides a **`parsingMode`** parameter to optimize the output of the search document based on the structure. Parsing modes consist of the following options:
-```http
- POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key for Azure Cognitive Search]
-
- {
- "name" : "my-blob-datasource",
- "type" : "azureblob",
- "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
- "container" : { "name" : "my-container", "query" : "optional, my-folder" }
- }
-```
-
-### 3 - Create a target search index
-
-Indexers are paired with an index schema. If you are using the API (rather than the portal), prepare an index in advance so that you can specify it on the indexer operation.
+| parsingMode | JSON document | Description |
+|--|-|--|
+| **`json`** | One per blob | (default) Parses JSON blobs as a single chunk of text. Each JSON blob becomes a single search document. |
+| **`jsonArray`** | Multiple per blob | Parses a JSON array in the blob, where each element of the array becomes a separate search document. |
+| **`jsonLines`** | Multiple per blob | Parses a blob that contains multiple JSON entities (also an array), with individual elements separated by a newline. The indexer starts a new search document after each new line. |
-The index stores searchable content in Azure Cognitive Search. To create an index, provide a schema that specifies the fields in a document, attributes, and other constructs that shape the search experience. If you create an index that has the same field names and data types as the source, the indexer will match the source and destination fields, saving you the work of having to explicitly map the fields.
+For both **`jsonArray`** and **`jsonLines`**, you should review [Indexing one blob to produce many search documents](search-howto-index-one-to-many-blobs.md) to understand how the blob indexer handles disambiguation of the document key for multiple search documents produced from the same blob.
-The following example shows a [Create Index](/rest/api/searchservice/create-index) request. The index will have a searchable `content` field to store the text extracted from blobs:
+Within the indexer definition, you can optionally set [field mappings](search-indexer-field-mappings.md) to choose which properties of the source JSON document are used to populate your target search index. For example, when using the **`jsonArray`** parsing mode, if the array exists as a lower-level property, you can set a **`documentRoot`** property indicating where the array is placed within the blob.
-```http
- POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key for Azure Cognitive Search]
-
- {
- "name" : "my-target-index",
- "fields": [
- { "name": "id", "type": "Edm.String", "key": true, "searchable": false },
- { "name": "content", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false }
- ]
- }
-```
+The following sections describe each mode in more detail. If you are unfamiliar with indexer clients and concepts, see [Create a search indexer](search-howto-create-indexers.md). You should also be familiar with the details of [basic blob indexer configuration](search-howto-indexing-azure-blob-storage.md), which isn't repeated here.
+<a name="parsing-single-blobs"></a>
-### 4 - Configure and run the indexer
+## Index single JSON documents (one per blob)
-As with an index and a data source, and indexer is also a named object that you create and reuse on an Azure Cognitive Search service. A fully specified request to create an indexer might look as follows:
+By default, blob indexers parse JSON blobs as a single chunk of text, one search document for each blob in a container. If the JSON is structured, the search document can reflect that structure, with individual elements represented as individual fields. For example, assume you have the following JSON document in Azure Blob storage:
```http
- POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key for Azure Cognitive Search]
-
- {
- "name" : "my-json-indexer",
- "dataSourceName" : "my-blob-datasource",
- "targetIndexName" : "my-target-index",
- "schedule" : { "interval" : "PT2H" },
- "parameters" : { "configuration" : { "parsingMode" : "json" } }
+{
+ "article" : {
+ "text" : "A hopefully useful article explaining how to parse JSON blobs",
+ "datePublished" : "2020-04-13",
+ "tags" : [ "search", "storage", "howto" ]
}
+}
```
-Indexer configuration is in the body of the request. It requires a data source and an empty target index that already exists in Azure Cognitive Search.
-
-Schedule and parameters are optional. If you omit them, the indexer runs immediately, using `json` as the parsing mode.
-
-This particular indexer does not include field mappings. Within the indexer definition, you can leave out **field mappings** if the properties of the source JSON document match the fields of your target search index.
--
-### REST Example
-
-This section is a recap of all the requests used for creating objects. For a discussion of component parts, see the previous sections in this article.
-
-### Data source request
-
-All indexers require a data source object that provides connection information to existing data.
-
-```http
- POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key for Azure Cognitive Search]
-
- {
- "name" : "my-blob-datasource",
- "type" : "azureblob",
- "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
- "container" : { "name" : "my-container", "query" : "optional, my-folder" }
- }
-```
-
-### Index request
+The blob indexer parses the JSON document into a single search document, loading an index by matching "text", "datePublished", and "tags" from the source against identically named and typed target index fields. Given an index with "text", "datePublished", and "tags" fields, the blob indexer can infer the correct mapping without a field mapping present in the request.
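For reference, a minimal target index for the sample document above might be defined as follows (a sketch only: the field list mirrors the blob, and the key field shown here is purely illustrative):

```http
POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
  "name" : "my-target-index",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true, "searchable": false },
    { "name": "text", "type": "Edm.String", "searchable": true },
    { "name": "datePublished", "type": "Edm.DateTimeOffset" },
    { "name": "tags", "type": "Collection(Edm.String)", "searchable": true }
  ]
}
```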
-All indexers require a target index that receives the data. The body of the request defines the index schema, consisting of fields, attributed to support the desired behaviors in a searchable index. This index should be empty when you run the indexer.
+Although the default behavior is one search document per JSON blob, setting the 'json' parsing mode changes the internal field mappings for content, promoting fields inside `content` to actual fields in the search index. An example indexer definition for the **`json`** parsing mode might look like this:
```http
- POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key for Azure Cognitive Search]
-
- {
- "name" : "my-target-index",
- "fields": [
- { "name": "id", "type": "Edm.String", "key": true, "searchable": false },
- { "name": "content", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false }
- ]
- }
-```
-
-### Indexer request
-
-This request shows a fully-specified indexer. It includes field mappings, which were omitted in previous examples. Recall that "schedule", "parameters", and "fieldMappings" are optional as long as there is an available default. Omitting "schedule" causes the indexer to run immediately. Omitting "parsingMode" causes the index to use the "json" default.
+POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
-Creating the indexer on Azure Cognitive Search triggers data import. It runs immediately, and thereafter on a schedule if you've provided one.
-
-```http
- POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key for Azure Cognitive Search]
-
- {
- "name" : "my-json-indexer",
- "dataSourceName" : "my-blob-datasource",
- "targetIndexName" : "my-target-index",
- "schedule" : { "interval" : "PT2H" },
- "parameters" : { "configuration" : { "parsingMode" : "json" } },
- "fieldMappings" : [
- { "sourceFieldName" : "/article/text", "targetFieldName" : "text" },
- { "sourceFieldName" : "/article/datePublished", "targetFieldName" : "date" },
- { "sourceFieldName" : "/article/tags", "targetFieldName" : "tags" }
- ]
- }
+{
+ "name" : "my-json-indexer",
+ "dataSourceName" : "my-blob-datasource",
+ "targetIndexName" : "my-target-index",
+ "parameters" : { "configuration" : { "parsingMode" : "json" } }
+}
```
-<a name="json-indexer-dotnet"></a>
-
-## Use .NET SDK
-
-The .NET SDK has full parity with the REST API. We recommend that you review the previous REST API section to learn concepts, workflow, and requirements. You can then refer to following .NET API reference documentation to implement a JSON indexer in managed code.
-
-+ [azure.search.documents.indexes.models.searchindexerdatasourceconnection](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourceconnection)
-+ [azure.search.documents.indexes.models.searchindexerdatasourcetype](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype)
-+ [azure.search.documents.indexes.models.searchindex](/dotnet/api/azure.search.documents.indexes.models.searchindex)
-+ [azure.search.documents.indexes.models.searchindexer](/dotnet/api/azure.search.documents.indexes.models.searchindexer)
-
-<a name="parsing-modes"></a>
-
-## Parsing modes
-
-JSON blobs can assume multiple forms. The **parsingMode** parameter on the JSON indexer determines how JSON blob content is parsed and structured in an Azure Cognitive Search index:
-
-| parsingMode | Description |
-|-|-|
-| `json` | Index each blob as a single document. This is the default. |
-| `jsonArray` | Choose this mode if your blobs consist of JSON arrays, and you need each element of the array to become a separate document in Azure Cognitive Search. |
-|`jsonLines` | Choose this mode if your blobs consist of multiple JSON entities, that are separated by a new line, and you need each entity to become a separate document in Azure Cognitive Search. |
-
-You can think of a document as a single item in search results. If you want each element in the array to show up in search results as an independent item, then use the `jsonArray` or `jsonLines` option as appropriate.
-
-Within the indexer definition, you can optionally use [field mappings](search-indexer-field-mappings.md) to choose which properties of the source JSON document are used to populate your target search index. For `jsonArray` parsing mode, if the array exists as a lower-level property, you can set a document root indicating where the array is placed within the blob.
-
-> [!IMPORTANT]
-> When you use `json`, `jsonArray` or `jsonLines` parsing mode, Azure Cognitive Search assumes that all blobs in your data source contain JSON. If you need to support a mix of JSON and non-JSON blobs in the same data source, let us know on [our UserVoice site](https://feedback.azure.com/forums/263029-azure-search).
--
-<a name="parsing-single-blobs"></a>
-
-## Parse single JSON blobs
+> [!NOTE]
+> As with all indexers, if fields do not clearly match, you should expect to explicitly specify individual [field mappings](search-indexer-field-mappings.md) unless you are using the implicit field mappings available for blob content and metadata, as described in [basic blob indexer configuration](search-howto-indexing-azure-blob-storage.md).
-By default, [Azure Cognitive Search blob indexer](search-howto-indexing-azure-blob-storage.md) parses JSON blobs as a single chunk of text. Often, you want to preserve the structure of your JSON documents. For example, assume you have the following JSON document in Azure Blob storage:
+### json example (single hotel JSON files)
-```http
- {
- "article" : {
- "text" : "A hopefully useful article explaining how to parse JSON blobs",
- "datePublished" : "2016-04-13",
- "tags" : [ "search", "storage", "howto" ]
- }
- }
-```
+The [hotel JSON document data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotel-json-documents) on GitHub is helpful for testing JSON parsing, where each blob represents a structured JSON file. You can upload the data files to Blob storage and use the **Import data** wizard to quickly evaluate how this content is parsed into individual search documents.
-The blob indexer parses the JSON document into a single Azure Cognitive Search document. The indexer loads an index by matching "text", "datePublished", and "tags" from the source against identically named and typed target index fields.
-
-As noted, field mappings are not required. Given an index with "text", "datePublished, and "tags" fields, the blob indexer can infer the correct mapping without a field mapping present in the request.
+The data set consists of five blobs, each containing a hotel document with an address collection and a rooms collection. The blob indexer detects both collections and reflects the structure of the input documents in the index schema.
<a name="parsing-arrays"></a> ## Parse JSON arrays
-Alternatively, you can use the JSON array option. This option is useful when blobs contain an *array of well-formed JSON objects*, and you want each element to become a separate Azure Cognitive Search document. For example, given the following JSON blob, you can populate your Azure Cognitive Search index with three separate documents, each with "id" and "text" fields.
+Alternatively, you can use the JSON array option. This option is useful when blobs contain an array of well-formed JSON objects, and you want each element to become a separate search document. Using **`jsonArray`**, the following JSON blob produces three separate documents, each with `"id"` and `"text"` fields.
```text
- [
- { "id" : "1", "text" : "example 1" },
- { "id" : "2", "text" : "example 2" },
- { "id" : "3", "text" : "example 3" }
- ]
+[
+ { "id" : "1", "text" : "example 1" },
+ { "id" : "2", "text" : "example 2" },
+ { "id" : "3", "text" : "example 3" }
+]
```
-For a JSON array, the indexer definition should look similar to the following example. Notice that the parsingMode parameter specifies the `jsonArray` parser. Specifying the right parser and having the right data input are the only two array-specific requirements for indexing JSON blobs.
+The **`parameters`** property on the indexer contains parsing mode values. For a JSON array, the indexer definition should look similar to the following example.
```http
- POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
- {
- "name" : "my-json-indexer",
- "dataSourceName" : "my-blob-datasource",
- "targetIndexName" : "my-target-index",
- "schedule" : { "interval" : "PT2H" },
- "parameters" : { "configuration" : { "parsingMode" : "jsonArray" } }
- }
+POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
+
+{
+ "name" : "my-json-indexer",
+ "dataSourceName" : "my-blob-datasource",
+ "targetIndexName" : "my-target-index",
+ "parameters" : { "configuration" : { "parsingMode" : "jsonArray" } }
+}
```
-Again, notice that field mappings can be omitted. Assuming an index with identically named "id" and "text" fields, the blob indexer can infer the correct mapping without an explicit field mapping list.
+### jsonArrays example (clinical trials sample data)
+
+The [clinical trials JSON data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/clinical-trials-json) on GitHub is helpful for testing JSON array parsing. You can upload the data files to Blob storage and use the **Import data** wizard to quickly evaluate how this content is parsed into individual search documents.
+
+The data set consists of eight blobs, each containing a JSON array of entities, for a total of 100 entities. The entities vary as to which fields are populated, but the end result is one search document per entity, from all arrays, in all blobs.
<a name="nested-json-arrays"></a>
-## Parse nested arrays
-For JSON arrays having nested elements, you can specify a `documentRoot` to indicate a multi-level structure. For example, if your blobs look like this:
+### Parsing nested JSON arrays
+
+For JSON arrays having nested elements, you can specify a **`documentRoot`** to indicate a multi-level structure. For example, if your blobs look like this:
```http
- {
- "level1" : {
- "level2" : [
- { "id" : "1", "text" : "Use the documentRoot property" },
- { "id" : "2", "text" : "to pluck the array you want to index" },
- { "id" : "3", "text" : "even if it's nested inside the document" }
- ]
- }
+{
+ "level1" : {
+ "level2" : [
+ { "id" : "1", "text" : "Use the documentRoot property" },
+ { "id" : "2", "text" : "to pluck the array you want to index" },
+ { "id" : "3", "text" : "even if it's nested inside the document" }
+ ]
}
+}
``` Use this configuration to index the array contained in the `level2` property: ```http
- {
- "name" : "my-json-array-indexer",
- ... other indexer properties
- "parameters" : { "configuration" : { "parsingMode" : "jsonArray", "documentRoot" : "/level1/level2" } }
- }
+{
+ "name" : "my-json-array-indexer",
+ ... other indexer properties
+ "parameters" : { "configuration" : { "parsingMode" : "jsonArray", "documentRoot" : "/level1/level2" } }
+}
```
-## Parse blobs separated by newlines
+## Parse JSON entities separated by newlines
-If your blob contains multiple JSON entities separated by a newline, and you want each element to become a separate Azure Cognitive Search document, you can opt for the JSON lines option. For example, given the following blob (where there are three different JSON entities), you can populate your Azure Cognitive Search index with three separate documents, each with "id" and "text" fields.
+If your blob contains multiple JSON entities separated by a newline, and you want each element to become a separate search document, use **`jsonLines`**.
```text { "id" : "1", "text" : "example 1" }
@@ -390,70 +148,69 @@ If your blob contains multiple JSON entities separated by a newline, and you wan
{ "id" : "3", "text" : "example 3" } ```
-For JSON lines, the indexer definition should look similar to the following example. Notice that the parsingMode parameter specifies the `jsonLines` parser.
+For JSON lines, the indexer definition should look similar to the following example.
```http
- POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
- {
- "name" : "my-json-indexer",
- "dataSourceName" : "my-blob-datasource",
- "targetIndexName" : "my-target-index",
- "schedule" : { "interval" : "PT2H" },
- "parameters" : { "configuration" : { "parsingMode" : "jsonLines" } }
- }
+POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
+
+{
+ "name" : "my-json-indexer",
+ "dataSourceName" : "my-blob-datasource",
+ "targetIndexName" : "my-target-index",
+ "parameters" : { "configuration" : { "parsingMode" : "jsonLines" } }
+}
```
-Again, notice that field mappings can be omitted, similar to the `jsonArray` parsing mode.
+### jsonLines example (caselaw sample data)
-## Add field mappings
+The [caselaw JSON data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/caselaw) on GitHub is helpful for testing JSON newline parsing. As with other samples, you can upload this data to Blob storage and use the **Import data** wizard to quickly evaluate the impact of parsing mode on individual blobs.
-When source and target fields are not perfectly aligned, you can define a field mapping section in the request body for explicit field-to-field associations.
+The data set consists of one blob containing 10 JSON entities separated by a newline, where each entity describes a single legal case. The end result is one search document per entity.
-Currently, Azure Cognitive Search cannot index arbitrary JSON documents directly because it supports only primitive data types, string arrays, and GeoJSON points. However, you can use **field mappings** to pick parts of your JSON document and "lift" them into top-level fields of the search document. To learn about field mappings basics, see [Field mappings in Azure Cognitive Search indexers](search-indexer-field-mappings.md).
+## Map JSON fields to search fields
-Revisiting our example JSON document:
+Field mappings are used to associate a source field with a destination field in situations where the field names and types are not identical. But field mappings can also be used to match parts of a JSON document and "lift" them into top-level fields of the search document.
+
+The following example illustrates this scenario. For more information about field mappings in general, see [field mappings](search-indexer-field-mappings.md).
```http
- {
- "article" : {
- "text" : "A hopefully useful article explaining how to parse JSON blobs",
- "datePublished" : "2016-04-13"
- "tags" : [ "search", "storage", "howto" ]
- }
+{
+ "article" : {
+ "text" : "A hopefully useful article explaining how to parse JSON blobs",
+ "datePublished" : "2016-04-13"
+ "tags" : [ "search", "storage", "howto" ]
}
+}
``` Assume a search index with the following fields: `text` of type `Edm.String`, `date` of type `Edm.DateTimeOffset`, and `tags` of type `Collection(Edm.String)`. Notice the discrepancy between "datePublished" in the source and `date` field in the index. To map your JSON into the desired shape, use the following field mappings: ```http
- "fieldMappings" : [
- { "sourceFieldName" : "/article/text", "targetFieldName" : "text" },
- { "sourceFieldName" : "/article/datePublished", "targetFieldName" : "date" },
- { "sourceFieldName" : "/article/tags", "targetFieldName" : "tags" }
- ]
+"fieldMappings" : [
+ { "sourceFieldName" : "/article/text", "targetFieldName" : "text" },
+ { "sourceFieldName" : "/article/datePublished", "targetFieldName" : "date" },
+ { "sourceFieldName" : "/article/tags", "targetFieldName" : "tags" }
+ ]
```
-The source field names in the mappings are specified using the [JSON Pointer](https://tools.ietf.org/html/rfc6901) notation. You start with a forward slash to refer to the root of your JSON document, then pick the desired property (at arbitrary level of nesting) by using forward slash-separated path.
+Source fields are specified using the [JSON Pointer](https://tools.ietf.org/html/rfc6901) notation. You start with a forward slash to refer to the root of your JSON document, then pick the desired property (at arbitrary level of nesting) by using forward slash-separated path.
You can also refer to individual array elements by using a zero-based index. For example, to pick the first element of the "tags" array from the above example, use a field mapping like this:

```http
- { "sourceFieldName" : "/article/tags/0", "targetFieldName" : "firstTag" }
+{ "sourceFieldName" : "/article/tags/0", "targetFieldName" : "firstTag" }
```

> [!NOTE]
-> If a source field name in a field mapping path refers to a property that doesn't exist in JSON, that mapping is skipped without an error. This is done so that we can support documents with a different schema (which is a common use case). Because there is no validation, you need to take care to avoid typos in your field mapping specification.
+> If **`sourceFieldName`** refers to a property that doesn't exist in the JSON blob, that mapping is skipped without an error. This behavior allows indexing to continue for JSON blobs that have a different schema (which is a common use case). Because there is no validation check, check the mappings carefully for typos so that you aren't losing documents for the wrong reason.
>
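As a sketch of how the pieces above fit together, the following Python example (using the `requests` package) submits an indexer definition that combines a parsing mode with these field mappings through the same REST endpoint shown earlier. The service name, admin key, and resource names are placeholders, and the `json` parsing mode is assumed for this single-document example.

```python
import requests

# Placeholders -- substitute your own service name, admin key, and resource names.
SERVICE = "my-search-service"
ADMIN_KEY = "<admin key>"
ENDPOINT = f"https://{SERVICE}.search.windows.net/indexers?api-version=2020-06-30"

indexer = {
    "name": "my-json-indexer",
    "dataSourceName": "my-blob-datasource",
    "targetIndexName": "my-target-index",
    "parameters": {"configuration": {"parsingMode": "json"}},
    # JSON Pointer paths "lift" nested properties into top-level index fields.
    "fieldMappings": [
        {"sourceFieldName": "/article/text", "targetFieldName": "text"},
        {"sourceFieldName": "/article/datePublished", "targetFieldName": "date"},
        {"sourceFieldName": "/article/tags", "targetFieldName": "tags"},
    ],
}

response = requests.post(
    ENDPOINT,
    json=indexer,
    headers={"api-key": ADMIN_KEY, "Content-Type": "application/json"},
)
response.raise_for_status()
print(response.json())
```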
-## Help us make Azure Cognitive Search better
-If you have feature requests or ideas for improvements, provide your input on [UserVoice](https://feedback.azure.com/forums/263029-azure-search/). If you need help using the existing feature, post your question on [Stack Overflow](https://stackoverflow.microsoft.com/questions/tagged/18870).
-
-## See also
+## Next steps
-+ [Indexers in Azure Cognitive Search](search-indexer-overview.md)
-+ [Indexing Azure Blob Storage with Azure Cognitive Search](search-howto-index-json-blobs.md)
-+ [Indexing CSV blobs with Azure Cognitive Search blob indexer](search-howto-index-csv-blobs.md)
++ [Configure blob indexers](search-howto-indexing-azure-blob-storage.md)
++ [Define field mappings](search-indexer-field-mappings.md)
++ [Indexers overview](search-indexer-overview.md)
++ [How to index CSV blobs with a blob indexer](search-howto-index-csv-blobs.md)
+ [Tutorial: Search semi-structured data from Azure Blob storage](search-semi-structured-data.md)
\ No newline at end of file
security-center https://docs.microsoft.com/en-us/azure/security-center/upcoming-changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
@@ -79,7 +79,7 @@ Learn more about these recommendations in the [security recommendations referenc
**Estimated date for change:** Q2 2021
-The current version of the recommendation **Sensitive data in your SQL databases should be classified** in the **Apply data classification** security control will be deprecated and replaced with a new version that's better aligned with Microsoft's data classification strategy. As a result:
+The current version of the recommendation **Sensitive data in your SQL databases should be classified** in the **Apply data classification** security control will be replaced with a new version that's better aligned with Microsoft's data classification strategy. As a result:
- The recommendation will no longer affect your secure score - The security control ("Apply data classification") will no longer affect your secure score
@@ -89,4 +89,4 @@ The current version of the recommendation **Sensitive data in your SQL databases
## Next steps
-For all recent changes to the product, see [What's new in Azure Security Center?](release-notes.md).
\ No newline at end of file
+For all recent changes to the product, see [What's new in Azure Security Center?](release-notes.md).
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-azure-active-directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-active-directory.md
@@ -1,6 +1,6 @@
Title: Connect Azure Active Directory data to Azure Sentinel | Microsoft Docs
-description: Learn how to collect data from Azure Active Directory, and stream Azure AD sign-in logs and audit logs into Azure Sentinel.
+description: Learn how to collect data from Azure Active Directory, and stream Azure AD sign-in, audit, and provisioning logs into Azure Sentinel.
documentationcenter: na
@@ -18,13 +18,29 @@ Last updated 01/20/2021
-# Connect data from Azure Active Directory (Azure AD)
+# Connect Azure Active Directory (Azure AD) data to Azure Sentinel
-You can use Azure Sentinel's built-in connector to collect data from [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) and stream it into Azure Sentinel. The connector allows you to stream [sign-in logs](../active-directory/reports-monitoring/concept-sign-ins.md) and [audit logs](../active-directory/reports-monitoring/concept-audit-logs.md).
+You can use Azure Sentinel's built-in connector to collect data from [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) and stream it into Azure Sentinel. The connector allows you to stream the following log types:
+- [**Sign-in logs**](../active-directory/reports-monitoring/concept-all-sign-ins.md), which contain information about [interactive user sign-ins](../active-directory/reports-monitoring/concept-all-sign-ins.md#user-sign-ins) where a user provides an authentication factor.
+
+ The Azure AD connector now includes the following three additional categories of sign-in logs, all currently in **PREVIEW**:
+
+ - [**Non-interactive user sign-in logs**](../active-directory/reports-monitoring/concept-all-sign-ins.md#non-interactive-user-sign-ins), which contain information about sign-ins performed by a client on behalf of a user without any interaction or authentication factor from the user.
+
+ - [**Service principal sign-in logs**](../active-directory/reports-monitoring/concept-all-sign-ins.md#service-principal-sign-ins), which contain information about sign-ins by apps and service principals that do not involve any user. In these sign-ins, the app or service provides a credential on its own behalf to authenticate or access resources.
+
+ - [**Managed Identity sign-in logs**](../active-directory/reports-monitoring/concept-all-sign-ins.md#managed-identity-for-azure-resources-sign-ins), which contain information about sign-ins by Azure resources that have secrets managed by Azure. For more information, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md)
+
+- [**Audit logs**](../active-directory/reports-monitoring/concept-audit-logs.md), which contain information about system activity relating to user and group management, managed applications, and directory activities.
+
+- [**Provisioning logs**](../active-directory/reports-monitoring/concept-provisioning-logs.md) (also in **PREVIEW**), which contain system activity information about users, groups, and roles provisioned by the Azure AD provisioning service.
+
+> [!IMPORTANT]
+> As indicated above, some of the available log types are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Prerequisites

-- You must have an [Azure AD Premium P2](https://azure.microsoft.com/pricing/details/active-directory/) subscription to ingest sign-in logs into Azure Sentinel. Additional per-gigabyte charges may apply for Azure Monitor (Log Analytics) and Azure Sentinel.
+- Any Azure AD license (Free/O365/P1/P2) is sufficient to ingest sign-in logs into Azure Sentinel. Additional per-gigabyte charges may apply for Azure Monitor (Log Analytics) and Azure Sentinel.
- Your user must be assigned the Azure Sentinel Contributor role on the workspace.
@@ -38,10 +54,7 @@ You can use Azure Sentinel's built-in connector to collect data from [Azure Acti
1. From the data connectors gallery, select **Azure Active Directory** and then select **Open connector page**.
-1. Mark the check boxes next to the log types you want to stream into Azure Sentinel, and click **Connect**. These are the log types you can choose from:
-
- - **Sign-in logs**: Information about the usage of managed applications and user sign-in activities.
- - **Audit logs**: System activity information about user and group management, managed applications, and directory activities.
+1. Mark the check boxes next to the log types you want to stream into Azure Sentinel (see above), and click **Connect**.
## Find your data
@@ -49,10 +62,14 @@ After a successful connection is established, the data appears in **Logs**, unde
- `SigninLogs`
- `AuditLogs`
+- `AADNonInteractiveUserSignInLogs`
+- `AADServicePrincipalSignInLogs`
+- `AADManagedIdentitySignInLogs`
+- `AADProvisioningLogs`
To query the Azure AD logs, enter the relevant table name at the top of the query window.

## Next steps

In this document, you learned how to connect Azure Active Directory to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
-- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Learn how to [get visibility into your data and potential threats](quickstart-get-visibility.md).
- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
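As a hedged illustration of querying the tables listed above from code rather than the Logs blade, here is a minimal Python sketch assuming the `azure-monitor-query` and `azure-identity` packages and a placeholder workspace ID; it is not part of the connector setup itself.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Count recent sign-ins per Azure AD log table streamed by the connector.
query = """
union SigninLogs, AADNonInteractiveUserSignInLogs, AADServicePrincipalSignInLogs,
      AADManagedIdentitySignInLogs
| summarize SignIns = count() by Type
"""

response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```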
spring-cloud https://docs.microsoft.com/en-us/azure/spring-cloud/spring-cloud-howto-staging-environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/spring-cloud-howto-staging-environment.md
@@ -17,6 +17,7 @@ This article discusses how to set up a staging deployment by using the blue-gree
## Prerequisites
+* Azure Spring Cloud instance with *Standard* **Pricing tier**.
* A running application. See [Quickstart: Deploy your first Azure Spring Cloud application](spring-cloud-quickstart.md).
* Azure CLI [asc extension](https://docs.microsoft.com/cli/azure/azure-cli-extensions-overview)
@@ -73,7 +74,7 @@ View deployed apps using the following procedures.
1. In the Azure CLI, create a new deployment, and give it the staging deployment name "green."

```azurecli
- az spring-cloud app deployment create -g <resource-group-name> -s <service-instance-name> --app default -n green --jar-path gateway/target/gateway.jar
+ az spring-cloud app deployment create -g <resource-group-name> -s <service-instance-name> --app <appName> -n green --jar-path gateway/target/gateway.jar
```

1. After the CLI deployment finishes successfully, access the app page from the **Application Dashboard**, and view all your instances in the **Deployments** tab on the left.
@@ -108,11 +109,11 @@ To verify that the green staging development is working:
[ ![Deployments set staging deployment](media/spring-cloud-blue-green-staging/set-staging-deployment.png)](media/spring-cloud-blue-green-staging/set-staging-deployment.png)
-1. Return to the **Deployment management** page. Your `green` deployment deployment status should show *Up*. This is now the running production build.
+1. Return to the **Deployment management** page. Set the `green` deployment to `production`. When the setting finishes, your `green` deployment status should show *Up*. This is now the running production build.
[ ![Deployments set staging deployment result](media/spring-cloud-blue-green-staging/set-staging-deployment-result.png)](media/spring-cloud-blue-green-staging/set-staging-deployment-result.png)
-1. Copy and paste the URL into a new browser window, and the new application page should be displayed with your changes.
+1. The URL of the app should display your changes.
>[!NOTE] > After you've set the green deployment as the production environment, the previous deployment becomes the staging deployment.
@@ -137,4 +138,4 @@ az spring-cloud app deployment delete -n <staging-deployment-name> -g <resource-
## Next steps
-* [CI/CD for Azure Spring Cloud](https://review.docs.microsoft.com/azure/spring-cloud/spring-cloud-howto-cicd?branch=pr-en-us-142929&pivots=programming-language-java)
\ No newline at end of file
+* [CI/CD for Azure Spring Cloud](https://review.docs.microsoft.com/azure/spring-cloud/spring-cloud-howto-cicd?branch=pr-en-us-142929&pivots=programming-language-java)
spring-cloud https://docs.microsoft.com/en-us/azure/spring-cloud/spring-cloud-quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/spring-cloud-quickstart.md
@@ -411,7 +411,7 @@ The following procedure builds and deploys the application using the Azure CLI.
1. Create the app with public endpoint assigned:

```azurecli
- az spring-cloud app create -n hellospring -s <service instance name> -g <resource group name> --is-public
+ az spring-cloud app create -n hellospring -s <service instance name> -g <resource group name> --is-public true
```

1. Deploy the Jar file for the app (`target\hellospring-0.0.1-SNAPSHOT.jar` on Windows):
storage https://docs.microsoft.com/en-us/azure/storage/blobs/blob-inventory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/blob-inventory.md
@@ -28,6 +28,8 @@ The blob inventory preview is available on storage accounts in the following reg
- France Central - Canada Central - Canada East
+- East US
+- East US2
### Pricing and billing
storage https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-query-acceleration-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-query-acceleration-how-to.md
@@ -173,10 +173,10 @@ Update-Module -Name Az
cd myProject
```
-2. Install the `12.5.0-preview.6` version of the Azure Blob storage client library for .NET package by using the `dotnet add package` command.
+2. Install the `12.5.0-preview.6` version or later of the Azure Blob storage client library for .NET package by using the `dotnet add package` command.
```console
- dotnet add package Azure.Storage.Blobs -v 12.6.0
+ dotnet add package Azure.Storage.Blobs -v 12.8.0
```

3. The examples that appear in this article parse a CSV file by using the [CsvHelper](https://www.nuget.org/packages/CsvHelper/) library. To use that library, use the following command.
@@ -351,11 +351,11 @@ private static async Task DumpQueryCsv(BlockBlobClient blob, string query, bool
query, options)).Value.Content)) {
- using (var parser = new CsvReader(reader, new CsvConfiguration(CultureInfo.CurrentCulture) { HasHeaderRecord = true }))
+ using (var parser = new CsvReader(reader, new CsvConfiguration(CultureInfo.CurrentCulture, hasHeaderRecord: true) { HasHeaderRecord = true }))
{ while (await parser.ReadAsync()) {
- Console.Out.WriteLine(String.Join(" ", parser.Context.Record));
+ Console.Out.WriteLine(String.Join(" ", parser.Parser.Record));
} } }
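For comparison only, here is a rough Python sketch of the same query-acceleration call, assuming the `azure-storage-blob` package (version 12.4 or later) rather than the .NET client used in this article; the connection string, container, and blob names are placeholders.

```python
from azure.storage.blob import BlobClient, DelimitedTextDialect

blob = BlobClient.from_connection_string(
    "<storage-connection-string>",   # placeholder
    container_name="csv-data",       # placeholder
    blob_name="dvds.csv",            # placeholder
)

# Both input and output are CSV; the input has a header record.
input_format = DelimitedTextDialect(delimiter=",", quotechar='"', has_header=True)
output_format = DelimitedTextDialect(delimiter=",", quotechar='"', has_header=False)

# Only the rows and columns selected by the query are transferred to the client.
reader = blob.query_blob(
    "SELECT * FROM BlobStorage",
    blob_format=input_format,
    output_format=output_format,
)
print(reader.readall().decode("utf-8"))
```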
@@ -610,4 +610,4 @@ async function queryDvds(blob)
## Next steps

- [Azure Data Lake Storage query acceleration](data-lake-storage-query-acceleration.md)
-- [Query acceleration SQL language reference](query-acceleration-sql-reference.md)\ No newline at end of file
+- [Query acceleration SQL language reference](query-acceleration-sql-reference.md)
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-deployment-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-sync-files-deployment-guide.md
@@ -501,7 +501,7 @@ If you'd like to configure your Azure File sync to work with firewall and virtua
![Configuring firewall and virtual network settings to work with Azure File sync](media/storage-sync-files-deployment-guide/firewall-and-vnet.png) ## Onboarding with Azure File Sync
-The recommended steps to onboard on Azure File Sync for the first with zero downtime while preserving full file fidelity and access control list (ACL) are as follows:
+The recommended steps to onboard on Azure File Sync for the first time with zero downtime while preserving full file fidelity and access control list (ACL) are as follows:
1. Deploy a Storage Sync Service. 1. Create a sync group.
storage https://docs.microsoft.com/en-us/azure/storage/files/understanding-billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/understanding-billing.md
@@ -1,18 +1,40 @@
Title: Understanding Azure Files billing | Microsoft Docs
+ Title: Understand Azure Files billing | Microsoft Docs
description: Learn how to interpret the provisioned and pay-as-you-go billing models for Azure file shares. Previously updated : 01/20/2021 Last updated : 01/27/2021
-# Understanding Azure Files billing
+# Understand Azure Files billing
Azure Files provides two distinct billing models: provisioned and pay-as-you-go. The provisioned model is only available for premium file shares, which are file shares deployed in the **FileStorage** storage account kind. The pay-as-you-go model is only available for standard file shares, which are file shares deployed in the **general purpose version 2 (GPv2)** storage account kind. This article explains how both models work in order to help you understand your monthly Azure Files bill.
-The current pricing for Azure Files can be found on the [Azure Files pricing page](https://azure.microsoft.com/pricing/details/storage/files/).
+For Azure Files pricing information, see [Azure Files pricing page](https://azure.microsoft.com/pricing/details/storage/files/).
+
+## Storage units
+Azure Files uses base-2 units of measurement to represent storage capacity: KiB, MiB, GiB, and TiB. Your operating system may or may not use the same unit of measurement or counting system.
+
+### Windows
+
+Both the Windows operating system and Azure Files measure storage capacity using the base-2 counting system, but they label the units differently. Azure Files labels its capacity with base-2 units of measurement (KiB, MiB, GiB, TiB), while Windows labels it with base-10 units of measurement (KB, MB, GB, TB). When reporting storage capacity, Windows doesn't convert the underlying base-2 values to base-10.
+
+|Acronym |Definition |Unit |Windows displays as |
+|---|---|---|---|
+|KiB |1,024 bytes |kibibyte |KB (kilobyte) |
+|MiB |1,024 KiB (1,048,576 bytes) |mebibyte |MB (megabyte) |
+|GiB |1,024 MiB (1,073,741,824 bytes) |gibibyte |GB (gigabyte) |
+|TiB |1,024 GiB (1,099,511,627,776 bytes) |tebibyte |TB (terabyte) |
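The arithmetic behind the table can be checked with a small Python sketch; the 5-TiB share size below is only an example value.

```python
# Base-2 storage units used by Azure Files (and by Windows, under base-10 labels).
KIB = 1024
MIB = 1024 * KIB          # 1,048,576 bytes
GIB = 1024 * MIB          # 1,073,741,824 bytes
TIB = 1024 * GIB          # 1,099,511,627,776 bytes

share_size_bytes = 5 * TIB
print(f"{share_size_bytes:,} bytes")                       # 5,497,558,138,880 bytes
print(f"Windows displays this as {share_size_bytes / TIB:.0f} TB")
```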
+
+### macOS
+
+See [How iOS and macOS report storage capacity](https://support.apple.com/HT201402) on Apple's website to determine which counting system is used.
+
+### Linux
+
+A different counting system could be used by each operating system or individual piece of software. See their documentation to determine how they report storage capacity.
## Provisioned model

Azure Files uses a provisioned model for premium file shares. In a provisioned business model, you proactively specify to the Azure Files service what your storage requirements are, rather than being billed based on what you use. This is similar to buying hardware on-premises, in that when you provision an Azure file share with a certain amount of storage, you pay for that storage regardless of whether you use it or not, just like you don't start paying the costs of physical media on-premises when you start to use space. Unlike purchasing physical media on-premises, provisioned file shares can be dynamically scaled up or down depending on your storage and IO performance characteristics.
@@ -74,7 +96,7 @@ If you put an infrequently accessed workload in the transaction optimized tier,
Similarly, if you put a highly accessed workload in the cool tier, you will pay a lot more in transaction costs, but less for data storage costs. This can lead to a situation where the increased transaction costs outweigh the savings from the lower data storage price, leading you to pay more money on cool than you would have on transaction optimized. For some usage levels, it is possible that while the hot tier is the most cost efficient tier, the cool tier is more expensive than transaction optimized.
-Your workload and activity level will determine the most cost efficient tier for your standard file share. In practice, the best way to pick the the most cost efficient tier involves looking at the actual resource consumption of the share (data stored, write transactions, etc.).
+Your workload and activity level will determine the most cost efficient tier for your standard file share. In practice, the best way to pick the most cost efficient tier involves looking at the actual resource consumption of the share (data stored, write transactions, etc.).
### What are transactions?

Transactions are operations or requests against Azure Files to upload, download, or otherwise manipulate the contents of the file share. Every action taken on a file share translates to one or more transactions, and on standard shares that use the pay-as-you-go billing model, that translates to transaction costs.
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-ip-firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-ip-firewall.md
@@ -38,7 +38,10 @@ Make sure that the firewall on your network and local computer allows outgoing c
Also, you need to allow outgoing communication on UDP port 53 for Synapse Studio. To connect using tools such as SSMS and Power BI, you must allow outgoing communication on TCP port 1433.
-If you're using the default Redirect connection policy setting, you may need to allow outgoing communication on additional ports. You can learn more about connection policies [here](../../azure-sql/database/connectivity-architecture.md#connection-policy).
+The SQL connection policy is set to *default* for the workspace. You can learn more about the IP addresses and ports that clients should allow outbound communication to [here](../../azure-sql/database/connectivity-architecture.md#connection-policy).
+++
## Next steps
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-synapse-rbac-roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
@@ -34,7 +34,7 @@ The following table describes the built-in roles and the scopes at which they ca
|Role |Permissions|Scopes|
|---|---|---|
-|Synapse Administrator |Full Synapse access to serverless SQL pools, Apache Spark pools, and Integration runtimes.  Includes create, read, update, and delete access to all published code artifacts. Includes Compute Operator, Linked Data Manager, and Credential User permissions on the workspace system identity credential.  Includes assigning Synapse RBAC roles.  Azure permissions are required to create, delete, and manage compute resources. </br></br>_Can read and write artifacts</br> Can do all actions on Spark activities.</br> Can view Spark pool logs</br> Can view saved notebook and pipeline output </br> Can use the secrets stored by linked services or credentials</br>Can connect to SQL serverless endpoints with SQL `db_datareader`, `db_datawriter`, `connect`, and `grant` permissions </br>Can assign and revoke Synapse RBAC roles at current scope_|Workspace </br> Spark pool<br/>Integration runtime </br>Linked service</br>Credential |
+|Synapse Administrator |Full Synapse access to serverless SQL pools, Apache Spark pools, and Integration runtimes.  Includes create, read, update, and delete access to all published code artifacts. Includes Compute Operator, Linked Data Manager, and Credential User permissions on the workspace system identity credential.  Includes assigning Synapse RBAC roles. In addition to Synapse Administrator, Azure Owners can also assign Synapse RBAC roles. Azure permissions are required to create, delete, and manage compute resources. </br></br>_Can read and write artifacts</br> Can do all actions on Spark activities.</br> Can view Spark pool logs</br> Can view saved notebook and pipeline output </br> Can use the secrets stored by linked services or credentials</br>Can connect to SQL serverless endpoints with SQL `db_datareader`, `db_datawriter`, `connect`, and `grant` permissions </br>Can assign and revoke Synapse RBAC roles at current scope_|Workspace </br> Spark pool<br/>Integration runtime </br>Linked service</br>Credential |
|Synapse Apache Spark Administrator</br>|Full Synapse access to Apache Spark Pools. Create, read, update, and delete access to published Spark job definitions, notebooks and their outputs, and to libraries, linked services, and credentials.  Includes read access to all other published code artifacts. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can do all actions on Spark artifacts</br>Can do all actions on Spark activities_|Workspace</br>Spark pool|
|Synapse SQL Administrator|Full Synapse access to serverless SQL pools. Create, read, update, and delete access to published SQL scripts, credentials, and linked services.  Includes read access to all other published code artifacts.  Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>*Can do all actions on SQL scripts<br/>Can connect to SQL serverless endpoints with SQL `db_datareader`, `db_datawriter`, `connect`, and `grant` permissions*|Workspace|
|Synapse Contributor|Full Synapse access to serverless SQL pools, Apache Spark pools, Integration runtimes. Includes create, read, update, and delete access to all published code artifacts and their outputs, including credentials and linked services.  Includes compute operator permissions. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can read and write artifacts</br>Can view saved notebook and pipeline output</br>Can do all actions on Spark activities</br>Can view Spark pool logs_|Workspace </br> Spark pool<br/> Integration runtime|
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot.md
@@ -71,6 +71,7 @@ This article lists common troubleshooting issues in dedicated SQL pool (formerly
| Unsupported SQL Database data types | See [Unsupported data types](sql-data-warehouse-tables-data-types.md#identify-unsupported-data-types). |
| Stored procedure limitations | See [Stored procedure limitations](sql-data-warehouse-develop-stored-procedures.md#limitations) to understand some of the limitations of stored procedures. |
| UDFs do not support SELECT statements | This is a current limitation of our UDFs. See [CREATE FUNCTION](/sql/t-sql/statements/create-function-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) for the syntax we support. |
+| sp_rename (preview) for columns does not work on schemas outside of *dbo* | This is a current limitation of Synapse [sp_rename (preview) for columns](/sql/relational-databases/system-stored-procedures/sp-rename-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). Columns in objects that are not a part of the *dbo* schema can be renamed via a CTAS into a new table. |
## Next steps
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-openrowset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-openrowset.md
@@ -218,6 +218,7 @@ CSV parser version 1.0 specifics:
CSV parser version 2.0 specifics:
- Not all data types are supported.
+- Maximum character column length is 8000.
- Maximum row size limit is 8 MB.
- Following options aren't supported: DATA_COMPRESSION.
- Quoted empty string ("") is interpreted as empty string.
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/get-started-azure-data-studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/get-started-azure-data-studio.md
@@ -21,7 +21,7 @@
> * [sqlcmd](get-started-connect-sqlcmd.md)
> * [SSMS](get-started-ssms.md)
-You can use [Azure Data Studio)](/sql/azure-data-studio/download-azure-data-studio?toc=/azure/synapse-analytics/toc.json&bc=/azure/synapse-analytics/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to connect to and query Synapse SQL in Azure Synapse Analytics.
+You can use [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio?toc=/azure/synapse-analytics/toc.json&bc=/azure/synapse-analytics/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to connect to and query Synapse SQL in Azure Synapse Analytics.
## Connect
@@ -91,4 +91,4 @@ Explore other ways to connect to Synapse SQL:
- [Visual Studio](../sql-data-warehouse/sql-data-warehouse-query-visual-studio.md?toc=/azure/synapse-analytics/toc.json&bc=/azure/synapse-analytics/breadcrumb/toc.json)
- [sqlcmd](get-started-connect-sqlcmd.md)
-Visit [Use Azure Data Studio to connect and query data using a dedicated SQL pool in Azure Synapse Analytics](/sql/azure-data-studio/quickstart-sql-dw), for more information.
\ No newline at end of file
+Visit [Use Azure Data Studio to connect and query data using a dedicated SQL pool in Azure Synapse Analytics](/sql/azure-data-studio/quickstart-sql-dw), for more information.
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/fslogix-containers-azure-files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/fslogix-containers-azure-files.md
@@ -50,7 +50,7 @@ The following table shows benefits and limitations of previous user profile tech
#### Performance
-UPD requires [Storage Spaces Direct (S2D)](/windows-server/remote/remote-desktop-services/rds-storage-spaces-direct-deployment/) to address performance requirements. UPD uses Server Message Block (SMB) protocol. It copies the profile to the VM in which the user is being logged. UPD with S2D is the solution we recommend for Windows Virtual Desktop.
+UPD requires [Storage Spaces Direct (S2D)](/windows-server/remote/remote-desktop-services/rds-storage-spaces-direct-deployment/) to address performance requirements. UPD uses Server Message Block (SMB) protocol. It copies the profile to the VM in which the user is being logged.
#### Cost
virtual-desktop https://docs.microsoft.com/en-us/azure/virtual-desktop/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/whats-new.md
@@ -32,7 +32,7 @@ Check out these articles to learn about updates for our clients for Windows Virt
## FSLogix updates
-Curious about the latest updates for FSLogix? Check out [What's new at FSLogix](/fslogix/whats-new.md).
+Curious about the latest updates for FSLogix? Check out [What's new at FSLogix](/fslogix/whats-new).
## January 2021
@@ -309,4 +309,4 @@ To learn more, see [our blog post](https://azure.microsoft.com/updates/windows-v
## Next steps
-Learn about future plans at the [Microsoft 365 Windows Virtual Desktop roadmap](https://www.microsoft.com/microsoft-365/roadmap?filters=Windows%20Virtual%20Desktop).
\ No newline at end of file
+Learn about future plans at the [Microsoft 365 Windows Virtual Desktop roadmap](https://www.microsoft.com/microsoft-365/roadmap?filters=Windows%20Virtual%20Desktop).
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/custom-script-windows.md
@@ -243,8 +243,8 @@ Set-AzVMExtension -ResourceGroupName <resourceGroupName> `
-Publisher "Microsoft.Compute" ` -ExtensionType "CustomScriptExtension" ` -TypeHandlerVersion "1.10" `
- -Settings $settings `
- -ProtectedSettings $protectedSettings `
+ -Settings $settings `
+ -ProtectedSettings $protectedSettings;
``` ### Running scripts from a local share
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/hpc-compute-infiniband-windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/hpc-compute-infiniband-windows.md
@@ -11,7 +11,7 @@
vm-windows Previously updated : 07/20/2020 Last updated : 02/01/2021
@@ -26,15 +26,15 @@ An extension is also available to install InfiniBand drivers for [Linux VMs](hpc
### Operating system
-This extension supports the following OS distros, depending on driver support for specific OS version.
+This extension supports the following OS distros, depending on driver support for specific OS version. Note the appropriate InfiniBand NIC for the H and N-series VM sizes of interest.
-| Distribution | Version |
+| Distribution | InfiniBand NIC drivers |
|---|---|
-| Windows 10 | Core |
-| Windows Server 2019 | Core |
-| Windows Server 2016 | Core |
-| Windows Server 2012 R2 | Core |
-| Windows Server 2012 | Core |
+| Windows 10 | CX5, CX6 |
+| Windows Server 2019 | CX5, CX6 |
+| Windows Server 2016 | CX3-Pro, CX5, CX6 |
+| Windows Server 2012 R2 | CX3-Pro, CX5, CX6 |
+| Windows Server 2012 | CX3-Pro, CX5, CX6 |
### Internet connectivity
@@ -184,4 +184,4 @@ If you need more help at any point in this article, you can contact the Azure ex
For more information about InfiniBand-enabled ('r' sizes), see [H-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs.

> [!div class="nextstepaction"]
-> [Learn more about Linux VMs extensions and features](features-linux.md)
\ No newline at end of file
+> [Learn more about Linux VMs extensions and features](features-linux.md)
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/linux/no-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/no-agent.md
@@ -150,7 +150,7 @@ wireserver_conn.close()
If your VM doesn't have Python installed or available, you can programmatically reproduce the above script logic with the following steps:
-1. Retrieve the `ContainerId` and `InstanceId` by parsing the response from the WireServer: `curl -X GET -H 'x-ms-version: 2012-11-30' http://$168.63.129.16/machine?comp=goalstate`.
+1. Retrieve the `ContainerId` and `InstanceId` by parsing the response from the WireServer: `curl -X GET -H 'x-ms-version: 2012-11-30' http://168.63.129.16/machine?comp=goalstate`.
2. Construct the following XML data, injecting the parsed `ContainerId` and `InstanceId` from the above step:

```xml
@@ -269,4 +269,4 @@ If you implement your own provisioning code/agent, then you own the support of t
## Next steps
-For more information, see [Linux provisioning](provisioning.md).
\ No newline at end of file
+For more information, see [Linux provisioning](provisioning.md).
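As a hedged sketch of step 1 above, the same goal-state request and parsing can be done with the `requests` package and the standard library. The WireServer address and header come from the article; the XML is searched by tag name rather than assuming an exact document layout.

```python
import xml.etree.ElementTree as ET

import requests

# Fixed WireServer address used for Azure VM provisioning (from step 1 above).
GOAL_STATE_URL = "http://168.63.129.16/machine?comp=goalstate"

response = requests.get(GOAL_STATE_URL, headers={"x-ms-version": "2012-11-30"})
response.raise_for_status()

root = ET.fromstring(response.text)
container_id = root.find(".//ContainerId").text
instance_id = root.find(".//InstanceId").text

print("ContainerId:", container_id)
print("InstanceId:", instance_id)
```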
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md
@@ -51,7 +51,7 @@ $vhdSizeBytes = (Get-Item "<fullFilePathHere>").length
$diskconfig = New-AzDiskConfig -SkuName 'Standard_LRS' -OsType 'Windows' -UploadSizeInBytes $vhdSizeBytes -Location '<yourregion>' -CreateOption 'Upload'
-New-AzDisk -ResourceGroupName '<yourresourcegroupname' -DiskName '<yourdiskname>' -Disk $diskconfig
+New-AzDisk -ResourceGroupName '<yourresourcegroupname>' -DiskName '<yourdiskname>' -Disk $diskconfig
```

If you would like to upload either a premium SSD or a standard SSD, replace **Standard_LRS** with either **Premium_LRS** or **StandardSSD_LRS**. Ultra disks are not yet supported.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/get-started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/get-started.md
@@ -14,7 +14,7 @@
vm-linux Previously updated : 01/23/2021 Last updated : 02/01/2021
@@ -81,6 +81,7 @@ In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- 02/01/2021: Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to add a link to [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
- 01/23/2021: Introduce the functionality of HANA data volume partitioning as functionality to stripe I/O operations against HANA data files across different Azure disks or NFS shares without using a disk volume manager in articles [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) and [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
- 01/18/2021: Added support of Azure NetApp Files based NFS for Oracle in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms_guide_oracle.md) and adjusting decimals in table in document [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
- 01/11/2021: Minor changes in [HA for SAP NW on Azure VMs on RHEL for SAP applications](./high-availability-guide-rhel.md), [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL multi-SID guide](./high-availability-guide-rhel-multi-sid.md) to adjust commands to work for both RHEL8 and RHEL7, and ENSA1 and ENSA2
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-overview-architecture.md
@@ -18,12 +18,12 @@
# What is SAP HANA on Azure (Large Instances)?
-SAP HANA on Azure (Large Instances) is a unique solution to Azure. In addition to providing virtual machines for deploying and running SAP HANA, Azure offers you the possibility to run and deploy SAP HANA on bare-metal servers that are dedicated to you. The SAP HANA on Azure (Large Instances) solution builds on non-shared host/server bare-metal hardware that is assigned to you. The server hardware is embedded in larger stamps that contain compute/server, networking, and storage infrastructure. As a combination, it's HANA tailored data center integration (TDI) certified. SAP HANA on Azure (Large Instances) offers different server SKUs or sizes. Units can have 36 Intel CPU cores and 768 GB of memory and go up to units that have up to 480 Intel CPU cores and up to 24 TB of memory.
+SAP HANA on Azure (Large Instances) is a unique solution to Azure. In addition to providing virtual machines for deploying and running SAP HANA, Azure offers you the possibility to run and deploy SAP HANA on bare-metal servers that are dedicated to you. The SAP HANA on Azure (Large Instances) solution builds on non-shared host/server bare-metal hardware that is assigned to you. The server hardware is embedded in larger stamps that contain compute/server, networking, and storage infrastructure. SAP HANA on Azure (Large Instances) offers different server SKUs or sizes. Units can have 36 Intel CPU cores and 768 GB of memory and go up to units that have up to 480 Intel CPU cores and up to 24 TB of memory.
The customer isolation within the infrastructure stamp is performed in tenants, which looks like:
- **Networking**: Isolation of customers within infrastructure stack through virtual networks per customer assigned tenant. A tenant is assigned to a single customer. A customer can have multiple tenants. The network isolation of tenants prohibits network communication between tenants in the infrastructure stamp level, even if the tenants belong to the same customer.
-- **Storage components**: Isolation through storage virtual machines that have storage volumes assigned to them. Storage volumes can be assigned to one storage virtual machine only. A storage virtual machine is assigned exclusively to one single tenant in the SAP HANA TDI certified infrastructure stack. As a result, storage volumes assigned to a storage virtual machine can be accessed in one specific and related tenant only. They aren't visible between the different deployed tenants.
+- **Storage components**: Isolation through storage virtual machines that have storage volumes assigned to them. Storage volumes can be assigned to one storage virtual machine only. A storage virtual machine is assigned exclusively to one single tenant in the infrastructure stack. As a result, storage volumes assigned to a storage virtual machine can be accessed in one specific and related tenant only. They aren't visible between the different deployed tenants.
- **Server or host**: A server or host unit isn't shared between customers or tenants. A server or host deployed to a customer, is an atomic bare-metal compute unit that is assigned to one single tenant. *No* hardware partitioning or soft partitioning is used that might result in you sharing a host or a server with another customer. Storage volumes that are assigned to the storage virtual machine of the specific tenant are mounted to such a server. A tenant can have one to many server units of different SKUs exclusively assigned.
- Within an SAP HANA on Azure (Large Instances) infrastructure stamp, many different tenants are deployed and isolated against each other through the tenant concepts on networking, storage, and compute level.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-red-hat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-red-hat.md
@@ -11,7 +11,7 @@
vm-linux Previously updated : 10/16/2020 Last updated : 02/01/2021
@@ -86,6 +86,7 @@ Read the following SAP Notes and papers first:
- [Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure.](https://access.redhat.com/solutions/3193782) - [Configure SAP HANA scale-up system replication up Pacemaker cluster when the HANA file systems are on NFS shares](https://access.redhat.com/solutions/5156571) - [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files](https://www.netapp.com/us/media/tr-4746.pdf)
+- [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-netapp)
## Overview
@@ -688,4 +689,11 @@ This section describes how you can test your setup.
vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hanadb2
```
- We recommend to thoroughly test the SAP HANA cluster configuration, by also performing the tests described in [Setup SAP HANA System Replication on RHEL](./sap-hana-high-availability-rhel.md#test-the-cluster-setup).
\ No newline at end of file
+ We recommend that you thoroughly test the SAP HANA cluster configuration by also performing the tests described in [Setup SAP HANA System Replication on RHEL](./sap-hana-high-availability-rhel.md#test-the-cluster-setup).
+
+## Next steps
+
+* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
+* [Azure Virtual Machines deployment for SAP][deployment-guide]
+* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
+* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-netapp)
\ No newline at end of file
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-rhel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-rhel.md
@@ -14,7 +14,7 @@
vm-windows Previously updated : 10/16/2020 Last updated : 02/01/2021
@@ -87,7 +87,7 @@ Before you begin, refer to the following SAP notes and papers:
* [Red Hat Enterprise Linux Solution for SAP HANA Scale-Out and System Replication](https://access.redhat.com/solutions/4386601)
* [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]
* [Azure NetApp Files documentation][anf-azure-doc]
-
+* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-netapp)
## Overview
@@ -1169,4 +1169,5 @@ We recommend to thoroughly test the SAP HANA cluster configuration, by also perf
* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
* [Azure Virtual Machines deployment for SAP][deployment-guide]
* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
+* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-netapp)
* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha].\ No newline at end of file
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel.md
@@ -14,7 +14,7 @@
vm-windows Previously updated : 01/05/2021 Last updated : 02/01/2021
@@ -88,7 +88,7 @@ Before you begin, refer to the following SAP notes and papers:
* Azure-specific RHEL documentation:
  * [Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure](https://access.redhat.com/public-cloud/microsoft-azure)
  * [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]
-
+* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-netapp)
## Overview
@@ -932,4 +932,5 @@ In this example for deploying SAP HANA in scale-out configuration with standby n
* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
* [Azure Virtual Machines deployment for SAP][deployment-guide]
* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
+* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-netapp)
* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha].
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse.md
@@ -14,7 +14,7 @@
vm-windows Previously updated : 01/05/2021 Last updated : 02/01/2021
@@ -80,7 +80,7 @@ Before you begin, refer to the following SAP notes and papers:
* [SUSE SAP HA Best Practice Guides][suse-ha-guide]: Contains all required information to set up NetWeaver High Availability and SAP HANA System Replication on-premises (to be used as a general baseline; they provide much more detailed information)
* [SUSE High Availability Extension 12 SP3 Release Notes][suse-ha-12sp3-relnotes]
* [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]
-
+* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-netapp)
## Overview
@@ -860,4 +860,5 @@ In this example for deploying SAP HANA in scale-out configuration with standby n
* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
* [Azure Virtual Machines deployment for SAP][deployment-guide]
* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
+* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-netapp)
* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha].
virtual-network https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-bandwidth-testing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-network-bandwidth-testing.md
@@ -51,9 +51,9 @@ Sender parameters: ntttcp -s10.27.33.7 -t 10 -n 1 -P 1
#### Get NTTTCP onto the VMs.

Download the latest version:
-<https://gallery.technet.microsoft.com/NTttcp-Version-528-Now-f8b12769>
+https://github.com/microsoft/ntttcp/releases/download/v5.35/NTttcp.exe
-Or search for it if moved: <https://www.bing.com/search?q=ntttcp+download>\< -- should be first hit
+Or view the top-level GitHub Page: <https://github.com/microsoft/ntttcp>\
Consider putting NTTTCP in a separate folder, like c:\\tools
virtual-network https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-test-latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-network-test-latency.md
@@ -44,7 +44,7 @@ You can use this approach to measure network latency between two VMs or even bet
### Tools for testing

To measure latency, you have two different tool options:
-* For Windows-based systems: [latte.exe (Windows)](https://gallery.technet.microsoft.com/Latte-The-Windows-tool-for-ac33093b)
+* For Windows-based systems: [latte.exe (Windows)](https://github.com/microsoft/latte/releases/download/v0/latte.exe)
* For Linux-based systems: [SockPerf (Linux)](https://github.com/mellanox/sockperf)

By using these tools, you help ensure that only TCP or UDP payload delivery times are measured and not ICMP (Ping) or other packet types that aren't used by applications and don't affect their performance.
virtual-wan https://docs.microsoft.com/en-us/azure/virtual-wan/how-to-nva-hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/how-to-nva-hub.md
@@ -15,8 +15,7 @@ This article shows you how to use Virtual WAN to connect to your resources in Az
The steps in this article help you create a **Barracuda CloudGen WAN** Network Virtual Appliance in the Virtual WAN hub. To complete this exercise, you must have a Barracuda Cloud Premise Device (CPE) and a license for the Barracuda CloudGen WAN appliance that you deploy into the hub before you begin.
-For deployment documentation of **Cisco SD-WAN** within Azure Virtual WAN - Please see [Cisco Cloud OnRamp for Multi-Cloud](https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/cloudonramp/ios-xe-17/cloud-onramp-book-xe/cloud-onramp-multi-cloud.html#Cisco_Concept.dita_c61e0e7a-fff8-4080-afee-47b81e8df701). To register your account and get the necessary Cisco SD-WAN Licenses, send email to Cisco at the following email address: vwan_public_preview@external.cisco.com
-
+For deployment documentation of **Cisco SD-WAN** within Azure Virtual WAN - Please see [Cisco Cloud OnRamp for Multi-Cloud](https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/cloudonramp/ios-xe-17/cloud-onramp-book-xe/cloud-onramp-multi-cloud.html#Cisco_Concept.dita_c61e0e7a-fff8-4080-afee-47b81e8df701).
## Prerequisites
virtual-wan https://docs.microsoft.com/en-us/azure/virtual-wan/scenario-isolate-virtual-networks-branches https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/scenario-isolate-virtual-networks-branches.md
@@ -0,0 +1,102 @@
+
+ Title: 'Scenario: Custom isolation for virtual networks and branches'
+
+description: Scenarios for routing - prevent selected VNets and branches from being able to reach each other
+++++ Last updated : 01/25/2021+++
+# Scenario: Custom Isolation for Virtual Networks and Branches
+
+When working with Virtual WAN virtual hub routing, there are quite a few available scenarios. In a custom isolation scenario for both Virtual Networks (VNets) and branches, the goal is to prevent a specific set of VNets from reaching another set of VNets. Likewise, branches (VPN/ER/User VPN) are only allowed to reach certain sets of VNets.
+
+We also introduce the additional requirement that Azure Firewall should inspect branch-to-VNet and VNet-to-branch traffic, but **not** VNet-to-VNet traffic.
+
+For more information about virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).
+
+## <a name="design"></a>Design
+
+In order to figure out how many route tables will be needed, you can build a connectivity matrix. For this scenario it will look like the following, where each cell represents whether a source (row) can communicate to a destination (column):
+
+| From | To:| *Blue VNets* | *Red VNets* | *Red Branches*| *Blue Branches*|
+|---|---|---|---|---|---|
+| **Blue VNets**   | &#8594;| Direct |        |        | AzFW   |
+| **Red VNets**    | &#8594;|        | Direct | AzFW   |        |
+| **Red Branches** | &#8594;|        | AzFW   | Direct | Direct |
+| **Blue Branches**| &#8594;| AzFW   |        | Direct | Direct |
+
+Each of the cells in the previous table describes whether a Virtual WAN connection (the "From" side of the flow, the row headers) communicates with a destination (the "To" side of the flow, the column headers in italics). **Direct** implies the traffic flows directly through Virtual WAN while **AzFW** implies that the traffic is inspected by Azure Firewall before being forwarded to the destination. A blank entry means that flow is blocked in the setup.
+
+In this case, two route tables for the VNets are required to achieve the goal of VNet isolation without Azure Firewall in the path. We will call these route tables **RT_BLUE** and **RT_RED**.
+
+In addition, branches must always be associated to the **Default** Route Table. To ensure that traffic to and from the branches is inspected by Azure Firewall, we add static routes in the **Default**, **RT_RED** and **RT_BLUE** route tables pointing traffic to Azure Firewall and set up Network Rules to allow desired traffic. We also ensure that the branches do **not** propagate to **RT_BLUE** and **RT_RED**.
+
+As a result, this is the final design:
+
+* Blue virtual networks:
+ * Associated route table: **RT_BLUE**
+ * Propagating to route tables: **RT_BLUE**
+* Red virtual networks:
+ * Associated route table: **RT_RED**
+ * Propagating to route tables: **RT_RED**
+* Branches:
+ * Associated route table: **Default**
+ * Propagating to route tables: **Default**
+* Static Routes:
+ * **Default Route Table**: Virtual Network Address Spaces with next hop Azure Firewall
+ * **RT_RED**: 0.0.0.0/0 with next hop Azure Firewall
+ * **RT_BLUE**: 0.0.0.0/0 with next hop Azure Firewall
+* Firewall Network Rules:
+ * **ALLOW RULE** **Source Prefix**: Blue Branch Address Prefixes **Destination Prefix**: Blue VNet Prefixes
+  * **ALLOW RULE** **Source Prefix**: Red Branch Address Prefixes **Destination Prefix**: Red VNet Prefixes
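To make the final design easier to scan, the following plain-data Python sketch restates the associations, propagations, static routes, and firewall rules listed above. It is not tied to any Azure SDK or CLI, and the address prefixes are illustrative placeholders.

```python
# Plain-data summary of the routing design above; prefixes are illustrative placeholders.
routing_design = {
    "route_tables": {
        "RT_BLUE": {
            "associated": ["blue VNet connections"],
            "propagating_to": ["RT_BLUE"],
            "static_routes": [{"prefix": "0.0.0.0/0", "next_hop": "Azure Firewall"}],
        },
        "RT_RED": {
            "associated": ["red VNet connections"],
            "propagating_to": ["RT_RED"],
            "static_routes": [{"prefix": "0.0.0.0/0", "next_hop": "Azure Firewall"}],
        },
        "Default": {
            "associated": ["all branches (VPN/ER/P2S)"],
            "propagating_to": ["Default"],
            "static_routes": [
                {"prefix": "10.1.0.0/24", "next_hop": "Azure Firewall"},  # blue VNets (example)
                {"prefix": "10.2.0.0/24", "next_hop": "Azure Firewall"},  # red VNets (example)
            ],
        },
    },
    "firewall_network_rules": [
        {"action": "Allow", "source": "blue branch prefixes", "destination": "blue VNet prefixes"},
        {"action": "Allow", "source": "red branch prefixes", "destination": "red VNet prefixes"},
    ],
}

for name, table in routing_design["route_tables"].items():
    print(name, "->", table["static_routes"])
```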
+
+> [!NOTE]
+> Since all branches need to be associated to the Default route table, as well as to propagate to the same set of routing tables, all branches will have the same connectivity profile. In other words, the Red/Blue concept for VNets cannot be applied to branches. However, to achieve custom routing for branches, we can forward traffic from the branches to Azure Firewall.
+
+> [!NOTE]
+> Azure Firewall by default denies traffic in a zero-trust model. If there is no explicit **ALLOW** rule that matches the inspected packet, Azure Firewall will drop the packet.
+
+For more information about virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).
+++
+## <a name="architecture"></a>Workflow
+
+In **Figure 1**, there are Blue and Red VNets as well as branches that can access either Blue or Red VNets.
+
+* Blue-connected VNets can reach each other and can reach all blue branches (VPN/ER/P2S) connections. In the diagram, the blue branch is the Site-to-site VPN site.
+* Red-connected VNets can reach each other and can reach all red branches (VPN/ER/P2S) connections. In the diagram, the red branch is the Point-to-site VPN users.
+
+Consider the following steps when setting up routing.
+
+1. Create two custom route tables in the Azure portal, **RT_BLUE** and **RT_RED** in order to customize traffic to these VNets.
+2. For route table **RT_BLUE**, apply the following settings to ensure Blue VNets learn the address prefixes of all other Blue VNets:
+ * **Association**: Select all Blue VNets.
+ * **Propagation**: Select all Blue VNets.
+3. Repeat the same steps for **RT_RED** route table for Red VNets.
+4. Provision an Azure Firewall in Virtual WAN. For more information about Azure Firewall in the Virtual WAN hub, see [Configuring Azure Firewall in Virtual WAN hub](howto-firewall.md).
+5. Add a static route to the **Default** Route Table of the Virtual Hub directing all traffic destined for the VNet address spaces (both blue and red) to Azure Firewall. This step ensures any packets from your branches will be sent to Azure Firewall for inspection.
+ * Example: **Destination Prefix**: 10.0.0.0/24 **Next Hop**: Azure Firewall
+ >[!NOTE]
+ > This step can also be done via Firewall Manager by selecting the "Secure Private Traffic" option. This will add a route for all RFC1918 private IP addresses applicable to VNets and branches. You will need to manually add in any branches or virtual networks that are not compliant with RFC1918.
+
+6. Add a static route to **RT_RED** and **RT_BLUE** directing all traffic to Azure Firewall. This step ensures VNets will not be able to access branches directly. This step cannot be done via Firewall Manager because these Virtual Networks are not associated with the Default Route Table.
+ * Example: **Destination Prefix**: 0.0.0.0/0 **Next Hop**: Azure Firewall
+
+ > [!NOTE]
+   > Routing is performed using Longest Prefix Match (LPM). As a result, the 0.0.0.0/0 static routes will **NOT** be preferred over the exact prefixes that exist in **RT_BLUE** and **RT_RED**, so VNet-to-VNet traffic will not be inspected by Azure Firewall.
+
+This will result in the routing configuration changes as seen in the figure below.
+
+**Figure 1**
+[ ![Figure 1](./media/routing-scenarios/custom-branch-vnet/custom-branch.png) ](./media/routing-scenarios/custom-branch-vnet/custom-branch.png#lightbox)
+
+## Next steps
+
+* For more information about Virtual WAN, see the [FAQ](virtual-wan-faq.md).
+* For more information about virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).