Updates from: 04/30/2021 03:09:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Identity Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-identity-provider.md
Previously updated : 03/03/2021 Last updated : 04/29/2021
You typically use only one identity provider in your applications, but you have the option to add more:
* [Azure AD (Single-tenant)](identity-provider-azure-ad-single-tenant.md)
* [Azure AD (Multi-tenant)](identity-provider-azure-ad-multi-tenant.md)
* [Azure AD B2C](identity-provider-azure-ad-b2c.md)
+* [eBay](identity-provider-ebay.md)
* [Facebook](identity-provider-facebook.md)
* [Generic identity provider](identity-provider-generic-openid-connect.md)
* [GitHub](identity-provider-github.md)
active-directory-b2c Identity Provider Ebay https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-ebay.md
+
+ Title: Set up sign-up and sign-in with an eBay account
+
+description: Provide sign-up and sign-in to customers with eBay accounts in your applications using Azure Active Directory B2C.
+ Last updated : 04/29/2021
+zone_pivot_groups: b2c-policy-type
+
+# Set up sign-up and sign-in with an eBay account using Azure Active Directory B2C
+
+## Prerequisites
+
+## Create an eBay application
+
+To enable sign-in for users with an eBay account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [eBay developer console](https://developer.ebay.com). For more information, see [Creating a developer account](https://developer.ebay.com/api-docs/static/creating-edp-account.html). If you don't already have an eBay developer account, sign up at [https://developer.ebay.com/signin](https://developer.ebay.com/signin?tab=register).
+
+To create an eBay application, follow these steps:
+
+1. Sign in to the eBay developer console's [Application Keys](https://developer.ebay.com/my/keys) with your eBay developer account credentials.
+1. Enter an **Application Title**.
+1. Under **Production**, select **Create a keyset**.
+1. In the **Confirm the Primary Contact for this Account** page, provide your account details. To complete the registration process, select **Continue to Create Keys**.
+1. Copy the values of **App ID (Client ID)** and **Cert ID (Client Secret)**. You need both to add the identity provider to your tenant.
+1. Select **User Tokens**, then select **Get a Token from eBay via Your Application**.
+1. Select **Add eBay Redirect URL**.
+ 1. In **Your privacy policy URL**, enter a valid URL, for example `https://www.contoso.com/privacy`. The privacy policy URL is a page you maintain to provide privacy information for your application.
+ 1. In the **Your auth accepted URL**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
+1. Select **Save**.
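The auth accepted URL in the steps above follows a fixed pattern, so it can be sanity-checked programmatically. A minimal sketch, using the hypothetical tenant name `contoso` and hypothetical custom domain `login.contoso.com`:

```python
from typing import Optional

# Build the eBay "auth accepted" (redirect) URL for an Azure AD B2C tenant.
# "contoso" and "login.contoso.com" below are placeholder example values.
def authresp_url(tenant_name: str, custom_domain: Optional[str] = None) -> str:
    host = custom_domain or f"{tenant_name}.b2clogin.com"
    return f"https://{host}/{tenant_name}.onmicrosoft.com/oauth2/authresp"

print(authresp_url("contoso"))
# https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp
print(authresp_url("contoso", custom_domain="login.contoso.com"))
# https://login.contoso.com/contoso.onmicrosoft.com/oauth2/authresp
```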
+
+## Create a policy key
+
+You need to store the client secret that you previously recorded in your Azure AD B2C tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
+1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+1. On the Overview page, select **Identity Experience Framework**.
+1. Select **Policy Keys** and then select **Add**.
+1. For **Options**, choose `Manual`.
+1. Enter a **Name** for the policy key. For example, `eBaySecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
+1. In **Secret**, enter your client secret that you previously recorded.
+1. For **Key usage**, select `Signature`.
+1. Select **Create**.
+
+## Configure eBay as an identity provider
+
+To enable users to sign in using an eBay account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+
+You can define an eBay account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
+
+1. Open *TrustFrameworkExtensions.xml*.
+2. Find the **ClaimsProviders** element. If it does not exist, add it under the root element.
+3. Add a new **ClaimsProvider** as follows:
+
+ ```xml
+ <!--
+ <ClaimsProviders> -->
+ <ClaimsProvider>
+ <Domain>ebay.com</Domain>
+ <DisplayName>eBay</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="eBay-OAUTH2">
+ <DisplayName>eBay</DisplayName>
+ <Protocol Name="OAuth2" />
+ <Metadata>
+ <Item Key="ProviderName">ebay.com</Item>
+ <Item Key="authorization_endpoint">https://auth.ebay.com/oauth2/authorize</Item>
+ <Item Key="AccessTokenEndpoint">https://api.ebay.com/identity/v1/oauth2/token</Item>
+ <Item Key="ClaimsEndpoint">https://apiz.ebay.com/commerce/identity/v1/user/</Item>
+ <Item Key="HttpBinding">POST</Item>
+ <Item Key="BearerTokenTransmissionMethod">AuthorizationHeader</Item>
+ <Item Key="token_endpoint_auth_method">client_secret_basic</Item>
+ <Item Key="scope">https://api.ebay.com/oauth/api_scope/commerce.identity.readonly</Item>
+ <Item Key="UsePolicyInRedirectUri">0</Item>
+ <!-- Update the Client ID below to the Application ID -->
+ <Item Key="client_id">Your eBay app ID</Item>
+ </Metadata>
+ <CryptographicKeys>
+ <Key Id="client_secret" StorageReferenceId="eBaySecret"/>
+ </CryptographicKeys>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="userId"/>
+ <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="individualAccount.firstName"/>
+ <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="individualAccount.lastName"/>
+ <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="username"/>
+ <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email"/>
+ <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="ebay.com" AlwaysUseDefaultValue="true" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
+ </OutputClaims>
+ <OutputClaimsTransformations>
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName"/>
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName"/>
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId"/>
+ </OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+ <!--
+ </ClaimsProviders> -->
+ ```
+
+4. Set **client_id** to the application ID from the application registration.
+5. Save the file.
+
+To add the eBay identity provider to a user journey, reference the eBay claims exchange in the orchestration steps of your user journey, as shown in the following example:
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
+ <ClaimsProviderSelection TargetClaimsExchangeId="eBayExchange" />
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="eBayExchange" TechnicalProfileReferenceId="eBay-OAUTH2" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
+
+## Test your custom policy
+
+1. Select your relying party policy, for example `B2C_1A_signup_signin`.
+1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
+1. Select the **Run now** button.
+1. From the sign-up or sign-in page, select **eBay** to sign in with an eBay account.
+
+If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
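The jwt.ms site renders the token by base64url-decoding its dot-separated segments. The same decoding can be sketched in a few lines; the token below is constructed from made-up claims for illustration, not a real Azure AD B2C token:

```python
import base64
import json

# Decode the payload segment of a JWT, the way https://jwt.ms does.
def decode_jwt_payload(token: str) -> dict:
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample unsigned token from invented claims.
claims = {"idp": "ebay.com", "name": "Sample User"}
header = base64.urlsafe_b64encode(json.dumps({"alg": "RS256"}).encode()).decode().rstrip("=")
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"{header}.{payload}.signature"

print(decode_jwt_payload(token))  # {'idp': 'ebay.com', 'name': 'Sample User'}
```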
+
+## Next steps
+
+Learn how to [pass the eBay token to your application](idp-pass-through-user-flow.md).
+
active-directory-domain-services Create Resource Forest Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/create-resource-forest-powershell.md
Last updated 07/27/2020 #Customer intent: As an identity administrator, I want to create an Azure AD Domain Services resource forest and one-way outbound forest trust from an Azure Active Directory Domain Services resource forest to an on-premises Active Directory Domain Services forest using Azure PowerShell to provide authentication and resource access between forests.
For more conceptual information about forest types in Azure AD DS, see [What are
[Install-Script]: /powershell/module/powershellget/install-script <!-- EXTERNAL LINKS -->
-[powershell-gallery]: https://www.powershellgallery.com/
+[powershell-gallery]: https://www.powershellgallery.com/
active-directory-domain-services Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/faqs.md
- Title: Frequently asked questions about Azure AD Domain Services | Microsoft Docs
-description: Read and understand some of the frequently asked questions around configuration, administration, and availability for Azure Active Directory Domain Services
- Previously updated : 02/09/2021
-# Frequently asked questions (FAQs) about Azure Active Directory (AD) Domain Services
-
-This page answers frequently asked questions about Azure Active Directory Domain Services.
-
-## Configuration
-
-* [Can I create multiple managed domains for a single Azure AD directory?](#can-i-create-multiple-managed-domains-for-a-single-azure-ad-directory)
-* [Can I enable Azure AD Domain Services in a Classic virtual network?](#can-i-enable-azure-ad-domain-services-in-a-classic-virtual-network)
-* [Can I enable Azure AD Domain Services in an Azure Resource Manager virtual network?](#can-i-enable-azure-ad-domain-services-in-an-azure-resource-manager-virtual-network)
-* [Can I migrate my existing managed domain from a classic virtual network to a Resource Manager virtual network?](#can-i-migrate-my-existing-managed-domain-from-a-classic-virtual-network-to-a-resource-manager-virtual-network)
-* [Can I enable Azure AD Domain Services in an Azure CSP (Cloud Solution Provider) subscription?](#can-i-enable-azure-ad-domain-services-in-an-azure-csp-cloud-solution-provider-subscription)
-* [Can I enable Azure AD Domain Services in a federated Azure AD directory? I do not synchronize password hashes to Azure AD. Can I enable Azure AD Domain Services for this directory?](#can-i-enable-azure-ad-domain-services-in-a-federated-azure-ad-directory-i-do-not-synchronize-password-hashes-to-azure-ad-can-i-enable-azure-ad-domain-services-for-this-directory)
-* [Can I make Azure AD Domain Services available in multiple virtual networks within my subscription?](#can-i-make-azure-ad-domain-services-available-in-multiple-virtual-networks-within-my-subscription)
-* [Can I enable Azure AD Domain Services using PowerShell?](#can-i-enable-azure-ad-domain-services-using-powershell)
-* [Can I enable Azure AD Domain Services using a Resource Manager Template?](#can-i-enable-azure-ad-domain-services-using-a-resource-manager-template)
-* [Can I add domain controllers to an Azure AD Domain Services managed domain?](#can-i-add-domain-controllers-to-an-azure-ad-domain-services-managed-domain)
-* [Can guest users invited to my directory use Azure AD Domain Services?](#can-guest-users-invited-to-my-directory-use-azure-ad-domain-services)
-* [Can I move an existing Azure AD Domain Services managed domain to a different subscription, resource group, region, or virtual network?](#can-i-move-an-existing-azure-ad-domain-services-managed-domain-to-a-different-subscription-resource-group-region-or-virtual-network)
-* [Can I rename an existing Azure AD Domain Services domain name?](#can-i-rename-an-existing-azure-ad-domain-services-domain-name)
-* [Does Azure AD Domain Services include high availability options?](#does-azure-ad-domain-services-include-high-availability-options)
-
-### Can I create multiple managed domains for a single Azure AD directory?
-No. You can only create a single managed domain serviced by Azure AD Domain Services for a single Azure AD directory.
-
-### Can I enable Azure AD Domain Services in a Classic virtual network?
-Classic virtual networks aren't supported for new deployments. Existing managed domains deployed in Classic virtual networks continue to be supported until they're retired on March 1, 2023. For existing deployments, you can [migrate Azure AD Domain Services from the Classic virtual network model to Resource Manager](migrate-from-classic-vnet.md).
-
-For more information, see the [official deprecation notice](https://azure.microsoft.com/updates/we-are-retiring-azure-ad-domain-services-classic-vnet-support-on-march-1-2023/).
-
-### Can I enable Azure AD Domain Services in an Azure Resource Manager virtual network?
-Yes. Azure AD Domain Services can be enabled in an Azure Resource Manager virtual network. Classic Azure virtual networks are no longer available when you create a managed domain.
-
-### Can I migrate my existing managed domain from a Classic virtual network to a Resource Manager virtual network?
-Yes. For more information, see [Migrate Azure AD Domain Services from the Classic virtual network model to Resource Manager](migrate-from-classic-vnet.md).
-
-### Can I enable Azure AD Domain Services in an Azure CSP (Cloud Solution Provider) subscription?
-Yes. For more information, see [how to enable Azure AD Domain Services in Azure CSP subscriptions](csp.md).
-
-### Can I enable Azure AD Domain Services in a federated Azure AD directory? I do not synchronize password hashes to Azure AD. Can I enable Azure AD Domain Services for this directory?
-No. To authenticate users via NTLM or Kerberos, Azure AD Domain Services needs access to the password hashes of user accounts. In a federated directory, password hashes aren't stored in the Azure AD directory. Therefore, Azure AD Domain Services doesn't work with such Azure AD directories.
-
-However, if you're using Azure AD Connect for password hash synchronization, you can use Azure AD Domain Services because the password hash values are stored in Azure AD.
-
-### Can I make Azure AD Domain Services available in multiple virtual networks within my subscription?
-The service itself doesn't directly support this scenario. Your managed domain is available in only one virtual network at a time. However, you can configure connectivity between multiple virtual networks to expose Azure AD Domain Services to other virtual networks. For more information, see [how to connect virtual networks in Azure using VPN gateways](../vpn-gateway/vpn-gateway-howto-vnet-vnet-portal-classic.md) or [virtual network peering](../virtual-network/virtual-network-peering-overview.md).
-
-### Can I enable Azure AD Domain Services using PowerShell?
-Yes. For more information, see [how to enable Azure AD Domain Services using PowerShell](powershell-create-instance.md).
-
-### Can I enable Azure AD Domain Services using a Resource Manager Template?
-Yes, you can create an Azure AD Domain Services managed domain using a Resource Manager template. A service principal and Azure AD group for administration must be created using the Azure portal or Azure PowerShell before the template is deployed. For more information, see [Create an Azure AD DS managed domain using an Azure Resource Manager template](template-create-instance.md). When you create an Azure AD Domain Services managed domain in the Azure portal, there's also an option to export the template for use with additional deployments.
-
-### Can I add domain controllers to an Azure AD Domain Services managed domain?
-No. The domain provided by Azure AD Domain Services is a managed domain. You don't need to provision, configure, or otherwise manage domain controllers for this domain. These management activities are provided as a service by Microsoft. Therefore, you can't add additional domain controllers (read-write or read-only) for the managed domain.
-
-### Can guest users invited to my directory use Azure AD Domain Services?
-No. Guest users invited to your Azure AD directory using the [Azure AD B2B](../active-directory/external-identities/what-is-b2b.md) invite process are synchronized into your Azure AD Domain Services managed domain. However, passwords for these users aren't stored in your Azure AD directory. Therefore, Azure AD Domain Services has no way to synchronize NTLM and Kerberos hashes for these users into your managed domain. Such users can't sign in or join computers to the managed domain.
-
-### Can I move an existing Azure AD Domain Services managed domain to a different subscription, resource group, region, or virtual network?
-No. After you create an Azure AD Domain Services managed domain, you can't then move the managed domain to a different resource group, virtual network, subscription, etc. Take care to select the most appropriate subscription, resource group, region, and virtual network when you deploy the managed domain.
-
-### Can I rename an existing Azure AD Domain Services domain name?
-No. After you create an Azure AD Domain Services managed domain, you can't change the DNS domain name. Choose the DNS domain name carefully when you create the managed domain. For considerations when you choose the DNS domain name, see the [tutorial to create and configure an Azure AD Domain Services managed domain](tutorial-create-instance.md#create-a-managed-domain).
-
-### Does Azure AD Domain Services include high availability options?
-
-Yes. Each Azure AD Domain Services managed domain includes two domain controllers. You don't manage or connect to these domain controllers, they're part of the managed service. If you deploy Azure AD Domain Services into a region that supports Availability Zones, the domain controllers are distributed across zones. In regions that don't support Availability Zones, the domain controllers are distributed across Availability Sets. You have no configuration options or management control over this distribution. For more information, see [Availability options for virtual machines in Azure](../virtual-machines/availability.md).
-
-## Administration and operations
-
-* [Can I connect to the domain controller for my managed domain using Remote Desktop?](#can-i-connect-to-the-domain-controller-for-my-managed-domain-using-remote-desktop)
-* [I've enabled Azure AD Domain Services. What user account do I use to domain join machines to this domain?](#ive-enabled-azure-ad-domain-services-what-user-account-do-i-use-to-domain-join-machines-to-this-domain)
-* [Do I have domain administrator privileges for the managed domain provided by Azure AD Domain Services?](#do-i-have-domain-administrator-privileges-for-the-managed-domain-provided-by-azure-ad-domain-services)
-* [Can I modify group memberships using LDAP or other AD administrative tools on managed domains?](#can-i-modify-group-memberships-using-ldap-or-other-ad-administrative-tools-on-managed-domains)
-* [How long does it take for changes I make to my Azure AD directory to be visible in my managed domain?](#how-long-does-it-take-for-changes-i-make-to-my-azure-ad-directory-to-be-visible-in-my-managed-domain)
-* [Can I extend the schema of the managed domain provided by Azure AD Domain Services?](#can-i-extend-the-schema-of-the-managed-domain-provided-by-azure-ad-domain-services)
-* [Can I modify or add DNS records in my managed domain?](#can-i-modify-or-add-dns-records-in-my-managed-domain)
-* [What is the password lifetime policy on a managed domain?](#what-is-the-password-lifetime-policy-on-a-managed-domain)
-* [Does Azure AD Domain Services provide AD account lockout protection?](#does-azure-ad-domain-services-provide-ad-account-lockout-protection)
-* [Can I configure Distributed File System (DFS) and replication within Azure AD Domain Services?](#can-i-configure-distributed-file-system-and-replication-within-azure-ad-domain-services)
-* [How are Windows Updates applied in Azure AD Domain Services?](#how-are-windows-updates-applied-in-azure-ad-domain-services)
-
-### Can I connect to the domain controller for my managed domain using Remote Desktop?
-No. You don't have permissions to connect to domain controllers for the managed domain using Remote Desktop. Members of the *AAD DC Administrators* group can administer the managed domain using AD administration tools such as the Active Directory Administration Center (ADAC) or AD PowerShell. These tools are installed using the *Remote Server Administration Tools* feature on a Windows server joined to the managed domain. For more information, see [Create a management VM to configure and administer an Azure AD Domain Services managed domain](tutorial-create-management-vm.md).
-
-### I've enabled Azure AD Domain Services. What user account do I use to domain join machines to this domain?
-Any user account that's part of the managed domain can join a VM. Members of the *AAD DC Administrators* group are granted remote desktop access to machines that have been joined to the managed domain.
-
-### Do I have domain administrator privileges for the managed domain provided by Azure AD Domain Services?
-No. You aren't granted administrative privileges on the managed domain. *Domain Administrator* and *Enterprise Administrator* privileges aren't available for you to use within the domain. Members of the domain administrator or enterprise administrator groups in your on-premises Active Directory are also not granted domain / enterprise administrator privileges on the managed domain.
-
-### Can I modify group memberships using LDAP or other AD administrative tools on managed domains?
-Users and groups that are synchronized from Azure Active Directory to Azure AD Domain Services cannot be modified because their source of origin is Azure Active Directory. This includes moving users or groups from the AADDC Users managed organizational unit to a custom organizational unit. Any user or group originating in the managed domain may be modified.
-
-### How long does it take for changes I make to my Azure AD directory to be visible in my managed domain?
-Changes made in your Azure AD directory using either the Azure AD UI or PowerShell are automatically synchronized to your managed domain. This synchronization process runs in the background. There's no defined time period for this synchronization to complete all the object changes.
-
-### Can I extend the schema of the managed domain provided by Azure AD Domain Services?
-No. The schema is administered by Microsoft for the managed domain. Schema extensions aren't supported by Azure AD Domain Services.
-
-### Can I modify or add DNS records in my managed domain?
-Yes. Members of the *AAD DC Administrators* group are granted *DNS Administrator* privileges to modify DNS records in the managed domain. Those users can use the DNS Manager console on a machine running Windows Server joined to the managed domain to manage DNS. To use the DNS Manager console, install *DNS Server Tools*, which are part of the *Remote Server Administration Tools* optional feature on the server. For more information, see [Administer DNS in an Azure AD Domain Services managed domain](manage-dns.md).
-
-### What is the password lifetime policy on a managed domain?
-The default password lifetime on an Azure AD Domain Services managed domain is 90 days. This password lifetime is not synchronized with the password lifetime configured in Azure AD. Therefore, you may have a situation where users' passwords expire in your managed domain, but are still valid in Azure AD. In such scenarios, users need to change their password in Azure AD and the new password will synchronize to your managed domain. If you want to change the default password lifetime in a managed domain, you can [create and configure custom password policies](password-policy.md).
-
-Additionally, the Azure AD password policy for *DisablePasswordExpiration* is synchronized to a managed domain. When *DisablePasswordExpiration* is applied to a user in Azure AD, the *UserAccountControl* value for the synchronized user in the managed domain has *DONT_EXPIRE_PASSWORD* applied.
-
-When users reset their password in Azure AD, the *forceChangePasswordNextSignIn=True* attribute is applied. A managed domain synchronizes this attribute from Azure AD. When the managed domain detects *forceChangePasswordNextSignIn* is set for a synchronized user from Azure AD, the *pwdLastSet* attribute in the managed domain is set to *0*, which invalidates the currently set password.
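The two synchronization rules above can be modeled as a simple function. This is a simplified sketch in which plain dictionaries stand in for directory objects; it is not an actual Azure AD DS API:

```python
# Simplified model of the password-flag sync rules described above.
# The dict keys are illustrative, not real directory attribute names in an API.
DONT_EXPIRE_PASSWORD = 0x10000  # userAccountControl flag bit

def sync_password_flags(azure_ad_user: dict, managed_domain_user: dict) -> None:
    # DisablePasswordExpiration in Azure AD maps to DONT_EXPIRE_PASSWORD
    # in the managed domain's userAccountControl value.
    if azure_ad_user.get("DisablePasswordExpiration"):
        managed_domain_user["userAccountControl"] |= DONT_EXPIRE_PASSWORD
    # A forced password change invalidates the current managed-domain password
    # by zeroing pwdLastSet.
    if azure_ad_user.get("forceChangePasswordNextSignIn"):
        managed_domain_user["pwdLastSet"] = 0

source = {"DisablePasswordExpiration": False, "forceChangePasswordNextSignIn": True}
target = {"userAccountControl": 0x200, "pwdLastSet": 132476}  # 0x200 = NORMAL_ACCOUNT
sync_password_flags(source, target)
print(target["pwdLastSet"])  # 0
```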
-
-### Does Azure AD Domain Services provide AD account lockout protection?
-Yes. Five invalid password attempts within 2 minutes on the managed domain cause a user account to be locked out for 30 minutes. After 30 minutes, the user account is automatically unlocked. Invalid password attempts on the managed domain don't lock out the user account in Azure AD. The user account is locked out only within your Azure AD Domain Services managed domain. For more information, see [Password and account lockout policies on managed domains](password-policy.md).
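As a rough illustration of the policy above (a toy model, not the actual lockout implementation), the five-attempts-in-two-minutes rule can be sketched as:

```python
from datetime import datetime, timedelta
from typing import List, Optional

# Toy model of the managed-domain lockout policy described above: five
# invalid password attempts within two minutes lock the account for 30 minutes.
MAX_ATTEMPTS = 5
WINDOW = timedelta(minutes=2)
LOCKOUT = timedelta(minutes=30)

def lockout_until(failed_attempts: List[datetime]) -> Optional[datetime]:
    """Return when the lockout expires, or None if it was never triggered."""
    attempts = sorted(failed_attempts)
    for i in range(len(attempts) - MAX_ATTEMPTS + 1):
        # Check each run of MAX_ATTEMPTS consecutive failures.
        if attempts[i + MAX_ATTEMPTS - 1] - attempts[i] <= WINDOW:
            return attempts[i + MAX_ATTEMPTS - 1] + LOCKOUT
    return None

t0 = datetime(2021, 4, 29, 12, 0, 0)
burst = [t0 + timedelta(seconds=10 * i) for i in range(5)]  # 5 failures in 40s
print(lockout_until(burst))      # 2021-04-29 12:30:40
print(lockout_until(burst[:4]))  # None (only 4 attempts)
```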
-
-### Can I configure Distributed File System and replication within Azure AD Domain Services?
-No. Distributed File System (DFS) and replication aren't available when using Azure AD Domain Services.
-
-### How are Windows Updates applied in Azure AD Domain Services?
-Domain controllers in a managed domain automatically apply required Windows updates. There's nothing for you to configure or administer here. Make sure you don't create network security group rules that block outbound traffic to Windows Updates. For your own VMs joined to the managed domain, you are responsible for configuring and applying any required OS and application updates.
-
-## Billing and availability
-
-* [Is Azure AD Domain Services a paid service?](#is-azure-ad-domain-services-a-paid-service)
-* [Is there a free trial for the service?](#is-there-a-free-trial-for-the-service)
-* [Can I pause an Azure AD Domain Services managed domain?](#can-i-pause-an-azure-ad-domain-services-managed-domain)
-* [Can I fail over Azure AD Domain Services to another region for a DR event?](#can-i-fail-over-azure-ad-domain-services-to-another-region-for-a-dr-event)
-* [Can I get Azure AD Domain Services as part of Enterprise Mobility Suite (EMS)? Do I need Azure AD Premium to use Azure AD Domain Services?](#can-i-get-azure-ad-domain-services-as-part-of-enterprise-mobility-suite-ems-do-i-need-azure-ad-premium-to-use-azure-ad-domain-services)
-* [What Azure regions is the service available in?](#what-azure-regions-is-the-service-available-in)
-
-### Is Azure AD Domain Services a paid service?
-Yes. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/active-directory-ds/).
-
-### Is there a free trial for the service?
-Azure AD Domain Services is included in the free trial for Azure. You can sign up for a [free one-month trial of Azure](https://azure.microsoft.com/pricing/free-trial/).
-
-### Can I pause an Azure AD Domain Services managed domain?
-No. Once you've enabled an Azure AD Domain Services managed domain, the service is available within your selected virtual network until you delete the managed domain. There's no way to pause the service. Billing continues on an hourly basis until you delete the managed domain.
-
-### Can I fail over Azure AD Domain Services to another region for a DR event?
-Yes, to provide geographical resiliency for a managed domain, you can create an additional [replica set](tutorial-create-replica-set.md) to a peered virtual network in any Azure region that supports Azure AD DS. Replica sets share the same namespace and configuration with the managed domain.
-
-### Can I get Azure AD Domain Services as part of Enterprise Mobility Suite (EMS)? Do I need Azure AD Premium to use Azure AD Domain Services?
-No. Azure AD Domain Services is a pay-as-you-go Azure service and isn't part of EMS. Azure AD Domain Services can be used with all editions of Azure AD (Free and Premium). You're billed on an hourly basis, depending on usage.
-
-### What Azure regions is the service available in?
-Refer to the [Azure Services by region](https://azure.microsoft.com/regions/#services/) page to see a list of the Azure regions where Azure AD Domain Services is available.
-
-## Troubleshooting
-
-Refer to the [Troubleshooting guide](troubleshoot.md) for solutions to common issues with configuring or administering Azure AD Domain Services.
-
-## Next steps
-
-To learn more about Azure AD Domain Services, see [What is Azure Active Directory Domain Services?](overview.md).
-
-To get started, see [Create and configure an Azure Active Directory Domain Services managed domain](tutorial-create-instance.md).
active-directory-domain-services Migrate From Classic Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/migrate-from-classic-vnet.md
Last updated 09/24/2020
With your managed domain migrated to the Resource Manager deployment model, [cre
[migration-benefits]: concepts-migration-benefits.md <!-- EXTERNAL LINKS -->
-[powershell-script]: https://www.powershellgallery.com/packages/Migrate-Aadds/
+[powershell-script]: https://www.powershellgallery.com/packages/Migrate-Aadds/
active-directory-domain-services Powershell Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/powershell-scoped-synchronization.md
Last updated 03/08/2021 # Configure scoped synchronization from Azure AD to Azure Active Directory Domain Services using Azure AD PowerShell
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/secure-your-domain.md
Last updated 03/08/2021 # Disable weak ciphers and password hash synchronization to secure an Azure Active Directory Domain Services managed domain
active-directory-domain-services Security Audit Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/security-audit-events.md
Last updated 07/06/2020 # Enable security audits for Azure Active Directory Domain Services
For specific information on Kusto, see the following articles:
* Kusto [best practices](/azure/kusto/query/best-practices) to optimize your queries for success. <!-- LINKS - Internal -->
-[migrate-azure-adds]: migrate-from-classic-vnet.md
+[migrate-azure-adds]: migrate-from-classic-vnet.md
active-directory-domain-services Template Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/template-create-instance.md
Last updated 07/09/2020 # Create an Azure Active Directory Domain Services managed domain using an Azure Resource Manager template
To see the managed domain in action, you can [domain-join a Windows VM][windows-
[Get-AzSubscription]: /powershell/module/Az.Accounts/Get-AzSubscription [cloud-shell]: ../cloud-shell/cloud-shell-windows-users.md [naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain
-[New-AzResourceGroupDeployment]: /powershell/module/Az.Resources/New-AzResourceGroupDeployment
+[New-AzResourceGroupDeployment]: /powershell/module/Az.Resources/New-AzResourceGroupDeployment
active-directory Application Proxy Configure Cookie Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-configure-cookie-settings.md
+
+ Title: Application Proxy cookie settings - Azure Active Directory
+description: Azure Active Directory (Azure AD) has access and session cookies for accessing on-premises applications through Application Proxy. In this article, you'll find out how to use and configure the cookie settings.
+++++++ Last updated : 04/28/2021++++
+# Cookie settings for accessing on-premises applications in Azure Active Directory
+
+Azure Active Directory (Azure AD) has access and session cookies for accessing on-premises applications through Application Proxy. Find out how to use the Application Proxy cookie settings.
+
+## What are the cookie settings?
+
+[Application Proxy](application-proxy.md) uses the following access and session cookie settings.
+
+| Cookie setting | Default | Description | Recommendations |
+| -- | - | -- | -- |
+| Use HTTP-Only Cookie | **No** | **Yes** allows Application Proxy to include the HTTPOnly flag in HTTP response headers. This flag provides additional security benefits, for example, it prevents client-side scripts (helping to mitigate cross-site scripting, or XSS, attacks) from copying or modifying the cookies.<br></br><br></br>Before we supported the HTTP-Only setting, Application Proxy encrypted and transmitted cookies over a secured TLS channel to protect against modification. | Use **Yes** because of the additional security benefits.<br></br><br></br>Use **No** for clients or user agents that do require access to the session cookie. For example, use **No** for an RDP or MSTSC client that connects to a Remote Desktop Gateway server through Application Proxy.|
+| Use Secure Cookie | **No** | **Yes** allows Application Proxy to include the Secure flag in HTTP response headers. Secure cookies enhance security by being transmitted only over a TLS-secured channel such as HTTPS, which prevents unauthorized parties from observing cookies sent in clear text. | Use **Yes** because of the additional security benefits.|
+| Use Persistent Cookie | **No** | **Yes** allows Application Proxy to set its access cookies to not expire when the web browser is closed. The persistence lasts until the access token expires, or until the user manually deletes the persistent cookies. | Use **No** because of the security risk associated with keeping users authenticated.<br></br><br></br>We suggest only using **Yes** for older applications that can't share cookies between processes. It's better to update your application to handle sharing cookies between processes instead of using persistent cookies. For example, you might need persistent cookies to allow a user to open Office documents in explorer view from a SharePoint site. Without persistent cookies, this operation might fail if the access cookies aren't shared between the browser, the explorer process, and the Office process. |
+
+## SameSite Cookies
+Starting with Chrome 80, and eventually in other browsers based on Chromium, cookies that don't specify the [SameSite](https://web.dev/samesite-cookies-explained) attribute are treated as if they were set to **SameSite=Lax**. The SameSite attribute declares how cookies should be restricted to a same-site context. When set to Lax, the cookie is only sent with same-site requests or top-level navigations. However, Application Proxy requires these cookies to be preserved in the third-party context to keep users properly signed in during their session. Because of this, we're making the following updates to the Application Proxy access and session cookies to avoid adverse impact from this change:
+
+* Setting the **SameSite** attribute to **None**. This allows Application Proxy access and session cookies to be properly sent in the third-party context.
+* Setting the **Use Secure Cookie** setting to **Yes** as the default. Chrome also requires cookies to specify the Secure flag, or they're rejected. This change applies to all existing applications published through Application Proxy. Note that Application Proxy access cookies have always been set to Secure and transmitted only over HTTPS, so this change only applies to the session cookies.
+
+These changes to Application Proxy cookies will roll out over the course of the next several weeks before the Chrome 80 release date.
+
+Additionally, if your back-end application has cookies that need to be available in a third-party context, you must explicitly opt in by changing your application to use SameSite=None for these cookies. Application Proxy translates the Set-Cookie header to its URLs and respects the settings for these cookies set by the back-end application.
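+
+For illustration only (the cookie name and value here are hypothetical), a back-end application opting in would emit a response header along these lines:
+
+```http
+Set-Cookie: AppSessionId=abc123; SameSite=None; Secure; Path=/
+```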
+++
+## Set the cookie settings - Azure portal
+To set the cookie settings using the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Navigate to **Azure Active Directory** > **Enterprise applications** > **All applications**.
+3. Select the application for which you want to enable a cookie setting.
+4. Click **Application Proxy**.
+5. Under **Additional Settings**, set the cookie setting to **Yes** or **No**.
+6. Click **Save** to apply your changes.
+
+## View current cookie settings - PowerShell
+
+To see the current cookie settings for the application, use this PowerShell command:
+
+```powershell
+Get-AzureADApplicationProxyApplication -ObjectId <ObjectId> | fl *
+```
+
+## Set cookie settings - PowerShell
+
+In the following PowerShell commands, `<ObjectId>` is the ObjectId of the application.
+
+**Http-Only Cookie**
+
+```powershell
+Set-AzureADApplicationProxyApplication -ObjectId <ObjectId> -IsHttpOnlyCookieEnabled $true
+Set-AzureADApplicationProxyApplication -ObjectId <ObjectId> -IsHttpOnlyCookieEnabled $false
+```
+
+**Secure Cookie**
+
+```powershell
+Set-AzureADApplicationProxyApplication -ObjectId <ObjectId> -IsSecureCookieEnabled $true
+Set-AzureADApplicationProxyApplication -ObjectId <ObjectId> -IsSecureCookieEnabled $false
+```
+
+**Persistent Cookies**
+
+```powershell
+Set-AzureADApplicationProxyApplication -ObjectId <ObjectId> -IsPersistentCookieEnabled $true
+Set-AzureADApplicationProxyApplication -ObjectId <ObjectId> -IsPersistentCookieEnabled $false
+```
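+
+As a combined sketch (the GUID below is a placeholder for your application's ObjectId), you can set multiple cookie flags in one call and then verify the result:
+
+```powershell
+# Placeholder: replace with the ObjectId of your published application
+$appObjectId = "00000000-0000-0000-0000-000000000000"
+
+# Enable the HTTP-Only and Secure cookie flags together
+Set-AzureADApplicationProxyApplication -ObjectId $appObjectId -IsHttpOnlyCookieEnabled $true -IsSecureCookieEnabled $true
+
+# Confirm the updated settings
+Get-AzureADApplicationProxyApplication -ObjectId $appObjectId | fl IsHttpOnlyCookieEnabled, IsSecureCookieEnabled, IsPersistentCookieEnabled
+```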
active-directory Application Proxy Configure Single Sign On With Headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-with-headers.md
The following table lists common capabilities required for header-based authenti
|Fine grained authorization |Provides access control at the URL level. Added policies can be enforced based on the URL being accessed. The internal URL configured for the app, defines the scope of app that the policy is applied to. The policy configured for the most granular path is enforced. | > [!NOTE]
-> This article features connecting header-based authentication applications to Azure AD using Application Proxy and is the recommended pattern. As an alternative, there is also an integration pattern that uses PingAccess with Azure AD to enable header-based authentication. For more details, see [Header-based authentication for single sign-on with Application Proxy and PingAccess](../manage-apps/application-proxy-ping-access-publishing-guide.md).
+> This article features connecting header-based authentication applications to Azure AD using Application Proxy and is the recommended pattern. As an alternative, there is also an integration pattern that uses PingAccess with Azure AD to enable header-based authentication. For more details, see [Header-based authentication for single sign-on with Application Proxy and PingAccess](application-proxy-ping-access-publishing-guide.md).
## How it works
active-directory Application Proxy Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-deployment-plan.md
The following articles cover common scenarios that can also be used to create tr
* [Configure single sign-on to my app](application-proxy-config-sso-how-to.md) * [Problem creating an app in admin portal](application-proxy-config-problem.md) * [Configure Kerberos Constrained Delegation](application-proxy-back-end-kerberos-constrained-delegation-how-to.md)
-* [Configure with PingAccess](../manage-apps/application-proxy-ping-access-publishing-guide.md)
+* [Configure with PingAccess](application-proxy-ping-access-publishing-guide.md)
* [Can't Access this Corporate Application error](application-proxy-sign-in-bad-gateway-timeout-error.md) * [Problem installing the Application Proxy Agent Connector](application-proxy-connector-installation-problem.md) * [Sign-in problem](application-sign-in-problem-on-premises-application-proxy.md)
active-directory Application Proxy High Availability Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-high-availability-load-balancing.md
Title: High availability and load balancing - Azure AD Application Proxy
+ Title: High availability and load balancing - Azure Active Directory Application Proxy
description: How traffic distribution works with your Application Proxy deployment. Includes tips for how to optimize connector performance and use load balancing for back-end servers. -+ -+ Previously updated : 10/08/2019 Last updated : 04/29/2021 -- # High availability and load balancing of your Application Proxy connectors and applications
active-directory Application Proxy Integrate With Microsoft Cloud Application Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-integrate-with-microsoft-cloud-application-security.md
+
+ Title: Use Application Proxy to integrate on-premises apps with Cloud App Security - Azure Active Directory
+description: Configure an on-premises application in Azure Active Directory to work with Microsoft Cloud App Security (MCAS). Use the MCAS Conditional Access App Control to monitor and control sessions in real-time based on Conditional Access policies. You can apply these policies to on-premises applications that use Application Proxy in Azure Active Directory (Azure AD).
++++++ Last updated : 04/28/2021++++
+# Configure real-time application access monitoring with Microsoft Cloud App Security and Azure Active Directory
+Configure an on-premises application in Azure Active Directory (Azure AD) to use Microsoft Cloud App Security (MCAS) for real-time monitoring. MCAS uses Conditional Access App Control to monitor and control sessions in real time based on Conditional Access policies. You can apply these policies to on-premises applications that use Application Proxy in Azure AD.
+
+Here are some examples of the types of policies you can create with MCAS:
+
+- Block or protect the download of sensitive documents on unmanaged devices.
+- Monitor when high-risk users sign on to applications, and then log their actions from within the session. With this information, you can analyze user behavior to determine how to apply session policies.
+- Use client certificates or device compliance to block access to specific applications from unmanaged devices.
+- Restrict user sessions from non-corporate networks. You can give restricted access to users accessing an application from outside your corporate network. For example, this restricted access can block the user from downloading sensitive documents.
+
+For more information, see [Protect apps with Microsoft Cloud App Security Conditional Access App Control](/cloud-app-security/proxy-intro-aad).
+
+## Requirements
+
+License:
+
+- EMS E5 license, or
+- Azure Active Directory Premium P1 and MCAS Standalone.
+
+On-premises application:
+
+- The on-premises application must use Kerberos Constrained Delegation (KCD).
+
+Configure Application Proxy:
+
+- Configure Azure AD to use Application Proxy, including preparing your environment and installing the Application Proxy connector. For a tutorial, see [Add an on-premises application for remote access through Application Proxy in Azure AD](../app-proxy/application-proxy-add-on-premises-application.md).
+
+## Add on-premises application to Azure AD
+
+Add an on-premises application to Azure AD. For a quickstart, see [Add an on-premises app to Azure AD](../app-proxy/application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad). When adding the application, be sure to set the following two settings in the **Add your on-premises application** blade:
+
+- **Pre Authentication**: Enter **Azure Active Directory**.
+- **Translate URLs in Application Body**: Choose **Yes**.
+
+Those two settings are required for the application to work with MCAS.
+
+## Test the on-premises application
+
+After adding your application to Azure AD, use the steps in [Test the application](../app-proxy/application-proxy-add-on-premises-application.md#test-the-application) to add a user for testing, and test the sign-on.
+
+## Deploy Conditional Access App Control
+
+To configure your application with the Conditional Access Application Control, follow the instructions in [Deploy Conditional Access Application Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad).
++
+## Test Conditional Access App Control
+
+To test the deployment of Azure AD applications with Conditional Access Application Control, follow the instructions in [Test the deployment for Azure AD apps](/cloud-app-security/proxy-deployment-aad).
+++++
active-directory Application Proxy Integrate With Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-integrate-with-power-bi.md
+
+ Title: Enable remote access to Power BI with Azure Active Directory Application Proxy
+description: Covers the basics about how to integrate an on-premises Power BI with Azure Active Directory Application Proxy.
+++++++ Last updated : 04/28/2021++++
+# Enable remote access to Power BI Mobile with Azure Active Directory Application Proxy
+
+This article discusses how to use Azure AD Application Proxy to enable the Power BI mobile app to connect to Power BI Report Server (PBIRS) and SQL Server Reporting Services (SSRS) 2016 and later. Through this integration, users who are away from the corporate network can access their Power BI reports from the Power BI mobile app and be protected by Azure AD authentication. This protection includes [security benefits](application-proxy-security.md#security-benefits) such as Conditional Access and multi-factor authentication.
+
+## Prerequisites
+
+This article assumes you've already deployed Report Services and [enabled Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md).
+
+- Enabling Application Proxy requires installing a connector on a Windows server and completing the [prerequisites](../app-proxy/application-proxy-add-on-premises-application.md#prepare-your-on-premises-environment) so that the connector can communicate with Azure AD services.
+- When publishing Power BI, we recommend you use the same internal and external domains. To learn more about custom domains, see [Working with custom domains in Application Proxy](./application-proxy-configure-custom-domain.md).
+- This integration is available for the **Power BI Mobile iOS and Android** application.
+
+## Step 1: Configure Kerberos Constrained Delegation (KCD)
+
+For on-premises applications that use Windows authentication, you can achieve single sign-on (SSO) with the Kerberos authentication protocol and a feature called Kerberos constrained delegation (KCD). When configured, KCD allows the Application Proxy connector to obtain a Windows token for a user, even if the user hasn't signed into Windows directly. To learn more about KCD, see [Kerberos Constrained Delegation Overview](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj553400(v=ws.11)) and [Kerberos Constrained Delegation for single sign-on to your apps with Application Proxy](application-proxy-configure-single-sign-on-with-kcd.md).
+
+There isn't much to configure on the Reporting Services side. Just be sure to have a valid Service Principal Name (SPN) to enable the proper Kerberos authentication to occur. Also make sure the Reporting Services server is enabled for Negotiate authentication.
+
+To set up KCD for Reporting services, continue with the following steps.
+
+### Configure the Service Principal Name (SPN)
+
+The SPN is a unique identifier for a service that uses Kerberos authentication. You'll need to make sure you have a proper HTTP SPN present for your report server. For information on how to configure the proper Service Principal Name (SPN) for your report server, see [Register a Service Principal Name (SPN) for a Report Server](/sql/reporting-services/report-server/register-a-service-principal-name-spn-for-a-report-server).
+You can verify that the SPN was added by running the Setspn command with the -L option. To learn more about this command, see [Setspn](https://social.technet.microsoft.com/wiki/contents/articles/717.service-principal-names-spn-setspn-syntax.aspx).
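+
+As a sketch (the server and account names below are placeholders, not values from your environment), registering an HTTP SPN on the service account and then listing that account's SPNs from an elevated command prompt might look like:
+
+```shell
+setspn -S http/reportserver.contoso.com CONTOSO\svc-ssrs
+setspn -L CONTOSO\svc-ssrs
+```
+
+The `-S` option checks for duplicate SPNs before adding the new one, and `-L` lists the SPNs registered to the account so you can verify the result.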
+
+### Enable Negotiate authentication
+
+To enable a report server to use Kerberos authentication, configure the Authentication Type of the report server to be RSWindowsNegotiate. Configure this setting using the rsreportserver.config file.
+
+```xml
+<AuthenticationTypes>
+ <RSWindowsNegotiate />
+ <RSWindowsKerberos />
+ <RSWindowsNTLM />
+</AuthenticationTypes>
+```
+
+For more information, see [Modify a Reporting Services Configuration File](/sql/reporting-services/report-server/modify-a-reporting-services-configuration-file-rsreportserver-config) and [Configure Windows Authentication on a Report Server](/sql/reporting-services/security/configure-windows-authentication-on-the-report-server).
+
+### Ensure the Connector is trusted for delegation to the SPN added to the Reporting Services application pool account
+Configure KCD so that the Azure AD Application Proxy service can delegate user identities to the Reporting Services application pool account. Do this by enabling the Application Proxy connector to retrieve Kerberos tickets for your users who have been authenticated in Azure AD. The connector then passes that context to the target application, Reporting Services in this case.
+
+To configure KCD, repeat the following steps for each connector machine:
+
+1. Sign in to a domain controller as a domain administrator, and then open **Active Directory Users and Computers**.
+2. Find the computer that the connector is running on.
+3. Double-click the computer, and then select the **Delegation** tab.
+4. Set the delegation settings to **Trust this computer for delegation to the specified services only**. Then, select **Use any authentication protocol**.
+5. Select **Add**, and then select **Users or Computers**.
+6. Enter the service account that you're using for Reporting Services. This is the account you added the SPN to within the Reporting Services configuration.
+7. Click **OK**. To save the changes, click **OK** again.
+
+For more information, see [Kerberos Constrained Delegation for single sign-on to your apps with Application Proxy](application-proxy-configure-single-sign-on-with-kcd.md).
+
+## Step 2: Publish Report Services through Azure AD Application Proxy
+
+Now you're ready to configure Azure AD Application Proxy.
+
+1. Publish Report Services through Application Proxy with the following settings. For step-by-step instructions on how to publish an application through Application Proxy, see [Publishing applications using Azure AD Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad).
+ - **Internal URL**: Enter the URL to the Report Server that the connector can reach in the corporate network. Make sure this URL is reachable from the server the connector is installed on. A best practice is using a top-level domain such as `https://servername/` to avoid issues with subpaths published through Application Proxy. For example, use `https://servername/` and not `https://servername/reports/` or `https://servername/reportserver/`.
+ > [!NOTE]
+ > We recommend using a secure HTTPS connection to the Report Server. For information on how to configure it, see [Configure SSL connections on a native mode report server](/sql/reporting-services/security/configure-ssl-connections-on-a-native-mode-report-server).
+ - **External URL**: Enter the public URL the Power BI mobile app will connect to. For example, it may look like `https://reports.contoso.com` if a custom domain is used. To use a custom domain, upload a certificate for the domain, and point a DNS record to the default msappproxy.net domain for your application. For detailed steps, see [Working with custom domains in Azure AD Application Proxy](application-proxy-configure-custom-domain.md).
+
+ - **Pre-authentication Method**: Azure Active Directory
+
+2. Once your app is published, configure the single sign-on settings with the following steps:
+
+ a. On the application page in the portal, select **Single sign-on**.
+
+ b. For **Single Sign-on Mode**, select **Integrated Windows Authentication**.
+
+ c. Set **Internal Application SPN** to the value that you set earlier.
+
+ d. Choose the **Delegated Login Identity** for the connector to use on behalf of your users. For more information, see [Working with different on-premises and cloud identities](application-proxy-configure-single-sign-on-with-kcd.md#working-with-different-on-premises-and-cloud-identities).
+
+ e. Click **Save** to save your changes.
+
+To finish setting up your application, go to the **Users and groups** section and assign users to access this application.
+
+## Step 3: Modify the Reply URIs for the application
+
+Before the Power BI mobile app can connect and access Report Services, you must configure the Application Registration that was automatically created for you in step 2.
+
+1. On the Azure Active Directory **Overview** page, select **App registrations**.
+2. Under the **All applications** tab search for the application you created in step 2.
+3. Select the application, then select **Authentication**.
+4. Add the following Redirect URIs based on which platform you are using.
+
+ When configuring the app for Power BI Mobile **iOS**, add the following Redirect URIs of type Public Client (Mobile & Desktop):
+ - `msauth://code/mspbi-adal%3a%2f%2fcom.microsoft.powerbimobile`
+ - `msauth://code/mspbi-adalms%3a%2f%2fcom.microsoft.powerbimobilems`
+ - `mspbi-adal://com.microsoft.powerbimobile`
+ - `mspbi-adalms://com.microsoft.powerbimobilems`
+
+ When configuring the app for Power BI Mobile **Android**, add the following Redirect URIs of type Public Client (Mobile & Desktop):
+ - `urn:ietf:wg:oauth:2.0:oob`
+ - `mspbi-adal://com.microsoft.powerbimobile`
+ - `msauth://com.microsoft.powerbim/g79ekQEgXBL5foHfTlO2TPawrbI%3D`
+ - `msauth://com.microsoft.powerbim/izba1HXNWrSmQ7ZvMXgqeZPtNEU%3D`
+
+ > [!IMPORTANT]
+ > The Redirect URIs must be added for the application to work correctly. If you are configuring the app for both Power BI Mobile iOS and Android, add the following Redirect URI of type Public Client (Mobile & Desktop) to the list of Redirect URIs configured for iOS: `urn:ietf:wg:oauth:2.0:oob`.
+
+## Step 4: Connect from the Power BI Mobile App
+
+1. In the Power BI mobile app, connect to your Reporting Services instance. To do this, enter the **External URL** for the application you published through Application Proxy.
+
+ ![Power BI mobile app with External URL](media/application-proxy-integrate-with-power-bi/app-proxy-power-bi-mobile-app.png)
+
+2. Select **Connect**. You'll be directed to the Azure Active Directory sign in page.
+
+3. Enter valid credentials for your user and select **Sign in**. You'll see the elements from your Reporting Services server.
+
+## Step 5: Configure Intune policy for managed devices (optional)
+
+You can use Microsoft Intune to manage the client apps that your company's workforce uses. Intune allows you to use capabilities such as data encryption and additional access requirements. To learn more about app management through Intune, see Intune App Management. To enable the Power BI mobile application to work with the Intune policy, use the following steps.
+
+1. Go to **Azure Active Directory** and then **App Registrations**.
+2. Select the application configured in Step 3 when registering your native client application.
+3. On the application's page, select **API Permissions**.
+4. Click **Add a permission**.
+5. Under **APIs my organization uses**, search for "Microsoft Mobile Application Management" and select it.
+6. Add the **DeviceManagementManagedApps.ReadWrite** permission to the application.
+7. Click **Grant admin consent** to grant the permission access to the application.
+8. Configure the Intune policy you want by referring to [How to create and assign app protection policies](/intune/app-protection-policies).
+
+## Troubleshooting
+
+If the application returns an error page after trying to load a report for more than a few minutes, you might need to change the timeout setting. By default, Application Proxy supports applications that take up to 85 seconds to respond to a request. To lengthen this setting to 180 seconds, set the back-end timeout to **Long** on the Application Proxy settings page for the application. For tips on how to create fast and reliable reports, see [Power BI Reports Best Practices](/power-bi/power-bi-reports-performance).
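+
+If you prefer PowerShell, a sketch of the same change might look like the following (this assumes the `-ApplicationServerTimeout` parameter of `Set-AzureADApplicationProxyApplication`; the GUID is a placeholder for your application's ObjectId):
+
+```powershell
+# Lengthen the back-end timeout from the default (85 seconds) to Long (180 seconds)
+Set-AzureADApplicationProxyApplication -ObjectId "00000000-0000-0000-0000-000000000000" -ApplicationServerTimeout Long
+```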
+
+Using Azure AD Application Proxy to enable the Power BI mobile app to connect to an on-premises Power BI Report Server isn't supported with Conditional Access policies that require the Microsoft Power BI app as an approved client app.
+
+## Next steps
+
+- [Enable native client applications to interact with proxy applications](application-proxy-configure-native-client-application.md)
+- [View on-premises report server reports and KPIs in the Power BI mobile apps](/power-bi/consumer/mobile/mobile-app-ssrs-kpis-mobile-on-premises-reports)
active-directory Application Proxy Integrate With Tableau https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-integrate-with-tableau.md
+
+ Title: Azure Active Directory Application Proxy and Tableau
+description: Learn how to use Azure Active Directory (Azure AD) Application Proxy to provide remote access for your Tableau deployment.
+++++++ Last updated : 04/28/2021++++
+# Azure Active Directory Application Proxy and Tableau
+
+Azure Active Directory Application Proxy and Tableau have partnered to ensure you can easily use Application Proxy to provide remote access for your Tableau deployment. This article explains how to configure this scenario.
+
+## Prerequisites
+
+The scenario in this article assumes that you have:
+
+- [Tableau](https://onlinehelp.tableau.com/current/server/en-us/proxy.htm#azure) configured.
+
+- An [Application Proxy connector](../app-proxy/application-proxy-add-on-premises-application.md) installed.
+
+
+## Enabling Application Proxy for Tableau
+
+Application Proxy supports the OAuth 2.0 Grant Flow, which is required for Tableau to work properly. This means that there are no longer any special steps required to enable this application, other than configuring it by following the publishing steps below.
++
+## Publish your applications in Azure
+
+To publish Tableau, you need to publish an application in the Azure portal.
+
+For:
+
+- Detailed instructions for steps 1-8, see [Publish applications using Azure AD Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md).
+- Information about how to find Tableau values for the App Proxy fields, see the Tableau documentation.
+
+**To publish your app**:
++
+1. Sign in to the [Azure portal](https://portal.azure.com) as an application administrator.
+
+2. Select **Azure Active Directory > Enterprise applications**.
+
+3. Select **Add** at the top of the blade.
+
+4. Select **On-premises application**.
+
+5. Fill out the required fields with information about your new app. Use the following guidance for the settings:
+
+ - **Internal URL**: This application should have an internal URL that is the Tableau URL itself. For example, `https://adventure-works.tableau.com`.
+
+ - **Pre-authentication method**: Azure Active Directory (recommended but not required).
+
+6. Select **Add** at the top of the blade. Your application is added, and the quick start menu opens.
+
+7. In the quick start menu, select **Assign a user for testing**, and add at least one user to the application. Make sure this test account has access to the on-premises application.
+
+8. Select **Assign** to save the test user assignment.
+
+9. (Optional) On the app management page, select **Single sign-on**. Choose **Integrated Windows Authentication** from the drop-down menu, and fill out the required fields based on your Tableau configuration. Select **Save**.
+
+
+
+## Testing
+
+Your application is now ready to test. Access the external URL you used to publish Tableau, and sign in as a user assigned to both applications.
+++
+## Next steps
+
+For more information about Azure AD Application Proxy, see [How to provide secure remote access to on-premises applications](application-proxy.md).
+
active-directory Application Proxy Integrate With Teams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-integrate-with-teams.md
+
+ Title: Access Azure Active Directory Application Proxy apps in Teams
+description: Use Azure Active Directory Application Proxy to access your on-premises application through Microsoft Teams.
+++++++ Last updated : 04/28/2021++++
+# Access your on-premises applications through Microsoft Teams with Azure Active Directory Application Proxy
+
+Azure Active Directory Application Proxy gives you single sign-on to on-premises applications no matter where you are. Microsoft Teams streamlines your collaborative efforts in one place. Integrating the two together means that your users can be productive with their teammates in any situation.
+
+Your users can add cloud apps to their Teams channels [using tabs](https://support.office.com/article/Video-Using-Tabs-7350a03e-017a-4a00-a6ae-1c9fe8c497b3?ui=en-US&rs=en-US&ad=US), but what about the SharePoint sites or planning tools that are hosted on-premises? Application Proxy is the solution. They can add apps published through Application Proxy to their channels using the same external URLs they always use to access their apps remotely. And because Application Proxy authenticates through Azure Active Directory, your users get a single sign-on experience.
+
+## Install the Application Proxy connector and publish your app
+
+If you haven't already, [configure Application Proxy for your tenant and install the connector](../app-proxy/application-proxy-add-on-premises-application.md). Then, publish your on-premises application for remote access. When you're publishing the app, make note of the external URL because it's used to add the app to Teams.
+
+If you already have your apps published but don't remember their external URLs, look them up in the [Azure portal](https://portal.azure.com). Sign in, then navigate to **Azure Active Directory** > **Enterprise applications** > **All applications** > select your app > **Application proxy**.
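+
+As a hedged alternative (this sketch enumerates every app registration and probes each one, which can be slow in large tenants, and it assumes the AzureAD PowerShell module's `ExternalUrl` property), you could also list external URLs from PowerShell:
+
+```powershell
+# List the external URL of each application published through Application Proxy
+Get-AzureADApplication -All $true | ForEach-Object {
+    try {
+        $proxy = Get-AzureADApplicationProxyApplication -ObjectId $_.ObjectId -ErrorAction Stop
+        [pscustomobject]@{ DisplayName = $_.DisplayName; ExternalUrl = $proxy.ExternalUrl }
+    } catch {
+        # Not an Application Proxy app; skip it
+    }
+}
+```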
+
+## Add your app to Teams
+
+Once you publish the app through Application Proxy, let your users know that they can add it as a tab directly in their Teams channels, and then the app is available for everyone in the team to use. Have them follow these three steps:
+
+1. Navigate to the Teams channel where you want to add this app and select **+** to add a tab.
+
+ ![Select + to add a tab in Teams](./media/application-proxy-integrate-with-teams/add-tab.png)
+
+1. Select **Website** from the tab options.
+
+ ![Select Website from the Add a tab screen](./media/application-proxy-integrate-with-teams/website.png)
+
+1. Give the tab a name and set the URL to the Application Proxy external URL.
+
+ ![Name the tab and add the external URL](./media/application-proxy-integrate-with-teams/tab-name-url.png)
+
+Once one member of a team adds the tab, it shows up for everyone in the channel. Any users who have access to the app get single sign-on access with the credentials they use for Microsoft Teams. Any users who don't have access to the app can see the tab in Teams, but are blocked until you grant them permissions to both the on-premises app and the version of the app published through the Azure portal.
+
+## Next steps
+
+- Learn how to [publish on-premises SharePoint sites](application-proxy-integrate-with-sharepoint-server.md) with Application Proxy.
+- Configure your apps to use [custom domains](application-proxy-configure-custom-domain.md) for their external URL.
active-directory Application Proxy Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-migration.md
+
+ Title: Upgrade to Azure Active Directory Application Proxy
+description: Choose which proxy solution is best if you're upgrading from Microsoft Forefront or Unified Access Gateway.
+++++++ Last updated : 04/29/2021++++
+# Compare remote access solutions
+
+Azure Active Directory Application Proxy is one of two remote access solutions that Microsoft offers. The other is Web Application Proxy, the on-premises version. These two solutions replace earlier products that Microsoft offered: Microsoft Forefront Threat Management Gateway (TMG) and Unified Access Gateway (UAG). Use this article to understand how these four solutions compare to each other. For those of you still using the deprecated TMG or UAG solutions, use this article to help plan your migration to one of the Application Proxy solutions.
++
+## Feature comparison
+
+Use this table to understand how Threat Management Gateway (TMG), Unified Access Gateway (UAG), Web Application Proxy (WAP), and Azure AD Application Proxy (AP) compare to each other.
+
+| Feature | TMG | UAG | WAP | AP |
+| - | | | | |
+| Certificate authentication | Yes | Yes | - | - |
+| Selectively publish browser apps | Yes | Yes | Yes | Yes |
+| Preauthentication and single sign-on | Yes | Yes | Yes | Yes |
+| Layer 2/3 firewall | Yes | Yes | - | - |
+| Forward proxy capabilities | Yes | - | - | - |
+| VPN capabilities | Yes | Yes | - | - |
+| Rich protocol support | - | Yes | Yes, if running over HTTP | Yes, if running over HTTP or through Remote Desktop Gateway |
+| Serves as ADFS proxy server | - | Yes | Yes | - |
+| One portal for application access | - | Yes | - | Yes |
+| Response body link translation | Yes | Yes | - | Yes |
+| Authentication with headers | - | Yes | - | Yes, with PingAccess |
+| Cloud-scale security | - | - | - | Yes |
+| Conditional Access | - | Yes | - | Yes |
+| No components in the demilitarized zone (DMZ) | - | - | - | Yes |
+| No inbound connections | - | - | - | Yes |
+
+For most scenarios, we recommend Azure AD Application Proxy as the modern solution. Web Application Proxy is preferred only in scenarios that require a proxy server for AD FS and where you can't use custom domains in Azure Active Directory.
+
+Azure AD Application Proxy offers unique benefits when compared to similar products, including:
+
+- Extending Azure AD to on-premises resources
+ - Cloud-scale security and protection
+ - Features like Conditional Access and Multi-Factor Authentication are easy to enable
+- No components in the demilitarized zone
+- No inbound connections required
+- One My Apps page that your users can go to for all their applications, including Microsoft 365, Azure AD integrated SaaS apps, and your on-premises web apps.
++
+## Next steps
+
+- [Use Azure AD Application Proxy to provide secure remote access to on-premises applications](application-proxy.md)
active-directory Application Proxy Ping Access Publishing Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-ping-access-publishing-guide.md
+
+ Title: Header-based authentication with PingAccess for Azure Active Directory Application Proxy
+description: Publish applications with PingAccess and App Proxy to support header-based authentication.
+++++++ Last updated : 04/28/2021++++
+# Header-based authentication for single sign-on with Application Proxy and PingAccess
+
+Azure Active Directory (Azure AD) Application Proxy has partnered with PingAccess so that your Azure AD customers can access more of your applications. PingAccess provides another option beyond integrated [header-based single sign-on](application-proxy-configure-single-sign-on-with-headers.md).
+
+## What's PingAccess for Azure AD?
+
+With PingAccess for Azure AD, you can give users access and single sign-on (SSO) to applications that use headers for authentication. Application Proxy treats these applications like any other, using Azure AD to authenticate access and then passing traffic through the connector service. PingAccess sits in front of the applications and translates the access token from Azure AD into a header. The application then receives the authentication in the format it can read.
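As a rough sketch of the header-translation idea (not PingAccess's actual implementation), a gateway that consumes an Azure AD access token might decode the JWT payload and forward one of its claims as an HTTP header. The claim name, header name, and toy token below are purely illustrative:

```python
import base64
import json

def claim_to_header(jwt: str, claim: str, header: str) -> dict:
    """Decode the (unverified) JWT payload and map one claim to an HTTP header.
    A real gateway like PingAccess also validates the token signature first."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return {header: claims[claim]}

# Build a toy token: header.payload.signature ("e30" is "{}"; signature omitted)
payload = base64.urlsafe_b64encode(
    json.dumps({"upn": "user@contoso.com"}).encode()).decode().rstrip("=")
token = f"e30.{payload}.sig"
print(claim_to_header(token, "upn", "X-Authenticated-User"))
```

The application behind the proxy then reads `X-Authenticated-User` (or whatever header it expects) instead of handling the token itself.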
+
+Your users won't notice anything different when they sign in to use your corporate applications. They can still work from anywhere on any device. The Application Proxy connectors direct remote traffic to all apps without regard to their authentication type, so they'll still balance loads automatically.
+
+## How do I get access?
+
+Since this scenario comes from a partnership between Azure Active Directory and PingAccess, you need licenses for both services. However, Azure Active Directory Premium subscriptions include a basic PingAccess license that covers up to 20 applications. If you need to publish more than 20 header-based applications, you can purchase an additional license from PingAccess.
+
+For more information, see [Azure Active Directory editions](../fundamentals/active-directory-whatis.md).
+
+## Publish your application in Azure
+
+This article is for people publishing an application with this scenario for the first time. Besides detailing the publishing steps, it guides you in getting started with both Application Proxy and PingAccess. If you've already configured both services but want a refresher on the publishing steps, skip to the [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy) section.
+
+> [!NOTE]
+> Since this scenario is a partnership between Azure AD and PingAccess, some of the instructions exist on the Ping Identity site.
+
+### Install an Application Proxy connector
+
+If you've already enabled Application Proxy and installed a connector, you can skip this section and go to [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy).
+
+The Application Proxy connector is a Windows Server service that directs the traffic from your remote employees to your published applications. For more detailed installation instructions, see [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md).
+
+1. Sign in to the [Azure Active Directory portal](https://aad.portal.azure.com/) as an application administrator. The **Azure Active Directory admin center** page appears.
+1. Select **Azure Active Directory** > **Application proxy** > **Download connector service**. The **Application Proxy Connector Download** page appears.
+
+ ![Application proxy connector download](./media/application-proxy-configure-single-sign-on-with-ping-access/application-proxy-connector-download.png)
+
+1. Follow the installation instructions.
+
+Downloading the connector should automatically enable Application Proxy for your directory, but if not, you can select **Enable Application Proxy**.
+
+### Add your application to Azure AD with Application Proxy
+
+There are two actions you need to take in the Azure portal. First, you need to publish your application with Application Proxy. Then, you need to collect some information about the application that you can use during the PingAccess steps.
+
+#### Publish your application
+
+You'll first have to publish your application. This action involves:
+
+- Adding your on-premises application to Azure AD
+- Assigning a user for testing the application and choosing header-based SSO
+- Setting up the application's redirect URL
+- Granting permissions for users and other applications to use your on-premises application
+
+To publish your own on-premises application:
+
+1. If you didn't already do so in the last section, sign in to the [Azure Active Directory portal](https://aad.portal.azure.com/) as an application administrator.
+1. Select **Enterprise applications** > **New application** > **Add an on-premises application**. The **Add your own on-premises application** page appears.
+
+ ![Add your own on-premises application](./media/application-proxy-configure-single-sign-on-with-ping-access/add-your-own-on-premises-application.png)
+1. Fill out the required fields with information about your new application. Use the guidance below for the settings.
+
+ > [!NOTE]
+ > For a more detailed walkthrough of this step, see [Add an on-premises app to Azure AD](../app-proxy/application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad).
+
+ 1. **Internal URL**: Normally you provide the URL that takes you to the app's sign-in page when you're on the corporate network. For this scenario, the connector needs to treat the PingAccess proxy as the front page of the application. Use this format: `https://<host name of your PingAccess server>:<port>`. The port is 3000 by default, but you can configure it in PingAccess.
+
+ > [!WARNING]
+ > For this type of single sign-on, the internal URL must use `https` and can't use `http`. Also, no two apps can have the same internal URL; this constraint allows Application Proxy to maintain the distinction between applications.
+
+ 1. **Pre-authentication method**: Choose **Azure Active Directory**.
+ 1. **Translate URL in Headers**: Choose **No**.
+
+ > [!NOTE]
+ > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the listener you've configured in PingAccess. Learn more about [listeners in PingAccess](https://support.pingidentity.com/s/document-item?bundleId=pingaccess-52&topicId=reference/ui/pa_c_Listeners.html).
+
+1. Select **Add**. The overview page for the new application appears.
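The internal URL rules above (an `https` scheme is required, and the port must match a PingAccess listener, 3000 by default) can be sanity-checked with a short script. The host name used here is hypothetical:

```python
from urllib.parse import urlparse

def validate_internal_url(url: str, listener_port: int = 3000) -> None:
    """Check an Application Proxy internal URL against the PingAccess rules above."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError("internal URL must use https, not " + (parsed.scheme or "<none>"))
    if not parsed.hostname:
        raise ValueError("internal URL must include the PingAccess host name")
    if (parsed.port or 443) != listener_port:
        print(f"note: port {parsed.port or 443} must match a configured PingAccess listener")

validate_internal_url("https://pingaccess.contoso.local:3000")  # passes silently
```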
+
+Now assign a user for application testing and choose header-based single sign-on:
+
+1. From the application sidebar, select **Users and groups** > **Add user** > **Users and groups (\<Number> Selected)**. A list of users and groups appears for you to choose from.
+
+ ![Shows the list of users and groups](./media/application-proxy-configure-single-sign-on-with-ping-access/users-and-groups.png)
+
+1. Select a user for application testing, and select **Select**. Make sure this test account has access to the on-premises application.
+1. Select **Assign**.
+1. From the application sidebar, select **Single sign-on** > **Header-based**.
+
+ > [!TIP]
+ > If this is your first time using header-based single sign-on, you need to install PingAccess. To make sure your Azure subscription is automatically associated with your PingAccess installation, use the link on this single sign-on page to download PingAccess. You can open the download site now, or come back to this page later.
+
+ ![Shows header-based sign-on screen and PingAccess](./media/application-proxy-configure-single-sign-on-with-ping-access/sso-header.png)
+
+1. Select **Save**.
+
+Then make sure your redirect URL is set to your external URL:
+
+1. From the **Azure Active Directory admin center** sidebar, select **Azure Active Directory** > **App registrations**. A list of applications appears.
+1. Select your application.
+1. Select the link next to **Redirect URIs**, showing the number of redirect URIs set up for web and public clients. The **\<application name> - Authentication** page appears.
+1. Check whether the external URL that you assigned to your application earlier is in the **Redirect URIs** list. If it isn't, add the external URL now, using a redirect URI type of **Web**, and select **Save**.
+
+In addition to the external URL, add the Azure Active Directory authorize endpoint on the external URL to the **Redirect URIs** list:
+
+`https://*.msappproxy.net/pa/oidc/cb`
+`https://*.msappproxy.net/`
+
+Finally, set up your on-premises application so that users have read access and other applications have read/write access:
+
+1. From the **App registrations** sidebar for your application, select **API permissions** > **Add a permission** > **Microsoft APIs** > **Microsoft Graph**. The **Request API permissions** page for **Microsoft Graph** appears, which contains the APIs for Windows Azure Active Directory.
+
+ ![Shows the Request API permissions page](./media/application-proxy-configure-single-sign-on-with-ping-access/required-permissions.png)
+
+1. Select **Delegated permissions** > **User** > **User.Read**.
+1. Select **Application permissions** > **Application** > **Application.ReadWrite.All**.
+1. Select **Add permissions**.
+1. In the **API permissions** page, select **Grant admin consent for \<your directory name>**.
+
+#### Collect information for the PingAccess steps
+
+You need to collect these three pieces of information to set up your application with PingAccess:
+
+| Name of Azure AD field | Name of PingAccess field | Data format |
+| | | |
+| **Application (client) ID** | **Client ID** | GUID |
+| **Directory (tenant) ID** | **Issuer** | GUID |
+| `PingAccess key` | **Client Secret** | Random string |
+
+To collect this information:
+
+1. From the **Azure Active Directory admin center** sidebar, select **Azure Active Directory** > **App registrations**. A list of applications appears.
+1. Select your application. The **App registrations** page for your application appears.
+
+ ![Registration overview for an application](./media/application-proxy-configure-single-sign-on-with-ping-access/registration-overview-for-an-application.png)
+
+1. Next to the **Application (client) ID** value, select the **Copy to clipboard** icon, then copy and save it. You specify this value later as PingAccess's client ID.
+1. Next to the **Directory (tenant) ID** value, also select **Copy to clipboard**, then copy and save it. You specify this value later as PingAccess's issuer.
+1. From the sidebar of the **App registrations** for your application, select **Certificates and secrets** > **New client secret**. The **Add a client secret** page appears.
+
+ ![Shows the Add a client secret page](./media/application-proxy-configure-single-sign-on-with-ping-access/add-a-client-secret.png)
+
+1. In **Description**, type `PingAccess key`.
+1. Under **Expires**, choose how to set the PingAccess key: **In 1 year**, **In 2 years**, or **Never**.
+1. Select **Add**. The PingAccess key appears in the table of client secrets, with a random string that autofills in the **VALUE** field.
+1. Next to the PingAccess key's **VALUE** field, select the **Copy to clipboard** icon, then copy and save it. You specify this value later as PingAccess's client secret.
+
+**Update the `acceptMappedClaims` field:**
+
+1. Sign in to the [Azure Active Directory portal](https://aad.portal.azure.com/) as an application administrator.
+1. Select **Azure Active Directory** > **App registrations**. A list of applications appears.
+1. Select your application.
+1. From the sidebar of the **App registrations** page for your application, select **Manifest**. The manifest JSON code for your application's registration appears.
+1. Search for the `acceptMappedClaims` field, and change the value to `True`.
+1. Select **Save**.
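Note that the manifest is JSON, so after saving, the field reads as a lowercase JSON boolean. The fragment below shows only this one field, with the rest of the manifest omitted:

```json
{
  "acceptMappedClaims": true
}
```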
+
+### Use of optional claims (optional)
+
+Optional claims let you add standard-but-not-included-by-default claims that every user and tenant has.
+You can configure optional claims for your application by modifying the application manifest. For more info, see the [Understanding the Azure AD application manifest](../develop/reference-app-manifest.md) article.
+
+The following example includes the email address in the `access_token` that PingAccess will consume:
+
+```json
+ "optionalClaims": {
+ "idToken": [],
+ "accessToken": [
+ {
+ "name": "email",
+ "source": null,
+ "essential": false,
+ "additionalProperties": []
+ }
+ ],
+ "saml2Token": []
+ },
+```
+
+### Use of claims mapping policy (optional)
+
+Use a [Claims Mapping Policy (preview)](../develop/reference-claims-mapping-policy-type.md#claims-mapping-policy-properties) for attributes that don't exist in Azure AD. Claims mapping lets you migrate old on-premises apps to the cloud by adding additional custom claims that are backed by your ADFS or user objects.
+
+To make your application use a custom claim and include additional fields, be sure you've also [created a custom claims mapping policy and assigned it to the application](../develop/active-directory-claims-mapping.md).
+
+> [!NOTE]
+> To use a custom claim, you must also have a custom policy defined and assigned to the application. This policy should include all required custom attributes.
+>
+> You can do policy definition and assignment through PowerShell or Microsoft Graph. If you're doing them in PowerShell, you may need to first use `New-AzureADPolicy` and then assign it to the application with `Add-AzureADServicePrincipalPolicy`. For more information, see [Claims mapping policy assignment](../develop/active-directory-claims-mapping.md).
+
+Example:
+```powershell
+$pol = New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true", "ClaimsSchema": [{"Source":"user","ID":"employeeid","JwtClaimType":"employeeid"}]}}') -DisplayName "AdditionalClaims" -Type "ClaimsMappingPolicy"
+
+Add-AzureADServicePrincipalPolicy -Id "<<The object Id of the Enterprise Application you published in the previous step, which requires this claim>>" -RefObjectId $pol.Id
+```
+
+### Enable PingAccess to use custom claims
+
+Enabling PingAccess to use custom claims is optional, but required if you expect the application to consume additional claims.
+
+When you configure PingAccess in the following step, the web session you create (**Settings** > **Access** > **Web Sessions**) must have **Request Profile** deselected and **Refresh User Attributes** set to **No**.
+
+## Download PingAccess and configure your application
+
+Now that you've completed all the Azure Active Directory setup steps, you can move on to configuring PingAccess.
+
+The detailed steps for the PingAccess part of this scenario continue in the Ping Identity documentation. Follow the instructions in [Configure PingAccess for Azure AD to protect applications published using Microsoft Azure AD Application Proxy](https://support.pingidentity.com/s/document-item?bundleId=pingaccess-52&topicId=agents/azure/pa_c_PAAzureSolutionOverview.html) on the Ping Identity web site and download the [latest version of PingAccess](https://www.pingidentity.com/en/lp/azure-download.html?).
+
+Those steps help you install PingAccess and set up a PingAccess account (if you don't already have one). Then, to create an Azure AD OpenID Connect (OIDC) connection, you set up a token provider with the **Directory (tenant) ID** value that you copied from the Azure AD portal. Next, to create a web session on PingAccess, you use the **Application (client) ID** and `PingAccess key` values. After that, you can set up identity mapping and create a virtual host, site, and application.
+
+### Test your application
+
+When you've completed all these steps, your application should be up and running. To test it, open a browser and navigate to the external URL that you created when you published the application in Azure. Sign in with the test account that you assigned to the application.
+
+## Next steps
+
+- [Configure PingAccess for Azure AD to protect applications published using Microsoft Azure AD Application Proxy](https://docs.pingidentity.com/bundle/pingaccess-60/page/jep1564006742933.html)
+- [Single sign-on to applications in Azure Active Directory](../manage-apps/what-is-single-sign-on.md)
+- [Troubleshoot Application Proxy problems and error messages](application-proxy-troubleshoot.md)
active-directory Application Proxy Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-powershell-samples.md
+
+ Title: PowerShell samples for Azure Active Directory Application Proxy
+description: Use these PowerShell samples for Azure Active Directory Application Proxy to get information about Application Proxy apps and connectors in your directory, assign users and groups to apps, and get certificate information.
+++++++ Last updated : 04/29/2021++++
+# Azure Active Directory Application Proxy PowerShell examples
+
+The following table includes links to PowerShell script examples for Azure AD Application Proxy. These samples require either the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true), unless otherwise noted.
++
+For more information about the cmdlets used in these samples, see [Application Proxy Application Management](/powershell/module/azuread/#application_proxy_application_management) and [Application Proxy Connector Management](/powershell/module/azuread/#application_proxy_connector_management).
+
+| Link | Description |
+|||
+|**Application Proxy apps**||
+| [List basic information for all Application Proxy apps](scripts/powershell-get-all-app-proxy-apps-basic.md) | Lists basic information (AppId, DisplayName, ObjId) about all the Application Proxy apps in your directory. |
+| [List extended information for all Application Proxy apps](scripts/powershell-get-all-app-proxy-apps-extended.md) | Lists extended information (AppId, DisplayName, ExternalUrl, InternalUrl, ExternalAuthenticationType) about all the Application Proxy apps in your directory. |
+| [List all Application Proxy apps by connector group](scripts/powershell-get-all-app-proxy-apps-by-connector-group.md) | Lists information about all the Application Proxy apps in your directory and which connector groups the apps are assigned to. |
+| [Get all Application Proxy apps with a token lifetime policy](scripts/powershell-get-all-app-proxy-apps-with-policy.md) | Lists all Application Proxy apps in your directory with a token lifetime policy and its details. This sample requires the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true). |
+|**Connector groups**||
+| [Get all connector groups and connectors in the directory](scripts/powershell-get-all-connectors.md) | Lists all the connector groups and connectors in your directory. |
+| [Move all apps assigned to a connector group to another connector group](scripts/powershell-move-all-apps-to-connector-group.md) | Moves all applications currently assigned to a connector group to a different connector group. |
+|**Users and group assigned**||
+| [Display users and groups assigned to an Application Proxy application](scripts/powershell-display-users-group-of-app.md) | Lists the users and groups assigned to a specific Application Proxy application. |
+| [Assign a user to an application](scripts/powershell-assign-user-to-app.md) | Assigns a specific user to an application. |
+| [Assign a group to an application](scripts/powershell-assign-group-to-app.md) | Assigns a specific group to an application. |
+|**External URL configuration**||
+| [Get all Application Proxy apps using default domains (.msappproxy.net)](scripts/powershell-get-all-default-domain-apps.md) | Lists all the Application Proxy applications using default domains (.msappproxy.net). |
+| [Get all Application Proxy apps using wildcard publishing](scripts/powershell-get-all-wildcard-apps.md) | Lists all Application Proxy apps using wildcard publishing. |
+|**Custom Domain configuration**||
+| [Get all Application Proxy apps using custom domains and certificate information](scripts/powershell-get-all-custom-domains-and-certs.md) | Lists all Application Proxy apps that are using custom domains and the certificate information associated with the custom domains. |
+| [Get all Azure AD Proxy application apps published with no certificate uploaded](scripts/powershell-get-all-custom-domain-no-cert.md) | Lists all Application Proxy apps that are using custom domains but don't have a valid TLS/SSL certificate uploaded. |
+| [Get all Azure AD Proxy application apps published with the identical certificate](scripts/powershell-get-custom-domain-identical-cert.md) | Lists all the Azure AD Proxy application apps published with the identical certificate. |
+| [Get all Azure AD Proxy application apps published with the identical certificate and replace it](scripts/powershell-get-custom-domain-replace-cert.md) | For Azure AD Proxy application apps that are published with an identical certificate, allows you to replace the certificate in bulk. |
active-directory Application Proxy Remove Personal Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-remove-personal-data.md
+
+ Title: Remove personal data - Azure Active Directory Application Proxy
+description: Remove personal data from connectors installed on devices for Azure Active Directory Application Proxy.
++++++ Last updated : 04/28/2021++++
+# Remove personal data for Azure Active Directory Application Proxy
+
+Azure Active Directory Application Proxy requires that you install connectors on your devices, which means that there might be personal data on your devices. This article provides steps for how to delete that personal data to improve privacy.
+
+## Where is the personal data?
+
+It is possible for Application Proxy to write personal data to the following log types:
+
+- Connector event logs
+- Windows event logs
+
+## Remove personal data from Windows event logs
+
+For information on how to configure data retention for the Windows event logs, see [Settings for event logs](https://technet.microsoft.com/library/cc952132.aspx). To learn about Windows event logs, see [Using Windows Event Log](/windows/win32/wes/using-windows-event-log).
++
+## Remove personal data from Connector event logs
+
+To ensure the Application Proxy logs do not have personal data, you can either:
+
+- Delete or view data when needed, or
+- Turn off logging
+
+Use the following sections to remove personal data from connector event logs. You must complete the removal process for all devices on which the connector is installed.
++
+### View or export specific data
+
+To view or export specific data, search for related entries in each of the connector event logs. The logs are located at `C:\ProgramData\Microsoft\Microsoft AAD Application Proxy Connector\Trace`.
+
+Since the logs are text files, you can use [findstr](/windows-server/administration/windows-commands/findstr) to search for text entries related to a user.
+
+To find personal data, search log files for UserID.
+
+To find personal data logged by an application that uses Kerberos Constrained Delegation, search for these components of the username type:
+
+- On-premises user principal name
+- Username part of user principal name
+- Username part of on-premises user principal name
+- On-premises security accounts manager (SAM) account name
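As an alternative to `findstr`, a short script can scan a trace directory for a given identifier. The `.log` extension and the sample UPN are assumptions for illustration; point `trace_dir` at the Trace folder named above:

```python
import pathlib
import tempfile

def find_personal_data(log_dir, term):
    """Return (file name, line number, line) for every log line containing term."""
    hits = []
    for log_file in pathlib.Path(log_dir).glob("*.log"):
        for lineno, line in enumerate(log_file.read_text(errors="replace").splitlines(), 1):
            if term.lower() in line.lower():
                hits.append((log_file.name, lineno, line))
    return hits

# Demo against a throwaway directory standing in for the connector Trace folder.
trace_dir = tempfile.mkdtemp()
pathlib.Path(trace_dir, "connector.log").write_text(
    "2021-04-28 12:00:01 request authorized for user@contoso.com\n")
print(find_personal_data(trace_dir, "user@contoso.com"))
```

The returned file names and line numbers tell you which log files to clean up or delete in the steps that follow.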
+
+### Delete specific data
+
+To delete specific data:
+
+1. Restart the Microsoft Azure AD Application Proxy Connector service to generate a new log file. The new log file enables you to delete or modify the old log files.
+1. Follow the [View or export specific data](#view-or-export-specific-data) process described previously to find information that needs to be deleted. Search all of the connector logs.
+1. Either delete the relevant log files or selectively delete the fields that contain personal data. You can also delete all old log files if you don't need them anymore.
+
+### Turn off connector logs
+
+One option to ensure the connector logs do not contain personal data is to turn off the log generation. To stop generating connector logs, remove the following highlighted line from `C:\Program Files\Microsoft AAD App Proxy Connector\ApplicationProxyConnectorService.exe.config`.
+
+![Shows a code snippet with the highlighted code to remove](./media/application-proxy-remove-personal-data/01.png)
+
+## Next steps
+
+For an overview of Application Proxy, see [How to provide secure remote access to on-premises applications](application-proxy.md).
active-directory Application Proxy Understand Cors Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-understand-cors-issues.md
+
+ Title: Understand and solve Azure Active Directory Application Proxy CORS issues
+description: Provides an understanding of CORS in Azure Active Directory Application Proxy, and how to identify and solve CORS issues.
+++++++ Last updated : 04/28/2021++++
+# Understand and solve Azure Active Directory Application Proxy CORS issues
+
+[Cross-origin resource sharing (CORS)](https://www.w3.org/TR/cors/) can sometimes present challenges for the apps and APIs you publish through the Azure Active Directory Application Proxy. This article discusses Azure AD Application Proxy CORS issues and solutions.
+
+Browser security usually prevents a web page from making AJAX requests to another domain. This restriction is called the *same-origin policy*, and prevents a malicious site from reading sensitive data from another site. However, sometimes you might want to let other sites call your web API. CORS is a W3C standard that lets a server relax the same-origin policy and allow some cross-origin requests while rejecting others.
+
+## Understand and identify CORS issues
+
+Two URLs have the same origin if they have identical schemes, hosts, and ports ([RFC 6454](https://tools.ietf.org/html/rfc6454)), such as:
+
+- http:\//contoso.com/foo.html
+- http:\//contoso.com/bar.html
+
+The following URLs have different origins than the previous two:
+
+- http:\//contoso.net - Different domain
+- http:\//contoso.com:9000/foo.html - Different port
+- https:\//contoso.com/foo.html - Different scheme
+- http:\//www.contoso.com/foo.html - Different subdomain
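The RFC 6454 comparison (identical scheme, host, and port) can be expressed directly. Default ports are filled in so that an implicit port compares equal to an explicit one:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """Return the (scheme, host, port) triple that defines a URL's origin."""
    parts = urlsplit(url)
    return (parts.scheme, parts.hostname, parts.port or DEFAULT_PORTS.get(parts.scheme))

def same_origin(a, b):
    return origin(a) == origin(b)

print(same_origin("http://contoso.com/foo.html", "http://contoso.com/bar.html"))       # True
print(same_origin("http://contoso.com/foo.html", "http://contoso.com:9000/foo.html"))  # False: port
print(same_origin("http://contoso.com/foo.html", "https://contoso.com/foo.html"))      # False: scheme
print(same_origin("http://contoso.com/foo.html", "http://www.contoso.com/foo.html"))   # False: subdomain
```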
+
+Same-origin policy prevents apps from accessing resources from other origins unless they use the correct access control headers. If the CORS headers are absent or incorrect, cross-origin requests fail.
+
+You can identify CORS issues by using browser debug tools:
+
+1. Launch the browser and browse to the web app.
+1. Press **F12** to bring up the debug console.
+1. Try to reproduce the transaction, and review the console message. A CORS violation produces a console error about origin.
+
+In the following screenshot, selecting the **Try It** button caused a CORS error message that https:\//corswebclient-contoso.msappproxy.net wasn't found in the Access-Control-Allow-Origin header.
+
+![CORS issue](./media/application-proxy-understand-cors-issues/image3.png)
+
+## CORS challenges with Application Proxy
+
+The following example shows a typical Azure AD Application Proxy CORS scenario. The internal server hosts a **CORSWebService** web API controller, and a **CORSWebClient** that calls **CORSWebService**. There's an AJAX request from **CORSWebClient** to **CORSWebService**.
+
+![On-premises same-origin request](./media/application-proxy-understand-cors-issues/image1.png)
+
+The CORSWebClient app works when you host it on-premises, but either fails to load or errors out when published through Azure AD Application Proxy. If you published the CORSWebClient and CORSWebService apps separately as different apps through Application Proxy, the two apps are hosted at different domains. An AJAX request from CORSWebClient to CORSWebService is a cross-origin request, and it fails.
+
+![Application Proxy CORS request](./media/application-proxy-understand-cors-issues/image2.png)
+
+## Solutions for Application Proxy CORS issues
+
+You can resolve the preceding CORS issue in one of several ways.
+
+### Option 1: Set up a custom domain
+
+Use an Azure AD Application Proxy [custom domain](./application-proxy-configure-custom-domain.md) to publish from the same origin, without having to make any changes to app origins, code, or headers.
+
+### Option 2: Publish the parent directory
+
+Publish the parent directory of both apps. This solution works especially well if you have only two apps on the web server. Instead of publishing each app separately, you can publish the common parent directory, which results in the same origin.
+
+The following examples show the Azure AD Application Proxy page in the Azure portal for the CORSWebClient app. When the **Internal URL** is set to *contoso.com/CORSWebClient*, the app can't make successful requests to the *contoso.com/CORSWebService* directory, because they're cross-origin.
+
+![Publish app individually](./media/application-proxy-understand-cors-issues/image4.png)
+
+Instead, set the **Internal URL** to publish the parent directory, which includes both the *CORSWebClient* and *CORSWebService* directories:
+
+![Publish parent directory](./media/application-proxy-understand-cors-issues/image5.png)
+
+The resulting app URLs share the same origin, which resolves the CORS issue:
+
+- https:\//corswebclient-contoso.msappproxy.net/CORSWebService
+- https:\//corswebclient-contoso.msappproxy.net/CORSWebClient
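As a quick check, the two published URLs compare equal on scheme, host, and port; only the path differs, so the browser treats them as the same origin:

```python
from urllib.parse import urlsplit

a = urlsplit("https://corswebclient-contoso.msappproxy.net/CORSWebService")
b = urlsplit("https://corswebclient-contoso.msappproxy.net/CORSWebClient")
# Same scheme, host, and (default) port -> same origin; only the path differs.
print((a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port))  # True
print(a.path, b.path)
```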
+
+### Option 3: Update HTTP headers
+
+Add a custom `Access-Control-Allow-Origin` HTTP response header on the web service that matches the requesting origin. For websites running in Internet Information Services (IIS), use IIS Manager to add the header:
+
+![Add custom response header in IIS Manager](./media/application-proxy-understand-cors-issues/image6.png)
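The same header can also be set declaratively in the site's *web.config*. This is a sketch, assuming the client is published at the msappproxy.net URL used in this example:

```xml
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Allow the proxied client origin to call this web service -->
        <add name="Access-Control-Allow-Origin"
             value="https://corswebclient-contoso.msappproxy.net" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```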
+
+This modification doesn't require any code changes. You can verify it in the Fiddler traces:
+
+**After the header addition**\
+HTTP/1.1 200 OK\
+Cache-Control: no-cache\
+Pragma: no-cache\
+Content-Type: text/plain; charset=utf-8\
+Expires: -1\
+Vary: Accept-Encoding\
+Server: Microsoft-IIS/8.5 Microsoft-HTTPAPI/2.0\
+**Access-Control-Allow-Origin: https\://corswebclient-contoso.msappproxy.net**\
+X-AspNet-Version: 4.0.30319\
+X-Powered-By: ASP.NET\
+Content-Length: 17
+
+### Option 4: Modify the app
+
+You can change your app to support CORS by adding the `Access-Control-Allow-Origin` header with the appropriate value. How you add the header depends on the app's code language. Changing the code is the least recommended option, because it requires the most effort.
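To illustrate the idea only (the sample apps in this article are ASP.NET, not Python), here is a minimal WSGI sketch that echoes back an allowed origin in the response header; the origin value is the hypothetical one from this article's example:

```python
ALLOWED_ORIGINS = {"https://corswebclient-contoso.msappproxy.net"}

def app(environ, start_response):
    """Minimal WSGI app that adds a CORS header for trusted origins only."""
    headers = [("Content-Type", "text/plain; charset=utf-8")]
    origin = environ.get("HTTP_ORIGIN", "")
    if origin in ALLOWED_ORIGINS:
        # Echo back only origins we trust; avoid a blanket "*" for credentialed calls.
        headers.append(("Access-Control-Allow-Origin", origin))
    start_response("200 OK", headers)
    return [b"Hello CORSWebClient!"]
```

Untrusted origins get a normal response without the header, so the browser blocks the cross-origin read.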
+
+### Option 5: Extend the lifetime of the access token
+
+Some CORS issues can't be resolved, such as when your app redirects to *login.microsoftonline.com* to authenticate, and the access token expires. The CORS call then fails. A workaround for this scenario is to extend the lifetime of the access token, to prevent it from expiring during a user's session. For more information about how to do this, see [Configurable token lifetimes in Azure AD](../develop/active-directory-configurable-token-lifetimes.md).
+
+## See also
+- [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md)
+- [Plan an Azure AD Application Proxy deployment](application-proxy-deployment-plan.md)
+- [Remote access to on-premises applications through Azure Active Directory Application Proxy](application-proxy.md)
active-directory Application Sign In Problem On Premises Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-sign-in-problem-on-premises-application-proxy.md
The following documents can help you to resolve some of the most common issues i
## I'm having a problem setting up back-end authentication to my application The following documents can help you to resolve some of the most common issues in this category. * [I don't know how to configure Kerberos Constrained Delegation](application-proxy-back-end-kerberos-constrained-delegation-how-to.md)
- * [I don't know how to configure my application with PingAccess](../manage-apps/application-proxy-ping-access-publishing-guide.md)
+ * [I don't know how to configure my application with PingAccess](application-proxy-ping-access-publishing-guide.md)
## I'm having a problem when signing in to my application The following documents can help you to resolve some of the most common issues in this category.
active-directory Powershell Assign Group To App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-assign-group-to-app.md
+
+ Title: PowerShell sample - Assign group to an Azure Active Directory Application Proxy app
+description: PowerShell example that assigns a group to an Azure Active Directory (Azure AD) Application Proxy application.
+++++++ Last updated : 04/29/2021++++
+# Assign a group to a specific Azure AD Application Proxy application
+
+This PowerShell script example allows you to assign a specific group to an Azure Active Directory (Azure AD) Application Proxy application.
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/assign-group-to-app.ps1 "Assign a group to a specific Azure AD Application Proxy application")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+| [New-AzureADGroupAppRoleAssignment](/powershell/module/AzureAD/New-azureadgroupapproleassignment) | Assigns a group to an application role. |
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Assign User To App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-assign-user-to-app.md
+
+ Title: PowerShell sample - Assign user to an Azure Active Directory Application Proxy app
+description: PowerShell example that assigns a user to an Azure Active Directory (Azure AD) Application Proxy application.
+++++++ Last updated : 04/29/2021++++
+# Assign a user to a specific Azure Active Directory Application Proxy application
+
+This PowerShell script example allows you to assign a user to a specific Azure AD Application Proxy application.
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/assign-user-to-app.ps1 "Assign a user to an application")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+| [New-AzureADUserAppRoleAssignment](/powershell/module/AzureAD/new-azureaduserapproleassignment) | Assigns a user to an application role. |
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Display Users Group Of App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-display-users-group-of-app.md
+
+ Title: PowerShell sample - List users & groups for an Azure Active Directory Application Proxy app
+description: PowerShell example that lists all the users and groups assigned to a specific Azure Active Directory (Azure AD) Application Proxy application.
+++++++ Last updated : 04/29/2021++++
+# Display users and groups assigned to an Application Proxy application
+
+This PowerShell script example lists the users and groups assigned to a specific Azure Active Directory (Azure AD) Application Proxy application.
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/display-users-group-of-an-app.ps1 "Display users and groups assigned to an Application Proxy application")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+| [Get-AzureADUser](/powershell/module/AzureAD/get-azureaduser)| Gets a user. |
+| [Get-AzureADGroup](/powershell/module/AzureAD/get-azureadgroup)| Gets a group. |
+| [Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
+| [Get-AzureADUserAppRoleAssignment](/powershell/module/AzureAD/get-azureaduserapproleassignment) | Get a user application role assignment. |
+| [Get-AzureADGroupAppRoleAssignment](/powershell/module/AzureAD/get-azureadgroupapproleassignment) | Get a group application role assignment. |
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Get All App Proxy Apps Basic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-basic.md
+
+ Title: PowerShell sample - List basic info for Application Proxy apps
+description: PowerShell example that lists Azure Active Directory (Azure AD) Application Proxy applications along with the application ID (AppId), name (DisplayName), and object ID (ObjId).
+++++++ Last updated : 04/29/2021++++
+# Get all Application Proxy apps and list basic information
+
+This PowerShell script example lists information about all Azure Active Directory (Azure AD) Application Proxy applications, including the application ID (AppId), name (DisplayName), and object ID (ObjId).
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/get-all-appproxy-apps-basic.ps1 "Get all Application Proxy apps")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+|[Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Get All App Proxy Apps By Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-by-connector-group.md
+
+ Title: List Azure Active Directory Application Proxy connector groups for apps
+description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy Connector groups with the assigned applications.
+++++++ Last updated : 04/29/2021++++
+# Get all Application Proxy apps and list by connector group
+
+This PowerShell script example lists information about all Azure Active Directory (Azure AD) Application Proxy Connector groups with the assigned applications.
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/get-all-appproxy-apps-by-connectorgroup.ps1 "Get all Application Proxy Connector groups with the assigned applications")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+|[Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
+|[Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Gets an Azure AD application. |
+|[Get-AzureADApplicationProxyApplication](/powershell/module/azuread/get-azureadapplicationproxyapplication) | Retrieves an application configured for Application Proxy in Azure AD. |
+| [Get-AzureADApplicationProxyConnectorGroup](/powershell/module/azuread/get-azureadapplicationproxyconnectorgroup) | Retrieves a list of all connector groups, or if specified, details of the specified connector group. |
++
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Get All App Proxy Apps Extended https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-extended.md
+
+ Title: PowerShell sample - List extended info for Azure Active Directory Application Proxy apps
+description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications along with the application ID (AppId), name (DisplayName), external URL (ExternalUrl), internal URL (InternalUrl), and authentication type (ExternalAuthenticationType).
+++++++ Last updated : 04/29/2021++++
+# Get all Application Proxy apps and list extended information
+
+This PowerShell script example lists information about all Azure Active Directory (Azure AD) Application Proxy applications, including the application ID (AppId), name (DisplayName), external URL (ExternalUrl), internal URL (InternalUrl), authentication type (ExternalAuthenticationType), SSO mode, and other settings.
+
+Changing the value of the `$ssoMode` variable filters the output by SSO mode. Further details are documented in the script.
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/get-all-appproxy-apps-extended.ps1 "Get all Application Proxy apps")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+|[Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
+|[Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Gets an Azure AD application. |
+|[Get-AzureADApplicationProxyApplication](/powershell/module/azuread/get-azureadapplicationproxyapplication) | Retrieves an application configured for Application Proxy in Azure AD. |
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Get All App Proxy Apps With Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-with-policy.md
+
+ Title: PowerShell sample - List all Azure Active Directory Application Proxy apps with a policy
+description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications in your directory that have a token lifetime policy.
+++++++ Last updated : 04/29/2021++++
+# Get all Application Proxy apps with a token lifetime policy
+
+This PowerShell script example lists all the Azure Active Directory (Azure AD) Application Proxy applications in your directory that have a token lifetime policy and lists details about the policy.
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/get-all-appproxy-apps-with-policy.ps1 "Get all Application Proxy apps with a token lifetime policy")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+|[Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
+|[Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Gets an Azure AD application. |
+|[Get-AzureADPolicy](/powershell/module/azuread/get-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) | Gets a policy in Azure AD. |
+|[Get-AzureADServicePrincipalPolicy](/powershell/module/azuread/get-azureadserviceprincipalpolicy?view=azureadps-2.0-preview&preserve-view=true) | Gets the policy of a service principal in Azure AD. |
++
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Get All Connectors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-get-all-connectors.md
+
+ Title: PowerShell sample - List all Azure Active Directory Application Proxy connector groups
+description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy connector groups and connectors in your directory.
+++++++ Last updated : 04/29/2021++++
+# Get all Application Proxy connector groups and connectors in the directory
+
+This PowerShell script example lists all Azure Active Directory (Azure AD) Application Proxy connector groups and connectors in your directory.
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/get-all-connectors.ps1 "Get all connector groups and connectors in the directory")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+| [Get-AzureADApplicationProxyConnectorGroup](/powershell/module/azuread/get-azureadapplicationproxyconnectorgroup) | Retrieves a list of all connector groups, or if specified, details of the specified connector group. |
+| [Get-AzureADApplicationProxyConnectorGroupMembers](/powershell/module/azuread/get-azureadapplicationproxyconnectorgroupmembers) | Gets all Application Proxy connectors associated with each connector group.|
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Get All Custom Domain No Cert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domain-no-cert.md
+
+ Title: PowerShell sample - Azure Active Directory Application Proxy apps with no certificate
+description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using custom domains but do not have a valid TLS/SSL certificate uploaded.
+++++++ Last updated : 04/29/2021++++
+# Get all Application Proxy apps published with no certificate uploaded
+
+This PowerShell script example lists all Azure Active Directory (Azure AD) Application Proxy apps that are using custom domains but do not have a valid TLS/SSL certificate uploaded.
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/get-all-custom-domain-no-cert.ps1 "Get all Azure AD Proxy application apps published with no certificate uploaded")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+|[Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
+|[Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Gets an Azure AD application. |
+|[Get-AzureADApplicationProxyApplication](/powershell/module/azuread/get-azureadapplicationproxyapplication) | Retrieves an application configured for Application Proxy in Azure AD. |
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Get All Custom Domains And Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domains-and-certs.md
+
+ Title: PowerShell sample - Azure Active Directory Application Proxy apps using custom domains
+description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using custom domains and certificate information.
+++++++ Last updated : 04/29/2021++++
+# Get all Application Proxy apps using custom domains and certificate information
+
+This PowerShell script example lists all Azure Active Directory (Azure AD) Application Proxy applications that are using custom domains and lists the certificate information associated with the custom domains.
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/get-all-custom-domains-and-certs.ps1 "Get all Application Proxy apps using custom domains and certificate information")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+|[Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
+|[Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Gets an Azure AD application. |
+|[Get-AzureADApplicationProxyApplication](/powershell/module/azuread/get-azureadapplicationproxyapplication) | Retrieves an application configured for Application Proxy in Azure AD. |
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Get All Default Domain Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-get-all-default-domain-apps.md
+
+ Title: PowerShell sample - Azure Active Directory Application Proxy apps using default domain
+description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using default domains (.msappproxy.net).
+++++++ Last updated : 04/29/2021++++
+# Get all Application Proxy apps using default domains (.msappproxy.net)
+
+This PowerShell script example lists all the Azure Active Directory (Azure AD) Application Proxy applications that are using default domains (.msappproxy.net).
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/get-all-default-domain-apps.ps1 "Get all Application Proxy apps using default domains (.msappproxy.net)")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+|[Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
+|[Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Gets an Azure AD application. |
+|[Get-AzureADApplicationProxyApplication](/powershell/module/azuread/get-azureadapplicationproxyapplication) | Retrieves an application configured for Application Proxy in Azure AD. |
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Get All Wildcard Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-get-all-wildcard-apps.md
+
+ Title: PowerShell sample - List Azure Active Directory Application Proxy apps using wildcards
+description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using wildcards.
+++++++ Last updated : 04/29/2021++++
+# Get all Application Proxy apps using wildcard publishing
+
+This PowerShell script example lists all Azure Active Directory (Azure AD) Application Proxy applications that are using wildcard publishing.
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/get-all-wildcard-apps.ps1 "Get all Application Proxy apps using wildcards")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+|[Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
+|[Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Gets an Azure AD application. |
+|[Get-AzureADApplicationProxyApplication](/powershell/module/azuread/get-azureadapplicationproxyapplication) | Retrieves an application configured for Application Proxy in Azure AD. |
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Get Custom Domain Identical Cert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-identical-cert.md
+
+ Title: PowerShell sample - Azure Active Directory Application Proxy apps with identical certs
+description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are published with the identical certificate.
+++++++ Last updated : 04/29/2021++++
+# Get all Azure Active Directory Application Proxy apps that are published with the identical certificate
+
+This PowerShell script example lists all Azure Active Directory (Azure AD) Application Proxy applications that are published with the identical certificate.
++++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/get-custom-domain-identical-cert.ps1 "Get all Azure AD Proxy application apps published with the identical certificate")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+|[Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
+|[Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Gets an Azure AD application. |
+|[Get-AzureADApplicationProxyApplication](/powershell/module/azuread/get-azureadapplicationproxyapplication) | Retrieves an application configured for Application Proxy in Azure AD. |
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Get Custom Domain Replace Cert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-replace-cert.md
+
+ Title: PowerShell sample - Replace certificate in Azure Active Directory Application Proxy apps
+description: PowerShell example that bulk replaces a certificate across Azure Active Directory (Azure AD) Application Proxy applications.
+++++++ Last updated : 04/29/2021++++
+# Get all Azure Active Directory Application Proxy applications published with the identical certificate and replace it
+
+This PowerShell script example allows you to replace the certificate in bulk for all the Azure Active Directory (Azure AD) Application Proxy applications that are published with the identical certificate.
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/get-custom-domain-replace-cert.ps1 "Get all Application Proxy applications published with the identical certificate and replace it")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+|[Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
+|[Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Gets an Azure AD application. |
+|[Get-AzureADApplicationProxyApplication](/powershell/module/azuread/get-azureadapplicationproxyapplication) | Retrieves an application configured for Application Proxy in Azure AD. |
+|[Set-AzureADApplicationProxyApplicationCustomDomainCertificate](/powershell/module/azuread/set-azureadapplicationproxyapplicationcustomdomaincertificate) | Assigns a certificate to an application configured for Application Proxy in Azure AD. This command uploads the certificate and allows the application to use Custom Domains. |
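+
+The sample script itself is pulled in by the include above and isn't shown inline here. A minimal sketch of the replacement flow, assuming a `Connect-AzureAD` session and placeholder thumbprint and PFX values, might look like:
+
+```powershell
+# Sketch only: replace the custom-domain certificate on every Application Proxy
+# app currently using a given thumbprint. The thumbprint and path are placeholders.
+$oldThumbprint = "<old-certificate-thumbprint>"
+$pfxPath       = "C:\certs\new-cert.pfx"
+$pfxPassword   = Read-Host -AsSecureString -Prompt "PFX password"
+
+Get-AzureADServicePrincipal -All $true |
+    Where-Object { $_.Tags -contains "WindowsAzureActiveDirectoryOnPremApp" } |
+    ForEach-Object {
+        $app   = Get-AzureADApplication -Filter "AppId eq '$($_.AppId)'"
+        $proxy = Get-AzureADApplicationProxyApplication -ObjectId $app.ObjectId
+        if ($proxy.VerifiedCustomDomainCertificatesMetadata.Thumbprint -eq $oldThumbprint) {
+            Set-AzureADApplicationProxyApplicationCustomDomainCertificate `
+                -ObjectId $app.ObjectId -PfxFilePath $pfxPath -Password $pfxPassword
+        }
+    }
+```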
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Powershell Move All Apps To Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/scripts/powershell-move-all-apps-to-connector-group.md
+
+ Title: PowerShell sample - Move Azure Active Directory Application Proxy apps to another group
+description: Azure Active Directory (Azure AD) Application Proxy PowerShell example used to move all applications currently assigned to a connector group to a different connector group.
+ Last updated : 04/29/2021
+# Move all Azure Active Directory Application Proxy apps assigned to a connector group to another connector group
+
+This PowerShell script example moves all Azure Active Directory (Azure AD) Application Proxy applications currently assigned to a connector group to a different connector group.
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-proxy/move-all-apps-to-a-connector-group.ps1 "Move all apps assigned to a connector group to another connector group")]
+
+## Script explanation
+
+| Command | Notes |
+|||
+|[Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
+|[Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Gets an Azure AD application. |
+| [Get-AzureADApplicationProxyConnectorGroup](/powershell/module/azuread/get-azureadapplicationproxyconnectorgroup) | Retrieves a list of all connector groups, or if specified, details of the specified connector group. |
+| [Set-AzureADApplicationProxyApplicationConnectorGroup](/powershell/module/azuread/set-azureadapplicationproxyapplicationconnectorgroup) | Assigns the given connector group to a specified application.|
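+
+The sample script itself is pulled in by the include above and isn't shown inline here. A minimal sketch of the move, assuming a `Connect-AzureAD` session and placeholder connector group names, might look like:
+
+```powershell
+# Sketch only: move all Application Proxy apps from one connector group to
+# another. The group names are placeholders.
+$source = Get-AzureADApplicationProxyConnectorGroup | Where-Object { $_.Name -eq "Old group" }
+$target = Get-AzureADApplicationProxyConnectorGroup | Where-Object { $_.Name -eq "New group" }
+
+Get-AzureADServicePrincipal -All $true |
+    Where-Object { $_.Tags -contains "WindowsAzureActiveDirectoryOnPremApp" } |
+    ForEach-Object {
+        $app     = Get-AzureADApplication -Filter "AppId eq '$($_.AppId)'"
+        $current = Get-AzureADApplicationProxyApplicationConnectorGroup -ObjectId $app.ObjectId
+        if ($current.Id -eq $source.Id) {
+            Set-AzureADApplicationProxyApplicationConnectorGroup `
+                -ObjectId $app.ObjectId -ConnectorGroupId $target.Id
+        }
+    }
+```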
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Proxy, see [Azure AD PowerShell examples for Azure AD Application Proxy](../application-proxy-powershell-samples.md).
active-directory Active Directory Passwords Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/active-directory-passwords-faq.md
- Title: Self-service password reset FAQ - Azure Active Directory
-description: Frequently asked questions about Azure AD self-service password reset
- Previously updated : 07/20/2020
-# Self-service password reset frequently asked questions
-
-The following are some frequently asked questions (FAQ) for all things related to self-service password reset.
-
-If you have a general question about Azure Active Directory (Azure AD) and self-service password reset (SSPR) that's not answered here, you can ask the community for assistance on the [Microsoft Q&A question page for Azure Active Directory](/answers/topics/azure-active-directory.html). Members of the community include engineers, product managers, MVPs, and fellow IT professionals.
-
-This FAQ is split into the following sections:
-
-* [Questions about password reset registration](#password-reset-registration)
-* [Questions about password reset](#password-reset)
-* [Questions about password change](#password-change)
-* [Questions about password management reports](#password-management-reports)
-* [Questions about password writeback](#password-writeback)
-
-## Password reset registration
-
-* **Q: Can my users register their own password reset data?**
-
- > **A:** Yes. As long as password reset is enabled and they are licensed, users can go to the password reset registration portal (https://aka.ms/ssprsetup) to register their authentication information. Users can also register through the Access Panel (https://myapps.microsoft.com). To register through the Access Panel, they need to select their profile picture, select **Profile**, and then select the **Register for password reset** option.
- >
- > If you enable [combined registration](concept-registration-mfa-sspr-combined.md), users can register for both SSPR and Azure AD Multi-Factor Authentication at the same time.
-* **Q: If I enable password reset for a group and then decide to enable it for everyone, are my users required to re-register?**
-
- > **A:** No. Users who have populated authentication data are not required to re-register.
- >
- >
-* **Q: Can I define password reset data on behalf of my users?**
-
- > **A:** Yes, you can do so with Azure AD Connect, PowerShell, the [Azure portal](https://portal.azure.com), or the [Microsoft 365 admin center](https://admin.microsoft.com). For more information, see [Data used by Azure AD self-service password reset](howto-sspr-authenticationdata.md).
- >
- >
-* **Q: Can I synchronize data for security questions from on-premises?**
-
- > **A:** No, this is not possible today.
- >
- >
-* **Q: Can my users register data in such a way that other users can't see this data?**
-
- > **A:** Yes. When users register data by using the password reset registration portal, the data is saved into private authentication fields that are visible only to global administrators and the user.
- >
- >
-* **Q: Do my users have to be registered before they can use password reset?**
-
- > **A:** No. If you define enough authentication information on their behalf, users don't have to register. Password reset works as long as you have properly formatted the data stored in the appropriate fields in the directory.
- >
- >
-* **Q: Can I synchronize or set the authentication phone, authentication email, or alternate authentication phone fields on behalf of my users?**
-
- > **A:** The fields that a Global Administrator can set are defined in the article [SSPR data requirements](howto-sspr-authenticationdata.md).
- >
- >
-* **Q: How does the registration portal determine which options to show my users?**
-
- > **A:** The password reset registration portal shows only the options that you have enabled for your users. These options are found under the **User Password Reset Policy** section of your directory's **Configure** tab. For example, if you don't enable security questions, then users are not able to register for that option.
- >
- >
-* **Q: When is a user considered registered?**
-
- > **A:** A user is considered registered for SSPR when they have registered at least the **Number of methods required to reset** a password that you have set in the [Azure portal](https://portal.azure.com).
- >
- >
-
-## Password reset
-
-* **Q: Do you prevent users from multiple attempts to reset a password in a short period of time?**
-
- > **A:** Yes, there are security features built into password reset to protect it from misuse.
- >
- > Users can try only five password reset attempts within a 24-hour period before they're locked out for 24 hours.
- >
- > Users can try to validate a phone number, send an SMS, or validate security questions and answers only five times within an hour before they're locked out for 24 hours.
- >
- > Users can send an email a maximum of 10 times within a 10-minute period before they're locked out for 24 hours.
- >
- > The counters are reset once a user resets their password.
- >
- >
-* **Q: How long should I wait to receive an email, SMS, or phone call from password reset?**
-
- > **A:** Emails, SMS messages, and phone calls should arrive in under a minute. The normal case is 5 to 20 seconds.
- > If you don't receive the notification in this time frame:
- > * Check your junk folder.
- > * Check that the number or email being contacted is the one you expect.
- > * Check that the authentication data in the directory is correctly formatted, for example, +1 4255551234 or *user\@contoso.com*.
-* **Q: What languages are supported by password reset?**
-
- > **A:** The password reset UI, SMS messages, and voice calls are localized in the same languages that are supported in Microsoft 365.
- >
- >
-* **Q: What parts of the password reset experience get branded when I set the organizational branding items in my directory's configure tab?**
-
- > **A:** The password reset portal shows your organization's logo and allows you to configure the "Contact your administrator" link to point to a custom email address or URL. Any email that's sent by password reset includes your organization's logo, colors, and name in the body of the email, as customized in your branding settings.
- >
- >
-* **Q: How can I educate my users about where to go to reset their passwords?**
-
- > **A:** Try some of the suggestions in our [SSPR deployment](howto-sspr-deployment.md#plan-communications) article.
- >
- >
-* **Q: Can I use this page from a mobile device?**
-
- > **A:** Yes, this page works on mobile devices.
- >
- >
-* **Q: Do you support unlocking local Active Directory accounts when users reset their passwords?**
-
- > **A:** Yes. When a user resets their password, if password writeback has been deployed through Azure AD Connect, that user's account is automatically unlocked when they reset their password.
- >
- >
-* **Q: How can I integrate password reset directly into my user's desktop sign-in experience?**
-
- > **A:** If you're an Azure AD Premium customer, you can install Microsoft Identity Manager at no additional cost and deploy the on-premises password reset solution.
- >
- >
-* **Q: Can I set different security questions for different locales?**
-
- > **A:** No, this is not possible today.
- >
- >
-* **Q: How many questions can I configure for the security questions authentication option?**
-
- > **A:** You can configure up to 20 custom security questions in the [Azure portal](https://portal.azure.com).
- >
- >
-* **Q: How long can security questions be?**
-
- > **A:** Security questions can be 3 to 200 characters long.
- >
- >
-* **Q: How long can the answers to security questions be?**
-
- > **A:** Answers can be 3 to 40 characters long.
- >
- >
-* **Q: Are duplicate answers to security questions rejected?**
-
- > **A:** Yes, we reject duplicate answers to security questions.
- >
- >
-* **Q: Can a user register the same security question more than once?**
-
- > **A:** No. After a user registers a particular question, they can't register for that question a second time.
- >
- >
-* **Q: Is it possible to set a minimum limit of security questions for registration and reset?**
-
- > **A:** Yes, one limit can be set for registration and another for reset. Three to five security questions can be required for registration, and three to five questions can be required for reset.
- >
- >
-* **Q: I configured my policy to require users to use security questions for reset, but the Azure administrators seem to be configured differently.**
-
- > **A:** This is the expected behavior. Microsoft enforces a strong default two-gate password reset policy for any Azure administrator role. This prevents administrators from using security questions. You can find more information about this policy in the [Password policies and restrictions in Azure Active Directory](concept-sspr-policy.md) article.
- >
- >
-* **Q: If a user has registered more than the maximum number of questions required to reset, how are the security questions selected during reset?**
-
- > **A:** *N* number of security questions are selected at random out of the total number of questions a user has registered for, where *N* is the amount that is set for the **Number of questions required to reset** option. For example, if a user has registered five security questions, but only three are required to reset a password, three of the five questions are randomly selected and are presented at reset. To prevent question hammering, if the user answers the questions incorrectly, the selection process starts over.
- >
- >
-* **Q: How long are the email and SMS one-time passcodes valid?**
-
- > **A:** The session lifetime for password reset is 15 minutes. From the start of the password reset operation, the user has 15 minutes to reset their password. The email and SMS one-time passcode are valid for 5 minutes during the password reset session.
- >
- >
-* **Q: Can I block users from resetting their password?**
-
- > **A:** Yes, if you use a group to enable SSPR, you can remove an individual user from the group that allows users to reset their password. If the user is a Global Administrator, they retain the ability to reset their password, and this ability can't be disabled.
- >
- >
-
-## Password change
-
-* **Q: Where should my users go to change their passwords?**
-
- > **A:** Users can change their passwords anywhere they see their profile picture or icon, like in the upper-right corner of their [Office 365](https://portal.office.com) portal or [Access Panel](https://myapps.microsoft.com) experiences. Users can change their passwords from the [Access Panel Profile page](https://account.activedirectory.windowsazure.com/r#/profile). Users can also be asked to change their passwords automatically at the Azure AD sign-in page if their passwords have expired. Finally, users can browse to the [Azure AD password change portal](https://account.activedirectory.windowsazure.com/ChangePassword.aspx) directly if they want to change their passwords.
- >
- >
-* **Q: Can my users be notified in the Office portal when their on-premises password expires?**
-
- > **A:** Yes, this is possible today if you use Active Directory Federation Services (AD FS). If you use AD FS, follow the instructions in the [Sending password policy claims with AD FS](/windows-server/identity/ad-fs/operations/configure-ad-fs-to-send-password-expiry-claims?f=255&MSPPError=-2147217396) article. If you use password hash synchronization, this is not possible today. We don't sync password policies from on-premises directories, so it's not possible for us to post expiration notifications to cloud experiences. In either case, it's also possible to [notify users whose passwords are about to expire through PowerShell](https://social.technet.microsoft.com/wiki/contents/articles/23313.notify-active-directory-users-about-password-expiry-using-powershell.aspx).
- >
- >
-* **Q: Can I block users from changing their password?**
-
- > **A:** For cloud-only users, password changes can't be blocked. For on-premises users, you can select the **User cannot change password** option for an account; users with that option set can't change their password.
- >
- >
-
-## Password management reports
-
-* **Q: How long does it take for data to show up on the password management reports?**
-
- > **A:** Data should appear on the password management reports in 5 to 10 minutes. In some instances, it might take up to an hour to appear.
- >
- >
-* **Q: How can I filter the password management reports?**
-
- > **A:** To filter the password management reports, select the small magnifying glass to the extreme right of the column labels, near the top of the report. If you want to do richer filtering, you can download the report to Excel and create a pivot table.
- >
- >
-* **Q: What is the maximum number of events that are stored in the password management reports?**
-
- > **A:** Up to 75,000 password reset or password reset registration events are stored in the password management reports, spanning back as far as 30 days. We are working to expand this number to include more events.
- >
- >
-* **Q: How far back do the password management reports go?**
-
- > **A:** The password management reports show operations that occurred within the last 30 days. For now, if you need to archive this data, you can download the reports periodically and save them in a separate location.
- >
- >
-* **Q: Is there a maximum number of rows that can appear on the password management reports?**
-
- > **A:** Yes. A maximum of 75,000 rows can appear on either of the password management reports, whether they are shown in the UI or are downloaded.
- >
- >
-* **Q: Is there an API to access the password reset or registration reporting data?**
-
- > **A:** Yes. To learn how you can access the password reset reporting data, see the [Azure Log Analytics REST API Reference](/rest/api/loganalytics/).
- >
- >
-
-## Password writeback
-
-* **Q: How does password writeback work behind the scenes?**
-
- > **A:** See the article [How password writeback works](./tutorial-enable-sspr-writeback.md) for an explanation of what happens when you enable password writeback and how data flows through the system back into your on-premises environment.
- >
- >
-* **Q: How long does password writeback take to work? Is there a synchronization delay like there is with password hash sync?**
-
- > **A:** Password writeback is instant. It is a synchronous pipeline that works fundamentally differently than password hash synchronization. Password writeback allows users to get real-time feedback about the success of their password reset or change operation. The average time for a successful writeback of a password is under 500 ms.
- >
- >
-* **Q: If my on-premises account is disabled, how is my cloud account and access affected?**
-
- > **A:** If your on-premises ID is disabled, your cloud ID and access will also be disabled at the next sync interval through Azure AD Connect. By default, this sync is every 30 minutes.
- >
- >
-* **Q: If my on-premises account is constrained by an on-premises Active Directory password policy, does SSPR obey this policy when I change my password?**
-
- > **A:** Yes, SSPR relies on and abides by the on-premises Active Directory password policy. This policy includes the typical Active Directory domain password policy, as well as any defined, fine-grained password policies that are targeted to a user.
- >
- >
-* **Q: What types of accounts does password writeback work for?**
-
- > **A:** Password writeback works for user accounts that are synchronized from on-premises Active Directory to Azure AD, including federated, password hash synchronized, and Pass-Through Authentication Users.
- >
- >
-* **Q: Does password writeback enforce my domain's password policies?**
-
- > **A:** Yes. Password writeback enforces password age, history, complexity, filters, and any other restriction you might put in place on passwords in your local domain.
- >
- >
-* **Q: Is password writeback secure? How can I be sure I won't get hacked?**
-
- > **A:** Yes, password writeback is secure. To read more about the multiple layers of security implemented by the password writeback service, check out the [Password writeback security](concept-sspr-writeback.md#password-writeback-security) section in the [Password writeback overview](./tutorial-enable-sspr-writeback.md) article.
- >
- >
-
-## Next steps
-
-* [How do I complete a successful rollout of SSPR?](howto-sspr-deployment.md)
-* [Reset or change your password](../user-help/active-directory-passwords-update-your-own-password.md)
-* [Register for self-service password reset](../user-help/active-directory-passwords-reset-register.md)
-* [Do you have a licensing question?](concept-sspr-licensing.md)
-* [What data is used by SSPR and what data should you populate for your users?](howto-sspr-authenticationdata.md)
-* [What authentication methods are available to users?](concept-sspr-howitworks.md#authentication-methods)
-* [What are the policy options with SSPR?](concept-sspr-policy.md)
-* [What is password writeback and why do I care about it?](./tutorial-enable-sspr-writeback.md)
-* [How do I report on activity in SSPR?](howto-sspr-reporting.md)
-* [What are all of the options in SSPR and what do they mean?](concept-sspr-howitworks.md)
-* [I think something is broken. How do I troubleshoot SSPR?](./troubleshoot-sspr.md)
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
The following considerations apply:
- End users can register and manage these passwordless authentication methods in their account portal
- End users can sign in with these passwordless authentication methods:
  - Microsoft Authenticator App: Works in scenarios where Azure AD authentication is used, including across all browsers, during Windows 10 setup, and with integrated mobile apps on any operating system.
- - Security keys: Work on lock screen for Windows 10 and the web in supported browsers like Microsoft Edge (both legacy and new Edge).
+ - Security keys: Work in Windows 10 setup in OOBE with or without Windows Autopilot, on lock screen for Windows 10 and the web in supported browsers like Microsoft Edge (both legacy and new Edge).
## Choose a passwordless method
active-directory Howto Mfa Adfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-adfs.md
Previously updated : 07/11/2018 Last updated : 04/29/2021
If your organization is federated with Azure Active Directory, use Azure AD Multi-Factor Authentication or Active Directory Federation Services (AD FS) to secure resources that are accessed by Azure AD. Use the following procedures to secure Azure Active Directory resources with either Azure AD Multi-Factor Authentication or Active Directory Federation Services.
+>[!NOTE]
+>To secure your Azure AD resource, we recommend that you require MFA through a [Conditional Access policy](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa), set the domain setting SupportsMfa to $True, and [emit the multipleauthn claim](https://docs.microsoft.com/azure/active-directory/authentication/howto-mfa-adfs#secure-azure-ad-resources-using-ad-fs) when a user performs two-step verification successfully.
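+
+The SupportsMfa value mentioned above is a federation setting on the domain. A minimal sketch of setting and verifying it with the MSOnline PowerShell module, assuming a `Connect-MsolService` session and a placeholder domain name, could look like:
+
+```powershell
+# Sketch only: enable SupportsMfa on a federated domain so Azure AD accepts
+# the multipleauthn claim from AD FS. The domain name is a placeholder.
+Set-MsolDomainFederationSettings -DomainName "contoso.com" -SupportsMfa $true
+
+# Verify the setting took effect.
+(Get-MsolDomainFederationSettings -DomainName "contoso.com").SupportsMfa
+```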
+
## Secure Azure AD resources using AD FS

To secure your cloud resource, set up a claims rule so that Active Directory Federation Services emits the multipleauthn claim when a user performs two-step verification successfully. This claim is passed on to Azure AD. Follow this procedure to walk through the steps:
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-mfasettings.md
In the United States, if you haven't configured MFA Caller ID, voice calls from
* *+1 (877) 668 6536*

> [!NOTE]
-> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and to text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What SMS short codes are used for sending messages?](multi-factor-authentication-faq.md#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users)
+> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and to text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What SMS short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users-)
To configure your own caller ID number, complete the following steps:
active-directory Howto Mfaserver Adfs 2012 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfaserver-adfs-2012.md
To help with troubleshooting issues with the MFA Server AD FS Adapter use the st
## Related topics
-For troubleshooting help, see the [Azure Multi-Factor Authentication FAQs](multi-factor-authentication-faq.md)
+For troubleshooting help, see the [Azure Multi-Factor Authentication FAQs](multi-factor-authentication-faq.yml)
active-directory Howto Mfaserver Nps Rdg https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfaserver-nps-rdg.md
The Azure Multi-Factor Authentication Server is configured as a RADIUS proxy bet
- Integrate Azure MFA and [IIS web apps](howto-mfaserver-iis.md)

-- Get answers in the [Azure Multi-Factor Authentication FAQ](multi-factor-authentication-faq.md)
+- Get answers in the [Azure Multi-Factor Authentication FAQ](multi-factor-authentication-faq.yml)
active-directory Howto Password Ban Bad On Premises Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-ban-bad-on-premises-faq.md
- Title: On-premises Azure AD Password Protection FAQ
-description: Review frequently asked questions for Azure AD Password Protection in an on-premises Active Directory Domain Services environment
- Previously updated : 11/21/2019
-# Azure AD Password Protection on-premises frequently asked questions
-
-This section provides answers to many commonly asked questions about Azure AD Password Protection.
-
-## General questions
-
-**Q: What guidance should users be given on how to select a secure password?**
-
-Microsoft's current guidance on this topic can be found at the following link:
-
-[Microsoft Password Guidance](https://www.microsoft.com/research/publication/password-guidance)
-
-**Q: Is on-premises Azure AD Password Protection supported in non-public clouds?**
-
-On-premises Azure AD Password Protection is supported in the public cloud and the Arlington cloud. No date has been announced for availability in other clouds.
-
-The Azure AD portal does allow modification of the on-premises-specific "Password protection for Windows Server Active Directory" configuration even in non-supported clouds; such changes will be persisted but otherwise will never take effect. Registration of on-premises proxy agents or forests is unsupported in non-supported clouds, and any such registration attempts will always fail.
-
-**Q: How can I apply Azure AD Password Protection benefits to a subset of my on-premises users?**
-
-Not supported. Once deployed and enabled, Azure AD Password Protection doesn't discriminate: all users receive equal security benefits.
-
-**Q: What is the difference between a password change and a password set (or reset)?**
-
-A password change is when a user chooses a new password after proving they have knowledge of the old password. For example, a password change is what happens when a user logs into Windows and is then prompted to choose a new password.
-
-A password set (sometimes called a password reset) is when an administrator replaces the password on an account with a new password, for example by using the Active Directory Users and Computers management tool. This operation requires a high level of privilege (usually Domain Admin), and the person performing the operation usually does not have knowledge of the old password. Help-desk scenarios often perform password sets, for instance when assisting a user who has forgotten their password. You will also see password set events when a brand new user account is being created for the first time with a password.
-
-The password validation policy behaves the same regardless of whether a password change or set is being done. The Azure AD Password Protection DC Agent service does log different events to inform you whether a password change or set operation was done. See [Azure AD Password Protection monitoring and logging](./howto-password-ban-bad-on-premises-monitor.md).
-
-**Q: Does Azure AD Password Protection validate existing passwords after being installed?**
-
-No. Azure AD Password Protection can only enforce password policy on clear-text passwords during a password change or set operation. Once a password is accepted by Active Directory, only authentication-protocol-specific hashes of that password are persisted. The clear-text password is never persisted, therefore Azure AD Password Protection cannot validate existing passwords.
-
-After initial deployment of Azure AD Password Protection, all users and accounts will eventually start using an Azure AD Password Protection-validated password as their existing passwords expire normally over time. If desired, this process can be accelerated by a one-time manual expiration of user account passwords.
-
-Accounts configured with "password never expires" will never be forced to change their password unless manual expiration is done.
-
-**Q: Why are duplicated password rejection events logged when attempting to set a weak password using the Active Directory Users and Computers management snap-in?**
-
-The Active Directory Users and Computers management snap-in will first try to set the new password using the Kerberos protocol. Upon failure, the snap-in will make a second attempt to set the password using a legacy (SAM RPC) protocol (the specific protocols used are not important). If the new password is considered weak by Azure AD Password Protection, this snap-in behavior will result in two sets of password reset rejection events being logged.
-
-**Q: Why are Azure AD Password Protection password validation events being logged with an empty user name?**
-
-Active Directory supports the ability to test a password to see if it passes the domain's current password complexity requirements, for example using the [NetValidatePasswordPolicy](/windows/win32/api/lmaccess/nf-lmaccess-netvalidatepasswordpolicy) API. When a password is validated in this way, the testing also includes validation by password-filter-DLL-based products such as Azure AD Password Protection, but the user names passed to a given password filter DLL will be empty. In this scenario, Azure AD Password Protection still validates the password using the currently in-effect password policy and issues an event log message to capture the outcome; however, the event log message will have empty user name fields.
-
-**Q: Is it supported to install Azure AD Password Protection side by side with other password-filter-based products?**
-
-Yes. Support for multiple registered password filter dlls is a core Windows feature and not specific to Azure AD Password Protection. All registered password filter dlls must agree before a password is accepted.
-
-**Q: How can I deploy and configure Azure AD Password Protection in my Active Directory environment without using Azure?**
-
-Not supported. Azure AD Password Protection is an Azure feature that supports being extended into an on-premises Active Directory environment.
-
-**Q: How can I modify the contents of the policy at the Active Directory level?**
-
-Not supported. The policy can only be administered using the Azure AD portal. Also see previous question.
-
-**Q: Why is DFSR required for sysvol replication?**
-
-FRS (the predecessor technology to DFSR) has many known problems and is entirely unsupported in newer versions of Windows Server Active Directory. Zero testing of Azure AD Password Protection will be done on FRS-configured domains.
-
-For more information, please see the following articles:
-
-[The Case for Migrating sysvol replication to DFSR](/archive/blogs/askds/the-case-for-migrating-sysvol-to-dfsr)
-
-[The End is Nigh for FRS](https://blogs.technet.microsoft.com/filecab/2014/06/25/the-end-is-nigh-for-frs)
-
-If your domain is not already using DFSR, you MUST migrate it to use DFSR before installing Azure AD Password Protection. For more information, see the following link:
-
-[SYSVOL Replication Migration Guide: FRS to DFS Replication](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd640019(v=ws.10))
-
-> [!WARNING]
-> The Azure AD Password Protection DC Agent software will currently install on domain controllers in domains that are still using FRS for sysvol replication, but the software will NOT work properly in this environment. Additional negative side-effects include individual files failing to replicate, and sysvol restore procedures appearing to succeed but silently failing to replicate all files. You should migrate your domain to use DFSR as soon as possible, both for DFSR's inherent benefits and also to unblock the deployment of Azure AD Password Protection. Future versions of the software will be automatically disabled when running in a domain that is still using FRS.
-
-**Q: How much disk space does the feature require on the domain sysvol share?**
-
-The precise space usage varies since it depends on factors such as the number and length of the banned tokens in the Microsoft global banned list and the per-tenant custom list, plus encryption overhead. The contents of these lists are likely to grow in the future. With that in mind, a reasonable expectation is that the feature will need at least five (5) megabytes of space on the domain sysvol share.
-
-**Q: Why is a reboot required to install or upgrade the DC agent software?**
-
-This requirement is caused by core Windows behavior.
-
-**Q: Is there any way to configure a DC agent to use a specific proxy server?**
-
-No. Since the proxy server is stateless, it's not important which specific proxy server is used.
-
-**Q: Is it okay to deploy the Azure AD Password Protection Proxy service side by side with other services such as Azure AD Connect?**
-
-Yes. The Azure AD Password Protection Proxy service and Azure AD Connect should never conflict directly with each other.
-
-Unfortunately, an incompatibility has been found between the version of the Microsoft Azure AD Connect Agent Updater service that is installed by the Azure AD Password Protection Proxy software and the version of the service that is installed by the [Azure Active Directory Application Proxy](../manage-apps/application-proxy.md) software. This incompatibility may result in the Agent Updater service being unable to contact Azure for software updates. It is not recommended to install Azure AD Password Protection Proxy and Azure Active Directory Application Proxy on the same machine.
-
-**Q: In what order should the DC agents and proxies be installed and registered?**
-
-Any ordering of Proxy agent installation, DC agent installation, forest registration, and Proxy registration is supported.
-
-**Q: Should I be concerned about the performance hit on my domain controllers from deploying this feature?**
-
-The Azure AD Password Protection DC Agent service shouldn't significantly impact domain controller performance in an existing healthy Active Directory deployment.
-
-For most Active Directory deployments, password change operations are a small proportion of the overall workload on any given domain controller. As an example, imagine an Active Directory domain with 10,000 user accounts and a MaxPasswordAge policy set to 30 days. On average, this domain will see 10,000 / 30 ≈ 333 password change operations each day, which is a small number of operations for even a single domain controller. Now consider a potential worst case: suppose those ~333 password changes on a single DC all occurred within a single hour, for example when many employees come to work on a Monday morning. Even then, that's ~333 / 60 minutes ≈ six password changes per minute, which is still not a significant load.
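The back-of-envelope arithmetic above can be written out as a quick sketch; the function name and numbers below are illustrative only and not part of the product:

```python
# Rough estimate of password-change load, mirroring the example above:
# 10,000 users with a 30-day MaxPasswordAge average ~333 changes/day.

def password_change_load(user_count: int, max_password_age_days: int):
    """Return (average changes per day, worst-case changes per minute
    if one day's changes all hit a single DC within one hour)."""
    per_day = user_count / max_password_age_days
    worst_case_per_minute = per_day / 60
    return per_day, worst_case_per_minute

per_day, per_minute = password_change_load(10_000, 30)
print(f"~{per_day:.0f} changes/day; worst case ~{per_minute:.1f}/minute")
```

Even the worst case works out to roughly six password changes per minute on a single DC.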
-
-However, if your current domain controllers are already running at performance-limited levels (for example, maxed out with respect to CPU, disk space, or disk I/O), it is advisable to add domain controllers or expand available disk space before deploying this feature. Also see the earlier question about sysvol disk space usage.
-
-**Q: I want to test Azure AD Password Protection on just a few DCs in my domain. Is it possible to force user password changes to use those specific DCs?**
-
-No. The Windows client OS controls which domain controller is used when a user changes their password. The domain controller is selected based on factors such as Active Directory site and subnet assignments, environment-specific network configuration, etc. Azure AD Password Protection does not control these factors and cannot influence which domain controller is selected to change a user's password.
-
-One way to partially reach this goal would be to deploy Azure AD Password Protection on all of the domain controllers in a given Active Directory site. This approach will provide reasonable coverage for the Windows clients that are assigned to that site, and therefore also for the users that are logging into those clients and changing their passwords.
-
-**Q: If I install the Azure AD Password Protection DC Agent service on just the Primary Domain Controller (PDC), will all other domain controllers in the domain also be protected?**
-
-No. When a user's password is changed on a given non-PDC domain controller, the cleartext password is never sent to the PDC (this is a common misconception). Once a new password is accepted on a given DC, that DC uses that password to create the various authentication-protocol-specific hashes of that password and then persists those hashes in the directory. The cleartext password is not persisted. The updated hashes are then replicated to the PDC. User passwords may in some cases be changed directly on the PDC, again depending on various factors such as network topology and Active Directory site design. (See the previous question.)
-
-In summary, deployment of the Azure AD Password Protection DC Agent service on the PDC is required to reach 100% security coverage of the feature across the domain. Deploying the feature on the PDC only does not provide Azure AD Password Protection security benefits for any other DCs in the domain.
-
-**Q: Why is custom smart lockout not working even after the agents are installed in my on-premises Active Directory environment?**
-
-Custom smart lockout is only supported in Azure AD. Changes to the custom smart lockout settings in the Azure AD portal have no effect on the on-premises Active Directory environment, even with the agents installed.
-
-**Q: Is a System Center Operations Manager management pack available for Azure AD Password Protection?**
-
-No.
-
-**Q: Why is Azure AD still rejecting weak passwords even though I've configured the policy to be in Audit mode?**
-
-Audit mode is only supported in the on-premises Active Directory environment. Azure AD is implicitly always in "enforce" mode when it evaluates passwords.
-
-**Q: My users see the traditional Windows error message when a password is rejected by Azure AD Password Protection. Is it possible to customize this error message so that users know what really happened?**
-
-No. The error message seen by users when a password is rejected by a domain controller is controlled by the client machine, not by the domain controller. This behavior happens whether a password is rejected by the default Active Directory password policies or by a password-filter-based solution such as Azure AD Password Protection.
-
-## Password testing procedures
-
-You may want to do some basic testing of various passwords in order to validate proper operation of the software and to gain a better understanding of the [password evaluation algorithm](concept-password-ban-bad.md#how-are-passwords-evaluated). This section outlines a method for such testing that is designed to produce repeatable results.
-
-Why is it necessary to follow such steps? There are several factors that make it difficult to do controlled, repeatable testing of passwords in the on-premises Active Directory environment:
-
-* The password policy is configured and persisted in Azure, and copies of the policy are synced periodically by the on-premises DC agent(s) using a polling mechanism. The latency inherent in this polling cycle may cause confusion. For example, if you configure the policy in Azure but forget to sync it to the DC agent, then your tests may not yield the expected results. The polling interval is currently hardcoded to be once per hour, but waiting an hour between policy changes is non-ideal for an interactive testing scenario.
-* Once a new password policy is synced down to a domain controller, more latency will occur while it replicates to other domain controllers. These delays can cause unexpected results if you test a password change against a domain controller that has not yet received the latest version of the policy.
-* Testing password changes via a user interface makes it difficult to have confidence in your results. For example, it is easy to mis-type an invalid password into a user interface, especially since most password user interfaces hide user input (such as the Windows Ctrl-Alt-Delete -> Change password UI).
-* It is not possible to strictly control which domain controller is used when testing password changes from domain-joined clients. The Windows client OS selects a domain controller based on factors such as Active Directory site and subnet assignments, environment-specific network configuration, etc.
-
-In order to avoid these problems, the steps below are based on command-line testing of password resets while logged into a domain controller.
-
-> [!WARNING]
-> These procedures should be used only in a test environment since all incoming password changes and resets will be accepted without validation while the DC agent service is stopped, and also to avoid the increased risks inherent in logging into a domain controller.
-
-The following steps assume that you have installed the DC agent on at least one domain controller, have installed at least one proxy, and have registered both the proxy and the forest.
-
-1. Log on to a domain controller that has the DC agent software installed and has been rebooted, using Domain Admin credentials (or other credentials with sufficient privileges to create test user accounts and reset passwords).
-1. Open up Event Viewer and navigate to the [DC Agent Admin event log](howto-password-ban-bad-on-premises-monitor.md#dc-agent-admin-event-log).
-1. Open an elevated command prompt window.
-1. Create a test account for password testing.
-
- There are many ways to create a user account, but a command-line option is offered here as a way to make it easy during repetitive testing cycles:
-
- ```text
- net.exe user <testuseraccountname> /add <password>
- ```
-
- For discussion purposes below, assume that we have created a test account named "ContosoUser", for example:
-
- ```text
- net.exe user ContosoUser /add <password>
- ```
-
-1. Open a web browser (you may need to use a separate device instead of your domain controller), sign in to the [Azure portal](https://portal.azure.com), and browse to Azure Active Directory > Security > Authentication methods > Password protection.
-1. Modify the Azure AD Password Protection policy as needed for the testing you want to perform. For example, you may decide to configure either Enforced or Audit Mode, or you may decide to modify the list of banned terms in your custom banned passwords list.
-1. Synchronize the new policy by stopping and restarting the DC agent service.
-
- This step can be accomplished in various ways. One way is to use the Service Management administrative console: right-click the Azure AD Password Protection DC Agent service and choose "Restart". Another way is from the command prompt window:
-
- ```text
- net stop AzureADPasswordProtectionDCAgent && net start AzureADPasswordProtectionDCAgent
- ```
-
-1. Check the Event Viewer to verify that a new policy has been downloaded.
-
- Each time the DC agent service is stopped and started, you should see two 30006 events issued in close succession. The first 30006 event will reflect the policy that was cached on disk in the sysvol share. The second 30006 event (if present) should have an updated Tenant policy date, and if so will reflect the policy that was downloaded from Azure. The Tenant policy date value is currently coded to display the approximate timestamp that the policy was downloaded from Azure.
-
- If the second 30006 event does not appear, you should troubleshoot the problem before continuing.
-
- The 30006 events will look similar to this example:
-
- ```text
- The service is now enforcing the following Azure password policy.
-
- Enabled: 1
- AuditOnly: 0
- Global policy date: 2018-05-15T00:00:00.000000000Z
- Tenant policy date: 2018-06-10T20:15:24.432457600Z
- Enforce tenant policy: 1
- ```
-
- For example, changing between Enforced and Audit mode will modify the AuditOnly flag (the above policy with AuditOnly=0 is in Enforced mode). Changes to the custom banned password list are not directly reflected in the 30006 event above (and are not logged anywhere else for security reasons); however, a successful download of the policy from Azure after such a change will still include the modified custom banned password list.
-
-1. Run a test by trying to reset a new password on the test user account.
-
- This step can be done from the command prompt window like so:
-
- ```text
- net.exe user ContosoUser <password>
- ```
-
- After running the command, you can get more information about the outcome of the command by looking in the event viewer. Password validation outcome events are documented in the [DC Agent Admin event log](howto-password-ban-bad-on-premises-monitor.md#dc-agent-admin-event-log) topic; you will use such events to validate the outcome of your test in addition to the interactive output from the net.exe commands.
-
- Let's try an example: attempting to set a password that is banned by the Microsoft global list (note that the list is [not documented](concept-password-ban-bad.md#global-banned-password-list), but we can test here against a known banned term). This example assumes that you have configured the policy to be in Enforced mode and have added zero terms to the custom banned password list.
-
- ```text
- net.exe user ContosoUser PassWord
- The password does not meet the password policy requirements. Check the minimum password length, password complexity and password history requirements.
-
- More help is available by typing NET HELPMSG 2245.
- ```
-
- Per the documentation, because our test was a password reset operation, you should see a 10017 and a 30005 event for the ContosoUser user.
-
- The 10017 event should look like this example:
-
- ```text
- The reset password for the specified user was rejected because it did not comply with the current Azure password policy. Please see the correlated event log message for more details.
-
- UserName: ContosoUser
- FullName:
- ```
-
- The 30005 event should look like this example:
-
- ```text
- The reset password for the specified user was rejected because it matched at least one of the tokens present in the Microsoft global banned password list of the current Azure password policy.
-
- UserName: ContosoUser
- FullName:
- ```
-
- That was fun - let's try another example! This time we will attempt to set a password that is banned by the custom banned list while the policy is in Audit mode. This example assumes that you have done the following steps: configured the policy to be in Audit mode, added the term "lachrymose" to the custom banned password list, and synchronized the resultant new policy to the domain controller by cycling the DC agent service as described above.
-
- Ok, set a variation of the banned password:
-
- ```text
- net.exe user ContosoUser LaChRymoSE!1
- The command completed successfully.
- ```
-
- Remember, this time it succeeded because the policy is in Audit mode. You should see a 10025 and a 30007 event for the ContosoUser user.
-
- The 10025 event should look like this example:
-
- ```text
- The reset password for the specified user would normally have been rejected because it did not comply with the current Azure password policy. The current Azure password policy is configured for audit-only mode so the password was accepted. Please see the correlated event log message for more details.
-
- UserName: ContosoUser
- FullName:
- ```
-
- The 30007 event should look like this example:
-
- ```text
- The reset password for the specified user would normally have been rejected because it matches at least one of the tokens present in the per-tenant banned password list of the current Azure password policy. The current Azure password policy is configured for audit-only mode so the password was accepted.
-
- UserName: ContosoUser
- FullName:
- ```
-
-1. Continue testing various passwords of your choice and checking the results in the event viewer using the procedures outlined in the previous steps. If you need to change the policy in the Azure portal, don't forget to synchronize the new policy down to the DC agent as described earlier.
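If you capture the text of these events while repeating the steps above, the `Key: Value` body (as in the sample 30006, 10017, and 30005 events shown earlier) can be split into fields with a short helper. This is a sketch for collating your own test results, not an official tooling interface:

```python
# Parse the "Key: Value" body of an Azure AD Password Protection event
# message into a dict. The format is taken from the sample events above;
# free-text banner lines and blank lines (no colon) are skipped.

def parse_event_body(text: str) -> dict:
    fields = {}
    for line in text.splitlines():
        if ":" not in line:
            continue  # banner or blank line
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

sample = """The service is now enforcing the following Azure password policy.

Enabled: 1
AuditOnly: 0
Enforce tenant policy: 1"""

print(parse_event_body(sample))
# → {'Enabled': '1', 'AuditOnly': '0', 'Enforce tenant policy': '1'}
```

Comparing the parsed `AuditOnly` value before and after cycling the DC agent service is a quick way to confirm that a policy change was downloaded.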
-
-We've covered procedures that enable you to do controlled testing of Azure AD Password Protection's password validation behavior. Resetting user passwords from the command line directly on a domain controller may seem an odd means of doing such testing, but as described previously it is designed to produce repeatable results. As you are testing various passwords, keep the [password evaluation algorithm](concept-password-ban-bad.md#how-are-passwords-evaluated) in mind as it may help to explain results that you did not expect.
-
-> [!WARNING]
-> When all testing is completed do not forget to delete any user accounts created for testing purposes!
-
-## Additional content
-
-The following links are not part of the core Azure AD Password Protection documentation but may be a useful source of additional information on the feature.
-
-[Azure AD Password Protection is now generally available!](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Azure-AD-Password-Protection-is-now-generally-available/ba-p/377487)
-
-[Email Phishing Protection Guide – Part 15: Implement the Microsoft Azure AD Password Protection Service (for On-Premises too!)](http://kmartins.com/2018/10/14/email-phishing-protection-guide-part-15-implement-the-microsoft-azure-ad-password-protection-service-for-on-premises-too/)
-
-[Azure AD Password Protection and Smart Lockout are now in Public Preview!](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Azure-AD-Password-Protection-and-Smart-Lockout-are-now-in-Public/ba-p/245423#M529)
-
-## Microsoft Premier/Unified support training available
-
-If you're interested in learning more about Azure AD Password Protection and deploying it in your environment, you can take advantage of a Microsoft proactive service available to those customers with a Premier or Unified support contract. The service is called Azure Active Directory: Password Protection. Contact your Technical Account Manager for more information.
-
-## Next steps
-
-If you have an on-premises Azure AD Password Protection question that isn't answered here, submit a Feedback item below - thank you!
-
-[Deploy Azure AD password protection](howto-password-ban-bad-on-premises-deploy.md)
active-directory Howto Password Ban Bad On Premises Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-ban-bad-on-premises-troubleshoot.md
VerifyAzureConnectivityViaSpecificProxy Passed
## Next steps
-[Frequently asked questions for Azure AD Password Protection](howto-password-ban-bad-on-premises-faq.md)
+[Frequently asked questions for Azure AD Password Protection](howto-password-ban-bad-on-premises-faq.yml)
For more information on the global and custom banned password lists, see the article [Ban bad passwords](concept-password-ban-bad.md)
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-sspr-deployment.md
For more information about pricing, see [Azure Active Directory pricing](https:/
|Tutorials |[Complete an Azure AD self-service password reset pilot roll out](./tutorial-enable-sspr.md) | | |[Enabling password writeback](./tutorial-enable-sspr-writeback.md) | | |[Azure AD password reset from the login screen for Windows 10](./howto-sspr-windows.md) |
-| FAQ|[Password management frequently asked questions](./active-directory-passwords-faq.md) |
+| FAQ|[Password management frequently asked questions](./active-directory-passwords-faq.yml) |
### Solution architecture
Audit logs for registration and password reset are available for 30 days. If sec
* Refer to [Troubleshoot self-service password reset](./troubleshoot-sspr.md)
-* Follow [Password management frequently asked questions](./active-directory-passwords-faq.md)
+* Follow [Password management frequently asked questions](./active-directory-passwords-faq.yml)
### Helpful documentation
active-directory Howto Sspr Reporting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-sspr-reporting.md
The following list explains this activity in detail:
* [What is password writeback and why do I care about it?](./tutorial-enable-sspr-writeback.md) * [What are all of the options in SSPR and what do they mean?](concept-sspr-howitworks.md) * [I think something is broken. How do I troubleshoot SSPR?](./troubleshoot-sspr.md)
-* [I have a question that was not covered somewhere else](active-directory-passwords-faq.md)
+* [I have a question that was not covered somewhere else](active-directory-passwords-faq.yml)
[Reporting]: ./media/howto-sspr-reporting/sspr-reporting.png "Example of SSPR activity audit logs in Azure AD"
active-directory Multi Factor Authentication Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/multi-factor-authentication-faq.md
- Title: Azure AD Multi-Factor Authentication FAQ - Azure Active Directory
-description: Frequently asked questions and answers related to Azure AD Multi-Factor Authentication.
----- Previously updated : 07/14/2020--------
-# Frequently asked questions about Azure AD Multi-Factor Authentication
-
-This FAQ answers common questions about Azure AD Multi-Factor Authentication and using the Multi-Factor Authentication service. It's broken down into questions about the service in general, billing models, user experiences, and troubleshooting.
-
-> [!IMPORTANT]
-> As of July 1, 2019, Microsoft will no longer offer MFA Server for new deployments. New customers who would like to require multi-factor authentication from their users should use cloud-based Azure AD Multi-Factor Authentication. Existing customers who activated MFA Server prior to July 1 will be able to download the latest version and future updates, and generate activation credentials as usual.
->
-> The information shared below regarding the Azure Multi-Factor Authentication Server is only applicable for users who already have the MFA server running.
->
-> Consumption-based licensing is no longer available to new customers effective September 1, 2018.
-> Effective September 1, 2018 new auth providers may no longer be created. Existing auth providers may continue to be used and updated. Multi-factor authentication will continue to be an available feature in Azure AD Premium licenses.
-
-## General
-
-* [How does Azure Multi-Factor Authentication Server handle user data?](#how-does-azure-multi-factor-authentication-server-handle-user-data)
-* [What SMS short codes are used for sending SMS messages to my users?](#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users)
-
-### How does Azure Multi-Factor Authentication Server handle user data?
-
-With Multi-Factor Authentication Server, user data is only stored on the on-premises servers. No persistent user data is stored in the cloud. When the user performs two-step verification, Multi-Factor Authentication Server sends data to the Azure AD Multi-Factor Authentication cloud service for authentication. Communication between Multi-Factor Authentication Server and the Multi-Factor Authentication cloud service uses Secure Sockets Layer (SSL) or Transport Layer Security (TLS) over port 443 outbound.
-
-When authentication requests are sent to the cloud service, data is collected for authentication and usage reports. The following data fields are included in two-step verification logs:
-
-* **Unique ID** (either user name or on-premises Multi-Factor Authentication Server ID)
-* **First and Last Name** (optional)
-* **Email Address** (optional)
-* **Phone Number** (when using a voice call or SMS authentication)
-* **Device Token** (when using mobile app authentication)
-* **Authentication Mode**
-* **Authentication Result**
-* **Multi-Factor Authentication Server Name**
-* **Multi-Factor Authentication Server IP**
-* **Client IP** (if available)
-
-The optional fields can be configured in Multi-Factor Authentication Server.
-
-The verification result (success or denial), and the reason if it was denied, is stored with the authentication data. This data is available in authentication and usage reports.
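For scripted processing of these authentication and usage reports, the fields listed above can be modeled as a simple record. The field names below are our own illustration of the list, not an official Microsoft schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TwoStepVerificationRecord:
    # Names are illustrative, mirroring the field list above;
    # this is not an official schema.
    unique_id: str                 # user name or on-premises MFA Server ID
    authentication_mode: str
    authentication_result: str     # success or denial (with reason)
    server_name: str               # Multi-Factor Authentication Server name
    server_ip: str                 # Multi-Factor Authentication Server IP
    full_name: Optional[str] = None      # optional, configurable
    email_address: Optional[str] = None  # optional, configurable
    phone_number: Optional[str] = None   # voice call or SMS only
    device_token: Optional[str] = None   # mobile app only
    client_ip: Optional[str] = None      # if available

rec = TwoStepVerificationRecord(
    unique_id="user@contoso.com",
    authentication_mode="PhoneAppNotification",
    authentication_result="Success",
    server_name="MFASRV01",
    server_ip="10.0.0.5",
)
print(rec.authentication_result)  # → Success
```

The optional fields default to `None`, matching the note above that they can be left unconfigured in Multi-Factor Authentication Server.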
-
-For more information, see [Data residency and customer data for Azure AD Multi-Factor Authentication](concept-mfa-data-residency.md).
-
-### What SMS short codes are used for sending SMS messages to my users?
-
-In the United States, we use the following SMS short codes:
-
-* *97671*
-* *69829*
-* *51789*
-* *99399*
-
-In Canada, we use the following SMS short codes:
-
-* *759731*
-* *673801*
-
-There's no guarantee of consistent SMS or voice-based Multi-Factor Authentication prompt delivery by the same number. In the interest of our users, we may add or remove short codes at any time as we make route adjustments to improve SMS deliverability.
-
-We don't support short codes for countries or regions besides the United States and Canada.
-
-## Billing
-
-Most billing questions can be answered by referring to either the [Multi-Factor Authentication Pricing page](https://azure.microsoft.com/pricing/details/multi-factor-authentication/) or the documentation for [Azure AD Multi-Factor Authentication versions and consumption plans](concept-mfa-licensing.md).
-
-* [Is my organization charged for sending the phone calls and text messages that are used for authentication?](#is-my-organization-charged-for-sending-the-phone-calls-and-text-messages-that-are-used-for-authentication)
-* [Does the per-user billing model charge me for all enabled users, or just the ones that performed two-step verification?](#does-the-per-user-billing-model-charge-me-for-all-enabled-users-or-just-the-ones-that-performed-two-step-verification)
-* [How does Multi-Factor Authentication billing work?](#how-does-multi-factor-authentication-billing-work)
-* [Is there a free version of Azure AD Multi-Factor Authentication?](#is-there-a-free-version-of-azure-ad-multi-factor-authentication)
-* [Can my organization switch between per-user and per-authentication consumption billing models at any time?](#can-my-organization-switch-between-per-user-and-per-authentication-consumption-billing-models-at-any-time)
-* [Can my organization switch between consumption-based billing and subscriptions (a license-based model) at any time?](#can-my-organization-switch-between-consumption-based-billing-and-subscriptions-a-license-based-model-at-any-time)
-* [Does my organization have to use and synchronize identities to use Azure AD Multi-Factor Authentication?](#does-my-organization-have-to-use-and-synchronize-identities-to-use-azure-ad-multi-factor-authentication)
-
-### Is my organization charged for sending the phone calls and text messages that are used for authentication?
-
-No, you're not charged for individual phone calls placed or text messages sent to users through Azure AD Multi-Factor Authentication. If you use a per-authentication MFA provider, you're billed for each authentication, but not for the method used.
-
-Your users might be charged for the phone calls or text messages they receive, according to their personal phone service.
-
-### Does the per-user billing model charge me for all enabled users, or just the ones that performed two-step verification?
-
-Billing is based on the number of users configured to use Multi-Factor Authentication, regardless of whether they performed two-step verification that month.
-
-### How does Multi-Factor Authentication billing work?
-
-When you create a per-user or per-authentication MFA provider, your organization's Azure subscription is billed monthly based on usage. This billing model is similar to how Azure bills for usage of virtual machines and Web Apps.
-
-When you purchase a subscription for Azure AD Multi-Factor Authentication, your organization only pays the annual license fee for each user. MFA licenses and Microsoft 365, Azure AD Premium, or Enterprise Mobility + Security bundles are billed this way.
-
-For more information, see [How to get Azure AD Multi-Factor Authentication](concept-mfa-licensing.md).
-
-### Is there a free version of Azure AD Multi-Factor Authentication?
-
-Security defaults can be enabled in the Azure AD Free tier. With security defaults, all users are enabled for multi-factor authentication using the Microsoft Authenticator app. There's no ability to use text message or phone verification with security defaults, just the Microsoft Authenticator app.
-
-For more information, see [What are security defaults?](../fundamentals/concept-fundamentals-security-defaults.md)
-
-### Can my organization switch between per-user and per-authentication consumption billing models at any time?
-
-If your organization purchases MFA as a standalone service with consumption-based billing, you choose a billing model when you create an MFA provider. You can't change the billing model after an MFA provider is created.
-
-If your MFA provider is *not* linked to an Azure AD tenant, or you link the new MFA provider to a different Azure AD tenant, user settings and configuration options aren't transferred. Also, existing Azure MFA Servers need to be reactivated using activation credentials generated through the new MFA Provider. Reactivating the MFA Servers to link them to the new MFA Provider doesn't impact phone call and text message authentication, but mobile app notifications will stop working for all users until they reactivate the mobile app.
-
-Learn more about MFA providers in [Getting started with an Azure Multi-Factor Auth Provider](concept-mfa-authprovider.md).
-
-### Can my organization switch between consumption-based billing and subscriptions (a license-based model) at any time?
-
-In some instances, yes.
-
-If your directory has a *per-user* Azure Multi-Factor Authentication provider, you can add MFA licenses. Users with licenses aren't counted in the per-user consumption-based billing. Users without licenses can still be enabled for MFA through the MFA provider. If you purchase and assign licenses for all your users configured to use Multi-Factor Authentication, you can delete the Azure Multi-Factor Authentication provider. You can always create another per-user MFA provider if you have more users than licenses in the future.
-
-If your directory has a *per-authentication* Azure Multi-Factor Authentication provider, you're always billed for each authentication, as long as the MFA provider is linked to your subscription. You can assign MFA licenses to users, but you'll still be billed for every two-step verification request, whether it comes from someone with an MFA license assigned or not.
-
-### Does my organization have to use and synchronize identities to use Azure AD Multi-Factor Authentication?
-
-If your organization uses a consumption-based billing model, Azure Active Directory is optional, but not required. If your MFA provider isn't linked to an Azure AD tenant, you can only deploy Azure Multi-Factor Authentication Server on-premises.
-
-Azure Active Directory is required for the license model because licenses are added to the Azure AD tenant when you purchase and assign them to users in the directory.
-
-## Manage and support user accounts
-
-* [What should I tell my users to do if they don't receive a response on their phone?](#what-should-i-tell-my-users-to-do-if-they-dont-receive-a-response-on-their-phone)
-* [What should I do if one of my users can't get in to their account?](#what-should-i-do-if-one-of-my-users-cant-get-in-to-their-account)
-* [What should I do if one of my users loses a phone that is using app passwords?](#what-should-i-do-if-one-of-my-users-loses-a-phone-that-is-using-app-passwords)
-* [What if a user can't sign in to non-browser apps?](#what-if-a-user-cant-sign-in-to-non-browser-apps)
-* [My users say that sometimes they don't receive the text message or the verification times out.](#my-users-say-that-sometimes-they-dont-receive-the-text-message-or-the-verification-times-out)
-* [Can I change the amount of time my users have to enter the verification code from a text message before the system times out?](#can-i-change-the-amount-of-time-my-users-have-to-enter-the-verification-code-from-a-text-message-before-the-system-times-out)
-* [Can I use hardware tokens with Azure Multi-Factor Authentication Server?](#can-i-use-hardware-tokens-with-azure-multi-factor-authentication-server)
-* [Can I use Azure Multi-Factor Authentication Server to secure Terminal Services?](#can-i-use-azure-multi-factor-authentication-server-to-secure-terminal-services)
-* [I configured Caller ID in MFA Server, but my users still receive Multi-Factor Authentication calls from an anonymous caller.](#i-configured-caller-id-in-mfa-server-but-my-users-still-receive-multi-factor-authentication-calls-from-an-anonymous-caller)
-* [Why are my users being prompted to register their security information?](#why-are-my-users-being-prompted-to-register-their-security-information)
-
-### What should I tell my users to do if they don't receive a response on their phone?
-
-Have your users attempt up to five times in 5 minutes to get a phone call or SMS for authentication. Microsoft uses multiple providers for delivering calls and SMS messages. If this approach doesn't work, open a support case to troubleshoot further.
-
-Third-party security apps may also block the verification code text message or phone call. If using a third-party security app, try disabling the protection, then request another MFA verification code be sent.
-
-If the steps above don't work, check if users are configured for more than one verification method. Try signing in again, but select a different verification method on the sign-in page.
-
-For more information, see the [end-user troubleshooting guide](../user-help/multi-factor-authentication-end-user-troubleshoot.md).
-
-### What should I do if one of my users can't get in to their account?
-
-You can reset the user's account by having them go through the registration process again. Learn more about [managing user and device settings with Azure AD Multi-Factor Authentication in the cloud](howto-mfa-userdevicesettings.md).
-
-### What should I do if one of my users loses a phone that is using app passwords?
-
-To prevent unauthorized access, delete all the user's app passwords. After the user has a replacement device, they can recreate the passwords. Learn more about [managing user and device settings with Azure AD Multi-Factor Authentication in the cloud](howto-mfa-userdevicesettings.md).
-
-### What if a user can't sign in to non-browser apps?
-
-If your organization still uses legacy clients, and you [allowed the use of app passwords](howto-mfa-app-passwords.md), then your users can't sign in to these legacy clients with their username and password. Instead, they need to [set up app passwords](../user-help/multi-factor-authentication-end-user-app-passwords.md). Your users must clear (delete) their sign-in information, restart the app, and then sign in with their username and *app password* instead of their regular password.
-
-If your organization doesn't have legacy clients, you shouldn't allow your users to create app passwords.
-
-> [!NOTE]
-> **Modern authentication for Office 2013 clients**
->
-> App passwords are only necessary for apps that don't support modern authentication. Office 2013 clients support modern authentication protocols, but need to be configured. Modern authentication is available to any customer running the March 2015 or later update for Office 2013. For more information, see the blog post [Updated Office 365 modern authentication](https://www.microsoft.com/microsoft-365/blog/2015/11/19/updated-office-365-modern-authentication-public-preview/).
-
-### My users say that sometimes they don't receive the text message or the verification times out.
-
-Delivery of SMS messages isn't guaranteed because there are uncontrollable factors that might affect the reliability of the service. These factors include the destination country or region, the mobile phone carrier, and the signal strength.
-
-Third-party security apps may also block the verification code text message or phone call. If using a third-party security app, try disabling the protection, then request another MFA verification code be sent.
-
-If your users often have problems with reliably receiving text messages, tell them to use the Microsoft Authenticator app or phone call method instead. The Microsoft Authenticator can receive notifications both over cellular and Wi-Fi connections. In addition, the mobile app can generate verification codes even when the device has no signal at all. The Microsoft Authenticator app is available for [Android](https://go.microsoft.com/fwlink/?Linkid=825072), [iOS](https://go.microsoft.com/fwlink/?Linkid=825073), and [Windows Phone](https://www.microsoft.com/p/microsoft-authenticator/9nblgggzmcj6).
-
-### Can I change the amount of time my users have to enter the verification code from a text message before the system times out?
-
-In some cases, yes.
-
-For one-way SMS with Azure MFA Server v7.0 or higher, you can configure the timeout setting by setting a registry key. After the MFA cloud service sends the text message, the verification code (or one-time passcode) is returned to the MFA Server. The MFA Server stores the code in memory for 300 seconds by default. If the user doesn't enter the code before the 300 seconds have passed, their authentication is denied. Use these steps to change the default timeout setting:
-
-1. Go to `HKLM\Software\Wow6432Node\Positive Networks\PhoneFactor`.
-2. Create a **DWORD** registry key called *pfsvc_pendingSmsTimeoutSeconds* and set the time in seconds that you want the Azure MFA Server to store one-time passcodes.
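
For illustration, the steps above can be captured in a Registration Entries (.reg) file. This is a sketch only: the 600-second value (`0x258`) is an arbitrary example, not a recommendation.

```reg
Windows Registry Editor Version 5.00

; Example only: store one-time passcodes for 600 seconds (0x258)
; instead of the default 300. Key path and value name are those
; documented in the steps above.
[HKEY_LOCAL_MACHINE\Software\Wow6432Node\Positive Networks\PhoneFactor]
"pfsvc_pendingSmsTimeoutSeconds"=dword:00000258
```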
-
->[!TIP]
->
-> If you have multiple MFA Servers, only the one that processed the original authentication request knows the verification code that was sent to the user. When the user enters the code, the authentication request to validate it must be sent to the same server. If the code validation is sent to a different server, the authentication is denied.
-
-If users don't respond to the SMS within the defined timeout period, their authentication is denied.
-
-For one-way SMS with Azure AD MFA in the cloud (including the AD FS adapter or the Network Policy Server extension), you can't configure the timeout setting. Azure AD stores the verification code for 180 seconds.
-
-### Can I use hardware tokens with Azure Multi-Factor Authentication Server?
-
-If you're using Azure Multi-Factor Authentication Server, you can import third-party Open Authentication (OATH) time-based, one-time password (TOTP) tokens, and then use them for two-step verification.
-
-You can use ActiveIdentity tokens that are OATH TOTP tokens if you put the secret key in a CSV file and import to Azure Multi-Factor Authentication Server. You can use OATH tokens with Active Directory Federation Services (ADFS), Internet Information Server (IIS) forms-based authentication, and Remote Authentication Dial-In User Service (RADIUS) as long as the client system can accept the user input.
-
-You can import third-party OATH TOTP tokens with the following formats:
-
-- Portable Symmetric Key Container (PSKC)
-- CSV if the file contains a serial number, a secret key in Base 32 format, and a time interval
-
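
As an illustration of the CSV import format, a file with those three fields (serial number, Base 32 secret key, time interval in seconds) might look like the following. All values here are made up for the example:

```csv
12345678,JBSWY3DPEHPK3PXP,30
87654321,GEZDGNBVGY3TQOJQ,60
```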
-### Can I use Azure Multi-Factor Authentication Server to secure Terminal Services?
-
-Yes, but if you're using Windows Server 2012 R2 or later, you can only secure Terminal Services by using Remote Desktop Gateway (RD Gateway).
-
-Security changes in Windows Server 2012 R2 changed how Azure Multi-Factor Authentication Server connects to the Local Security Authority (LSA) security package in Windows Server 2012 and earlier versions. For versions of Terminal Services in Windows Server 2012 or earlier, you can [secure an application with Windows Authentication](howto-mfaserver-windows.md#to-secure-an-application-with-windows-authentication-use-the-following-procedure). If you're using Windows Server 2012 R2, you need RD Gateway.
-
-### I configured Caller ID in MFA Server, but my users still receive Multi-Factor Authentication calls from an anonymous caller.
-
-When Multi-Factor Authentication calls are placed through the public telephone network, sometimes they are routed through a carrier that doesn't support caller ID. Because of this carrier behavior, caller ID isn't guaranteed, even though the Multi-Factor Authentication system always sends it.
-
-### Why are my users being prompted to register their security information?
-
-There are several reasons that users could be prompted to register their security information:
-
-- The user has been enabled for MFA by their administrator in Azure AD, but doesn't have security information registered for their account yet.
-- The user has been enabled for self-service password reset in Azure AD. The security information will help them reset their password in the future if they ever forget it.
-- The user accessed an application that has a Conditional Access policy to require MFA and hasn't previously registered for MFA.
-- The user is registering a device with Azure AD (including Azure AD Join), and your organization requires MFA for device registration, but the user hasn't previously registered for MFA.
-- The user is generating Windows Hello for Business in Windows 10 (which requires MFA) and hasn't previously registered for MFA.
-- The organization has created and enabled an MFA Registration policy that has been applied to the user.
-- The user previously registered for MFA, but chose a verification method that an administrator has since disabled. The user must therefore go through MFA registration again to select a new default verification method.
-
-## Errors
-
-* [What should users do if they see an "Authentication request is not for an activated account" error message when using mobile app notifications?](#what-should-users-do-if-they-see-an-authentication-request-is-not-for-an-activated-account-error-message-when-using-mobile-app-notifications)
-* [What should users do if they see a 0x800434D4L error message when signing in to a non-browser application?](#what-should-users-do-if-they-see-a-0x800434d4l-error-message-when-signing-in-to-a-non-browser-application)
-
-### What should users do if they see an "Authentication request is not for an activated account" error message when using mobile app notifications?
-
-Ask the user to complete the following procedure to remove their account from the Microsoft Authenticator, then add it again:
-
-1. Go to [their Azure portal profile](https://account.activedirectory.windowsazure.com/profile/) and sign in with an organizational account.
-2. Select **Additional Security Verification**.
-3. Remove the existing account from the Microsoft Authenticator app.
-4. Click **Configure**, and then follow the instructions to reconfigure the Microsoft Authenticator.
-
-### What should users do if they see a 0x800434D4L error message when signing in to a non-browser application?
-
-The *0x800434D4L* error occurs when you try to sign in to a non-browser application, installed on a local computer, that doesn't work with accounts that require two-step verification.
-
-A workaround for this error is to have separate user accounts for admin-related and non-admin operations. Later, you can link mailboxes between your admin account and non-admin account so that you can sign in to Outlook by using your non-admin account. For more details about this solution, learn how to [give an administrator the ability to open and view the contents of a user's mailbox](https://help.outlook.com/141/gg709759.aspx?sl=1).
-
-## Next steps
-
-If your question isn't answered here, the following support options are available:
-
-* Search the [Microsoft Support Knowledge Base](https://support.microsoft.com) for solutions to common technical issues.
-* Search for and browse technical questions and answers from the community, or ask your own question in the [Azure Active Directory Q&A](/answers/topics/azure-active-directory.html).
-* Contact a Microsoft support professional through [Azure Multi-Factor Authentication Server support](https://support.microsoft.com/oas/default.aspx?prid=14947). When contacting us, it's helpful if you can include as much information about your issue as possible. Information you can supply includes the page where you saw the error, the specific error code, the specific session ID, and the ID of the user who saw the error.
-* If you're a legacy PhoneFactor customer and you have questions or need help with resetting a password, use the [phonefactorsupport@microsoft.com](mailto:phonefactorsupport@microsoft.com) e-mail address to open a support case.
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
You can also contribute to the samples on GitHub. To learn how, see [Microsoft A
These samples show how to write a single-page application secured with the Microsoft identity platform. These samples use one of the flavors of MSAL.js.
-| Platform | Description | Link |
-| -- | | -- |
-| ![This image shows the JavaScript logo](media/sample-v2-code/logo_js.png) [JavaScript (MSAL.js 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser) | SPA calls Microsoft Graph using Auth Code Flow w/ PKCE |[javascript-v2](https://github.com/Azure-Samples/ms-identity-javascript-v2) |
-| ![This image shows the JavaScript logo](media/sample-v2-code/logo_js.png) [JavaScript (MSAL.js 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser) | SPA calls B2C using Auth Code Flow w/PKCE |[b2c-javascript-spa](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa) |
-| ![This image shows the JavaScript logo](media/sample-v2-code/logo_js.png) [JavaScript (MSAL.js 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser) | SPA calls custom web API which in turn calls Microsoft Graph | [ms-identity-javascript-tutorial-chapter4-obo](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/1-call-api-graph) |
-| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular)| SPA calls Microsoft Graph | [active-directory-javascript-singlepageapp-angular](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-angular) |
-| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular)| SPA calls Microsoft Graph using Auth Code Flow w/ PKCE | [ms-identity-javascript-angular-spa](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa) |
-| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular)| SPA calls custom web API | [ms-identity-javascript-angular-spa-aspnetcore-webapi](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-aspnetcore-webapi) |
-| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular) | SPA calls B2C |[active-directory-b2c-javascript-angular-spa](https://github.com/Azure-Samples/active-directory-b2c-javascript-angular-spa) |
-| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular) | SPA calls custom web API with App Roles and Security Groups |[ms-identity-javascript-angular-spa-dotnetcore-webapi-roles-groups](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-dotnetcore-webapi-roles-groups) |
-| ![This image shows the React logo](media/sample-v2-code/logo_react.png) [React (MSAL React)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react)| SPA calls Microsoft Graph using Auth Code Flow w/ PKCE | [ms-identity-javascript-react-spa](https://github.com/Azure-Samples/ms-identity-javascript-react-spa) |
-| ![This image shows the React logo](media/sample-v2-code/logo_react.png) [React (MSAL React)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react)| SPA calls custom web API | [ms-identity-javascript-react-tutorial](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/1-call-api) |
-| ![This image shows the React logo](media/sample-v2-code/logo_react.png) [React (MSAL.js 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-core)| SPA calls custom Web API which in turn calls Microsoft Graph | [ms-identity-javascript-react-spa-dotnetcore-webapi-obo](https://github.com/Azure-Samples/ms-identity-javascript-react-spa-dotnetcore-webapi-obo) |
-| ![This image shows the Blazor logo](media/sample-v2-code/logo-blazor.png) [Blazor WebAssembly (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser) | Blazor WebAssembly Tutorial to sign-in users and call APIs with Azure Active Directory |[ms-identity-blazor-wasm](https://github.com/Azure-Samples/ms-identity-blazor-wasm) |
+> [!div class="mx-tdCol2BreakAll"]
+> | Language/<br/>Platform | Code sample | Description | Auth libraries | Auth flow |
+> | - | -- | | - | -- |
+> |Angular|[GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa)| &#8226; Signs in users with AAD <br/>&#8226; Calls Microsoft Graph | MSAL Angular | Auth code flow (with PKCE) |
+> | Angular | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial) | &#8226; [Signs in users](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Signs in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Calls Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/blob/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Calls .NET Core web API](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Calls .NET Core web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Calls Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/blob/main/4-AdvancedGrants/1-call-api-graph/README.md)<br/>&#8226; [Calls .NET Core web API using PoP](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/blob/main/4-AdvancedGrants/2-call-api-api-c)| MSAL Angular | &#8226; Auth code flow (with PKCE)<br/>&#8226; On-behalf-of (OBO) flow<br/>&#8226; Proof of Possession (PoP)|
+> | Blazor WebAssembly | [GitHub repo](https://github.com/Azure-Samples/ms-identity-blazor-wasm) | &#8226; Signs in users<br/>&#8226; Calls Microsoft Graph | MSAL.js | Auth code flow (with PKCE) |
+> | JavaScript | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-v2) | &#8226; Signs in users<br/>&#8226; Calls Microsoft Graph | MSAL.js | Auth code flow (with PKCE) |
+> | JavaScript | [GitHub repo](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa) | &#8226; Signs in users (B2C)<br/>&#8226; Calls Node.js web API | MSAL.js | Auth code flow (with PKCE) |
+> | JavaScript | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-tutorial) | &#8226; [Signs in users](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Signs in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Calls Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/blob/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Calls Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/blob/main/3-Authorization-II/1-call-api/README.md)<br/>&#8226; [Calls Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/README.md)<br/>&#8226; [Calls Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/blob/main/4-AdvancedGrants/1-call-api-graph/README.md)<br/>&#8226; [Calls Node.js web API via OBO & CA](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/blob/main/4-AdvancedGrants/2-call-api-api-c)| MSAL.js | &#8226; Auth code flow (with PKCE)<br/>&#8226; On-behalf-of (OBO) flow<br/>&#8226; Conditional Access (CA) |
+> | React | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-react-spa) | &#8226; Signs in users<br/>&#8226; Calls Microsoft Graph | MSAL React | Auth code flow (with PKCE) |
+> | React | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial) | &#8226; [Signs in users](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Signs in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Calls Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Calls Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Calls Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Calls Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/6-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Uses App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/5-AccessControl/1-call-api-roles/README.md)<br/>&#8226; [Uses Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploys to Azure Storage & App Service](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/4-Deployment/1-deploy-storage/README.md)<br/>&#8226; [Deploys to Azure Static Web Apps](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/4-Deployment/2-deploy-static/README.md)| MSAL React | &#8226; Auth code flow (with PKCE)<br/>&#8226; On-behalf-of (OBO) flow<br/>&#8226; Conditional Access (CA) |
## Web applications
active-directory Scenario Spa Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-spa-acquire-token.md
Previously updated : 08/20/2019 Last updated : 04/2/2021 #Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
# Single-page application: Acquire a token to call an API
-The pattern for acquiring tokens for APIs with MSAL.js is to first attempt a silent token request by using the `acquireTokenSilent` method. When this method is called, the library first checks the cache in browser storage to see if a valid token exists and returns it. When no valid token is in the cache, it sends a silent token request to Azure Active Directory (Azure AD) from a hidden iframe. This method also allows the library to renew tokens. For more information about single sign-on session and token lifetime values in Azure AD, see [Token lifetimes](active-directory-configurable-token-lifetimes.md).
+The pattern for acquiring tokens for APIs with [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) is to first attempt a silent token request by using the `acquireTokenSilent` method. When this method is called, the library first checks the cache in browser storage to see if a valid token exists and returns it. When no valid token is in the cache, it attempts to use its refresh token to acquire a new access token. If the refresh token's 24-hour lifetime has expired, MSAL.js opens a hidden iframe to silently request a new authorization code, which it exchanges for a new, valid refresh token. For more information about single sign-on session and token lifetime values in Azure AD, see [Token lifetimes](active-directory-configurable-token-lifetimes.md).
-The silent token requests to Azure AD might fail for reasons like an expired Azure AD session or a password change. In that case, you can invoke one of the interactive methods (which will prompt the user) to acquire tokens:
+The silent token requests to Azure AD might fail for reasons like a password change or updated conditional access policies. More often, failures are due to the refresh token's 24-hour lifetime expiring and [the browser blocking 3rd party cookies](reference-third-party-cookies-spas.md), which prevents the use of hidden iframes to continue authenticating the user. In these cases, you should invoke one of the interactive methods (which may prompt the user) to acquire tokens:
* [Pop-up window](#acquire-a-token-with-a-pop-up-window), by using `acquireTokenPopup`
* [Redirect](#acquire-a-token-with-a-redirect), by using `acquireTokenRedirect`
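
The silent-then-interactive pattern can be sketched as follows with MSAL.js 2.x. The helper names (`shouldFallBackToInteraction`, `getToken`) are hypothetical, and `pca` and `request` are assumed to be a configured `PublicClientApplication` and a token request listing your scopes:

```javascript
// Minimal sketch (helper names hypothetical). MSAL.js signals that user
// interaction is needed with an error named "InteractionRequiredAuthError".
function shouldFallBackToInteraction(error) {
  return Boolean(error) && error.name === "InteractionRequiredAuthError";
}

// Try the silent request first; fall back to interaction only when the
// library says interaction is required. Other failures surface as-is.
async function getToken(pca, request) {
  try {
    return (await pca.acquireTokenSilent(request)).accessToken;
  } catch (err) {
    if (shouldFallBackToInteraction(err)) {
      // Could equally be acquireTokenRedirect, depending on your UX.
      return (await pca.acquireTokenPopup(request)).accessToken;
    }
    throw err;
  }
}

console.log(shouldFallBackToInteraction({ name: "InteractionRequiredAuthError" }));
```

Catching every error and prompting unconditionally would cause unnecessary sign-in prompts, which is why the fallback is gated on the error type.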
publicClientApplication.acquireTokenSilent(accessTokenRequest).then(function(acc
## Next steps
-Move on to the next article in this scenario, [Calling a web API](scenario-spa-call-api.md).
+Move on to the next article in this scenario, [Calling a web API](scenario-spa-call-api.md).
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
If you attempt to use the authorization code flow and see this error:
Then, visit your app registration and update the redirect URI for your app to type `spa`.
+Applications can't use a `spa` redirect URI with non-SPA flows, for example native applications or client credential flows. To ensure security, Azure AD returns an error if you attempt to use a `spa` redirect URI in these scenarios, for example from a native app that doesn't send an `Origin` header.
+
## Request an authorization code

The authorization code flow begins with the client directing the user to the `/authorize` endpoint. In this request, the client requests the `openid`, `offline_access`, and `https://graph.microsoft.com/mail.read` permissions from the user. Some permissions are admin-restricted, for example writing data to an organization's directory by using `Directory.ReadWrite.All`. If your application requests access to one of these permissions from an organizational user, the user receives an error message that says they're not authorized to consent to your app's permissions. To request access to admin-restricted scopes, you should request them directly from a Global Administrator. For more information, read [Admin-restricted permissions](v2-permissions-and-consent.md#admin-restricted-permissions).
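
As an illustration, a request for those permissions looks roughly like the following (line breaks added for readability; the tenant, `client_id`, `redirect_uri`, `state`, and `code_challenge` values are placeholders):

```
GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
client_id=00000000-0000-0000-0000-000000000000
&response_type=code
&redirect_uri=https%3A%2F%2Flocalhost%2Fmyapp
&response_mode=query
&scope=openid%20offline_access%20https%3A%2F%2Fgraph.microsoft.com%2Fmail.read
&state=12345
&code_challenge=YTFjNjI1OWYzMzA3MTI4ZDY2Njg5M2RkNmVjNDE5YmEyZGRhOGYyM2IzNjdmZWFh
&code_challenge_method=S256
```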
active-directory Enterprise State Roaming Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/enterprise-state-roaming-enable.md
The data retention policy is not configurable. Once the data is permanently dele
## Next steps

* [Enterprise State Roaming overview](enterprise-state-roaming-overview.md)
-* [Settings and data roaming FAQ](enterprise-state-roaming-faqs.md)
+* [Settings and data roaming FAQ](enterprise-state-roaming-faqs.yml)
* [Group Policy and MDM settings for settings sync](enterprise-state-roaming-group-policy-settings.md)
* [Windows 10 roaming settings reference](enterprise-state-roaming-windows-settings-reference.md)
* [Troubleshooting](enterprise-state-roaming-troubleshooting.md)
active-directory Enterprise State Roaming Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/enterprise-state-roaming-faqs.md
- Title: Enterprise State Roaming FAQ - Azure Active Directory
-description: Frequently asked questions about ESR
----- Previously updated : 02/12/2020--------
-# Settings and data roaming FAQ
-
-This article answers some questions IT administrators might have about settings and app data sync.
-
-## What data roams?
-
-**Windows settings**:
-the PC settings that are built into the Windows operating system. Generally, these are settings that personalize your PC, and they include the following broad categories:
-
-* *Theme*, which includes features such as desktop theme and taskbar settings.
-* *Internet Explorer settings*, including recently opened tabs and favorites.
-* *Microsoft Edge browser settings*, such as favorites and reading list.
-* *Passwords*, including Internet passwords, Wi-Fi profiles, and others.
-* *Language preferences*, which include settings for keyboard layouts, system language, date and time, and more.
-* *Ease of access features*, such as high-contrast theme, Narrator, and Magnifier.
-* *Other Windows settings*, such as mouse settings.
-
-> [!NOTE]
-> This article applies to the Microsoft Edge Legacy HTML-based browser launched with Windows 10 in July 2015. The article does not apply to the new Microsoft Edge Chromium-based browser released on January 15, 2020. For more information on the Sync behavior for the new Microsoft Edge, see the article [Microsoft Edge Sync](/deployedge/microsoft-edge-enterprise-sync).
-
-**Application data**: Universal Windows apps can write settings data to a roaming folder, and any data written to this folder will automatically be synced. It's up to the individual app developer to design an app to take advantage of this capability. For more information about how to develop a Universal Windows app that uses roaming, see the [appdata storage API](/windows/uwp/design/app-settings/store-and-retrieve-app-data) and the [Windows 8 appdata roaming developer blog](https://blogs.windows.com/windowsdeveloper/2016/05/04/roaming-app-data-and-the-user-experience/).
-
-## What account is used for settings sync?
-
-In Windows 8.1, settings sync always used consumer Microsoft accounts. Enterprise users had the ability to connect a Microsoft account to their Active Directory domain account to gain access to settings sync. In Windows 10, this connected Microsoft account functionality is being replaced with a primary/secondary account framework.
-
-The primary account is defined as the account used to sign in to Windows. This can be a Microsoft account, an Azure Active Directory (Azure AD) account, an on-premises Active Directory account, or a local account. In addition to the primary account, Windows 10 users can add one or more secondary cloud accounts to their device. A secondary account is generally a Microsoft account, an Azure AD account, or some other account such as Gmail or Facebook. These secondary accounts provide access to additional services such as single sign-on and the Windows Store, but they are not capable of powering settings sync.
-
-In Windows 10, only the primary account for the device can be used for settings sync (see
-[How do I upgrade from Microsoft account settings sync in Windows 8 to Azure AD settings sync in Windows 10?](enterprise-state-roaming-faqs.md#how-do-i-upgrade-from-microsoft-account-settings-sync-in-windows-8-to-azure-ad-settings-sync-in-windows-10)).
-
-Data is never mixed between the different user accounts on the device. There are two rules for settings sync:
-
-* Windows settings will always roam with the primary account.
-* App data will be tagged with the account used to acquire the app. Only apps tagged with the primary account will sync. App ownership tagging is determined when an app is installed through the Windows Store or side-loaded through mobile device management (MDM).
-
-If an app's owner cannot be identified, it will roam with the primary account. If a device is upgraded from Windows 8 or Windows 8.1 to Windows 10, all the apps will be tagged as acquired by the Microsoft account. This is because most users acquire apps through the Windows Store, and there was no Windows Store support for Azure AD accounts prior to Windows 10. If an app is installed via an offline license, the app will be tagged using the primary account on the device.
-
-> [!NOTE]
-> Windows 10 devices that are enterprise-owned and are connected to Azure AD can no longer connect their Microsoft accounts to a domain account. The ability to connect a Microsoft account to a domain account and have all the user's data sync to the Microsoft account (that is, the Microsoft account roaming via the connected Microsoft account and Active Directory functionality) is removed from Windows 10 devices that are joined to a connected Active Directory or Azure AD environment.
-
-## How do I upgrade from Microsoft account settings sync in Windows 8 to Azure AD settings sync in Windows 10?
-
-If you are joined to the Active Directory domain running Windows 8.1 with a connected Microsoft account, you will sync settings through your Microsoft account. After upgrading to Windows 10, you will continue to sync user settings via Microsoft account as long as you are a domain-joined user and the Active Directory domain does not connect with Azure AD.
-
-If the on-premises Active Directory domain does connect with Azure AD, your device will attempt to sync settings using the connected Azure AD account. If the Azure AD administrator does not enable Enterprise State Roaming, your connected Azure AD account will stop syncing settings. If you are a Windows 10 user and you sign in with an Azure AD identity, you will start syncing Windows settings as soon as your administrator enables settings sync via Azure AD.
-
-If you stored any personal data on your corporate device, you should be aware that Windows OS and application data will begin syncing to Azure AD. This has the following implications:
-
-* Your personal Microsoft account settings will drift apart from the settings on your work or school Azure AD accounts. This is because the Microsoft account and Azure AD settings sync are now using separate accounts.
-* Personal data such as Wi-Fi passwords, web credentials, and Internet Explorer favorites that were previously synced via a connected Microsoft account will be synced via Azure AD.
-
-## How does interoperability between Microsoft accounts and Azure AD Enterprise State Roaming work?
-
-In the November 2015 or later releases of Windows 10, Enterprise State Roaming is only supported for a single account at a time. If you sign in to Windows by using a work or school Azure AD account, all data will sync via Azure AD. If you sign in to Windows by using a personal Microsoft account, all data will sync via the Microsoft account. Universal appdata will roam using only the primary sign-in account on the device, and it will roam only if the app's license is owned by the primary account. Universal appdata for the apps owned by any secondary accounts will not be synced.
-
-## Do settings sync for Azure AD accounts from multiple tenants?
-
-When multiple Azure AD accounts from different Azure AD tenants are on the same device, you must update the device's registry to communicate with the Azure Rights Management service for each Azure AD tenant.
-
-1. Find the GUID for each Azure AD tenant. Open the Azure portal and select an Azure AD tenant. The GUID for the tenant is on the Properties page for the selected tenant (https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Properties), labeled **Directory ID**.
-2. After you have the GUID, you will need to add the registry key
- **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\SettingSync\WinMSIPC\<tenant ID GUID>**.
- From the **tenant ID GUID** key, create a new Multi-String value (REG_MULTI_SZ) named **AllowedRMSServerUrls**. For its data, specify the licensing distribution point URLs of the other Azure tenants that the device accesses.
-3. You can find the licensing distribution point URLs by running the **Get-AadrmConfiguration** cmdlet from the AADRM module. If the values for the **LicensingIntranetDistributionPointUrl** and **LicensingExtranetDistributionPointUrl** are different, specify both values. If the values are the same, specify the value just once.
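The registry change in steps 2 and 3 can be sketched as a single `reg.exe` command (a config fragment, not a tested script); the tenant GUID and licensing distribution point URLs below are placeholders for the values returned by **Get-AadrmConfiguration**, and `reg add` separates multiple `REG_MULTI_SZ` values with `\0` by default:

```
reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\SettingSync\WinMSIPC\<tenant ID GUID>" ^
    /v AllowedRMSServerUrls /t REG_MULTI_SZ ^
    /d "https://<intranet-licensing-url>/_wmcs/licensing\0https://<extranet-licensing-url>/_wmcs/licensing"
```

If the intranet and extranet licensing distribution point URLs differ, list both; if they are the same, specify the value once.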
-
-## What are the roaming settings options for existing Windows desktop applications?
-
-Roaming only works for Universal Windows apps. There are two options available for enabling roaming on an existing Windows desktop application:
-
-* The [Desktop Bridge](/windows/msix/desktop/source-code-overview) helps you bring your existing Windows desktop apps to the Universal Windows Platform. From here, minimal code changes will be required to take advantage of Azure AD app data roaming. The Desktop Bridge provides your apps with an app identity, which is needed to enable app data roaming for existing desktop apps.
-* [User Experience Virtualization (UE-V)](/previous-versions//dn458947(v=vs.85)) helps you create a custom settings template for existing Windows desktop apps and enable roaming for Win32 apps. This option does not require the app developer to change code of the app. UE-V is limited to on-premises Active Directory roaming for customers who have purchased the Microsoft Desktop Optimization Pack.
-
-Administrators can configure UE-V to roam Windows desktop app data by changing roaming of Windows OS settings and Universal app data through [UE-V group policies](/microsoft-desktop-optimization-pack/uev-v2/configuring-ue-v-2x-with-group-policy-objects-both-uevv2), including:
-
-* Roam Windows settings group policy
-* Do not synchronize Windows Apps group policy
-* Internet Explorer roaming in the applications section
-
-In the future, Microsoft may investigate ways to make UE-V deeply integrated into Windows and extend UE-V to roam settings through the Azure AD cloud.
-
-## Can I store synced settings and data on-premises?
-
-Enterprise State Roaming stores all synced data in the Microsoft cloud. UE-V offers an on-premises roaming solution.
-
-## Who owns the data that's being roamed?
-
-Enterprises own the data roamed via Enterprise State Roaming. Data is stored in an Azure datacenter. All user data is encrypted both in transit and at rest in the cloud using the Azure Rights Management service from Azure Information Protection. This is an improvement compared to Microsoft account-based settings sync, which encrypts only certain sensitive data such as user credentials before it leaves the device.
-
-Microsoft is committed to safeguarding customer data. An enterprise user's settings data is automatically encrypted by the Azure Rights Management service before it leaves a Windows 10 device, so no other user can read this data. If your organization has a paid subscription for the Azure Rights Management service, you can use other protection features, such as track and revoke documents, automatically protect emails that contain sensitive information, and manage your own keys (the "bring your own key" solution, also known as BYOK). For more information about these features and how this protection service works, see [What is Azure Rights Management](/azure/information-protection/what-is-information-protection).
-
-## Can I manage sync for a specific app or setting?
-
-In Windows 10, there is no MDM or Group Policy setting to disable roaming for an individual application. Tenant administrators can disable appdata sync for all apps on a managed device, but there is no finer control at a per-app or within-app level.
-
-## How can I enable or disable roaming?
-
-In the **Settings** app, go to **Accounts** > **Sync your settings**. From this page, you can see which account is being used to roam settings, and you can enable or disable individual groups of settings to be roamed.
-
-## What is Microsoft's recommendation for enabling roaming in Windows 10?
-
-Microsoft has a few different settings roaming solutions available, including Roaming User Profiles, UE-V, and Enterprise State Roaming. Microsoft is committed to making an investment in Enterprise State Roaming in future versions of Windows. If your organization is not ready or comfortable with moving data to the cloud, then we recommend that you use UE-V as your primary roaming technology. If your organization requires roaming support for existing Windows desktop applications but is eager to move to the cloud, we recommend that you use both Enterprise State Roaming and UE-V. Although UE-V and Enterprise State Roaming are very similar technologies, they are not mutually exclusive. They complement each other to help ensure that your organization provides the roaming services that your users need.
-
-When using both Enterprise State Roaming and UE-V, the following rules apply:
-
-* Enterprise State Roaming is the primary roaming agent on the device. UE-V is being used to supplement the "Win32 gap."
-* UE-V roaming for Windows settings and modern UWP app data should be disabled when using the UE-V group policies. These are already covered by Enterprise State Roaming.
-
-## How does Enterprise State Roaming support virtual desktop infrastructure (VDI)?
-
-Enterprise State Roaming is supported on Windows 10 client SKUs, but not on server SKUs. If a client VM is hosted on a hypervisor machine and you remotely sign in to the virtual machine, your data will roam. If multiple users share the same OS and users remotely sign in to a server for a full desktop experience, roaming might not work. The latter session-based scenario is not officially supported.
-
-## What happens when my organization purchases a subscription that includes Azure Rights Management after using roaming?
-
-If your organization is already using roaming in Windows 10 with the Azure Rights Management limited-use free subscription, purchasing a [paid subscription](https://azure.microsoft.com/pricing/details/information-protection/) that includes the Azure Rights Management protection service will not have any impact on the functionality of the roaming feature, and no configuration changes will be required by your IT administrator.
-
-## Known issues
-
-See the documentation in the [troubleshooting](enterprise-state-roaming-troubleshooting.md) section for a list of known issues.
-
-## Next steps
-
-For an overview, see [Enterprise State Roaming overview](enterprise-state-roaming-overview.md).
active-directory Enterprise State Roaming Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/enterprise-state-roaming-overview.md
Enterprise State Roaming is available in multiple Azure regions. You can find th
| Article | Description |
| --- | --- |
| [Enable Enterprise State Roaming in Azure Active Directory](enterprise-state-roaming-enable.md) |Enterprise State Roaming is available to any organization with a Premium Azure Active Directory (Azure AD) subscription. For more information on how to get an Azure AD subscription, see the [Azure AD product](https://azure.microsoft.com/services/active-directory) page. |
-| [Settings and data roaming FAQ](enterprise-state-roaming-faqs.md) |This article answers some questions IT administrators might have about settings and app data sync. |
+| [Settings and data roaming FAQ](enterprise-state-roaming-faqs.yml) |This article answers some questions IT administrators might have about settings and app data sync. |
| [Group policy and MDM settings for settings sync](enterprise-state-roaming-group-policy-settings.md) |Windows 10 provides Group Policy and mobile device management (MDM) policy settings to limit settings sync. |
| [Windows 10 roaming settings reference](enterprise-state-roaming-windows-settings-reference.md) |A list of settings that will be roamed and/or backed-up in Windows 10. |
| [Troubleshooting](enterprise-state-roaming-troubleshooting.md) |This article goes through some basic steps for troubleshooting, and contains a list of known issues. |
active-directory Manage Stale Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/manage-stale-devices.md
Disable or delete Azure AD registered devices in the Azure AD.
## Clean up stale devices in the Azure portal
-While you can cleanup stale devices in the Azure portal, it is more efficient, to handle this process using a PowerShell script. Use the latest PowerShell V1 module to use the timestamp filter and to filter out system-managed devices such as Autopilot. At this point, using PowerShell V2 is not recommended.
+While you can clean up stale devices in the Azure portal, it is more efficient to handle this process using a PowerShell script. Use the latest PowerShell V2 module to use the timestamp filter and to filter out system-managed devices such as Autopilot.
A typical routine consists of the following steps:
active-directory Plan Device Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/plan-device-deployment.md
The key benefits of giving your devices an Azure AD identity:
Video: [Conditional access with device controls](https://youtu.be/NcONUf-jeS4)
-FAQs: [Azure AD device management FAQ](faq.yml) and [Settings and data roaming FAQ](enterprise-state-roaming-faqs.md)
+FAQs: [Azure AD device management FAQ](faq.yml) and [Settings and data roaming FAQ](enterprise-state-roaming-faqs.yml)
## Plan the deployment project
active-directory Troubleshoot Device Dsregcmd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-device-dsregcmd.md
This section performs the prerequisite checks for the provisioning of Windows He
## Next steps
-For questions, see the [device management FAQ](faq.yml)
+- [The Microsoft Error Lookup Tool](/windows/win32/debug/system-error-code-lookup-tool)
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
If the values are **NO**, it could be due:
## Next steps
-Continue [troubleshooting devices using the dsregcmd command](troubleshoot-device-dsregcmd.md)
+- Continue [troubleshooting devices using the dsregcmd command](troubleshoot-device-dsregcmd.md)
-For questions, see the [device management FAQ](faq.yml)
+- [The Microsoft Error Lookup Tool](/windows/win32/debug/system-error-code-lookup-tool)
active-directory Troubleshoot Hybrid Join Windows Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-legacy.md
You can also find the status information in the event log under: **Applications
## Next steps
-For questions, see the [device management FAQ](faq.yml)
+- [The Microsoft Error Lookup Tool](/windows/win32/debug/system-error-code-lookup-tool)
active-directory Users Revoke Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-revoke-access.md
Most browser-based applications use session tokens instead of access and refresh
## Revoke access for a user in the hybrid environment
-For a hybrid environment with on-premises Active Directory synchronized with Azure Active Directory, Microsoft recommends IT admins to take the following actions.
+For a hybrid environment with on-premises Active Directory synchronized with Azure Active Directory, Microsoft recommends that IT admins take the following actions. If you have an **Azure AD only environment**, you can skip the [On-premises Active Directory environment](https://docs.microsoft.com/azure/active-directory/enterprise-users/users-revoke-access#on-premises-active-directory-environment) section.
### On-premises Active Directory environment
Once admins have taken the above steps, the user can't gain new tokens for any a
## Next steps

- [Secure access practices for Azure AD administrators](../roles/security-planning.md)
-- [Add or update user profile information](../fundamentals/active-directory-users-profile-azure-portal.md)
+- [Add or update user profile information](../fundamentals/active-directory-users-profile-azure-portal.md)
+- [Remove or Delete a former employee](https://docs.microsoft.com/microsoft-365/admin/add-users/remove-former-employee?view=o365-worldwide)
active-directory Active Directory Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-faq.md
For more information, see [Getting started with password management](../authenti
**A:** Yes, if you have password write-back enabled, the password operations performed by an admin are written back to your on-premises environment.
-For more answers to password-related questions, see [Password management frequently asked questions](../authentication/active-directory-passwords-faq.md).
+For more answers to password-related questions, see [Password management frequently asked questions](../authentication/active-directory-passwords-faq.yml).
**Q: What can I do if I can't remember my existing Microsoft 365/Azure AD password while trying to change my password?**
active-directory How To Connect Sso Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
The use of third-party Active Directory Group Policy extensions to roll out the
#### Known browser limitations
-Seamless SSO doesn't work in private browsing mode on Firefox and Microsoft Edge (legacy) browsers. It also doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode. Seamless SSO supports the next version of Microsoft Edge based on Chromium and it works in InPrivate and Guest mode by design.
+Seamless SSO doesn't work in private browsing mode on Firefox. It also doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode. Seamless SSO supports the next version of Microsoft Edge based on Chromium and it works in InPrivate and Guest mode by design. Microsoft Edge (legacy) is no longer supported.
## Step 4: Test the feature
active-directory How To Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso.md
For more information on how SSO works with Windows 10 using PRT, see: [Primary R
- It is a free feature, and you don't need any paid editions of Azure AD to use it.
- It is supported on web browser-based clients and Office clients that support [modern authentication](/office365/enterprise/modern-auth-for-office-2013-and-2016) on platforms and browsers capable of Kerberos authentication:
-| OS\Browser |Internet Explorer|Microsoft Edge|Google Chrome|Mozilla Firefox|Safari|
+| OS\Browser |Internet Explorer|Microsoft Edge\*\*\*\*|Google Chrome|Mozilla Firefox|Safari|
| --- | --- | --- | --- | --- | --- |
|Windows 10|Yes\*|Yes|Yes|Yes\*\*\*|N/A|
|Windows 8.1|Yes\*|Yes\*\*\*\*|Yes|Yes\*\*\*|N/A|
For more information on how SSO works with Windows 10 using PRT, see: [Primary R
|Windows Server 2012 R2 or above|Yes\*\*|N/A|Yes|Yes\*\*\*|N/A|
|Mac OS X|N/A|N/A|Yes\*\*\*|Yes\*\*\*|Yes\*\*\*|
+ > [!NOTE]
+ > Microsoft Edge (legacy) is no longer supported.
-\*Requires Internet Explorer version 10 or later.
-\*\*Requires Internet Explorer version 10 or later. Disable Enhanced Protected Mode.
+\*Requires Internet Explorer version 11 or later.
+
+\*\*Requires Internet Explorer version 11 or later. Disable Enhanced Protected Mode.
\*\*\*Requires [additional configuration](how-to-connect-sso-quick-start.md#browser-considerations).
-\*\*\*\*Requires Microsoft Edge version 77 or later.
+\*\*\*\*Microsoft Edge based on Chromium
## Next steps
active-directory How To Connect Sync Feature Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-feature-scheduler.md
The scheduler is responsible for two tasks:
The scheduler itself is always running, but it can be configured to only run one or none of these tasks. For example, if you need to have your own synchronization cycle process, you can disable this task in the scheduler but still run the maintenance task.

>[!IMPORTANT]
->By default every 30 minutes a synchronization cycle is run. If you have modified the synchronization cycley you will need to make sure that a synchronization cycle is run at least once every 7 days.
+>By default, a synchronization cycle is run every 30 minutes. If you have modified the synchronization cycle, you will need to make sure that a synchronization cycle is run at least once every 7 days.
>
>* A delta sync needs to happen within 7 days from the last delta sync.
>* A delta sync (following a full sync) needs to happen within 7 days from the time the last full sync completed.
active-directory Tshoot Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/tshoot-connect-sso.md
This article helps you find troubleshooting information about common problems re
- Microsoft 365 Win32 clients (Outlook, Word, Excel, and others) with versions 16.0.8730.xxxx and above are supported using a non-interactive flow. Other versions are not supported; on those versions, users will enter their usernames, but not passwords, to sign in. For OneDrive, you will have to activate the [OneDrive silent config feature](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/Previews-for-Silent-Sync-Account-Configuration-and-Bandwidth/ba-p/120894) for a silent sign-on experience.
- Seamless SSO doesn't work in private browsing mode on Firefox.
- Seamless SSO doesn't work in Internet Explorer when Enhanced Protected mode is turned on.
-- Seamless SSO doesn't work in private browsing mode on Microsoft Edge (legacy).
+- Microsoft Edge (legacy) is no longer supported.
- Seamless SSO doesn't work on mobile browsers on iOS and Android.
- If a user is part of too many groups in Active Directory, the user's Kerberos ticket will likely be too large to process, and this will cause Seamless SSO to fail. Azure AD HTTPS requests can have headers with a maximum size of 50 KB; Kerberos tickets need to be smaller than that limit to accommodate other Azure AD artifacts (typically, 2 - 5 KB) such as cookies. Our recommendation is to reduce the user's group memberships and try again.
- If you're synchronizing 30 or more Active Directory forests, you can't enable Seamless SSO through Azure AD Connect. As a workaround, you can [manually enable](#manual-reset-of-the-feature) the feature on your tenant.
active-directory Troubleshooting Identity Protection Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/troubleshooting-identity-protection-faq.md
- Title: FAQs for Identity Protection in Azure Active Directory
-description: Frequently asked questions Azure AD Identity Protection
----- Previously updated : 01/07/2021--------
-# Frequently asked questions about Identity Protection in Azure Active Directory
-
-## Dismiss user risk known issues
-
-**Dismiss user risk** in classic Identity Protection sets the actor in the user's risk history in Identity Protection to **Azure AD**.
-
-**Dismiss user risk** in Identity Protection sets the actor in the user's risk history in Identity Protection to **\<Admin's name with a hyperlink pointing to user's blade\>**.
-
-There is a current known issue causing latency in the user risk dismissal flow. If you have a "User risk policy", this policy will stop applying to dismissed users within minutes of clicking on "Dismiss user risk". However, there are known delays with the UX refreshing the "Risk state" of dismissed users. As a workaround, refresh the page on the browser level to see the latest user "Risk state".
--
-## Frequently asked questions
-
-### Why is a user at risk?
-
-If you are an Azure AD Identity Protection customer, go to the [risky users](howto-identity-protection-investigate-risk.md#risky-users) view and click on an at-risk user. In the drawer at the bottom, the 'Risk history' tab will show all the events that led to a user risk change. To see all risky sign-ins for the user, click on 'User's risky sign-ins'. To see all risk detections for this user, click on 'User's risk detections'.
-
-### Why was my sign-in blocked but Identity Protection didn't generate a risk detection?
-Sign-ins can be blocked for several reasons. It is important to note that Identity Protection only generates risk detections when correct credentials are used in the authentication request. If a user uses incorrect credentials, it will not be flagged by Identity Protection since there is no risk of credential compromise unless a bad actor uses the correct credentials. Some reasons a user can be blocked from signing in that will not generate an Identity Protection detection include:
-* The **IP can be blocked** due to malicious activity from the IP address. The IP blocked message does not differentiate whether the credentials were correct or not. If the IP is blocked and correct credentials are not used, it will not generate an Identity Protection detection
-* **[Smart Lockout](../authentication/howto-password-smart-lockout.md)** can block the account from signing-in after multiple failed attempts
-* A **Conditional Access policy** can be enforced that uses conditions other than risk level to block an authentication request
-
-### How can I get a report of detections of a specific type?
-
-Go to the risk detections view and filter by 'Detection type'. You can then download this report in .CSV or .JSON format using the **Download** button at the top. For more information, see the article [How To: Investigate risk](howto-identity-protection-investigate-risk.md#risk-detections).
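A downloaded report can be filtered further offline. A minimal sketch in Python, assuming a CSV export with a detection-type column; the column names and detection values below are hypothetical sample data, so adjust them to match your actual export:

```python
import csv
import io

# Hypothetical excerpt of a downloaded risk-detections report.
report = io.StringIO(
    "detectionType,userPrincipalName,riskLevel\n"
    "anonymizedIPAddress,alice@contoso.com,medium\n"
    "unfamiliarFeatures,bob@contoso.com,low\n"
    "anonymizedIPAddress,carol@contoso.com,high\n"
)

# Keep only rows matching one detection type.
rows = [r for r in csv.DictReader(report) if r["detectionType"] == "anonymizedIPAddress"]
for r in rows:
    print(r["userPrincipalName"], r["riskLevel"])
```

With a real export, replace the `io.StringIO` sample with `open("RiskDetections.csv", newline="")`.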
-
-### Why can't I set my own risk levels for each risk detection?
-
-Risk levels in Identity Protection are based on the precision of the detection and powered by our supervised machine learning. To customize the experience users are presented with, administrators can include or exclude certain users or groups from the user risk and sign-in risk policies.
-
-### Why does the location of a sign-in not match where the user truly signed in from?
-
-IP geolocation mapping is an industry-wide challenge. If you feel that the location listed in the sign-ins report does not match the actual location, reach out to Microsoft support.
-
-### How can I close specific risk detections like I did in the old UI?
-
-You can give feedback on risk detections by confirming the linked sign-in as compromised or safe. The feedback given on the sign-in trickles down to all the detections made on that sign-in. If you want to close detections that are not linked to a sign-in, you can provide that feedback on the user level. For more information, see the article [How to: Give risk feedback in Azure AD Identity Protection](howto-identity-protection-risk-feedback.md).
-
-### How far can I go back in time to understand what's going on with my user?
-
-- The [risky users](howto-identity-protection-investigate-risk.md#risky-users) view shows a user's risk standing based on all past sign-ins.
-- The [risky sign-ins](howto-identity-protection-investigate-risk.md#risky-sign-ins) view shows at-risk sign-ins in the last 30 days.
-- The [risk detections](howto-identity-protection-investigate-risk.md#risk-detections) view shows risk detections made in the last 90 days.
-
-### How can I learn more about a specific detection?
-
-All risk detections are documented in the article [What is risk](concept-identity-protection-risks.md#risk-types-and-detection). You can hover over the (i) symbol next to the detection on the Azure portal to learn more about a detection.
-
-### How do the feedback mechanisms in Identity Protection work?
-
-**Confirm compromised** (on a sign-in) – Informs Azure AD Identity Protection that the sign-in was not performed by the identity owner and indicates a compromise.
-
-- Upon receiving this feedback, we move the sign-in and user risk state to **Confirmed compromised** and risk level to **High**.
-
-- In addition, we provide the information to our machine learning systems for future improvements in risk assessment.
-
- > [!NOTE]
- > If the user is already remediated, don't click **Confirm compromised** because it moves the sign-in and user risk state to **Confirmed compromised** and risk level to **High**.
-
-**Confirm safe** (on a sign-in) – Informs Azure AD Identity Protection that the sign-in was performed by the identity owner and does not indicate a compromise.
-
-- Upon receiving this feedback, we move the sign-in (not the user) risk state to **Confirmed safe** and the risk level to **-**.
-
-- In addition, we provide the information to our machine learning systems for future improvements in risk assessment.
-
- > [!NOTE]
- >Today, selecting confirm safe on a sign-in does not by itself prevent future sign-ins with the same properties from being flagged as risky. The best way to train the system to learn a user's properties is to use the risky sign-in policy with MFA. When a risky sign-in is prompted for MFA and the user successfully responds to the request, the sign-in can succeed and help to train the system on the legitimate user's behavior.
- >
 - > If you believe the user is not compromised, use **Dismiss user risk** at the user level instead of using **Confirm safe** at the sign-in level. Dismissing user risk at the user level closes the user risk and all past risky sign-ins and risk detections.
-
-### Why am I seeing a user with a low (or above) risk score, even if no risky sign-ins or risk detections are shown in Identity Protection?
-
-Because user risk is cumulative in nature and does not expire, a user may have a user risk of low or above even if there are no recent risky sign-ins or risk detections shown in Identity Protection. This situation could happen if the only malicious activity on a user took place beyond the timeframe for which we store the details of risky sign-ins and risk detections. We do not expire user risk because bad actors have been known to remain in customers' environments for over 140 days behind a compromised identity before ramping up their attack. Customers can review the user's risk timeline to understand why a user is at risk by going to: `Azure Portal > Azure Active Directory > Risky users' report > Click on an at-risk user > Details' drawer > Risk history tab`
-
-### Why does a sign-in have a "sign-in risk (aggregate)" score of High when the detections associated with it are of low or medium risk?
-
-The high aggregate risk score could be based on other features of the sign-in, or the fact that more than one detection fired for that sign-in. And conversely, a sign-in may have a sign-in risk (aggregate) of Medium even if the detections associated with the sign-in are of High risk.
-
-### What is the difference between the "Activity from anonymous IP address" and "Anonymous IP address" detections?
-
-The "Anonymous IP address" detection's source is Azure AD Identity Protection, while the "Activity from anonymous IP address" detection is integrated from MCAS (Microsoft Cloud App Security). While they have very similar names and you may see overlap in these signals, they have distinct back-end detections.
active-directory Application Management Certs Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-management-certs-faq.md
In Azure AD, you can set up certificate signing options and the certificate sign
## I need to replace the certificate for Azure AD Application Proxy applications and need more instructions.
-To replace certificates for Azure AD Application Proxy applications, see [PowerShell sample - Replace certificate in Application Proxy apps](scripts/powershell-get-custom-domain-replace-cert.md).
+To replace certificates for Azure AD Application Proxy applications, see [PowerShell sample - Replace certificate in Application Proxy apps](../app-proxy/scripts/powershell-get-custom-domain-replace-cert.md).
## How do I manage certificates for custom domains in Azure AD Application Proxy?
active-directory Application Sign In Other Problem Access Panel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-other-problem-access-panel.md
Title: Troubleshoot problems signing in to an application from Azure AD My Apps description: Troubleshoot problems signing in to an application from Azure AD My Apps --++ Last updated 07/11/2017-+
active-directory One Click Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/one-click-sso-tutorial.md
description: Steps for one-click configuration of SSO for your application from
- ms.assetid: e0416991-4b5d-4b18-89bb-91b6070ed3ba
active-directory Auditboard Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/auditboard-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure AuditBoard for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to AuditBoard.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: e6ab736b-2bb7-4a5a-9f01-67c33f0ff97d
+++
+ na
+ms.devlang: na
+ Last updated : 04/21/2021+++
+# Tutorial: Configure AuditBoard for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both AuditBoard and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [AuditBoard](https://www.auditboard.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in AuditBoard
+> * Remove users in AuditBoard when they no longer require access
+> * Keep user attributes synchronized between Azure AD and AuditBoard
+> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/auditboard-tutorial) to AuditBoard (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* An AuditBoard Site (Live).
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and AuditBoard](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure AuditBoard to support provisioning with Azure AD
+
+1. Log in to AuditBoard. Navigate to **Settings** > **Users & Roles** > **Security** > **SCIM**.
+
+2. Click **Generate Token**.
+
+3. Save the **Token** and the **SCIM base URL**. These values will be entered in the Tenant URL and Secret Token field in the Provisioning tab of your AuditBoard application in the Azure portal.
+
+ > [!NOTE]
+ > Generating a new token will invalidate the previous token.
+
+4. The AuditBoard instance requires the following user permissions on the SCIM user role (System Admin by default): `user:action.administer` and `user:action.edit` must both be set to allow. Connect with AuditBoard support to confirm these are set correctly.
++
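The secret token and SCIM base URL from step 3 are what Azure AD uses to authenticate its SCIM calls to AuditBoard. As a rough, hypothetical illustration only (the base URL and token below are placeholders, not real AuditBoard values), a standard SCIM 2.0 `GET /Users` request carries the token as a bearer header:

```python
from urllib.parse import quote
from urllib.request import Request

def build_scim_users_request(base_url, token, user_filter=None):
    """Build (but do not send) a SCIM 2.0 GET /Users request.

    base_url and token stand in for the SCIM base URL and secret
    token that AuditBoard displays after you click Generate Token.
    """
    url = base_url.rstrip("/") + "/Users"
    if user_filter:
        # SCIM filters travel in the query string, percent-encoded.
        url += "?filter=" + quote(user_filter, safe="")
    return Request(url, headers={
        "Authorization": "Bearer " + token,   # the secret token
        "Accept": "application/scim+json",
    })

# Placeholder values only -- substitute the real SCIM base URL and token.
req = build_scim_users_request(
    "https://example.auditboard.example/scim/v2",
    "SECRET_TOKEN",
    'emails[type eq "work"].value eq "alice@contoso.com"',
)
print(req.full_url)
print(req.get_header("Authorization"))
```

Azure AD's provisioning service issues equivalent requests on your behalf once the tenant URL and token are saved in the portal; this sketch is only to show what those credentials are used for.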
+## Step 3. Add AuditBoard from the Azure AD application gallery
+
+Add AuditBoard from the Azure AD application gallery to start managing provisioning to AuditBoard. If you have previously set up AuditBoard for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users to AuditBoard, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
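If the only role on the application is Default Access, the first bullet above suggests updating the application manifest to add more roles. As a hedged sketch (the display name, description, value, and GUID below are invented for illustration), an entry in the manifest's `appRoles` collection looks roughly like this:

```json
{
  "allowedMemberTypes": [ "User" ],
  "description": "Users provisioned to AuditBoard",
  "displayName": "Provisioned User",
  "id": "00000000-0000-0000-0000-000000000001",
  "isEnabled": true,
  "value": "ProvisionedUser"
}
```

Each `id` must be a GUID that is unique within the manifest; users assigned this role would then fall inside the provisioning scope.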
+## Step 5. Configure automatic user provisioning to AuditBoard
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in AuditBoard based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for AuditBoard in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **AuditBoard**.
+
+ ![The AuditBoard link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab automatic](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your AuditBoard Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to AuditBoard. If the connection fails, ensure your AuditBoard account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to AuditBoard**.
+
+9. Review the user attributes that are synchronized from Azure AD to AuditBoard in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in AuditBoard for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the AuditBoard API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for Filtering|
+ ||||
+ |emails[type eq "work"].value|String|&check;|
+ |active|Boolean|
+ |userName|String|
+ |name.givenName|String|
+ |name.familyName|String|
+
+
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+11. To enable the Azure AD provisioning service for AuditBoard, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+12. Define the users that you would like to provision to AuditBoard by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+13. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
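The **Matching** attribute selected in step 9, `emails[type eq "work"].value`, is a SCIM filter-style path. As an informal Python sketch (the user resource below is hypothetical, not AuditBoard's actual payload), this is how that path selects the value Azure AD uses to match accounts for update operations:

```python
def work_email(scim_user):
    """Resolve the SCIM path emails[type eq "work"].value against a
    SCIM user resource: pick the value of the email whose type is work."""
    for email in scim_user.get("emails", []):
        if email.get("type") == "work":
            return email.get("value")
    return None

# Hypothetical SCIM user resource mirroring the mapped attributes above.
user = {
    "userName": "alice@contoso.com",
    "active": True,
    "name": {"givenName": "Alice", "familyName": "Smith"},
    "emails": [
        {"type": "home", "value": "alice@example.com"},
        {"type": "work", "value": "alice@contoso.com"},
    ],
}
print(work_email(user))  # the matching value used for update operations
```

Because this attribute is the matching target, the AuditBoard SCIM API must support filtering users on it, which is why it is the only attribute marked as supported for filtering in the table.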
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Configure Azure Active Directory For Fedramp High Impact https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/configure-azure-active-directory-for-fedramp-high-impact.md
+
+ Title: Configure Azure Active Directory to meet FedRAMP High impact level
+description: Overview of how to meet the FedRAMP High Impact level for your organization by using Azure Active Directory.
+++++++++ Last updated : 4/26/2021+++++
+# Configure Azure Active Directory to meet FedRAMP High Impact level
+
+The [Federal Risk and Authorization Management Program](https://www.fedramp.gov/) (FedRAMP) is an assessment and authorization process for cloud service providers (CSPs) creating cloud solution offerings (CSOs) for use with federal agencies. Azure and Azure Government have earned a [Provisional Authority to Operate (P-ATO) at the High Impact Level](https://docs.microsoft.com/compliance/regulatory/offering-fedramp) from the Joint Authorization Board, the highest bar for FedRAMP accreditation.
+
+Azure provides the capability to fulfill all control requirements to achieve a FedRAMP High rating for your CSO, or as a federal agency. It is your organization's responsibility to complete additional configurations or processes to be compliant. This responsibility applies both to CSPs seeking a FedRAMP High authorization for their CSO and to federal agencies seeking an Authority to Operate (ATO).
+
+## Microsoft and FedRAMP
+
+Microsoft Azure supports more services at [FedRAMP High Impact](https://docs.microsoft.com/azure/azure-government/compliance/azure-services-in-fedramp-auditscope) levels than any other CSP. And while FedRAMP High in the Azure public cloud will meet the needs of many US government customers, agencies with more stringent requirements may rely on the Azure Government cloud. Azure Government cloud provides additional safeguards such as the heightened screening of personnel.
+
+Microsoft is required to recertify its cloud services each year to maintain its authorizations. To do so, Microsoft continuously monitors and assesses its security controls and demonstrates that the security of its services remains in compliance.
+
+* [Microsoft cloud services FedRAMP authorizations](https://marketplace.fedramp.gov/)
+
+* [Microsoft FedRAMP Audit Reports](https://aka.ms/MicrosoftFedRAMPAuditDocuments)
+
+To receive other FedRAMP reports, send email to [Azure Federal Documentation](mailto:AzFedDoc@microsoft.com).
+
+There are multiple paths towards FedRAMP authorization. You can reuse Microsoft Azure's existing authorization package and the guidance here to significantly reduce the time and effort required to obtain an ATO or P-ATO. More information on FedRAMP can be found on the [FedRAMP website](https://www.fedramp.gov/).
+
+## Scope of guidance
+
+The FedRAMP High Baseline is made up of 421 controls and control enhancements from [NIST 800-53 Security Controls Catalog Revision 4](https://csrc.nist.gov/publications/detail/sp/800-53/rev-4/final). Where applicable, we included clarifying information from the [800-53 Revision 5](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final). This article set covers a subset of these controls that are related to identity and which you must configure. We provide prescriptive guidance to assist you in achieving compliance with controls you are responsible for configuring in Azure Active Directory (Azure AD). To fully address some identity control requirements, you may need to use other systems. Other systems might include a security information and event management (SIEM) tool, such as Azure Sentinel. If you are using Azure services outside of Azure Active Directory, there will be other controls you need to consider, and you can use the capabilities Azure already has in place to meet the controls.
+
+FedRAMP Resources
+
+* [Federal Risk and Authorization Management Program](https://www.fedramp.gov/)
+
+* [FedRAMP Security Assessment Framework](https://www.fedramp.gov/assets/resources/documents/FedRAMP_Security_Assessment_Framework.pdf)
+
+* [Agency Guide for FedRAMP Authorizations](https://www.fedramp.gov/assets/resources/documents/Agency_Guide_for_Reuse_of_FedRAMP_Authorizations.pdf)
+
+* [Managing compliance in the cloud at Microsoft](https://www.microsoft.com/trustcenter/common-controls-hub)
+
+* [Microsoft Government Cloud](https://go.microsoft.com/fwlink/p/?linkid=2087246)
+
+* [Azure Compliance Offerings](https://aka.ms/azurecompliance)
+
+* [FedRAMP High blueprint sample overview](https://docs.microsoft.com/azure/governance/blueprints/samples/fedramp-h/)
+
+* [Microsoft 365 compliance center](https://docs.microsoft.com/microsoft-365/compliance/microsoft-365-compliance-center)
+
+* [Microsoft Compliance Manager](https://docs.microsoft.com/microsoft-365/compliance/compliance-manager)
+
+
+
+## Next Steps
+
+[Configure access controls](fedramp-access-controls.md)
+
+[Configure identification & authentication controls](fedramp-identification-and-authentication-controls.md)
+
+[Configure other controls](fedramp-other-controls.md)
+
+
active-directory Fedramp Access Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/fedramp-access-controls.md
+
+ Title: Configure identity access controls to meet FedRAMP High Impact level with Azure Active Directory
+description: Detailed guidance on how to configure Azure Active Directory access controls to meet FedRAMP High Impact levels.
+++++++++ Last updated : 4/26/2021++++
+# Configure identity access controls to meet FedRAMP High Impact level
+
+Access control is a major part of achieving a [Federal Risk and Authorization Management Program](https://www.fedramp.gov/) (FedRAMP) High Authority to Operate.
+
+The following list of controls and control enhancements in the Access Control family may require configuration in your Azure AD tenant.
++
+|Control family|Description|
+| - | - |
+| AC-02| Account management |
+| AC-06| Least privilege |
+| AC-07| Unsuccessful logon attempts |
+| AC-08| System use notification |
+| AC-10| Concurrent session control |
+| AC-11| Session lock and pattern-hiding displays |
+| AC-12| Session termination |
+| AC-20| Use of external information systems |
++
+Each row in the table below provides prescriptive guidance to aid you in developing your organization's response to any shared responsibilities for the control or control enhancement.
+
+## Configurations
++
+| Control ID | Customer responsibilities and guidance |
+| - | - |
+| AC-02 | **Implement account lifecycle management for customer-controlled accounts. Monitor the use of accounts and notify account managers of account lifecycle events. Review accounts for compliance with account management requirements every month for privileged access and every six (6) months for non-privileged access**.<p>Use Azure AD to provision accounts from external HR systems, on-premises Active Directory, or directly in the cloud. All account lifecycle operations are audited within the Azure AD audit logs. Logs can be collected and analyzed by a Security Information and Event Management (SIEM) solution such as Azure Sentinel. Alternatively, you can use Azure Event Hub to integrate logs with third-party SIEM solutions to enable monitoring and notification. Use Azure AD entitlement management with access reviews to ensure compliance status of accounts.<p>Provision accounts<br>[Plan cloud HR application to Azure Active Directory user provisioning](https://docs.microsoft.com/azure/active-directory/app-provisioning/plan-cloud-hr-provision)<br>[Azure AD Connect sync: Understand and customize synchronization](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-whatis)<br>[Add or delete users - Azure Active Directory](https://docs.microsoft.com/azure/active-directory/fundamentals/add-users-azure-active-directory)<p>Monitor accounts<br>[Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Azure Sentinel](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory)<br>[Tutorial - Stream logs to an Azure event hub](https://docs.microsoft.com/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub)<p>Review accounts<br>[What is entitlement management? - Azure AD](https://docs.microsoft.com/azure/active-directory/governance/entitlement-management-overview)<br>[Create an access review of an access package in Azure AD entitlement management](https://docs.microsoft.com/azure/active-directory/governance/entitlement-management-access-reviews-create)<br>[Review access of an access package in Azure AD entitlement management](https://docs.microsoft.com/azure/active-directory/governance/entitlement-management-access-reviews-review-access)<p>Resources:<br>[Administrator role permissions in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference)<br>[Dynamic Groups in Azure AD](https://docs.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) |
+| AC-02(1)| **Employ automated mechanisms to support management of customer-controlled accounts.**<p>Configure automated provisioning of customer-controlled accounts from external HR systems or on-premises Active Directory. For applications that support application provisioning, configure Azure AD to automatically create user identities and roles in cloud (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Ease monitoring of account usage by streaming Identity Protection logs (risky users, risky sign-ins, and risk detections) and audit logs directly into Azure Sentinel or Azure Event Hub.<p>Provision<br>[Plan cloud HR application to Azure Active Directory user provisioning](https://docs.microsoft.com/azure/active-directory/app-provisioning/plan-cloud-hr-provision)<br>[Azure AD Connect sync: Understand and customize synchronization](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-whatis)<br>[What is automated SaaS app user provisioning in Azure AD?](https://docs.microsoft.com/azure/active-directory/app-provisioning/user-provisioning)<br>[SaaS App Integration Tutorials for use with Azure AD](https://docs.microsoft.com/azure/active-directory/saas-apps/tutorial-list)<p>Monitor & Audit<br>[How To: Investigate risk](https://docs.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk)<br>[Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[What is Azure Sentinel?](https://docs.microsoft.com/azure/sentinel/overview)<br>[Azure Sentinel: Connect data from Azure Active Directory (Azure AD)](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory)<br>[Stream Azure Active Directory logs to an Azure event hub](https://docs.microsoft.com/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| AC-02(2)<br>AC-02(3)| **Employ automated mechanisms to support automatically removing or disabling temporary and emergency accounts after 24 hours from last use and all customer-controlled accounts after 35 days of inactivity**.<p>Implement account management automation with Microsoft Graph and Microsoft Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required timeframe.<p>Determine Inactivity<br>[How to manage inactive user accounts in Azure AD](https://docs.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts)<br>[How to manage stale devices in Azure AD](https://docs.microsoft.com/azure/active-directory/devices/manage-stale-devices)<p>Remove or Disable Accounts<br>[Working with users in Microsoft Graph](https://docs.microsoft.com/graph/api/resources/users?view=graph-rest-1.0)<br>[Get a user](https://docs.microsoft.com/graph/api/user-get?view=graph-rest-1.0&tabs=http)<br>[Update user](https://docs.microsoft.com/graph/api/user-update?view=graph-rest-1.0&tabs=http)<br>[Delete a user](https://docs.microsoft.com/graph/api/user-delete?view=graph-rest-1.0&tabs=http)<p>Working with devices in Microsoft Graph<br>[Get device](https://docs.microsoft.com/graph/api/device-get?view=graph-rest-1.0&tabs=http)<br>[Update device](https://docs.microsoft.com/graph/api/device-update?view=graph-rest-1.0&tabs=http)<br>[Delete device](https://docs.microsoft.com/graph/api/device-delete?view=graph-rest-1.0&tabs=http)<p>Using [Azure AD PowerShell](https://docs.microsoft.com/powershell/module/azuread/?view=azureadps-2.0)<br>[Get-AzureADUser](https://docs.microsoft.com/powershell/module/azuread/get-azureaduser?view=azureadps-2.0)<br>[Set-AzureADUser](https://docs.microsoft.com/powershell/module/azuread/set-azureaduser?view=azureadps-2.0)<br>[Get-AzureADDevice](https://docs.microsoft.com/powershell/module/azuread/get-azureaddevice?view=azureadps-2.0)<br>[Set-AzureADDevice](https://docs.microsoft.com/powershell/module/azuread/set-azureaddevice?view=azureadps-2.0) |
+| AC-02(4)| **Implement an automated audit and notification system for the lifecycle of managing customer-controlled accounts**.<p>All account lifecycle operations (account creation, modification, enabling, disabling, and removal actions) are audited within the Azure audit logs and can be streamed directly into Azure Sentinel or Azure Event Hub to facilitate notification.<p>Audit<br>[Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Azure Sentinel: Connect data from Azure Active Directory (Azure AD)](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory)<P>Notification<br>[What is Azure Sentinel?](https://docs.microsoft.com/azure/sentinel/overview)<br>[Stream Azure Active Directory logs to an Azure event hub](https://docs.microsoft.com/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| AC-02(5)| **Implement device log out after a 15-minute period of inactivity**.<p>Implement device lock using a Conditional Access policy that restricts access to compliant devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Microsoft Intune. Microsoft Endpoint Manager (MEM) or group policy objects (GPO) can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<P>Conditional Access<br>[Require device to be marked as compliant](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices)<br>[User sign-in frequency](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime)<p>MDM Policy<br>Configure devices for maximum minutes of inactivity until screen locks and requires password to unlock ([Android](https://docs.microsoft.com/mem/intune/configuration/device-restrictions-android), [iOS](https://docs.microsoft.com/mem/intune/configuration/device-restrictions-ios), [Windows 10](https://docs.microsoft.com/mem/intune/configuration/device-restrictions-windows-10)) |
+| AC-02(7)| **Administer and monitor privileged role assignments in accordance with a role-based access (RBAC) scheme for customer-controlled accounts including disabling or revoking privilege access for accounts when no longer appropriate**.<p>Implement Privileged Identity Management (PIM) with access reviews for privileged roles in Azure AD to monitor role assignments and remove role assignments when no longer appropriate. Audit logs can be streamed directly into Azure Sentinel or Azure Event Hub to facilitate monitoring.<p>Administer<br>[What is Azure AD Privileged Identity Management?](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-configure)<br>[Activation maximum duration](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-change-default-settings?tabs=new)<p>Monitor<br>[Create an access review of Azure AD roles in Privileged Identity Management](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-start-security-review)<br>[View audit history for Azure AD roles in Privileged Identity Management](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-use-audit-log?tabs=new)<br>[Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[What is Azure Sentinel?](https://docs.microsoft.com/azure/sentinel/overview)<br>[Connect data from Azure Active Directory (Azure AD)](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream Azure Active Directory logs to an Azure event hub](https://docs.microsoft.com/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| AC-02(11)| **Enforce usage of customer-controlled accounts to meet customer defined conditions or circumstances**.<p>Create Conditional Access policies to enforce access control decisions across users and devices.<p>Conditional Access<br>[Create a Conditional Access policy](https://docs.microsoft.com/azure/active-directory/authentication/tutorial-enable-azure-mfa?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json)<br>[What is Conditional Access?](https://docs.microsoft.com/azure/active-directory/conditional-access/overview) |
+| AC-02(12)| **Monitor and report customer-controlled accounts with privileged access for atypical usage**.<p>Facilitate monitoring of atypical usage by streaming Identity Protection logs (for example, risky users, risky sign-ins, and risk detections) and audit logs (to facilitate correlation with privilege assignment) directly into a SIEM solution such as Azure Sentinel. You can also use Azure Event Hub to integrate logs with third-party SIEM solutions.<p>Identity Protection<br>[What is Identity Protection?](https://docs.microsoft.com/azure/active-directory/identity-protection/overview-identity-protection)<br>[How To: Investigate risk](https://docs.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk)<br>[Azure Active Directory Identity Protection notifications](https://docs.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-configure-notifications)<p>Monitor accounts<br>[What is Azure Sentinel?](https://docs.microsoft.com/azure/sentinel/overview)<br>[Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs)<br>[Connect Azure Active Directory data to Azure Sentinel](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory) <br>[Tutorial - Stream logs to an Azure event hub](https://docs.microsoft.com/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| AC-02(13)|**Disable customer-controlled accounts of users posing a significant risk within 1 hour**.<p>In Azure AD Identity Protection, configure and enable a user risk policy with the threshold set to High. Create Conditional Access policies to block access for risky users and risky sign-ins. Configure risk policies to allow users to self-remediate and unblock subsequent sign-in attempts.<p>Identity Protection<br>[What is Identity Protection?](https://docs.microsoft.com/azure/active-directory/identity-protection/overview-identity-protection)<p>Conditional Access<br>[What is Conditional Access?](https://docs.microsoft.com/azure/active-directory/conditional-access/overview)<br>[Create a Conditional Access policy](https://docs.microsoft.com/azure/active-directory/authentication/tutorial-enable-azure-mfa?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json)<br>[Conditional Access: User risk-based Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-risk-user)<br>[Conditional Access: Sign-in risk-based Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-risk)<br>[Self-remediation with risk policy](https://docs.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-remediate-unblock) |
+| AC-06(7)| **Review and validate all users with privileged access every year and ensure privileges are reassigned (or removed if necessary) to align with organizational mission and business requirements**.<p>Use Azure AD entitlement management with access reviews for privileged users to verify if privileged access is required. <p>Access Reviews<br>[What is Azure AD entitlement management?](https://docs.microsoft.com/azure/active-directory/governance/entitlement-management-overview)<br>[Create an access review of Azure AD roles in Privileged Identity Management](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-start-security-review)<br>[Review access of an access package in Azure AD entitlement management](https://docs.microsoft.com/azure/active-directory/governance/entitlement-management-access-reviews-review-access) |
+| AC-07| **Enforce a limit of no more than three consecutive failed login attempts on customer-deployed resources within a 15-minute period and lock the account for a minimum of three (3) hours or until unlocked by an administrator**.<p>Enable custom Smart Lockout settings. Configure lockout threshold and lockout duration in seconds to implement these requirements. <p>Smart Lockout<br>[Protect user accounts from attacks with Azure Active Directory smart lockout](https://docs.microsoft.com/azure/active-directory/authentication/howto-password-smart-lockout)<br>[Manage Azure AD smart lockout values](https://docs.microsoft.com/azure/active-directory/authentication/howto-password-smart-lockout) |
+| AC-08| **Display and require user acknowledgment of privacy and security notices before granting access to information systems**.<p>Azure AD provides administrators with the ability to deliver notification or banner messages for all apps that require and record acknowledgment before granting access. These terms of use policies can be granularly targeted to specific users (Member or Guest) and customized per application via Conditional Access policies.<p>Terms of Use<br>[Azure Active Directory terms of use](https://docs.microsoft.com/azure/active-directory/conditional-access/terms-of-use)<br>[View report of who has accepted and declined](https://docs.microsoft.com/azure/active-directory/conditional-access/terms-of-use) |
+| AC-10|**Limit concurrent sessions to three sessions for privileged access and two for non-privileged access**. <p>In today's world where users connect from multiple devices (sometimes simultaneously), limiting concurrent sessions leads to a degraded user experience while providing limited security value. A better approach to address the intent behind this control is to adopt a zero trust security posture where the conditions are explicitly validated before a session is created, and continually throughout the life of a session. <p>Additionally, use the following compensating controls. <p>Use Conditional Access policies to restrict access to compliant devices. Configure policy settings on the device to enforce user sign-on restrictions at the OS level with MDM solutions such as Microsoft Intune. Microsoft Endpoint Manager (MEM) or group policy objects (GPO) can also be considered in hybrid deployments.<p> Use Privileged Identity Management (PIM) to further restrict and control privileged accounts.
<p> Configure Smart Account lockout for invalid sign in attempts.<p>**Implementation guidance** <p>Zero Trust<br> [Securing identity with Zero Trust](https://docs.microsoft.com/security/zero-trust/identity)<br>[Continuous access evaluation in Azure AD](https://docs.microsoft.com/azure/active-directory/conditional-access/concept-continuous-access-evaluation)<p>Conditional Access<br>[What is Conditional Access in Azure AD?](https://docs.microsoft.com/azure/active-directory/conditional-access/overview)<br>[Require device to be marked as compliant](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices)<br>[User sign-in frequency](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime)<p>Device Policies<br>[Use PowerShell scripts on Windows 10 devices in Intune](https://docs.microsoft.com/mem/intune/apps/intune-management-extension)<br>[Additional smart card Group Policy settings and registry keys](https://docs.microsoft.com/windows/security/identity-protection/smart-cards/smart-card-group-policy-and-registry-settings)<br>[Microsoft Endpoint Manager overview](https://docs.microsoft.com/mem/endpoint-manager-overview)<p>Resources<br>[What is Azure AD Privileged Identity Management?](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-configure)<br>[Protect user accounts from attacks with Azure Active Directory smart lockout](https://docs.microsoft.com/azure/active-directory/authentication/howto-password-smart-lockout)<p>See AC-12 for additional session re-evaluation & risk mitigation guidance. |
+| AC-11<br>AC-11(1)| **Implement a session lock after a 15-minute period of inactivity or upon receiving a request from a user and retain the session lock until the user reauthenticates. Conceal previously visible information when a session lock is initiated.**<p> Implement device lock using a Conditional Access policy to restrict access to compliant devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Microsoft Intune. Microsoft Endpoint Manager (MEM) or group policy objects (GPO) can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<p>Conditional Access<br>[Require device to be marked as compliant](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices)<br>[User sign-in frequency](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime)<p>MDM Policy<br>Configure devices for maximum minutes of inactivity until screen locks ([Android](https://docs.microsoft.com/mem/intune/configuration/device-restrictions-android), [iOS](https://docs.microsoft.com/mem/intune/configuration/device-restrictions-ios), [Windows 10](https://docs.microsoft.com/mem/intune/configuration/device-restrictions-windows-10)) |
+| AC-12| **Automatically terminate user sessions when organization-defined conditions or trigger events occur**.<p>Implement automatic user session re-evaluation with Azure AD features such as Risk-Based Conditional Access and Continuous Access Evaluation. Inactivity conditions can be implemented at a device level as described in AC-11.<br>[Sign-in risk-based Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-risk)<br>[User risk-based Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-risk-user)<br>[Continuous Access Evaluation](https://docs.microsoft.com/azure/active-directory/conditional-access/concept-continuous-access-evaluation) |
+| AC-12(1)| **Provide a logout capability for all sessions and display an explicit logout message**. <p>All Azure AD surfaced web interfaces provide a logout capability for user-initiated communications sessions. When SAML applications are integrated with Azure AD, implement single sign-out. <p>Logout capability<br>When a user selects "[Sign-out everywhere](https://aka.ms/mysignins)", all currently issued tokens are revoked. <p>Display Message<br>Azure AD automatically displays a message after user-initiated logout.<br>![Image of access control message.](media/fedramp/fedramp-access-controls-image-1.png)<p>Additional Resources<br>[View and search your recent sign-in activity from the My Sign-ins page](https://docs.microsoft.com/azure/active-directory/user-help/my-account-portal-sign-ins-page)<br>[Single Sign-Out SAML Protocol](https://docs.microsoft.com/azure/active-directory/develop/single-sign-out-saml-protocol) |
+| AC-20<br>AC-20(1)| **Establish terms and conditions allowing authorized individuals to access the customer-deployed resources from external information systems such as unmanaged devices and external networks**.<p>Require terms of use acceptance for authorized users accessing resources from external systems. Implement Conditional Access policies to restrict access from external systems. Conditional Access policies may also be integrated with Microsoft Cloud App Security (MCAS) to provide additional controls for both cloud and on-premises applications from external systems. Mobile application management (MAM) in Intune can protect organization data at the application level, including custom apps and store apps, from managed devices interacting with external systems (for example, accessing cloud services). App management can be used on organization-owned devices, and personal devices.<P>Terms and Conditions<br>[Terms of use - Azure Active Directory](https://docs.microsoft.com/azure/active-directory/conditional-access/terms-of-use)<p>Conditional Access<br>[Require device to be marked as compliant](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices)<br>[Conditions in Conditional Access policy - Device State (Preview)](https://docs.microsoft.com/azure/active-directory/conditional-access/concept-conditional-access-conditions)<br>[Protect with Microsoft Cloud App Security Conditional Access App Control](https://docs.microsoft.com/cloud-app-security/proxy-intro-aad)<br>[Location condition in Azure Active Directory Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/location-condition)<p>Mobile Device management<br>[What is Microsoft Intune?](https://docs.microsoft.com/mem/intune/fundamentals/what-is-intune)<br>[What is Cloud App Security?](https://docs.microsoft.com/cloud-app-security/what-is-cloud-app-security)<br>[What is app management in Microsoft 
Intune?](https://docs.microsoft.com/mem/intune/apps/app-management)<p>Resources<br>[Integrate on-premises apps with Cloud App Security](https://docs.microsoft.com/azure/active-directory/manage-apps/application-proxy-integrate-with-microsoft-cloud-application-security) |
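The AC-07 row above pins down concrete lockout arithmetic: no more than three consecutive failed attempts inside a 15-minute window, then a lock of at least three hours or until an administrator unlocks the account. As a minimal, illustrative sketch of that arithmetic only (Azure AD smart lockout enforces this service-side; the constants and helper names here are hypothetical, not a Microsoft API):

```python
from datetime import datetime, timedelta
from typing import List

# Thresholds mirroring the AC-07 requirement as configured in smart lockout:
# lock after 3 consecutive failures within 15 minutes, hold for 3 hours.
LOCKOUT_THRESHOLD = 3
OBSERVATION_WINDOW = timedelta(minutes=15)
LOCKOUT_DURATION = timedelta(hours=3)

def should_lock(failed_attempts: List[datetime]) -> bool:
    """Return True when the most recent LOCKOUT_THRESHOLD failures
    all fall inside the observation window."""
    if len(failed_attempts) < LOCKOUT_THRESHOLD:
        return False
    recent = sorted(failed_attempts)[-LOCKOUT_THRESHOLD:]
    return recent[-1] - recent[0] <= OBSERVATION_WINDOW

def unlock_time(locked_at: datetime) -> datetime:
    """Earliest automatic unlock; an administrator may unlock sooner."""
    return locked_at + LOCKOUT_DURATION
```

The same numbers map directly onto the lockout threshold and lockout duration (in seconds) fields in the smart lockout settings referenced in the AC-07 row.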
+
+## Next steps
+[FedRAMP compliance overview](configure-azure-active-directory-for-fedramp-high-impact.md)
+
+[Configure Identification & Authentication controls to meet FedRAMP High Impact level](fedramp-identification-and-authentication-controls.md)
+
+[Configure additional controls to meet FedRAMP High Impact level](fedramp-other-controls.md)
+
active-directory Fedramp Identification And Authentication Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/fedramp-identification-and-authentication-controls.md
+
+ Title: Configure identification and authentication controls to meet FedRAMP High Impact levels with Azure Active Directory
+description: Detailed guidance on how to configure identification and authentication controls to meet FedRAMP High Impact levels.
+++++++++ Last updated : 4/26/2021++++
+# Configure identification and authentication controls to meet FedRAMP High Impact level
+
+The following list of controls (and control enhancements) in the Identification and authentication family may require configuration in your Azure AD tenant.
+
+Each row in the table below provides prescriptive guidance to aid you in developing your organization's response to any shared responsibilities regarding the control and/or control enhancement.
+
+IA-02 Identification and Authentication (Organizational Users)
+
+IA-03 Device Identification and Authentication
+
+IA-04 Identifier Management
+
+IA-05 Authenticator Management
+
+IA-06 Authenticator Feedback
+
+IA-07 Cryptographic Module Authentication
+
+IA-08 Identification and Authentication (Non-Organizational Users)
+
+## Configurations
+
+| Control ID and subpart| Customer responsibilities and guidance |
+| - | - |
+| IA-02| **Uniquely identify and authenticate users or processes acting on behalf of users.**<p>Azure AD uniquely identifies user and service principal objects directly and provides multiple authentication methods including methods adhering to NIST Authentication Assurance Level (AAL) 3 that can be configured.<p>Identifiers <br> Users - [Working with users in Microsoft Graph : ID Property](https://docs.microsoft.com/graph/api/resources/users?view=graph-rest-1.0)<br>Service Principals - [ServicePrincipal resource type : ID Property](https://docs.microsoft.com/graph/api/resources/serviceprincipal?view=graph-rest-1.0)<p>Authentication & Multi-Factor Authentication<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md) |
+| IA-02(1)<br>IA-02(3)| **Multi-factor authentication (MFA) for all access to privileged accounts**. <p>Configure the following elements for a complete solution to ensure all access to privileged accounts requires MFA.<p>Configure Conditional Access policies to require MFA for all users.<br> Implement Privileged Identity Management (PIM) to require MFA for activation of privileged role assignment prior to use.<p>With the PIM activation requirement in place, privileged account activation is not possible without network access. Hence, local access is never privileged.<p>MFA & PIM<br> [Conditional Access - Require MFA for all users](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa)<br> [Configure Azure AD role settings in PIM](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-change-default-settings?tabs=new) |
+| IA-02(2)<br>IA-02(4)| **Implement multi-factor authentication for all access to non-privileged accounts**<p>Configure the following elements as an overall solution to ensure all access to non-privileged accounts requires MFA.<p> Configure Conditional Access policies to require MFA for all users.<br> Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to enforce use of specific authentication methods.<br> Configure Conditional Access policies to enforce device compliance.<p>Microsoft recommends using a multi-factor cryptographic hardware authenticator (for example, FIDO2 security keys, Windows Hello for Business with hardware TPM, or a smart card) to achieve AAL3. If your organization is completely cloud-based, we recommend using FIDO2 security keys or Windows Hello for Business.<p>FIDO2 keys and Windows Hello for Business have not been validated at the required FIPS 140 Security Level, so federal customers would need to conduct a risk assessment and evaluation before accepting these authenticators as AAL3. For additional details regarding FIDO2 and Windows Hello for Business FIPS 140 validation, see [Microsoft NIST AALs](nist-overview.md).<p>Guidance regarding MDM policies differs slightly based on the authentication method; it is broken out below.
<p>Smart Card / Windows Hello for Business<br> [Passwordless Strategy - Require Windows Hello for Business or smart card](https://docs.microsoft.com///windows/security/identity-protection/hello-for-business/passwordless-strategy)<br> [Require device to be marked as compliant](https://docs.microsoft.com///azure/active-directory/conditional-access/require-managed-devices)<br> [Conditional Access - Require MFA for all users](https://docs.microsoft.com///azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa)<p> Hybrid Only<br> [Passwordless Strategy - Configure user accounts to disallow password authentication](https://docs.microsoft.com///windows/security/identity-protection/hello-for-business/passwordless-strategy)<p> Smart Card Only<br>[Create a Rule to Send an Authentication Method Claim](https://docs.microsoft.com///windows-server/identity/ad-fs/operations/create-a-rule-to-send-an-authentication-method-claim)<br>[Configure Authentication Policies](https://docs.microsoft.com///windows-server/identity/ad-fs/operations/configure-authentication-policies)<p>FIDO2 Security Key<br> [Passwordless Strategy - Excluding the password credential provider](https://docs.microsoft.com///windows/security/identity-protection/hello-for-business/passwordless-strategy)<br> [Require device to be marked as compliant](https://docs.microsoft.com///azure/active-directory/conditional-access/require-managed-devices)<br> [Conditional Access - Require MFA for all users](https://docs.microsoft.com///azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa)<p>Authentication Methods<br> [Azure Active Directory passwordless sign-in (preview) | FIDO2 security keys](https://docs.microsoft.com///azure/active-directory/authentication/concept-authentication-passwordless)<br> [Passwordless security key sign-in Windows - Azure Active 
Directory](https://docs.microsoft.com///azure/active-directory/authentication/howto-authentication-passwordless-security-key-windows)<br> [ADFS: Certificate Authentication with Azure AD & Office 365](https://docs.microsoft.com///archive/blogs/samueld/adfs-certauth-aad-o365)<br> [How Smart Card Sign-in Works in Windows (Windows 10)](https://docs.microsoft.com///windows/security/identity-protection/smart-cards/smart-card-how-smart-card-sign-in-works-in-windows)<br> [Windows Hello for Business Overview (Windows 10)](https://docs.microsoft.com///windows/security/identity-protection/hello-for-business/hello-overview)<p>Additional Resources:<br> [Policy CSP - Windows Client Management](https://docs.microsoft.com///windows/client-management/mdm/policy-configuration-service-provider)<br> [Use PowerShell scripts on Windows 10 devices in Intune](https://docs.microsoft.com///mem/intune/apps/intune-management-extension)<br> [Plan a passwordless authentication deployment with Azure AD](https://docs.microsoft.com/azure/active-directory/authentication/howto-authentication-passwordless-deployment)<br> |
+| IA-02(5)| **When multiple users have access to a shared or group account password, require each user to first authenticate using an individual authenticator**<p>Use an individual account per user. If a shared account is required, Azure AD permits binding of multiple authenticators to an account such that each user has an individual authenticator. <p> [How it works: Azure multi-factor authentication](https://docs.microsoft.com/azure/active-directory/authentication/concept-mfa-howitworks)<br> [Manage authentication methods for Azure AD multi-factor authentication](https://docs.microsoft.com///azure/active-directory/authentication/howto-mfa-userdevicesettings) |
+| IA-02(8)| **Implement replay-resistant authentication mechanisms for network access to privileged accounts**<p>Configure Conditional Access policies to require MFA for all users. All Azure AD authentication methods at Authentication Assurance Levels 2 and 3 use either a nonce or challenges and are resistant to replay attacks.<p>References:<br> [Conditional Access - Require MFA for all users](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa)<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md) |
+| IA-02(11)| **Implement Azure multi-factor authentication to access customer-deployed resources remotely such that one of the factors is provided by a device separate from the system gaining access, where the device meets FIPS-140-2, NIAP certification, or NSA approval**<p>See guidance for IA-02(1-4). Azure AD authentication methods to consider at AAL3 that meet the separate-device requirement are:<p> FIDO2 security keys<br> Windows Hello for Business with hardware TPM (TPM is recognized as a valid "something you have" factor by NIST 800-63B Section 5.1.7.1)<br> Smart card<p>References:<br>[Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md)<br> [NIST 800-63B Section 5.1.7.1](https://pages.nist.gov/800-63-3/sp800-63b.html) |
+| IA-02(12)| **Accept and verify Personal Identity Verification (PIV) credentials. This control is not applicable if the customer does not deploy PIV credentials.**<p>Configure federated authentication using Active Directory Federation Services (ADFS) to accept PIV (certificate authentication) as both primary and multi-factor authentication methods and issue the MFA (MultipleAuthN) claim when PIV is used. Configure the federated domain in Azure AD with SupportsMFA to direct MFA requests originating at Azure AD to the ADFS. Alternatively, PIV can be used for sign-in on Windows devices and subsequently leverage Integrated Windows Authentication (IWA) along with Seamless Single Sign-On (SSSO). Windows Server and client verify certificates by default when used for authentication. <p> [What is federation with Azure AD?](https://docs.microsoft.com/azure/active-directory/hybrid/whatis-fed)<br> [Configure AD FS support for user certificate authentication](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br> [Configure Authentication Policies](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/configure-authentication-policies)<br> [Secure resources with Azure AD MFA and ADFS](https://docs.microsoft.com/azure/active-directory/authentication/howto-mfa-adfs)<br> [Set-MsolDomainFederationSettings](https://docs.microsoft.com/powershell/module/msonline/set-msoldomainfederationsettings?view=azureadps-1.0)<br> [Azure AD Connect: Seamless Single Sign-On](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sso) |
+| IA-03| **Implement device identification and authentication prior to establishing a connection.**<p>Configure Azure AD to identify and authenticate Azure AD Registered, Azure AD Joined, and Azure AD Hybrid joined devices.<p> [What is a device identity?](https://docs.microsoft.com/azure/active-directory/devices/overview)<br> [Plan an Azure AD devices deployment](https://docs.microsoft.com///azure/active-directory/devices/plan-device-deployment)<br>[How To: Require managed devices for cloud app access with Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/require-managed-devices) |
+| IA-04<br>IA-04(4)| **Disable account identifiers after 35 days of inactivity and prevent their reuse for 2 years. Manage individual identifiers by uniquely identifying each individual (for example, contractors and foreign nationals).**<p>Assign and manage individual account identifiers and status in Azure AD in accordance with existing organizational policies defined in AC-02. Follow AC-02(3) to automatically disable user and device accounts after 35 days of inactivity. Ensure that organizational policy maintains all accounts that remain in the disabled state for at least 2 years, after which they can be removed. <p>Determine Inactivity<br> [How to manage inactive user accounts in Azure AD](https://docs.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts)<br> [How to manage stale devices in Azure AD](https://docs.microsoft.com/azure/active-directory/devices/manage-stale-devices)<br> [See AC-02 guidance](fedramp-access-controls.md) |
+| IA-05| **Configure and manage information system authenticators.**<p>Azure AD supports a wide variety of authentication methods and can be managed using your existing organizational policies. See guidance for authenticator selection in IA-02(1-4). Enable users in combined registration for SSPR and Azure AD MFA and require users to register a minimum of two acceptable multi-factor authentication methods to facilitate self-remediation. Administrators can revoke user configured authenticators at any time with the authentication methods API. <p>Authenticator Strength/Protect Authenticator Content<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform.](nist-overview.md)<p>Authentication Methods & Combined Registration<br> [What authentication and verification methods are available in Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-methods)<br> [Combined registration for SSPR and Azure AD multi-factor authentication](https://docs.microsoft.com/azure/active-directory/authentication/concept-registration-mfa-sspr-combined)<p>Authenticator Revoke<br> [Azure AD authentication methods API overview](https://docs.microsoft.com/graph/api/resources/authenticationmethods-overview?view=graph-rest-beta) |
+| IA-05(1)| **Implement password-based authentication requirements.**<p>Per NIST SP 800-63B Section 5.1.1: Maintain a list of commonly used, expected, or compromised passwords.<p>With Azure AD Password Protection, default global banned password lists are automatically applied to all users in an Azure AD tenant. To support your own business and security needs, you can define entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.<p>Microsoft strongly encourages passwordless strategies. This control is only applicable to password authenticators. Therefore, removing passwords as an available authenticator renders this control not applicable.<p>NIST Reference Documents:<br>[NIST Special Publication 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html)<br>[NIST Special Publication 800-53 Revision 5](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf) - IA-5 - Control Enhancement (1)<p>Additional Resources:<br>[Eliminate bad passwords using Azure Active Directory Password Protection](https://docs.microsoft.com///azure/active-directory/authentication/concept-password-ban-bad) |
+| IA-05(2)| **Implement PKI-Based authentication requirements.**<p>Federate Azure AD via ADFS to implement PKI-based authentication. By default, ADFS validates certificates, locally caches revocation data and maps users to the authenticated identity in Active Directory. <p> Additional Resources:<br> [What is federation with Azure AD?](https://docs.microsoft.com/azure/active-directory/hybrid/whatis-fed)<br> [Configure AD FS support for user certificate authentication](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication) |
+| IA-05(4)| **Employ automated tools to validate password strength requirements.** <p>Azure AD implements automated mechanisms that enforce password authenticator strength at creation. This automated mechanism can also be extended to enforce password authenticator strength for on-premises Active Directory. Revision 5 of NIST 800-53 has withdrawn IA-05(4) and incorporated the requirement into IA-05(1).<p>Additional Resources:<br> [Eliminate bad passwords using Azure Active Directory Password Protection](https://docs.microsoft.com/azure/active-directory/authentication/concept-password-ban-bad)<br> [Azure AD Password Protection for Active Directory Domain Services](https://docs.microsoft.com/azure/active-directory/authentication/concept-password-ban-bad-on-premises)<br>[NIST Special Publication 800-53 Revision 5](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf) - IA-5 - Control Enhancement (4) |
+| IA-05(6)| **Protect authenticators as defined in FedRAMP High**.<p>For further details on how Azure AD protects authenticators, see [Azure Active Directory Data Security Considerations](https://aka.ms/aaddatawhitepaper). |
+| IA-05(7)| **Ensure unencrypted static authenticators (e.g., a password) are not embedded in applications or access scripts or stored on function keys.**<p>Implement managed identities or service principal objects (configured with a certificate only).<p>[What are managed identities for Azure resources?](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview)<br>[Create an Azure AD app & service principal in the portal](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal) |
+| IA-05(8)| **Implement security safeguards when individuals have accounts on multiple information systems.**<p>Implement single sign-on (SSO) by connecting all applications to Azure AD, as opposed to having individual accounts on multiple information systems.<p>[What is Azure single sign-on (SSO)?](https://docs.microsoft.com/azure/active-directory/manage-apps/what-is-single-sign-on) |
+| IA-05(11)| **Require hardware token quality requirements as required by FedRAMP High.**<p>Require the use of hardware tokens that meet AAL3.<p>Resources:<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform](https://azure.microsoft.com/resources/microsoft-nist/) |
+| IA-05(13)| **Enforce the expiration of cached authenticators.**<p>Cached authenticators are used to authenticate to the local machine when the network is not available. To limit the use of cached authenticators, configure Windows devices to disable their use. Where this is not possible or practical, use the following compensating controls:<p>Configure conditional access session controls using application enforced restrictions for Office applications.<br> Configure conditional access using application controls for other applications.<p>Resources:<br> [Interactive logon Number of previous logons to cache](https://docs.microsoft.com/windows/security/threat-protection/security-policy-settings/interactive-logon-number-of-previous-logons-to-cache-in-case-domain-controller-is-not-available)<br> [Session controls in Conditional Access policy - Application enforced restrictions](https://docs.microsoft.com/azure/active-directory/conditional-access/concept-conditional-access-session)<br>[Session controls in Conditional Access policy - Conditional Access application control](https://docs.microsoft.com/azure/active-directory/conditional-access/concept-conditional-access-session) |
+| IA-06| **Obscure authentication feedback information during the authentication process.**<p>By default, Azure AD obscures all authenticator feedback. |
+| IA-07| **Implement mechanisms for authentication to a cryptographic module that meets applicable federal laws.**<p>FedRAMP High requires an AAL3 authenticator. All authenticators supported by Azure AD at AAL3 provide mechanisms to authenticate operator access to the module as required. For example, in a Windows Hello for Business deployment with a hardware TPM, configure the level of TPM owner authorization.<p>Resources:<br>See IA-02(2) and IA-02(4) for additional detail.<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform](nist-overview.md)<br> [TPM Group Policy settings](https://docs.microsoft.com/windows/security/information-protection/tpm/trusted-platform-module-services-group-policy-settings) |
+| IA-08| **The information system uniquely identifies and authenticates non-organizational users (or processes acting on behalf of non-organizational users).**<p>Azure AD uniquely identifies and authenticates non-organizational users homed in the organization's tenant or in external directories by using FICAM-approved protocols.<p> [What is B2B collaboration in Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/external-identities/what-is-b2b)<br> [Direct federation with an identity provider for B2B](https://docs.microsoft.com/azure/active-directory/external-identities/direct-federation)<br> [Properties of a B2B guest user](https://docs.microsoft.com/azure/active-directory/external-identities/user-properties) |
+| IA-08(1)<br>IA-08(4)| **Accept and verify Personal Identity Verification (PIV) credentials issued by other federal agencies. Conform to the profiles issued by the Federal Identity, Credential, and Access Management (FICAM) program.**<p>Configure Azure AD to accept PIV credentials via federation (OIDC, SAML) or locally via Windows Integrated Authentication (WIA).<p>Resources:<br> [What is federation with Azure AD?](https://docs.microsoft.com/azure/active-directory/hybrid/whatis-fed)<br> [Configure AD FS support for user certificate authentication](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br>[What is B2B collaboration in Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/external-identities/what-is-b2b)<br> [Direct federation with an identity provider for B2B](https://docs.microsoft.com/azure/active-directory/external-identities/direct-federation) |
+| IA-08(2)| **Accept only Federal Identity, Credential, and Access Management (FICAM) approved credentials.**<p>Azure AD supports authenticators at NIST Authenticator Assurance Levels (AALs) 1, 2, and 3. Restrict the use of authenticators commensurate with the security category of the system being accessed.<p>Azure Active Directory supports a wide variety of authentication methods.<p>Resources:<br> [What authentication and verification methods are available in Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-methods)<br> [Azure AD authentication methods policy API overview](https://docs.microsoft.com/graph/api/resources/authenticationmethodspolicies-overview?view=graph-rest-beta)<br> [Achieving National Institute of Standards and Technology Authenticator Assurance Levels with the Microsoft Identity Platform](https://azure.microsoft.com/resources/microsoft-nist/) |
++
+## Next Steps
+[Configure access controls](fedramp-access-controls.md)
+
+[Configure identification & authentication controls](fedramp-identification-and-authentication-controls.md)
+
+[Configure other controls](fedramp-other-controls.md)
++++
active-directory Fedramp Other Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/fedramp-other-controls.md
+
+ Title: Configure additional controls to meet FedRAMP High Impact
+description: Detailed guidance on how to configure additional controls to meet FedRAMP High Impact levels.
+++++++++ Last updated : 4/26/2021++++
+# Configure additional controls to achieve FedRAMP High Impact level
+
+The controls and control enhancements in the families below may require configuration in your Azure AD tenant.
+
+Each row in the following tables provides prescriptive guidance to help you develop your organization's response to any shared responsibilities for the control or control enhancement.
+
+## Audit & Accountability
+
+* AU-02 Audit events
+
+* AU-03 Content of audit records
+
+* AU-06 Audit review, analysis, and reporting
++
+| Control ID and subpart| Customer responsibilities and guidance |
+| - | - |
+| AU-02 <br>AU-03 <br>AU-03(1)<br>AU-03(2)| **Ensure the system is capable of auditing events defined in AU-02 Part a, and coordinate with other entities within the organization's subset of auditable events to support after-the-fact investigations. Implement centralized management of audit records.**<p>All account lifecycle operations (account creation, modification, enabling, disabling, and removal actions) are audited within the Azure AD audit logs. All authentication and authorization events are audited within the Azure AD sign-in logs, and any detected risks are audited in the Identity Protection logs. Each of these logs can be streamed directly into a Security Information and Event Management (SIEM) solution such as Azure Sentinel. Alternatively, use Azure Event Hubs to integrate logs with third-party SIEM solutions.<p>Audit events<li> [Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs)<li> [Sign-in activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-sign-ins)<li>[How To: Investigate risk](https://docs.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk)<p>SIEM integrations<li> [Azure Sentinel: Connect data from Azure Active Directory (Azure AD)](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory)<li>[Stream to Azure event hub and other SIEMs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub) |
+| AU-06<br>AU-06(1)<br>AU-06(3)<br>AU-06(4)<br>AU-06(5)<br>AU-06(6)<br>AU-06(7)<br>AU-06(10)<br>| **Review and analyze audit records at least once each week to identify inappropriate or unusual activity, and report findings to appropriate personnel.** <p>The guidance provided for AU-02 and AU-03 enables weekly review of audit records and reporting to appropriate personnel. You cannot meet these requirements using only Azure AD; you must also use a SIEM solution such as Azure Sentinel.<p>[What is Azure Sentinel?](https://docs.microsoft.com/azure/sentinel/overview) |
+
+## Incident Response
+
+* IR-04 Incident handling
+
+* IR-05 Incident monitoring
+
+| Control ID and subpart| Customer responsibilities and guidance |
+| - | - |
+| IR-04<br>IR-04(1)<br>IR-04(2)<br>IR-04(3)<br>IR-04(4)<br>IR-04(6)<br>IR-04(8)<br>IR-05<br>IR-05(1)| **Implement incident handling and monitoring capabilities, including Automated Incident Handling, Dynamic Reconfiguration, Continuity of Operations, Information Correlation, Insider Threats, Correlation with External Organizations, Incident Monitoring, and Automated Tracking.** <p>All configuration changes are logged in the audit logs. Authentication and authorization events are audited within the sign-in logs, and any detected risks are audited in the Identity Protection logs. Each of these logs can be streamed directly into a Security Information and Event Management (SIEM) solution such as Azure Sentinel. Alternatively, use Azure Event Hubs to integrate logs with third-party SIEM solutions. Automate dynamic reconfiguration based on events within the SIEM by using Microsoft Graph or Azure AD PowerShell.<p>Audit events<br><li>[Audit activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-audit-logs)<li>[Sign-in activity reports in the Azure Active Directory portal](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-sign-ins)<li>[How To: Investigate risk](https://docs.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk)<p>SIEM integrations<li>[Azure Sentinel: Connect data from Azure Active Directory (Azure AD)](https://docs.microsoft.com/azure/sentinel/connect-azure-active-directory)<li>[Stream to Azure event hub and other SIEMs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub)<p>Dynamic reconfiguration<li>[AzureAD Module](https://docs.microsoft.com/powershell/module/azuread/?view=azureadps-2.0)<li>[Overview of Microsoft Graph](https://docs.microsoft.com/graph/overview?view=graph-rest-1.0) |
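
+The dynamic reconfiguration described above can be sketched in code. The following Python sketch maps a hypothetical SIEM alert to the real Microsoft Graph operation that disables an account (`PATCH /users/{id}` with `accountEnabled` set to `false`). The alert shape and user name are illustrative, and no request is actually sent:

```python
import json

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_disable_request(alert):
    """Translate a (hypothetical) SIEM alert into the Microsoft Graph
    request that disables the flagged account, blocking new sign-ins."""
    user = alert["userPrincipalName"]
    return {
        "method": "PATCH",
        "url": f"{GRAPH_BASE}/users/{user}",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"accountEnabled": False}),
    }

request = build_disable_request({"userPrincipalName": "riskyuser@contoso.com"})
```

+A real automation would acquire an access token, send the request, and handle failures; this sketch only shows the event-to-reconfiguration mapping.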
++
+
+## Personnel Security
+
+* PS-04 Personnel termination
+
+| Control ID and subpart| Customer responsibilities and guidance |
+| - | - |
+| PS-04<br>PS-04(2)| **Automatically notify personnel responsible for disabling access to the system.** <p>Disable accounts and revoke all associated authenticators and credentials within 8 hours. <p>Configure provisioning (including disablement upon termination) of accounts in Azure AD from external HR systems, from on-premises Active Directory, or directly in the cloud. Terminate all system access by revoking existing sessions. <p>Account provisioning<li> See detailed guidance in AC-02. <p>Revoke all associated authenticators<li> [Revoke user access in an emergency in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/enterprise-users/users-revoke-access) |
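
+As a sketch of the disable-and-revoke sequence above (helper names are hypothetical; nothing is sent over the network), the two Microsoft Graph operations are a `PATCH` on the user object and the `revokeSignInSessions` action, and the 8-hour window reduces to a deadline comparison:

```python
from datetime import datetime, timedelta, timezone

DISABLEMENT_DEADLINE = timedelta(hours=8)

def termination_calls(user_id):
    """Graph operations to terminate access: disable the account, then
    revoke refresh tokens so existing sessions cannot be renewed."""
    return [
        ("PATCH", f"/users/{user_id}", {"accountEnabled": False}),
        ("POST", f"/users/{user_id}/revokeSignInSessions", None),
    ]

def within_deadline(terminated_at, now):
    """True while the 8-hour disablement window is still open."""
    return now - terminated_at <= DISABLEMENT_DEADLINE
```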
++
+## System & Information Integrity
+
+* SI-04 Information system monitoring
+
+| Control ID and subpart| Customer responsibilities and guidance |
+| - | - |
+| SI-04<br>SI-04(1)| **Implement information system-wide monitoring and an intrusion detection system.**<p>Include all Azure AD logs (Audit, Sign-in, Identity Protection) in the information system monitoring solution.<p>Stream Azure AD logs into a SIEM solution (see AU-02). |
+
+## Next steps
+
+[Configure access controls](fedramp-access-controls.md)
+
+[Configure identification & authentication controls](fedramp-identification-and-authentication-controls.md)
+
+[Configure other controls](fedramp-other-controls.md)
+
active-directory Nist About Authenticator Assurance Levels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/nist-about-authenticator-assurance-levels.md
+
+ Title: NIST Authenticator Assurance Levels with Azure Active Directory
+description: An overview of authenticator assurance levels as applied to Azure Active Directory
+++++++++ Last updated : 4/26/2021++++
+# About Authenticator Assurance Levels
+
+The National Institute of Standards and Technology (NIST) develops the technical requirements for US federal agencies implementing identity solutions. [NIST SP 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html) defines the technical guidelines for the implementation of digital authentication. It does so with a framework of Authenticator Assurance Levels (AALs). AALs characterize the strength of the authentication of a digital identity. The guidance also covers the management of the authenticator lifecycle, including revocation.
+
+The standard defines AAL requirements across 11 categories:
+
+* Permitted authenticator types
+
+* Federal Information Processing Standards 140 (FIPS 140) verification level (FIPS 140 requirements are satisfied by [FIPS 140-2](https://csrc.nist.gov/publications/detail/fips/140/2/final) or newer revisions)
+
+* Reauthentication
+
+* Security controls
+
+* Man-in-the-middle (MitM) resistance
+
+* Verifier-impersonation resistance (phishing resistance)
+
+* Verifier-compromise resistance
+
+* Replay resistance
+
+* Authentication intent
+
+* Records retention policy
+
+* Privacy controls
+
+## Applying NIST AALs in your environment
+
+> [!TIP]
+> We recommend that you meet at least AAL2, unless business reasons, industry standards, or compliance requirements dictate that you meet AAL3.
+
+In general, AAL1 isn't recommended because it accepts password-only solutions, and passwords are the most easily compromised form of authentication. See [Your Pa$$word doesn't matter](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/your-pa-word-doesn-t-matter/ba-p/731984).
+
+While NIST doesn't require verifier impersonation (also known as credential phishing) resistance until AAL3, we highly advise that you address this threat at all levels. You can select authenticators that provide verifier impersonation resistance, such as requiring Azure AD joined or hybrid Azure AD joined devices. If you're using Office 365, you can address this threat with Office 365 Advanced Threat Protection, specifically its [Anti-phishing policies](https://docs.microsoft.com/microsoft-365/security/office-365-security/set-up-anti-phishing-policies?view=o365-worldwide).
+
+As you evaluate the appropriate NIST AAL for your organization, consider whether your entire organization must meet NIST standards, or whether specific groups of users and resources can be segregated so that the NIST AAL configurations apply only to them.
+
+## Security controls, privacy controls, records retention policy
+
+Azure and Azure Government have earned a Provisional Authority to Operate (P-ATO) at the [NIST SP 800-53 High Impact Level](https://nvd.nist.gov/800-53/Rev4/impact/high) from the Joint Authorization Board, the highest bar for FedRAMP accreditation, which authorizes the use of Azure and Azure Government to process highly sensitive data.
+
+These Azure and Azure Government certifications satisfy the security controls, privacy controls, and records retention policy requirements for AAL1, AAL2, and AAL3.
+
+The FedRAMP audit of Azure and Azure Government included the information security management system that encompasses infrastructure, development, operations, management, and support of in-scope services. Once a P-ATO is granted, a cloud service provider still requires an authorization (an ATO) from any government agency it works with. For Azure, a government agency, or an organization working with one, can use the Azure P-ATO in its own security authorization process and rely on it as the basis for issuing an agency ATO that also meets FedRAMP requirements.
+
+Azure continues to support more services at FedRAMP High Impact levels than any other cloud provider. While FedRAMP High in the Azure public cloud meets the needs of many US government customers, agencies with more stringent requirements will continue to rely on Azure Government, which provides additional safeguards such as heightened personnel screening. Microsoft lists all Azure public services currently available in Azure Government within the FedRAMP High boundary, as well as services planned for the current year.
+
+In addition, Microsoft is fully committed to [protecting and managing customer data](https://www.microsoft.com/trust-center/privacy/data-management) with clearly stated records retention policies. As a global company with customers in nearly every country in the world, Microsoft has a robust compliance portfolio to assist our customers. To view a complete list of our compliance offerings visit [Microsoft compliance offering](https://docs.microsoft.com/compliance/regulatory/offering-home).
+
+## Next Steps
+
+[NIST overview](nist-overview.md)
+
+[Learn about AALs](nist-about-authenticator-assurance-levels.md)
+
+[Authentication basics](nist-authentication-basics.md)
+
+[NIST authenticator types](nist-authenticator-types.md)
+
+[Achieving NIST AAL1 with Azure AD](nist-authenticator-assurance-level-1.md)
+
+[Achieving NIST AAL2 with Azure AD](nist-authenticator-assurance-level-2.md)
+
+[Achieving NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
+
active-directory Nist Authentication Basics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/nist-authentication-basics.md
+
+ Title: NIST authentication basics and Azure Active Directory
+description: Explanations of the terminology and authentication factors for NIST.
+++++++++ Last updated : 4/26/2021++++
+# NIST Authentication Basics
+
+Understanding the NIST guidelines requires that you have a firm grounding in the terminology, and the concepts of trusted platform modules (TPMs) and authentication factors.
+
+## Terminology
+
+The following terminology is used throughout these NIST-related articles.
+
+|Term| Definition - *Italicized* terms are defined in this table|
+| - | - |
+| Assertion| A statement from a *verifier* to a *relying party* containing information about the *subscriber*. May contain verified attributes. |
+|Authentication| The process of verifying the identity of a *subject*. |
+| Authentication factor| Something you know, something you have, or something you are: Every *authenticator* has one or more authentication factors. |
+| Authenticator| Something the *claimant* possesses and controls that is used to authenticate the *claimant's* identity. |
+| Claimant| A *subject* whose identity is to be verified using one or more authentication protocols. |
+|Credential| An object or data structure that authoritatively binds an identity to at least one *authenticator* possessed and controlled by a *subscriber*. |
+| Credential Service Provider (CSP)| A trusted entity that issues or registers *subscriber authenticators* and issues electronic *credentials* to *subscribers*. |
+|Relying Party| An entity that relies on a *verifier's assertion*, or a *claimant's authenticators* and *credentials*, usually to grant access to a system. |
+| Subject| A person, organization, device, hardware, network, software, or service. |
+| Subscriber| A party who has received a *credential* or *authenticator* from a *CSP*. |
+|Trusted Platform Module (TPM) | A TPM is a tamper resistant module that performs cryptographic operations including key generation. |
+| Verifier| An entity that verifies the *claimant's* identity by verifying the claimant's possession and control of *authenticators*. |
++
+## About Trusted Platform Modules
+
+Trusted Platform Module (TPM) technology is designed to provide hardware-based, security-related functions. A TPM chip, or hardware TPM, is a secure crypto processor that helps you with actions such as generating, storing, and limiting the use of cryptographic keys.
+
+Microsoft provides significant information on how TPMs work with Microsoft Windows. For more information, see this article on the [Trusted Platform Module](https://docs.microsoft.com/windows/security/information-protection/tpm/trusted-platform-module-top-node).
+
+A software TPM is an emulator that mimics this functionality.
+
+## Authentication factors and their strengths
+
+Authentication factors can be grouped into three categories. The following table presents examples of the factor types in each category.
+
+![Pictorial representation of something you know, something you have, and something you are.](media/nist-authentication-basics/nist-authentication-basics-0.png)
+
+The strength of an authentication factor is determined by how sure we can be that it is something that only the subscriber knows, has, or is.
+
+There is limited guidance in NIST about the relative strength of authentication factors. Here at Microsoft, we assess their strengths as follows.
+
+**Something you know**: Passwords, the most common something you know, represent the greatest attack surface. The following mitigations improve confidence in the affinity of the password to the subscriber, and are effective at preventing password attacks such as brute force, eavesdropping, and social engineering:
+
+* [Password complexity requirements](https://www.microsoft.com/research/wp-content/uploads/2016/06/Microsoft_Password_Guidance-1.pdf)
+
+* [Banned passwords](https://docs.microsoft.com/azure/active-directory/authentication/tutorial-configure-custom-password-protection)
+
+* [Leaked credentials identification](https://docs.microsoft.com/azure/active-directory/identity-protection/overview-identity-protection)
+
+* [Secure hashed storage](https://aka.ms/AADDataWhitepaper)
+
+* [Account lockout](https://docs.microsoft.com/azure/active-directory/authentication/howto-password-smart-lockout)
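
+As an illustration of the account lockout idea (this is a toy counter, not Azure AD smart lockout, which also weighs signals such as familiar locations and uses incremental delays), the core mechanism is a per-account failure count that locks at a threshold and resets on success:

```python
class LockoutCounter:
    """Toy lockout model: consecutive failures lock the account;
    a successful sign-in clears the counter."""

    def __init__(self, threshold=10):
        self.threshold = threshold
        self.failures = {}

    def record_failure(self, user):
        """Record a failed attempt; return True if the account is now locked."""
        self.failures[user] = self.failures.get(user, 0) + 1
        return self.failures[user] >= self.threshold

    def record_success(self, user):
        self.failures.pop(user, None)  # success resets the counter

    def is_locked(self, user):
        return self.failures.get(user, 0) >= self.threshold
```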
+
+**Something you have**: The strength of something you have is based on how likely the subscriber is to keep it in their possession, and on the difficulty an attacker faces in gaining access to it. For example, a personal mobile device or hardware key will have a higher affinity, and therefore be more secure, than a desktop computer in an office when protecting against internal threats.
+
+**Something you are**: What matters is the ease with which an attacker can obtain a copy of something you are, or spoof a biometric. NIST is drafting a framework for biometrics; today, NIST does not accept biometrics as a separate authentication method. Biometrics must be a factor within multi-factor authentication, because they are probabilistic in nature: the algorithms determine the likelihood that the claimant is the same person, rather than performing an exact match the way password verification does. See [Strength of Function for Authenticators – Biometrics](https://pages.nist.gov/SOFA/SOFA.html) (SOFA-B), which presents a framework to quantify biometrics' strength in terms of false match rate, false fail rate, presentation attack detection error rate, and the effort required to launch an attack.
+
+## Single-factor authentication
+
+Single-factor authentication can be achieved by using a single-factor authenticator that represents something you know or something you have. Although "something you are" is accepted as an authentication factor, it is not accepted as an authenticator by itself.
+
+![Conceptual image of single factor authentication.](media/nist-authentication-basics/nist-authentication-basics-1.png)
+
+## Multi-factor authentication
+
+Multi-factor authentication can be achieved by either a multi-factor authenticator or by a combination of two single-factor authenticators. A multi-factor authenticator requires two authentication factors to execute a single authentication transaction.
+
+### Multi-factor authentication using two single-factor authenticators
+
+Multi-factor authentication requires two different authentication factors. These can be two independent authenticators, such as:
+
+* Memorized secret [password] and out of band [SMS]
+
+* Memorized secret [password] and one-time password [hardware or software]
+
+These methods perform two independent authentication transactions with Azure AD.
+
+![Conceptual image of multi-factor authentication using two separate authenticators.](media/nist-authentication-basics/nist-authentication-basics-2.png)
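
+The one-time password half of such a combination can be illustrated with a minimal RFC 6238 TOTP generator. This is a generic sketch of how time-based OTP authenticators work, not a description of any Azure AD internals:

```python
import hashlib
import hmac
import struct

def totp(secret, unix_time, step=30, digits=6):
    """Time-based one-time password (RFC 6238, SHA-1 mode): HMAC the
    current time step, then dynamically truncate to a short code."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)
```

+The verifier computes the same code from its own copy of the shared secret, so a matching code demonstrates possession of the secret-bearing device. With the RFC 6238 test secret `12345678901234567890` and time `59`, the 8-digit code is `94287082`.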
++
+### Multi-factor authentication using a single multi-factor authenticator
+
+A multi-factor authenticator requires one authentication factor (something you know or something you are) to unlock a second authentication factor. This typically provides a simpler user experience than using multiple independent authenticators.
+
+![Conceptual image of multi-factor authentication using a single multi-factor authenticator.](media/nist-authentication-basics/nist-authentication-basics-3a.png)
+
+One example is the Microsoft Authenticator app used in passwordless mode. With this method, the user attempts to access a secured resource (the relying party) and receives a notification in the Authenticator app. The user responds to the notification with either a biometric (something you are) or a PIN (something you know), which unlocks the cryptographic key on the phone (something you have). The verifier then validates the response with that cryptographic key.
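
+A highly simplified sketch of that flow follows. All names are illustrative; a real deployment uses an asymmetric key protected by the device's hardware, which is replaced here by an HMAC shared secret, with a PIN comparison standing in for the local unlock gesture:

```python
import hashlib
import hmac
import secrets

DEVICE_PIN = "2468"                # something you know: unlocks the key
DEVICE_KEY = b"device-bound-key"   # something you have: stand-in for the phone's key

def verifier_challenge():
    """The verifier issues a fresh random challenge per sign-in."""
    return secrets.token_bytes(16)

def device_respond(challenge, pin):
    """The device answers the challenge only after the local gesture succeeds."""
    if pin != DEVICE_PIN:
        return None  # wrong gesture: the key stays locked
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def verifier_check(challenge, response):
    """The verifier validates the response against the registered key."""
    if response is None:
        return False
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```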
+
+## Next Steps
+
+[NIST overview](nist-overview.md)
+
+[Learn about AALs](nist-about-authenticator-assurance-levels.md)
+
+[Authentication basics](nist-authentication-basics.md)
+
+[NIST authenticator types](nist-authenticator-types.md)
+
+[Achieving NIST AAL1 with Azure AD](nist-authenticator-assurance-level-1.md)
+
+[Achieving NIST AAL2 with Azure AD](nist-authenticator-assurance-level-2.md)
+
+[Achieving NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
active-directory Nist Authenticator Assurance Level 1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/nist-authenticator-assurance-level-1.md
+
+ Title: Achieving NIST AAL1 with Azure Active Directory
+description: Guidance on achieving NIST authenticator assurance level 1 (AAL 1) with Azure Active Directory.
+++++++++ Last updated : 4/26/2021++++
+# Achieving NIST Authenticator assurance level 1 with Azure Active Directory
+
+The National Institute of Standards and Technology (NIST) develops the technical requirements for US federal agencies implementing identity solutions. Organizations working with federal agencies must also meet these requirements. This article shows you how to achieve NIST authenticator assurance level 1 (AAL1).
+
+Before trying to achieve AAL1, you may want to review these resources:
+* [NIST overview](nist-overview.md) - Understand the different AAL levels.
+* [Authentication basics](nist-authentication-basics.md) - Learn important terminology and authentication types.
+* [NIST authenticator types](nist-authenticator-types.md) - Understand each of the authenticator types.
+* [NIST AALs](nist-about-authenticator-assurance-levels.md) - Understand the components of the AALs, how Microsoft Azure Active Directory authentication methods map to them, and trusted platform modules (TPMs).
+
+## Permitted authenticator types
+
+Any NIST single-factor or multi-factor [permitted authenticator](nist-authenticator-types.md) can be used to achieve AAL1. The following table contains those not covered in [AAL2](nist-authenticator-assurance-level-2.md) and [AAL3](nist-authenticator-assurance-level-3.md).
+
+| Azure AD Authentication Method| NIST Authenticator Type |
+| - | - |
+| Password |Memorized Secret |
+| Phone (SMS)| Out-of-Band |
+| FIDO2 security key <br>Microsoft Authenticator app for iOS (Passwordless)<br>Windows Hello for Business with software TPM <br>Smartcard (ADFS) | Multi-factor crypto software |
+
+> [!TIP]
+> We recommend that you meet at least AAL2, unless business reasons, industry standards, or compliance requirements dictate that you meet AAL3.
+
+## FIPS 140 validation
+
+### Verifier requirements
+
+Azure AD uses the Windows FIPS 140 Level 1 overall validated cryptographic module for all its authentication-related cryptographic operations. It is therefore a FIPS 140 compliant verifier, as required by government agencies.
+
+## Man-in-the-middle (MitM) resistance
+
+All communications between the claimant and Azure AD are performed over an authenticated protected channel to provide resistance to MitM attacks. This satisfies the MitM resistance requirements for AAL1, AAL2 and AAL3.
+
+## Next Steps
+
+[NIST overview](nist-overview.md)
+
+[Learn about AALs](nist-about-authenticator-assurance-levels.md)
+
+[Authentication basics](nist-authentication-basics.md)
+
+[NIST authenticator types](nist-authenticator-types.md)
+
+[Achieving NIST AAL1 with Azure AD](nist-authenticator-assurance-level-1.md)
+
+[Achieving NIST AAL2 with Azure AD](nist-authenticator-assurance-level-2.md)
+
+[Achieving NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
active-directory Nist Authenticator Assurance Level 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/nist-authenticator-assurance-level-2.md
+
+ Title: Achieving NIST AAL2 with Azure Active Directory
+description: Guidance on achieving NIST authenticator assurance level 2 (AAL 2) with Azure Active Directory.
+++++++++ Last updated : 4/26/2021+++++
+# Achieving NIST authenticator assurance level 2 with Azure Active Directory
+
+The National Institute of Standards and Technology (NIST) develops the technical requirements for US federal agencies implementing identity solutions. Organizations working with federal agencies must also meet these requirements. This article shows you how to achieve NIST authenticator assurance level 2 (AAL2).
+
+Before trying to achieve AAL2, you may want to review these resources:
+* [NIST overview](nist-overview.md) - Understand the different AAL levels.
+* [Authentication basics](nist-authentication-basics.md) - Learn important terminology and authentication types.
+* [NIST authenticator types](nist-authenticator-types.md) - Understand each of the authenticator types.
+* [NIST AALs](nist-about-authenticator-assurance-levels.md) - Understand the components of the AALs and how Microsoft Azure Active Directory authentication methods map to them.
+
+## Permitted authenticator types
++
+| Azure AD Authentication method| NIST Authenticator type |
+| - | - |
+| **Recommended methods** | |
+| Microsoft Authenticator app for iOS (Passwordless)<br>Windows Hello for Business w/ software TPM | Multi-factor crypto software |
+| FIDO 2 security key<br>Microsoft Authenticator app for Android (Passwordless)<br>Windows Hello for Business w/ hardware TPM<br>Smartcard (ADFS) | Multi-factor crypto hardware |
+| **Additional methods** | |
+| Password + Phone (SMS) | Memorized Secret + Out-of-Band |
+| Password + Microsoft Authenticator App (OTP)<br>Password + SF OTP | Memorized Secret + Single-factor one-time password |
+| Password + Azure AD joined with software TPM <br>Password + Compliant mobile device<br>Password + Hybrid Azure AD Joined with software TPM <br>Password + Microsoft Authenticator App (Notification) | Memorized Secret + Single-factor crypto software |
+| Password + Azure AD joined with hardware TPM <br>Password + Hybrid Azure AD Joined with hardware TPM | Memorized Secret + Single-factor crypto hardware |
++
+### Our recommendations
+
+We recommend using multi-factor cryptographic hardware or software authenticators to achieve AAL2. Passwordless authentication eliminates the greatest attack surface, the password, and offers users a streamlined method to authenticate.
+
+For detailed guidance on selecting a passwordless authentication method, see [Plan a passwordless authentication deployment in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/authentication/howto-authentication-passwordless-deployment).
+
+For more information on implementing Windows Hello for Business, see the [Windows Hello for Business deployment guide](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/hello-deployment-guide).
+
+## FIPS 140 validation
+
+The following information is a guide to achieving FIPS 140 validation.
+
+### Verifier requirements
+
+Azure AD uses the Windows FIPS 140 Level 1 overall validated cryptographic module for all its authentication-related cryptographic operations. It is therefore a FIPS 140 compliant verifier, as required by government agencies.
+
+### Authenticator requirements
+
+*Government agencies' cryptographic authenticators are required to be FIPS 140 Level 1 overall validated.* This is not a requirement for non-governmental organizations. The following Azure AD authenticators meet the requirement when running on [Windows in a FIPS 140 approved mode of operation](https://docs.microsoft.com/windows/security/threat-protection/fips-140-validation):
+
+* Password
+
+* Azure AD joined w/ software or w/ hardware TPM
+
+* Hybrid Azure AD Joined w/ software or w/ hardware TPM
+
+* Windows Hello for Business w/ software or w/ hardware TPM
+
+* Smartcard (ADFS)
+
+FIDO2 security keys and the Microsoft Authenticator app (in all its modes: Notification, OTP, and Passwordless) do not meet government agencies' requirement for FIPS 140 Level 1 overall validation as of this writing.
+
+* The Microsoft Authenticator app uses FIPS 140 approved cryptography; however, it is not FIPS 140 Level 1 overall validated.
+
+* FIDO2 keys are a recent innovation and are still undergoing FIPS certification.
+
+## Reauthentication
+
+At AAL2, NIST requires reauthentication every 12 hours regardless of user activity, and after any period of inactivity lasting 30 minutes or longer. Presenting something you know or something you are is required, because the session secret is something you have.
+
+To meet the requirement for reauthentication regardless of user activity, Microsoft recommends configuring [user sign-in frequency](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime) to 12 hours.
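As a sketch of what that configuration looks like in practice, the following shows the session-control fragment of a Microsoft Graph conditional access policy enforcing a 12-hour sign-in frequency. Treat the exact property names as assumptions to verify against the current Graph API reference; the surrounding policy conditions are omitted.

```python
# Sketch of the sessionControls fragment of a conditional access policy
# enforcing a 12-hour sign-in frequency. Property names follow the Microsoft
# Graph signInFrequencySessionControl resource; verify against the current
# Graph API reference before use.
sign_in_frequency_controls = {
    "sessionControls": {
        "signInFrequency": {
            "isEnabled": True,
            "type": "hours",   # "hours" or "days"
            "value": 12,       # reauthenticate every 12 hours
        }
    }
}
```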
+
+NIST also allows the use of compensating controls for confirming the subscriber's presence:
+
+* Session inactivity timeout of 30 minutes can be achieved by locking the device at the OS level by using Microsoft System Center Configuration Manager (SCCM), Group Policy Objects (GPO), or Intune. You must also require local authentication for the subscriber to unlock it.
+
+* Timeout regardless of activity can be achieved by running a scheduled task (using SCCM, GPO, or Intune) that locks the machine after 12 hours, regardless of activity.
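For illustration, these compensating controls translate into concrete values like the following. The registry location shown is the one commonly associated with the "Interactive logon: Machine inactivity limit" security policy; treat the exact path as an assumption to confirm for your Windows build before deploying via GPO or Intune.

```python
# Concrete values for the AAL2 compensating controls described above.
# Registry location assumed to back the "Interactive logon: Machine
# inactivity limit" policy; confirm for your Windows build.
INACTIVITY_POLICY_KEY = (
    r"HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
)
INACTIVITY_VALUE_NAME = "InactivityTimeoutSecs"

aal2_inactivity_lock_secs = 30 * 60        # lock after 30 minutes idle
scheduled_lock_interval_secs = 12 * 3600   # scheduled lock every 12 hours
```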
+
+## Man-in-the-middle (MitM) resistance
+
+All communications between the claimant and Azure AD are performed over an authenticated protected channel to provide resistance to MitM attacks. This satisfies the MitM resistance requirements for AAL1, AAL2 and AAL3.
+
+## Replay resistance
+
+All Azure AD authentication methods at AAL2 use either a nonce or a challenge. They are resistant to replay attacks because the verifier easily detects replayed authentication transactions: such transactions will not contain the appropriate nonce or timeliness data.
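To illustrate the principle (this is a toy sketch, not Azure AD's actual implementation), a verifier that issues single-use nonces rejects a replayed transaction because its nonce has already been consumed:

```python
import hashlib
import hmac
import secrets

class ToyVerifier:
    """Illustrative verifier: each challenge nonce is valid exactly once."""

    def __init__(self, shared_key: bytes):
        self._key = shared_key
        self._outstanding = set()

    def challenge(self) -> str:
        nonce = secrets.token_hex(16)
        self._outstanding.add(nonce)
        return nonce

    def verify(self, nonce: str, response: str) -> bool:
        if nonce not in self._outstanding:
            return False                      # unknown or replayed nonce
        self._outstanding.discard(nonce)      # consume: single use
        expected = hmac.new(self._key, nonce.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, response)

# The claimant answers the challenge; replaying the same response fails.
verifier = ToyVerifier(b"shared-secret")
nonce = verifier.challenge()
response = hmac.new(b"shared-secret", nonce.encode(), hashlib.sha256).hexdigest()
first = verifier.verify(nonce, response)    # accepted
replay = verifier.verify(nonce, response)   # rejected as a replay
```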
+
+## Next Steps
+
+[NIST overview](nist-overview.md)
+
+[Learn about AALs](nist-about-authenticator-assurance-levels.md)
+
+[Authentication basics](nist-authentication-basics.md)
+
+[NIST authenticator types](nist-authenticator-types.md)
+
+[Achieving NIST AAL1 with Azure AD](nist-authenticator-assurance-level-1.md)
+
+[Achieving NIST AAL2 with Azure AD](nist-authenticator-assurance-level-2.md)
+
+[Achieving NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
active-directory Nist Authenticator Assurance Level 3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/nist-authenticator-assurance-level-3.md
+
+ Title: Achieving NIST AAL3 with Azure Active Directory
+description: Guidance on achieving NIST authenticator assurance level 3 (AAL 3) with Azure Active Directory.
+++++++++ Last updated : 4/26/2021++++
+# Achieving NIST authenticator assurance level 3 with Azure Active Directory
+
+This article guides you in achieving National Institute of Standards and Technology authenticator assurance level 3 (NIST AAL3). Resources you may want to review before trying to achieve AAL3:
+* [NIST overview](nist-overview.md) - Understand the different AAL levels.
+* [Authentication basics](nist-authentication-basics.md) - Important terminology and authentication types.
+* [NIST authenticator types](nist-authenticator-types.md) - Understand each of the authenticator types.
+* [NIST AALs](nist-about-authenticator-assurance-levels.md) - The components of the AALs, and how Microsoft Azure Active Directory authentication methods map to them.
+
+## Permitted authenticator types
+Microsoft offers authentication methods that enable you to meet the required NIST authenticator types. See the following table for our recommendations.
+
+
+| Azure AD Authentication Methods| NIST Authenticator Type |
+| - | -|
+| **Recommended methods**| |
+| FIDO2 security key **OR**<br> Smartcard (AD FS) **OR**<br>Windows Hello for Business w/ hardware TPM| Multi-factor cryptographic hardware |
+| **Additional methods**| |
+| Password **AND**<br>(Hybrid Azure AD Joined w/ hardware TPM **OR** <br> Azure AD joined w/ hardware TPM)| Memorized secret **+** Single-factor crypto hardware |
+| Password **AND**<br>Single-factor one-time-password hardware (from OTP manufacturers) **OR**<br>Hybrid Azure AD Joined w/ software TPM **OR** <br> Azure AD joined w/ software TPM **OR**<br> Compliant managed device| Memorized secret **AND**<br>Single-factor one-time password hardware **AND**<br>Single-factor crypto software |
+
+### Our recommendations
+
+We recommend using a multi-factor cryptographic hardware authenticator to achieve AAL3. Passwordless authentication eliminates the greatest attack surface, the password, and offers users a streamlined method to authenticate. If your organization is completely cloud-based, we recommend using FIDO2 security keys.
+
+Note that FIDO2 keys and Windows Hello for Business have not been validated at the required FIPS 140 Security Level. Federal customers must therefore conduct a risk assessment and evaluation before accepting these authenticators as AAL3.
+
+For detailed guidance, see [Plan a passwordless authentication deployment in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/authentication/howto-authentication-passwordless-deployment).
+
+For more information on implementing Windows Hello for Business, see the [Windows Hello for Business deployment guide](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/hello-deployment-guide).
+
+## FIPS 140 validation
+
+### Verifier requirements
+
+Azure AD uses the Windows FIPS 140 Level 1 overall validated cryptographic module for all its authentication-related cryptographic operations. It is therefore a FIPS 140 compliant verifier.
+
+### Authenticator requirements
+
+Single-factor and multi-factor cryptographic hardware authenticators have different authenticator requirements.
+
+**Single-factor cryptographic hardware** authenticators are required to be
+
+* FIPS 140 Level 1 overall (or higher)
+
+* FIPS 140 Level 3 Physical Security (or higher)
+
+Azure AD joined and Hybrid Azure AD joined devices meet this requirement when
+
+* you run [Windows in a FIPS 140 approved mode of operation](https://docs.microsoft.com/windows/security/threat-protection/fips-140-validation)
+
+* on a machine with a TPM that is FIPS 140 Level 1 overall (or higher) with FIPS 140 Level 3 Physical Security.
+
+ * Find compliant TPMs by searching "Trusted Platform Module" and "TPM" under [Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program/validated-modules/Search).
+
+Check with your mobile device vendor to learn about their adherence to FIPS 140.
+
+**Multi-factor cryptographic hardware** authenticators are required to be
+
+* FIPS 140 Level 2 overall (or higher)
+
+* FIPS 140 Level 3 Physical Security (or higher)
+
+FIDO2 security keys, Smartcards, and Windows Hello for Business can help you meet these requirements.
+
+* FIDO2 keys are a recent innovation and are still undergoing FIPS certification.
+
+* Smartcards are a proven technology with multiple vendor products meeting FIPS requirements.
+
+ * Find out more on the [Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program/validated-modules/Search).
+
+**Windows Hello for Business**
+
+FIPS 140 requires the entire cryptographic boundary, including software, firmware, and hardware, to be in scope for evaluation. Windows operating systems are open computing platforms that can be paired with thousands of hardware combinations, so Microsoft cannot maintain FIPS certifications for each combination. Evaluate the following individual certifications of the components as part of your risk assessment for using Windows Hello for Business as an AAL3 authenticator:
+
+* **Microsoft Windows 10 and Microsoft Windows Server** use the [US Government Approved Protection Profile for General Purpose Operating Systems Version 4.2.1](https://www.niap-ccevs.org/Profile/Info.cfm?PPID=442&id=442) from the National Information Assurance Partnership (NIAP). NIAP oversees a national program to evaluate Commercial Off-The-Shelf (COTS) Information Technology (IT) products for conformance to the international Common Criteria.
+
+* **Microsoft Windows Cryptographic Library** [has achieved FIPS Level 1 overall in the NIST Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program/Certificate/3544) (CMVP). The CMVP, a joint effort between NIST and the Canadian Centre for Cyber Security, validates cryptographic modules against FIPS standards.
+
+* Choose a **Trusted Platform Module (TPM)** that is FIPS 140 Level 2 overall, and FIPS 140 Level 3 Physical Security. **As an organization, it is your responsibility to ensure that the hardware TPM you are using meets the needs of the AAL level you want to achieve.** To determine which TPMs meet the current standards, go to the [NIST Computer Security Resource Center Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program/validated-modules/Search). In the Module name field, enter "Trusted platform module." The resulting list contains hardware TPMs that meet the current standards.
+
+## Reauthentication
+
+At AAL3, NIST requires reauthentication every 12 hours regardless of user activity, and after any period of inactivity lasting 15 minutes or longer. Presentation of both factors is required.
+
+To meet the requirement for reauthentication regardless of user activity, Microsoft recommends configuring [user sign-in frequency](https://aka.ms/NIST/38) to 12 hours.
+
+NIST also allows the use of compensating controls for confirming the subscriber's presence:
+
+* Session inactivity timeout of 15 minutes can be achieved by locking the device at the OS level by using Microsoft System Center Configuration Manager (SCCM), Group Policy Objects (GPO), or Intune. You must also require local authentication for the subscriber to unlock it.
+
+* Timeout regardless of activity can be achieved by running a scheduled task (using SCCM, GPO, or Intune) that locks the machine after 12 hours, regardless of activity.
+
+## Man-in-the-middle (MitM) resistance
+
+All communications between the claimant and Azure AD are performed over an authenticated protected channel to provide resistance to MitM attacks. This satisfies the MitM resistance requirements for AAL1, AAL2 and AAL3.
+
+## Verifier impersonation resistance
+
+All Azure AD authentication methods that meet AAL3 leverage cryptographic authenticators that bind the authenticator output to the specific session being authenticated. They do this by using a private key controlled by the claimant for which the public key is known to the verifier. This satisfies the verifier impersonation resistance requirements for AAL3.
+
+## Verifier compromise resistance
+
+All Azure AD authentication methods that meet AAL3 either use a cryptographic authenticator that requires the verifier store a public key corresponding to a private key held by the authenticator or store the expected authenticator output using FIPS 140 validated hash algorithms. You can find more details under [Azure AD Data Security Considerations](https://aka.ms/AADDataWhitepaper).
+
+## Replay resistance
+
+All Azure AD authentication methods at AAL3 use either a nonce or a challenge. They are resistant to replay attacks because the verifier easily detects replayed authentication transactions: such transactions will not contain the appropriate nonce or timeliness data.
+
+## Authentication intent
+
+The goal of authentication intent is to make it more difficult for directly connected physical authenticators (e.g., multi-factor cryptographic devices) to be used without the subjectΓÇÖs knowledge, such as by malware on the endpoint.
+
+NIST allows the use of compensating controls for mitigating malware risk. Any Intune compliant device running Windows Defender System Guard and Windows Defender ATP meets this mitigation requirement.
+
+## Next Steps
+
+[NIST overview](nist-overview.md)
+
+[Learn about AALs](nist-about-authenticator-assurance-levels.md)
+
+[Authentication basics](nist-authentication-basics.md)
+
+[NIST authenticator types](nist-authenticator-types.md)
+
+[Achieving NIST AAL1 with Azure AD](nist-authenticator-assurance-level-1.md)
+
+[Achieving NIST AAL2 with Azure AD](nist-authenticator-assurance-level-2.md)
+
+[Achieving NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
active-directory Nist Authenticator Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/nist-authenticator-types.md
+
+ Title: NIST Authenticator Types and aligned Azure Active Directory methods
+description: Explanations of how Azure Active Directory authentication methods align with NIST authenticator types.
+++++++++ Last updated : 4/26/2021++++
+# NIST Authenticator Types and aligned Azure Active Directory methods
+
+The authentication process begins when a claimant asserts its control of one or more authenticators that are associated with a subscriber. The subscriber may be a person or another entity.
+
+| NIST Authenticator Type| Azure AD Authentication Methods |
+| - | - |
+| Memorized secret <br> (Something you know)| Password (Cloud accounts) <br>Password (Federated)<br> Password (Password Hash Sync)<br>Password (Passthrough Authentication) |
+|Look-up secret <br> (Something you have)| None. A lookup secret is by definition data not held in a system. |
+|Out-of-band <br>(Something you have)| Phone (SMS) - not recommended |
+| Single-factor one-time password <br>(Something you have)| Microsoft Authenticator App (One-time password) <br>Single-factor one-time password (through OTP manufacturers)<sup data-htmlnode="">1</sup> |
+| Multi-factor one-time password<br>(Something you have + something you know or something you are)| Multi-factor one-time password (through OTP manufacturers) <sup data-htmlnode="">1</sup>|
+|Single-factor crypto software<br>(Something you have)|Compliant mobile device <br> Microsoft Authenticator App (Notification) <br> Hybrid Azure AD Joined<sup data-htmlnode="">2</sup> *with software TPM*<br> Azure AD joined<sup data-htmlnode="">2</sup> *with software TPM* |
+| Single-factor crypto hardware <br>(Something you have) | Azure AD joined<sup data-htmlnode="">2</sup> *with hardware TPM* <br> Hybrid Azure AD Joined<sup data-htmlnode="">2</sup> *with hardware TPM*|
+|Multi-factor crypto software<br>(Something you have + something you know or something you are) | Microsoft Authenticator app for iOS (Passwordless)<br> Windows Hello for Business *with software TPM* |
+|Multi-factor crypto hardware <br>(Something you have + something you know or something you are) |Microsoft Authenticator app for Android (Passwordless)<br> Windows Hello for Business *with hardware TPM*<br> Smartcard (Federated identity provider) <br> FIDO2 security key |
++
+<sup data-htmlnode="">1</sup> OATH-TOTP SHA-1 tokens of the 30-second or 60-second variety.
+
+<sup data-htmlnode="">2</sup> For more information on device join states, see [Azure AD device identity documentation](https://docs.microsoft.com/azure/active-directory/devices/).
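To make footnote 1 concrete, an OATH-TOTP code with SHA-1 and a 30-second step is computed as in this RFC 6238-style sketch. This is the standard algorithm, not vendor-specific code; the test values come from the RFC 4226/6238 appendices.

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA-1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from the current time step."""
    return hotp(key, unix_time // step, digits)

# RFC test vectors, ASCII secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))   # 755224
print(totp(b"12345678901234567890", 59))  # 287082
```

A 60-second token simply uses `step=60`, so each code stays valid for a longer window.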
+
+## Why SMS isn't recommended
+
+SMS text messages meet the NIST standard, but NIST doesn't recommend them. Device swap, SIM changes, number porting, and other behaviors can cause issues; when these actions are taken maliciously, they can result in an insecure experience. Although SMS messages aren't recommended, they're better than using a password alone, because they require more effort from attackers.
+
+## Next Steps
+
+[NIST overview](nist-overview.md)
+
+[Learn about AALs](nist-about-authenticator-assurance-levels.md)
+
+[Authentication basics](nist-authentication-basics.md)
+
+[NIST authenticator types](nist-authenticator-types.md)
+
+[Achieving NIST AAL1 with Azure AD](nist-authenticator-assurance-level-1.md)
+
+[Achieving NIST AAL2 with Azure AD](nist-authenticator-assurance-level-2.md)
+
+[Achieving NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
active-directory Nist Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/nist-overview.md
+
+ Title: Achieving NIST Authenticator Assurance Levels with Azure Active Directory
+description: An overview of configuring Azure Active Directory to meet NIST authenticator assurance levels.
+++++++++ Last updated : 4/26/2021++++
+# Configure Azure Active Directory to meet NIST Authenticator Assurance Levels
+
+Providing services for federal agencies is complicated by the number and complexity of standards that you must meet. As a cloud service provider (CSP) or federal agency, it is your responsibility to ensure compliance with all relevant standards. Azure and Azure Active Directory make this easier by enabling you to leverage our certifications, and then configure your specific requirements.
+Azure is certified for 90+ compliance offerings. See [Trust your cloud](https://azure.microsoft.com/overview/trusted-cloud/) for details on Azure compliance and certifications.
+
+## Why meet NIST standards?
+
+The National Institute of Standards and Technology (NIST) develops the technical requirements for US federal agencies implementing identity solutions. Organizations working with federal agencies must also meet these requirements. The NIST Identity requirements are found in the document [Special Publication 800-63 Revision 3](https://pages.nist.gov/800-63-3/sp800-63-3.html) (NIST SP 800-63-3).
+
+NIST SP 800-63 is also referenced by:
+* The Electronic Prescriptions for Controlled Substances ([EPCS](https://deadiversion.usdoj.gov/ecomm/e_rx/)) program
+* [Financial Industry Regulatory Authority (FINRA) requirements](https://www.finra.org/rules-guidance)
+
+In addition, healthcare, defense, and other industry associations often use NIST SP 800-63-3 as a baseline for identity and access management (IAM) requirements.
+
+NIST guidelines are referenced in other standards, most notably the Federal Risk and Authorization Management Program (FedRAMP) for CSPs. Azure is FedRAMP High Impact certified.
+
+The NIST digital identity guidelines cover proofing and authentication of users such as employees, partners, suppliers, and customers or citizens.
+
+NIST SP 800-63-3 digital identity guidelines encompass three areas:
+
+* [SP 800-63A](https://pages.nist.gov/800-63-3/sp800-63a.html) covers Enrollment & Identity Proofing
+
+* [SP 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html) covers Authentication & Lifecycle management
+
+* [SP 800-63C](https://pages.nist.gov/800-63-3/sp800-63c.html) covers Federation & Assertions
+
+Each area has mapped out assurance levels. This article set provides guidance for attaining the Authenticator Assurance Levels (AALs) in NIST SP 800-63B by using the Azure Active Directory and other Microsoft solutions.
+
+## Next Steps
+
+[Learn about AALs](nist-about-authenticator-assurance-levels.md)
+
+[Authentication basics](nist-authentication-basics.md)
+
+[NIST authenticator types](nist-authenticator-types.md)
+
+[Achieving NIST AAL1 with Azure AD](nist-authenticator-assurance-level-1.md)
+
+[Achieving NIST AAL2 with Azure AD](nist-authenticator-assurance-level-2.md)
+
+[Achieving NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
active-directory Standards Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/standards-overview.md
+
+ Title: Azure Active Directory identity standards overview
+description: You can configure Azure Active directory to meet governmental and industry standards for identity management.
+++++++++ Last updated : 4/26/2021++++
+# Configure Azure Active Directory to meet identity standards
+
+In today's world of interconnected infrastructures, compliance with governmental and industry frameworks and standards is often mandatory.
+
+Compliance frameworks can be extremely complex. Microsoft engages with governments, regulators, and standards bodies to understand and meet compliance needs in its Azure platform. You can take advantage of more than [90 Azure compliance certifications](https://docs.microsoft.com/azure/compliance). Our compliance offerings include many specific to global regions and countries. Azure also offers 35 compliance offerings specific to key industries, including health, government, finance, education, manufacturing, and media.
+
+## Azure compliance provides a head start
+
+Compliance is a shared responsibility among Microsoft, Cloud service providers (CSPs), and organizations. You can rely on Azure's compliance certifications as a basis for your compliance, and then configure Azure Active Directory to meet identity standards.
+
+Cloud service providers (CSPs), governmental agencies, and those who work with them must often meet stringent standards for one or more governments, such as:
+* [US Federal Risk and Authorization Management Program (FedRAMP)](https://docs.microsoft.com/azure/compliance/offerings/offering-fedramp)
+* [National Institute of Standards and Technology (NIST)](https://docs.microsoft.com/azure/compliance/offerings/offering-nist-800-53)
+
+CSPs and organizations in industries such as healthcare and finance must also meet industry standards, such as:
+* [HIPAA](https://docs.microsoft.com/azure/compliance/offerings/offering-hipaa-us)
+* [Sarbanes-Oxley (SOX)](https://docs.microsoft.com/azure/compliance/offerings/offering-sox-us)
+
+To learn more about supported compliance frameworks, see [Azure compliance offerings](https://docs.microsoft.com/azure/compliance/offerings/).
+
+## Next steps
+
+[Configure Azure Active Directory to achieve NIST authenticator assurance levels](nist-overview.md)
+
+[Configure Azure Active Directory to meet FedRAMP High Impact level](configure-azure-active-directory-for-fedramp-high-impact.md)
aks Kubernetes Walkthrough Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-rm-template.md
For more information about creating SSH keys, see [Create and manage SSH keys fo
## Review the template
-The template used in this quickstart is from [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/101-aks/).
+The template used in this quickstart is from [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/aks/).
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.kubernetes/aks/azuredeploy.json":::
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
AKS uses several managed identities for built-in services and add-ons.
| Identity | Name | Use case | Default permissions | Bring your own identity |
|---|---|---|---|---|
-| Control plane | not visible | Used by AKS control plane components to manage cluster resources including ingress load balancers and AKS managed public IPs, and Cluster Autoscaler operations | Contributor role for Node resource group | supported
-| Kubelet | AKS Cluster Name-agentpool | Authentication with Azure Container Registry (ACR) | NA (for kubernetes v1.15+) | Not currently supported
+| Control plane | not visible | Used by AKS control plane components to manage cluster resources including ingress load balancers and AKS managed public IPs, and Cluster Autoscaler operations | Contributor role for Node resource group | Supported
+| Kubelet | AKS Cluster Name-agentpool | Authentication with Azure Container Registry (ACR) | NA (for kubernetes v1.15+) | Supported (Preview)
| Add-on | AzureNPM | No identity required | NA | No |
| Add-on | AzureCNI network monitoring | No identity required | NA | No |
| Add-on | azure-policy (gatekeeper) | No identity required | NA | No |
api-management Add Api Manually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/add-api-manually.md
description: This tutorial shows you how to use API Management (APIM) to add an
documentationcenter: '' - - Previously updated : 04/20/2020 Last updated : 04/26/2021
This section shows how to add a "/get" operation in order to map it to the back
1. Select the API you created in the previous step. 2. Click **+ Add Operation**.
-3. In the **URL**, select **GET** and enter "*/get*" in the resource.
+3. In the **URL**, select **GET** and enter `/get` in the resource.
4. Enter "*FetchData*" for **Display name**. 5. Select **Save**.
This section shows how to add an operation that takes a parameter. In this case,
1. Select the API you created in the previous step. 2. Click **+ Add Operation**.
-3. In the **URL**, select **GET** and enter "*/status/{code}*" in the resource. Optionally, you can provide some information associated with this parameter. For example, enter "*Number*" for **TYPE**, "*200*" (default) for **VALUES**.
-4. Enter "GetStatus" for **Display name**.
+3. In the **URL**, select **GET** and enter `/status/{code}` in the resource. Optionally, you can provide some information associated with this parameter. For example, enter "*Number*" for **TYPE**, "*200*" (default) for **VALUES**.
+4. Enter "*GetStatus*" for **Display name**.
+5. Select **Save**. ### Test the operation
This section shows how to add an operation that takes a parameter. In this case,
Test the operation in the Azure portal. Alternatively, you can test it in the **Developer portal**. 1. Select the **Test** tab.
-2. Select **GetStatus**. By default the code value is set to "*200*". You can change it to test other values. For example, type "*418*".
+2. Select **GetStatus**. By default, the code value is set to "*200*". You can change it to test other values. For example, type "*418*".
3. Press **Send**. The response that the "http://httpbin.org/status/200" operation generates appears. If you want to transform your operations, see [Transform and protect your API](transform-api.md).
+## Add and test a wildcard operation
+
+This section shows how to add a wildcard operation. A wildcard operation lets you pass an arbitrary value with an API request. Instead of creating separate GET operations as shown in the previous sections, you could create a wildcard GET operation.
+
+### Add the operation
+
+1. Select the API you created in the previous step.
+2. Click **+ Add Operation**.
+3. In the **URL**, select **GET** and enter `/*` in the resource.
+4. Enter "*WildcardGet*" for **Display name**.
+5. Select **Save**.
+
+### Test the operation
+
+Test the operation in the Azure portal. Alternatively, you can test it in the **Developer portal**.
+
+1. Select the **Test** tab.
+2. Select **WildcardGet**. Try one or more of the GET operations that you tested in previous sections, or try a different supported GET operation.
+
+ For example, in **Template parameters**, update the value next to the wildcard (*) name to `headers`. The operation returns the incoming request's HTTP headers.
+1. Press **Send**.
+
+ The response that the "http://httpbin.org/headers" operation generates appears. If you want to transform your operations, see [Transform and protect your API](transform-api.md).
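Calling the wildcard operation from code is just a matter of appending the desired path to the API's base URL and supplying the subscription key header. In this sketch, the gateway URL and subscription key are placeholders for your own instance's values:

```python
import urllib.request

# Placeholders: substitute your API Management gateway URL and subscription key.
APIM_BASE = "https://contoso.azure-api.net/httpbin"
SUBSCRIPTION_KEY = "<your-subscription-key>"

# The wildcard (/*) operation forwards the trailing path, so this request
# reaches the backend's /headers operation through the gateway.
req = urllib.request.Request(
    APIM_BASE + "/headers",
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
)
# urllib.request.urlopen(req)  # uncomment to send once real values are set
```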
+ [!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-append-apis.md)] [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]
api-management Api Management Api Import Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-api-import-restrictions.md
If you're receiving errors importing your OpenAPI document, make sure you've val
### <a name="open-import-export-general"> </a>General -- API definitions exported from API Management service are primarily intended for applications external to API Management service that need to call the API hosted in API Management service. Exported API definitions are not intended to be imported again into the same or different API Management service. For configuration management of API defiitions across different serivces/envionments, please refer to documentation regarding using API Management Service with Git.
+- API definitions exported from API Management service are primarily intended for applications external to API Management service that need to call the API hosted in API Management service. Exported API definitions are not intended to be imported again into the same or different API Management service. For configuration management of API definitions across different services/environments, please refer to documentation regarding using API Management Service with Git.
### Add new API via OpenAPI import
api-management Developer Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/developer-portal-faq.md
Last updated 04/15/2021-++ # API Management developer portal - frequently asked questions
api-management Import Api App As Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/import-api-app-as-api.md
- Title: Import an API App as an API with the Azure portal | Microsoft Docs
-description: This article shows you how to use API Management (APIM) to import API App as an API.
------- Previously updated : 04/22/2020---
-# Import an API App as an API
-
-This article shows how to import an API App as an API. The article also shows how to test the APIM API.
-
-In this article, you learn how to:
-
-> [!div class="checklist"]
-> * Import an API App as an API
-> * Test the API in the Azure portal
-> * Test the API in the Developer portal
-
-## Prerequisites
-
-+ Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
-+ Make sure there is an API App in your subscription. For more information, see [App Service Documentation](../app-service/index.yml)
--
-## <a name="create-api"> </a>Import and publish a back-end API
-
-1. Navigate to your API Management service in the Azure portal and select **APIs** from the menu.
-2. Select **API App** from the **Add a new API** list.
-
- ![API app](./media/import-api-app-as-api/api-app.png)
-3. Press **Browse** to see the list of API Apps in your subscription.
-4. Select the app. APIM finds the swagger associated with the selected app, fetches it, and imports it.
-
- In case APIM does not find swagger, it exposes the API as a "pass-through" API.
-5. Add an API URL suffix. The suffix is a name that identifies this specific API in this APIM instance. It has to be unique in this APIM instance.
-6. Publish the API by associating the API with a product. In this case, the "*Unlimited*" product is used. If you want for the API to be published and be available to developers, add it to a product. You can do it during API creation or set it later.
-
- Products are associations of one or more APIs. You can include a number of APIs and offer them to developers through the developer portal. Developers must first subscribe to a product to get access to the API. When they subscribe, they get a subscription key that is good for any API in that product. If you created the APIM instance, you are an administrator already, so you are subscribed to every product by default.
-
- By default, each API Management instance comes with two sample products:
-
- * **Starter**
- * **Unlimited**
-7. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab. The settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
-8. Select **Create**.
-
-## Test the new API in the Azure portal
-
-Operations can be called directly from the Azure portal, which provides a convenient way to view and test the operations of an API.
-
-1. Select the API you created in the previous step.
-2. Press the **Test** tab.
-3. Select some operation.
-
- The page displays fields for query parameters and fields for the headers. One of the headers is "Ocp-Apim-Subscription-Key", for the subscription key of the product that is associated with this API. If you created the APIM instance, you are an administrator already, so the key is filled in automatically.
-1. Press **Send**.
-
- Backend responds with **200 OK** and some data.
---
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Transform and protect a published API](transform-api.md)
api-management Import App Service As Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/import-app-service-as-api.md
+
+ Title: Import Azure Web App to Azure API Management | Microsoft Docs
+description: This article shows you how to use Azure API Management to import a web API hosted in Azure App Service.
+
+documentationcenter: ''
++++ Last updated : 04/27/2021+++
+# Import an Azure Web App as an API
+
+This article shows how to import an Azure Web App to Azure API Management and test the imported API, using the Azure portal.
+
+> [!NOTE]
+> You can use the API Management Extension for Visual Studio Code to import and manage your APIs. Follow the [API Management Extension tutorial](visual-studio-code-tutorial.md) to install and get started.
+
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Import a Web App hosted in App Service
+> * Test the API in the Azure portal
+
+## Expose Web App with API Management
+
+[Azure App Service](../app-service/overview.md) is an HTTP-based service for hosting web applications, REST APIs, and mobile backends. API developers can use their preferred technology stacks and pipelines to develop APIs and publish their API backends as Web Apps in a secure, scalable environment. Then, use API Management to expose the Web Apps, manage and protect the APIs throughout their lifecycle, and publish them to consumers.
+
+API Management is the recommended environment to expose a Web App-hosted API, for several reasons:
+
+* Decouple managing and securing the front end exposed to API consumers from managing and monitoring the backend Web App
+* Manage web APIs hosted as Web Apps in the same environment as your other APIs
+* Apply [policies](api-management-policies.md) to change API behavior, such as call rate limiting
+* Direct API consumers to API Management's customizable [developer portal](api-management-howto-developer-portal.md) to discover and learn about your APIs, request access, and try them
+
+For more information, see [About API Management](api-management-key-concepts.md).
+
+## OpenAPI specification versus wildcard operations
+
+API Management supports import of Web Apps hosted in App Service that include an OpenAPI specification (Swagger definition). However, an OpenAPI specification isn't required.
+
+* If the Web App has an OpenAPI specification configured in an API definition, API Management creates API operations that map directly to the definition, including required paths, parameters, and response types.
+
+ Having an OpenAPI specification is recommended, because the API is imported to API Management with high fidelity, giving you flexibility to validate, manage, secure, and update configurations for each operation separately.
+
+* If an OpenAPI specification isn't provided, API Management generates [wildcard operations](add-api-manually.md#add-and-test-a-wildcard-operation) for the common HTTP verbs (GET, PUT, and so on). Append a required path or parameters to a wildcard operation to pass an API request through to the backend API.
+
+ With wildcard operations, you can still take advantage of the same API Management features, but operations aren't defined at the same level of detail by default. In either case, you can [edit](edit-api.md) or [add](add-api-manually.md) operations to the imported API.
+
+### Example
+Your backend Web App might support two GET operations:
+* `https://myappservice.azurewebsites.net/customer/{id}`
+* `https://myappservice.azurewebsites.net/customers`
+
+You import the Web App to your API Management service at a path such as `https://contosoapi.azureapi.net/store`. The following table shows the operations that are imported to API Management, either with or without an OpenAPI specification:
+
+| Type |Imported operations |Sample requests |
+||||
+|OpenAPI specification | `GET /customer/{id}`<br/><br/> `GET /customers` | `GET https://contosoapi.azureapi.net/store/customer/1`<br/><br/>`GET https://contosoapi.azureapi.net/store/customers` |
+|Wildcard | `GET /*` | `GET https://contosoapi.azureapi.net/store/customer/1`<br/><br/>`GET https://contosoapi.azureapi.net/store/customers` |
+
+The wildcard operation allows the same requests to the backend service as the operations in the OpenAPI specification. However, the OpenAPI-specified operations can be managed separately in API Management.
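The forwarding behavior in the example can be sketched in a few lines of Python (a minimal illustration using the hypothetical hostnames from the table above; the real gateway logic is internal to API Management):

```python
# Sketch of how a request to the API Management gateway maps to the
# backend Web App. Hostnames and the /store API URL suffix are the
# hypothetical values from the example above.

APIM_BASE = "https://contosoapi.azureapi.net/store"
BACKEND_BASE = "https://myappservice.azurewebsites.net"

def to_backend_url(request_url: str) -> str:
    """Map an API Management request URL to the backend Web App URL."""
    if not request_url.startswith(APIM_BASE):
        raise ValueError("request does not match this API's URL suffix")
    # The remainder after the API path is appended to the backend base.
    # With an OpenAPI import the remainder must match a defined operation;
    # with a wildcard import any remainder is passed through.
    remainder = request_url[len(APIM_BASE):]
    return BACKEND_BASE + remainder

print(to_backend_url("https://contosoapi.azureapi.net/store/customer/1"))
# https://myappservice.azurewebsites.net/customer/1
```

Either import type produces the same backend request for these URLs; the difference is only in how the operations are represented and managed in API Management.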
+
+## Prerequisites
++ Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
++ Make sure there is an App Service in your subscription. For more information, see [App Service documentation](../app-service/index.yml).
+
+  For steps to create an example web API and publish it as an Azure Web App, see:
+
+ * [Tutorial: Create a web API with ASP.NET Core](/aspnet/core/tutorials/first-web-api)
+ * [Publish an ASP.NET Core app to Azure with Visual Studio Code](/aspnet/core/tutorials/publish-to-azure-webapp-using-vscode)
++
+## <a name="create-api"> </a>Import and publish a backend API
+
+> [!TIP]
+> The following steps start the import by using Azure API Management in the Azure portal. You can also link to API Management directly from your Web App, by selecting **API Management** from the app's **API** menu.
+
+1. Navigate to your API Management service in the Azure portal and select **APIs** from the menu.
+1. Select **App Service** from the list.
+
+ :::image type="content" source="media/import-app-service-as-api/app-service.png" alt-text="Create from App Service":::
+1. Select **Browse** to see the list of App Services in your subscription.
+1. Select an App Service. If an OpenAPI definition is associated with the selected Web App, API Management fetches it and imports it.
+
+ If an OpenAPI definition isn't found, API Management exposes the API by generating wildcard operations for common HTTP verbs.
+1. Add an API URL suffix. The suffix is a name that identifies this specific API in this API Management instance, and it must be unique within the instance.
+1. Publish the API by associating it with a product. In this case, the "*Unlimited*" product is used. If you want the API to be published and available to developers, add it to a product. You can do this during API creation or set it later.
+
+ > [!NOTE]
+ > Products are associations of one or more APIs. You can include many APIs and offer them to developers through the developer portal. Developers must first subscribe to a product to get access to the API. When they subscribe, they get a subscription key that is good for any API in that product. If you created the APIM instance, you are an administrator already, so you are subscribed to every product by default.
+ >
+ > By default, each API Management instance comes with two sample products:
+ > * **Starter**
+ > * **Unlimited**
+1. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab. The settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
+1. Select **Create**.
+ :::image type="content" source="media/import-app-service-as-api/import-app-service.png" alt-text="Create API from App Service":::
+
+## Test the new API in the Azure portal
+
+Operations can be called directly from the Azure portal, which provides a convenient way to view and test the operations of an API. You can also test the API in the [developer portal](api-management-howto-developer-portal.md) or using your own REST client tools.
+
+1. Select the API you created in the previous step.
+1. Select the **Test** tab.
+1. Select an operation.
+
+ The page displays fields for query parameters and fields for the headers. One of the headers is "Ocp-Apim-Subscription-Key", for the subscription key of the product that is associated with this API. If you created the API Management instance, you are an administrator already, so the key is filled in automatically.
+1. Press **Send**.
+
+ When the test is successful, the backend responds with **200 OK** and some data.
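The portal test simply issues an HTTP request with the subscription key header. Any HTTP client can do the same. A minimal Python sketch, using placeholder values for the gateway URL and subscription key:

```python
import urllib.request

# Placeholder values -- substitute your own gateway URL and key.
url = "https://contosoapi.azureapi.net/store/customers"
subscription_key = "<your-subscription-key>"

# The product subscription key is sent in the Ocp-Apim-Subscription-Key
# header, as shown in the portal's Test tab.
req = urllib.request.Request(
    url,
    headers={"Ocp-Apim-Subscription-Key": subscription_key},
    method="GET",
)

# urllib normalizes header names internally.
print(req.get_header("Ocp-apim-subscription-key"))
```

To actually send the request, pass `req` to `urllib.request.urlopen`; a successful call returns the backend's **200 OK** response.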
+
+### Test wildcard operation in the portal
+
+When wildcard operations are generated, the operations might not map directly to the backend API. For example, a wildcard GET operation imported in API Management uses the path `/` by default. However, your backend API might support a GET operation at the following path:
+
+`/api/TodoItems`
+
+You can test the path `/api/TodoItems` as follows.
+
+1. Select the API you created, and select the operation.
+1. Select the **Test** tab.
+1. In **Template parameters**, update the value next to the wildcard (*) name. For example, enter `api/TodoItems`. This value gets appended to the path `/` for the wildcard operation.
+
+ :::image type="content" source="media/import-app-service-as-api/test-wildcard-operation.png" alt-text="Test wildcard operation":::
+1. Select **Send**.
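The substitution in step 3 amounts to appending the template parameter value to the wildcard operation's `/` path. A small sketch (the backend hostname is the hypothetical value from the earlier example):

```python
# Sketch of wildcard template substitution: the value entered for the
# wildcard (*) template parameter is appended to the operation's "/" path.
def wildcard_path(template_value: str) -> str:
    # Normalize so "api/TodoItems" and "/api/TodoItems" behave the same.
    return "/" + template_value.lstrip("/")

# The gateway then forwards the request to the backend at this path.
backend = "https://myappservice.azurewebsites.net"
print(backend + wildcard_path("api/TodoItems"))
# https://myappservice.azurewebsites.net/api/TodoItems
```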
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Transform and protect a published API](transform-api.md)
api-management Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/zone-redundancy.md
Previously updated : 04/13/2021 Last updated : 04/28/2021
Configuring API Management for zone redundancy is currently supported in the fol
* Brazil South * Canada Central * Central India
+* Central US
* East US * East US 2 * France Central * Japan East
+* North Europe
* South Central US * Southeast Asia * UK South
app-service Tutorial Send Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-send-email.md
Deploy an app with the language framework of your choice to App Service. To foll
-## Create the Logic App
+## Create the logic app
-1. In the [Azure portal](https://portal.azure.com), create an empty logic app by following the instructions in [Create your logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md#create-your-logic-app). When you see the **Logic Apps Designer**, return to this tutorial.
+1. In the [Azure portal](https://portal.azure.com), create an empty logic app by following the instructions in [Create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md). When you see the **Logic Apps Designer**, return to this tutorial.
1. In the splash page for Logic Apps Designer, select **When an HTTP request is received** under **Start with a common trigger**. ![Screenshot that shows the splash page for the Logic Apps Designer with When an H T T P request is received highlighted.](./media/tutorial-send-email/receive-http-request.png) 1. In the dialog for **When an HTTP request is received**, select **Use sample payload to generate schema**.
- ![Screenshot that shows the When an H T T P request dialog box and the Use sample payload to generate schema opion selected. ](./media/tutorial-send-email/generate-schema-with-payload.png)
+ ![Screenshot that shows the When an H T T P request dialog box and the Use sample payload to generate schema option selected. ](./media/tutorial-send-email/generate-schema-with-payload.png)
1. Copy the following sample JSON into the textbox and select **Done**.
Deploy an app with the language framework of your choice to App Service. To foll
This HTTP request definition is a trigger to anything you want to do in this logic app, be it Gmail or anything else. Later you will invoke this URL in your App Service app. For more information on the request trigger, see the [HTTP request/response reference](../connectors/connectors-native-reqres.md).
-1. At the bottom of the designer, click **New step**, type **Gmail** in the actions search box and find and select **Send email (V2)**.
+1. At the bottom of the designer, click **New step**, type **Gmail** in the actions search box. Find and select **Send email (V2)**.
> [!TIP] > You can search for other types of integrations, such as SendGrid, MailChimp, Microsoft 365, and SalesForce. For more information, see [Logic Apps documentation](../logic-apps/index.yml).
app-service Webjobs Sdk How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/webjobs-sdk-how-to.md
static async Task Main()
} ```
-For more details, see the [Service Bus binding](../azure-functions/functions-bindings-service-bus-output.md#hostjson-settings) article.
+For more details, see the [Service Bus binding](../azure-functions/functions-bindings-service-bus.md#hostjson-settings) article.
### Configuration for other bindings
automation Automation Runbook Execution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-execution.md
Title: Runbook execution in Azure Automation
description: This article provides an overview of the processing of runbooks in Azure Automation. Previously updated : 03/23/2021 Last updated : 04/28/2021
The [Log Analytics agent for Linux](../azure-monitor/agents/agent-linux.md) work
The **nxautomation** account with the corresponding sudo permissions must be present during [installation of a Linux Hybrid Runbook worker](automation-linux-hrw-install.md). If you try to install the worker and the account is not present or doesn't have the appropriate permissions, the installation fails.
-You should not change the permissions of the `sudoers.d` folder or its ownership. Sudo permission is required for the **nxautomation** account and the permissions should not be removed. Restricting this to certain folders or commands may result in a breaking change.
+Do not change the permissions of the `sudoers.d` folder or its ownership. Sudo permission is required for the **nxautomation** account and the permissions should not be removed. Restricting this to certain folders or commands may result in a breaking change.
The logs available for the Log Analytics agent and the **nxautomation** account are:
A runbook needs permissions for authentication to Azure, through credentials. Se
## Modules
-Azure Automation supports a number of default modules, including some AzureRM modules (AzureRM.Automation) and a module containing several internal cmdlets. Also supported are installable modules, including the Az modules (Az.Automation), currently being used in preference to AzureRM modules. For details of the modules that are available for your runbooks and DSC configurations, see [Manage modules in Azure Automation](shared-resources/modules.md).
+Azure Automation includes the following PowerShell modules:
+
+* Orchestrator.AssetManagement.Cmdlets - contains several internal cmdlets that are only available when you execute runbooks in the Azure sandbox environment or on a Windows Hybrid Runbook Worker. These cmdlets are designed to be used instead of Azure PowerShell cmdlets to interact with your Automation account resources.
+* Az.Automation - the recommended PowerShell module for interacting with Azure Automation, which replaces the AzureRM Automation module. The Az.Automation module is not automatically included when you create an Automation account; you need to import it manually.
+* AzureRM.Automation - installed by default when you create an Automation account.
+
+Installable modules are also supported, based on the cmdlets that your runbooks and DSC configurations require. For details of the modules that are available for your runbooks and DSC configurations, see [Manage modules in Azure Automation](shared-resources/modules.md).
## Certificates
External services, for example, Azure DevOps Services and GitHub, can start a ru
To share resources among all runbooks in the cloud, Azure uses a concept called fair share. Using fair share, Azure temporarily unloads or stops any job that has run for more than three hours. Jobs for [PowerShell runbooks](automation-runbook-types.md#powershell-runbooks) and [Python runbooks](automation-runbook-types.md#python-runbooks) are stopped and not restarted, and the job status becomes Stopped.
-For long-running Azure Automation tasks, it's recommended to use a Hybrid Runbook Worker. Hybrid Runbook Workers aren't limited by fair share, and don't have a limitation on how long a runbook can execute. The other job [limits](../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits) apply to both Azure sandboxes and Hybrid Runbook Workers. While Hybrid Runbook Workers aren't limited by the three hour fair share limit, you should develop runbooks to run on the workers that support restarts from unexpected local infrastructure issues.
+For long-running Azure Automation tasks, it's recommended to use a Hybrid Runbook Worker. Hybrid Runbook Workers aren't limited by fair share, and don't have a limitation on how long a runbook can execute. The other job [limits](../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits) apply to both Azure sandboxes and Hybrid Runbook Workers. While Hybrid Runbook Workers aren't limited by the three-hour fair share limit, you should develop runbooks to run on the workers that support restarts from unexpected local infrastructure issues.
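The fair-share behavior described above can be modeled as a simple decision (a toy sketch, not Azure code; the "unloaded" branch for other runbook types is an assumption based on the general "unloads or stops" description):

```python
FAIR_SHARE_LIMIT_HOURS = 3  # Azure sandbox fair share limit

def fair_share_action(elapsed_hours: float, runbook_type: str) -> str:
    """Toy model of the fair-share behavior described above."""
    if elapsed_hours <= FAIR_SHARE_LIMIT_HOURS:
        return "keep running"
    # PowerShell and Python runbook jobs are stopped and not restarted;
    # the job status becomes Stopped.
    if runbook_type in ("PowerShell", "Python"):
        return "Stopped"
    # Assumption: other job types are temporarily unloaded rather than
    # stopped, per the general description of fair share.
    return "unloaded"

print(fair_share_action(3.5, "Python"))
# Stopped
```

Hybrid Runbook Workers are not subject to this limit, so no equivalent check applies to jobs running on them.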
Another option is to optimize a runbook by using child runbooks. For example, your runbook might loop through the same function on several resources, for example, with a database operation on several databases. You can move this function to a [child runbook](automation-child-runbooks.md) and have your runbook call it using [Start-AzAutomationRunbook](/powershell/module/az.automation/start-azautomationrunbook). Child runbooks execute in parallel in separate processes.
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-security-overview.md
description: This article provides an overview of Azure Automation account authe
keywords: automation security, secure automation; automation authentication Previously updated : 04/14/2021 Last updated : 04/29/2021
To learn more about the Azure Resource Manager and Classic deployment models, se
>[!NOTE] >Azure Cloud Solution Provider (CSP) subscriptions support only the Azure Resource Manager model. Non-Azure Resource Manager services are not available in the program. When you are using a CSP subscription, the Azure Classic Run As account is not created, but the Azure Run As account is created. To learn more about CSP subscriptions, see [Available services in CSP subscriptions](/azure/cloud-solution-provider/overview/azure-csp-available-services).
-When you create an Automation account, the Run As account is created by default at the same time. If you chose not to create it along with the Automation account, it can be created individually at a later time. An Azure Classic Run As Account is optional, and is created separately if you need to manage classic resources.
+When you create an Automation account, the Run As account is created by default at the same time, with a self-signed certificate. If you chose not to create it along with the Automation account, it can be created individually at a later time. An Azure Classic Run As Account is optional, and is created separately if you need to manage classic resources.
+
+If you want to use a certificate issued by your enterprise or third-party certification authority (CA) instead of the default self-signed certificate, you can use the [PowerShell script to create a Run As account](create-run-as-account.md#powershell-script-to-create-a-run-as-account) option for your Run As and Classic Run As accounts.
+ > [!VIDEO https://www.microsoft.com/videoplayer/embed/RWwtF3]
automation Create Run As Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/create-run-as-account.md
Title: Create an Azure Automation Run As account
-description: This article tells how to create a Run As account with PowerShell or from the Azure portal.
+description: This article tells how to create an Azure Automation Run As account with PowerShell or from the Azure portal.
Previously updated : 01/06/2021 Last updated : 04/29/2021
Run As accounts in Azure Automation provide authentication for managing resources on the Azure Resource Manager or Azure Classic deployment model using Automation runbooks and other Automation features. This article describes how to create a Run As or Classic Run As account from the Azure portal or Azure PowerShell.
+When you create the Run As or Classic Run As account in the Azure portal, by default it uses a self-signed certificate. If you want to use a certificate issued by your enterprise or third-party certification authority (CA), you can use the [PowerShell script to create a Run As account](#powershell-script-to-create-a-run-as-account).
+ ## Create account in Azure portal Perform the following steps to update your Azure Automation account in the Azure portal. The Run As and Classic Run As accounts are created separately. If you don't need to manage classic resources, you can just create the Azure Run As account.
To get the values for `AutomationAccountName`, `SubscriptionId`, and `ResourceGr
The PowerShell script includes support for several configurations.
-* Create a Run As account by using a self-signed certificate.
* Create a Run As account and/or a Classic Run As account by using a self-signed certificate.
-* Create a Run As account and/or a Classic Run As account by using a certificate issued by your enterprise certification authority (CA).
+* Create a Run As account and/or a Classic Run As account by using a certificate issued by your enterprise or third-party certification authority (CA).
* Create a Run As account and/or a Classic Run As account by using a self-signed certificate in the Azure Government cloud. 1. Download and save the script to a local folder using the following command.
The PowerShell script includes support for several configurations.
## Next steps
-* To learn more about graphical authoring, see [Author graphical runbooks in Azure Automation](automation-graphical-authoring-intro.md).
* To get started with PowerShell runbooks, see [Tutorial: Create a PowerShell runbook](learn/automation-tutorial-runbook-textual-powershell.md).
+* To get started with a Python 3 runbook, see [Tutorial: Create a Python 3 runbook](learn/automation-tutorial-runbook-textual-python-3.md).
automation Enable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/enable-managed-identity-for-automation.md
Title: Enable a managed identity for your Azure Automation account (preview)
description: This article describes how to set up managed identity for Azure Automation accounts. Previously updated : 04/20/2021 Last updated : 04/28/2021 # Enable a managed identity for your Azure Automation account (preview)
print(response.text)
## Next steps
+- If your runbooks aren't completing successfully, review [Troubleshoot Azure Automation managed identity issues (preview)](troubleshoot/managed-identity.md).
+- If you need to disable a managed identity, see [Disable your Azure Automation account managed identity (preview)](disable-managed-identity-for-automation.md).
-- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md).
+- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md).
automation Manage Runas Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/manage-runas-account.md
Title: Manage an Azure Automation Run As account
-description: This article tells how to manage your Run As account with PowerShell or from the Azure portal.
+description: This article tells how to manage your Azure Automation Run As account with PowerShell or from the Azure portal.
Previously updated : 01/19/2021 Last updated : 04/29/2021
When you renew the self-signed certificate, the current valid certificate is ret
>If you think that the Run As account has been compromised, you can delete and re-create the self-signed certificate. >[!NOTE]
->If you have configured your Run As account to use a certificate issued by your enterprise certificate authority and you use the option to renew a self-signed certificate option, the enterprise certificate is replaced by a self-signed certificate.
+>If you have configured your Run As account to use a certificate issued by your enterprise or third-party certificate authority (CA) and you use the option to renew the self-signed certificate, the enterprise certificate is replaced by a self-signed certificate.
Use the following steps to renew the self-signed certificate.
automation Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/modules.md
Title: Manage modules in Azure Automation
description: This article tells how to use PowerShell modules to enable cmdlets in runbooks and DSC resources in DSC configurations. Previously updated : 02/01/2021 Last updated : 04/28/2021
For Az.Automation, the majority of the cmdlets have the same names as those used
## Internal cmdlets
-Azure Automation supports the internal `Orchestrator.AssetManagement.Cmdlets` module for the Log Analytics agent for Windows, installed by default. The following table defines the internal cmdlets. These cmdlets are designed to be used instead of Azure PowerShell cmdlets to interact with shared resources. They can retrieve secrets from encrypted variables, credentials, and encrypted connections.
+Azure Automation supports internal cmdlets that are only available when you execute runbooks in the Azure sandbox environment or on a Windows Hybrid Runbook Worker. The internal module `Orchestrator.AssetManagement.Cmdlets` is installed by default in your Automation account and when the Windows Hybrid Runbook Worker role is installed on the machine.
->[!NOTE]
->The internal cmdlets are only available when you're executing runbooks in the Azure sandbox environment, or on a Windows Hybrid Runbook Worker.
+The following table defines the internal cmdlets. These cmdlets are designed to be used instead of Azure PowerShell cmdlets to interact with your Automation account resources. They can retrieve secrets from encrypted variables, credentials, and encrypted connections.
|Name|Description| |||
Azure Automation supports the internal `Orchestrator.AssetManagement.Cmdlets` mo
|Start-AutomationRunbook|`Start-AutomationRunbook [-Name] <string> [-Parameters <IDictionary>] [-RunOn <string>] [-JobId <guid>] [<CommonParameters>]`| |Wait-AutomationJob|`Wait-AutomationJob -Id <guid[]> [-TimeoutInMinutes <int>] [-DelayInSeconds <int>] [-OutputJobsTransitionedToRunning] [<CommonParameters>]`|
-Note that the internal cmdlets differ in naming from the Az and AzureRM cmdlets. Internal cmdlet names don't contain words like `Azure` or `Az` in the noun, but do use the word `Automation`. We recommend their use over the use of Az or AzureRM cmdlets during runbook execution in an Azure sandbox or on a Windows Hybrid Runbook Worker. They require fewer parameters and run in the context of your job that's already running.
+Note that the internal cmdlets differ in naming from the Az and AzureRM cmdlets. Internal cmdlet names don't contain words like `Azure` or `Az` in the noun, but do use the word `Automation`. We recommend their use over the use of Az or AzureRM cmdlets during runbook execution in an Azure sandbox or on a Windows Hybrid Runbook Worker because they require fewer parameters and run in the context of your job during execution.
Use Az or AzureRM cmdlets for manipulating Automation resources outside the context of a runbook.
Importing an Az module into your Automation account doesn't automatically import
You can import the Az modules into the Automation account from the Azure portal. Remember to import only the Az modules that you need, not every Az module that's available. Because [Az.Accounts](https://www.powershellgallery.com/packages/Az.Accounts/1.1.0) is a dependency for the other Az modules, be sure to import this module before any others.
+1. Sign in to the Azure [portal](https://portal.azure.com).
+1. Search for and select **Automation Accounts**.
+1. On the **Automation Accounts** page, select your Automation account from the list.
1. From your Automation account, under **Shared Resources**, select **Modules**.
-2. Select **Browse Gallery**.
-3. In the search bar, enter the module name (for example, `Az.Accounts`).
-4. On the PowerShell Module page, select **Import** to import the module into your Automation account.
+1. Select **Browse Gallery**.
+1. In the search bar, enter the module name (for example, `Az.Accounts`).
+1. On the PowerShell Module page, select **Import** to import the module into your Automation account.
![Screenshot of importing modules into your Automation account](../media/modules/import-module.png)
This section defines several ways that you can import a module into your Automat
To import a module in the Azure portal:
-1. Go to your Automation account.
-2. Under **Shared Resources**, select **Modules**.
-3. Select **Add a module**.
-4. Select the **.zip** file that contains your module.
-5. Select **OK** to start to import process.
+1. In the portal, search for and select **Automation Accounts**.
+1. On the **Automation Accounts** page, select your Automation account from the list.
+1. Under **Shared Resources**, select **Modules**.
+1. Select **Add a module**.
+1. Select the **.zip** file that contains your module.
+1. Select **OK** to start the import process.
### Import modules by using PowerShell
To import a module directly from the PowerShell Gallery:
To import a PowerShell Gallery module directly from your Automation account:
+1. In the portal, search for and select **Automation Accounts**.
+1. On the **Automation Accounts** page, select your Automation account from the list.
1. Under **Shared Resources**, select **Modules**.
-2. Select **Browse gallery**, and then search the Gallery for a module.
-3. Select the module to import, and select **Import**.
-4. Select **OK** to start the import process.
+1. Select **Browse gallery**, and then search the Gallery for a module.
+1. Select the module to import, and select **Import**.
+1. Select **OK** to start the import process.
![Screenshot of importing a PowerShell Gallery module from the Azure portal](../media/modules/gallery-azure-portal.png)
If you have problems with a module, or you need to roll back to a previous versi
To remove a module in the Azure portal:
-1. Go to your Automation account. Under **Shared Resources**, select **Modules**.
-2. Select the module you want to remove.
-3. On the Module page, select **Delete**. If this module is one of the [default modules](#default-modules), it rolls back to the version that existed when the Automation account was created.
+1. In the portal, search for and select **Automation Accounts**.
+1. On the **Automation Accounts** page, select your Automation account from the list.
+1. Under **Shared Resources**, select **Modules**.
+1. Select the module you want to remove.
+1. On the Module page, select **Delete**. If this module is one of the [default modules](#default-modules), it rolls back to the version that existed when the Automation account was created.
### Delete modules by using PowerShell
automation Start Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/start-runbooks.md
Title: Start a runbook in Azure Automation
description: This article tells how to start a runbook in Azure Automation. Previously updated : 03/16/2018 Last updated : 04/28/2021
The following table helps you determine the method to start a runbook in Azure A
| [Schedule](./shared-resources/schedules.md) |<li>Automatically start runbook on hourly, daily, weekly, or monthly schedule.<br> <li>Manipulate schedule through Azure portal, PowerShell cmdlets, or Azure API.<br> <li>Provide parameter values to be used with schedule. | | [From Another Runbook](automation-child-runbooks.md) |<li>Use a runbook as an activity in another runbook.<br> <li>Useful for functionality used by multiple runbooks.<br> <li>Provide parameter values to child runbook and use output in parent runbook. |
-The following image illustrates detailed step-by-step process in the life cycle of a runbook. It includes different ways a runbook starts in Azure Automation, which components required for Hybrid Runbook Worker to execute Azure Automation runbooks and interactions between different components. To learn about executing Automation runbooks in your datacenter, refer to [hybrid runbook workers](automation-hybrid-runbook-worker.md)
+The following image illustrates the detailed step-by-step process in the life cycle of a runbook. It includes the different ways a runbook starts in Azure Automation, the components required for a Hybrid Runbook Worker to execute Azure Automation runbooks, and the interactions between the components. To learn about executing Automation runbooks in your datacenter, refer to [hybrid runbook workers](automation-hybrid-runbook-worker.md).
![Runbook Architecture](media/automation-starting-runbook/runbooks-architecture.png)
The following image illustrates detailed step-by-step process in the life cycle
When you start a runbook from the Azure portal or Windows PowerShell, the instruction is sent through the Azure Automation web service. This service doesn't support parameters with complex data types. If you need to provide a value for a complex parameter, then you must call it inline from another runbook as described in [Child runbooks in Azure Automation](automation-child-runbooks.md).
-The Azure Automation web service provides special functionality for parameters using certain data types as described in the following sections:
+The Azure Automation web service provides special functionality for parameters using certain data types as described in the following sections.
### Named values
jsmith
## Start a runbook with the Azure portal
-1. In the Azure portal, select **Automation** and then click the name of an Automation account.
-2. On the Hub menu, select **Runbooks**.
-3. On the Runbooks page, select a runbook, and then click **Start**.
+1. In the Azure portal, select **Automation** and then select the name of an Automation account.
+2. From the left-hand pane, select **Runbooks**.
+3. On the **Runbooks** page, select a runbook, and then click **Start**.
4. If the runbook has parameters, you're prompted to provide values with a text box for each parameter. For more information on parameters, see [Runbook Parameters](#work-with-runbook-parameters).
-5. On the Job pane, you can view the status of the runbook job.
+5. On the **Job** pane, you can view the status of the runbook job.
## Start a runbook with PowerShell
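As a sketch (the runbook and parameter names are hypothetical), the `Start-AzAutomationRunbook` cmdlet starts a runbook and returns a job object whose status you can poll:

```powershell
# Start the runbook, passing values for its parameters, and capture the job
$job = Start-AzAutomationRunbook `
    -ResourceGroupName "MyResourceGroup" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "Test-Runbook" `
    -Parameters @{ "FirstName" = "Joe"; "LastName" = "Smith" }

# Check the job status
(Get-AzAutomationJob -ResourceGroupName "MyResourceGroup" `
    -AutomationAccountName "MyAutomationAccount" -Id $job.JobId).Status
```

Note that, as described above, parameters passed this way go through the Azure Automation web service and therefore can't use complex data types.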
automation Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/managed-identity.md
+
+ Title: Troubleshoot Azure Automation managed identity issues (preview)
+description: This article tells how to troubleshoot and resolve issues when using a managed identity with an Automation account.
++ Last updated : 04/28/2021++++
+# Troubleshoot Azure Automation managed identity issues (preview)
+
+This article discusses solutions to problems that you might encounter when you use a managed identity with your Automation account. For general information about using managed identity with Automation accounts, see [Azure Automation account authentication overview](../automation-security-overview.md#managed-identities-preview).
+
+## Scenario: Attempt to use managed identity with Automation account fails
+
+### Issue
+
+When you try to work with managed identities in your Automation account, you encounter an error like this:
+
+```error
+Connect-AzureRMAccount : An error occurred while sending the request. At line:2 char:1 + Connect-AzureRMAccount -Identity +
+CategoryInfo : CloseError: (:) [Connect-AzureRmAccount], HttpRequestException + FullyQualifiedErrorId : Microsoft.Azure.Commands.Profile.ConnectAzureRmAccountCommand
+```
+
+### Cause
+
+The most common cause for this is that you didn't enable the identity before trying to use it. To verify this, run the following PowerShell runbook in the affected Automation account.
+
+```powershell
+$resource = "?resource=https://management.azure.com/"
+$url = $env:IDENTITY_ENDPOINT + $resource
+$Headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
+$Headers.Add("X-IDENTITY-HEADER", $env:IDENTITY_HEADER)
+$Headers.Add("Metadata", "True")
+
+try
+{
+ $Response = Invoke-RestMethod -Uri $url -Method 'GET' -Headers $Headers
+}
+catch
+{
+ $StatusCode = $_.Exception.Response.StatusCode.value__
+ $stream = $_.Exception.Response.GetResponseStream()
+ $reader = New-Object System.IO.StreamReader($stream)
+ $responseBody = $reader.ReadToEnd()
+
+ Write-Output "Request Failed with Status: $StatusCode, Message: $responseBody"
+}
+```
+
+If the issue is that you didn't enable the identity before trying to use it, you should see a result similar to this:
+
+`Request Failed with Status: 400, Message: {"Message":"No managed identity was found for Automation account xxxxxxxxxxxx"}`
+
+### Resolution
+
+You must enable an identity for your Automation account before you can use the managed identity service. See [Enable a managed identity for your Azure Automation account (preview)](../enable-managed-identity-for-automation.md).
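Once the identity is enabled, a runbook can authenticate with it. A minimal sketch using the current Az PowerShell module (rather than the older AzureRM module shown in the error):

```powershell
# Authenticate as the Automation account's system-assigned managed identity
Connect-AzAccount -Identity
```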
+
+## Next steps
+
+If this article doesn't resolve your issue, try one of the following channels for additional support:
+
+* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).
+* Connect with [@AzureSupport](https://twitter.com/azuresupport). This is the official Microsoft Azure account for connecting the Azure community to the right resources: answers, support, and experts.
+* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
azure-arc Create Postgresql Hyperscale Server Group Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-portal.md
+
+ Title: Create an Azure Arc enabled PostgreSQL Hyperscale server group from the Azure portal
+description: Create an Azure Arc enabled PostgreSQL Hyperscale server group from the Azure portal
++++++ Last updated : 04/28/2021+++
+# Create an Azure Arc enabled PostgreSQL Hyperscale server group from the Azure portal
+
+This document describes the steps to create a PostgreSQL Hyperscale server group on Azure Arc from the Azure portal.
+++
+## Getting started
+If you are already familiar with the topics below, you may skip this paragraph.
+There are important topics you may want to read before you proceed with creation:
+- [Overview of Azure Arc enabled data services](overview.md)
+- [Connectivity modes and requirements](connectivity.md)
+- [Storage configuration and Kubernetes storage concepts](storage-configuration.md)
+- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
+
+If you prefer to try out things without provisioning a full environment yourself, get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
++
+## Deploy an Arc data controller configured to use the Direct connectivity mode
+
+Requirement: before you deploy an Azure Arc enabled PostgreSQL Hyperscale server group that you operate from the Azure portal, you must first deploy an Azure Arc data controller configured to use the *Direct* connectivity mode.
+To deploy an Arc data controller, complete the instructions in these articles:
+1. [Deploy data controller - direct connect mode (prerequisites)](deploy-data-controller-direct-mode-prerequisites.md)
+1. [Deploy Azure Arc data controller | Direct connect mode](deploy-data-controller-direct-mode.md)
++
+## Preliminary and temporary step for OpenShift users only
+Implement this step before moving to the next step. To deploy a PostgreSQL Hyperscale server group onto Red Hat OpenShift in a project other than the default, you need to execute the following command against your cluster to update the security constraints. This command grants the necessary privileges to the service accounts that will run your PostgreSQL Hyperscale server group. The security context constraint (SCC) arc-data-scc is the one you added when you deployed the Azure Arc data controller.
+
+```Console
+oc adm policy add-scc-to-user arc-data-scc -z <server-group-name> -n <namespace name>
+```
+
+**`<server-group-name>` is the name of the server group you will create during the next step.**
+
+For more details on SCCs in OpenShift, refer to the [OpenShift documentation](https://docs.openshift.com/container-platform/4.2/authentication/managing-security-context-constraints.html).
+
+Proceed to the next step.
+
+## Deploy an Azure Arc enabled PostgreSQL Hyperscale server group from the Azure portal
+
+To deploy and operate an Azure Arc enabled Postgres Hyperscale server group from the Azure portal, you must deploy it to an Arc data controller configured to use the *Direct* connectivity mode.
+
+> [!IMPORTANT]
+> You cannot operate an Azure Arc enabled PostgreSQL Hyperscale server group from the Azure portal if you deployed it to an Azure Arc data controller configured to use the *Indirect* connectivity mode.
+
+After you have deployed an Arc data controller configured for Direct connectivity mode:
+1. Open a browser to the following URL: [https://portal.azure.com](https://portal.azure.com).
+2. In the search window at the top of the page, search for "*azure arc postgres*" in the Azure Marketplace and select **Azure Database for PostgreSQL server groups - Azure Arc**.
+3. In the page that opens, select **+ Create** at the top left corner.
+4. Fill in the form as you would for any other Azure resource.
++
+### Important parameters to consider
+
+- **The number of worker nodes** you want to deploy to scale out and potentially reach better performance. Before proceeding, read the [concepts about Postgres Hyperscale](concepts-distributed-postgres-hyperscale.md). For example, if you deploy a server group with two worker nodes, the deployment creates three pods, one for the coordinator node/instance and two for the worker nodes/instances (one for each of the workers).
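As an illustration, assuming a hypothetical server group named `postgres01` and the pod naming convention `ServergroupName{c, w}-n` used by Azure Arc enabled PostgreSQL Hyperscale, a deployment with two worker nodes would create pods named like:

```console
postgres01c-0    # coordinator node
postgres01w-0    # first worker node
postgres01w-1    # second worker node
```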
+
+## Next steps
+
+- Connect to your Azure Arc enabled PostgreSQL Hyperscale: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
+- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to potentially benefit from better performance:
+ * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
+ * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
+ * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
+ * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+
+ > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale.
+
+- [Scale out your Azure Arc enabled PostgreSQL Hyperscale server group](scale-out-postgresql-hyperscale-server-group.md)
+- [Storage configuration and Kubernetes storage concepts](storage-configuration.md)
+- [Expanding Persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)
+- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
++
azure-arc Create Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group.md
Title: Create an Azure Arc enabled PostgreSQL Hyperscale server group
-description: Create an Azure Arc enabled PostgreSQL Hyperscale server group
+ Title: Create an Azure Arc enabled PostgreSQL Hyperscale server group from CLI
+description: Create an Azure Arc enabled PostgreSQL Hyperscale server group from CLI
psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655
## Next steps
+- Connect to your Azure Arc enabled PostgreSQL Hyperscale: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from better performances potentially: * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md) * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
azure-arc Deploy Data Controller Direct Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/deploy-data-controller-direct-mode.md
This article describes how to deploy the Azure Arc data controller in direct connect mode during the current preview of this feature.
-Currently you can create the Azure Arc data controller from Azure portal. Other tools for Azure Arc enabled data services do not support creating the data controller in direct connect mode. For details, see [Known issues - Azure Arc enabled data services (Preview)](known-issues.md).
+Currently you can create the Azure Arc data controller from Azure portal. Other tools for Azure Arc enabled data services do not support creating the data controller in direct connect mode. For details, see [Release notes](release-notes.md).
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
azure-arc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/known-issues.md
- Title: Azure Arc enabled data services - known issues
-description: Latest known issues
------ Previously updated : 03/02/2021-
-# Customer intent: As a data professional, I want to understand why unexpected behaviors of the current system.
--
-# Known issues - Azure Arc enabled data services (Preview)
--
-## March 2021
-
-### Data controller
--- You can create a data controller in direct connect mode with the Azure portal. Deployment with other Azure Arc enabled data services tools are not supported. Specifically, you can't deploy a data controller in direct connect mode with any of the following tools during this release.
- - Azure Data Studio
- - Azure Data CLI (`azdata`)
- - Kubernetes native tools
-
- [Deploy Azure Arc data controller | Direct connect mode](deploy-data-controller-direct-mode.md) explains how to create the data controller in the portal.
-
-### Azure Arc enabled PostgreSQL Hyperscale
--- It is not supported to deploy an Azure Arc enabled Postgres Hyperscale server group in an Arc data controller enabled for direct connect mode.-- Passing an invalid value to the `--extensions` parameter when editing the configuration of a server group to enable additional extensions incorrectly resets the list of enabled extensions to what it was at the create time of the server group and prevents user from creating additional extensions. The only workaround available when that happens is to delete the server group and redeploy it.-
-## February 2021
-
-### Data controller
--- Direct connect cluster mode is disabled-
-### Azure Arc enabled PostgreSQL Hyperscale
--- Point in time restore is not supported for now on NFS storage.-- It is not possible to enable and configure the pg_cron extension at the same time. You need to use two commands for this. One command to enable it and one command to configure it. -
- For example:
- ```console
- § azdata arc postgres server edit -n myservergroup --extensions pg_cron
- § azdata arc postgres server edit -n myservergroup --engine-settings cron.database_name='postgres'
- ```
-
- The first command requires a restart of the server group. So, before executing the second command, make sure the state of the server group has transitioned from updating to ready. If you execute the second command before the restart has completed it will fail. If that is the case, simply wait for a few more moments and execute the second command again.
-
-## Introduced prior to February 2021
-
-### Data controller
--- On Azure Kubernetes Service (AKS), Kubernetes version 1.19.x is not supported.-- On Kubernetes 1.19 `containerd` is not supported.-- The data controller resource in Azure is currently an Azure resource. Any updates such as delete is not propagated back to the kubernetes cluster.-- Instance names can't be greater than 13 characters-- No in-place upgrade for the Azure Arc data controller or database instances.-- Arc enabled data services container images are not signed. You may need to configure your Kubernetes nodes to allow unsigned container images to be pulled. For example, if you are using Docker as the container runtime, you can set the DOCKER_CONTENT_TRUST=0 environment variable and restart. Other container runtimes have similar options such as in [OpenShift](https://docs.openshift.com/container-platform/4.5/openshift_images/image-configuration.html#images-configuration-file_image-configuration).-- Cannot create Azure Arc enabled SQL Managed instances or PostgreSQL Hyperscale server groups from the Azure portal.-- For now, if you are using NFS, you need to set `allowRunAsRoot` to `true` in your deployment profile file before creating the Azure Arc data controller.-- SQL and PostgreSQL login authentication only. No support for Azure Active Directory or Active Directory.-- Creating a data controller on OpenShift requires relaxed security constraints. See documentation for details.-- If you are using Azure Kubernetes Service (AKS) Engine on Azure Stack Hub with Azure Arc data controller and database instances, upgrading to a newer Kubernetes version is not supported. Uninstall Azure Arc data controller and all the database instances before upgrading the Kubernetes cluster.-- AKS clusters that span [multiple availability zones](../../aks/availability-zones.md) are not currently supported for Azure Arc enabled data services. 
To avoid this issue, when you create the AKS cluster in Azure portal, if you select a region where zones are available, clear all the zones from the selection control. See the following image:-
- :::image type="content" source="media/release-notes/aks-zone-selector.png" alt-text="Clear the checkboxes for each zone to specify none.":::
--
-## Next steps
-
-> **Just want to try things out?**
-> Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on AKS, AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
--- [Install the client tools](install-client-tools.md)-- [Create the Azure Arc data controller](create-data-controller.md) (requires installing the client tools first)-- [Create an Azure SQL managed instance on Azure Arc](create-sql-managed-instance.md) (requires creation of an Azure Arc data controller first)-- [Create an Azure Database for PostgreSQL Hyperscale server group on Azure Arc](create-postgresql-hyperscale-server-group.md) (requires creation of an Azure Arc data controller first)-- [Resource providers for Azure services](../../azure-resource-manager/management/azure-services-resource-providers.md)-- [Release notes - Azure Arc enabled data services (Preview)](release-notes.md)
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
Previously updated : 04/09/2021 Last updated : 04/29/2021 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc enabled data services so that I can leverage the capability of the feature.
This article highlights capabilities, features, and enhancements recently releas
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
+## April 2021
+
+This preview release is published on April 29, 2021.
+
+### What's new
+
+This section describes the new features introduced or enabled for this release.
+
+#### Platform
+
+- Direct connected clusters automatically upload telemetry information to Azure.
+
+#### Azure Arc enabled PostgreSQL Hyperscale
+
+- Azure Arc enabled PostgreSQL Hyperscale is now supported in Direct connect mode. You can now deploy Azure Arc enabled PostgreSQL Hyperscale from the Azure Marketplace in the Azure portal.
+- Azure Arc enabled PostgreSQL Hyperscale ships with the Citus 10.0 extension, which features columnar table storage.
+- Azure Arc enabled PostgreSQL Hyperscale now supports full user/role management.
+- Azure Arc enabled PostgreSQL Hyperscale now supports the additional extensions `Tdigest` and `pg_partman`.
+- Azure Arc enabled PostgreSQL Hyperscale now supports configuring vCore and memory settings per role of the PostgreSQL instance in the server group.
+- Azure Arc enabled PostgreSQL Hyperscale now supports configuring database engine/server settings per role of the PostgreSQL instance in the server group.
+
+#### Azure Arc enabled SQL Managed Instance
+
+- Restore a database to SQL Managed Instance with three replicas and it will be automatically added to the availability group.
+- Connect to a secondary read-only endpoint on SQL Managed Instances deployed with three replicas. Use `azdata arc sql endpoint list` to see the secondary read-only connection endpoint.
+
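For example, to see the endpoints (the instance name here is hypothetical):

```console
azdata arc sql endpoint list -n my-sql-instance
```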
+### Known issues
+
+- You can create a data controller in direct connect mode with the Azure portal. Deployment with other Azure Arc enabled data services tools is not supported. Specifically, you can't deploy a data controller in direct connect mode with any of the following tools during this release:
+ - Azure Data Studio
+ - Azure Data CLI (`azdata`)
+ - Kubernetes native tools (`kubectl`)
+
+ [Deploy Azure Arc data controller | Direct connect mode](deploy-data-controller-direct-mode.md) explains how to create the data controller in the portal.
+
+- In direct connected mode, upload of usage, metrics, and logs using `azdata arc dc upload` is currently blocked. Usage is automatically uploaded. Upload for data controllers created in indirect connected mode should continue to work.
+- Automatic upload of usage data in direct connectivity mode will not succeed if using a proxy via `--proxy-cert <path-to-cert-file>`.
+- Azure Arc enabled SQL Managed instance and Azure Arc enabled PostgreSQL Hyperscale are not GB18030 certified.
+
+#### Azure Arc enabled SQL Managed Instance
+
+- Deployment of Azure Arc enabled SQL Managed Instance in direct mode can only be done from the Azure portal, and is not available from tools such as azdata, Azure Data Studio, or kubectl.
+
+#### Azure Arc enabled PostgreSQL Hyperscale
+
+- Point-in-time restore is currently not supported on NFS storage.
+- It is not possible to enable and configure the `pg_cron` extension at the same time. You need two commands: one to enable it and one to configure it. For example:
+
+ 1. Enable the extension:
+
+ ```console
+ azdata arc postgres server edit -n myservergroup --extensions pg_cron
+ ```
+
+ 1. Restart the server group.
+
+ 1. Configure the extension:
+
+ ```console
+ azdata arc postgres server edit -n myservergroup --engine-settings cron.database_name='postgres'
+ ```
+
+ If you execute the second command before the restart has completed, it will fail. If that is the case, wait a few moments and execute the second command again.
+
+- Passing an invalid value to the `--extensions` parameter when editing the configuration of a server group to enable additional extensions incorrectly resets the list of enabled extensions to what it was at creation time of the server group and prevents the user from creating additional extensions. The only workaround when that happens is to delete the server group and redeploy it.
+ ## March 2021 The March 2021 release was initially introduced on April 5th 2021, and the final stages of release were completed April 9th 2021.
-Review limitations of this release in [Known issues - Azure Arc enabled data services (Preview)](known-issues.md).
- Azure Data CLI (`azdata`) version number: 20.3.2. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata). ### Data controller
You will delete the previous CRDs as you cleanup past installations. See [Cleanu
- You can now create a SQL managed instance from the Azure portal in the direct connected mode. -- You can now restore a database to SQL Managed Instance with 3 replicas and it will be automatically added to the availability group. --- You can now connect to a secondary read-only endpoint on SQL Managed Instances deployed with 3 replicas. Use `azdata arc sql endpoint list` to see the secondary read-only connection endpoint.
+- You can now restore a database to SQL Managed Instance with three replicas and it will be automatically added to the availability group.
-### Known issues
--- In direct connected mode, upload of usage, metrics, and logs using `azdata arc dc upload` is currently blocked. Usage is automatically uploaded. Upload for data controller created in indirect connected mode should continue to work.-- Deployment of data controller in direct mode can only be done from the Azure portal, and not available from client tools such as azdata, Azure Data Studio, or kubectl.-- Deployment of Azure Arc enabled SQL Managed Instance in direct mode can only be done from the Azure portal, and not available from tools such as azdata, Azure Data Studio, or kubectl.-- Deployment of Azure Arc enabled PostgeSQL Hyperscale in direct mode is currently not available.-- Automatic upload of usage data in direct connectivity mode will not succeed if using proxy via `ΓÇôproxy-cert <path-t-cert-file>`.-- Azure Arc enabled SQL Managed instance and Azure Arc enabled PostgreSQL Hyperscale are not GB18030 certified.
+- You can now connect to a secondary read-only endpoint on SQL Managed Instances deployed with three replicas. Use `azdata arc sql endpoint list` to see the secondary read-only connection endpoint.
## February 2021
Additional updates include:
- Azure Arc enabled PostgreSQL Hyperscale Azure Data Studio:
- - The overview page now shows the status of the server group itemized per node
- - A new properties pages is now available to show more details about the server group
+ - The overview page shows the status of the server group itemized per node
+ - A new properties page shows more details about the server group
- Configure Postgres engine parameters from **Node Parameters** page
-For issues associated with this release, see [Known issues - Azure Arc enabled data services (Preview)](known-issues.md)
- ## January 2021 ### New capabilities and features
Additional updates include:
In earlier releases, the status was aggregated at the server group level and not itemized at the PostgreSQL node level. -- PostgreSQL deployments now honor the volume size parameters indicated in create commands-- The engine version parameters is now honored when editing a server group
+- PostgreSQL deployments honor the volume size parameters indicated in create commands
+- The engine version parameters are now honored when editing a server group
- The naming convention of the pods for Azure Arc enabled PostgreSQL Hyperscale has changed. It is now in the form: `ServergroupName{c, w}-n`. For example, a server group with three nodes, one coordinator node and two worker nodes is represented as:
You can specify direct connectivity when you create the data controller. The fol
azdata arc dc create --profile-name azure-arc-aks-hci --namespace arc --name arc --subscription <subscription id> --resource-group my-resource-group --location eastus --connectivity-mode direct ```
-### Known issues
--- On Azure Kubernetes Service (AKS), Kubernetes version 1.19.x is not supported.-- On Kubernetes 1.19 `containerd` is not supported.-- The data controller resource in Azure is currently an Azure resource. Any updates such as delete is not propagated back to the kubernetes cluster.-- Instance names can't be greater than 13 characters-- No in-place upgrade for the Azure Arc data controller or database instances.-- Arc enabled data services container images are not signed. You may need to configure your Kubernetes nodes to allow unsigned container images to be pulled. For example, if you are using Docker as the container runtime, you can set the DOCKER_CONTENT_TRUST=0 environment variable and restart. Other container runtimes have similar options such as in [OpenShift](https://docs.openshift.com/container-platform/4.5/openshift_images/image-configuration.html#images-configuration-file_image-configuration).-- Cannot create Azure Arc enabled SQL Managed instances or PostgreSQL Hyperscale server groups from the Azure portal.-- For now, if you are using NFS, you need to set `allowRunAsRoot` to `true` in your deployment profile file before creating the Azure Arc data controller.-- SQL and PostgreSQL login authentication only. No support for Azure Active Directory or Active Directory.-- Creating a data controller on OpenShift requires relaxed security constraints. See documentation for details.-- If you are using Azure Kubernetes Service (AKS) Engine on Azure Stack Hub with Azure Arc data controller and database instances, upgrading to a newer Kubernetes version is not supported. Uninstall Azure Arc data controller and all the database instances before upgrading the Kubernetes cluster.-- AKS clusters that span [multiple availability zones](../../aks/availability-zones.md) are not currently supported for Azure Arc enabled data services. 
To avoid this issue, when you create the AKS cluster in Azure portal, if you select a region where zones are available, clear all the zones from the selection control. See the following image:-
- :::image type="content" source="media/release-notes/aks-zone-selector.png" alt-text="Clear the checkboxes for each zone to specify none.":::
- ## October 2020 Azure Data CLI (`azdata`) version number: 20.2.3. You can install `azdata` from [Install Azure Data CLI (`azdata`)](/sql/azdata/install/deploy-install-azdata).
azure-arc Scale Up Down Postgresql Hyperscale Server Group Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/scale-up-down-postgresql-hyperscale-server-group-using-cli.md
There are times when you may need to change the characteristics or the definitio
This guide explains how to scale vCore and/or memory.
-Scaling up or down the vCore or memory settings of your server group means you have the possibility to set a minimum and/or a maximum for each of the vCore and memory settings. If you want to configure your server group to use a specific number of vCore or a specific amount of memory, you would set the min settings equal to the max settings.
+Scaling up or down the vCore or memory settings of your server group means you have the possibility to set a minimum and/or a maximum for each of the vCore and memory settings. If you want to configure your server group to use a specific number of vCore or a specific amount of memory, you would set the minimum settings equal to the maximum settings.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
azdata arc postgres server show -n <server group name>
### CLI with kubectl

```console
-kubectl describe postgresql-12/<server group name> [-n <namespace name>]
+kubectl describe postgresql/<server group name> -n <namespace name>
```
-> [!NOTE]
-> If you created a server group of PostgreSQL version 11, run `kubectl describe postgresql-11/<server group name>` instead.
It returns the configuration of your server group. If you have created the server group with the default settings, you should see the definition as follows:
-```console
-"scheduling": {
- "default": {
- "resources": {
- "requests": {
- "memory": "256Mi"
- }
- }
- }
- },
+```console
+Spec:
+ Dev: false
+ Engine:
+ Extensions:
+ Name: citus
+ Version: 12
+ Scale:
+ Workers: 2
+ Scheduling:
+ Default:
+ Resources:
+ Requests:
+ Memory: 256Mi
+...
```

## Interpret the definition of the server group
-In the definition of a server group, the section that carries the settings of min/max vCore per node and min/max memory per node is the **"scheduling"** section. In that section, the max settings will be persisted in a subsection called **"limits"** and the min settings are persisted in the subsection called **"requests"**.
+In the definition of a server group, the section that carries the settings of minimum or maximum vCore per node and minimum or maximum memory per node is the **"scheduling"** section. In that section, the maximum settings will be persisted in a subsection called **"limits"** and the minimum settings are persisted in the subsection called **"requests"**.
-If you set min settings that are different from the max settings, the configuration guarantees that your server group is allocated the requested resources if it needs. It will not exceed the limits you set.
+If you set minimum settings that are different from the maximum settings, the configuration guarantees that your server group is allocated the requested resources if it needs them. It will not exceed the limits you set.
-The resources (vCores and memory) that will actually be used by your server group are up to the max settings and depend on the workloads and the resources available on the cluster. If you do not cap the settings with a max, your server group may use up to all the resources that the Kubernetes cluster allocates to the Kubernetes nodes your server group is scheduled on.
+The resources (vCores and memory) that will actually be used by your server group are up to the maximum settings and depend on the workloads and the resources available on the cluster. If you do not cap the settings with a max, your server group may use up to all the resources that the Kubernetes cluster allocates to the Kubernetes nodes your server group is scheduled on.
-Those vCore and memory settings apply to each of the PostgreSQL Hyperscale nodes (coordinator node and worker nodes). It is not yet supported to set the definitions of the coordinator node and the worker nodes separately.
+Those vCore and memory settings apply to each of the roles of the Postgres instances constituting the PostgreSQL Hyperscale server group: coordinator and workers. You may define requests and limits settings per role; the settings can differ between roles or be the same, depending on your needs.
In a default configuration, only the minimum memory is set, to 256Mi, as it is the minimum amount of memory that is recommended to run PostgreSQL Hyperscale.

> [!NOTE]
-> Setting a minimum does not mean the server group will necessarily use that minimum. It means that if the server group needs it, it is guaranteed to be allocated at least this minimum. For example, let's consider we set `--minCpu 2`. It does not mean that the server group will be using at least 2 vCores at all times. It instead means that the sever group may start using less than 2 vCores if it does not need that much and it is guaranteed to be allocated at least 2 vCores if it needs them later on. It implies that the Kubernetes cluster allocates resources to other workloads in such a way that it can allocate 2 vCores to the server group if it ever needs them.
+> Setting a minimum does not mean the server group will necessarily use that minimum. It means that if the server group needs it, it is guaranteed to be allocated at least this minimum. For example, let's consider we set `--minCpu 2`. It does not mean that the server group will be using at least 2 vCores at all times. It instead means that the server group may start using less than 2 vCores if it does not need that much and it is guaranteed to be allocated at least 2 vCores if it needs them later on. It implies that the Kubernetes cluster allocates resources to other workloads in such a way that it can allocate 2 vCores to the server group if it ever needs them. Also, scaling up and down is not an online operation as it requires the restart of the Kubernetes pods.
>[!NOTE]
>Before you modify the configuration of your system, make sure to familiarize yourself with the Kubernetes resource model [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities).
-## Scale up the server group
+## Scale up and down the server group
-The settings you are about to set have to be considered within the configuration you set for your Kubernetes cluster. Make sure you are not setting values that your Kubernetes cluster won't be able to satisfy. That could lead to errors or unpredictable behavior. As an example, if the status of your server group stays in status _updating_ for a long time after you change the configuration, it may be an indication that you set the below parameters to values that your Kubernetes cluster cannot satisfy. If that is the case, revert the change or read the _troubleshooting_section.
+Scaling up refers to increasing the values for the vCores and/or memory settings of your server group.
+Scaling down refers to decreasing the values for the vCores and/or memory settings of your server group.
-As an example, let's assume you want to scale up the definition of your server group to:
+The settings you are about to set have to be considered within the configuration you set for your Kubernetes cluster. Make sure you are not setting values that your Kubernetes cluster won't be able to satisfy. That could lead to errors or unpredictable behavior like unavailability of the database instance. As an example, if the status of your server group stays at _updating_ for a long time after you change the configuration, it may be an indication that you set the below parameters to values that your Kubernetes cluster cannot satisfy. If that is the case, revert the change or read the _troubleshooting_ section.
-- Min vCore = 2-- Max vCore = 4-- Min memory = 512Mb-- Max Memory = 1Gb
+What settings should you set?
+- To set minimum vCore, set `--cores-request`.
+- To set maximum vCore, set `--cores-limit`.
+- To set minimum memory, set `--memory-request`.
+- To set maximum memory, set `--memory-limit`.
-You would use either of the following approaches:
-
-### CLI with azdata
+How do you indicate which role the setting applies to?
+- To configure the setting for the coordinator role, specify `coordinator=<value>`.
+- To configure the setting for the worker role (the specified setting will be set to the same value on all workers), specify `worker=<value>`.
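The role-qualified form above can be sketched in shell before passing it to `azdata` (illustrative only; the variable names here are made up, and the flags accept the comma-separated `role=value` pairs directly):

```shell
# Build the comma-separated role=value string accepted by --cores-request,
# --cores-limit, --memory-request, and --memory-limit.
coordinator_req=1
worker_req=1
cores_request="coordinator=${coordinator_req},worker=${worker_req}"
echo "$cores_request"   # coordinator=1,worker=1

# The string would then be passed as, for example:
# azdata arc postgres server edit -n <servergroup name> --cores-request "$cores_request"
```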
-```console
-azdata arc postgres server edit -n <name of your server group> --cores-request <# core-request> --cores-limit <# core-limit> --memory-request <# memory-request>Mi --memory-limit <# memory-limit>Mi
-```
> [!CAUTION]
-> Below is an example provided to illustrate how you could use the command. Before executing an edit command, make sure to set the parameters to values that the Kubernetes cluster can honor.
-
-```console
-azdata arc postgres server edit -n <name of your server group> --cores-request 2 --cores-limit 4 --memory-request 512Mi --memory-limit 1024Mi
-```
-
-The command executes successfully when it shows:
-
-```console
-<name of your server group> is Ready
-```
+> With Kubernetes, configuring a limit setting without configuring the corresponding request setting forces the request value to be the same value as the limit. This could potentially lead to the unavailability of your server group as its pods may not be rescheduled if there isn't a Kubernetes node available with sufficient resources. As such, to avoid this situation, the below examples show how to set both the request and the limit settings.
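The Kubernetes behavior behind this caution can be sketched with a generic pod `resources` fragment (for illustration only; this shows standard Kubernetes semantics, not an Arc-specific manifest):

```yaml
resources:
  limits:
    cpu: "4"
    memory: 1Gi
  # requests omitted: Kubernetes defaults requests to the limit values
  # (cpu: "4", memory: 1Gi), so the scheduler must find a node that can
  # satisfy the full limit before the pod can be placed.
```

Setting an explicit, lower request keeps the pod schedulable on smaller nodes while still capping usage at the limit.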
-> [!NOTE]
-> For details about those parameters, run `azdata arc postgres server edit --help`.
-### CLI with kubectl
+**The general syntax is:**
```console
-kubectl edit postgresql-12/<server group name> [-n <namespace name>]
+azdata arc postgres server edit -n <servergroup name> --memory-limit/memory-request/cores-request/cores-limit <coordinator=val1,worker=val2>
```
-This takes you in the vi editor where you can navigate and change the configuration. Use the following to map the desired setting to the name of the field in the specification:
+The value you indicate for the memory setting is a number followed by a unit of size. For example, to indicate 1Gb, you would specify 1024Mi or 1Gi.
+To indicate a number of cores, you pass a number without a unit.
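As a quick check of the Mi/Gi equivalence, a small shell helper can do the conversion (illustrative only; `gi_to_mi` is not part of `azdata`):

```shell
# Convert a Gi value to the equivalent Mi value (1Gi = 1024Mi).
gi_to_mi() {
  echo "$(( $1 * 1024 ))Mi"
}

gi_to_mi 1   # prints 1024Mi, so --memory-limit coordinator=1Gi and
             # --memory-limit coordinator=1024Mi request the same amount
```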
-> [!CAUTION]
-> Below is an example provided to illustrate how you could edit the configuration. Before updating the configuration, make sure to set the parameters to values that the Kubernetes cluster can honor.
-
-For example:
-- Min vCore = 2 -> scheduling\default\resources\requests\cpu-- Max vCore = 4 -> scheduling\default\resources\limits\cpu-- Min memory = 512Mb -> scheduling\default\resources\requests\cpu-- Max Memory = 1Gb -> scheduling\default\resources\limits\cpu-
-If you are not familiar with the vi editor, see a description of the commands you may need [here](https://www.computerhope.com/unix/uvi.htm):
-- edit mode: `i`-- move around with arrows-- _stop editing: `esc`-- _exit without saving: `:qa!`-- _exit after saving: `:qw!`
+### Examples using the azdata CLI
-## Show the scaled up definition of the server group
-Run again the command to display the definition of the server group and verify it is set as you desire:
-### CLI with azdata
+**Configure the coordinator role to not exceed 2 cores and the worker role to not exceed 4 cores:**
```console
-azdata arc postgres server show -n <the name of your server group>
+ azdata arc postgres server edit -n postgres01 --cores-request coordinator=1 --cores-limit coordinator=2
+ azdata arc postgres server edit -n postgres01 --cores-request worker=1 --cores-limit worker=4
```
-### CLI with kubectl
+or
```console
-kubectl describe postgresql-12/<server group name> [-n <namespace name>]
+azdata arc postgres server edit -n postgres01 --cores-request coordinator=1,worker=1 --cores-limit coordinator=2,worker=4
```
-> [!NOTE]
-> If you created a server group of PostgreSQL version 11, run `kubectl describe postgresql-11/<server group name>` instead.
+> [!NOTE]
+> For details about those parameters, run `azdata arc postgres server edit --help`.
-It will show the new definition of the server group:
+### Example using Kubernetes native tools like `kubectl`
+Run the command:
```console
-"scheduling": {
- "default": {
- "resources": {
- "limits": {
- "cpu": "4",
- "memory": "1024Mi"
- },
- "requests": {
- "cpu": "2",
- "memory": "512Mi"
- }
- }
- }
- },
+kubectl edit postgresql/<servergroup name> -n <namespace name>
+```
+
+This takes you into the `vi` editor, where you can navigate and change the configuration. Use the following to map the desired setting to the name of the field in the specification:
+
+> [!CAUTION]
+> Below is an example provided to illustrate how you could edit the configuration. Before updating the configuration, make sure to set the parameters to values that the Kubernetes cluster can honor.
+
+For example, if you want to set the following values for both the coordinator and the worker roles:
+- Minimum vCore = `2`
+- Maximum vCore = `4`
+- Minimum memory = `512Mb`
+- Maximum Memory = `1Gb`
+
+You would set the definition of your server group so that it matches the below configuration:
+
+```yaml
+ scheduling:
+ default:
+ resources:
+ requests:
+ memory: 256Mi
+ roles:
+ coordinator:
+ resources:
+ limits:
+ cpu: "4"
+ memory: 1Gi
+ requests:
+ cpu: "2"
+ memory: 512Mi
+ worker:
+ resources:
+ limits:
+ cpu: "4"
+ memory: 1Gi
+ requests:
+ cpu: "2"
+ memory: 512Mi
```
-## Scale down the server group
+If you are not familiar with the `vi` editor, see a description of the commands you may need [here](https://www.computerhope.com/unix/uvi.htm):
+- Edit mode: `i`
+- Move around with arrows
+- Stop editing: `esc`
+- Exit without saving: `:qa!`
+- Exit after saving: `:wq!`
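If you prefer a non-interactive alternative to `vi`, the same change can in principle be applied with `kubectl patch` (a sketch; the field paths assume the spec layout shown above, and `kubectl patch` accepts a YAML or JSON patch body):

```console
kubectl patch postgresql/<servergroup name> -n <namespace name> --type merge -p '
spec:
  scheduling:
    roles:
      coordinator:
        resources:
          requests: {cpu: "2", memory: 512Mi}
          limits: {cpu: "4", memory: 1Gi}
      worker:
        resources:
          requests: {cpu: "2", memory: 512Mi}
          limits: {cpu: "4", memory: 1Gi}
'
```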
-To scale down the server group you execute the same command but set lesser values for the settings you want to scale down.
-To remove the requests and/or limits, specify its value as empty string.
## Reset to default values
-To reset core/memory limits/requests parameters to their default values, edit them and pass an empty string instead of an actual value. For example, if you want to reset the core limit (cl) parameter, run the following commands:
-- on a Linux client:
+To reset the core or memory requests and limits parameters to their default values, edit them and pass an empty string instead of an actual value. For example, if you want to reset the core request and core limit parameters, run the following commands:
```console
- azdata arc postgres server edit -n <servergroup name> -cl ""
+azdata arc postgres server edit -n postgres01 --cores-request coordinator='',worker=''
+azdata arc postgres server edit -n postgres01 --cores-limit coordinator='',worker=''
``` -- on a Windows client:
-
+or
```console
- azdata arc postgres server edit -n <servergroup name> -cl '""'
+azdata arc postgres server edit -n postgres01 --cores-request coordinator='',worker='' --cores-limit coordinator='',worker=''
```

## Next steps

- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-postgresql-hyperscale-server-group.md)
azure-arc Using Extensions In Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/using-extensions-in-postgresql-hyperscale-server-group.md
PostgreSQL is at its best when you use it with extensions. In fact, a key element of our own Hyperscale functionality is the Microsoft-provided `citus` extension that is installed by default, which allows Postgres to transparently shard data across multiple nodes. - [!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Supported extensions The standard [`contrib`](https://www.postgresql.org/docs/12/contrib.html) extensions and the following extensions are already deployed in the containers of your Azure Arc enabled PostgreSQL Hyperscale server group:-- [`citus`](https://github.com/citusdata/citus), v: 9.4. The Citus extension by [Citus Data](https://www.citusdata.com/) is loaded by default as it brings the Hyperscale capability to the PostgreSQL engine. Dropping the Citus extension from your Azure Arc PostgreSQL Hyperscale server group is not supported.-- [`pg_cron`](https://github.com/citusdata/pg_cron), v: 1.2
+- [`citus`](https://github.com/citusdata/citus), v: 10.0. The Citus extension by [Citus Data](https://www.citusdata.com/) is loaded by default as it brings the Hyperscale capability to the PostgreSQL engine. Dropping the Citus extension from your Azure Arc PostgreSQL Hyperscale server group is not supported.
+- [`pg_cron`](https://github.com/citusdata/pg_cron), v: 1.3
- [`pgaudit`](https://www.pgaudit.org/), v: 1.4
- plpgsql, v: 1.0
- [`postgis`](https://postgis.net), v: 3.0.2
- [`plv8`](https://plv8.github.io/), v: 2.3.14
+- [`pg_partman`](https://github.com/pgpartman/pg_partman), v: 4.4.1
+- [`tdigest`](https://github.com/tvondra/tdigest), v: 1.0.1
Updates to this list will be posted as it evolves over time.
This guide walks through a scenario that uses two of these extensions:
|`postgis` |No |Yes |
|`plv8` |No |Yes |
-## Add extensions to the shared_preload_libraries
-For details about that are shared_preload_libraries please read the PostgreSQL documentation [here](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SHARED-PRELOAD-LIBRARIES):
+## Add extensions to the `shared_preload_libraries`
+For details about `shared_preload_libraries`, read the PostgreSQL documentation [here](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SHARED-PRELOAD-LIBRARIES):
- This step isn't needed for the extensions that are part of `contrib`-- this step isn't required for extensions that are not required to pre-load by shared_preload_libraries. For these extensions you may jump the next next paragraph [Create extensions](#create-extensions).
+- This step isn't required for extensions that do not need to be pre-loaded by `shared_preload_libraries`. For these extensions, you may skip to the next paragraph, [Create extensions](#create-extensions).
### Add an extension at the creation time of a server group ```console
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/custom-locations.md
Last updated 04/05/2021
-+ description: "Use custom locations to deploy Azure PaaS services on Azure Arc enabled Kubernetes clusters"
A conceptual overview of this feature is available in [Custom locations - Azure
- [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0. -- `connectedk8s` (version >= 1.1.0), `k8s-extension` (version >= 0.2.0) and `customlocation` (version >= 0.1.0) Azure CLI extensions. Install these Azure CLI extensions by running the following commands:
+- `connectedk8s` (version >= 1.1.0), `k8s-extension` (version >= 0.2.0), and `customlocation` (version >= 0.1.0) Azure CLI extensions. Install these Azure CLI extensions by running the following commands:
```azurecli az extension add --name connectedk8s
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --featur
> [!NOTE] > 1. Custom Locations feature is dependent on the Cluster Connect feature. So both features have to be enabled for custom locations to work. > 2. `az connectedk8s enable-features` needs to be run on a machine where the `kubeconfig` file is pointing to the cluster on which the features are to be enabled.
+> 3. If you are logged into Azure CLI using a service principal, [additional permissions](troubleshooting.md#enable-custom-locations-using-service-principal) have to be granted to the service principal before enabling the custom location feature.
## Create custom location
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
In this quickstart, we'll reap the benefits of Azure Arc enabled Kubernetes and
* [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/) * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes) * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)
- * If you want to connect a OpenShift cluster to Azure Arc, you need to this execute the following command just once on your cluster before running `az connectedk8s connect`:
+ * If you want to connect an OpenShift cluster to Azure Arc, you need to execute the following command just once on your cluster before running `az connectedk8s connect`:
```console oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa
In this quickstart, we'll reap the benefits of Azure Arc enabled Kubernetes and
az extension add --name connectedk8s ``` -- >[!TIP] > If the `connectedk8s` extension is already installed, update it to the latest version using the following command - `az extension update --name connectedk8s` - >[!NOTE] >The list of regions supported by Azure Arc enabled Kubernetes can be found [here](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
eastus AzureArcTest
> [!TIP] > The above command without the location parameter specified creates the Azure Arc enabled Kubernetes resource in the same location as the resource group. To create the Azure Arc enabled Kubernetes resource in a different location, specify either `--location <region>` or `-l <region>` when running the `az connectedk8s connect` command.
+> [!NOTE]
+> If you are logged into Azure CLI using a service principal, [additional permissions](troubleshooting.md#enable-custom-locations-using-service-principal) are required on the service principal for enabling the custom location feature when connecting the cluster to Azure Arc.
## Verify cluster connection

View a list of your connected clusters with the following command:
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/troubleshooting.md
Azure Monitor for containers requires its DaemonSet to be run in privileged mode
```console juju config kubernetes-worker allow-privileged=true
-```
+```
+
+## Enable custom locations using service principal
+
+When you are connecting your cluster to Azure Arc, or when you are enabling the custom locations feature on an existing cluster, you may observe the following warning:
+
+```console
+Unable to fetch oid of 'custom-locations' app. Proceeding without enabling the feature. Insufficient privileges to complete the operation.
+```
+
+The above warning is observed when you have used a service principal to log into Azure and this service principal doesn't have permissions to get information about the application used by the Azure Arc service. Run the following commands to grant the required permissions:
+
+```console
+az ad app permission add --id <service-principal-app-id> --api 00000002-0000-0000-c000-000000000000 --api-permissions 3afa6a7d-9b1a-42eb-948e-1650a849e176=Role
+az ad app permission admin-consent --id <service-principal-app-id>
+```
+
+Once the above permissions are granted, you can proceed to [enabling the custom location feature](custom-locations.md#enable-custom-locations-on-cluster) on the cluster.
azure-cache-for-redis Cache Redis Cache Arm Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-redis-cache-arm-provision.md
Previously updated : 08/18/2020 Last updated : 04/28/2021

# Quickstart: Create an Azure Cache for Redis using an ARM template
Learn how to create an Azure Resource Manager template (ARM template) that deplo
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-redis-cache%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.cache%2Fredis-cache%2Fazuredeploy.json)
## Prerequisites
If your environment meets the prerequisites and you're familiar with using ARM t
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-redis-cache/). The following resources are defined in the template:
To check for the latest templates, see [Azure Quickstart Templates](https://azur
1. Select the following image to sign in to Azure and open the template.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-redis-cache%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.cache%2Fredis-cache%2Fazuredeploy.json)
1. Select or enter the following values: * **Subscription**: select an Azure subscription used to create the data share and the other resources.
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
Sets the DNS server used by an app when resolving IP addresses. This setting is
||| |WEBSITE\_DNS\_SERVER|168.63.129.16|
+## WEBSITE\_ENABLE\_BROTLI\_ENCODING
+
+Controls whether Brotli encoding is used for compression instead of the default gzip compression. When `WEBSITE_ENABLE_BROTLI_ENCODING` is set to `1`, Brotli encoding is used; otherwise gzip encoding is used.
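A quick way to see which encoding an app actually returns is to probe it with `curl` (a sketch; the URL is a placeholder for your own function endpoint, and the response must be compressible enough to be compressed at all):

```console
curl -s -o /dev/null -D - -H "Accept-Encoding: br, gzip" \
  https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME> | grep -i "content-encoding"
# With WEBSITE_ENABLE_BROTLI_ENCODING=1, the header should read "content-encoding: br";
# otherwise, "content-encoding: gzip".
```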
## WEBSITE\_MAX\_DYNAMIC\_APPLICATION\_SCALE\_OUT

The maximum number of instances that the app can scale out to. Default is no limit.
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-event-grid-trigger.md
In Azure Functions 2.x and higher, you also have the option to use the following
> [!NOTE] > In Functions v1 if you try to bind to `Microsoft.Azure.WebJobs.Extensions.EventGrid.EventGridEvent`, the compiler will display a "deprecated" message and advise you to use `Microsoft.Azure.EventGrid.Models.EventGridEvent` instead. To use the newer type, reference the [Microsoft.Azure.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.EventGrid) NuGet package and fully qualify the `EventGridEvent` type name by prefixing it with `Microsoft.Azure.EventGrid.Models`.
+### Additional types
+Apps using the 3.0.0 or higher version of the Event Grid extension use the `EventGridEvent` type from the [Azure.Messaging.EventGrid](/dotnet/api/azure.messaging.eventgrid.eventgridevent) namespace.
# [C# Script](#tab/csharp-script)

In Azure Functions 1.x, you can use the following parameter types for the Event Grid trigger:
In Azure Functions 2.x and higher, you also have the option to use the following
> [!NOTE] > In Functions v1 if you try to bind to `Microsoft.Azure.WebJobs.Extensions.EventGrid.EventGridEvent`, the compiler will display a "deprecated" message and advise you to use `Microsoft.Azure.EventGrid.Models.EventGridEvent` instead. To use the newer type, reference the [Microsoft.Azure.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.EventGrid) NuGet package and fully qualify the `EventGridEvent` type name by prefixing it with `Microsoft.Azure.EventGrid.Models`. For information about how to reference NuGet packages in a C# script function, see [Using NuGet packages](functions-reference-csharp.md#using-nuget-packages)
+### Additional types
+Apps using the 3.0.0 or higher version of the Event Grid extension use the `EventGridEvent` type from the [Azure.Messaging.EventGrid](/dotnet/api/azure.messaging.eventgrid.eventgridevent) namespace.
# [Java](#tab/java)

The Event Grid event instance is available via the parameter associated with the `EventGridTrigger` attribute, typed as an `EventSchema`. See the [example](#example) for more detail.
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-event-grid.md
Working with the trigger and bindings requires that you reference the appropriat
[Update your extensions]: ./functions-bindings-register.md [Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+#### Event Grid extension 3.x and higher
+
+A new version of the Event Grid bindings extension is available as a [preview NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid/3.0.0-beta.1). For .NET applications, it changes the types that you can bind to, replacing the types from `Microsoft.Azure.EventGrid.Models` with newer types from [Azure.Messaging.EventGrid](/dotnet/api/azure.messaging.eventgrid).
+
+> [!NOTE]
+> The preview package is not included in an extension bundle and must be installed manually. For .NET apps, add a reference to the package. For all other app types, see [Update your extensions].
+
+[core tools]: ./functions-run-local.md
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
+[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage
+[Update your extensions]: ./functions-bindings-register.md
+[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
### Functions 1.x

Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-service-bus-output.md
The following table explains the binding configuration properties that you set i
|**name** | n/a | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. | |**queueName**|**QueueName**|Name of the queue. Set only if sending queue messages, not for a topic. |**topicName**|**TopicName**|Name of the topic. Set only if sending topic messages, not for a queue.|
-|**connection**|**Connection**|The name of an app setting that contains the Service Bus connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name. For example, if you set `connection` to "MyServiceBus", the Functions runtime looks for an app setting that is named "AzureWebJobsMyServiceBus". If you leave `connection` empty, the Functions runtime uses the default Service Bus connection string in the app setting that is named "AzureWebJobsServiceBus".<br><br>To obtain a connection string, follow the steps shown at [Get the management credentials](../service-bus-messaging/service-bus-quickstart-portal.md#get-the-connection-string). The connection string must be for a Service Bus namespace, not limited to a specific queue or topic.|
+|**connection**|**Connection**|The name of an app setting that contains the Service Bus connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name. For example, if you set `connection` to "MyServiceBus", the Functions runtime looks for an app setting that is named "AzureWebJobsMyServiceBus". If you leave `connection` empty, the Functions runtime uses the default Service Bus connection string in the app setting that is named "AzureWebJobsServiceBus".<br><br>To obtain a connection string, follow the steps shown at [Get the management credentials](../service-bus-messaging/service-bus-quickstart-portal.md#get-the-connection-string). The connection string must be for a Service Bus namespace, not limited to a specific queue or topic.<br><br>If you are using [version 5.x or higher of the extension](./functions-bindings-service-bus.md#service-bus-extension-5x-and-higher), instead of a connection string, you can provide a reference to a configuration section which defines the connection. See [Connections](./functions-reference.md#connections).|
|**accessRights** (v1 only)|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|

[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
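The `connection` lookup rules described above can be modeled with a short sketch (illustrative only — this is not the actual Functions runtime code, and the helper name is made up):

```python
def resolve_connection_setting(connection):
    """Return the app setting name the runtime would look up.

    Illustrative model of the documented rules: an empty value falls back
    to "AzureWebJobsServiceBus"; a value that already carries the
    "AzureWebJobs" prefix is used as-is; anything else gets the prefix.
    """
    if not connection:
        return "AzureWebJobsServiceBus"
    if connection.startswith("AzureWebJobs"):
        return connection
    return "AzureWebJobs" + connection

print(resolve_connection_setting("MyServiceBus"))  # AzureWebJobsMyServiceBus
```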
When working with C# functions:
* To access the session ID, bind to a [`Message`](/dotnet/api/microsoft.azure.servicebus.message) type and use the `sessionId` property.
+### Additional types
+
+Apps using the 5.0.0 or higher version of the Service Bus extension use the `ServiceBusMessage` type in [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus.servicebusmessage) instead of the one in the [Microsoft.Azure.ServiceBus](/dotnet/api/microsoft.azure.servicebus.message) namespace. This version drops support for the legacy `Message` type in favor of the following types:
+
+- [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage)
+
# [C# Script](#tab/csharp-script)

Use the following parameter types for the output binding:
When working with C# functions:
* To access the session ID, bind to a [`Message`](/dotnet/api/microsoft.azure.servicebus.message) type and use the `sessionId` property.
+### Additional types
+Apps using the 5.0.0 or higher version of the Service Bus extension use the `ServiceBusMessage` type in [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus.servicebusmessage) instead of the one in the [Microsoft.Azure.ServiceBus](/dotnet/api/microsoft.azure.servicebus.message) namespace. This version drops support for the legacy `Message` type in favor of the following types:
+
+- [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage)
+
# [Java](#tab/java)

Use the [Azure Service Bus SDK](../service-bus-messaging/index.yml) rather than the built-in output binding.
Use the [Azure Service Bus SDK](../service-bus-messaging/index.yml) rather than
| Service Bus | [Service Bus Error Codes](../service-bus-messaging/service-bus-messaging-exceptions.md) |
| Service Bus | [Service Bus Limits](../service-bus-messaging/service-bus-quotas.md) |
-<a name="host-json"></a>
-
-## host.json settings
-
-This section describes the global configuration settings available for this binding in versions 2.x and higher. The example host.json file below contains only the settings for this binding. For more information about global configuration settings, see [host.json reference for Azure Functions version](functions-host-json.md).
-
-> [!NOTE]
-> For a reference of host.json in Functions 1.x, see [host.json reference for Azure Functions 1.x](functions-host-json-v1.md).
-
-```json
-{
- "version": "2.0",
- "extensions": {
- "serviceBus": {
- "prefetchCount": 100,
- "messageHandlerOptions": {
- "autoComplete": true,
- "maxConcurrentCalls": 32,
- "maxAutoRenewDuration": "00:05:00"
- },
- "sessionHandlerOptions": {
- "autoComplete": false,
- "messageWaitTimeout": "00:00:30",
- "maxAutoRenewDuration": "00:55:00",
- "maxConcurrentSessions": 16
- }
- }
- }
-}
-```
-
-If you have `isSessionsEnabled` set to `true`, the `sessionHandlerOptions` will be honored. If you have `isSessionsEnabled` set to `false`, the `messageHandlerOptions` will be honored.
-
-|Property |Default | Description |
-||||
-|prefetchCount|0|Gets or sets the number of messages that the message receiver can simultaneously request.|
-|maxAutoRenewDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically.|
-|autoComplete|true|Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br><br>Setting to `false` is only supported in C#.<br><br>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, exceptions in the function results in the runtime calls `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. |
-|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently.|
-|maxConcurrentSessions|2000|The maximum number of sessions that can be handled concurrently per scaled instance.|
-
## Next steps

- [Run a function when a Service Bus queue or topic message is created (Trigger)](./functions-bindings-service-bus-trigger.md)
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-service-bus-trigger.md
The following table explains the binding configuration properties that you set i
|**queueName**|**QueueName**|Name of the queue to monitor. Set only if monitoring a queue, not for a topic.|
|**topicName**|**TopicName**|Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
|**subscriptionName**|**SubscriptionName**|Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.|
-|**connection**|**Connection**|The name of an app setting that contains the Service Bus connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name. For example, if you set `connection` to "MyServiceBus", the Functions runtime looks for an app setting that is named "AzureWebJobsMyServiceBus". If you leave `connection` empty, the Functions runtime uses the default Service Bus connection string in the app setting that is named "AzureWebJobsServiceBus".<br><br>To obtain a connection string, follow the steps shown at [Get the management credentials](../service-bus-messaging/service-bus-quickstart-portal.md#get-the-connection-string). The connection string must be for a Service Bus namespace, not limited to a specific queue or topic. |
+|**connection**|**Connection**|The name of an app setting that contains the Service Bus connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name. For example, if you set `connection` to "MyServiceBus", the Functions runtime looks for an app setting that is named "AzureWebJobsMyServiceBus". If you leave `connection` empty, the Functions runtime uses the default Service Bus connection string in the app setting that is named "AzureWebJobsServiceBus".<br><br>To obtain a connection string, follow the steps shown at [Get the management credentials](../service-bus-messaging/service-bus-quickstart-portal.md#get-the-connection-string). The connection string must be for a Service Bus namespace, not limited to a specific queue or topic. <br><br>If you are using [version 5.x or higher of the extension](./functions-bindings-service-bus.md#service-bus-extension-5x-and-higher), instead of a connection string, you can provide a reference to a configuration section which defines the connection. See [Connections](./functions-reference.md#connections).|
|**accessRights**|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
|**isSessionsEnabled**|**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
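As a sketch, a minimal *function.json* using these trigger properties might look like the following (the queue name, variable name, and app setting name are placeholders):

```json
{
  "bindings": [
    {
      "name": "myQueueItem",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "myqueue",
      "connection": "MyServiceBus"
    }
  ]
}
```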
The following parameter types are available for the queue or topic message:
* A custom type - If the message contains JSON, Azure Functions tries to deserialize the JSON data.
* `BrokeredMessage` - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method.
-* [`MessageReceiver`](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) - Used to receive and acknowledge messages from the message container (required when [`autoComplete`](functions-bindings-service-bus-output.md#hostjson-settings) is set to `false`)
+* [`MessageReceiver`](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) - Used to receive and acknowledge messages from the message container (required when [`autoComplete`](functions-bindings-service-bus.md#hostjson-settings) is set to `false`)
These parameter types are for Azure Functions version 1.x; for 2.x and higher, use [`Message`](/dotnet/api/microsoft.azure.servicebus.message) instead of `BrokeredMessage`.
+### Additional types
+Apps using the 5.0.0 or higher version of the Service Bus extension use the `ServiceBusReceivedMessage` type in [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) instead of the one in the [Microsoft.Azure.ServiceBus](/dotnet/api/microsoft.azure.servicebus.message) namespace. This version drops support for the legacy `Message` type in favor of the following types:
+
+- [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage)
+
# [C# Script](#tab/csharp-script)

The following parameter types are available for the queue or topic message:
The following parameter types are available for the queue or topic message:
These parameters are for Azure Functions version 1.x; for 2.x and higher, use [`Message`](/dotnet/api/microsoft.azure.servicebus.message) instead of `BrokeredMessage`.
+### Additional types
+Apps using the 5.0.0 or higher version of the Service Bus extension use the `ServiceBusReceivedMessage` type in [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) instead of the one in the [Microsoft.Azure.ServiceBus](/dotnet/api/microsoft.azure.servicebus.message) namespace. This version drops support for the legacy `Message` type in favor of the following types:
+
+- [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage)
+
+### Additional types
+Apps using the 5.0.0 or higher version of the Service Bus extension use the `ServiceBusReceivedMessage` type in [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) instead of the one in the [Microsoft.Azure.ServiceBus](/dotnet/api/microsoft.azure.servicebus.message) namespace. This version drops support for the legacy `Message` type in favor of the following types:
+
+- [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage)
+
# [Java](#tab/java)

The incoming Service Bus message is available via a `ServiceBusQueueMessage` or `ServiceBusTopicMessage` parameter.
The Service Bus trigger provides several [metadata properties](./functions-bindi
|`ReplyTo`|`string`|The reply to queue address.|
|`SequenceNumber`|`long`|The unique number assigned to a message by the Service Bus.|
|`To`|`string`|The send to address.|
-|`UserProperties`|`IDictionary<string, object>`|Properties set by the sender.|
+|`UserProperties`|`IDictionary<string, object>`|Properties set by the sender. Not supported in version 5.x and higher of the extension; use `ApplicationProperties` instead.|
See [code examples](#example) that use these properties earlier in this article.
+### Additional message metadata
+
+The following metadata properties are supported for apps that use version 5.0.0 or higher of the extension. These properties are members of the [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) class.
+
+|Property|Type|Description|
+|--|-|--|
+|`ApplicationProperties`|`ApplicationProperties`|Properties set by the sender. Use this in place of the `UserProperties` metadata property.|
+|`Subject`|`string`|The application-specific label that can be used in place of the `Label` metadata property.|
+|`MessageActions`|`ServiceBusMessageActions`|The set of actions that can be performed on a `ServiceBusReceivedMessage`. Can be used in place of the `MessageReceiver` metadata property.|
+|`SessionActions`|`ServiceBusSessionMessageActions`|The set of actions that can be performed on a session and a `ServiceBusReceivedMessage`. Can be used in place of the `MessageSession` metadata property.|
+
## Next steps

- [Send Azure Service Bus messages from Azure Functions (Output binding)](./functions-bindings-service-bus-output.md)
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-service-bus.md
Working with the trigger and bindings requires that you reference the appropriat
[Update your extensions]: ./functions-bindings-register.md
[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+#### Service Bus extension 5.x and higher
+
+A new version of the Service Bus bindings extension is available as a [preview NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus/5.0.0-beta.2). This preview introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For .NET applications, it also changes the types that you can bind to, replacing the types from `Microsoft.ServiceBus.Messaging` and `Microsoft.Azure.ServiceBus` with newer types from [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus).
+
+> [!NOTE]
+> The preview package is not included in an extension bundle and must be installed manually. For .NET apps, add a reference to the package. For all other app types, see [Update your extensions].
+
+[core tools]: ./functions-run-local.md
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
+[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage
+[Update your extensions]: ./functions-bindings-register.md
+[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+
### Functions 1.x

Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+
+<a name="host-json"></a>
+
+## host.json settings
+
+This section describes the global configuration settings available for this binding in versions 2.x and higher. The example host.json file below contains only the settings for this binding. For more information about global configuration settings, see [host.json reference for Azure Functions version](functions-host-json.md).
+
+> [!NOTE]
+> For a reference of host.json in Functions 1.x, see [host.json reference for Azure Functions 1.x](functions-host-json-v1.md).
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "serviceBus": {
+ "prefetchCount": 100,
+ "messageHandlerOptions": {
+ "autoComplete": true,
+ "maxConcurrentCalls": 32,
+ "maxAutoRenewDuration": "00:05:00"
+ },
+ "sessionHandlerOptions": {
+ "autoComplete": false,
+ "messageWaitTimeout": "00:00:30",
+ "maxAutoRenewDuration": "00:55:00",
+ "maxConcurrentSessions": 16
+ }
+ }
+ }
+}
+```
+
+If you have `isSessionsEnabled` set to `true`, the `sessionHandlerOptions` will be honored. If you have `isSessionsEnabled` set to `false`, the `messageHandlerOptions` will be honored.
+
+|Property |Default | Description |
+||||
+|prefetchCount|0|Gets or sets the number of messages that the message receiver can simultaneously request.|
+|maxAutoRenewDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically.|
+|autoComplete|true|Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br><br>Setting to `false` is only supported in C#.<br><br>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, an exception in the function causes the runtime to call `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. |
+|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently.|
+|maxConcurrentSessions|2000|The maximum number of sessions that can be handled concurrently per scaled instance.|
+
+### Additional settings for version 5.x+
+
+The example host.json file below contains only the settings for version 5.0.0 and higher of the Service Bus extension.
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "serviceBus": {
+ "serviceBusOptions": {
+ "retryOptions":{
+ "mode": "exponential",
+ "tryTimeout": "00:00:10",
+ "delay": "00:00:00.80",
+ "maxDelay": "00:01:00",
+ "maxRetries": 4
+ },
+ "prefetchCount": 100,
+ "autoCompleteMessages": true,
+ "maxAutoLockRenewalDuration": "00:05:00",
+ "maxConcurrentCalls": 32,
+ "maxConcurrentSessions": 10,
+ "maxMessages": 2000,
+        "sessionIdleTimeout": "00:01:00"
+ }
+ }
+ }
+}
+```
+
+When using version 5.x and higher of the Service Bus extension, the following global configuration settings in `ServiceBusOptions` are supported in addition to the 2.x settings.
+
+|Property |Default | Description |
+||||
+|prefetchCount|0|Gets or sets the number of messages that the message receiver can simultaneously request.|
+|autoCompleteMessages|true|Determines whether to automatically complete messages after successful execution of the function. Use this in place of the `autoComplete` setting.|
+|maxAutoLockRenewalDuration|00:05:00|The maximum duration within which the message lock will be renewed automatically. Use this in place of `maxAutoRenewDuration`.|
+|maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently.|
+|maxConcurrentSessions|8|The maximum number of sessions that can be handled concurrently per scaled instance.|
+|maxMessages|1000|The maximum number of messages that will be passed to each function call. This only applies for functions that receive a batch of messages.|
+|sessionIdleTimeout|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the processor will close the session and attempt to process another session.|
+
+### Retry settings
+
+When using version 5.x and higher of the Service Bus extension, you can also configure `retryOptions` within the `ServiceBusOptions`, in addition to the configuration properties above. These settings determine whether a failed operation should be retried and, if so, the amount of time to wait between retry attempts. The options also control the amount of time allowed for receiving messages and other interactions with the Service Bus service.
+
+|Property |Default | Description |
+||||
+|mode|Exponential|The approach to use for calculating retry delays. The default `Exponential` mode retries attempts with delays based on a back-off strategy, where each attempt increases the duration to wait before retrying. The `Fixed` mode retries attempts at fixed intervals, with each delay having a consistent duration.|
+|tryTimeout|00:00:10|The maximum duration to wait for an operation per attempt.|
+|delay|00:00:00.80|The delay or back-off factor to apply between retry attempts.|
+|maxDelay|00:01:00|The maximum delay to allow between retry attempts.|
+|maxRetries|3|The maximum number of retry attempts before considering the associated operation to have failed.|
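As an illustration of the exponential mode, here is a sketch of how capped back-off delays could be computed from these settings. This is an assumed model, not the client library's exact formula (which may also add jitter):

```python
from datetime import timedelta

def retry_delays(delay, max_delay, max_retries):
    """Return a capped exponential back-off delay for each retry attempt:
    each retry doubles the previous delay, never exceeding max_delay."""
    return [min(delay * (2 ** attempt), max_delay) for attempt in range(max_retries)]

# Using the example settings above: delay 00:00:00.80, maxDelay 00:01:00, maxRetries 4.
print(retry_delays(timedelta(milliseconds=800), timedelta(minutes=1), 4))
```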
+
+
+
## Next steps

- [Run a function when a Service Bus queue or topic message is created (Trigger)](./functions-bindings-service-bus-trigger.md)
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob-input.md
import logging
import azure.functions as func
-# The type func.InputStream is not supported for blob input binding.
# The input binding field inputblob can be either 'bytes' or 'str', depending
# on dataType in function.json, 'binary' or 'string'.
def main(queuemsg: func.QueueMessage, inputblob: bytes) -> bytes:
azure-functions Functions Create First Function Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-first-function-resource-manager.md
Completing this quickstart incurs a small cost of a few USD cents or less in you
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-function-app-create-dynamic%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.web%2Ffunction-app-create-dynamic%2Fazuredeploy.json)
## Prerequisites
After you've created your project locally, you create the resources required to
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-function-app-create-dynamic/). The following four Azure resources are created by this template:
The following four Azure resources are created by this template:
```azurecli-interactive
read -p "Enter a resource group name that is used for generating resource names:" resourceGroupName &&
read -p "Enter the location (like 'eastus' or 'northeurope'):" location &&
-templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-function-app-create-dynamic/azuredeploy.json" &&
+templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/function-app-create-dynamic/azuredeploy.json" &&
az group create --name $resourceGroupName --location "$location" &&
az deployment group create --resource-group $resourceGroupName --template-uri $templateUri &&
echo "Press [ENTER] to continue ..." &&
read
```
```powershell-interactive
$resourceGroupName = Read-Host -Prompt "Enter a resource group name that is used for generating resource names"
$location = Read-Host -Prompt "Enter the location (like 'eastus' or 'northeurope')"
-$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-function-app-create-dynamic/azuredeploy.json"
+$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/function-app-create-dynamic/azuredeploy.json"
New-AzResourceGroup -Name $resourceGroupName -Location "$location"
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri
```
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-deployment-slots.md
Azure Functions deployment slots have the following limitations:
- The number of slots available to an app depends on the plan. The Consumption plan is only allowed one deployment slot. Additional slots are available for apps running under the App Service plan.
- Swapping a slot resets keys for apps that have an `AzureWebJobsSecretStorageType` app setting equal to `files`.
-- Slots aren't available for the Linux Consumption plan.

## Support levels
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-host-json.md
Configuration setting can be found in [SendGrid triggers and bindings](functions
## serviceBus
-Configuration setting can be found in [Service Bus triggers and bindings](functions-bindings-service-bus-output.md#host-json).
+Configuration setting can be found in [Service Bus triggers and bindings](functions-bindings-service-bus.md#host-json).
## singleton
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
$TemplateParams = @{"appName" = "<function-app-name>"}
New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -TemplateFile template.json -TemplateParameterObject $TemplateParams -Verbose
```
-To test out this deployment, you can use a [template like this one](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-function-app-create-dynamic/azuredeploy.json) that creates a function app on Windows in a Consumption plan. Replace `<function-app-name>` with a unique name for your function app.
+To test out this deployment, you can use a [template like this one](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/function-app-create-dynamic/azuredeploy.json) that creates a function app on Windows in a Consumption plan. Replace `<function-app-name>` with a unique name for your function app.
## Next steps
Learn more about how to develop and configure Azure Functions.
<!-- LINKS -->
-[Function app on Consumption plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/101-function-app-create-dynamic/azuredeploy.json
+[Function app on Consumption plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.web/function-app-create-dynamic/azuredeploy.json
[Function app on Azure App Service plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/101-function-app-create-dedicated/azuredeploy.json
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference.md
Identity-based connections are supported by the following trigger and binding ex
| Azure Blob | [Version 5.0.0-beta1 or later](./functions-bindings-storage-blob.md#storage-extension-5x-and-higher) | No |
| Azure Queue | [Version 5.0.0-beta1 or later](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) | No |
| Azure Event Hubs | [Version 5.0.0-beta1 or later](./functions-bindings-event-hubs.md#event-hubs-extension-5x-and-higher) | No |
+| Azure Service Bus | [Version 5.0.0-beta2 or later](./functions-bindings-service-bus.md#service-bus-extension-5x-and-higher) | No |
> [!NOTE]
> Support for identity-based connections is not yet available for storage connections used by the Functions runtime for core behaviors. This means that the `AzureWebJobsStorage` setting must be a connection string.
An identity-based connection for an Azure service accepts the following properti
| Property | Required for Extensions | Environment variable | Description |
|---|---|---|---|
| Service URI | Azure Blob, Azure Queue | `<CONNECTION_NAME_PREFIX>__serviceUri` | The data plane URI of the service to which you are connecting. |
-| Fully Qualified Namespace | Event Hubs | `<CONNECTION_NAME_PREFIX>__fullyQualifiedNamespace` | The fully qualified Event Hub namespace. |
+| Fully Qualified Namespace | Event Hubs, Service Bus | `<CONNECTION_NAME_PREFIX>__fullyQualifiedNamespace` | The fully qualified Event Hubs or Service Bus namespace. |
Additional options may be supported for a given connection type. Please refer to the documentation for the component making the connection.
The following roles cover the primary permissions needed for each extension in n
| Azure Blobs | [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader), [Storage Blob Data Owner](../role-based-access-control/built-in-roles.md#storage-blob-data-owner) |
| Azure Queues | [Storage Queue Data Reader](../role-based-access-control/built-in-roles.md#storage-queue-data-reader), [Storage Queue Data Message Processor](../role-based-access-control/built-in-roles.md#storage-queue-data-message-processor), [Storage Queue Data Message Sender](../role-based-access-control/built-in-roles.md#storage-queue-data-message-sender), [Storage Queue Data Contributor](../role-based-access-control/built-in-roles.md#storage-queue-data-contributor) |
| Event Hubs | [Azure Event Hubs Data Receiver](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-receiver), [Azure Event Hubs Data Sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender), [Azure Event Hubs Data Owner](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-owner) |
+| Service Bus | [Azure Service Bus Data Receiver](../role-based-access-control/built-in-roles.md#azure-service-bus-data-receiver), [Azure Service Bus Data Sender](../role-based-access-control/built-in-roles.md#azure-service-bus-data-sender), [Azure Service Bus Data Owner](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner) |
> [!IMPORTANT]
> Some permissions might be exposed by the service that are not necessary for all contexts. Where possible, adhere to the **principle of least privilege**, granting the identity only required privileges. For example, if the app just needs to read from a blob, use the [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) role, as the [Storage Blob Data Owner](../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role includes excessive permissions for a read operation.
azure-functions Functions Cli Mount Files Storage Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/scripts/functions-cli-mount-files-storage-linux.md
# Mount a file share to a Python function app using Azure CLI
-This Azure Functions sample script creates a function app and creates a share in Azure Files. It them mounts the share so that the data can be accessed by your functions.
+This Azure Functions sample script creates a function app and creates a share in Azure Files. It then mounts the share so that the data can be accessed by your functions.
>[!NOTE]
>The function app created runs on Python version 3.7. Azure Functions also [supports Python versions 3.6 and 3.8](../functions-reference-python.md#python-version).
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-unified-log.md
The query results are transformed into a number that is compared against the threshold
### Frequency
-The interval in which the query is run. Can be set from 5 minutes to one day. Must be equal to or less than the [query time range](#query-time-range) to not miss log records.
+> [!NOTE]
+> There are currently no additional charges for 1-minute frequency log alerts. Pricing for features that are in preview will be announced in the future and a notice provided prior to start of billing. Should you choose to continue using 1-minute frequency log alerts after the notice period, you will be billed at the applicable rate.
+
+The interval in which the query is run. Can be set from 1 minute to one day. Must be equal to or less than the [query time range](#query-time-range) to not miss log records.
For example, if you set the time period to 30 minutes and the frequency to 1 hour, and the query runs at 00:00, it returns records between 23:30 and 00:00. The next run at 01:00 returns records between 00:30 and 01:00. Any records created between 00:00 and 00:30 are never evaluated.
+To use 1-minute frequency alerts you need to set a property via the API. When creating new or updating existing log alert rules in API Version `2020-05-01-preview` - in `properties` section, add `evaluationFrequency` with value `PT1M` of type `String`. When creating new or updating existing log alert rules in API Version `2018-04-16` - in `schedule` section, add `frequencyInMinutes` with value `1` of type `Int`.
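For instance, a minimal sketch of the rule payload fragment for API version `2020-05-01-preview` (all other required rule properties are omitted here):

```json
{
  "properties": {
    "evaluationFrequency": "PT1M"
  }
}
```

For API version `2018-04-16`, the equivalent fragment would be `"schedule": { "frequencyInMinutes": 1 }`.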
+ ### Number of violations to trigger alert You can specify the alert evaluation period and the number of failures needed to trigger an alert, allowing you to better define an impact time to trigger an alert.
For example, if your rule [**Aggregation granularity**](#aggregation-granularity
## State and resolving alerts
-Log alerts are stateless. Alerts fire each time the condition is met, even if fired previously. Fired alerts don't resolve. You can [mark the alert as closed](../alerts/alerts-managing-alert-states.md). You can also mute actions to prevent them from triggering for a period after an alert rule fired.
+Log alerts can either be stateless or stateful (currently in preview when using the API).
-In workspaces and Application Insights, it's called **Suppress Alerts**. In all other resource types, it's called **Mute Actions**.
+Stateless alerts fire each time the condition is met, even if fired previously. You can [mark the alert as closed](../alerts/alerts-managing-alert-states.md) once the alert instance is resolved. You can also mute actions to prevent them from triggering for a period after an alert rule fired. In Log Analytics Workspaces and Application Insights, it's called **Suppress Alerts**. In all other resource types, it's called **Mute Actions**.
See this alert evaluation example:
| 00:15 | TRUE | Alert fires and action groups called. New alert state ACTIVE. | 00:20 | FALSE | Alert doesn't fire. No actions called. Previous alert's state remains ACTIVE.
+Stateful alerts fire once per incident and resolve. When creating new or updating existing log alert rules, add the `autoMitigate` flag with value `true` of type `Boolean`, under the `properties` section. You can use this feature in these API versions: `2018-04-16` and `2020-05-01-preview`.
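Sketched as a payload fragment (other rule properties omitted; the flag is named the same in both API versions listed above):

```json
{
  "properties": {
    "autoMitigate": true
  }
}
```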
+ ## Pricing and billing of log alerts Pricing information is located in the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). Log Alerts are listed under resource provider `microsoft.insights/scheduledqueryrules` with:
Pricing information is located in the [Azure Monitor pricing page](https://azure
* Learn about [creating in log alerts in Azure](./alerts-log.md). * Understand [webhooks in log alerts in Azure](../alerts/alerts-log-webhook.md). * Learn about [Azure Alerts](./alerts-overview.md).
-* Learn more about [Log Analytics](../logs/log-query-overview.md).
+* Learn more about [Log Analytics](../logs/log-query-overview.md).
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-dependencies.md
For web pages, Application Insights JavaScript SDK automatically collects AJAX c
## Advanced SQL tracking to get full SQL Query > [!NOTE]
-> Azure Functions requires separate settings to enable SQL text collection, see [configure monitoring for Azure Functions](../../azure-functions/configure-monitoring.md) to learn more.
+> Azure Functions requires separate settings to enable SQL text collection: in [host.json](../../azure-functions/functions-host-json.md#applicationinsights), set `"EnableDependencyTracking": true` and `"DependencyTrackingOptions": { "enableSqlCommandTextInstrumentation": true }` in the `applicationInsights` section.
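A host.json sketch of those settings (camelCase keys as commonly used in host.json; other logging settings omitted — verify exact placement against the linked host.json reference):

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "enableDependencyTracking": true,
      "dependencyTrackingOptions": {
        "enableSqlCommandTextInstrumentation": true
      }
    }
  }
}
```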
For SQL calls, the name of the server and database is always collected and stored as name of the collected `DependencyTelemetry`. There's an additional field called 'data', which can contain the full SQL query text.
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-vm-vmss-apps.md
There are two ways to enable application monitoring for Azure virtual machines a
* The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the .NET SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more. #### Java
- * For Java, **[Application Insights Java 3.0 agent](./java-in-process-agent.md)** is the recommended approach. The most popular libraries and frameworks, as well as logs and dependencies are [auto-collected](./java-in-process-agent.md#auto-collected-requests-dependencies-logs-and-metrics), with a multitude of [additional configurations](./java-standalone-config.md)
+ * For Java, **[Application Insights Java 3.0 agent](./java-in-process-agent.md)** is the recommended approach. The most popular libraries and frameworks, as well as logs and dependencies are [auto-collected](./java-in-process-agent.md#auto-collected-requests), with a multitude of [additional configurations](./java-standalone-config.md)
### Code-based via SDK
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/codeless-overview.md
As we're adding additional integrations, the auto-instrumentation capability mat
||--|--|--|--|--| |Azure App Service on Windows | GA, OnBD* | GA, opt-in | In progress | In progress | Not supported | |Azure App Service on Linux | N/A | Not supported | In progress | Public Preview | Not supported |
-|Azure App Service on AKS | N/A | In design | In design | In design | Not supported |
|Azure Functions - basic | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | |Azure Functions Windows - dependencies | Not supported | Not supported | Public Preview | Not supported | Not supported | |Azure Kubernetes Service | N/A | In design | Through agent | In design | Not supported |
The versatile Java standalone agent works on any environment, there's no need to
* [Application Insights Overview](./app-insights-overview.md) * [Application map](./app-map.md)
-* [End-to-end performance monitoring](../app/tutorial-performance.md)
+* [End-to-end performance monitoring](../app/tutorial-performance.md)
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
In the `applicationinsights.json` file, you can additionally configure:
See [configuration options](./java-standalone-config.md) for full details.
-## Auto-collected requests, dependencies, logs, and metrics
-
-### Requests
+## Auto-collected requests
* JMS Consumers * Kafka Consumers
See [configuration options](./java-standalone-config.md) for full details.
* Servlets * Spring Scheduling
-### Dependencies with distributed trace propagation
+## Auto-collected dependencies
+
+Auto-collected dependencies plus downstream distributed trace propagation:
* Apache HttpClient and HttpAsyncClient * gRPC
See [configuration options](./java-standalone-config.md) for full details.
* Netty client * OkHttp
-### Other dependencies
+Auto-collected dependencies (without downstream distributed trace propagation):
* Cassandra * JDBC * MongoDB (async and sync) * Redis (Lettuce and Jedis)
-### Logs
+## Auto-collected logs
* java.util.logging * Log4j (including MDC properties) * SLF4J/Logback (including MDC properties)
-### Metrics
+## Auto-collected metrics
* Micrometer (including Spring Boot Actuator metrics) * JMX Metrics
-### Azure SDKs (preview)
+## Azure SDKs (preview)
See the [configuration options](./java-standalone-config.md#auto-collected-azure-sdk-telemetry-preview)
-to enable this preview feature and capture the telemetry emitted by these Azure SDKs:
+to enable this preview feature and auto-collect the telemetry emitted by these Azure SDKs:
* [App Configuration](/java/api/overview/azure/data-appconfiguration-readme) 1.1.10+ * [Cognitive Search](/java/api/overview/azure/search-documents-readme) 11.3.0+
azure-monitor Javascript Click Analytics Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript-click-analytics-plugin.md
This plugin automatically tracks click events on web pages and uses data-* attri
Users can set up the Click Analytics Auto-collection plugin via npm.
-### npm setup
+### NPM setup
Install npm package:
const appInsights = new ApplicationInsights({ config: configObj });
appInsights.loadAppInsights(); ```
+## Snippet Setup (ignore if using NPM setup)
+
+```html
+<script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.6.2.min.js"></script>
+<script type="text/javascript">
+ var clickPluginInstance = new Microsoft.ApplicationInsights.ClickAnalyticsPlugin();
+ // Click Analytics configuration
+ var clickPluginConfig = {
+ autoCapture : true,
+ dataTags: {
+ useDefaultContentNameOrId: true
+ }
+ }
+ // Application Insights Configuration
+ var configObj = {
+ instrumentationKey: "YOUR INSTRUMENTATION KEY",
+ extensions: [
+ clickPluginInstance
+ ],
+ extensionConfig: {
+ [clickPluginInstance.identifier] : clickPluginConfig
+ },
+ };
+ // Application Insights Snippet code
+ !function(T,l,y){/* Removed the Snippet code for brevity */}(window,document,{
+ src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",
+ crossOrigin: "anonymous",
+ cfg: configObj
+ });
+</script>
+```
+ ## How to effectively use the plugin 1. Telemetry data generated from the click events is stored as `customEvents` in the Application Insights section of the Azure portal.
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/data-platform-metrics.md
na Previously updated : 02/20/2021 Last updated : 04/27/2021
Below are the instructions on how to configure and view multi-dimensional perfor
## Retention of Metrics
-For most resources in Azure, metrics are stored for 93 days. There are some exceptions:
+For most resources in Azure, platform metrics are stored for 93 days. There are some exceptions:
**Guest OS metrics**-- **Classic guest OS metrics**. These are performance counters collected by the [Windows Diagnostic Extension (WAD)](../agents/diagnostics-extension-overview.md) or the [Linux Diagnostic Extension (LAD)](../../virtual-machines/extensions/diagnostics-linux.md) and routed to an Azure storage account. Retention for these metrics is guaranteed to be at least 14 days, though no actual expiration date is written to the storage account. For performance reasons, the portal limits how much data it displays based on volume. Therefore, the actual number of days retrieved by the portal can be longer than 14 days if the volume of data being written is not very large. -- **Guest OS metrics sent to Azure Monitor Metrics**. These are performance counters collected by the [Windows Diagnostic Extension (WAD)](../agents/diagnostics-extension-overview.md) and sent to the [Azure Monitor data sink](../agents/diagnostics-extension-overview.md#data-destinations), or via the [InfluxData Telegraf Agent](https://www.influxdata.com/time-series-platform/telegraf/) on Linux machines. Retention for these metrics is 93 days.-- **Guest OS metrics collected by Log Analytics agent**. These are performance counters collected by the Log Analytics agent and sent to a Log Analytics workspace. Retention for these metrics is 31 days, and can be extended up to 2 years.
+- **Classic guest OS metrics** - 14 days and sometimes more. These are performance counters collected by the [Windows Diagnostic Extension (WAD)](../agents/diagnostics-extension-overview.md) or the [Linux Diagnostic Extension (LAD)](../../virtual-machines/extensions/diagnostics-linux.md) and routed to an Azure storage account. Retention for these metrics is guaranteed to be at least 14 days, though no actual expiration date is written to the storage account. For performance reasons, the portal limits how much data it displays based on volume. Therefore, the actual number of days retrieved by the portal can be longer than 14 days if the volume of data being written is not very large.
+- **Guest OS metrics sent to Azure Monitor Metrics** - 93 days. These are performance counters collected by the [Windows Diagnostic Extension (WAD)](../agents/diagnostics-extension-overview.md) and sent to the [Azure Monitor data sink](../agents/diagnostics-extension-overview.md#data-destinations), or the [InfluxData Telegraf Agent](https://www.influxdata.com/time-series-platform/telegraf/) on Linux machines, or the newer [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) (AMA) via data collection rules. Retention for these metrics is 93 days.
+- **Guest OS metrics collected by Log Analytics agent** - 31 days to 2 years. These are performance counters collected by the Log Analytics agent and sent to a Log Analytics workspace. Retention for these metrics is 31 days, and can be extended up to 2 years.
-**Application Insights log-based metrics**.
-- Behind the scene, [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md) translate into log queries. Their retention matches the retention of events in underlying logs. For Application Insights resources, logs are stored for 90 days.
+**Application Insights log-based metrics** - varies. Behind the scenes, [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md) translate into log queries. Their retention matches the retention of events in the underlying logs (31 days to 2 years). For Application Insights resources, logs are stored for 90 days.
> [!NOTE]
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Log Analytics workspace data export continuously exports data from a Log Analyti
- Brazil south east - Norway East - UAE North-- You can create two export rules in a workspace -- in can be one rule to event hub and one rule to storage account.
+- You can have up to 10 enabled rules in your workspace. Additional rules beyond 10 can be created in a disabled state.
+- The destination must be unique across all export rules in your workspace.
- The destination storage account or event hub must be in the same region as the Log Analytics workspace. - Names of tables to be exported can be no longer than 60 characters for a storage account and no more than 47 characters to an event hub. Tables with longer names will not be exported. - Append blob support for Azure Data Lake Storage is now in [limited public preview](https://azure.microsoft.com/updates/append-blob-support-for-azure-data-lake-storage-preview/)
Log Analytics data export can write append blobs to immutable storage accounts w
Data is sent to your event hub in near-real-time as it reaches Azure Monitor. An event hub is created for each data type that you export with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* would be sent to an event hub named *am-SecurityEvent*. If you want the exported data to reach a specific event hub, or if you have a table with a name that exceeds the 47-character limit, you can provide your own event hub name and export all data for defined tables to it. > [!IMPORTANT]
-> The [number of supported event hubs per namespace is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). If you export more than 10 tables, provide your own event hub name to export all your tables to that event hub.
+> The [number of supported event hubs in the 'Basic' and 'Standard' namespace tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). If you export more than 10 tables, either split the tables between several export rules to different event hub namespaces, or provide an event hub name in the export rule and export all tables to that event hub.
Considerations:
-1. 'Basic' event hub sku supports lower event size [limit](../../event-hubs/event-hubs-quotas.md#basic-vs-standard-tiers) and some logs in your workspace can exceed it and be dropped. We recommend to use 'Standard' or 'Dedicated' event hub as export destination.
+1. The 'Basic' event hub tier supports a lower [event size](../../event-hubs/event-hubs-quotas.md) limit, and some logs in your workspace can exceed it and be dropped. We recommend using a 'Standard' or 'Dedicated' event hub as the export destination.
2. The volume of exported data often increases over time, and the event hub scale needs to be increased to handle larger transfer rates and avoid throttling scenarios and data latency. You should use the auto-inflate feature of Event Hubs to automatically scale up and increase the number of throughput units and meet usage needs. See [Automatically scale up Azure Event Hubs throughput units](../../event-hubs/event-hubs-auto-inflate.md) for details. ## Prerequisites The following prerequisites must be completed before configuring Log Analytics data export. -- The storage account and event hub must already be created and must be in the same region as the Log Analytics workspace. If you need to replicate your data to other storage accounts, you can use any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md).
+- Destinations must be created prior to the export rule configuration and should be in the same region as your Log Analytics workspace. If you need to replicate your data to other storage accounts, you can use any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md).
- The storage account must be StorageV1 or StorageV2. Classic storage is not supported - If you have configured your storage account to allow access from selected networks, you need to add an exception in your storage account settings to allow Azure Monitor to write to your storage.
If you have configured your Storage Account to allow access from selected networ
[![Storage account firewalls and virtual networks](media/logs-data-export/storage-account-vnet.png)](media/logs-data-export/storage-account-vnet.png#lightbox) ### Create or update data export rule
-A data export rule defines the tables for which data is exported and the destination. You can create a single rule for each destination currently.
+A data export rule defines the tables for which data is exported and the destination. You can have up to 10 enabled rules in your workspace; any additional rules beyond 10 must be in a disabled state. A destination must be unique across all export rules in your workspace.
+
+> [!NOTE]
+> Data export sends logs to destinations that you own, and these destinations have their own limits: [storage account scalability](../../storage/common/scalability-targets-standard-account.md#scale-targets-for-standard-storage-accounts), [event hub namespace quota](../../event-hubs/event-hubs-quotas.md). It's recommended that you monitor your destinations for throttling and take action when nearing a destination limit. For example:
+> - Set the auto-inflate feature on the event hub to automatically scale up and increase the number of TUs (throughput units). You can request more TUs when auto-inflate is at its maximum.
+> - Split tables across several export rules, each to a different destination.
Export rule should include tables that you have in your workspace. Run this query for a list of available tables in your workspace.
azure-netapp-files Azacsnap Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-installation.md
# Install Azure Application Consistent Snapshot tool
-This article provides a guide for installation of the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files.
+This article provides a guide for installation of the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files or Azure Large Instance.
+
+> [!IMPORTANT]
+> Distributed installations are the only option for **Azure Large Instance** systems because they are deployed in a private network. Therefore, AzAcSnap must be installed on each system to ensure connectivity.
## Introduction
azure-netapp-files Azacsnap Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-introduction.md
AzAcSnap leverages the volume snapshot and replication functionalities in Azure
- **Support for disaster recovery** AzAcSnap leverages storage volume replication to provide options for recovering replicated application-consistent snapshots at a remote site.
-AzAcSnap is a single binary. It does not need additional agents or plug-ins to interact with the database or the storage (Azure NetApp Files via Azure Resource Manager, and Azure Large Instance via SSH). AzAcSnap must be installed on a system that has connectivity to the database and the storage. However, the flexibility of installation and configuration allows for either a single centralized installation or a fully distributed installation with copies installed on each database installation.
+AzAcSnap is a single binary. It does not need additional agents or plug-ins to interact with the database or the storage (Azure NetApp Files via Azure Resource Manager, and Azure Large Instance via SSH). AzAcSnap must be installed on a system that has connectivity to the database and the storage. However, the flexibility of installation and configuration allows for either a single centralized installation (Azure NetApp Files only) or a fully distributed installation (Azure NetApp Files and Azure Large Instance) with copies installed on each database installation.
## Architecture overview
azure-netapp-files Azure Netapp Files Cost Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-cost-model.md
na ms.devlang: na Previously updated : 09/22/2020 Last updated : 04/30/2021 # Cost model for Azure NetApp Files
For cost model specific to cross-region replication, see [Cost model for cross-r
Azure NetApp Files is billed on provisioned storage capacity. Provisioned capacity is allocated by creating capacity pools. Capacity pools are billed based on $/provisioned-GiB/month in hourly increments. The minimum size for a single capacity pool is 4 TiB, and capacity pools can be subsequently expanded in 1-TiB increments. Volumes are created within capacity pools. Each volume is assigned a quota that decrements from the pool's provisioned capacity. The quota that can be assigned to volumes ranges from a minimum of 100 GiB to a maximum of 100 TiB.
-For an active volume, capacity consumption against quota is based on logical (effective) capacity.
+For an active volume, capacity consumption against the quota is based on logical (effective) capacity, whether from active filesystem or snapshot data. A volume can contain only as much data as its set size (quota).
-If the actual capacity consumption of a volume exceeds its storage quota, the volume can continue to grow. Writes will still be permitted as long as the actual volume size is less than the system limit (100 TiB).
+The total used capacity in a capacity pool against its provisioned amount is the sum of the actual consumption of all volumes within the pool:
-The total used capacity in a capacity pool against its provisioned amount is the sum of the greater of either the assigned quota or actual consumption of all volumes within the pool:
+ ![Expression showing total used capacity calculation.](../media/azure-netapp-files/azure-netapp-files-total-used-capacity.png)
- ![Total used capacity calculation](../media/azure-netapp-files/azure-netapp-files-total-used-capacity.png)
-
-The diagram below illustrates these concepts.
-* We have a capacity pool with 4 TiB of provisioned capacity. The pool contains three volumes.
- * Volume 1 is assigned a quota of 2 TiB and has 800 GiB of consumption.
- * Volume 2 is assigned a quota of 1 TiB and has 100 GiB of consumption.
- * Volume 3 is assigned a quota of 500 GiB but has 800 GiB of consumption (overage).
-* The capacity pool is metered for 4 TiB of capacity (the provisioned amount).
- 3.8 TiB of capacity is consumed (2 TiB and 1 TiB of quota from Volumes 1 and 2, and 800 GiB of actual consumption for Volume 3). And 200 GiB of capacity is remaining.
-
- ![Capacity pool with three volumes](../media/azure-netapp-files/azure-netapp-files-capacity-pool-with-three-vols.png)
-
-## Overage in capacity consumption
-
-When the total used capacity of a pool exceeds its provisioned capacity, data writes are still permitted. After the grace period (one hour), if the used capacity of the pool still exceeds its provisioned capacity, then the pool size will be automatically increased in increments of 1 TiB until the provisioned capacity is greater than the total used capacity. For example, in the illustration above, if Volume 3 continues to grow and the actual consumption reaches 1.2 TiB, then after the grace period, the pool will automatically be resized to 5 TiB. The result is that the provisioned pool capacity (5 TiB) exceeds the used capacity (4.2 TiB).
-
-Although the capacity pool size automatically grows to meet the demand of the volume, it isnΓÇÖt automatically reduced when the volume size decreases. If you want to down-size the capacity pool after a volume size decrease (for example, after data cleanup of a volume), you need to _manually_ reduce the capacity pool size.
-
-## Manual changes of the pool size
-
-You can manually increase or decrease the pool size. However, the following constraints apply:
-* Service minimum and maximum limits
- See the article about [resource limits](azure-netapp-files-resource-limits.md).
-* A 1-TiB increment after the initial 4-TiB minimum purchase
-* A one-hour minimum billing increment
-* The provisioned pool size may not be decreased to less than the total used capacity in the pool.
-* For capacity pools with manual QoS, the pool size can only be decreased if the size and service level provide more throughput than the actual assigned throughput of all volumes.
-
-## Behavior of maximum-size pool overage
-
-The maximum size of a capacity pool that you can create or resize to is 500 TiB. When the total used capacity in a capacity pool exceeds 500 TiB, the following situations will occur:
-* Data writes will still be permitted (if the volume is below the system maximum of 100 TiB).
-* After the one-hour grace period, the pool will be automatically resized in 1-TiB increments, until the pool provisioned capacity exceeds total used capacity.
-* The additional provisioned and billed pool capacity exceeding 500 TiB cannot be used to assign volume quota. It also cannot be used to expand performance QoS limits.
- See [service levels](azure-netapp-files-service-levels.md) about performance limits and QoS sizing.
+## Capacity consumption of snapshots
-The diagram below illustrates these concepts:
-* We have a capacity pool with a Premium storage tier and a 500-TiB capacity. The pool contains nine volumes.
- * Volumes 1 through 8 are assigned a quota of 60 TiB each. The total used capacity is 480 TiB.
- Each volume has a QoS limit of 3.75 GiB/s of throughput (60 TiB * 64 MiB/s).
- * Volume 9 is assigned a quota of 20 TiB.
- Volume 9 has a QoS limit of 1.25 GiB/s of throughput (20 TiB * 64 MiB/s).
-* Volume 9 is an overage scenario. It has 25 TiB of actual consumption.
- * After the one-hour grace period, the capacity pool will be resized to 505 TiB.
- That is, total used capacity = 8 * 60-TiB quota for Volumes 1 through 8, and 25 TiB of actual consumption for Volume 9.
- * The billed capacity is 505 TiB.
- * Volume quota for Volume 9 cannot be increased (because the total assigned quota for the pool may not exceed 500 TiB).
- * Additional QoS throughput may not be assigned (because the total QoS for the pool is still based on 500 TiB).
+The capacity consumption of snapshots in Azure NetApp Files is charged against the quota of the parent volume. As a result, it shares the same billing rate as the capacity pool to which the volume belongs. However, unlike the active volume, snapshot consumption is measured based on the incremental capacity consumed. Azure NetApp Files snapshots are differential in nature. Depending on the change rate of the data, the snapshots often consume much less capacity than the logical capacity of the active volume. For example, assume that you have a snapshot of a 500-GiB volume that only contains 10 GiB of differential data.
+The capacity consumption that is counted towards the volume quota for the active filesystem and the snapshot would be 510 GiB, not 1000 GiB. As a general rule, you can assume that reserving 20% of capacity retains about a week's worth of snapshot data (depending on snapshot frequency and the application's daily block-level change rate).
- ![Capacity pool with nine volumes](../media/azure-netapp-files/azure-netapp-files-capacity-pool-with-nine-vols.png)
+The following diagram illustrates the concepts.
-## Capacity consumption of snapshots
+* Assume a capacity pool with 40 TiB of provisioned capacity. The pool contains three volumes:
+ * Volume 1 is assigned a quota of 20 TiB and has 13 TiB (12 TiB active, 1 TiB snapshots) of consumption.
+ * Volume 2 is assigned a quota of 1 TiB and has 450 GiB of consumption.
+ * Volume 3 is assigned a quota of 14 TiB but has 8.8 TiB (8 TiB active, 800 GiB snapshots) of consumption.
+* The capacity pool is metered for 40 TiB of capacity (the provisioned amount). 22.25 TiB of capacity is consumed (13 TiB, 450 GiB, and 8.8 TiB of consumption from Volumes 1, 2, and 3). The capacity pool has 17.75 TiB of capacity remaining.
-The capacity consumption of snapshots in Azure NetApp Files is charged against the quota of the parent volume. As a result, it shares the same billing rate as the capacity pool to which the volume belongs. However, unlike the active volume, snapshot consumption is measured based on the incremental capacity consumed. Azure NetApp Files snapshots are differential in nature. Depending on the change rate of the data, the snapshots often consume much less capacity than the logical capacity of the active volume. For example, assume that you have a snapshot of a 500-GiB volume that only contains 10 GiB of differential data. The capacity that is charged against the volume quota for that snapshot would be 10 GiB, not 500 GiB.
+![Diagram showing capacity pool with three volumes.](../media/azure-netapp-files/azure-netapp-files-capacity-pool-with-three-vols.png)
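The pool accounting above can be double-checked with a quick sum (numbers taken from the example; 450 GiB is treated as 0.45 TiB):

```shell
# Sum per-volume consumption and compute the remaining pool capacity, in TiB.
awk 'BEGIN {
  pool = 40
  consumed = 13 + 0.45 + 8.8   # Volume 1, Volume 2 (450 GiB), Volume 3
  printf "consumed=%.2f TiB remaining=%.2f TiB\n", consumed, pool - consumed
}'
# prints: consumed=22.25 TiB remaining=17.75 TiB
```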
## Next steps
The capacity consumption of snapshots in Azure NetApp Files is charged against t
* [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) * [Cost model for cross-region replication](cross-region-replication-introduction.md#cost-model-for-cross-region-replication)
+* [Understand volume quota](volume-quota-introduction.md)
+* [Monitor the capacity of a volume](monitor-volume-capacity.md)
+* [Resize the capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md)
+* [Capacity management FAQs](azure-netapp-files-faqs.md#capacity-management-faqs)
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 04/23/2021 Last updated : 04/30/2021 # FAQs About Azure NetApp Files
For an NFS volume to automatically mount at VM start or reboot, add an entry to
See [Mount or unmount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) for details.
-### Why does the DF command on NFS client not show the provisioned volume size?
-
-The volume size reported in DF is the maximum size the Azure NetApp Files volume can grow to. The size of the Azure NetApp Files volume in DF command is not reflective of the quota or size of the volume. You can get the Azure NetApp Files volume size or quota through the Azure portal or the API.
- ### What NFS version does Azure NetApp Files support? Azure NetApp Files supports NFSv3 and NFSv4.1. You can [create a volume](azure-netapp-files-create-volumes.md) using either NFS version.
Although SMB encryption has impact to both the client (CPU overhead for encrypti
Azure NetApp Files provides capacity pool and volume usage metrics. You can also use Azure Monitor to monitor usage for Azure NetApp Files. See [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md) for details.
-### Can I manage Azure NetApp Files through Azure Storage Explorer?
-
-No. Azure NetApp Files is not supported by Azure Storage Explorer.
- ### How do I determine if a directory is approaching the limit size?
+You can use the `stat` command from a client to see whether a directory is approaching the [maximum size limit](azure-netapp-files-resource-limits.md#resource-limits) for directory metadata (320 MB).
See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md#directory-limit) for the limit and calculation.
-<!-- You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB).
+<!--
+You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB).
For a 320-MB directory, the number of blocks is 655360, with each block size being 512 bytes. (That is, 320x1024x1024/512.) This number translates to approximately 4 million files maximum for a 320-MB directory. However, the actual number of maximum files might be lower, depending on factors such as the number of files containing non-ASCII characters in the directory. As such, you should use the `stat` command as follows to determine whether your directory is approaching its limit.
Size: 4096 Blocks: 8 IO Block: 65536 directory
``` -->
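The `stat`-based check described above can also be scripted from a client. The following is a sketch under the assumptions stated in the text (directory metadata reported as 512-byte blocks, 320-MB limit); the helper names and the 90% warning ratio are illustrative, not part of Azure NetApp Files:

```python
import os

DIR_METADATA_LIMIT_BYTES = 320 * 1024 * 1024  # 320-MB directory metadata limit

def directory_metadata_bytes(path):
    """Directory metadata size: allocated block count times 512 bytes."""
    st = os.stat(path)
    # st_blocks is reported in 512-byte units on Linux; absent on some platforms.
    return getattr(st, "st_blocks", 0) * 512

def near_limit(path, warn_ratio=0.9):
    """True when directory metadata exceeds warn_ratio of the 320-MB limit."""
    return directory_metadata_bytes(path) >= warn_ratio * DIR_METADATA_LIMIT_BYTES
```

Running `near_limit("/mnt/anfvol/bigdir")` periodically gives an early warning before directory creation starts failing.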
+### Does snapshot space count towards the usable / provisioned capacity of a volume?
+
+Yes, the [consumed snapshot capacity](azure-netapp-files-cost-model.md#capacity-consumption-of-snapshots) counts towards the provisioned space in the volume. If the volume runs full, consider taking the following actions:
+
+* [Resize the volume](azure-netapp-files-resize-capacity-pools-or-volumes.md).
+* [Remove older snapshots](azure-netapp-files-manage-snapshots.md#delete-snapshots) to free up space in the hosting volume.
+
+### Does Azure NetApp Files support auto-grow for volumes or capacity pools?
+
+No, Azure NetApp Files volumes and capacity pools do not auto-grow when they fill up. See [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md).
+
+You can use the community-supported [Logic Apps ANFCapacityManager tool](https://github.com/ANFTechTeam/ANFCapacityManager) to manage capacity-based alert rules. The tool can automatically increase volume sizes to prevent your volumes from running out of space.
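The alert-driven growth approach can be approximated by a simple rule: when consumption crosses an alert threshold, raise the volume quota by a fixed step. This is a hypothetical sketch of the principle only; the function name, 90% threshold, and 100-GiB step are illustrative and not the ANFCapacityManager tool's actual configuration:

```python
def next_quota_gib(used_gib, quota_gib, alert_ratio=0.9, step_gib=100):
    """Grow the volume quota by step_gib whenever usage crosses alert_ratio."""
    if used_gib >= alert_ratio * quota_gib:
        return quota_gib + step_gib
    return quota_gib

# A 1000-GiB volume at 950 GiB used crosses the 90% threshold and grows.
print(next_quota_gib(950, 1000))  # → 1100
print(next_quota_gib(500, 1000))  # → 1000
```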
+
+### Does the destination volume of a replication count towards hard volume quota?
+
+No, the destination volume of a replication does not count towards hard volume quota.
+
+### Can I manage Azure NetApp Files through Azure Storage Explorer?
+
+No. Azure NetApp Files is not supported by Azure Storage Explorer.
+ ## Data migration and protection FAQs ### How do I migrate data to Azure NetApp Files?
The requirements for data migration from on premises to Azure NetApp Files are a
Azure NetApp Files provides NFS and SMB volumes. Any file based-copy tool can be used to replicate data between Azure regions.
+The [cross-region replication](cross-region-replication-introduction.md) functionality enables you to asynchronously replicate data from an Azure NetApp Files volume (source) in one region to another Azure NetApp Files volume (destination) in another region. Additionally, you can [create a new volume by using a snapshot of an existing volume](azure-netapp-files-manage-snapshots.md#restore-a-snapshot-to-a-new-volume).
+ NetApp offers a SaaS-based solution, [NetApp Cloud Sync](https://cloud.netapp.com/cloud-sync-service). The solution enables you to replicate NFS or SMB data to Azure NetApp Files NFS exports or SMB shares. You can also use a wide array of free tools to copy data. For NFS, you can use tools such as [rsync](https://rsync.samba.org/examples.html) to copy and synchronize source data into an Azure NetApp Files volume. For SMB, you can use [robocopy](/windows-server/administration/windows-commands/robocopy) in the same manner. These tools can also replicate file or folder permissions.
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
na ms.devlang: na Previously updated : 03/10/2021 Last updated : 04/30/2021 # Resize a capacity pool or a volume
-You can change the size of a capacity pool or a volume as necessary.
+You can change the size of a capacity pool or a volume as necessary, for example, when a volume or capacity pool fills up.
-## Resize the capacity pool
+For information about monitoring a volume's capacity, see [Monitor the capacity of a volume](monitor-volume-capacity.md).
+
+## Resize the capacity pool using the Azure portal
You can change the capacity pool size in 1-TiB increments or decrements. However, the capacity pool size cannot be smaller than 4 TiB. Resizing the capacity pool changes the purchased Azure NetApp Files capacity.
-1. From the Manage NetApp Account blade, click the capacity pool that you want to resize.
-2. Right-click the capacity pool name or click the "…" icon at the end of the capacity pool’s row to display the context menu.
-3. Use the context menu options to resize or delete the capacity pool.
+1. From the NetApp Account view, go to **Capacity pools**, and click the capacity pool that you want to resize.
+2. Right-click the capacity pool name or click the "…" icon at the end of the capacity pool row to display the context menu. Click **Resize**.
+3. In the Resize pool window, specify the pool size. Click **OK**.
+
+ ![Screenshot that shows pool context menu.](../media/azure-netapp-files/resize-pool-context-menu.png)
+
+ ![Screenshot that shows Resize pool window.](../media/azure-netapp-files/resize-pool-window.png)
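The sizing rules stated above (whole 1-TiB increments or decrements, 4-TiB minimum) can be expressed as a small validation check. This is a sketch for illustration, not part of any Azure tooling:

```python
MIN_POOL_TIB = 4  # a capacity pool cannot be smaller than 4 TiB

def is_valid_pool_size(tib):
    """Pool sizes change in whole-TiB steps and can't go below 4 TiB."""
    return isinstance(tib, int) and tib >= MIN_POOL_TIB

print(is_valid_pool_size(4))    # → True
print(is_valid_pool_size(3))    # → False
```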
-## Resize a volume
+## Resize a volume using the Azure portal
You can change the size of a volume as necessary. A volume's capacity consumption counts against its pool's provisioned capacity.
-1. From the Manage NetApp Account blade, click **Volumes**.
-2. Right-click the name of the volume that you want to resize or click the "…" icon at the end of the volume's row to display the context menu.
-3. Use the context menu options to resize or delete the volume.
+1. From the NetApp Account view, go to **Volumes**, and click the volume that you want to resize.
+2. Right-click the volume name or click the "…" icon at the end of the volume's row to display the context menu. Click **Resize**.
+3. In the Update volume quota window, specify the quota for the volume. Click **OK**.
+
+ ![Screenshot that shows volume context menu.](../media/azure-netapp-files/resize-volume-context-menu.png)
+
+ ![Screenshot that shows Update Volume Quota window.](../media/azure-netapp-files/resize-volume-quota-window.png)
+
+## Resizing the capacity pool or a volume using Azure CLI
+
+You can use the following commands of the [Azure command line (CLI) tools](azure-netapp-files-sdk-cli.md) to resize a capacity pool or a volume:
+
+* [`az netappfiles pool`](/cli/azure/netappfiles/pool?preserve-view=true&view=azure-cli-latest)
+* [`az netappfiles volume`](/cli/azure/netappfiles/volume?preserve-view=true&view=azure-cli-latest)
+
+## Resizing the capacity pool or a volume using REST API
+
+You can build automation to handle capacity pool and volume size changes.
+
+See [REST API for Azure NetApp Files](azure-netapp-files-develop-with-rest-api.md) and [REST API using PowerShell for Azure NetApp Files](develop-rest-api-powershell.md).
+
+The REST API specification and example code for Azure NetApp Files are available through the [resource-manager GitHub directory](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/netapp/resource-manager/Microsoft.NetApp/stable).
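As a sketch of what such automation sends, a pool resize is a PATCH against the capacity pool resource with the new size in bytes. The URL shape and `properties.size` field follow the Microsoft.NetApp REST specification linked above, but the api-version, subscription GUID, and resource names below are placeholder assumptions — check the current specification before relying on them:

```python
import json

def pool_resize_request(subscription, resource_group, account, pool, size_tib,
                        api_version="2021-02-01"):
    """Build (url, body) for an ARM PATCH that resizes a capacity pool."""
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription}/resourceGroups/{resource_group}"
        f"/providers/Microsoft.NetApp/netAppAccounts/{account}"
        f"/capacityPools/{pool}?api-version={api_version}"
    )
    body = {"properties": {"size": size_tib * 1024**4}}  # pool size is in bytes
    return url, json.dumps(body)

# Placeholder identifiers for illustration only.
url, body = pool_resize_request(
    "00000000-0000-0000-0000-000000000000", "myResourceGroup",
    "myAccount", "myPool", 5)
```

The request itself would be sent with an Azure AD bearer token, for example via the Azure SDK or `az rest`.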
## Resize a cross-region replication destination volume
The following table describes the destination volume resizing behavior based on
- [Set up a capacity pool](azure-netapp-files-set-up-capacity-pool.md) - [Manage a manual QoS capacity pool](manage-manual-qos-capacity-pool.md)-- [Dynamically change the service level of a volume](dynamic-change-volume-service-level.md)
+- [Dynamically change the service level of a volume](dynamic-change-volume-service-level.md)
+- [Understand volume quota](volume-quota-introduction.md)
+- [Monitor the capacity of a volume](monitor-volume-capacity.md)
+- [Capacity management FAQs](azure-netapp-files-faqs.md#capacity-management-faqs)
azure-netapp-files Monitor Volume Capacity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/monitor-volume-capacity.md
+
+ Title: Monitor the capacity of an Azure NetApp Files volume | Microsoft Docs
+description: Describes ways to monitor the capacity utilization of an Azure NetApp Files volume.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 04/30/2021++
+# Monitor the capacity of a volume
+
+This article describes ways to monitor the capacity utilization of an Azure NetApp Files volume.
+
+## Using Windows or Linux clients
+
+This section shows how to use a Windows or Linux client to monitor the volume capacity. The scenarios described in this section assume a volume configured with a 1-TiB size (quota) on a 4-TiB Ultra service-level capacity pool.
+
+### Windows (SMB) clients
+
+You can use Windows clients to check the used and available capacity of a volume through the mapped network drive properties. You can use one of the following two methods:
+
+* Go to File Explorer, right-click the mapped drive, and select **Properties** to display capacity.
+
+ [ ![Screenshot that shows Explorer drive properties and volume properties.](../media/azure-netapp-files/monitor-explorer-drive-properties.png) ](../media/azure-netapp-files/monitor-explorer-drive-properties.png#lightbox)
+
+* Use the `dir` command at the command prompt:
+
+ ![Screenshot that shows using the dir command to display capacity.](../media/azure-netapp-files/monitor-volume-properties-dir-command.png)
+
+The *available space* is accurate when you use File Explorer or the `dir` command. However, the *consumed/used space* will be an estimate when snapshots are generated on the volume. The [consumed snapshot capacity](azure-netapp-files-cost-model.md#capacity-consumption-of-snapshots) counts towards the total consumed space on the volume. To get the absolute volume consumption, including the capacity used by snapshots, use the [Azure NetApp Files metrics](azure-netapp-files-metrics.md#volumes) in the Azure portal.
+
+### Linux (NFS) clients
+
+Linux clients can check the used and available capacity of a volume using the [df command](https://linux.die.net/man/1/df).
+
+The `-h` option shows the size, including used and available space, in human-readable format (using M, G, and T unit sizes).
+
+The following screenshot shows volume capacity reporting in Linux:
+
+![Screenshot that shows volume capacity reporting in Linux.](../media/azure-netapp-files/monitor-volume-properties-linux-command.png)
+
+The *available space* is accurate when you use the `df` command. However, the *consumed/used space* will be an estimate when snapshots are generated on the volume. The [consumed snapshot capacity](azure-netapp-files-cost-model.md#capacity-consumption-of-snapshots) counts towards the total consumed space on the volume. To get the absolute volume consumption, including the capacity used by snapshots, use the [Azure NetApp Files metrics](azure-netapp-files-metrics.md#volumes) in the Azure portal.
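The same figures `df` reports can be read programmatically on a Linux client via the POSIX `statvfs` interface; as with `df`, the used space on a volume with snapshots is an estimate. A minimal sketch (the mount point is a placeholder):

```python
import os

def df_gib(path):
    """Return (total, used, available) in GiB for the file system at path."""
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    used = (st.f_blocks - st.f_bfree) * st.f_frsize
    available = st.f_bavail * st.f_frsize  # space available to non-root users
    gib = 1024**3
    return total / gib, used / gib, available / gib

total, used, avail = df_gib("/")  # replace "/" with the volume's mount point
```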
+
+## Using Azure portal
+Azure NetApp Files leverages the standard [Azure Monitor](/azure/azure-monitor/overview) functionality. As such, you can use Azure Monitor to monitor Azure NetApp Files volumes.
+
+## Using Azure CLI
+
+You can use the [`az netappfiles volume`](/cli/azure/netappfiles/volume?view=azure-cli-latest&preserve-view=true) commands of the [Azure command line (CLI) tools](azure-netapp-files-sdk-cli.md) to monitor a volume.
+
+## Using REST API
+
+See [REST API for Azure NetApp Files](azure-netapp-files-develop-with-rest-api.md) and [REST API using PowerShell for Azure NetApp Files](develop-rest-api-powershell.md).
+
+The REST API specification and example code for Azure NetApp Files are available through the [resource-manager GitHub directory](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/netapp/resource-manager/Microsoft.NetApp/stable).
+
+## Next steps
+
+* [Understand volume quota](volume-quota-introduction.md)
+* [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md)
+* [Resize the capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md)
+* [Capacity management FAQs](azure-netapp-files-faqs.md#capacity-management-faqs)
azure-netapp-files Volume Hard Quota Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/volume-hard-quota-guidelines.md
You can use the portal or the CLI to manually increase the volume or capacity po
##### Portal
-You can [change the size of a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md#resize-a-volume) as necessary. A volume's capacity consumption counts against its pool's provisioned capacity.
+You can [change the size of a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md#resize-a-volume-using-the-azure-portal) as necessary. A volume's capacity consumption counts against its pool's provisioned capacity.
1. From the Manage NetApp Account blade, click **Volumes**. 2. Right-click the name of the volume that you want to resize or click the `…` icon at the end of the volume's row to display the context menu.
You can [change the size of a volume](azure-netapp-files-resize-capacity-pools-o
![Screenshot that shows the Update Volume Quota window.](../media/azure-netapp-files/hard-quota-update-volume-quota.png)
-In some cases, the hosting capacity pool does not have sufficient capacity to resize the volumes. However, you can [change the capacity pool size](azure-netapp-files-resize-capacity-pools-or-volumes.md#resize-the-capacity-pool) in 1-TiB increments or decrements. The capacity pool size cannot be smaller than 4 TiB. *Resizing the capacity pool changes the purchased Azure NetApp Files capacity.*
+In some cases, the hosting capacity pool does not have sufficient capacity to resize the volumes. However, you can [change the capacity pool size](azure-netapp-files-resize-capacity-pools-or-volumes.md#resize-the-capacity-pool-using-the-azure-portal) in 1-TiB increments or decrements. The capacity pool size cannot be smaller than 4 TiB. *Resizing the capacity pool changes the purchased Azure NetApp Files capacity.*
1. From the Manage NetApp Account blade, click the capacity pool that you want to resize. 2. Right-click the capacity pool name or click the `…` icon at the end of the capacity pool’s row to display the context menu.
azure-netapp-files Volume Quota Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/volume-quota-introduction.md
+
+ Title: Understand volume quota for Azure NetApp Files | Microsoft Docs
+description: Provides an overview about volume quota. Also provides references about monitoring and managing volume and pool capacity.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 04/30/2021++
+# Understand volume quota
+
+This article provides an overview of volume quota for Azure NetApp Files. It also provides references to details that can help you monitor and manage the capacity of a volume or capacity pool.
+
+## Behaviors of volume quota
+
+* The storage capacity of an Azure NetApp Files volume is limited to the set size (quota) of the volume.
+
+* When volume consumption maxes out, neither the volume nor the underlying capacity pool grows automatically. Instead, the volume will receive an "out of space" condition. However, you can [resize the capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md) as needed. You should actively [monitor the capacity of a volume](monitor-volume-capacity.md) and the underlying capacity pool.
+
+* Depending on the capacity pool type, the size (quota) of an Azure NetApp Files volume has an impact on its bandwidth performance and the provisioned capacity. See the [auto QoS pool type](azure-netapp-files-understand-storage-hierarchy.md#qos_types) for details.
+
+* The capacity consumed by volume [snapshots](snapshots-introduction.md) counts towards the provisioned space in the volume.
+
+* Volume quota doesn't apply to a [replication destination volume](cross-region-replication-introduction.md).
+
+* See [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md) about the calculation of capacity consumption and overage in capacity consumption.
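The hard-quota behavior listed above can be modeled simply: writes succeed until consumption (including snapshot capacity) reaches the volume quota, after which the client sees an out-of-space error until the quota is raised or snapshots are deleted. A toy model for illustration, not Azure code:

```python
class VolumeQuotaModel:
    """Toy model of an Azure NetApp Files volume with a hard quota (GiB)."""

    def __init__(self, quota_gib):
        self.quota_gib = quota_gib
        self.active_gib = 0.0
        self.snapshot_gib = 0.0

    @property
    def consumed_gib(self):
        # Snapshot capacity counts towards the provisioned space in the volume.
        return self.active_gib + self.snapshot_gib

    def write(self, gib):
        if self.consumed_gib + gib > self.quota_gib:
            raise OSError("out of space")  # the volume does not auto-grow
        self.active_gib += gib

    def resize(self, new_quota_gib):
        self.quota_gib = new_quota_gib     # a manual resize clears the condition
```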
+
+## Next steps
+
+* [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md)
+* [Monitor the capacity of a volume](monitor-volume-capacity.md)
+* [Resize the capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md)
+* [Capacity management FAQs](azure-netapp-files-faqs.md#capacity-management-faqs)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na ms.devlang: na Previously updated : 04/21/2021 Last updated : 04/30/2021
Azure NetApp Files is updated regularly. This article provides a summary about t
## April 2021
+* [Manual volume and capacity pool management](volume-quota-introduction.md) (hard quota)
+
+ The behavior of Azure NetApp Files volume and capacity pool provisioning has changed to a manual and controllable mechanism. The storage capacity of a volume is limited to the set size (quota) of the volume. When volume consumption maxes out, neither the volume nor the underlying capacity pool grows automatically. Instead, the volume will receive an "out of space" condition. However, you can [resize the capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md) as needed. You should actively [monitor the capacity of a volume](monitor-volume-capacity.md) and the underlying capacity pool.
+
+ This behavior change is a result of the following key requests indicated by many users:
+
+ * Previously, VM clients would see the thinly provisioned (100 TiB) capacity of any given volume when using OS space or capacity monitoring tools. This situation could result in inaccurate capacity visibility on the client or application side. This behavior has now been corrected.
+ * The previous auto-grow behavior of capacity pools gave application owners no control over the provisioned capacity pool space (and the associated cost). This behavior was especially cumbersome in environments where "runaway processes" could rapidly fill up and grow the provisioned capacity. This behavior has been corrected.
+ * Users want to see and maintain a direct correlation between volume size (quota) and performance. The previous behavior allowed for (implicit) over-subscription of a volume (capacity) and capacity pool auto-grow. As such, users could not make a direct correlation until the volume quota had been actively set or reset. This behavior has now been corrected.
+
+ Users have requested direct control over provisioned capacity. Users want to control and balance storage capacity and utilization. They also want to control cost along with the application-side and client-side visibility of available, used, and provisioned capacity and the performance of their application volumes. With this new behavior, all this capability has now been enabled.
+* [SMB Continuous Availability (CA) shares support for FSLogix user profile containers](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) (Preview)
+
+    [FSLogix](/fslogix/overview) is a set of solutions that enhance, enable, and simplify non-persistent Windows computing environments. FSLogix solutions are appropriate for virtual environments in both public and private clouds. FSLogix solutions can also be used to create more portable computing sessions when you use physical devices. FSLogix can be used to provide dynamic access to persistent user profile containers stored on SMB shared networked storage, including Azure NetApp Files. To further enhance FSLogix resiliency to storage service maintenance events, Azure NetApp Files has extended support for SMB Transparent Failover via [SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) for user profile containers. See Azure NetApp Files [Windows Virtual Desktop solutions](azure-netapp-files-solution-architectures.md#windows-virtual-desktop) for additional information.
azure-relay Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/private-link-service.md
Title: Integrate Azure Relay with Azure Private Link Service description: Learn how to integrate Azure Relay with Azure Private Link Service Last updated 09/24/2020-++ # Integrate Azure Relay with Azure Private Link
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
For SQL Database limits, see [SQL Database resource limits for single databases]
## Azure Synapse Analytics limits
-For Azure Synapse Analytics limits, see [Azure Synapse resource limits](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-service-capacity-limits.md).
## Azure Files and Azure File Sync To learn more about the limits for Azure Files and File Sync, see [Azure Files scalability and performance targets](../../storage/files/storage-files-scale-targets.md).
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-overview.md
Previously updated : 04/28/2021 Last updated : 04/29/2021 # Use auto-failover groups to enable transparent and coordinated failover of multiple databases
When you set up a failover group between primary and secondary SQL Managed Insta
- The virtual networks used by the instances of SQL Managed Instance need to be connected through a [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or [Express Route](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md). When two virtual networks connect through an on-premises network, ensure there is no firewall rule blocking ports 5022, and 11000-11999. Global VNet Peering is supported with the limitation described in the note below. > [!IMPORTANT]
- > [On 9/22/2020 we announced global virtual network peering for newly created virtual clusters](https://azure.microsoft.com/en-us/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). That means that global virtual network peering is supported for SQL Managed Instances created in empty subnets after the announcement date, as well for all the subsequent managed instances created in those subnets. For all the other SQL Managed Instances peering support is limited to the networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details.
+ > [On 9/22/2020 support for global virtual network peering for newly created virtual clusters was announced](https://azure.microsoft.com/en-us/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). This means that global virtual network peering is supported for SQL managed instances created in empty subnets after the announcement date, as well as for all subsequent managed instances created in those subnets. For all other SQL managed instances, peering support is limited to networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details. To use global virtual network peering for SQL managed instances from virtual clusters created before the announcement date, consider configuring a [maintenance window](https://docs.microsoft.com/azure/azure-sql/database/maintenance-window) on the instances, as it moves the instances into new virtual clusters that support global virtual network peering.
- The two SQL Managed Instance VNets cannot have overlapping IP addresses. - You need to set up your Network Security Groups (NSG) such that ports 5022 and the range 11000~12000 are open inbound and outbound for connections from the subnet of the other managed instance. This is to allow replication traffic between the instances.
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window.md
Previously updated : 03/23/2021 Last updated : 04/28/2021 # Maintenance window (Preview)
Last updated 03/23/2021
The maintenance window feature allows you to configure maintenance schedule for [Azure SQL Database](sql-database-paas-overview.md) and [Azure SQL managed instance](../managed-instance/sql-managed-instance-paas-overview.md) resources making impactful maintenance events predictable and less disruptive for your workload. > [!Note]
-> Maintenance window feature does not protect from unplanned events, like hardware failures, that may cause short connection interruptions.
+> The maintenance window feature protects only against planned impact from upgrades or scheduled maintenance. It does not protect against all failover causes; exceptions that may cause short connection interruptions outside of a maintenance window include hardware failures, cluster load balancing, and database reconfigurations due to events like a change in database Service Level Objective.
## Overview
azure-sql Connectivity Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/connectivity-architecture-overview.md
Previously updated : 04/24/2021 Last updated : 04/29/2021 # Connectivity architecture for Azure SQL Managed Instance
When connections start inside SQL Managed Instance (as with backups and audit lo
To address customer security and manageability requirements, SQL Managed Instance is transitioning from manual to service-aided subnet configuration.
-With service-aided subnet configuration, the user is in full control of data (TDS) traffic, while SQL Managed Instance takes responsibility to ensure uninterrupted flow of management traffic in order to fulfill an SLA.
+With service-aided subnet configuration, the customer is in full control of data (TDS) traffic, while the SQL Managed Instance control plane takes responsibility for ensuring an uninterrupted flow of management traffic in order to fulfill the SLA.
Service-aided subnet configuration builds on top of the virtual network [subnet delegation](../../virtual-network/subnet-delegation-overview.md) feature to provide automatic network configuration management and enable service endpoints.
Service endpoints could be used to configure virtual network firewall rules on s
Deploy SQL Managed Instance in a dedicated subnet inside the virtual network. The subnet must have these characteristics: -- **Dedicated subnet:** The SQL Managed Instance subnet can't contain any other cloud service that's associated with it, and it can't be a gateway subnet. The subnet can't contain any resource but SQL Managed Instance, and you can't later add other types of resources in the subnet.
+- **Dedicated subnet:** The managed instance's subnet can't contain any other cloud service that's associated with it, and it can't be a gateway subnet; other managed instances in the same subnet are allowed. The subnet can't contain any resource but managed instances, and you can't later add other types of resources in the subnet.
- **Subnet delegation:** The SQL Managed Instance subnet needs to be delegated to the `Microsoft.Sql/managedInstances` resource provider. - **Network security group (NSG):** An NSG needs to be associated with the SQL Managed Instance subnet. You can use an NSG to control access to the SQL Managed Instance data endpoint by filtering traffic on port 1433 and ports 11000-11999 when SQL Managed Instance is configured for redirect connections. The service will automatically provision and keep current [rules](#mandatory-inbound-security-rules-with-service-aided-subnet-configuration) required to allow uninterrupted flow of management traffic. - **User defined route (UDR) table:** A UDR table needs to be associated with the SQL Managed Instance subnet. You can add entries to the route table to route traffic that has on-premises private IP ranges as a destination through the virtual network gateway or virtual network appliance (NVA). Service will automatically provision and keep current [entries](#mandatory-user-defined-routes-with-service-aided-subnet-configuration) required to allow uninterrupted flow of management traffic.
backup Backup Azure Sap Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-sap-hana-database.md
Backups run in accordance with the policy schedule. You can run a backup on-dema
1. In the vault menu, select **Backup items**. 2. In **Backup Items**, select the VM running the SAP HANA database, and then select **Backup now**.
-3. In **Backup Now**, choose the type of backup you want to perform. Then select **OK**. This backup will be retained according to the policy associated with this backup item.
+3. In **Backup Now**, choose the type of backup you want to perform. Then select **OK**. This backup will be retained for 45 days.
4. Monitor the portal notifications. You can monitor the job progress in the vault dashboard > **Backup Jobs** > **In progress**. Depending on the size of your database, creating the initial backup may take a while. By default, the retention of on-demand backups is 45 days.
backup Backup Sql Server Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-sql-server-azure-troubleshoot.md
If you'd like to trigger a restore on the healthy SQL instances, do the followin
| Error message | Possible causes | Recommended action | |||| | This SQL database does not support the requested backup type. | Occurs when the database recovery model doesn't allow the requested backup type. The error can happen in the following situations: <br/><ul><li>A database that's using a simple recovery model doesn't allow log backup.</li><li>Differential and log backups aren't allowed for a master database.</li></ul>For more detail, see the [SQL Server recovery models](/sql/relational-databases/backup-restore/recovery-models-sql-server) documentation. | If the log backup fails for the database in the simple recovery model, try one of these options:<ul><li>If the database is in simple recovery mode, disable log backups.</li><li>Use the [SQL Server documentation](/sql/relational-databases/backup-restore/view-or-change-the-recovery-model-of-a-database-sql-server) to change the database recovery model to full or bulk logged. </li><li> If you don't want to change the recovery model, and you have a standard policy to back up multiple databases that can't be changed, ignore the error. Your full and differential backups will work per schedule. The log backups will be skipped, which is expected in this case.</li></ul>If it's a master database and you've configured differential or log backup, use either of the following steps:<ul><li>Use the portal to change the backup policy schedule for the master database, to full.</li><li>If you have a standard policy to back up multiple databases that can't be changed, ignore the error. Your full backup will work per schedule. Differential or log backups won't happen, which is expected in this case.</li></ul> |
-| Operation canceled as a conflicting operation was already running on the same database. | See the [blog entry about backup and restore limitations](https://deep.data.blog/2008/12/30/concurrency-of-full-differential-and-log-backups-on-the-same-database/) that run concurrently.| [Use SQL Server Management Studio (SSMS) to monitor the backup jobs](manage-monitor-sql-database-backup.md). After the conflicting operation fails, restart the operation.|
-### UserErrorSQLPODoesNotExist
+### OperationCancelledBecauseConflictingOperationRunningUserError
| Error message | Possible causes | Recommended action | ||||
+| Operation cancelled as a conflicting operation was already running on the same database. | The following are the cases where this error code might surface:<br><ul><li>Adding or dropping files to a database while a backup is happening.</li><li>Shrinking files while database backups are happening.</li><li>A database backup by another backup product configured for the database is in progress and a backup job is triggered by Azure Backup extension.</li></ul>| Disable the other backup product to resolve the issue.
++
+### UserErrorFileManipulationIsNotAllowedDuringBackup
+
+| Error message | Possible causes | Recommended actions |
+||||
+| Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized. | You may get this error when an on-demand or scheduled backup job conflicts with a backup operation already running on the same database, triggered by the Azure Backup extension.<br> The following are the scenarios when this error code might display:<br><ul><li>A full backup is running on the database and another full backup is triggered.</li><li>A differential backup is running on the database and another differential backup is triggered.</li><li>A log backup is running on the database and another log backup is triggered.</li></ul>| After the conflicting operation fails, restart the operation. |
++
+### UserErrorSQLPODoesNotExist
+
+| Error message | Possible causes | Recommended actions |
+||||
| SQL database does not exist. | The database was either deleted or renamed. | Check if the database was accidentally deleted or renamed.<br/><br/> If the database was accidentally deleted, to continue backups, restore the database to the original location.<br/><br/> If you deleted the database and don't need future backups, then in the Recovery Services vault, select **Stop backup** with **Retain Backup Data** or **Delete Backup Data**. For more information, see [Manage and monitor backed-up SQL Server databases](manage-monitor-sql-database-backup.md). ### UserErrorSQLLSNValidationFailure
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| | Log chain is broken. | The database or the VM is backed up through another backup solution, which truncates the log chain.|<ul><li>Check if another backup solution or script is in use. If so, stop the other backup solution. </li><li>If the backup was an on-demand log backup, trigger a full backup to start a new log chain. For scheduled log backups, no action is needed because the Azure Backup service will automatically trigger a full backup to fix this issue.</li>| ### UserErrorOpeningSQLConnection
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| | Azure Backup is not able to connect to the SQL instance. | Azure Backup can't connect to the SQL Server instance. | Use the additional details on the Azure portal error menu to narrow down the root causes. Refer to [SQL backup troubleshooting](/sql/database-engine/configure-windows/troubleshoot-connecting-to-the-sql-server-database-engine) to fix the error.<br/><ul><li>If the default SQL settings don't allow remote connections, change the settings. See the following articles for information about changing the settings:<ul><li>[MSSQLSERVER_-1](/sql/relational-databases/errors-events/mssqlserver-1-database-engine-error)</li><li>[MSSQLSERVER_2](/sql/relational-databases/errors-events/mssqlserver-2-database-engine-error)</li><li>[MSSQLSERVER_53](/sql/relational-databases/errors-events/mssqlserver-53-database-engine-error)</li></ul></li></ul><ul><li>If there are login issues, use these links to fix them:<ul><li>[MSSQLSERVER_18456](/sql/relational-databases/errors-events/mssqlserver-18456-database-engine-error)</li><li>[MSSQLSERVER_18452](/sql/relational-databases/errors-events/mssqlserver-18452-database-engine-error)</li></ul></li></ul> | ### UserErrorParentFullBackupMissing
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| | First full backup is missing for this data source. | Full backup is missing for the database. Log and differential backups are parents to a full backup, so be sure to take full backups before triggering differential or log backups. | Trigger an on-demand full backup. | ### UserErrorBackupFailedAsTransactionLogIsFull
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| | Cannot take backup as transaction log for the data source is full. | The database transactional log space is full. | To fix this issue, refer to the [SQL Server documentation](/sql/relational-databases/errors-events/mssqlserver-9002-database-engine-error). | ### UserErrorCannotRestoreExistingDBWithoutForceOverwrite
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| | Database with same name already exists at the target location | The target restore destination already has a database with the same name. | <ul><li>Change the target database name.</li><li>Or, use the force overwrite option on the restore page.</li> | ### UserErrorRestoreFailedDatabaseCannotBeOfflined
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| | Restore failed as the database could not be brought offline. | While you're doing a restore, the target database needs to be brought offline. Azure Backup can't bring this data offline. | Use the additional details on the Azure portal error menu to narrow down the root causes. For more information, see the [SQL Server documentation](/sql/relational-databases/backup-restore/restore-a-database-backup-using-ssms). | ### WlExtGenericIOFaultUserError
-|Error Message |Possible causes |Recommended Action |
+|Error Message |Possible causes |Recommended Actions |
|||| |An input/output error occurred during the operation. Please check for the common IO errors on the virtual machine. | Access permissions or space constraints on the target. | Check for the common IO errors on the virtual machine. Ensure that the target drive / network share on the machine: <li> has read/write permission for the account NT AUTHORITY\SYSTEM on the machine. <li> has enough space for the operation to complete successfully.<br> For more information, see [Restore as files](restore-sql-database-azure-vm.md#restore-as-files). | ### UserErrorCannotFindServerCertificateWithThumbprint
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| | Cannot find the server certificate with thumbprint on the target. | The master database on the destination instance doesn't have a valid encryption thumbprint. | Import the valid certificate thumbprint used on the source instance, to the target instance. | ### UserErrorRestoreNotPossibleBecauseLogBackupContainsBulkLoggedChanges
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| | The log backup used for recovery contains bulk-logged changes. It cannot be used to stop at an arbitrary point in time according to the SQL guidelines. | When a database is in bulk-logged recovery mode, the data between a bulk-logged transaction and the next log transaction can't be recovered. | Choose a different point in time for recovery. [Learn more](/sql/relational-databases/backup-restore/recovery-models-sql-server). ### FabricSvcBackupPreferenceCheckFailedUserError
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| | Backup preference for SQL Always On Availability Group cannot be met as some nodes of the Availability Group are not registered. | Nodes required to perform backups aren't registered or are unreachable. | <ul><li>Ensure that all the nodes required to perform backups of this database are registered and healthy, and then retry the operation.</li><li>Change the backup preference for the SQL Server Always On availability group.</li></ul> | ### VMNotInRunningStateUserError
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| | SQL Server VM is shut down and not accessible to the Azure Backup service. | The VM is shut down. | Ensure that the SQL Server instance is running. | ### GuestAgentStatusUnavailableUserError
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| | Azure Backup service uses Azure VM guest agent for doing backup but guest agent is not available on the target server. | The guest agent isn't enabled or is unhealthy. | [Install the VM guest agent](../virtual-machines/extensions/agent-windows.md) manually. | ### AutoProtectionCancelledOrNotValid
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| | Auto-protection Intent was either removed or is no more valid. | When you enable auto-protection on a SQL Server instance, **Configure Backup** jobs run for all the databases in that instance. If you disable auto-protection while the jobs are running, then the **In-Progress** jobs are canceled with this error code. | Enable auto-protection once again to help protect all the remaining databases. | ### CloudDosAbsoluteLimitReached
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| Operation is blocked as you have reached the limit on number of operations permitted in 24 hours. | When you've reached the maximum permissible limit for an operation in a span of 24 hours, this error appears. <br> For example: If you've hit the limit for the number of configure backup jobs that can be triggered per day, and you try to configure backup on a new item, you'll see this error. | Typically, retrying the operation after 24 hours resolves this issue. However, if the issue persists, you can contact Microsoft support for help. ### CloudDosAbsoluteLimitReachedWithRetry
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| Operation is blocked as the vault has reached its maximum limit for such operations permitted in a span of 24 hours. | When you've reached the maximum permissible limit for an operation in a span of 24 hours, this error appears. This error usually appears when there are at-scale operations such as modify policy or auto-protection. Unlike the case of CloudDosAbsoluteLimitReached, there isn't much you can do to resolve this state. In fact, Azure Backup service will retry the operations internally for all the items in question.<br> For example: If you have a large number of datasources protected with a policy and you try to modify that policy, it will trigger configure protection jobs for each of the protected items and sometimes may hit the maximum limit permissible for such operations per day.| Azure Backup service will automatically retry this operation after 24 hours. ### WorkloadExtensionNotReachable
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| AzureBackup workload extension operation failed. | The VM is shut down, or the VM can't contact the Azure Backup service because of internet connectivity issues.| <li> Ensure the VM is up and running and has internet connectivity.<li> [Re-register extension on the SQL Server VM](manage-monitor-sql-database-backup.md#re-register-extension-on-the-sql-server-vm). ### UserErrorVMInternetConnectivityIssue
-| Error message | Possible causes | Recommended action |
+| Error message | Possible causes | Recommended actions |
|||| The VM is not able to contact Azure Backup service due to internet connectivity issues. | The VM needs outbound connectivity to Azure Backup Service, Azure Storage, or Azure Active Directory services.| <li> If you use an NSG to restrict connectivity, use the *AzureBackup* service tag to allow outbound access to the Azure Backup service, and similarly the Azure AD (*AzureActiveDirectory*) and Azure Storage (*Storage*) service tags. Follow these [steps](./backup-sql-server-database-azure-vms.md#nsg-tags) to grant access. <li> Ensure DNS is resolving Azure endpoints. <li> Check if the VM is behind a load balancer blocking internet access. Assigning a public IP to the VMs allows discovery to work. <li> Verify that no firewall, antivirus, or proxy is blocking calls to the three target services above.
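As a sketch, the NSG service tags mentioned in the first bullet can be allowed with outbound rules like the following (resource names, rule name, and priority are placeholders; repeat the command for each tag):

```azurecli-interactive
az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG \
    --name AllowAzureBackupOutbound --priority 200 --direction Outbound \
    --access Allow --protocol Tcp \
    --destination-address-prefixes AzureBackup --destination-port-ranges 443
# Repeat with AzureActiveDirectory and Storage as --destination-address-prefixes.
```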
backup Sap Hana Db Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sap-hana-db-restore.md
To restore the backup data as files instead of a database, choose **Restore as F
![Select restore point](media/sap-hana-db-restore/select-restore-point.png) 1. All the backup files associated with the selected restore point are dumped into the destination path.
-1. Based on the type of restore point chosen (**Point in time** or **Full & Differential**), you'll see one or more folders created in the destination path. One of the folders named `Data_<date and time of restore>` contains the full and differential backups, and the other folder named `Log` contains the log backups.
+1. Based on the type of restore point chosen (**Point in time** or **Full & Differential**), you'll see one or more folders created in the destination path. One of the folders named `Data_<date and time of restore>` contains the full backups, and the other folder named `Log` contains the log backups and other backups (such as differential and incremental).
1. Move these restored files to the SAP HANA server where you want to restore them as a database. 1. Then follow these steps: 1. Set permissions on the folder / directory where the backup files are stored using the following command:
To restore the backup data as files instead of a database, choose **Restore as F
In the command above: * `<DataFileDir>` - the folder that contains the full backups
- * `<LogFilesDir>` - the folder that contains the log backups
+ * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any)
 * `<PathToPlaceCatalogFile>` - the folder where the generated catalog file must be placed 1. Restore using the newly generated catalog file through HANA Studio or run the HDBSQL restore query with this newly generated catalog. HDBSQL queries are listed below:
To restore the backup data as files instead of a database, choose **Restore as F
* `<DatabaseName@HostName>` - Name of the database whose backup is used for restore and the **host** / SAP HANA server name on which this database resides. The `USING SOURCE <DatabaseName@HostName>` option specifies that the data backup (used for restore) is of a database with a different SID or name than the target SAP HANA machine. So it doesn't need be specified for restores done on the same HANA server from where the backup is taken. * `<PathToGeneratedCatalogInStep3>` - Path to the catalog file generated in **Step C** * `<DataFileDir>` - the folder that contains the full backups
- * `<LogFilesDir>` - the folder that contains the log backups
+ * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any)
* `<BackupIdFromJsonFile>` - the **BackupId** extracted in **Step C** * To restore to a particular full or differential backup:
To restore the backup data as files instead of a database, choose **Restore as F
* `<DatabaseName@HostName>` - the name of the database whose backup is used for restore and the **host** / SAP HANA server name on which this database resides. The `USING SOURCE <DatabaseName@HostName>` option specifies that the data backup (used for restore) is of a database with a different SID or name than the target SAP HANA machine. So it need not be specified for restores done on the same HANA server from where the backup is taken. * `<PathToGeneratedCatalogInStep3>` - the path to the catalog file generated in **Step C** * `<DataFileDir>` - the folder that contains the full backups
- * `<LogFilesDir>` - the folder that contains the log backups
+ * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any)
* `<BackupIdFromJsonFile>` - the **BackupId** extracted in **Step C** ### Restore to a specific point in time
backup Tutorial Sap Hana Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-sap-hana-backup-cli.md
Once the script is run, the SAP HANA instance can be registered with the Recover
```azurecli-interactive az backup container register --resource-group saphanaResourceGroup \ --vault-name saphanaVault \
- --location westus2 \
--workload-type SAPHANA \ --backup-management-type AzureWorkload \ --resource-id VMResourceId
To protect and configure backup on a database, one at a time, we use the [az bac
```azurecli-interactive az backup protection enable-for-azurewl --resource-group saphanaResourceGroup \
+ --vault-name saphanaVault \
--policy-name saphanaPolicy \ --protectable-item-name "saphanadatabase;hxe;hxe" \ --protectable-item-type SAPHANADatabase \
backup Tutorial Sap Hana Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-sap-hana-restore-cli.md
Typically, a network share path, or path of a mounted Azure file share when spec
>[!NOTE] >To restore the database backup files on an Azure file share mounted on the target registered VM, make sure that the root account has read/write permissions on the Azure file share.
-Based on the type of restore point chosen (**Point in time** or **Full & Differential**), you'll see one or more folders created in the destination path. One of the folders named `Data_<date and time of restore>` contains the full and differential backups, and the other folder named `Log` contains the log backups.
+Based on the type of restore point chosen (**Point in time** or **Full & Differential**), you'll see one or more folders created in the destination path. One of the folders named `Data_<date and time of restore>` contains the full backups, and the other folder named `Log` contains the log backups and other backups (such as differential and incremental).
Move these restored files to the SAP HANA server where you want to restore them as a database. Then follow these steps to restore the database:
Move these restored files to the SAP HANA server where you want to restore them
In the command above: * `<DataFileDir>` - the folder that contains the full backups
- * `<LogFilesDir>` - the folder that contains the log backups
+ * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any)
 * `<PathToPlaceCatalogFile>` - the folder where the generated catalog file must be placed 1. Restore using the newly generated catalog file through HANA Studio or run the HDBSQL restore query with this newly generated catalog. HDBSQL queries are listed below:
Move these restored files to the SAP HANA server where you want to restore them
* `<DatabaseName@HostName>` - Name of the database whose backup is used for restore and the **host** / SAP HANA server name on which this database resides. The `USING SOURCE <DatabaseName@HostName>` option specifies that the data backup (used for restore) is of a database with a different SID or name than the target SAP HANA machine. So it doesn't need to be specified for restores done on the same HANA server from where the backup is taken. * `<PathToGeneratedCatalogInStep3>` - Path to the catalog file generated in **Step 3** * `<DataFileDir>` - the folder that contains the full backups
- * `<LogFilesDir>` - the folder that contains the log backups
+ * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any)
* `<BackupIdFromJsonFile>` - the **BackupId** extracted in **Step 3** * To restore to a particular full or differential backup:
Move these restored files to the SAP HANA server where you want to restore them
* `<DatabaseName@HostName>` - the name of the database whose backup is used for restore and the **host** / SAP HANA server name on which this database resides. The `USING SOURCE <DatabaseName@HostName>` option specifies that the data backup (used for restore) is of a database with a different SID or name than the target SAP HANA machine. So it need not be specified for restores done on the same HANA server from where the backup is taken. * `<PathToGeneratedCatalogInStep3>` - the path to the catalog file generated in **Step 3** * `<DataFileDir>` - the folder that contains the full backups
- * `<LogFilesDir>` - the folder that contains the log backups
+ * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any)
* `<BackupIdFromJsonFile>` - the **BackupId** extracted in **Step 3** ## Next steps
blockchain Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/service/migration-guide.md
- Title: Azure Blockchain Service retirement notification and guidance
-description: Migrate Azure Blockchain Service to a managed or self-managed blockchain offering
Previously updated : 04/28/2021--
-#Customer intent: As a network operator, I want to migrate Azure Blockchain Service to an alternative offering so that I can use blockchain after Azure Blockchain Service retirement.
--
-# Migrate Azure Blockchain Service
-
-You can migrate ledger data from Azure Blockchain Service to an alternate offering. The Azure Blockchain Service public preview is being retired, and you are advised to evaluate the following alternatives based on whether your deployment is in production or in evaluation.
-
-## Evaluate alternatives
-
-The first step when planning a migration is to evaluate alternative offerings. The following guidance is based on your development phase.
-
-### Production or pilot phase
-
-If you have already deployed and developed a blockchain solution that is in the production or pilot phase, consider the following alternatives.
-
-#### Quorum Blockchain Service
-
-Quorum Blockchain Service is a managed offering by ConsenSys on Azure that supports Quorum as ledger technology.
-- **Managed offering** - Quorum Blockchain Service has no extra management overhead compared to Azure Blockchain Service.
-- **Ledger technology** - Based on ConsenSys Quorum, which is an enhanced version of the GoQuorum ledger technology used in Azure Blockchain Service. No new learning is required. For more information, see the [Consensys Quorum FAQ](https://consensys.net/quorum/faq).
-- **Continuity** - You can migrate your existing data on to Quorum Blockchain Service by ConsenSys. For more information, see [Migrate data from Azure Blockchain Service](#migrate-data-from-azure-blockchain-service).
-
-For more information, see [Quorum Blockchain Service](https://consensys.net/QBS).
-
-#### Azure VM-based deployment
-
-There are several blockchain resource management templates you can use to deploy blockchain on IaaS VMs.
-- **Ledger technology** - You can continue to use Quorum ledger technology including the new ConsenSys Quorum.
-- **Self-management** - Once deployed, you manage the infrastructure and blockchain stack.
-
-### New deployment or evaluation phase
-
-If you are starting to develop a new solution or are in an evaluation phase, consider the following alternatives based on your scenario requirements.
-- [Quorum template from Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/consensys.quorum-dev-quickstart?tab=Overview)
-- [Besu template from Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/consensys.hyperledger-besu-quickstart?tab=Overview)
-
-## Migrate data from Azure Blockchain Service
-
-Based on your current development state, you can either opt to use existing ledger data on Azure Blockchain Service or start a new network and use the solution of your choice. We recommend creating a new consortium based on a solution of your choice in all scenarios where you do not need or intend to use existing ledger data on Azure Blockchain Service.
-
-### Open support case
-
-Open a Microsoft Support ticket to pause the consortium and export your blockchain data.
-
-1. Use the Azure portal to open a support ticket. In *Problem description*, enter the following details:
-
- ![Support ticket problem description form in the Azure portal](./media/migration-guide/problem-description.png)
-
- | Field | Response |
- |-| |
- | Issue type | Technical |
- | Service | Azure Blockchain Service - Preview |
- | Summary | Request data for migration |
- | Problem type | other |
-
-1. In *Additional details*, include the following details:
-
- ![Support ticket additional details form in the Azure portal](./media/migration-guide/additional-details.png)
-
- - Subscription ID or Azure Resource Manager resource ID
- - Tenant
- - Consortium name
- - Region
- - Member name
- - Preferred Datetime for initiating migration
-
-If your consortium has multiple members, each member is required to open a separate support ticket for their respective member data.
-
-### Pause consortium
-
-You are required to coordinate the data export with the members of the consortium, since the consortium will be paused during the export and transactions during this time will fail.
-
-Azure Blockchain Service team pauses the consortium, exports a snapshot of data, and makes the data available through SAS URL for download in an encrypted format. The consortium is resumed after taking the snapshot.
-
-> [!IMPORTANT]
-> You should stop all applications initiating new
-> blockchain transactions onto the network. Active applications may lead to data loss or your original and migrated networks being out of sync.
-
-### Download data
-
-Download the data using the Microsoft Support provided SAS URL link.
-
-> [!IMPORTANT]
-> You are required to download your data within seven days.
-
-Decrypt the data using the API access key. You can [get the key from the Azure portal](configure-transaction-nodes.md#access-keys) or [through the REST API](/rest/api/blockchain/2019-06-01-preview/blockchainmembers/listapikeys).
-
-> [!CAUTION]
-> Only the default transaction node API access key 1 is used to encrypt the data of all the nodes of that member.
->
-> Do not reset the API access key during the migration.
-
-You can use the data with either ConsenSys Quorum Blockchain service or your IaaS VM-based deployment.
-
-For ConsenSys Quorum Blockchain Service migration, contact ConsenSys at [qbsmigration@consensys.net](mailto:qbsmigration@consensys.net).
-
-For using the data with your IaaS VM-based deployment, follow the steps in the [Azure VM based Quorum guidance](#azure-vm-based-quorum-guidance) section of this article.
-
-### Delete resources
-
-Once you have completed your data copy, it is recommended that you delete the Azure Blockchain member resources. You will continue to get billed while these resources exist.
-
-## Azure VM-based Quorum guidance
-
-Use the following steps to create transaction nodes and validator nodes.
-
-### Transaction node
-
-A transaction node has two components. Tessera is used for the private transactions and Geth is used for the Quorum application. Validator nodes require only the Geth component.
-
-#### Tessera
-
-1. Install Java 11. For example, `apt install default-jre`.
-1. Update paths in `tessera-config.json`. Change all references of `/working-dir/**` to `/opt/blockchain/data/working-dir/**`.
-1. Update the IP address of other transaction nodes as per new IP address. HTTPS won't work since it is not enabled in the Tessera configuration. For information on how to configure TLS, see the [Tessera configure TLS](https://docs.tessera.consensys.net/en/stable/HowTo/Configure/TLS/) article.
-1. Update NSG rules to allow inbound connections to port 9000.
-1. Run Tessera using the following command:
-
- ```bash
- java -Xms512M -Xmx1731M -Dlogback.configurationFile=/tessera/logback-tessera.xml -jar tessera.jar -configfile /opt/blockchain/data/working-dir/tessera-config.json > tessera.log 2>&1 &
- ```
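The path update in step 2 can be scripted rather than edited by hand. A minimal sketch using `sed`, demonstrated on a throwaway sample file; on the real node, run the same `sed` line against the actual `tessera-config.json`. Anchoring the pattern on the opening quote avoids re-prefixing paths that were already updated:

```bash
# Demo on a sample config: rewrite every JSON string starting with
# /working-dir/ to the new absolute location, keeping a .bak backup.
printf '{"path":"/working-dir/keys/tm.key"}\n' > tessera-config.json
sed -i.bak 's|"/working-dir/|"/opt/blockchain/data/working-dir/|g' tessera-config.json
cat tessera-config.json   # now references /opt/blockchain/data/working-dir/keys/tm.key
```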
-
-#### Geth
-
-1. Update the IP addresses in the enode URLs in `/opt/blockchain/data/working-dir/dd/static-nodes.json`. A public IP address is allowed.
-1. Make the same IP address changes under StaticNodes key in `/geth/config.toml`.
-1. Update NSG rules to allow inbound connections to port 30303.
-1. Run Geth using the following commands:
-
- ```bash
- export NETWORK_ID='' # Get network ID from metadata. The network ID is the same for consortium.
-
- PRIVATE_CONFIG=tm.ipc geth --config /geth/config.toml --datadir /opt/blockchain/data/working-dir/dd --networkid $NETWORK_ID --istanbul.blockperiod 5 --nodiscover --nousb --allow-insecure-unlock --verbosity 3 --txpool.globalslots 80000 --txpool.globalqueue 80000 --txpool.accountqueue 50000 --txpool.accountslots 50000 --targetgaslimit 700000000 --miner.gaslimit 800000000 --syncmode full --rpc --rpcaddr 0.0.0.0 --rpcport 3100 --rpccorsdomain '*' --rpcapi admin,db,eth,debug,net,shh,txpool,personal,web3,quorum,istanbul --ws --wsaddr 0.0.0.0 --wsport 3000 --wsorigins '*' --wsapi admin,db,eth,debug,net,shh,txpool,personal,web3,quorum,istanbul
- ```
-
-### Validator Node
-
-Validator node steps are similar to the transaction node steps, except that the Geth startup command includes the additional flag `--mine`. Tessera isn't started on a validator node. To run Geth without a paired Tessera, pass `PRIVATE_CONFIG=ignore` in the Geth command. Run Geth using the following commands:
-
-```bash
-export NETWORK_ID=`jq '.APP_SETTINGS | fromjson | ."network-id"' env.json`
-
-PRIVATE_CONFIG=ignore geth --config /geth/config.toml --datadir /opt/blockchain/data/working-dir/dd --networkid $NETWORK_ID --istanbul.blockperiod 5 --nodiscover --nousb --allow-insecure-unlock --verbosity 3 --txpool.globalslots 80000 --txpool.globalqueue 80000 --txpool.accountqueue 50000 --txpool.accountslots 50000 --targetgaslimit 700000000 --miner.gaslimit 800000000 --syncmode full --rpc --rpcaddr 0.0.0.0 --rpcport 3100 --rpccorsdomain '*' --rpcapi admin,db,eth,debug,net,shh,txpool,personal,web3,quorum,istanbul --ws --wsaddr 0.0.0.0 --wsport 3000 --wsorigins '*' --wsapi admin,db,eth,debug,net,shh,txpool,personal,web3,quorum,istanbul --mine
-```
-
-## Upgrading Quorum
-
-Azure Blockchain Service may be running one of the following versions of Quorum. You can keep the same Quorum version or follow the steps below to use the latest version of ConsenSys Quorum.
-
-### Upgrade Quorum version 2.6.0 or 2.7.0 to ConsenSys 21.1.0
-
-Upgrading from Quorum version 2.6 or 2.7 is straightforward. Download and update using the following links.
-1. Download [ConsenSys Quorum and related binaries v21.1.0](https://github.com/ConsenSys/quorum/releases/tag/v21.1.0).
-1. Download the latest version of Tessera [tessera-app-21.1.0-app.jar](https://github.com/ConsenSys/tessera/releases/tag/tessera-21.1.0).
-
-### Upgrade Quorum version 2.5.0 to ConsenSys 21.1.0
-
-1. Download [ConsenSys Quorum and related binaries v21.1.0](https://github.com/ConsenSys/quorum/releases/tag/v21.1.0).
-1. Download the latest version of Tessera [tessera-app-21.1.0-app.jar](https://github.com/ConsenSys/tessera/releases/tag/tessera-21.1.0).
-
-For version 2.5.0, there are some minor genesis file changes. Make the following changes in the genesis file:
-
-1. The value of `byzantiumBlock` was set to 1, but it can't be greater than `constantinopleBlock`, which is 0. Set `byzantiumBlock` to 0.
-1. Set `petersburgBlock` and `istanbulBlock` to a future block. The value should be the same across all nodes.
-1. This step is optional. `ceil2Nby3Block` was incorrectly placed in the Azure Blockchain Service Quorum 2.5.0 genesis file. It needs to be inside the `istanbul` config and set to a future block. The value should be the same across all nodes.
-1. Run Geth to reinitialize the genesis block using the following command:
-
- ```bash
-    geth --datadir "Data Directory Path" init "genesis file path"
- ```
-
-1. Run Geth.
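Taken together, the relevant part of the corrected genesis file looks roughly like this. The block number `100000` is illustrative (pick the same future block on every node), and the `epoch` and `policy` values are common defaults, not prescriptions:

```json
"config": {
  "byzantiumBlock": 0,
  "constantinopleBlock": 0,
  "petersburgBlock": 100000,
  "istanbulBlock": 100000,
  "istanbul": {
    "epoch": 30000,
    "policy": 0,
    "ceil2Nby3Block": 100000
  }
}
```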
-
-## Exported data reference
-
-This section describes the metadata and folder structure to help you import the data into your IaaS VM deployment.
-
-### Metadata info
-
-| Name | Sample | Description |
-|--|--|--|
-| consortium_name | \<ConsortiumName\> | Consortium name (unique across Azure Blockchain Service). |
-| Consortium_Member_Count || Number of members in the consortium |
-| member_name | \<memberName\> | Blockchain member name (unique across Azure Blockchain Service). |
-| node_name | transaction-node | Node name (each member has multiple nodes). |
-| network_id | 543 | Geth network ID. |
-| is_miner | False | `is_miner == true` (validator node), `is_miner == false` (transaction node) |
-| quorum_version | 2.7.0 | Version of Quorum |
-| tessera_version | 0.10.5 | Tessera version |
-| java_version | java-11-openjdk-amd64 | Java version Tessera uses |
-| CurrentBlockNumber | | Current block number for the blockchain network |
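The validator startup command above reads the network ID from this metadata with `jq`; the same pattern works as a quick check. In this sketch the file written below stands in for the exported `env.json` (its exact layout is an assumption based on the startup command):

```shell
# Illustrative only: extract the Geth network ID from exported metadata,
# assuming the env.json layout used by the startup scripts.
cat > env.json <<'EOF'
{"APP_SETTINGS":"{\"network-id\":543}"}
EOF
jq -r '.APP_SETTINGS | fromjson | ."network-id"' env.json
# prints 543
```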
-
-## Migrated data folder structure
-
-At the top level, there are folders that correspond to each of the nodes of the members.
-
-- **Standard SKU** - Two validator nodes (validator-node-0 and validator-node-1)
-- **Basic SKU** - One validator node (validator-node-0)
-- **Transaction Node** - Default transaction node named transaction-node. Other transaction node folders are named after the transaction node name.
-
-### Node level folder structure
-
-Each node level folder contains a zip file that is encrypted using the encryption key. For details on obtaining the encryption key, see the [Download data](#download-data) section of this article.
-
-| Directory/File | Description |
-|-|--|
-| /config/config.toml | Geth parameters. Command line parameters take precedence |
-| /config/genesis.json | Genesis file |
-| /config/logback-tessera.xml | Logback configuration for Tessera |
-| /config/static-nodes.json | Static nodes. Bootstrap nodes are removed and auto-discovery is disabled. |
-| /config/tessera-config.json | Tessera configuration |
-| /data/c/ | Tessera DB |
-| /data/dd/ | Geth data directory |
-| /env/env | Metadata |
-| /keys/ | Tessera keys |
-| /scripts/ | Startup scripts (provided for reference only) |
-
-## Frequently asked questions
-
-### What does service retirement mean for existing customers?
-
-Existing Azure Blockchain Service deployments can't continue beyond the retirement of the service. Based on your requirements, start evaluating the alternatives suggested in this article before retirement.
-
-### What happens to existing deployments after the announcement of retirement?
-
-Existing deployments are supported for 120 days from the day of the retirement announcement. Evaluate the suggested alternatives, migrate your data to the alternative offering, run your workloads on it, and move off your Azure Blockchain Service deployment.
-
-### How long will the existing deployments be supported on Azure Blockchain Service?
-
-Existing deployments are supported for 120 days from the day of retirement announcement.
-
-### Will I be allowed to create new Azure Blockchain members while in retirement phase?
-
-During the retirement phase, creating new members or new deployments isn't supported.
cdn Cdn Msft Http Debug Headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-msft-http-debug-headers.md
X-Cache: TCP_HIT | This header is returned when the content is served from the C
X-Cache: TCP_REMOTE_HIT | This header is returned when the content is served from the CDN regional cache (Origin shield layer) X-Cache: TCP_MISS | This header is returned when there is a cache miss, and the content is served from the Origin. -
+For additional information on HTTP headers supported in Azure CDN, see [Front Door to backend](/azure/frontdoor/front-door-http-headers-protocol#front-door-to-backend).
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployme
- Review [frequently asked questions](faq.md) for Cloud Services (extended support). - Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services-extended-support Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/extensions.md
For more information, see [Apply the Windows Azure diagnostics extension in Clou
## Anti Malware Extension An Azure application or service can enable and configure Microsoft Antimalware for Azure Cloud Services using PowerShell cmdlets. Note that Microsoft Antimalware is installed in a disabled state in the Cloud Services platform running Windows Server 2012 R2 and older which requires an action by an Azure application to enable it. For Windows Server 2016 and above, Windows Defender is enabled by default, hence these cmdlets can be used for configuring Antimalware.
-For more information, see [Add Microsoft Antimalware to Azure Cloud Service using Extended Support(CS-ES)](https://docs.microsoft.com/azure/security/fundamentals/antimalware-code-samples#add-microsoft-antimalware-to-azure-cloud-service-using-extended-support)
+For more information, see [Add Microsoft Antimalware to Azure Cloud Service using Extended Support(CS-ES)](../security/fundamentals/antimalware-code-samples.md#add-microsoft-antimalware-to-azure-cloud-service-using-extended-support)
-To know more about Azure Antimalware, please visit [here](https://docs.microsoft.com/azure/security/fundamentals/antimalware)
+To know more about Azure Antimalware, please visit [here](../security/fundamentals/antimalware.md)
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support). - Review [frequently asked questions](faq.md) for Cloud Services (extended support).-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
+- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cloud-services-extended-support Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/faq.md
Cloud Services (extended support) deployment only supports the Stopped- Allocate
Cloud Services (extended support) deployments cannot scale across multiple clusters, availability zones and regions. ### How can I get the deployment ID for my Cloud Service (extended support)
-Deployment ID aka Private ID can be accessed using the [CloudServiceInstanceView](https://docs.microsoft.com/rest/api/compute/cloudservices/getinstanceview#cloudserviceinstanceview) API. It is also available on the Azure portal under the Role and Instances blade of the Cloud Service (extended support)
+Deployment ID aka Private ID can be accessed using the [CloudServiceInstanceView](/rest/api/compute/cloudservices/getinstanceview#cloudserviceinstanceview) API. It is also available on the Azure portal under the Role and Instances blade of the Cloud Service (extended support)
### Are there any pricing differences between Cloud Services (classic) and Cloud Services (extended support)? Cloud Services (extended support) uses Azure Key Vault and Basic (ARM) Public IP addresses. Customers requiring certificates need to use Azure Key Vault for certificate management ([learn more](https://azure.microsoft.com/pricing/details/key-vault/) about Azure Key Vault pricing.)  Each Public IP address for Cloud Services (extended support) is charged separately ([learn more](https://azure.microsoft.com/pricing/details/ip-addresses/) about Public IP Address pricing.)
Cloud Services (extended support) has adopted the same process as other compute
No. Key Vault is a regional resource and customers need one Key Vault in each region. However, one Key Vault can be used for all deployments within a given region. ## Next steps
-To start using Cloud Services (extended support), see [Deploy a Cloud Service (extended support) using PowerShell](deploy-powershell.md)
+To start using Cloud Services (extended support), see [Deploy a Cloud Service (extended support) using PowerShell](deploy-powershell.md)
cloud-services-extended-support In Place Migration Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-common-errors.md
Common migration errors and mitigation steps.
| Error message | Details | ||| | The resource type could not be found in the namespace `Microsoft.Compute` for api version '2020-10-01-preview'. | [Register the subscription](in-place-migration-overview.md#setup-access-for-migration) for CloudServices feature flag to access public preview. |
-| The server encountered an internal error. Retry the request. | Retry the operation, use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) or contact support. |
-| The server encountered an unexpected error while trying to allocate network resources for the cloud service. Retry the request. | Retry the operation, use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) or contact support. |
+| The server encountered an internal error. Retry the request. | Retry the operation, use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or contact support. |
+| The server encountered an unexpected error while trying to allocate network resources for the cloud service. Retry the request. | Retry the operation, use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or contact support. |
| Deployment deployment-name in cloud service cloud-service-name must be within a virtual network to be migrated. | Deployment is not located in a virtual network. Refer [this](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network) document for more details. | | Migration of deployment deployment-name in cloud service cloud-service-name is not supported because it is in region region-name. Allowed regions: [list of available regions]. | Region is not yet supported for migration. | | The Deployment deployment-name in cloud service cloud-service-name cannot be migrated because there are no subnets associated with the role(s) role-name. Associate all roles with a subnet, then retry the migration of the cloud service. | Update the cloud service (classic) deployment by placing it in a subnet before migration. |
Common migration errors and mitigation steps.
| Default VNet destination option not implemented. | "Default" value is not supported for DestinationVirtualNetwork property in the REST request body. | | The deployment {0} cannot be migrated because the CSPKG is not available. | Upgrade the deployment and try again. | | The subnet with ID '{0}' is in a different location than deployment '{1}' in hosted service '{2}'. The location for the subnet is '{3}' and the location for the hosted service is '{4}'. Specify a subnet in the same location as the deployment. | Update the cloud service to have both subnet and cloud service in the same location before migration. |
-| Migration of Deployment {0} in HostedService {1} is in the process of being aborted and cannot be changed until it completes successfully. | Wait for abort to complete or retry abort. Use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) or Contact support otherwise. |
+| Migration of Deployment {0} in HostedService {1} is in the process of being aborted and cannot be changed until it completes successfully. | Wait for abort to complete or retry abort. Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support otherwise. |
| Deployment {0} in HostedService {1} has not been prepared for Migration. | Run prepare on the cloud service before running the commit operation. |
-| UnknownExceptionInEndExecute: Contract.Assert failed: rgName is null or empty: Exception received in EndExecute that is not an RdfeException. | Use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) or Contact support. |
-| UnknownExceptionInEndExecute: A task was canceled: Exception received in EndExecute that is not an RdfeException. | Use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) or Contact support. |
-| XrpVirtualNetworkMigrationError: Virtual network migration failure. | Use [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) or Contact support. |
+| UnknownExceptionInEndExecute: Contract.Assert failed: rgName is null or empty: Exception received in EndExecute that is not an RdfeException. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. |
+| UnknownExceptionInEndExecute: A task was canceled: Exception received in EndExecute that is not an RdfeException. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. |
+| XrpVirtualNetworkMigrationError: Virtual network migration failure. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. |
| Deployment {0} in HostedService {1} belongs to Virtual Network {2}. Migrate Virtual Network {2} to migrate this HostedService {1}. | Refer to [Virtual Network migration](in-place-migration-technical-details.md#virtual-network-migration). | | The current quota for Resource name in Azure Resource Manager is insufficient to complete migration. Current quota is {0}, additional needed is {1}. File a support request to raise the quota and retry migration once the quota has been raised. | Follow appropriate channels to request quota increase: <br>[Quota increase for networking resources](../azure-portal/supportability/networking-quota-requests.md) <br>[Quota increase for compute resources](../azure-portal/supportability/per-vm-quota-requests.md) | ## Next steps
-For more information on the requirements of migration, see [Technical details of migrating to Azure Cloud Services (extended support)](in-place-migration-technical-details.md)
+For more information on the requirements of migration, see [Technical details of migrating to Azure Cloud Services (extended support)](in-place-migration-technical-details.md)
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-overview.md
This article provides an overview on the platform-supported migration tool and how to use it to migrate [Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) to [Azure Cloud Services (extended support)](overview.md).
-The migration tool utilizes the same APIs and has the same experience as the [Virtual Machine (classic) migration](https://docs.microsoft.com/azure/virtual-machines/migration-classic-resource-manager-overview).
+The migration tool utilizes the same APIs and has the same experience as the [Virtual Machine (classic) migration](../virtual-machines/migration-classic-resource-manager-overview.md).
> [!IMPORTANT] > Migrating from Cloud Services (classic) to Cloud Services (extended support) using the migration tool is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
The migration tool utilizes the same APIs and has the same experience as the [Vi
Refer to the following resources if you need assistance with your migration: -- [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html): Microsoft and community support for migration.
+- [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html): Microsoft and community support for migration.
- [Azure Migration Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%22pesId%22:%22e79dcabe-5f77-3326-2112-74487e1e5f78%22,%22supportTopicId%22:%22fca528d2-48bd-7c9f-5806-ce5d5b1d226f%22%7D): Dedicated support team for technical assistance during migration. Customers without technical support can use [free support capability](https://aka.ms/cs-migration-errors) provided specifically for this migration. - If your company/organization has partnered with Microsoft or works with Microsoft representatives such as cloud solution architects or technical account managers, reach out to them for more resources for migration. - Complete [this survey](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR--AgudUMwJKgRGMO84rHQtUQzZYNklWUk4xOTFXVFBPOFdGOE85RUIwVC4u) to provide feedback or raise issues to the Cloud Services (extended support) product team.
To perform this migration, you must be added as a coadministrator for the subscr
``` ## How is migration for Cloud Services (classic) different from Virtual Machines (classic)?
-Azure Service Manager supports two different compute products, [Azure Virtual Machines (classic)](https://docs.microsoft.com/previous-versions/azure/virtual-machines/windows/classic/tutorial-classic) and [Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) or Web/ Worker roles. The two products differ based on the deployment type that lies within the Cloud Service. Azure Cloud Services (classic) uses Cloud Service containing deployments with Web/Worker roles. Azure Virtual Machines (classic) uses a cloud service containing deployments with IaaS VMs.
+Azure Service Manager supports two different compute products, [Azure Virtual Machines (classic)](/previous-versions/azure/virtual-machines/windows/classic/tutorial-classic) and [Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) or Web/ Worker roles. The two products differ based on the deployment type that lies within the Cloud Service. Azure Cloud Services (classic) uses Cloud Service containing deployments with Web/Worker roles. Azure Virtual Machines (classic) uses a cloud service containing deployments with IaaS VMs.
The list of supported scenarios differ between Cloud Services (classic) and Virtual Machines (classic) because of differences in the deployment types.
These are top scenarios involving combinations of resources, features, and Cloud
| Service | Configuration | Comments | ||||
-| [Azure AD Domain Services](https://docs.microsoft.com/azure/active-directory-domain-services/migrate-from-classic-vnet) | Virtual networks that contain Azure Active Directory Domain services. | Virtual network containing both Cloud Service deployment and Azure AD Domain services is supported. Customer first needs to separately migrate Azure AD Domain services and then migrate the virtual network left only with the Cloud Service deployment |
+| [Azure AD Domain Services](../active-directory-domain-services/migrate-from-classic-vnet.md) | Virtual networks that contain Azure Active Directory Domain services. | Virtual network containing both Cloud Service deployment and Azure AD Domain services is supported. Customer first needs to separately migrate Azure AD Domain services and then migrate the virtual network left only with the Cloud Service deployment |
| Cloud Service | Cloud Service with a deployment in a single slot only. | Cloud Services containing either a prod or staging slot deployment can be migrated | | Cloud Service | Deployment not in a publicly visible virtual network (default virtual network deployment) | A Cloud Service can be in a publicly visible virtual network, in a hidden virtual network or not in any virtual network. Cloud Services in a hidden virtual network and publicly visible virtual networks are supported for migration. Customer can use the Validate API to tell if a deployment is inside a default virtual network or not and thus determine if it can be migrated. | |Cloud Service | XML extensions (BGInfo, Visual Studio Debugger, Web Deploy, and Remote Debugging). | All xml extensions are supported for migration
These are top scenarios involving combinations of resources, features and Cloud
| Resource | Next steps / work-around | |||
-| Auto Scale Rules | Migration goes through but rules are dropped. [Recreate the rules](https://docs.microsoft.com/azure/cloud-services-extended-support/configure-scaling) after migration on Cloud Services (extended support). |
-| Alerts | Migration goes through but alerts are dropped. [Recreate the rules](https://docs.microsoft.com/azure/cloud-services-extended-support/enable-alerts) after migration on Cloud Services (extended support). |
+| Auto Scale Rules | Migration goes through but rules are dropped. [Recreate the rules](./configure-scaling.md) after migration on Cloud Services (extended support). |
+| Alerts | Migration goes through but alerts are dropped. [Recreate the rules](./enable-alerts.md) after migration on Cloud Services (extended support). |
| VPN Gateway | Remove the VPN Gateway before beginning migration and then recreate the VPN Gateway once migration is complete. | | Express Route Gateway (in the same subscription as Virtual Network only) | Remove the Express Route Gateway before beginning migration and then recreate the Gateway once migration is complete. |
-| Quota | Quota is not migrated. [Request new quota](https://docs.microsoft.com/azure/azure-resource-manager/templates/error-resource-quota#solution) on Azure Resource Manager prior to migration for the validation to be successful. |
+| Quota | Quota is not migrated. [Request new quota](../azure-resource-manager/templates/error-resource-quota.md#solution) on Azure Resource Manager prior to migration for the validation to be successful. |
| Affinity Groups | Not supported. Remove any affinity groups before migration. |
-| Virtual networks using [virtual network peering](https://docs.microsoft.com/azure/virtual-network/virtual-network-peering-overview)| Before migrating a virtual network that is peered to another virtual network, delete the peering, migrate the virtual network to Resource Manager and re-create peering. This can cause downtime depending on the architecture. |
+| Virtual networks using [virtual network peering](../virtual-network/virtual-network-peering-overview.md)| Before migrating a virtual network that is peered to another virtual network, delete the peering, migrate the virtual network to Resource Manager and re-create peering. This can cause downtime depending on the architecture. |
| Virtual networks that contain App Service environments | Not supported | | Virtual networks that contain HDInsight services | Not supported. | Virtual networks that contain Azure API Management deployments | Not supported. <br><br> To migrate the virtual network, change the virtual network of the API Management deployment. This is a no downtime operation. |
These are top scenarios involving combinations of resources, features and Cloud
| Migration of deployments containing both production and staging slot deployment using Reserved IP addresses | Not supported. | | Migration of production and staging deployment in different virtual network|Migration of a two slot cloud service requires deleting the staging slot. Once the staging slot is deleted, migrate the production slot as an independent cloud service (extended support) in Azure Resource Manager. A new Cloud Services (extended support) deployment can then be linked to the migrated deployment with swappable property enabled. Deployments files of the old staging slot deployment can be reused to create this new swappable deployment. | | Migration of empty Cloud Service (Cloud Service with no deployment) | Not supported. |
-| Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration will then go through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins are not recommended](https://docs.microsoft.com/azure/cloud-services-extended-support/deploy-prerequisite#required-service-definition-file-csdef-updates) for use on Cloud Services (extended support).|
+| Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration will then go through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins are not recommended](./deploy-prerequisite.md#required-service-definition-file-csdef-updates) for use on Cloud Services (extended support).|
| Virtual networks with both PaaS and IaaS deployment |Not Supported <br><br> Move either the PaaS or IaaS deployments into a different virtual network. This will cause downtime. | Cloud Service deployments using legacy role sizes (such as Small or ExtraLarge). | The migration will complete, but the role sizes will be updated to use modern role sizes. There is no change in cost or SKU properties and virtual machine will not be rebooted for this change. Update all deployment artifacts to reference these new modern role sizes. For more information, see [Available VM sizes](available-sizes.md)| | Migration of Cloud Service to different virtual network | Not supported <br><br> 1. Move the deployment to a different classic virtual network before migration. This will cause downtime. <br> 2. Migrate the new virtual network to Azure Resource Manager. <br><br> Or <br><br> 1. Migrate the virtual network to Azure Resource Manager <br>2. Move the Cloud Service to a new virtual network. This will cause downtime. |
Minor changes are made to customer's .csdef and .cscfg file to make the deploy
- Classic sizes like Small, Large, ExtraLarge are replaced by their new size names, Standard_A*. The size names need to be changed to their new names in .csdef file. For more information, see [Cloud Services (extended support) deployment prerequisites](deploy-prerequisite.md#required-service-definition-file-csdef-updates) - Use the Get API to get the latest copy of the deployment files.
- - Get the template using [Portal](https://docs.microsoft.com/azure/azure-resource-manager/templates/export-template-portal), [PowerShell](https://docs.microsoft.com/azure/azure-resource-manager/management/manage-resource-groups-powershell#export-resource-groups-to-templates), [CLI](https://docs.microsoft.com/azure/azure-resource-manager/management/manage-resource-groups-cli#export-resource-groups-to-templates), and [Rest API](https://docs.microsoft.com/rest/api/resources/resourcegroups/exporttemplate)
- - Get the .csdef file using [PowerShell](https://docs.microsoft.com/powershell/module/az.cloudservice/?view=azps-5.4.0#cloudservice&preserve-view=true) or [Rest API](https://docs.microsoft.com/rest/api/compute/cloudservices/rest-get-package).
- - Get the .cscfg file using [PowerShell](https://docs.microsoft.com/powershell/module/az.cloudservice/?view=azps-5.4.0#cloudservice&preserve-view=true) or [Rest API](https://docs.microsoft.com/rest/api/compute/cloudservices/rest-get-package).
+ - Get the template using [Portal](../azure-resource-manager/templates/export-template-portal.md), [PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), [CLI](../azure-resource-manager/management/manage-resource-groups-cli.md#export-resource-groups-to-templates), and [Rest API](/rest/api/resources/resourcegroups/exporttemplate)
+ - Get the .csdef file using [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) or [Rest API](/rest/api/compute/cloudservices/rest-get-package).
+ - Get the .cscfg file using [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) or [Rest API](/rest/api/compute/cloudservices/rest-get-package).
Customers need to update their tooling and automation to start using the new API
## Next steps - [Overview of Platform-supported migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-overview.md) - Migrate to Cloud Services (extended support) using the [Azure portal](in-place-migration-portal.md)-- Migrate to Cloud Services (extended support) using [PowerShell](in-place-migration-powershell.md)
+- Migrate to Cloud Services (extended support) using [PowerShell](in-place-migration-powershell.md)
cloud-services-extended-support In Place Migration Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-portal.md
If you're not able to add a co-administrator, contact a service administrator or
**Sign up for Migration resource provider**
-1. Register with the migration resource provider `Microsoft.ClassicInfrastructureMigrate` and preview feature `Cloud Services` under Microsoft.Compute namespace using the [Azure portal](https://docs.microsoft.com/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider-1).
+1. Register with the migration resource provider `Microsoft.ClassicInfrastructureMigrate` and preview feature `Cloud Services` under Microsoft.Compute namespace using the [Azure portal](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
1. Wait five minutes for the registration to complete, then check the status of the approval. ## Migrate your Cloud Service resources
If you're not able to add a co-administrator, contact a service administrator or
Type "yes" to confirm and commit to the migration. The migration is now complete. The migrated Cloud Services (extended support) deployment is unlocked for all operations. ## Next steps
-Review the [Post migration changes](in-place-migration-overview.md#post-migration-changes) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment.
+Review the [Post migration changes](in-place-migration-overview.md#post-migration-changes) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment.
cloud-services-extended-support In Place Migration Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-powershell.md
Planning is the most important step for a successful migration experience. Revie
## 2) Install the latest version of PowerShell There are two main options to install Azure PowerShell: [PowerShell Gallery](https://www.powershellgallery.com/profiles/azure-sdk/) or [Web Platform Installer (WebPI)](https://aka.ms/webpi-azps). WebPI receives monthly updates. PowerShell Gallery receives updates on a continuous basis. This article is based on Azure PowerShell version 2.1.0.
-For installation instructions, see [How to install and configure Azure PowerShell](https://docs.microsoft.com/powershell/azure/servicemanagement/install-azure-ps?view=azuresmps-4.0.0&preserve-view=true).
+For installation instructions, see [How to install and configure Azure PowerShell](/powershell/azure/servicemanagement/install-azure-ps?preserve-view=true&view=azuresmps-4.0.0).
## 3) Ensure Admin permissions To perform this migration, you must be added as a co-administrator for the subscription in the [Azure portal](https://portal.azure.com).
Move-AzureVirtualNetwork -Commit -VirtualNetworkName $vnetName
## Next steps
-Review the [Post migration changes](in-place-migration-overview.md#post-migration-changes) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment.
+Review the [Post migration changes](in-place-migration-overview.md#post-migration-changes) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment.
cloud-services-extended-support In Place Migration Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-technical-details.md
This article discusses the technical details regarding the migration tool as per
### Service Configuration and Service Definition files - The .cscfg and .csdef files need to be updated for Cloud Services (extended support) with minor changes. - The names of resources like virtual network and VM SKU are different. See [Translation of resources and naming convention post migration](#translation-of-resources-and-naming-convention-post-migration)-- Customers can retrieve their new deployments through [PowerShell](https://docs.microsoft.com/powershell/module/az.cloudservice/?view=azps-5.4.0#cloudservice&preserve-view=true) and [Rest API](https://docs.microsoft.com/rest/api/compute/cloudservices/get).
+- Customers can retrieve their new deployments through [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) and [Rest API](/rest/api/compute/cloudservices/get).
### Cloud Service and deployments - Each Cloud Services (extended support) deployment is an independent Cloud Service. Deployments are no longer grouped into a cloud service using slots.
As part of migration, the resource names are changed, and few Cloud Services fea
- Customers can use PowerShell or Rest API to abort or commit. ### How much time can the operations take?<br>
-Validate is designed to be quick. Prepare is longest running and takes some time depending on total number of role instances being migrated. Abort and commit can also take time but will take less time compared to prepare. All operations will time out after 24 hrs.
+Validate is designed to be quick. Prepare is the longest-running operation; its duration depends on the total number of role instances being migrated. Abort and commit can also take time, but less than prepare. All operations time out after 24 hours.
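The validate, prepare, commit, and abort operations described above, together with the 24-hour timeout, can be sketched as a simple polling loop. This is purely illustrative: `poll_status` is a hypothetical callable, and the real operations are driven through PowerShell or the REST API:

```python
import time

TIMEOUT_SECONDS = 24 * 60 * 60  # all migration operations time out after 24 hours

def wait_for_operation(poll_status, interval=30.0, timeout=TIMEOUT_SECONDS,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll a long-running migration operation (for example, Prepare) until it
    reaches a terminal state or the 24-hour window elapses.

    poll_status is a hypothetical callable returning one of
    'InProgress', 'Succeeded', or 'Failed'.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        status = poll_status()
        if status in ("Succeeded", "Failed"):
            return status
        sleep(interval)
    raise TimeoutError("operation did not complete within 24 hours")

# Simulated operation: in progress twice, then succeeds.
states = iter(["InProgress", "InProgress", "Succeeded"])
print(wait_for_operation(lambda: next(states), sleep=lambda _: None))
# prints Succeeded
```

The `clock` and `sleep` parameters exist only so the loop can be simulated without real waiting.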
cloud-services-extended-support Role Startup Failure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/role-startup-failure.md
To deploy your cloud service with IntelliTrace turned on:
## Next steps -- Learn how to [troubleshoot cloud service role issues by using Azure PaaS computer diagnostics data](https://docs.microsoft.com/archive/blogs/kwill/windows-azure-paas-compute-diagnostics-data).
+- Learn how to [troubleshoot cloud service role issues by using Azure PaaS compute diagnostics data](/archive/blogs/kwill/windows-azure-paas-compute-diagnostics-data).
cognitive-services Csharptutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/Tutorials/CSharpTutorial.md
When no longer needed, delete the folder into which you cloned the `Microsoft/Co
## Next steps > [!div class="nextstepaction"]
-> [Get started with Face service](../../Face/Tutorials/FaceAPIinCSharpTutorial.md)
+> [Get started with Face service](../../face/quickstarts/client-libraries.md?pivots=programming-language-csharp)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/whats-new.md
Learn what's new in the service. These items may be release notes, videos, blog
### Computer Vision v3.2 GA The Computer Vision API v3.2 is now generally available with the following updates:
-* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions and content displayed in the image. This is available through the [Tag Image API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200). See the Image Analysis [how-to guide](https://docs.microsoft.com/azure/cognitive-services/computer-vision/vision-api-how-to-topics/howtocallvisionapi) and [overview](https://docs.microsoft.com/azure/cognitive-services/computer-vision/overview-image-analysis) to learn more.
-* Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy and gory visual content. This is available through the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). See the Image Analysis [how-to guide](https://docs.microsoft.com/azure/cognitive-services/computer-vision/vision-api-how-to-topics/howtocallvisionapi) and [overview](https://docs.microsoft.com/azure/cognitive-services/computer-vision/overview-image-analysis) to learn more.
+* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions and content displayed in the image. This is available through the [Tag Image API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200). See the Image Analysis [how-to guide](./vision-api-how-to-topics/howtocallvisionapi.md) and [overview](./overview-image-analysis.md) to learn more.
+* Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy and gory visual content. This is available through the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). See the Image Analysis [how-to guide](./vision-api-how-to-topics/howtocallvisionapi.md) and [overview](./overview-image-analysis.md) to learn more.
* [OCR (Read) available for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages. * [OCR (Read)](./overview-ocr.md) also available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.
Follow an [Extract text quickstart](https://github.com/Azure-Samples/cognitive-s
## Cognitive Service updates
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
+[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Face How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/face-how-to-install-containers.md
Previously updated : 02/23/2021 Last updated : 04/28/2021 keywords: on-premises, Docker, container, identify
-# Install and run Face containers (Preview)
+# Install and run Face containers (Retiring)
> [!IMPORTANT]
-> The limit for Face container users has been reached. We are not currently accepting new applications for the Face container.
+> The Face container preview is no longer accepting applications, and the container has been deprecated as of April 29, 2021. The Face container will be fully retired on July 26, 2021.
Azure Cognitive Services Face API provides a Linux Docker container that detects and analyzes human faces in images. It also identifies attributes, which include face landmarks such as noses and eyes, gender, age, and other machine-predicted facial features. In addition to detection, Face can check if two faces in the same image or different images are the same by using a confidence score. Face also can compare faces against a database to see if a similar-looking or identical face already exists. It also can organize similar faces into groups by using shared visual traits.
cognitive-services Face Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/face-resource-container-config.md
Previously updated : 04/01/2020 Last updated : 04/29/2021
-# Configure Face Docker containers
+# Configure Face Docker containers (Retiring)
+
+> [!IMPORTANT]
+> The Face container preview is no longer accepting applications, and the container has been deprecated as of April 29, 2021. The Face container will be fully retired on July 26, 2021.
The **Face** container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the billing settings.
cognitive-services What Is Luis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/what-is-luis.md
This documentation contains the following article types:
* **Simplicity**: LUIS removes the need for in-house AI expertise or any prior machine learning knowledge. With only a few clicks you can build your own conversational AI application. You can build your custom application by following one of our [quickstarts](luis-get-started-create-app.md), or you can use one of our [prebuilt domain](luis-get-started-create-app.md) apps. * **Security, Privacy and Compliance**: Backed by Azure infrastructure, LUIS offers enterprise-grade security, privacy, and compliance. Your data remains yours; you can delete your data at any time. Your data is encrypted while it's in storage. Learn more about this [here](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy).
-* **Integration**: easily integrate your LUIS app with other Microsoft services like [Microsoft Bot framework](https://docs.microsoft.com/composer/tutorial/tutorial-luis), [QnA Maker](../QnAMaker/choose-natural-language-processing-service.md), and [Speech service](../Speech-Service/quickstarts/intent-recognition.md).
+* **Integration**: easily integrate your LUIS app with other Microsoft services like [Microsoft Bot framework](/composer/tutorial/tutorial-luis), [QnA Maker](../QnAMaker/choose-natural-language-processing-service.md), and [Speech service](../speech-service/get-started-intent-recognition.md).
## LUIS Scenarios
-* [Build an enterprise-grade conversational bot](https://docs.microsoft.com/azure/architecture/reference-architectures/ai/conversational-bot): This reference architecture describes how to build an enterprise-grade conversational bot (chatbot) using the Azure Bot Framework.
-* [Commerce Chatbot](https://docs.microsoft.com/azure/architecture/solution-ideas/articles/commerce-chatbot): Together, the Azure Bot Service and Language Understanding service enable developers to create conversational interfaces for various scenarios like banking, travel, and entertainment.
-* [Controlling IoT devices using a Voice Assistant](https://docs.microsoft.com/azure/architecture/solution-ideas/articles/iot-controlling-devices-with-voice-assistant): Create seamless conversational interfaces with all of your internet-accessible devices-from your connected television or fridge to devices in a connected power plant.
+* [Build an enterprise-grade conversational bot](/azure/architecture/reference-architectures/ai/conversational-bot): This reference architecture describes how to build an enterprise-grade conversational bot (chatbot) using the Azure Bot Framework.
+* [Commerce Chatbot](/azure/architecture/solution-ideas/articles/commerce-chatbot): Together, the Azure Bot Service and Language Understanding service enable developers to create conversational interfaces for various scenarios like banking, travel, and entertainment.
+* [Controlling IoT devices using a Voice Assistant](/azure/architecture/solution-ideas/articles/iot-controlling-devices-with-voice-assistant): Create seamless conversational interfaces with all of your internet-accessible devices, from your connected television or fridge to devices in a connected power plant.
## Application Development life cycle
This documentation contains the following article types:
- **Build**: Use your authoring resource to develop your app. Start by defining [intents](luis-concept-intent.md) and [entities](luis-concept-entity-types.md). Then, add training [utterances](luis-concept-utterance.md) for each intent. - **Test and Improve**: Start testing your model with other utterances to get a sense of how the app behaves, and you can decide if any improvement is needed. You can improve your application by following these [best practices](luis-concept-best-practices.md). - **Publish**: Deploy your app for prediction and query the endpoint using your prediction resource. Learn more about authoring and prediction resources [here](luis-how-to-azure-subscription.md#luis-resources). -- **Connect**: Connect to other services such as [Microsoft Bot framework](https://docs.microsoft.com/composer/tutorial/tutorial-luis), [QnA Maker](../QnAMaker/choose-natural-language-processing-service.md), and [Speech service](../Speech-Service/quickstarts/intent-recognition.md).
+- **Connect**: Connect to other services such as [Microsoft Bot framework](/composer/tutorial/tutorial-luis), [QnA Maker](../QnAMaker/choose-natural-language-processing-service.md), and [Speech service](../speech-service/get-started-intent-recognition.md).
- **Refine**: [Review endpoint utterances](luis-concept-review-endpoint-utterances.md) to improve your application with real life examples Learn more about planning and building your application [here](luis-how-plan-your-app.md).
Learn more about planning and building your application [here](luis-how-plan-you
[flow]: /connectors/luis/ [authoring-apis]: https://go.microsoft.com/fwlink/?linkid=2092087 [endpoint-apis]: https://go.microsoft.com/fwlink/?linkid=2092356
-[qnamaker]: https://qnamaker.ai/
+[qnamaker]: https://qnamaker.ai/
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Overview/overview.md
QnA Maker is a cloud-based Natural Language Processing (NLP) service that allows
QnA Maker is commonly used to build conversational client applications, which include social media applications, chat bots, and speech-enabled desktop applications.
-QnA Maker doesn't store customer data. All customer data (question answers and chatlogs) is stored in the region the customer deploys the dependent service instances in. For more details on dependent services see [here](https://docs.microsoft.com/azure/cognitive-services/qnamaker/concepts/plan?tabs=v1).
+QnA Maker doesn't store customer data. All customer data (question answers and chat logs) is stored in the region where the customer deploys the dependent service instances. For more details on dependent services, see [here](../concepts/plan.md?tabs=v1).
## When to use QnA Maker
We offer quickstarts in most popular programming languages, each designed to tea
QnA Maker provides everything you need to build, manage, and deploy your custom knowledge base. > [!div class="nextstepaction"]
-> [Review the latest changes](../whats-new.md)
+> [Review the latest changes](../whats-new.md)
cognitive-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-configure-openssl-linux.md
export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt
``` ## Certificate Revocation Checks
-When connecting to the Speech Service, the Speech SDK will verify that the TLS certificate used by the Speech Service has not been revoked. To conduct this check, the Speech SDK will need access to the CRL distribution points for Certificate Authorities used by Azure. A list of possible CRL download locations can be found in [this document](https://docs.microsoft.com/azure/security/fundamentals/tls-certificate-changes). If a certificate has been revoked or the CRL cannot be downloaded the Speech SDK will abort the connection and raise the Canceled event.
+When connecting to the Speech Service, the Speech SDK verifies that the TLS certificate used by the Speech Service has not been revoked. To conduct this check, the Speech SDK needs access to the CRL distribution points for the Certificate Authorities used by Azure. A list of possible CRL download locations can be found in [this document](../../security/fundamentals/tls-certificate-changes.md). If a certificate has been revoked or the CRL cannot be downloaded, the Speech SDK aborts the connection and raises the Canceled event.
If the network where the Speech SDK is used does not permit access to the CRL download locations, the CRL check can either be disabled or set to not fail if the CRL cannot be retrieved. This configuration is done through the configuration object used to create a Recognizer object.
speech_config.set_property_by_name("OPENSSL_DISABLE_CRL_CHECK", "true")
## Next steps > [!div class="nextstepaction"]
-> [About the Speech SDK](speech-sdk.md)
+> [About the Speech SDK](speech-sdk.md)
cognitive-services How To Migrate From Bing Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-migrate-from-bing-speech.md
This article outlines the differences between the Bing Speech APIs and the Speec
A single Speech service subscription key grants access to the following features. Each is metered separately, so you're charged only for the features you use. * [Speech-to-text](speech-to-text.md)
-* [Custom speech-to-text](/azure/cognitive-services/speech-service/custom-speech-overview)
+* [Custom speech-to-text](./custom-speech-overview.md)
* [Text-to-speech](text-to-speech.md) * [Custom text-to-speech voices](./how-to-custom-voice-create-voice.md) * [Speech translation](speech-translation.md) (does not include [Text translation](../translator/translator-info-overview.md))
For Speech service, SDK, and API support, visit the Speech service [support page
* [Speech service release notes](releasenotes.md) * [What is the Speech service](overview.md)
-* [Speech service and Speech SDK documentation](speech-sdk.md#get-the-speech-sdk)
+* [Speech service and Speech SDK documentation](speech-sdk.md#get-the-speech-sdk)
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
More samples have been added and are constantly being updated. For the latest se
## Cognitive Services Speech SDK 0.2.12733: 2018-May release
-This release is the first public preview release of the Cognitive Services Speech SDK.
+This release is the first public preview release of the Cognitive Services Speech SDK.
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
For the usage with [Speech SDK](speech-sdk.md) and/or [Speech-to-text REST API f
| Quota | Free (F0)<sup>1</sup> | Standard (S0) | |--|--|--|
-| **Concurrent Request limit - Base model** | 1 | 100 (default value) |
+| **Concurrent Request limit - Base model endpoint** | 1 | 100 (default value) |
| Adjustable | No<sup>2</sup> | Yes<sup>2</sup> |
-| **Concurrent Request limit - Custom model** | 1 | 20 (default value) |
+| **Concurrent Request limit - Custom endpoint** | 1 | 100 (default value) |
| Adjustable | No<sup>2</sup> | Yes<sup>2</sup> | #### Batch Transcription
The next sections describe specific cases of adjusting quotas.<br/>
Jump to [Text-to-Speech. Increasing Transcription Concurrent Request limit for Custom voice](#text-to-speech-increasing-transcription-concurrent-request-limit-for-custom-voice) ### Speech-to-text: increasing online transcription concurrent request limit
-By default the number of concurrent requests is limited to 100 per Speech resource (Base model) and to 20 per Custom endpoint (Custom model). For Standard pricing tier this amount can be increased. Before submitting the request, ensure you are familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).
+By default the number of concurrent requests is limited to 100 per Speech resource (Base model) and to 100 per Custom endpoint (Custom model). For Standard pricing tier this amount can be increased. Before submitting the request, ensure you are familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).
>[!NOTE]
-> If you use custom models, please be aware, that one Speech resource may be associated with many custom endpoints hosting many custom model deployments. Each Custom endpoint has the default number of concurrent request limit (20) set by creation. If you need to adjust it, you need to make the adjustment of each custom endpoint **separately**. Please also note, that the value of the number of concurrent request limit for the base model of a Speech resource has **no** effect to the custom endpoints associated with this resource.
+> If you use custom models, be aware that one Speech resource may be associated with many custom endpoints hosting many custom model deployments. Each custom endpoint has the default concurrent request limit (100) set at creation. If you need to adjust it, make the adjustment for each custom endpoint **separately**. Also note that the concurrent request limit for the base model of a Speech resource has **no** effect on the custom endpoints associated with that resource.
Increasing the Concurrent Request limit does **not** directly affect your costs. Speech service uses a "pay only for what you use" model. The limit defines how high the service may scale before it starts throttling your requests.
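When a request exceeds the concurrent-request limit, the service throttles it; the best practices referenced above typically amount to retrying with exponential backoff and jitter. A minimal sketch of that general pattern (not a Speech SDK API; the `rng` parameter is only there to make the jitter controllable):

```python
import random

def backoff_delays(max_retries=5, base=0.5, cap=30.0, rng=random.random):
    """Yield exponentially growing, jittered delays (in seconds) to wait
    between retries of a throttled request (for example, HTTP 429)."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        yield delay * rng()  # "full jitter": sleep a random fraction of the window

# Deterministic for illustration: no jitter applied.
print(list(backoff_delays(rng=lambda: 1.0)))
# prints [0.5, 1.0, 2.0, 4.0, 8.0]
```

The jitter spreads retries out so that many clients throttled at the same moment do not all retry at the same moment, which is exactly the autoscaling scenario the best-practices section warns about.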
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
While using SSML, keep in mind that special characters, such as quotation marks,
Each SSML document is created with SSML elements (or tags). These elements are used to adjust pitch, prosody, volume, and more. The following sections detail how each element is used, and when an element is required or optional. > [!IMPORTANT]
-> Don't forget to use double quotes around attribute values. Standards for well-formed, valid XML requires attribute values to be enclosed in double quotation marks. For example, `<prosody volume="90">` is a well-formed, valid element, but `<prosody volume=90>` is not. SSML may not recognize attribute values that are not in quotes.
+> Don't forget to use double quotes around attribute values. Standards for well-formed, valid XML require attribute values to be enclosed in double quotation marks. For example, `<prosody volume="90">` is a well-formed, valid element, but `<prosody volume=90>` is not. SSML might not recognize attribute values that are not in quotes.
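The quoting requirement can be checked with any XML parser before sending markup to the service. A quick illustration using Python's standard library (not part of the Speech SDK):

```python
import xml.etree.ElementTree as ET

well_formed = '<prosody volume="90">hello</prosody>'
malformed = '<prosody volume=90>hello</prosody>'  # unquoted attribute value

ET.fromstring(well_formed)  # parses without error
try:
    ET.fromstring(malformed)
except ET.ParseError as err:
    print("rejected:", err)  # the unquoted value is not well-formed XML
```

Validating locally like this surfaces quoting mistakes immediately instead of as an opaque service-side failure.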
## Create an SSML document
Each SSML document is created with SSML elements (or tags). These elements are u
| Attribute | Description | Required / Optional | |--|-|| | `version` | Indicates the version of the SSML specification used to interpret the document markup. The current version is 1.0. | Required |
-| `xml:lang` | Specifies the language of the root document. The value may contain a lowercase, two-letter language code (for example, `en`), or the language code and uppercase country/region (for example, `en-US`). | Required |
+| `xml:lang` | Specifies the language of the root document. The value can contain a lowercase, two-letter language code (for example, `en`), or the language code and uppercase country/region (for example, `en-US`). | Required |
| `xmlns` | Specifies the URI to the document that defines the markup vocabulary (the element types and attribute names) of the SSML document. The current URI is http://www.w3.org/2001/10/synthesis. | Required | ## Choose a voice for text-to-speech
Currently, speaking style adjustments are supported for the following neural voi
* `zh-CN-XiaoxuanNeural` (Preview) * `zh-CN-XiaoruiNeural` (Preview)
+> [!NOTE]
+> Voices in preview are only available in these three regions: East US, West Europe, and Southeast Asia.
+ The intensity of speaking style can be further changed to better fit your use case. You can specify a stronger or softer style with `styledegree` to make the speech more expressive or subdued. Currently, speaking style adjustments are supported for Chinese (Mandarin, Simplified) neural voices. Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice will imitate a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name will not be changed. Currently, role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices:
Above changes are applied at the sentence level, and styles and role-plays vary
<mstts:express-as role="string" style="string"></mstts:express-as> ``` > [!NOTE]
-> At the moment, `styledegree` only supports Chinese (Mandarin, Simplified) neural voices. `role` only supports zh-CN-XiaomoNeural and zh-CN-XiaoxuanNeural.
+> At the moment, `styledegree` only supports Chinese (Mandarin, Simplified) neural voices. `role` only supports zh-CN-XiaomoNeural and zh-CN-XiaoxuanNeural.
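A small helper can assemble the `mstts:express-as` element shown above and confirm the result is well-formed. This is an illustrative sketch, not Speech SDK code; the `https://www.w3.org/2001/mstts` namespace URI is the one commonly used in SSML samples, so verify it against your own documents:

```python
import xml.etree.ElementTree as ET

def express_as_ssml(voice, text, style, styledegree=None, role=None):
    """Build a minimal SSML document wrapping `text` in an
    mstts:express-as element; attribute values are always double-quoted."""
    attrs = ['style="{}"'.format(style)]
    if styledegree:
        attrs.append('styledegree="{}"'.format(styledegree))
    if role:
        attrs.append('role="{}"'.format(role))
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">'
        '<voice name="{v}"><mstts:express-as {a}>{t}</mstts:express-as>'
        '</voice></speak>'
    ).format(v=voice, a=" ".join(attrs), t=text)

doc = express_as_ssml("zh-CN-XiaomoNeural", "hello", style="cheerful", styledegree="1.5")
ET.fromstring(doc)  # raises ParseError if the markup is not well-formed
print("well-formed")
```

Parsing the generated document locally catches quoting and nesting mistakes before the SSML is ever submitted for synthesis.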
**Attributes**
Use this table to determine which speaking styles are supported for each neural
| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. | | | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. | | | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
-| `zh-CN-YunxiNeural` | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
+| `zh-CN-YunxiNeural` | `style="assistant"` | Expresses a warm and relaxed tone for digital assistants |
+| | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
| | `style="sad"` | Expresses a sorrowful tone, with higher pitch, less intensity, and lower vocal energy. Common indicators of this emotion would be whimpers or crying during speech. | | | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. | | | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
Use this table to determine which speaking styles are supported for each neural
| | `style="embarrassed"` | Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable | | `style="affectionate"` | Expresses a warm and affectionate tone, with higher pitch and vocal energy. The speaker is in a state of attracting the attention of the listener. The "personality" of the speaker is often endearing in nature. | | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
-| `zh-CN-XiaomoNeural` | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
+| `zh-CN-XiaomoNeural` | `style="calm"` | Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, and prosody are much more uniform compared to other types of speech. |
+| | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
| | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. |
-| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
-| | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. |
-| | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
+| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
+| | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. |
+| | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
| | `style="depressed"` | Expresses a melancholic and despondent tone with lower pitch and energy |
| | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
-| `zh-CN-XiaoxuanNeural` | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
+| `zh-CN-XiaoxuanNeural` | `style="calm"` | Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, prosody is much more uniform compared to other types of speech. |
+| | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
| | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. |
-| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
-| | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. |
-| | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
+| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
+| | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. |
+| | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
| | `style="depressed"` | Expresses a melancholic and despondent tone with lower pitch and energy |
| | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
-| `zh-CN-XiaoruiNeural` | `style="sad"` | Expresses a sorrowful tone, with higher pitch, less intensity, and lower vocal energy. Common indicators of this emotion would be whimpers or crying during speech. |
+| `zh-CN-XiaoruiNeural` | `style="sad"` | Expresses a sorrowful tone, with higher pitch, less intensity, and lower vocal energy. Common indicators of this emotion would be whimpers or crying during speech. |
| | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. |
-| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
+| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
Use this table to check the supported roles and their definitions.
A good place to start is by trying out the slew of educational apps that are hel
`p` and `s` elements are used to denote paragraphs and sentences, respectively. In the absence of these elements, the text-to-speech service automatically determines the structure of the SSML document.
-The `p` element may contain text and the following elements: `audio`, `break`, `phoneme`, `prosody`, `say-as`, `sub`, `mstts:express-as`, and `s`.
+The `p` element can contain text and the following elements: `audio`, `break`, `phoneme`, `prosody`, `say-as`, `sub`, `mstts:express-as`, and `s`.
-The `s` element may contain text and the following elements: `audio`, `break`, `phoneme`, `prosody`, `say-as`, `mstts:express-as`, and `sub`.
+The `s` element can contain text and the following elements: `audio`, `break`, `phoneme`, `prosody`, `say-as`, `mstts:express-as`, and `sub`.
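For illustration, the containment rules above can be sketched by composing an SSML fragment in Python. This is a hedged sketch only; the sentence text is invented, and no SDK helper of this kind exists.

```python
# Illustrative sketch: nest s (sentence) elements inside a p (paragraph)
# element, following the containment rules described above.
# The sentence text below is invented for this example.
sentences = [
    "The p element denotes a paragraph.",
    "The s element denotes a sentence.",
]
paragraph = "<p>" + "".join(f"<s>{text}</s>" for text in sentences) + "</p>"
print(paragraph)
```

Without explicit `p` and `s` markup, the text-to-speech service would infer this structure on its own.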
**Syntax**
The `s` element may contain text and the following elements: `audio`, `break`, `
The `ph` element is used for phonetic pronunciation in SSML documents. The `ph` element can only contain text, no other elements. Always provide human-readable speech as a fallback.
-Phonetic alphabets are composed of phones, which are made up of letters, numbers, or characters, sometimes in combination. Each phone describes a unique sound of speech. This is in contrast to the Latin alphabet, where any letter may represent multiple spoken sounds. Consider the different pronunciations of the letter "c" in the words "candy" and "cease", or the different pronunciations of the letter combination "th" in the words "thing" and "those".
+Phonetic alphabets are composed of phones, which are made up of letters, numbers, or characters, sometimes in combination. Each phone describes a unique sound of speech. This is in contrast to the Latin alphabet, where any letter might represent multiple spoken sounds. Consider the different pronunciations of the letter "c" in the words "candy" and "cease", or the different pronunciations of the letter combination "th" in the words "thing" and "those".
> [!NOTE]
> The `phoneme` tag is currently not supported for these five voices: et-EE-AnuNeural, ga-IE-OrlaNeural, lt-LT-OnaNeural, lv-LV-EveritaNeural, and mt-MT-GarceNeural.
Phonetic alphabets are composed of phones, which are made up of letters, numbers
| Attribute | Description | Required / Optional |
|--|--|--|
-| `alphabet` | Specifies the phonetic alphabet to use when synthesizing the pronunciation of the string in the `ph` attribute. The string specifying the alphabet must be specified in lowercase letters. The following are the possible alphabets that you may specify.<ul><li>`ipa` &ndash; <a href="https://en.wikipedia.org/wiki/International_Phonetic_Alphabet" target="_blank">International Phonetic Alphabet </a></li><li>`sapi` &ndash; [Speech service phonetic alphabet](speech-ssml-phonetic-sets.md)</li><li>`ups` &ndash;<a href="https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm" target="_blank"> Universal Phone Set</a></li></ul><br>The alphabet applies only to the `phoneme` in the element.. | Optional |
+| `alphabet` | Specifies the phonetic alphabet to use when synthesizing the pronunciation of the string in the `ph` attribute. The string specifying the alphabet must be specified in lowercase letters. The following are the possible alphabets that you can specify.<ul><li>`ipa` &ndash; <a href="https://en.wikipedia.org/wiki/International_Phonetic_Alphabet" target="_blank">International Phonetic Alphabet </a></li><li>`sapi` &ndash; [Speech service phonetic alphabet](speech-ssml-phonetic-sets.md)</li><li>`ups` &ndash;<a href="https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm" target="_blank"> Universal Phone Set</a></li></ul><br>The alphabet applies only to the `phoneme` in the element. | Optional |
| `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, the text-to-speech (TTS) service rejects the entire SSML document and produces none of the speech output specified in the document. | Required if using phonemes. |

**Examples**
For more information on the detailed Speech service phonetic alphabet, see the [
## Adjust prosody
-The `prosody` element is used to specify changes to pitch, contour, range, rate, duration, and volume for the text-to-speech output. The `prosody` element may contain text and the following elements: `audio`, `break`, `p`, `phoneme`, `prosody`, `say-as`, `sub`, and `s`.
+The `prosody` element is used to specify changes to pitch, contour, range, rate, duration, and volume for the text-to-speech output. The `prosody` element can contain text and the following elements: `audio`, `break`, `p`, `phoneme`, `prosody`, `say-as`, `sub`, and `s`.
Because prosodic attribute values can vary over a wide range, the speech synthesis engine interprets the assigned values as a suggestion of what the actual prosodic values of the selected voice should be. The text-to-speech service limits or substitutes values that are not supported. Examples of unsupported values are a pitch of 1 MHz or a volume of 120.
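A minimal sketch of how such an out-of-range value could be limited. This is purely illustrative, not the service's actual implementation; the `clamp_volume` helper is invented for the example.

```python
# Illustrative only: mimic how an engine might clamp an out-of-range
# volume (valid range 0.0 to 100.0) to the nearest supported bound.
# This helper is not part of any SDK.
def clamp_volume(value: float) -> float:
    return max(0.0, min(100.0, value))

print(clamp_volume(120.0))  # 100.0 -- the unsupported value 120 is limited
print(clamp_volume(75.0))   # 75.0  -- in-range values pass through
```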
Because prosodic attribute values can vary over a wide range, the speech recogni
| Attribute | Description | Required / Optional |
|--|--|--|
-| `pitch` | Indicates the baseline pitch for the text. You may express the pitch as:<ul><li>An absolute value, expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value, expressed as a number preceded by "+" or "-" and followed by "Hz" or "st", that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.</li><li>A constant value:<ul><li>x-low</li><li>low</li><li>medium</li><li>high</li><li>x-high</li><li>default</li></ul></li></ul> | Optional |
+| `pitch` | Indicates the baseline pitch for the text. You can express the pitch as:<ul><li>An absolute value, expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value, expressed as a number preceded by "+" or "-" and followed by "Hz" or "st", that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.</li><li>A constant value:<ul><li>x-low</li><li>low</li><li>medium</li><li>high</li><li>x-high</li><li>default</li></ul></li></ul> | Optional |
| `contour` |Contour now supports both neural and standard voices. Contour represents changes in pitch. These changes are represented as an array of targets at specified time positions in the speech output. Each target is defined by sets of parameter pairs. For example: <br/><br/>`<prosody contour="(0%,+20Hz) (10%,-2st) (40%,+10Hz)">`<br/><br/>The first value in each set of parameters specifies the location of the pitch change as a percentage of the duration of the text. The second value specifies the amount to raise or lower the pitch, using a relative value or an enumeration value for pitch (see `pitch`). | Optional |
-| `range` | A value that represents the range of pitch for the text. You may express `range` using the same absolute values, relative values, or enumeration values used to describe `pitch`. | Optional |
-| `rate` | Indicates the speaking rate of the text. You may express `rate` as:<ul><li>A relative value, expressed as a number that acts as a multiplier of the default. For example, a value of *1* results in no change in the rate. A value of *0.5* results in a halving of the rate. A value of *3* results in a tripling of the rate.</li><li>A constant value:<ul><li>x-slow</li><li>slow</li><li>medium</li><li>fast</li><li>x-fast</li><li>default</li></ul></li></ul> | Optional |
+| `range` | A value that represents the range of pitch for the text. You can express `range` using the same absolute values, relative values, or enumeration values used to describe `pitch`. | Optional |
+| `rate` | Indicates the speaking rate of the text. You can express `rate` as:<ul><li>A relative value, expressed as a number that acts as a multiplier of the default. For example, a value of *1* results in no change in the rate. A value of *0.5* results in a halving of the rate. A value of *3* results in a tripling of the rate.</li><li>A constant value:<ul><li>x-slow</li><li>slow</li><li>medium</li><li>fast</li><li>x-fast</li><li>default</li></ul></li></ul> | Optional |
| `duration` | The period of time that should elapse while the speech synthesis (TTS) service reads the text, in seconds or milliseconds. For example, *2s* or *1800ms*. Duration supports standard voices only.| Optional |
-| `volume` | Indicates the volume level of the speaking voice. You may express the volume as:<ul><li>An absolute value, expressed as a number in the range of 0.0 to 100.0, from *quietest* to *loudest*. For example, 75. The default is 100.0.</li><li>A relative value, expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. For example, +10 or -5.5.</li><li>A constant value:<ul><li>silent</li><li>x-soft</li><li>soft</li><li>medium</li><li>loud</li><li>x-loud</li><li>default</li></ul></li></ul> | Optional |
+| `volume` | Indicates the volume level of the speaking voice. You can express the volume as:<ul><li>An absolute value, expressed as a number in the range of 0.0 to 100.0, from *quietest* to *loudest*. For example, 75. The default is 100.0.</li><li>A relative value, expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. For example, +10 or -5.5.</li><li>A constant value:<ul><li>silent</li><li>x-soft</li><li>soft</li><li>medium</li><li>loud</li><li>x-loud</li><li>default</li></ul></li></ul> | Optional |
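The attributes in the table above can be combined on a single `prosody` element. The following is a hedged sketch of composing such an element in Python; the attribute values come from the table, but the `prosody` helper itself is an invented illustration, not part of any SDK.

```python
# Sketch: compose a prosody element from the pitch, rate, and volume
# attributes described in the table above. Illustrative helper only.
def prosody(text: str, pitch: str = "default", rate: str = "default",
            volume: str = "default") -> str:
    return (f'<prosody pitch="{pitch}" rate="{rate}" volume="{volume}">'
            f"{text}</prosody>")

# "+80Hz" is a relative pitch change, "0.9" a rate multiplier,
# and "soft" a constant volume value, all from the table above.
print(prosody("Welcome back.", pitch="+80Hz", rate="0.9", volume="soft"))
```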
### Change speaking rate
Pitch changes can be applied to standard voices at the word or sentence-level. W
| Attribute | Description | Required / Optional |
|--|--|--|
| `interpret-as` | Indicates the content type of the element's text. For a list of types, see the table below. | Required |
-| `format` | Provides additional information about the precise formatting of the element's text for content types that may have ambiguous formats. SSML defines formats for content types that use them (see table below). | Optional |
+| `format` | Provides additional information about the precise formatting of the element's text for content types that might have ambiguous formats. SSML defines formats for content types that use them (see table below). | Optional |
| `detail` | Indicates the level of detail to be spoken. For example, this attribute might request that the speech synthesis engine pronounce punctuation marks. There are no standard values defined for `detail`. | Optional |
The following are the supported content types for the `interpret-as` and `format
| `digits`, `number_digit` | | The text is spoken as a sequence of individual digits. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="number_digit">123456789</say-as>`<br /><br />As "1 2 3 4 5 6 7 8 9." |
| `fraction` | | The text is spoken as a fractional number. The speech synthesis engine pronounces:<br /><br /> `<say-as interpret-as="fraction">3/8</say-as> of an inch`<br /><br />As "three eighths of an inch." |
| `ordinal` | | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option". |
-| `telephone` | | The text is spoken as a telephone number. The `format` attribute may contain digits that represent a country code. For example, "1" for the United States or "39" for Italy. The speech synthesis engine may use this information to guide its pronunciation of a phone number. The phone number may also include the country code, and if so, takes precedence over the country code in the `format`. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone" format="1">(888) 555-1212</say-as>`<br /><br />As "My number is area code eight eight eight five five five one two one two." |
+| `telephone` | | The text is spoken as a telephone number. The `format` attribute can contain digits that represent a country code. For example, "1" for the United States or "39" for Italy. The speech synthesis engine can use this information to guide its pronunciation of a phone number. The phone number might also include the country code, and if so, takes precedence over the country code in the `format`. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone" format="1">(888) 555-1212</say-as>`<br /><br />As "My number is area code eight eight eight five five five one two one two." |
| `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. The following are valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." |

**Usage**
-The `say-as` element may contain only text.
+The `say-as` element can only contain text.
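A hedged sketch of building `say-as` elements for the content types listed in the table above. The `say_as` helper is an invented illustration (not part of any SDK); only `interpret-as` and `format` values from the table are used.

```python
# Sketch: build say-as elements. The say-as element contains only text,
# per the usage rule above. Illustrative helper, not an SDK function.
def say_as(text, interpret_as, fmt=""):
    fmt_attr = f' format="{fmt}"' if fmt else ""
    return f'<say-as interpret-as="{interpret_as}"{fmt_attr}>{text}</say-as>'

print(say_as("3rd", "ordinal"))
print(say_as("4:00am", "time", fmt="hms12"))
```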
**Example**
The speech synthesis engine speaks the following example as "Your first request
## Add recorded audio
-`audio` is an optional element that allows you to insert MP3 audio into an SSML document. The body of the audio element may contain plain text or SSML markup that's spoken if the audio file is unavailable or unplayable. Additionally, the `audio` element can contain text and the following elements: `audio`, `break`, `p`, `s`, `phoneme`, `prosody`, `say-as`, and `sub`.
+`audio` is an optional element that allows you to insert MP3 audio into an SSML document. The body of the audio element can contain plain text or SSML markup that's spoken if the audio file is unavailable or unplayable. Additionally, the `audio` element can contain text and the following elements: `audio`, `break`, `p`, `s`, `phoneme`, `prosody`, `say-as`, and `sub`.
Any audio included in the SSML document must meet these requirements:
You can subscribe to the `BookmarkReached` event in Speech SDK to get the bookma
# [C#](#tab/csharp)
-For more information, see <a href="https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkreached" target="_blank"> `BookmarkReached` </a>.
+For more information, see <a href="/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkreached" target="_blank"> `BookmarkReached` </a>.
```csharp
synthesizer.BookmarkReached += (s, e) =>
```
Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
# [C++](#tab/cpp)
-For more information, see <a href="https://docs.microsoft.com/cpp/cognitive-services/speech/speechsynthesizer#bookmarkreached" target="_blank"> `BookmarkReached` </a>.
+For more information, see <a href="/cpp/cognitive-services/speech/speechsynthesizer#bookmarkreached" target="_blank"> `BookmarkReached` </a>.
```cpp
synthesizer->BookmarkReached += [](const SpeechSynthesisBookmarkEventArgs& e)
```
Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
# [Java](#tab/java)
-For more information, see <a href="https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkReached#com_microsoft_cognitiveservices_speech_SpeechSynthesizer_BookmarkReached" target="_blank"> `BookmarkReached` </a>.
+For more information, see <a href="/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkReached#com_microsoft_cognitiveservices_speech_SpeechSynthesizer_BookmarkReached" target="_blank"> `BookmarkReached` </a>.
```java
synthesizer.BookmarkReached.addEventListener((o, e) -> {
```
Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
# [Python](#tab/python)
-For more information, see <a href="https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#bookmark-reached" target="_blank"> `bookmark_reached` </a>.
+For more information, see <a href="/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#bookmark-reached" target="_blank"> `bookmark_reached` </a>.
```python
# The unit of evt.audio_offset is tick (1 tick = 100 nanoseconds), divide it by 10,000 to convert to milliseconds.
```
Bookmark reached, audio offset: 1462.5ms, bookmark text: flower_2.
# [JavaScript](#tab/javascript)
-For more information, see <a href="https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesizer#bookmarkReached" target="_blank"> `bookmarkReached`</a>.
+For more information, see <a href="/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesizer#bookmarkReached" target="_blank"> `bookmarkReached`</a>.
```javascript
synthesizer.bookmarkReached = function (s, e) {
```
For the example SSML above, the `bookmarkReached` event will be triggered twice,
# [Objective-C](#tab/objectivec)
-For more information, see <a href="https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechsynthesizer#addbookmarkreachedeventhandler" target="_blank"> `addBookmarkReachedEventHandler` </a>.
+For more information, see <a href="/objectivec/cognitive-services/speech/spxspeechsynthesizer#addbookmarkreachedeventhandler" target="_blank"> `addBookmarkReachedEventHandler` </a>.
```objectivec
[synthesizer addBookmarkReachedEventHandler: ^ (SPXSpeechSynthesizer *synthesizer, SPXSpeechSynthesisBookmarkEventArgs *eventArgs) {
```
For more information, see <a href="/objectivec/cognitive-services/speech/spxspee
## Next steps
-* [Language support: voices, locales, languages](language-support.md)
+* [Language support: voices, locales, languages](language-support.md)
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-basics.md
Previously updated : 01/13/2021 Last updated : 04/28/2021
cognitive-services Sentence Alignment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/sentence-alignment.md
For a training to succeed, the table below shows the minimum number of sentences
> - Training will not start and will fail if the 10,000 minimum sentence count for Training is not met.
> - Tuning and Testing are optional. If you do not provide them, the system will remove an appropriate percentage from Training to use for validation and testing.
> - You can train a model using only dictionary data. Please refer to [What is Dictionary](./what-is-dictionary.md).
-> - If your dictionary contains more than 250,000 sentences, **[Document Translator](https://docs.microsoft.com/azure/cognitive-services/translator/document-translation/overview)** is likely a better choice.
+> - If your dictionary contains more than 250,000 sentences, **[Document Translator](../document-translation/overview.md)** is likely a better choice.
## Next steps
-- Learn how to use a [dictionary](what-is-dictionary.md) in Custom Translator.
+- Learn how to use a [dictionary](what-is-dictionary.md) in Custom Translator.
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
The following headers are included with each Document Translator API request:
> [!IMPORTANT]
>
-> For the code samples below, you'll hard-code your key and endpoint where indicated; remember to remove the key from your code when you're done, and never post it publicly. See [Azure Cognitive Services security](/azure/cognitive-services/cognitive-services-security?tabs=command-line%2Ccsharp) for ways to securely store and access your credentials.
+> For the code samples below, you'll hard-code your key and endpoint where indicated; remember to remove the key from your code when you're done, and never post it publicly. See [Azure Cognitive Services security](../../cognitive-services-security.md?tabs=command-line%2ccsharp) for ways to securely store and access your credentials.
>
> You may need to update the following fields, depending upon the operation:
>
The table below lists the limits for data that you send to Document Translation
> [!div class="nextstepaction"]
> [Create a customized language system using Custom Translator](../custom-translator/overview.md)
>
->
+>
cognitive-services Azure Container Instance Recipe https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/azure-container-instance-recipe.md
description: Learn how to deploy Cognitive Services Containers on Azure Containe
-+ Last updated 12/18/2020
cognitive-services Create Account Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/create-account-resource-manager-template.md
Previously updated : 3/22/2021 Last updated : 04/28/2021
Create a resource using an Azure Resource Manager template (ARM template). This
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy your cognitive service to Azure](../media/template-deployments/deploy-to-azure.svg "Deploy your cognitive service to Azure")](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-cognitive-services-universalkey%2Fazuredeploy.json)
+[![Deploy your cognitive service to Azure](../media/template-deployments/deploy-to-azure.svg "Deploy your cognitive service to Azure")](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.cognitiveservices%2Fcognitive-services-universalkey%2Fazuredeploy.json)
## Prerequisites
If your environment meets the prerequisites and you're familiar with using ARM t
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-cognitive-services-universalkey/).

One Azure resource is defined in the template:

* [Microsoft.CognitiveServices/accounts](/azure/templates/microsoft.cognitiveservices/accounts): creates a Cognitive Services resource.
One Azure resource is defined in the template:
1. Click the **Deploy to Azure** button.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-cognitive-services-universalkey%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.cognitiveservices%2Fcognitive-services-universalkey%2Fazuredeploy.json)
2. Enter the following values.
Run the following script using the Azure Command Line Interface (CLI) [On your l
```azurecli-interactive
read -p "Enter a name for your new resource group:" resourceGroupName &&
read -p "Enter the location (i.e. centralus):" location &&
-templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-cognitive-services-universalkey/azuredeploy.json" &&
+templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.cognitiveservices/cognitive-services-universalkey/azuredeploy.json" &&
az group create --name $resourceGroupName --location "$location" &&
az deployment group create --resource-group $resourceGroupName --template-uri $templateUri &&
echo "Press [ENTER] to continue ..." &&
read
```
[!INCLUDE [Register Azure resource for subscription](./includes/register-resource-subscription.md)]

## Review deployed resources

# [Portal](#tab/portal)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/whats-new.md
description: Understand the latest changes to the Form Recognizer API. - Previously updated : 04/14/2021 Last updated : 04/28/2021
The Form Recognizer service is updated on an ongoing basis. Use this article to
## April 2021 <!-- markdownlint-disable MD029 -->
-### SDK updates (API version 2.1-preview.3)
+### SDK preview updates for API version 2.1-preview.3
+
+### [**C#**](#tab/csharp)
-### **C# version 3.1.0-beta.4**
+NuGet package version 3.1.0-beta.4
* **New methods to analyze data from identity documents**:
The Form Recognizer service is updated on an ongoing basis. Use this article to
**[RecognizeInvoicesOptions](/dotnet/api/azure.ai.formrecognizer.recognizeinvoicesoptions?view=azure-dotnet-preview&preserve-view=true)**</br> **[RecognizeReceiptsOptions](/dotnet/api/azure.ai.formrecognizer.recognizereceiptsoptions?view=azure-dotnet-preview&preserve-view=true)**</br>
- The `Pages` property allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the p age numbers and ranges separated by commas: `2, 5-7`.
+ The `Pages` property allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
* **New property `ReadingOrder` supported for the following class**:
The Form Recognizer service is updated on an ongoing basis. Use this article to
* **[StartRecognizeCustomForms](/dotnet/api/azure.ai.formrecognizer.formrecognizerclient.startrecognizecustomforms?view=azure-dotnet-preview&preserve-view=true#Azure_AI_FormRecognizer_FormRecognizerClient_StartRecognizeCustomForms_System_String_System_IO_Stream_Azure_AI_FormRecognizer_RecognizeCustomFormsOptions_System_Threading_CancellationToken_)** method now throws a `RequestFailedException()` when an invalid file is passed.
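The `Pages` selection string described above (for example, `2, 5-7`) can be illustrated by expanding it into individual page numbers. This is a hedged sketch; the `expand_pages` helper is invented for the example and is not part of the SDK, which accepts the string directly.

```python
# Illustrative sketch: expand a page-selection string in the format the
# Pages property accepts (e.g. "2, 5-7") into individual page numbers.
# Not an SDK function; the SDK consumes the string as-is.
def expand_pages(spec):
    pages = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            start, end = (int(n) for n in part.split("-"))
            pages.extend(range(start, end + 1))
        else:
            pages.append(int(part))
    return pages

print(expand_pages("2, 5-7"))  # [2, 5, 6, 7]
```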
-### **Java version 3.1.0-beta.3**
+### [**Java**](#tab/java)
+
+Maven artifact package dependency version 3.1.0-beta.3
* **New methods to analyze data from identity documents**:
The Form Recognizer service is updated on an ongoing basis. Use this article to
* **New keyword argument `ReadingOrder` supported for the following methods**:
-* **[beginRecognizeContent](https://docs.microsoft.com/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontent?view=azure-java-preview&preserve-view=true)**</br>
+* **[beginRecognizeContent](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontent?preserve-view=true&view=azure-java-preview)**</br>
**[beginRecognizeContentFromUrl](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontentfromurl?view=azure-java-preview&preserve-view=true)**</br>
  The `ReadingOrder` keyword argument is an optional parameter that allows you to specify which reading order algorithm, `basic` or `natural`, should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
* The client defaults to the latest supported service version, which currently is **2.1-preview.3**.
-### **JavaScript version 3.1.0-beta.3**
+### [**JavaScript**](#tab/javascript)
+
+npm package version 3.1.0-beta.3
* **New methods to analyze data from identity documents**:
The Form Recognizer service is updated on an ongoing basis. Use this article to
  `gender`: possible values are `M`, `F`, or `X`.</br>
  `country`: possible values follow the [ISO alpha-3](https://www.iso.org/obp/ui/#search) three-letter country code standard.
-* **New option `pages` supported by all form recognition methods (custom forms and all prebuilt models). The argument allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
+* **New option `pages`** supported by all form recognition methods (custom forms and all prebuilt models). This argument allows you to select individual pages or a range of pages in multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7), enter the page numbers and ranges separated by commas: `2, 5-7`.
* Added support for a **[ReadingOrder](/javascript/api/@azure/ai-form-recognizer/readingorder?view=azure-node-preview&preserve-view=true)** type to the content recognition methods. This option enables you to control the algorithm that the service uses to determine how recognized lines of text should be ordered. You can specify which reading order algorithm (`basic` or `natural`) should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
-* Split **[FormField](/javascript/api/@azure/ai-form-recognizer/formfield?view=azure-node-preview&preserve-view=true)** type into several different interfaces. This should not cause any API compatibility issues except in certain edge cases (undefined valueType).
+* Split **[FormField](/javascript/api/@azure/ai-form-recognizer/formfield?view=azure-node-preview&preserve-view=true)** type into several different interfaces. This update should not cause any API compatibility issues except in certain edge cases (undefined valueType).
* Migrated to the **2.1-preview.3** Form Recognizer service endpoint for all REST API calls.
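The comma-separated `pages` syntax described above (`3` for a single page, `2, 5-7` for pages plus a range) can be made concrete with a small parser. This is a hypothetical client-side helper for illustration only; `parse_pages` is not part of any Azure SDK.

```python
def parse_pages(spec: str) -> list[int]:
    """Expand a pages spec like "2, 5-7" into an explicit page list."""
    pages: list[int] = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            # A range such as "5-7" covers both endpoints.
            start, end = part.split("-")
            pages.extend(range(int(start), int(end) + 1))
        else:
            pages.append(int(part))
    return pages

# "2, 5-7" means page 2 plus pages 5 through 7.
print(parse_pages("2, 5-7"))  # [2, 5, 6, 7]
```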
-### **Python version 3.1.0b4**
+### [**Python**](#tab/python)
+
+pip package version 3.1.0b4
* **New methods to analyze data from identity documents**:
The `readingOrder` keyword argument is an optional parameter that allows you to specify which reading order algorithm (`basic` or `natural`) should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
+
+
## March 2021

**Form Recognizer v2.1 public preview 3 is now available.** v2.1-preview.3 has been released, including the following features:
**Form Recognizer v2.1 public preview 2 is now available.** v2.1-preview.2 has been released, including the following features:

-- **New prebuilt invoice model** - The new prebuilt Invoice model enables customers to take invoices in various formats and return structured data to automate the invoice processing. It combines our powerful Optical Character Recognition (OCR) capabilities with invoice understanding deep learning models to extract key information from invoices in English. It extracts key text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, amount due, tax amount, ship to, and bill to.
+* **New prebuilt invoice model** - The new prebuilt Invoice model enables customers to take invoices in various formats and return structured data to automate invoice processing. It combines our powerful Optical Character Recognition (OCR) capabilities with invoice understanding deep learning models to extract key information from invoices in English. It extracts key text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, amount due, tax amount, ship to, and bill to.
> [Learn more about the prebuilt invoice model](concept-invoices.md)

:::image type="content" source="./media/invoice-example.jpg" alt-text="invoice example" lightbox="./media/invoice-example.jpg":::

-- **Enhanced table extraction** - Form Recognizer now provides enhanced table extraction, which combines our powerful Optical Character Recognition (OCR) capabilities with a deep learning table extraction model. Form Recognizer can extract data from tables, including complex tables with merged columns, rows, no borders and more.
+* **Enhanced table extraction** - Form Recognizer now provides enhanced table extraction, which combines our powerful Optical Character Recognition (OCR) capabilities with a deep learning table extraction model. Form Recognizer can extract data from tables, including complex tables with merged columns or rows, no borders, and more.
:::image type="content" source="./media/tables-example.jpg" alt-text="tables example" lightbox="./media/tables-example.jpg":::

- > [Learn more about Layout extraction](concept-layout.md)
-- **Client library update** - The latest versions of the [client libraries](quickstarts/client-library.md) for .NET, Python, Java, and JavaScript support the Form Recognizer 2.1 API.
-- **New language supported: Japanese** - The following new languages are now supported: for `AnalyzeLayout` and `AnalyzeCustomForm`: Japanese (`ja`). [Language support](language-support.md)
-- **Text line style indication (handwritten/other) (Latin languages only)** - Form Recognizer now outputs an `appearance` object classifying whether each text line is handwritten style or not, along with a confidence score. This feature is supported only for Latin languages.
-- **Quality improvements** - Extraction improvements including single digit extraction improvements.
-- **New try-it-out feature in the Form Recognizer Sample and Labeling Tool** - Ability to try out prebuilt Invoice, Receipt, and Business Card models and the Layout API using the Form Recognizer Sample Labeling tool. See how your data will be extracted without writing any code.
+* **Client library update** - The latest versions of the [client libraries](quickstarts/client-library.md) for .NET, Python, Java, and JavaScript support the Form Recognizer 2.1 API.
+* **New language supported: Japanese** - The following language is now supported for `AnalyzeLayout` and `AnalyzeCustomForm`: Japanese (`ja`). See [Language support](language-support.md).
+* **Text line style indication (handwritten/other) (Latin languages only)** - Form Recognizer now outputs an `appearance` object classifying whether each text line is handwritten style or not, along with a confidence score. This feature is supported only for Latin languages.
+* **Quality improvements** - Extraction improvements including single digit extraction improvements.
+* **New try-it-out feature in the Form Recognizer Sample and Labeling Tool** - Ability to try out prebuilt Invoice, Receipt, and Business Card models and the Layout API using the Form Recognizer Sample Labeling tool. See how your data will be extracted without writing any code.
> [Try out the Form Recognizer Sample Tool](https://fott-preview.azurewebsites.net/)

![FOTT example](./media/ui-preview.jpg)

-- **Feedback Loop** - When Analyzing files via the sample labeling tool you can now also add it to the training set and adjust the labels if necessary and train to improve the model.
-- **Auto Label Documents** - Automatically labels additional documents based on previous labeled documents in the project.
+* **Feedback Loop** - When analyzing files via the sample labeling tool, you can now also add them to the training set, adjust the labels if necessary, and train to improve the model.
+* **Auto Label Documents** - Automatically labels additional documents based on previously labeled documents in the project.
## August 2020
**Form Recognizer v2.1 public preview is now available.** V2.1-preview.1 has been released, including the following features:

-
-- **REST API reference is available** - View the [v2.1-preview.1 reference](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-1/operations/AnalyzeBusinessCardAsync)
-- **New languages supported In addition to English**, the following [languages](language-support.md) are now supported: for `Layout` and `Train Custom Model`: English (`en`), Chinese (Simplified) (`zh-Hans`), Dutch (`nl`), French (`fr`), German (`de`), Italian (`it`), Portuguese (`pt`) and Spanish (`es`).
-- **Checkbox / Selection Mark detection** - Form Recognizer supports detection and extraction of selection marks such as check boxes and radio buttons. Selection Marks are extracted in `Layout` and you can now also label and train in `Train Custom Model` - _Train with Labels_ to extract key value pairs for selection marks.
-- **Model Compose** - allows multiple models to be composed and called with a single model ID. When a you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_.
-- **Model name** - add a friendly name to your custom models for easier management and tracking.
-- **[New pre-built model for Business Cards](concept-business-cards.md)** for extracting common fields in English, language business cards.
-- **[New locales for pre-built Receipts](concept-receipts.md)** in addition to EN-US, support is now available for EN-AU, EN-CA, EN-GB, EN-IN
-- **Quality improvements** for `Layout`, `Train Custom Model` - _Train without Labels_ and _Train with Labels_.
+* **REST API reference is available** - View the [v2.1-preview.1 reference](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-1/operations/AnalyzeBusinessCardAsync)
+* **New languages supported** - In addition to English, the following [languages](language-support.md) are now supported for `Layout` and `Train Custom Model`: English (`en`), Chinese (Simplified) (`zh-Hans`), Dutch (`nl`), French (`fr`), German (`de`), Italian (`it`), Portuguese (`pt`), and Spanish (`es`).
+* **Checkbox / Selection Mark detection** - Form Recognizer supports detection and extraction of selection marks such as check boxes and radio buttons. Selection marks are extracted in `Layout`, and you can now also label and train in `Train Custom Model` - _Train with Labels_ to extract key-value pairs for selection marks.
+* **Model Compose** - Allows multiple models to be composed and called with a single model ID. When you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_.
+* **Model name** - Add a friendly name to your custom models for easier management and tracking.
+* **[New pre-built model for Business Cards](concept-business-cards.md)** for extracting common fields in English-language business cards.
+* **[New locales for pre-built Receipts](concept-receipts.md)** - In addition to EN-US, support is now available for EN-AU, EN-CA, EN-GB, and EN-IN.
+* **Quality improvements** for `Layout`, `Train Custom Model` - _Train without Labels_ and _Train with Labels_.
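The Model Compose behavior in the list above (one composed model ID, with a classification step routing each document to the right member model) can be sketched conceptually. This is purely an illustration of the routing idea: the real classification runs inside the Form Recognizer service, and the keyword scoring below is an invented stand-in for its trained classifier.

```python
def route_to_model(document_text: str, composed_models: dict) -> str:
    """Pick the member model whose cue words best match the document (toy classifier)."""
    best_model, best_score = "", -1
    for model_id, cues in composed_models.items():
        # Score each member model by how many of its cue words appear.
        score = sum(1 for cue in cues if cue in document_text.lower())
        if score > best_score:
            best_model, best_score = model_id, score
    return best_model

# Hypothetical member models behind one composed model ID.
composed = {
    "invoice-model": {"invoice", "amount due"},
    "receipt-model": {"receipt", "subtotal"},
}
print(route_to_model("Invoice #42, amount due: $10.00", composed))  # invoice-model
```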
**v2.0** includes the following update:

-- The [client libraries](quickstarts/client-library.md) for NET, Python, Java, and JavaScript have entered General Availability.
+* The [client libraries](quickstarts/client-library.md) for .NET, Python, Java, and JavaScript have entered General Availability.
**New samples** are available on GitHub.

-- The [Knowledge Extraction Recipes - Forms Playbook](https://github.com/microsoft/knowledge-extraction-recipes-forms) collects best practices from real Form Recognizer customer engagements and provides usable code samples, checklists, and sample pipelines used in developing these projects.
-- The [sample labeling tool](https://github.com/microsoft/OCR-Form-Tools) has been updated to support the new v2.1 functionality. See this [quickstart](quickstarts/label-tool.md) for getting started with the tool.
-- The [Intelligent Kiosk](https://github.com/microsoft/Cognitive-Samples-IntelligentKiosk/blob/master/Documentation/FormRecognizer.md) Form Recognizer sample shows how to integrate `Analyze Receipt` and `Train Custom Model` - _Train without Labels_.
+* The [Knowledge Extraction Recipes - Forms Playbook](https://github.com/microsoft/knowledge-extraction-recipes-forms) collects best practices from real Form Recognizer customer engagements and provides usable code samples, checklists, and sample pipelines used in developing these projects.
+* The [sample labeling tool](https://github.com/microsoft/OCR-Form-Tools) has been updated to support the new v2.1 functionality. See this [quickstart](quickstarts/label-tool.md) for getting started with the tool.
+* The [Intelligent Kiosk](https://github.com/microsoft/Cognitive-Samples-IntelligentKiosk/blob/master/Documentation/FormRecognizer.md) Form Recognizer sample shows how to integrate `Analyze Receipt` and `Train Custom Model` - _Train without Labels_.
## July 2020
See the [Sample labeling tool](./quickstarts/label-tool.md#specify-tag-value-types) guide to learn how to use this feature.

- * **Table visualization** The sample labeling tool now displays tables that were recognized in the document. This feature lets you view the tables that have been recognized and extracted from the document, prior to labeling and analyzing. This feature can be toggled on/off using the layers option. The following image is an example of how tables are recognized and extracted:
Complete a [quickstart](quickstarts/client-library.md) to get started writing a
## See also
-* [What is Form Recognizer?](./overview.md)
+* [What is Form Recognizer?](./overview.md)
cognitive-services Text Analytics How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers.md
description: Use the Docker containers for the Text Analytics API to perform nat
-+
You must meet the following prerequisites before using Text Analytics containers
[!INCLUDE [Gathering required parameters](../../containers/includes/container-gathering-required-parameters.md)]
-If you're using the Text Analytics for health container, the [responsible AI](https://docs.microsoft.com/legal/cognitive-services/text-analytics/transparency-note-health) (RAI) acknowledgment must also be present with a value of `accept`.
+If you're using the Text Analytics for health container, the [responsible AI](/legal/cognitive-services/text-analytics/transparency-note-health) (RAI) acknowledgment must also be present with a value of `accept`.
## The host computer
In this article, you learned concepts and workflow for downloading, installing,
## Next steps

* Review [Configure containers](../text-analytics-resource-container-config.md) for configuration settings
-* Refer to [Frequently asked questions (FAQ)](../text-analytics-resource-faq.md) to resolve issues related to functionality.
+* Refer to [Frequently asked questions (FAQ)](../text-analytics-resource-faq.md) to resolve issues related to functionality.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/language-support.md
| Italian | `it` | ✓ | 2019-10-01 | |
| Japanese | `ja` | ✓ | 2019-10-01 | |
| Korean | `ko` | ✓ | 2019-10-01 | |
-| Norwegian (Bokmål) | `no` | ✓ | 2020-07-01 | |
+| Norwegian (Bokmål) | `no` | ✓ | 2020-04-01 | |
| Portuguese (Brazil) | `pt-BR` | ✓ | 2020-04-01 | |
| Portuguese (Portugal) | `pt-PT` | ✓ | 2019-10-01 | `pt` also accepted |
| Spanish | `es` | ✓ | 2019-10-01 | |
-| Turkish | `tr` | ✓ | 2020-07-01 | |
+| Turkish | `tr` | ✓ | 2020-04-01 | |
### Opinion mining (v3.1-preview only)
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/what-are-cognitive-services.md
The following sections in this article provide a list of services that are part
|[Custom Vision Service](./custom-vision-service/index.yml "Custom Vision Service")|The Custom Vision Service lets you build, deploy, and improve your own image classifiers. An image classifier is an AI service that applies labels to images, based on their visual characteristics. |
|[Face](./face/index.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition. See [Face quickstart](./face/quickstarts/client-libraries.md) to get started with the service.|
|[Form Recognizer](./form-recognizer/index.yml "Form Recognizer")|Form Recognizer identifies and extracts key-value pairs and table data from form documents; then outputs structured data including the relationships in the original file. See [Form Recognizer quickstart](./form-recognizer/quickstarts/client-library.md) to get started.|
-|[Video Indexer](../media-services/video-indexer/video-indexer-overview.md "Video Indexer")|Video Indexer enables you to extract insights from your video. See [Video Indexer quickstart](/azure/media-services/video-indexer/video-indexer-get-started) to get started.|
+|[Video Indexer](../media-services/video-indexer/video-indexer-overview.md "Video Indexer")|Video Indexer enables you to extract insights from your video. See [Video Indexer quickstart](../media-services/video-indexer/video-indexer-get-started.md) to get started.|
## Speech APIs
Cognitive Services provides several support options to help you move forward wit
* [Create a Cognitive Services account](cognitive-services-apis-create-account.md "Create a Cognitive Services account")
* [What's new in Cognitive Services docs](whats-new-docs.md "What's new in Cognitive Services docs")
-* [Plan and manage costs for Cognitive Services](plan-manage-costs.md)
+* [Plan and manage costs for Cognitive Services](plan-manage-costs.md)
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
Some SDKs (like the JavaScript Chat SDK) support real-time notifications. This f
- `participantsAdded` - when a user is added as a chat thread participant.
- `participantsRemoved` - when an existing participant is removed from the chat thread.
-Real-time notifications can be used to provide a real-time chat experience for your users. To send push notifications for messages missed by your users while they were away, Communication Services integrates with Azure Event Grid to publish chat related events (post operation) which can be plugged into your custom app notification service. For more details, see [Server Events](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services?toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fcommunication-services%2Ftoc.json&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json).
+Real-time notifications can be used to provide a real-time chat experience for your users. To send push notifications for messages missed by your users while they were away, Communication Services integrates with Azure Event Grid to publish chat related events (post operation) which can be plugged into your custom app notification service. For more details, see [Server Events](../../../event-grid/event-schema-communication-services.md?bc=https%3a%2f%2fdocs.microsoft.com%2fen-us%2fazure%2fbread%2ftoc.json&toc=https%3a%2f%2fdocs.microsoft.com%2fen-us%2fazure%2fcommunication-services%2ftoc.json).
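A subscriber for the Event Grid chat events mentioned above might look like the following sketch. The envelope fields (`eventType`, `data`) follow the Event Grid event schema, but the payload fields shown (`senderDisplayName`, `messageBody`) are a simplified assumption for illustration; check the event schema reference for the exact shape.

```python
import json

def handle_event(raw: str) -> str:
    """Dispatch one Event Grid event; push a notification for missed chat messages."""
    event = json.loads(raw)
    if event["eventType"] == "Microsoft.Communication.ChatMessageReceived":
        data = event["data"]
        # Hand off to your app's push-notification service here.
        return f"notify: {data['senderDisplayName']}: {data['messageBody']}"
    return "ignored"

# Simplified sample payload (assumed shape, for illustration only).
sample = json.dumps({
    "eventType": "Microsoft.Communication.ChatMessageReceived",
    "data": {"senderDisplayName": "Alice", "messageBody": "hi"},
})
print(handle_event(sample))  # notify: Alice: hi
```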
## Build intelligent, AI powered chat experiences
This way, the message history will contain both original and translated messages
> [Get started with chat](../../quickstarts/chat/get-started.md) The following documents may be interesting to you: -- Familiarize yourself with the [Chat SDK](sdk-features.md)
+- Familiarize yourself with the [Chat SDK](sdk-features.md)
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
This article provides information about limitations and known issues related to the Azure Communication Services Calling SDKs.

> [!IMPORTANT]
-> There are multiple factors that can affect the quality of your calling experience. Refer to the **[network requirements](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/network-requirements)** documentation to learn more about Communication Services network configuration and testing best practices.
+> There are multiple factors that can affect the quality of your calling experience. Refer to the **[network requirements](./voice-video-calling/network-requirements.md)** documentation to learn more about Communication Services network configuration and testing best practices.
## JavaScript SDK
If the user was sending video before refreshing, the `videoStreams` collection w
### It's not possible to render multiple previews from multiple devices on web
-This is a known limitation. For more information, refer to the [calling SDK overview](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features).
+This is a known limitation. For more information, refer to the [calling SDK overview](./voice-video-calling/calling-sdk-features.md).
### Enumerating devices isn't possible in Safari when the application runs on iOS or iPadOS
If access to devices is granted, after some time, device permissions are reset.
<br/>Operating System: iOS

### Sometimes it takes a long time to render remote participant videos
-During an ongoing group call, _User A_ sends video and then _User B_ joins the call. Sometimes, User B doesn't see video from User A, or User A's video begins rendering after a long delay. This issue could be caused by a network environment that requires further configuration. Refer to the [network requirements](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/network-requirements) documentation for network configuration guidance.
+During an ongoing group call, _User A_ sends video and then _User B_ joins the call. Sometimes, User B doesn't see video from User A, or User A's video begins rendering after a long delay. This issue could be caused by a network environment that requires further configuration. Refer to the [network requirements](./voice-video-calling/network-requirements.md) documentation for network configuration guidance.
communication-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/notifications.md
Azure Communication Services integrates with [Azure Event Grid](https://azure.mi
:::image type="content" source="./media/notifications/acs-events-int.png" alt-text="Diagram showing how Communication Services integrates with Event Grid.":::
-Learn more about [event handling in Azure Communication Services](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services).
+Learn more about [event handling in Azure Communication Services](../../event-grid/event-schema-communication-services.md).
## Deliver push notifications via Azure Notification Hubs
In case that you regenerated the connection string of your linked Azure Notifica
## Next steps

* For an introduction to Azure Event Grid, see [What is Event Grid?](../../event-grid/overview.md)
-* To learn more on the Azure Notification Hub concepts, see [Azure Notification Hubs documentation](../../notification-hubs/index.yml)
+* To learn more on the Azure Notification Hub concepts, see [Azure Notification Hubs documentation](../../notification-hubs/index.yml)
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/reference.md
# Reference documentation overview

-
The following table details the available Communication Services packages along with corresponding reference documentation:

<!--note that this table also exists here and should be synced: https://github.com/Azure/Communication/blob/master/README.md -->
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/sdk-options.md
Communication Services APIs are documented alongside other Azure REST APIs in [d
| Assembly | Namespaces | Protocols | Capabilities |
| -------- | ---------- | --------- | ------------ |
-| Azure Resource Manager | Azure.ResourceManager.Communication | [REST](https://docs.microsoft.com/rest/api/communication/communicationservice)| Provision and manage Communication Services resources|
+| Azure Resource Manager | Azure.ResourceManager.Communication | [REST](/rest/api/communication/communicationservice)| Provision and manage Communication Services resources|
| Common | Azure.Communication.Common| REST | Provides base types for other SDKs |
-| Identity | Azure.Communication.Identity| [REST](https://docs.microsoft.com/rest/api/communication/communicationidentity)| Manage users, access tokens|
+| Identity | Azure.Communication.Identity| [REST](/rest/api/communication/communicationidentity)| Manage users, access tokens|
| Phone numbers _(beta)_| Azure.Communication.PhoneNumbers| [REST](/rest/api/communication/phonenumbers)| Acquire and manage phone numbers |
-| Chat | Azure.Communication.Chat| [REST](https://docs.microsoft.com/rest/api/communication/) with proprietary signaling | Add real-time text based chat to your applications |
-| SMS| Azure.Communication.SMS | [REST](https://docs.microsoft.com/rest/api/communication/sms)| Send and receive SMS messages|
+| Chat | Azure.Communication.Chat | [REST](/rest/api/communication/) with proprietary signaling | Add real-time text-based chat to your applications |
+| SMS| Azure.Communication.SMS | [REST](/rest/api/communication/sms)| Send and receive SMS messages|
| Calling | Azure.Communication.Calling | Proprietary transport | Use voice, video, screen-sharing, and other real-time data communication capabilities |

The Azure Resource Manager, Identity, and SMS SDKs are focused on service integration, and in many cases security issues arise if you integrate these functions into end-user applications. The Common and Chat SDKs are suitable for service and client applications. The Calling SDK is designed for client applications. An SDK focused on service scenarios is in development.
Publishing locations for individual SDK packages are detailed below.
## REST API Throttles
-Certain REST APIs and corresponding SDK methods have throttle limits you should be mindful of. Exceeding these throttle limits will trigger a `429 - Too Many Requests` error response. These limits can be increased through [a request to Azure Support](https://docs.microsoft.com/azure/azure-portal/supportability/how-to-create-azure-support-request).
+Certain REST APIs and corresponding SDK methods have throttle limits you should be mindful of. Exceeding these throttle limits will trigger a `429 - Too Many Requests` error response. These limits can be increased through [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md).
| API | Throttle |
| --- | -------- |
| [All Search Telephone Number Plan APIs](/rest/api/communication/phonenumbers) | 4 requests/day |
| [Purchase Telephone Number Plan](/rest/api/communication/phonenumbers/purchasephonenumbers) | 1 purchase a month |
-| [Send SMS](https://docs.microsoft.com/rest/api/communication/sms/send) | 200 requests/minute |
+| [Send SMS](/rest/api/communication/sms/send) | 200 requests/minute |
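Client code that can hit the limits in the table above should treat `429 - Too Many Requests` as retryable. A minimal sketch of exponential backoff, with a simulated transport standing in for a real SDK call:

```python
import time

def send_with_backoff(send, max_retries=4, base_delay=0.01):
    """Retry `send` while it returns HTTP 429, backing off exponentially."""
    for attempt in range(max_retries + 1):
        status = send()
        if status != 429:
            return status
        # Wait longer after each throttled attempt.
        time.sleep(base_delay * (2 ** attempt))
    return 429

# Simulated transport: throttled twice, then accepted.
responses = iter([429, 429, 200])
status = send_with_backoff(lambda: next(responses))
print(status)  # 200
```

In production you would also honor any `Retry-After` header the service returns rather than relying on a fixed schedule.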
## SDK platform support details
For more information, see the following SDK overviews:
To get started with Azure Communication

- [Create Azure Communication Resources](../quickstarts/create-communication-resource.md)
-- Generate [User Access Tokens](../quickstarts/access-tokens.md)
+- Generate [User Access Tokens](../quickstarts/access-tokens.md)
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sms-faq.md
Opt-outs for US toll-free numbers are mandated and enforced by US carriers.
## How can I receive messages using Azure Communication Services?
-Azure Communication Services customers can use Azure Event Grid to receive incoming messages. Follow this [quickstart](https://docs.microsoft.com/azure/communication-services/quickstarts/telephony-sms/handle-sms-events) to setup your event-grid to receive messages.
+Azure Communication Services customers can use Azure Event Grid to receive incoming messages. Follow this [quickstart](../../quickstarts/telephony-sms/handle-sms-events.md) to set up Event Grid to receive messages.
## Can I send/receive long messages (>2048 chars)?
In the United States, Azure Communication Services does not check for landline n
## Can I send messages to multiple recipients?
-Yes, you can make one request with multiple recipients. Follow this [quickstart](https://docs.microsoft.com/azure/communication-services/quickstarts/telephony-sms/send?pivots=programming-language-csharp) to send messages to multiple recipients.
+Yes, you can make one request with multiple recipients. Follow this [quickstart](../../quickstarts/telephony-sms/send.md?pivots=programming-language-csharp) to send messages to multiple recipients.
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/network-requirements.md
You might want to optimize further if:
| Network optimization task | Details |
| :-- | :-- |
-| Plan your network | In this documentation you can find minimal requirements to your network for calls. Refer to the [Teams example for planning your network](https://docs.microsoft.com/microsoftteams/tutorial-network-planner-example) |
+| Plan your network | In this documentation, you can find the minimum network requirements for calls. Refer to the [Teams example for planning your network](/microsoftteams/tutorial-network-planner-example) |
| External name resolution | Be sure that all computers running the Azure Communication Services SDKs can resolve external DNS queries to discover the services provided by Azure Communication Services and that your firewalls are not preventing access. Please ensure that the SDKs can resolve addresses *.skype.com, *.microsoft.com, *.azure.net, *.azureedge.net, *.office.com, *.trouter.io |
| Maintain session persistence | Make sure your firewall doesn't change the mapped Network Address Translation (NAT) addresses or ports for UDP |
-Validate NAT pool size | Validate the network address translation (NAT) pool size required for user connectivity. When multiple users and devices access Azure Communication Services using [Network Address Translation (NAT) or Port Address Translation (PAT)](https://docs.microsoft.com/office365/enterprise/nat-support-with-office-365), ensure that the devices hidden behind each publicly routable IP address do not exceed the supported number. Ensure that adequate public IP addresses are assigned to the NAT pools to prevent port exhaustion. Port exhaustion will contribute to internal users and devices being unable to connect to the Azure Communication Services |
-| Intrusion Detection and Prevention Guidance | If your environment has an [Intrusion Detection](https://docs.microsoft.com/azure/network-watcher/network-watcher-intrusion-detection-open-source-tools) or Prevention System (IDS/IPS) deployed for an extra layer of security for outbound connections, allow all Azure Communication Services URLs |
-| Configure split-tunnel VPN | We recommend that you provide an alternate path for Teams traffic that bypasses the virtual private network (VPN), commonly known as [split-tunnel VPN](https://docs.microsoft.com/windows/security/identity-protection/vpn/vpn-routing). Split tunneling means that traffic for Azure Communications Services doesn't go through the VPN but instead goes directly to Azure. Bypassing your VPN will have a positive impact on media quality, and it reduces load from the VPN devices and the organization's network. To implement a split-tunnel VPN, work with your VPN vendor. Other reasons why we recommend bypassing the VPN: <ul><li> VPNs are typically not designed or configured to support real-time media.</li><li> VPNs might also not support UDP (which is required for Azure Communication Services)</li><li>VPNs also introduce an extra layer of encryption on top of media traffic that's already encrypted.</li><li>Connectivity to Azure Communication Services might not be efficient due to hair-pinning traffic through a VPN device.</li></ul>|
-| Implement QoS | [Use Quality of Service (QoS)](https://docs.microsoft.com/microsoftteams/qos-in-teams) to configure packet prioritization. This will improve call quality and help you monitor and troubleshoot call quality. QoS should be implemented on all segments of a managed network. Even when a network has been adequately provisioned for bandwidth, QoS provides risk mitigation in the event of unanticipated network events. With QoS, voice traffic is prioritized so that these unanticipated events don't negatively affect quality. |
+Validate NAT pool size | Validate the network address translation (NAT) pool size required for user connectivity. When multiple users and devices access Azure Communication Services using [Network Address Translation (NAT) or Port Address Translation (PAT)](/office365/enterprise/nat-support-with-office-365), ensure that the devices hidden behind each publicly routable IP address do not exceed the supported number. Ensure that adequate public IP addresses are assigned to the NAT pools to prevent port exhaustion. Port exhaustion will contribute to internal users and devices being unable to connect to the Azure Communication Services |
+| Intrusion Detection and Prevention Guidance | If your environment has an [Intrusion Detection](../../../network-watcher/network-watcher-intrusion-detection-open-source-tools.md) or Prevention System (IDS/IPS) deployed for an extra layer of security for outbound connections, allow all Azure Communication Services URLs |
+| Configure split-tunnel VPN | We recommend that you provide an alternate path for Teams traffic that bypasses the virtual private network (VPN), commonly known as [split-tunnel VPN](/windows/security/identity-protection/vpn/vpn-routing). Split tunneling means that traffic for Azure Communications Services doesn't go through the VPN but instead goes directly to Azure. Bypassing your VPN will have a positive impact on media quality, and it reduces load from the VPN devices and the organization's network. To implement a split-tunnel VPN, work with your VPN vendor. Other reasons why we recommend bypassing the VPN: <ul><li> VPNs are typically not designed or configured to support real-time media.</li><li> VPNs might also not support UDP (which is required for Azure Communication Services)</li><li>VPNs also introduce an extra layer of encryption on top of media traffic that's already encrypted.</li><li>Connectivity to Azure Communication Services might not be efficient due to hair-pinning traffic through a VPN device.</li></ul>|
+| Implement QoS | [Use Quality of Service (QoS)](/microsoftteams/qos-in-teams) to configure packet prioritization. This will improve call quality and help you monitor and troubleshoot call quality. QoS should be implemented on all segments of a managed network. Even when a network has been adequately provisioned for bandwidth, QoS provides risk mitigation in the event of unanticipated network events. With QoS, voice traffic is prioritized so that these unanticipated events don't negatively affect quality. |
| Optimize WiFi | Similar to VPN, WiFi networks aren't necessarily designed or configured to support real-time media. Planning for, or optimizing, a WiFi network to support Azure Communication Services is an important consideration for a high-quality deployment. Consider these factors: <ul><li>Implement QoS or WiFi Multimedia (WMM) to ensure that media traffic is getting prioritized appropriately over your WiFi networks.</li><li>Plan and optimize the WiFi bands and access point placement. The 2.4 GHz range might provide an adequate experience depending on access point placement, but access points are often affected by other consumer devices that operate in that range. The 5 GHz range is better suited to real-time media due to its dense range, but it requires more access points to get sufficient coverage. Endpoints also need to support that range and be configured to leverage those bands accordingly.</li><li>If you're using dual-band WiFi networks, consider implementing band steering. Band steering is a technique implemented by WiFi vendors to influence dual-band clients to use the 5 GHz range.</li><li>When access points of the same channel are too close together, they can cause signal overlap and unintentionally compete, resulting in a degraded user experience. Ensure that access points that are next to each other are on channels that don't overlap.</li></ul> Each wireless vendor has its own recommendations for deploying its wireless solution. Consult your WiFi vendor for specific guidance.|
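The QoS row above recommends prioritizing voice traffic via packet marking. As a minimal sketch (not part of the Azure Communication Services SDKs), an application on a Linux/POSIX host can tag its own outbound UDP media packets with the Expedited Forwarding code point (DSCP 46), which QoS-aware network gear uses to prioritize voice:

```python
import socket

# DSCP 46 (Expedited Forwarding) occupies the top 6 bits of the IP TOS byte,
# so the TOS value to set is 46 shifted left by 2 bits.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 184

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark all packets sent from this socket with EF.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
# Read the value back to confirm the kernel accepted it (typically 184 on Linux).
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

Note that marking only helps if the network honors DSCP end to end; untrusted segments commonly re-mark traffic to best effort.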
Validate NAT pool size | Validate the network address translation (NAT) pool siz
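The NAT pool-size validation above boils down to simple arithmetic. The figures in this sketch (ports consumed per user, usable ports per public IP) are illustrative assumptions only; validate real numbers against your NAT/PAT device and Microsoft's guidance:

```python
import math

def public_ips_needed(concurrent_users: int,
                      ports_per_user: int = 4,
                      usable_ports_per_ip: int = 64000) -> int:
    """Estimate public IPs required so the NAT pool doesn't exhaust ports.

    ports_per_user and usable_ports_per_ip are hypothetical defaults,
    not official limits.
    """
    total_ports = concurrent_users * ports_per_user
    return math.ceil(total_ports / usable_ports_per_ip)

# Example: 50,000 concurrent users at 4 ports each need 200,000 ports,
# which exceeds three IPs' worth of ports, so four public IPs are required.
print(public_ips_needed(50_000))  # 4
```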
### Operating system and Browsers (for JavaScript SDKs)

Azure Communication Services voice/video SDKs support certain operating systems and browsers.
-Learn about the operating systems and browsers that the calling SDKs support in the [calling conceptual documentation](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features).
+Learn about the operating systems and browsers that the calling SDKs support in the [calling conceptual documentation](./calling-sdk-features.md).
## Next steps

The following documents may be interesting to you:
-- Learn more about [calling libraries](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features)
-- Learn about [Client-server architecture](https://docs.microsoft.com/azure/communication-services/concepts/client-and-server-architecture)
-- Learn about [Call flow topologies](https://docs.microsoft.com/azure/communication-services/concepts/call-flows)
+- Learn more about [calling libraries](./calling-sdk-features.md)
+- Learn about [Client-server architecture](../client-and-server-architecture.md)
+- Learn about [Call flow topologies](../call-flows.md)
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/create-communication-resource.md
After navigating to your Communication Services resource, select **Keys** from t
You can also access key information using Azure CLI, like your resource group or the keys for a specific resource.
-Install [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli-windows?tabs=azure-cli) and use the following command to login. You will need to provide your credentials to connect with your Azure account.
+Install [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli) and use the following command to log in. You will need to provide your credentials to connect with your Azure account.
```azurecli
az login
```
In this quickstart you learned how to:
> * Delete the resource

> [!div class="nextstepaction"]
-> [Create your first user access tokens](access-tokens.md)
+> [Create your first user access tokens](access-tokens.md)
communication-services Handle Sms Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/handle-sms-events.md
You can subscribe to specific events to tell Event Grid which of the SMS events
If you're prompted to provide a **System Topic Name**, feel free to provide a unique string. This field has no impact on your experience and is used for internal telemetry purposes.
-Check out the full list of [events supported by Azure Communication Services](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services).
+Check out the full list of [events supported by Azure Communication Services](../../../event-grid/event-schema-communication-services.md).
:::image type="content" source="./media/handle-sms-events/select-events-create-eventsub.png" alt-text="Screenshot showing the SMS Received and SMS Delivery Report Received event types being selected.":::
To view event triggers, we must generate events in the first place.
- `SMS Received` events are generated when the Communication Services phone number receives a text message. To trigger an event, just send a message from your phone to the phone number attached to your Communication Services resource.
- `SMS Delivery Report Received` events are generated when you send an SMS to a user using a Communication Services phone number. To trigger an event, you are required to enable `Delivery Report` in the options of the [sent SMS](../telephony-sms/send.md). Try sending a message to your phone with `Delivery Report`. Completing this action incurs a small cost of a few USD cents or less in your Azure account.
-Check out the full list of [events supported by Azure Communication Services](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services).
+Check out the full list of [events supported by Azure Communication Services](../../../event-grid/event-schema-communication-services.md).
### Receiving SMS events
Once you complete either action above you will notice that `SMS Received` and `S
:::image type="content" source="./media/handle-sms-events/sms-delivery-report-received.png" alt-text="Screenshot showing the Event Grid Schema for an SMS Delivery Report Event.":::
-Learn more about the [event schemas and other eventing concepts](https://docs.microsoft.com/azure/event-grid/event-schema-communication-services).
+Learn more about the [event schemas and other eventing concepts](../../../event-grid/event-schema-communication-services.md).
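As a quick illustration of consuming those schemas, the sketch below pulls the interesting fields out of an SMS event envelope. The field names (`eventType`, `data.from`, `data.to`, `data.message`) follow the published Communication Services event schema, but verify them against the schema reference before relying on them; the sample payload uses placeholder phone numbers:

```python
def summarize_sms_event(event: dict) -> dict:
    """Extract the commonly used fields from an Event Grid SMS event.

    Field names are assumptions based on the documented event schema.
    """
    data = event.get("data", {})
    return {
        "type": event.get("eventType"),
        "from": data.get("from"),
        "to": data.get("to"),
        "message": data.get("message"),
    }

sample = {
    "eventType": "Microsoft.Communication.SMSReceived",
    "data": {
        "from": "+15555550100",   # placeholder numbers, not real
        "to": "+15555550101",
        "message": "Hello",
    },
}
print(summarize_sms_event(sample)["message"])  # Hello
```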
## Clean up resources
You may also want to:
- [Learn about event handling concepts](../../../event-grid/event-schema-communication-services.md)
+ - [Learn about Event Grid](../../../event-grid/overview.md)
communication-services Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/logic-app.md
# Quickstart: Send SMS messages in Azure Logic Apps with Azure Communication Services
-By using the [Azure Communication Services SMS](../../overview.md) connector and [Azure Logic Apps](../../../logic-apps/logic-apps-overview.md), you can create automated workflows, or *logic apps*, that can send SMS messages. This quickstart shows how to automatically send text messages in response to a trigger event, which is the first step in a logic app workflow. A trigger event can be an incoming email message, a recurrence schedule, an [Azure Event Grid](../../../event-grid/overview.md) resource event, or any other [trigger that's supported by Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
+By using the [Azure Communication Services SMS](../../overview.md) connector and [Azure Logic Apps](../../../logic-apps/logic-apps-overview.md), you can create automated workflows that can send SMS messages. This quickstart shows how to automatically send text messages in response to a trigger event, which is the first step in a logic app workflow. A trigger event can be an incoming email message, a recurrence schedule, an [Azure Event Grid](../../../event-grid/overview.md) resource event, or any other [trigger that's supported by Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
:::image type="content" source="./media/logic-app/azure-communication-services-connector.png" alt-text="Screenshot that shows the Azure portal, which is open to the Logic App Designer, and shows an example logic app that uses the Send SMS action for the Azure Communication Services connector.":::
To add the **Send SMS** action as a new step in your workflow by using the Azure
1. When you're done, on the designer toolbar, select **Save**.
-Next, run your logic app for testing.
+Next, run your logic app workflow for testing.
## Test your logic app
-To manually start your logic app, on the designer toolbar, select **Run**. Or, you can wait for your logic app to trigger. In both cases, the logic app should send an SMS message to your specified destination phone number. For more information about running your logic app, review [how to run your logic app](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#run-your-logic-app)
+To manually start your workflow, on the designer toolbar, select **Run**. Or, you can wait for the trigger to fire. In both cases, the workflow should send an SMS message to your specified destination phone number. For more information, review [how to run your workflow](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#run-workflow).
## Clean up resources
communication-services Download Recording File Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/download-recording-file-sample.md
public async Task<ActionResult> PostAsync([FromBody] object request)
```
-The above code depends on the `Microsoft.Azure.EventGrid` NuGet package. To learn more about Event Grid endpoint validation, visit the [endpoint validation documentation](https://docs.microsoft.com/azure/event-grid/receive-events#endpoint-validation)
+The above code depends on the `Microsoft.Azure.EventGrid` NuGet package. To learn more about Event Grid endpoint validation, visit the [endpoint validation documentation](../../../event-grid/receive-events.md#endpoint-validation)
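The same validation handshake can be sketched without the NuGet package: Event Grid POSTs a `Microsoft.EventGrid.SubscriptionValidationEvent` containing a `validationCode`, and the webhook must echo it back as `validationResponse`. A minimal, framework-free Python version of that logic:

```python
import json

SUBSCRIPTION_VALIDATION = "Microsoft.EventGrid.SubscriptionValidationEvent"

def handle_event_grid_post(body: str):
    """Return the validation response if this POST is the Event Grid
    subscription-validation handshake; otherwise return None so the
    caller can process the events normally."""
    events = json.loads(body)
    for event in events:
        if event.get("eventType") == SUBSCRIPTION_VALIDATION:
            code = event["data"]["validationCode"]
            return {"validationResponse": code}
    return None

# Simulated handshake request body, as Event Grid would send it.
body = json.dumps([{
    "eventType": SUBSCRIPTION_VALIDATION,
    "data": {"validationCode": "test-code-123"},
}])
print(handle_event_grid_post(body))  # {'validationResponse': 'test-code-123'}
```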
We'll then subscribe this webhook to the `recording` event:
If you want to clean up and remove a Communication Services subscription, you ca
## Next steps

For more information, see the following articles:
-- Check out our [web calling sample](https://docs.microsoft.com/azure/communication-services/samples/web-calling-sample)
-- Learn about [Calling SDK capabilities](https://docs.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/calling-client-samples?pivots=platform-web)
-- Learn more about [how calling works](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/about-call-types)
+- Check out our [web calling sample](../../samples/web-calling-sample.md)
+- Learn about [Calling SDK capabilities](./calling-client-samples.md?pivots=platform-web)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
If you want to clean up and remove a Communication Services subscription, you ca
## Next steps

For more information, see the following articles:
-- Check out our [web calling sample](https://docs.microsoft.com/azure/communication-services/samples/web-calling-sample)
-- Learn about [Calling SDK capabilities](https://docs.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/calling-client-samples?pivots=platform-web)
-- Learn more about [how calling works](https://docs.microsoft.com/azure/communication-services/concepts/voice-video-calling/about-call-types)
-
+- Check out our [web calling sample](../../samples/web-calling-sample.md)
+- Learn about [Calling SDK capabilities](./calling-client-samples.md?pivots=platform-web)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/overview.md
# Samples
-
Azure Communication Services has many samples available, which you can use to test out ACS services and features before creating your own application or use case.

## Application samples
connectors Connectors Create Api Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-sharepoint.md
Title: Connect to SharePoint from Azure Logic Apps
-description: Automate tasks and workflows that monitor and manage resources in SharePoint Online or SharePoint Server on premises by using Azure Logic Apps
+description: Monitor and manage resources in SharePoint Online or SharePoint Server on premises by using Azure Logic Apps
ms.suite: integration-+ Previously updated : 08/25/2018 Last updated : 04/27/2021 tags: connectors
-# Monitor and manage SharePoint resources with Azure Logic Apps
+# Connect to SharePoint resources with Azure Logic Apps
-With Azure Logic Apps and the SharePoint connector,
-you can create automated tasks and workflows that
-monitor and manage resources, such as files, folders,
-lists, items, persons, and so on, in SharePoint
-Online or in SharePoint Server on premises, for example:
+To automate tasks that monitor and manage resources, such as files, folders, lists, and items, in SharePoint Online or in on-premises SharePoint Server, you can create automated integration workflows by using Azure Logic Apps and the SharePoint connector.
+
+The following list describes example tasks that you can automate:
* Monitor when files or items are created, changed, or deleted.
* Create, get, update, or delete items.
Online or in SharePoint Server on premises, for example:
* Send HTTP requests to SharePoint.
* Get entity values.
-You can use triggers that get responses from SharePoint and
-make the output available to other actions. You can use actions
-in your logic apps to perform tasks in SharePoint. You can also
-have other actions use the output from SharePoint actions.
-For example, if you regularly fetch files from SharePoint,
-you can send messages to your team by using the Slack connector.
-If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md)
+In your logic app workflow, you can use a trigger that monitors events in SharePoint and makes the output available to other actions. You can then use actions to perform various tasks in SharePoint. You can also include other actions that use the output from SharePoint actions. For example, if you regularly retrieve files from SharePoint, you can send email alerts about those files and their content by using the Office 365 Outlook connector or Outlook.com connector. If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md). Or, try this [quickstart to create your first example logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/).
-
-* Your SharePoint site address and user credentials
+* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
- Your credentials authorize your logic app to create
- a connection and access your SharePoint account.
+* Your SharePoint site address and user credentials. You need these credentials so that you can authorize your workflow to access your SharePoint account.
-* Before you can connect logic apps to on-premises
-systems such as SharePoint Server, you need to
-[install and set up an on-premises data gateway](../logic-apps/logic-apps-gateway-install.md).
-That way, you can specify to use your gateway installation when
-you create the SharePoint Server connection for your logic app.
+* For connections to an on-premises SharePoint server, you need to [install and set up the on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a local computer and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md).
-* Basic knowledge about
-[how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
+ You can then select the gateway resource to use when you create the SharePoint Server connection from your workflow.
-* The logic app where you want to access your SharePoint account.
-To start with a SharePoint trigger, [create a blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-To use a SharePoint action, start your logic app with a trigger,
-such as a Salesforce trigger, if you have a Salesforce account.
+* The logic app workflow where you need access to your SharePoint site or server.
- For example, you can start your logic app with the
- **When a record is created** Salesforce trigger.
- This trigger fires each time that a new record,
- such as a lead, is created in Salesforce.
- You can then follow this trigger with the SharePoint
- **Create file** action. That way, when the new
- record is created, your logic app creates a file
- in SharePoint with information about that new record.
+ * To start the workflow with a SharePoint trigger, you need a blank logic app workflow.
+ * To add a SharePoint action, your workflow needs to already have a trigger.
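The trigger-before-action rule in the bullets above can be illustrated with a skeletal workflow definition. The trigger and action names here are hypothetical (real SharePoint operations are configured in the designer); only the shape — triggers must exist before actions are valid — is the point:

```python
# Skeletal Logic Apps-style workflow definition (shape only; names hypothetical).
workflow_definition = {
    "triggers": {
        "When_a_file_is_created": {"type": "ApiConnection"},   # SharePoint trigger
    },
    "actions": {
        "Get_file_content": {"type": "ApiConnection", "runAfter": {}},  # SharePoint action
    },
}

def can_add_action(definition: dict) -> bool:
    """An action can only be added once the workflow has at least one trigger."""
    return bool(definition.get("triggers"))

print(can_add_action(workflow_definition))  # True
print(can_add_action({"triggers": {}}))     # False
```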
## Connect to SharePoint

[!INCLUDE [Create connection general intro](../../includes/connectors-create-connection-general-intro.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com),
-and open your logic app in Logic App Designer, if not open already.
+## Add a trigger
-1. For blank logic apps, in the search box,
-enter "sharepoint" as your filter.
-Under the triggers list, select the trigger you want.
+1. From the Azure portal, Visual Studio Code, or Visual Studio, open your logic app workflow in the Logic App Designer, if not open already.
- -or-
+1. On the designer, in the search box, enter `sharepoint` as the search term. Select the **SharePoint** connector.
- For existing logic apps, under the last step where
- you want to add a SharePoint action, choose **New step**.
- In the search box, enter "sharepoint" as your filter.
- Under the actions list, select the action you want.
+1. From the **Triggers** list, select the trigger that you want to use.
- To add an action between steps,
- move your pointer over the arrow between steps.
- Choose the plus sign (**+**) that appears,
- and then select **Add an action**.
+1. When you are prompted to sign in and create a connection, choose one of the following options:
-1. When you're prompted to sign in,
-provide the necessary connection information.
-If you're using SharePoint Server,
-make sure you select **Connect via on-premises data gateway**.
-When you're done, choose **Create**.
+ * For SharePoint Online, select **Sign in** and authenticate your user credentials.
+ * For SharePoint Server, select **Connect via on-premises data gateway**. Provide the requested information about the gateway resource to use, the authentication type, and other necessary details.
-1. Provide the necessary details for your selected trigger
-or action and continue building your logic app's workflow.
+1. When you're done, select **Create**.
-## Connector reference
+ After your workflow successfully creates the connection, your selected trigger appears.
-For technical details about triggers, actions, and limits, which are
-described by the connector's OpenAPI (formerly Swagger) description,
-review the connector's [reference page](/connectors/sharepoint/).
+1. Provide the information to set up the trigger and continue building your workflow.
-## Get support
+## Add an action
-* For questions, visit the [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html).
-* To submit or vote on feature ideas, visit the [Logic Apps user feedback site](https://aka.ms/logicapps-wish).
+1. From the Azure portal, Visual Studio Code, or Visual Studio, open your logic app workflow in the Logic App Designer, if not open already.
-## Next steps
+1. Choose one of the following options:
+
+ * To add an action as the currently last step, select **New step**.
+ * To add an action between steps, move your pointer over the arrow between those steps. Select the plus sign (**+**), and then select **Add an action**.
+
+1. Under **Choose an operation**, in the search box, enter `sharepoint` as the search term. Select the **SharePoint** connector.
+
+1. From the **Actions** list, select the action that you want to use.
+
+1. When you are prompted to sign in and create a connection, choose one of the following options:
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
+ * For SharePoint Online, select **Sign in** and authenticate your user credentials.
+ * For SharePoint Server, select **Connect via on-premises dat